Re: [Question] Homework 1

Board: CS_SLT2005   Author: (@_@"""")   Time: 2005/09/25 13:03
※ Quoting hometoofar (家太遠了):
: I have read "A Practical Guide to Support Vector Classification", but I am
: still confused about how to do homework 1.

Homework 1 uses the k-nearest neighbor method, not SVM, so you can simply ignore the SVM guide. The dataset is a multilabel one, i.e., one instance can have more than one target value (class label). Since kNN was originally designed for a single target value, it is up to you how to apply it to a multilabel dataset; one possible approach is sketched at the end of this post.

: In the guide, the proposed procedure is to use the RBF kernel (linear is fine
: for the homework, right?) and use cross validation to find the best parameters
: C and gamma.
: The parts I don't understand are:
: 1. The guide has this line - "Each instance in the training set contains one
:    'target value' (class label) and several 'attributes' (features)." Does this
:    mean that all instances in a training set should have the same label? Then
:    the following line in the guide - "The goal of SVM is to produce a model
:    which predicts the target value of data instances in the testing set which
:    are given only the attributes." Does that mean we need a model for each
:    label? And if we have multiple labels for an instance, does each combination
:    of labels need a separate model? i.e., Label 1 needs a model, Labels 1,3
:    need another.
: 2. How do I interpret the result of the kernel function? If I simply substitute
:    xi and xj into the kernel function, I get a number. What does that number
:    mean?
: 3. How do I use k-nearest neighbor to train the model? The guide suggests a
:    grid search over C and gamma to identify the best C and gamma. What should
:    k-nearest neighbor do here? If I am using a linear model, there are no
:    parameters C and gamma.

--
※ Origin: PTT (ptt.cc) ◆ From: 140.112.90.75
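Appendix to the reply above: a minimal Python sketch of one possible way to run kNN on a multilabel dataset. This is only an assumption about how the adaptation could look, not the official homework specification; the function name knn_multilabel_predict, the per-label majority-vote rule, and the toy data are all made up for illustration.

# Sketch (hypothetical): for a test point, take the k nearest training
# points by Euclidean distance, then predict every label that appears
# on at least half of those neighbors (per-label majority vote).

import numpy as np

def knn_multilabel_predict(X_train, Y_train, x_test, k=5):
    """X_train: (n, d) feature matrix.
       Y_train: list of n label sets, e.g. [{1}, {1, 3}, {2}, ...].
       x_test : (d,) query point.
       Returns the predicted set of labels for x_test."""
    # Euclidean distance from the query to every training instance.
    dists = np.linalg.norm(X_train - x_test, axis=1)
    # Indices of the k closest training instances.
    nearest = np.argsort(dists)[:k]
    # Count how often each label occurs among the neighbors.
    counts = {}
    for i in nearest:
        for label in Y_train[i]:
            counts[label] = counts.get(label, 0) + 1
    # Keep labels supported by at least half of the neighbors;
    # fall back to the single most frequent label if none qualify.
    predicted = {lab for lab, c in counts.items() if c >= k / 2}
    if not predicted and counts:
        predicted = {max(counts, key=counts.get)}
    return predicted

# Example usage with toy data (made-up numbers):
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [1.1, 0.9]])
Y = [{1}, {1, 3}, {2}, {2, 3}]
print(knn_multilabel_predict(X, Y, np.array([0.05, 0.1]), k=3))  # -> {1}

Other choices (e.g., predicting the union of the neighbors' label sets, or thresholding per label) are equally valid as long as you justify them in your report.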
文章代碼(AID): #13DY-9Qt (CS_SLT2005)