How is the accuracy score computed with scikit-learn's cross_val_predict?

In the code below, which uses the k-fold method, does cross_val_predict (see doc, v0.18) compute the accuracy of each fold and then average them?


from sklearn.model_selection import KFold, cross_val_predict
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

cv = KFold(n_splits=20)
clf = SVC()

# td is the feature matrix, labels the true class labels
ypred = cross_val_predict(clf, td, labels, cv=cv)
accuracy = accuracy_score(labels, ypred)
print(accuracy)

Answer:

No, it does not!

According to the cross-validation documentation page, cross_val_predict does not return any scores; it only returns the labels predicted under a particular strategy, as described here:

The function cross_val_predict has a similar interface to cross_val_score, but returns, for each element in the input, the prediction that was obtained for that element when it was in the test set. Only cross-validation strategies that assign all elements to a test set exactly once can be used (otherwise, an exception is raised).

So, by calling accuracy_score(labels, ypred) you are just computing the accuracy of the labels predicted by that strategy against the true labels. This is, again, specified on the same documentation page:

These predictions can then be used to evaluate the classifier:

predicted = cross_val_predict(clf, iris.data, iris.target, cv=10)
metrics.accuracy_score(iris.target, predicted)

If you need the accuracy scores of the individual folds, you should try:

>>> scores = cross_val_score(clf, X, y, cv=cv)
>>> scores
array([ 0.96..., 1. ..., 0.96..., 0.96..., 1. ])

And then, for the mean accuracy over all folds, use scores.mean():

>>> print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
Accuracy: 0.98 (+/- 0.03)
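Note that the two numbers are not interchangeable in general: accuracy_score on the merged cross_val_predict output scores all out-of-fold predictions pooled together, while cross_val_score averages one score per fold, and they are guaranteed to match only when every fold has the same size. A minimal sketch comparing the two on the iris data (the variable names here are my own):

from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_predict, cross_val_score
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
cv = KFold(n_splits=10)
clf = SVC()

# accuracy of the pooled out-of-fold predictions (what the question computes)
pooled_acc = accuracy_score(y, cross_val_predict(clf, X, y, cv=cv))

# mean of the ten per-fold accuracies (what cross_val_score gives)
mean_fold_acc = cross_val_score(clf, X, y, cv=cv).mean()

print("pooled accuracy:", pooled_acc)
print("mean fold accuracy:", mean_fold_acc)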


Answer:

For computing the Cohen kappa coefficient and the confusion matrix, I assume you mean the kappa coefficient and confusion matrix between the true labels and each fold's predicted labels:

from sklearn.model_selection import KFold
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score, confusion_matrix

cv = KFold(n_splits=20)
clf = SVC()

for train_index, test_index in cv.split(X):
    clf.fit(X[train_index], labels[train_index])
    ypred = clf.predict(X[test_index])
    kappa_score = cohen_kappa_score(labels[test_index], ypred)
    cm = confusion_matrix(labels[test_index], ypred)  # renamed so the confusion_matrix function is not shadowed
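If you want one summary over all folds rather than only the last fold's numbers, a sketch of one reasonable aggregation (my own choice of names and approach, assuming X and labels are defined as above) is to collect the per-fold kappas in a list and sum the per-fold confusion matrices; passing labels= keeps every fold's matrix the same shape:

import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score, confusion_matrix

cv = KFold(n_splits=20)
clf = SVC()
classes = np.unique(labels)            # fixed class order for every fold's matrix

fold_kappas = []                       # one kappa score per fold
total_cm = np.zeros((len(classes), len(classes)), dtype=int)

for train_index, test_index in cv.split(X):
    clf.fit(X[train_index], labels[train_index])
    ypred = clf.predict(X[test_index])
    fold_kappas.append(cohen_kappa_score(labels[test_index], ypred))
    total_cm += confusion_matrix(labels[test_index], ypred, labels=classes)

print("mean kappa over folds:", np.mean(fold_kappas))
print("summed confusion matrix:\n", total_cm)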


Answer:

It splits the data into k parts with KFold and then runs i = 1..k iterations:

  • it takes the i'th part as the test data and all the other parts as the training data
  • it trains the model on the training data (every part except the i'th)
  • then, using this trained model, it predicts labels for the i'th part (the test data)

In each iteration, the labels of the i'th part of the data get predicted. In the end, cross_val_predict merges all the partial predictions and returns them as the final result.

The following code shows this process step by step:

import numpy as np
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.svm import SVC

X = np.array([[0], [1], [2], [3], [4], [5]])
labels = np.array(['a', 'a', 'a', 'b', 'b', 'b'])

cv = KFold(n_splits=3)
clf = SVC()

# holds the merged out-of-fold predictions ('' = not predicted yet)
ypred_all = np.full(labels.shape, '', dtype=labels.dtype)

i = 1
for train_index, test_index in cv.split(X):
    print("iteration", i, ":")
    print("train indices:", train_index)
    print("train data:", X[train_index])
    print("test indices:", test_index)
    print("test data:", X[test_index])
    clf.fit(X[train_index], labels[train_index])
    ypred = clf.predict(X[test_index])
    print("predicted labels for data of indices", test_index, "are:", ypred)
    ypred_all[test_index] = ypred
    print("merged predicted labels:", ypred_all)
    i = i + 1
    print("=====================================")

y_cross_val_predict = cross_val_predict(clf, X, labels, cv=cv)
print("predicted labels by cross_val_predict:", y_cross_val_predict)

The result is:

iteration 1 :
train indices: [2 3 4 5]
train data: [[2] [3] [4] [5]]
test indices: [0 1]
test data: [[0] [1]]
predicted labels for data of indices [0 1] are: ['b' 'b']
merged predicted labels: ['b' 'b' '' '' '' '']
=====================================
iteration 2 :
train indices: [0 1 4 5]
train data: [[0] [1] [4] [5]]
test indices: [2 3]
test data: [[2] [3]]
predicted labels for data of indices [2 3] are: ['a' 'b']
merged predicted labels: ['b' 'b' 'a' 'b' '' '']
=====================================
iteration 3 :
train indices: [0 1 2 3]
train data: [[0] [1] [2] [3]]
test indices: [4 5]
test data: [[4] [5]]
predicted labels for data of indices [4 5] are: ['a' 'a']
merged predicted labels: ['b' 'b' 'a' 'b' 'a' 'a']
=====================================
predicted labels by cross_val_predict: ['b' 'b' 'a' 'b' 'a' 'a']
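As a quick check (using the toy X, labels and y_cross_val_predict from the snippet above), the accuracy the question asks about is simply accuracy_score applied to this merged vector; here only 2 of the 6 merged predictions match the true labels:

from sklearn.metrics import accuracy_score

# merged predictions vs. true labels: only indices 2 and 3 agree, so 2/6 correct
print(accuracy_score(labels, y_cross_val_predict))  # ~0.33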
