Table 4 Comparative results obtained by HL-KAOG, CART, KNN, and SVM on ten real-world data sets

From: Network-based data classification: combining K-associated optimal graphs and high-level prediction

 

| Data set | HL-KAOG (Acc. ± Std.) | CART (Acc. ± Std.) | KNN (Acc. ± Std.) | SVM (Acc. ± Std.) |
|----------|----------------------|------------------|-----------------|-----------------|
| Iris | *97.33 ± 3.52* (λ1) | 93.60 ± 5.59 | 96.37 ± 4.63 | 96.28 ± 4.02 |
| Glass | 70.78 ± 9.16 (λ1) | 64.12 ± 9.33 | *72.64 ± 8.09* | 68.61 ± 7.78 |
| Balance | 95.71 ± 2.40 (λ1) | 88.20 ± 4.25 | 89.77 ± 1.96 | *99.97 ± 0.08* |
| Monks-2 | *96.53 ± 2.43* (λ1) | 95.67 ± 2.48 | 81.26 ± 5.02 | 93.79 ± 3.22 |
| Ecoli | 84.90 ± 5.73 (λ2) | 80.78 ± 5.55 | 85.99 ± 5.11 | *87.23 ± 5.22* |
| Append. | 83.54 ± 7.27 (λ1) | 77.24 ± 9.95 | *86.99 ± 8.71* | 85.72 ± 8.15 |
| Thyroid | *97.30 ± 3.16* (λ1) | 96.64 ± 2.95 | 93.58 ± 4.74 | 97.19 ± 2.59 |
| Sonar | 83.75 ± 8.07 (λ1) | 74.14 ± 9.69 | 81.78 ± 8.08 | *86.06 ± 7.43* |
| Digits | 98.75 ± 0.35 (λ1) | 90.27 ± 1.27 | 98.79 ± 0.37 | *99.26 ± 0.33* |
| SPECTF | *80.07 ± 5.42* (λ2) | 75.41 ± 6.20 | 77.90 ± 6.78 | 78.01 ± 3.92 |

  1. ‘Acc.’ and ‘Std.’ denote, respectively, the average accuracy and the standard deviation over 30 runs of the stratified 10-fold cross-validation process. For HL-KAOG, the reported result is the better of the two settings λ1 = 0.2 and λ2 = 0.6. Italic values denote the best predictive performance among the techniques for each data set.
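The evaluation protocol in the note above (mean accuracy ± standard deviation over 30 runs of stratified 10-fold cross-validation) can be sketched as follows. HL-KAOG is not available in standard libraries, so only the three baseline classifiers are illustrated, using the Iris data set as one example; the specific hyperparameters shown are scikit-learn defaults, not necessarily those used in the paper.

```python
# Sketch of the table's protocol: 30 repeats of stratified 10-fold CV,
# reporting mean accuracy and standard deviation (in percent) per classifier.
# Assumes scikit-learn; classifier settings are library defaults, which may
# differ from the paper's configuration.
from sklearn.datasets import load_iris
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier   # CART
from sklearn.neighbors import KNeighborsClassifier  # KNN
from sklearn.svm import SVC                        # SVM

X, y = load_iris(return_X_y=True)

# 10 folds x 30 repeats = 300 accuracy estimates per classifier.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=30, random_state=0)

classifiers = {
    "CART": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
}

results = {}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=cv)
    results[name] = (100 * scores.mean(), 100 * scores.std())
    print(f"{name}: {results[name][0]:.2f} ± {results[name][1]:.2f}")
```

The standard deviation here is taken over all 300 fold accuracies; averaging per repeat first would give slightly different (typically smaller) deviations, and the paper does not specify which convention it uses.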