
Table 3 Model comparison results

From: Automated freezing of gait assessment with marker-based motion capture and multi-stage spatial-temporal graph convolutional neural networks

| Model | F1@10 | F1@25 | F1@50 | F1@75 | MCC |
| --- | --- | --- | --- | --- | --- |
| Bi-LSTM\(\dagger\) | 25.9 ± 8.40 | 21.8 ± 9.03 | 15.0 ± 5.60 | 11.9 ± 6.26 | 62.4 ± 23.2 |
| Bi-LSTM | 63.7 ± 21.7 | 63.2 ± 22.0 | 50.8 ± 25.4 | 40.9 ± 28.4 | 78.8 ± 21.1 |
| TCN | 45.4 ± 16.8 | 42.7 ± 18.6 | 35.8 ± 14.8 | 27.0 ± 16.6 | 81.1 ± 12.9 |
| ST-GCN | 53.2 ± 21.2 | 51.5 ± 21.7 | 46.7 ± 22.5 | 37.6 ± 26.6 | **83.0 ± 11.5** |
| MS-TCN | 68.2 ± 29.4 | 66.8 ± 29.3 | 60.2 ± 30.5 | 54.9 ± 33.1 | 77.3 ± 22.2 |
| MS-GCN | **77.8 ± 15.3** | **77.8 ± 15.3** | **74.2 ± 21.0** | **57.0 ± 30.1** | 82.7 ± 15.5 |

Overview of the FOG segmentation performance in terms of the segment-wise F1@50 and sample-wise MCC for MS-GCN and the four strong baselines. The \(\dagger\) denotes the sliding window FOG detection scheme. The best score is denoted in bold. All results were derived from the test set, i.e., subjects that the model had never seen.
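For readers unfamiliar with the reported metrics, the sketch below is a minimal, hedged illustration (not the authors' evaluation code) of how a segment-wise F1@k and a sample-wise MCC can be computed from frame-wise predictions. It assumes the common segmental-overlap F1 definition from the action-segmentation literature (a predicted segment counts as a true positive if its IoU with a not-yet-matched ground-truth segment of the same class exceeds the threshold k/100) and uses scikit-learn's `matthews_corrcoef`; the example label sequences and the `bg_class` convention are hypothetical.

```python
# Minimal sketch of segment-wise F1@k and sample-wise MCC (assumptions noted above).
import numpy as np
from sklearn.metrics import matthews_corrcoef


def to_segments(labels):
    """Collapse a frame-wise label sequence into (class, start, end) segments; end is exclusive."""
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((labels[start], start, i))
            start = i
    return segments


def segmental_f1(pred, gt, overlap=0.50, bg_class=0):
    """Segment-wise F1@k: a predicted segment is a true positive if its IoU with an
    unmatched ground-truth segment of the same (non-background) class exceeds `overlap`."""
    pred_segs = [s for s in to_segments(pred) if s[0] != bg_class]
    gt_segs = [s for s in to_segments(gt) if s[0] != bg_class]
    matched = [False] * len(gt_segs)
    tp = fp = 0
    for c, ps, pe in pred_segs:
        # IoU against every ground-truth segment of the same class.
        ious = []
        for gc, gs, ge in gt_segs:
            inter = max(0, min(pe, ge) - max(ps, gs))
            union = max(pe, ge) - min(ps, gs)
            ious.append(inter / union if (gc == c and union > 0) else 0.0)
        j = int(np.argmax(ious)) if ious else -1
        if j >= 0 and ious[j] > overlap and not matched[j]:
            tp += 1
            matched[j] = True
        else:
            fp += 1
    fn = matched.count(False)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


# Hypothetical frame-wise FOG annotations (1 = FOG, 0 = no FOG):
gt = np.array([0, 0, 1, 1, 1, 1, 0, 0, 1, 1])
pred = np.array([0, 1, 1, 1, 0, 0, 0, 0, 1, 1])
print("F1@50:", segmental_f1(pred, gt, overlap=0.50))   # segment-wise metric
print("MCC  :", matthews_corrcoef(gt, pred))            # sample-wise metric
```

The segment-wise F1@k rewards correct detection of each FOG episode while being insensitive to small boundary errors, whereas the sample-wise MCC scores every frame, which is why the two metrics can rank models differently in the table above.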