Table 1 Overview of studies categorized under the section applications of Kinect in elderly care

From: Systematic review of Kinect applications in elderly care and stroke rehabilitation

Author

Year

Population

Significant findings

Elderly care > Fall detection

Kepski et al.

2012

Study type: methodology; Participants: unspecified; Age: unspecified

The study used a fuzzy inference system that combined Kinect data with a wearable accelerometer and gyroscope, running in real time on a PandaBoard ES. The authors reported unobtrusive fall detection, with experimental results indicating high detection effectiveness even in environments lacking visible light.

Planinc et al.

2013

Study type: methodology; Participants: 2 (unspecified gender); Age: unspecified

Eighteen different sequences consisting of ten true falls and eight non-falls were examined. Results using 3D image coordinates (IC) and 3D world coordinates (WC), compared against previous audio-based and 2D sensor-based fall detection methods, were as follows (TP = true positive, FP = false positive, TN = true negative, FN = false negative): Recall (defined as TP / (TP + FN)): IC = 78%, WC = 93%; Precision (defined as TP / (TP + FP)): IC = 100%, WC = 100%; F-score (defined as 2 × Recall × Precision / (Recall + Precision)): IC = 87%, WC = 96%; True negative rate (defined as TN / (TN + FP)): IC = 100%, WC = 100%; and Accuracy (defined as (TP + TN) / (TP + TN + FP + FN)): IC = 86%, WC = 96%.
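The metrics above follow directly from confusion-matrix counts. As a purely illustrative sketch (the counts below are hypothetical placeholders, not the study's actual confusion matrix), they can be computed as:

```python
# Hypothetical illustration of the metric definitions used above;
# the counts are placeholders, not Planinc et al.'s actual results.
def detection_metrics(tp, fp, tn, fn):
    recall = tp / (tp + fn)                          # sensitivity
    precision = tp / (tp + fp)
    f_score = 2 * recall * precision / (recall + precision)
    tn_rate = tn / (tn + fp)                         # specificity
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return recall, precision, f_score, tn_rate, accuracy

# Example with hypothetical counts from 10 falls and 8 non-falls:
print(detection_metrics(tp=9, fp=0, tn=8, fn=1))
```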

Rougier et al.

2011

Study type: methodology; Participants: unspecified; Age: unspecified

After examining 79 videos (30 sitting down, 25 falls, including 7 totally occluded, and 24 crouching, including 6 totally occluded), an overall fall detection success rate of 98.7% was observed using a methodology based on the centroid height relative to floor level and the velocity of the moving body. All non-occluded events were correctly classified, but in cases of total occlusion it remains unverified whether body velocity can discriminate a person who falls from a person who sits down abruptly.
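A rough illustration of a centroid-height-plus-velocity rule of this kind (not the authors' implementation; the thresholds below are hypothetical):

```python
# Illustrative sketch only: flag a fall when the body centroid drops close to
# the floor plane with a high downward velocity. Thresholds are hypothetical,
# not taken from Rougier et al.
def detect_fall(centroid_heights_m, timestamps_s,
                height_thresh=0.40, velocity_thresh=-1.0):
    for i in range(1, len(centroid_heights_m)):
        dt = timestamps_s[i] - timestamps_s[i - 1]
        velocity = (centroid_heights_m[i] - centroid_heights_m[i - 1]) / dt
        if centroid_heights_m[i] < height_thresh and velocity < velocity_thresh:
            return True   # low centroid reached through rapid downward motion
    return False
```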

Lee et al.

2012

Study type: research; Participants: unspecified; Age: unspecified; 175 video segments of walking, standing, crouching down, standing up, falling forward, falling backward, falling to the right, and falling to the left

An algorithm capable of monitoring shadow-filled or completely dark environments. The system used three features: bounding box ratios, normalized 2D velocity variations from the centroids, and Kinect-gathered depth information. The algorithm was then validated by applying it to 175 video segments of walking, standing, crouching down, standing up, falling forward, falling backward, falling to the right, and falling to the left, resulting in an overall accuracy of 97% and a minimal false positive rate of 2%.

Mastorakis et al.

2012

Study type: research; Participants: 8 (unspecified gender); Age: unspecified

A 3D bounding box methodology was utilized to detect falls using 184 recorded videos: 48 falls (backward, forward and sideways), 32 seating activities, 48 lying activities on the floor (backward, forward and sideways) and 32 instances of picking up an item from the floor. Other miscellaneous activities that change the size of the 3D bounding box were also performed (e.g., sweeping with a broom, dusting with a duster). The system was reported as 100% accurate with respect to fall detection, with no observed false positives or false negatives; however, due to the unique method of fall detection utilized, if an item (e.g., a chair) was moved, a new bounding box was created for the item, and if it subsequently fell over, a false fall detection could be triggered.
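A minimal sketch in the same spirit as a bounding-box rule (not Mastorakis et al.'s actual algorithm; the thresholds and frame rate below are assumptions):

```python
# Illustrative only: flag a fall when the 3D bounding box around the tracked
# person contracts rapidly in height. All values are hypothetical assumptions.
def bounding_box_fall(box_heights_m, fps=30.0,
                      height_thresh=0.45, contraction_thresh=1.0):
    for i in range(1, len(box_heights_m)):
        contraction_rate = (box_heights_m[i - 1] - box_heights_m[i]) * fps  # m/s
        if box_heights_m[i] < height_thresh and contraction_rate > contraction_thresh:
            return True   # box collapsed quickly toward the floor
    return False
```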

Zhang et al.

2012

Study type: research; Participants: 5 (unspecified gender); Age: unspecified; Utilized 200 recorded videos (condition 1 = 100, condition 2 = 50, condition 3 = 50)

The system used two models: an appearance model, which extracts data from 2D images when the subject is out of range of the Kinect's depth sensing, and a kinematic model, which uses data derived from the Kinect's 3D world coordinate readings. The models were trained using data captured under three different conditions: 1) less than 4 meters distance, normal illumination; 2) subject in range of the depth sensor, without enough illumination; and 3) greater than 4 meters distance, normal illumination. Comparisons were conducted between: falling from a chair (L1); falling from standing (L2); standing (L3); sitting on a chair (L4), and sitting on the floor (L5). Under condition #1, the appearance model achieved accuracies of L1 = 90%, L2 = 60%, L3 = 70%, L4 = 60%, L5 = 100%, whereas the kinematic model achieved L1 = 100%, L2 = 90%, L3 = 100%, L4 = 100%, L5 = 100%. Under condition #2, the appearance model achieved L1 = 80%, L2 = 30%, L3 = 70%, L4 = 80%, L5 = 10%, whereas the kinematic model achieved L1 = 100%, L2 = 80%, L3 = 100%, L4 = 90%, L5 = 100%. The appearance approach ran at a speed of 0.0074 s; the kinematic approach ran at a speed of 0.0194 s.

Elderly care > Fall risk reduction

Parajuli et al.

2012

Study type: methodology; Participants: unspecified; Age: unspecified

Four data sets were used: 1) normal walking; 2) abnormal walking; 3) standing, and 4) sitting. Nine methods utilizing various combinations of the following variables were used: Z-coordinate, absolute height, arm coordinates, and a Support Vector Machine (SVM). Correct detection of normal and abnormal walking, sitting, and standing by a C-SVM (SVM using C-Support Vector Classification) increased from ≈71% to ≈99% with the use of scaled SVM data, leading to the conclusion that scaling of SVM data is critical for the accuracy of algorithms such as this. Both posture and gait recognition were observed to follow a similar pattern of accuracy.
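To make the scaling point concrete, a small synthetic sketch (placeholder data, not the study's data set) of a C-SVM trained with and without feature standardization:

```python
# Synthetic illustration of why feature scaling matters for a C-SVM; the data
# and labels are placeholders, not Parajuli et al.'s posture/gait data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)) * [1.0, 100.0, 0.01, 10.0]  # badly scaled features
y = (X[:, 2] > 0).astype(int)        # label depends on the smallest-scale feature

unscaled = SVC(kernel="rbf", C=1.0).fit(X[:150], y[:150])
scaled = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X[:150], y[:150])
print("unscaled accuracy:", unscaled.score(X[150:], y[150:]))
print("scaled accuracy:  ", scaled.score(X[150:], y[150:]))
```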

Gabel et al.

2012

Study type: methodology; Participants: 23 (m = 19, f = 4); Age: 26 to 56

The study conducted a full-body gait analysis comparing Kinect readings against two pressure sensors (FlexiForce A201) and a gyroscope (ITG-3200 by InvenSense), with the following results (units in ms):

Left stride: avg strides captured = 1169; mean difference (Kinect vs. baseline) = 8; SD = 62

Right stride: avg collected = 1130; mean difference (Kinect vs. baseline) = 2; SD = 46

Left stance: avg collected = 634; mean difference (Kinect vs. baseline) = -8; SD = 110

Right stance: avg collected = 595; mean difference (Kinect vs. baseline) = -20; SD = 90

Left swing: avg collected = 518; mean difference (Kinect vs. baseline) = 6; SD = 115

Right swing: avg collected = 541; mean difference (Kinect vs. baseline) = 27; SD = 104

Angular velocity of the arms yielded a correlation coefficient between the Kinect-based prediction and the gyroscope-based true value of >0.91 for both arms, with an average difference (units in °/s) of 1.52 for the left arm (SD = 48.36) and -0.86 for the right arm (SD = 44.63).

Stone et al.

2011

Study type: methodology; Participants: 3 (unspecified gender); Age: unspecified; 18 total walking sequences - two walks were collected at each speed (slow, normal, and fast) for each participant.

The calculated percentage differences between the Kinect system's readings and the Vicon system's readings for walking speed, average stride time, and average stride length (Mean (M), Standard Deviation (SD), Maximum (MAX)) were as follows. Kinect #1 (parallel to sensor): walking speed: M = -4.1%, SD = 1.9%, MAX = 9.6%; stride time: M = 1.9%, SD = 2.5%, MAX = 4.1%; stride length: M = -1.9%, SD = 2.5%, MAX = 11.7%. Kinect #2 (away from sensor): walking speed: M = -1.9%, SD = 1.2%, MAX = 4.9%; stride time: M = 0.7%, SD = 1.3%, MAX = 8.4%; stride length: M = -1.1%, SD = 2.5%, MAX = 9.4%. A secondary observation noted during this study: Kinect-gathered data typically becomes unusable at relatively long range, yet with this system, initial data showed little change in accuracy at long range (up to 8.1 meters). These initial findings have yet to be validated.
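The percentage-difference comparison above can be sketched as follows (illustrative only; the readings below are hypothetical, not data from Stone et al.):

```python
# Hypothetical illustration of the Kinect-versus-Vicon percentage-difference
# comparison; the walking-speed values are placeholders.
def percent_difference(kinect_value, vicon_value):
    return 100.0 * (kinect_value - vicon_value) / vicon_value

kinect_speeds = [98.2, 104.5, 121.0]   # cm/s, hypothetical
vicon_speeds = [102.3, 108.0, 125.6]   # cm/s, hypothetical
diffs = [percent_difference(k, v) for k, v in zip(kinect_speeds, vicon_speeds)]
print("mean % difference = {:.1f}%".format(sum(diffs) / len(diffs)))
```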

Stone et al.

2012

Study type: methodology; Participants: 7 (m = 4, f = 3); Age: 75–95

Unobtrusively identified walking sequences and automatically generated habitual, in-home gait parameter estimates. The following is representative data for participant 1: true avg. speed (cm/sec): 62.2, computed avg. speed: 61.0; true stride time (sec): 1.17, computed stride time (sec): 1.17; true avg. stride length (cm): 71.6, computed avg. stride length (cm): 70.1; true height (cm): 162.1, computed height (cm): 161.8.

Elderly care > Kinect gaming

Marston et al.

2012

Study type: review

Narrative review of current technologies viable for game-based solutions to enhance quality of life in the elderly. The use of videogames for health-related purposes demands game classification systems that take into account the player base's physical, cognitive, and social requirements, which can include a wide range of impairments.

Smith et al.

2012

Study type: review

Provides an overview of the main systems for in-home motion capture and some of the preliminary uses in elderly care, stroke rehabilitation, and assessment and/or training of functional ability of the elderly.

Staiano et al.

2011

Study type: review

Review paper which provides an overview of the measurement capabilities of exergames for deriving clinically viable data pertaining to physical health, caloric expenditure, duration of use, balance, and other categories of interest.

Tanaka et al.

2012

Study type: review

Comparison of the Kinect, EyeToy, and Wii systems including technical specifications, the motion sensing capabilities of each interface, and the motion required to support therapeutic activity types. Discussion focuses on the unique research implications of using these three motion capture tools.

Wiemeyer et al.

2012

Study type: review

Specific challenges for game design were presented: 1) selection of appropriate movements to offer meaningful exercise contexts for older subjects; 2) utilization of devices offering options that combine challenge and support; 3) determining appropriate game-based 'dosage'; 4) randomized controlled trials to corroborate effects, and 5) development and evaluation of adequate training settings.

Arntzen et al.

2011

Study type: methodology; Participants: elderly care workers and one researcher; Age: unspecified

Presented concepts and requirements for developing Kinect-based games for the elderly and identified seven important issues that each game should consider during controller-free game development, including visual, hearing, motion, technological acceptance, enjoyment, and emotional response.

Golby et al.

2011

Study type: methodology

The proposed system aims to present occupational therapists with a range-of-motion analysis tool that enables gathering patients' range of motion from remote locations and comparing this gathered data with the range of motion required for a variety of activities of daily living.

Garcia et al.

2012

Study type: methodology

Proposes a system for clinically viable data capture of participants' balance level utilizing a Choice Step Reaction Time mini-game which requires participants to step on targets in a variety of ways.

Maggiorini et al.

2012

Study type: methodology

Description of a prototype game-based rehabilitation paradigm that enables home-based rehabilitation exercises for the elderly, which can be monitored by various caretakers. The system includes a distributed software architecture comprising end systems, elderly users, caretakers, a core server, and a communication system.

Gerling et al.

2012

Study type: research; Participants: 15 (institutionalized older adults, m = 8, f = 7); Age: range 60 to 90, mean = 73.72 (SD = 9.90)

Investigated how elderly participants responded to game-based gestures. Results were compiled with the Positive and Negative Affect Schedule (PANAS); mean (M), standard deviation (SD). Overall, positive affect increased slightly but significantly (before: M = 3.34, SD = 0.64; after: M = 3.88, SD = 0.79; t11 = -2.92, p < 0.01), whereas the change in negative affect was not notable (before: M = 1.72, SD = 0.78; after: M = 1.68, SD = 0.86; t11 = 0.28, p = 0.79).
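A paired comparison of pre/post affect ratings of this kind can be sketched as follows (illustrative only; the ratings below are hypothetical, not Gerling et al.'s data):

```python
# Illustrative paired t-test on hypothetical pre/post PANAS positive-affect
# scores, analogous to the comparison reported above.
from scipy.stats import ttest_rel

pre_positive = [3.1, 3.4, 2.9, 3.6, 3.5, 3.2, 3.8, 3.0, 3.4, 3.3, 3.6, 3.2]
post_positive = [3.9, 4.1, 3.5, 4.2, 3.8, 3.7, 4.4, 3.6, 3.9, 3.8, 4.0, 3.7]
t_stat, p_value = ttest_rel(pre_positive, post_positive)
print(f"t({len(pre_positive) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```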

Chiang et al.

2012

Study type: research; Experimental group: Participants: 22; Age: 78.55 (± 6.70)

The Vienna Test System, the Soda Pop test, and a Mann-Whitney non-parametric test were used to evaluate the beneficial effects of Kinect usage on reaction time and hand-eye coordination. Reaction time (units in milliseconds, Vienna Test System): experimental group: a median improvement of 167.51 and a decrease in SD of 362.66; control group: a median decline of 202.9 and an increase in SD of 183.56.

  

Control group: Participants: 31; Age: 79.97 (± 7.00)

Hand-eye coordination time (units in seconds, Soda Pop test): experimental group: a median improvement of 6.01 and a decrease in SD of 0.34; control group: a median decline of 1.61 and an increase in SD of 5.49.
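The between-group comparison described in this row can be illustrated with a Mann-Whitney U test (sketch only; the change scores below are hypothetical placeholders):

```python
# Illustrative Mann-Whitney U comparison of reaction-time change scores between
# an experimental and a control group; the values are hypothetical.
from scipy.stats import mannwhitneyu

experimental_change = [180.2, 150.4, 167.5, 190.1, 140.3, 175.8]
control_change = [-210.0, -190.5, -202.9, -180.7, -220.4, -195.2]
stat, p = mannwhitneyu(experimental_change, control_change, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```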

Chen et al.

2012

Study type: research; Experimental group: Participants: 21 (m = 3, f = 19), Age: 65–92; Control group: Participants: 39 (m = 15, f = 24), Age: 65–92

22 out of the 61 participants volunteered to be in the experimental group for a 4-week course of training involving three 30-minute sessions per week (5-minute warm-up, 20-minute interactive gaming, and 5-minute cool-down). The Health-Related Quality of Life (HRQOL) SF-8 (Quality Metric) questionnaire, covering General Health (GH), Physical Function (PF), Role Physical (RP), Body Pain (BP), Vitality (VT), Social Functioning (SF), General Mental Health (MH), and Role Emotional (RE), was employed in this study and an ANCOVA analysis was performed. In the physical component summary of the HRQOL, improvements were noted in the categories of general health, physical function, role physical, and body pain (p < 0.05). The mental component summary, however, generally showed no significant differences between the experimental and control groups at the p < 0.05 level. Results are out of 100. Experimental group: GH = 48.69 to 54.49; PF = 50.73 to 52.34; RP = 51.91 to 52.70; BP = 52.90 to 57.44; VT = 57.16 to 57.04; SF = 52.85 to 55.50; MH = 56.14 to 55.53; RE = 51.19 to 51.83. Control group: GH = 48.99 to 46.64; PF = 47.76 to 47.90; RP = 48.00 to 47.92; BP = 54.04 to 51.75; VT = 52.51 to 51.12; SF = 47.49 to 47.04; MH = 51.96 to 50.41; RE = 47.10 to 49.67.
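An ANCOVA of this form (post-test score adjusted for the baseline score, with group as a factor) might be sketched as below; the values and column names are hypothetical, not the study's data:

```python
# Hedged sketch of an ANCOVA on hypothetical HRQOL scores: post-test as the
# outcome, baseline as covariate, group as factor. Not Chen et al.'s data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "post":  [54.5, 56.1, 52.3, 47.9, 46.6, 48.2],
    "pre":   [48.7, 49.5, 50.7, 47.8, 49.0, 48.3],
    "group": ["exp", "exp", "exp", "ctrl", "ctrl", "ctrl"],
})
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.params)   # adjusted group effect and baseline coefficient
```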

Pham et al.

2012

Study type: research; Participants: 24 (older adults, m = 7, f = 17); Age: mean = 74, SD = 6.4

A comparison of button-based, mixed button/gesture-based, and gesture-based controllers was conducted through surveys aiming to identify user preference. The gesture-based controller was most preferred (42%), followed by the mixed controller (25%) and the button controller (8%); 21% did not care either way, and 4% enjoyed all types equally. Completion times were lower for the mixed button/gesture system than for the standalone button controller or gesture controller (Wilks' Lambda = 0.16, F(2,22) = 54.98, p < 0.05).

Hassani et al.

2011

Study type: research; Participants: 12 (m = 5, f = 7); Age: mean = 77.17 (SD = 7.19), range 71 to 96

A 7-point Likert scale (7 = maximum agreement) was used, with means and standard deviations reported for Effort, Ease and Anxiety (EEA), which measures how easily people think they can adapt to and learn to work with the technology, overcoming eventual anxieties, and for Performance and Attitude (PA), which measures how respondents 'see themselves' both practically and socially in light of the new technology: EEA for gestures: mean = 6.13, SD = 1.02; EEA for touch: mean = 6.18, SD = 1.01; PA for gestures: mean = 6.01, SD = 1.43; PA for touch: mean = 6.00, SD = 1.84.

Sun et al.

2013

Study type: research; Participants: 23 (m = 12, f = 11); Age: 21 to 30

This study explored how Kinect-based balance training exercises influenced players' balance control ability and tolerable intensity level. This was accomplished by requiring participants to stand on one leg within a posture frame (PF) and evaluating the resulting balance control ability in both static and dynamic gaming modes using a 6-axis AMTI force plate. The game moved various body-outline shapes toward the player's avatar, and the player had to imitate the body-outline shape in order to pass through it without touching the outline. Force plate data (Fx, Fy, Fz, Mx, My, and Mz) were preprocessed and MATLAB was used for calculations. The results showed that varying evaluation methods of player experience could easily lead to different findings, making it hard to accurately study the design of such exergames for training purposes. The following parameters were analyzed: small frame 1-second travel time (SF1S), large frame 1-second travel time (LF1S), small frame 2-second travel time (SF2S), large frame 2-second travel time (LF2S). Mean distance, anterior-posterior: SF1S = 0.77 (± 0.25); LF1S = 0.70 (± 0.18); SF2S = 0.97 (± 0.25); LF2S = 0.94 (± 0.29). Mean distance, medial-lateral: SF1S = 1.98 (± 1.16); LF1S = 1.99 (± 1.16); SF2S = 1.94 (± 1.47); LF2S = 1.72 (± 1.32). Total excursions: SF1S = 53.98 (± 15.57); LF1S = 53.68 (± 17.28); SF2S = 53.68 (± 16.32); LF2S = 51.52 (± 17.87). Sway area: SF1S = 0.07 (± 0.06); LF1S = 0.06 (± 0.05); SF2S = 0.06 (± 0.05); LF2S = 0.06 (± 0.02).
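The centre-of-pressure sway parameters named above (mean distance, total excursion, sway area) could be computed along these lines (illustrative sketch, not the study's MATLAB code; the COP traces below are random placeholders):

```python
# Illustrative computation of common centre-of-pressure (COP) sway parameters;
# the AP/ML traces are random placeholders, not force-plate data from the study.
import numpy as np

def sway_parameters(cop_ap, cop_ml):
    ap = cop_ap - np.mean(cop_ap)              # centred anterior-posterior trace
    ml = cop_ml - np.mean(cop_ml)              # centred medial-lateral trace
    mean_dist_ap = np.mean(np.abs(ap))         # mean distance, anterior-posterior
    mean_dist_ml = np.mean(np.abs(ml))         # mean distance, medial-lateral
    total_excursion = np.sum(np.hypot(np.diff(ap), np.diff(ml)))  # path length
    sway_area = 0.5 * np.abs(np.sum(ap[:-1] * ml[1:] - ap[1:] * ml[:-1]))  # shoelace estimate
    return mean_dist_ap, mean_dist_ml, total_excursion, sway_area

rng = np.random.default_rng(1)
print(sway_parameters(rng.normal(0, 0.8, 600), rng.normal(0, 1.9, 600)))
```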