
The effects of error-augmentation versus error-reduction paradigms in robotic therapy to enhance upper extremity performance and recovery post-stroke: a systematic review

Abstract

Despite upper extremity function playing a crucial role in maintaining one’s independence in activities of daily living, upper extremity impairments remain among the most prevalent post-stroke deficits. To enhance upper extremity motor recovery and performance among stroke survivors, two training paradigms involving modified haptic feedback have been proposed in the field of robotic therapy: the error-augmentation (EA) and error-reduction (ER) paradigms. There is a lack of consensus, however, as to which of the two paradigms yields superior training effects. This systematic review aimed to determine (i) whether EA is more effective than conventional repetitive practice; (ii) whether ER is more effective than conventional repetitive practice; and (iii) whether EA is more effective than ER in improving post-stroke upper extremity motor recovery and performance. The study search and selection process, as well as the ratings of methodological quality of the articles, were conducted by two authors separately, and the results were then compared and discussed between the two reviewers. Findings were analyzed and synthesized using levels of evidence. By August 1st 2017, 269 articles were found after searching 6 databases, and 13 were selected based on criteria such as sample size, type of participants recruited and type of interventions used. Results suggest, with a moderate level of evidence, that EA is overall more effective than conventional repetitive practice (motor recovery and performance) and ER (motor performance only), while ER appears to be no more effective than conventional repetitive practice. However, intervention effects as measured using clinical outcomes were in most instances not ‘clinically meaningful’ and effect sizes were modest. While stronger evidence is required to further support the efficacy of error-modification therapies, the influence of factors related to the delivery of the intervention (such as intensity and duration) and personal factors (such as stroke severity and time since stroke onset) also deserves further investigation.

Background

Stroke, also referred to as cerebrovascular accident (CVA), is one of the leading causes of disability among adults [1, 2]. It is estimated that stroke costs the Canadian, United States and United Kingdom economies around $3.6 billion [3], $34 billion [4] and £9 billion [5] a year respectively in medical services, personal care and lost productivity. The disabilities resulting from stroke can affect all aspects of life including gross and fine motor ability, walking, activities of daily living (ADLs), speech and cognition [6]. Motor impairments are among the most prevalent issues post stroke, and restoring upper extremity function is one of the top priorities of people with stroke [7]. Compared to lower extremity impairments, upper extremity impairments are more likely to result in activity limitations (see International Classification of Functioning, Disability and Health (ICF) in Appendix 1) because tasks that involve the arm and hand often require a high level of fine motor control [8]. In fact, severe upper extremity impairments post-stroke often hinder the ability to care for oneself and perform ADLs [9]. Although restoration of upper extremity motor function is crucial for stroke patients to regain their independence, studies have shown that only 35 to 70% of people with stroke recover to a level of arm ability that is considered functional [10,11,12], while more than 50% have persistent upper extremity impairments [13].

Studies in both human and animal models demonstrate the importance of motor learning in the process of motor recovery following an acquired brain lesion as both learning and recovery processes can induce cortical changes and reorganization [14]. Motor learning, which is “a set of processes associated with practice or experience that leads to relatively permanent changes in the ability to produce skilled action” [15], relies on an experience-dependent neural plasticity that is modulated by various factors such as task specificity, repetition, intensity, timing, salience, etc. [16]. Amongst different factors influencing the acquisition of motor skills, feedback is believed to be one of the key factors [15]. Feedback is the information that an individual receives as a result of his or her performance [17]. It can be either intrinsic or extrinsic, where intrinsic feedback is that experienced by the performer (e.g. sensory, visual feedback, etc.) and extrinsic (augmented) feedback is that provided by an external source, such as a therapist providing verbal or physical guidance [18, 19]. Extrinsic feedback can inform the performer about a success or failure on a task (knowledge of results) or about the quality of movement or task performance (knowledge of performance) [15].

Robotics is one of the advanced technologies increasingly used in post-stroke upper extremity rehabilitation [20]. Compared to conventional approaches, it offers the advantages of high convenience when providing task-oriented practice, as well as high accuracy in measuring outcomes of motor performance (e.g. trajectory straightness, movement speed, range of joint movement [21]). The latter outcomes can in turn be used to provide knowledge of performance as a source of feedback [22]. Two main training paradigms on the use of feedback, arising from the robotics literature, have been proposed and tested as means to facilitate motor learning and improve motor performance: the error-reduction (ER) paradigm and the error-augmentation (EA) paradigm. The ER paradigm, also known as haptic guidance, aims to reduce the performance errors of a subject during a motor task [23], namely via assistance provided by a robot so that the performer stays within the optimal movement trajectory determined by the non-paretic arm or by the therapist [24]. This approach is based on the hypothesis that by demonstrating the correct movement trajectory to a person, he/she will be able to learn it by imitation [25]. The discovery of “mirror neurons”, first identified using microelectrode recordings of single neurons in area F5 of the monkey premotor cortex [26], prompted researchers to believe that a similar mirror neuron system exists in humans and could play an important role in learning through imitation [27]. Furthermore, the theory of reinforcement-based learning suggests that positive/successful feedback is essential for motor learning to occur [28]. The ER paradigm also assumes that there is a unique optimal movement trajectory and that any deviation from it is an error. According to the principle of abundance and the theory of use-dependent learning, however, variance in how a motor action is performed does not necessarily impede overall motor performance [29, 30].

A substantial body of literature also suggests that motor learning can be an error-driven process, a postulate that can be explained and supported by motor control theories such as the internal model theory [31] and the equilibrium point hypothesis [32]. In the internal model theory, it is hypothesized that subjects form an ‘internal model’ based on their anticipation of the effects of the environment on their motor actions; the internal model therefore acts as the feed-forward component of motor control [31]. The detection of errors that occur during motor performance plays the role of the feedback component, as errors prompt the existing internal model to adapt in order to reduce them [33,34,35,36]. In the equilibrium point hypothesis, errors occur in the movements that follow a change in the environment, but the motor system is able to correct these errors by adjusting the control variables based on information about the current state of the motor system, joint positioning of the limbs, etc., thereby resetting the activation thresholds (λ) of muscles and forming a new equilibrium point [32, 37]. Given the role of errors in motor learning, it was hypothesized that artificially increasing the performance error would cause learning to occur more quickly [25], an idea that is the foundation of the EA paradigm. In robotics, one of the commonly used techniques to artificially increase performance error is to create a force field that disturbs the limb motion during the movement [38].
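To make the distinction concrete, the sketch below contrasts the two paradigms at the level of a robot’s force command. The proportional force laws, gain values and positions are illustrative assumptions made for this review, not the controllers or parameters used in the studies cited above (which, for EA, often relied on velocity-dependent force fields [38]).

```python
import numpy as np

def error_vector(hand_pos, reference_pos):
    """Instantaneous deviation of the hand from the reference trajectory."""
    return np.asarray(hand_pos, dtype=float) - np.asarray(reference_pos, dtype=float)

def ea_force(hand_pos, reference_pos, gain=50.0):
    """Error augmentation (illustrative): push the hand further away from the
    reference path, in proportion to its current deviation (the gain in N/m is
    a hypothetical value)."""
    return gain * error_vector(hand_pos, reference_pos)

def er_force(hand_pos, reference_pos, stiffness=50.0):
    """Error reduction / haptic guidance (illustrative): spring-like force that
    pulls the hand back toward the reference path."""
    return -stiffness * error_vector(hand_pos, reference_pos)

# Hand is 2 cm below the desired point on a straight reaching path
print(ea_force((0.10, -0.02), (0.10, 0.0)))  # [ 0. -1.] -> amplifies the deviation
print(er_force((0.10, -0.02), (0.10, 0.0)))  # [ 0.  1.] -> corrects the deviation
```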

While the theories and ideas supporting the ER and EA paradigms are distinct, both are currently being used, primarily in the form of haptic feedback, as part of clinical intervention studies for populations with deficits in motor recovery. To date, there is no consensus as to which of the two paradigms provides superior treatment effects on upper extremity motor recovery and performance among stroke survivors. Furthermore, while systematic reviews on the use of error modification in upper extremity rehabilitation after stroke were published in recent years [39, 40], these focused exclusively on the EA paradigm and did not allow for a comparison between the two approaches. In this study, we conducted a systematic review on the use of EA and ER paradigms, in the form of haptic feedback, to enhance upper extremity motor recovery and performance in stroke survivors. The main research questions are listed in PICO format (Population, Intervention, Comparison, and Outcome) and read as follows:

  1. Among stroke survivors (P), to what extent do interventions involving the EA paradigm (I1) or the ER paradigm (I2), compared to interventions without error modification (C), enhance upper extremity motor recovery and performance, respectively (O)?

  2. Among stroke survivors (P), to what extent does the EA paradigm (I), compared to the ER paradigm (C), enhance upper extremity motor recovery and performance (O)?

For the purpose of clarification, the comparison component of the first research question, “training without error modification,” refers to standard repetitive practice that does not involve any external force (reducing or amplifying errors) that provides feedback on the performance. The outcomes of both research questions, “upper extremity motor recovery and performance,” can include clinical measures of both upper extremity impairment and disability and kinematic measures of motor performance (for more details, refer to the section of inclusion and exclusion criteria).

Methods

Search strategy

The following databases, available through the McGill University library, were systematically searched using their online search engines: Ovid MEDLINE, CINAHL, EMBASE, AMED, PsycINFO, and PEDro. No start date limit was applied to the search, and the end date was August 1st 2017. The overall search strategy, determined by the two reviewers (L.Y.L. and Y.L.), involved multiple search entries with the keywords listed below, and the corresponding Medical Subject Headings (MeSH) terms were selected and ‘exploded’ (* denotes truncation):

  • Search 1: error amplifica*, error augment*, error enhance*, error enhancing, negative viscosity, haptic guidance, haptic*, active assist* (all keywords were combined with OR operator).

  • Search 2: stroke/ or stroke rehabilitation (MeSH), post-stroke (all keywords were combined with OR operator).

  • Search 3: upper extremity/or arm (MeSH), upper-extremity, upper arm, motor learn*, reaching (all keywords were combined with OR operator).

  • Final search: all three previous searches were combined with AND operator.

Following the electronic database search, a manual search of all relevant studies was performed to ensure the completeness of the search.

Study selection process

All search results found in the databases were saved into the EndNote X7 reference manager (1988–2013 Thomson Reuters), and duplicates were removed by the software. Each of the two reviewers carried out the study selection process separately. The study selection process involved the following steps: (1) screen the remaining articles by their titles and abstracts; (2) remove studies that do not meet the inclusion criteria or meet the exclusion criteria; (3) review the full text of the remaining articles; and (4) remove studies that do not meet the inclusion criteria or meet the exclusion criteria. Following step 4, the two reviewers compared their results, discussed any discrepancies, and decided together which articles were to be selected, after which the process of data extraction began.

Inclusion and exclusion criteria

The following were the inclusion criteria:

  1. The population of the study is people with stroke who have upper extremity hemiparesis. The severity and time since onset of stroke may vary.

  2. The study design can be a randomized controlled trial, crossover trial, quasi-experimental trial or pilot study. The studies have to be intervention-oriented rather than observational or review articles.

  3. The upper extremity tasks involved in the experimental procedure can be reaching, moving the arm along a circular trajectory, timing-oriented tasks, grasping, or other functional movements.

  4. The interventions of the studies have to involve the EA paradigm, the ER paradigm or both. Interventions that only provide feedback without any error modification are not included.

  5. The interventions have to be mainly based on haptic feedback, although other feedback such as visual and auditory feedback can be used as a supplement. The focus on haptic feedback stems from previous review papers showing that most studies on the EA and ER paradigms were in the field of robotics and mainly involved haptic feedback; restricting the review to haptic feedback therefore facilitates comparison across studies.

  6. The studies can either compare EA to ER or compare EA or ER to standard repetitive practice training that does not involve error modification.

  7. The outcomes of the studies can be kinematic or clinical. Kinematic outcomes have to measure the quality of movement, such as trajectory straightness, smoothness, timing error, etc. Clinical outcomes can measure either the impairment level (e.g. range of motion, spasticity, level of motor recovery) or the motor disability level. The assessment tools have to be validated, such as the Fugl-Meyer Assessment [41], Chedoke-McMaster Stroke Assessment [42], etc.

The following were the exclusion criteria:

  1. The language of publication is not English.

  2. The population studied is under 21 years of age. Stroke in the pediatric population may differ in aetiology, presentation and response to intervention, and including this age range could introduce several confounding variables.

  3. The number of participants is fewer than 5, in order to control the statistical certainty of the results; case studies are therefore excluded.

  4. The article is available only as a conference abstract.

  5. The main outcomes are not related to motor performance (as defined in the introduction) or recovery of the upper extremity.

Methodological quality assessment

The Physiotherapy Evidence Database (PEDro) scale [43] was chosen for the quality assessment of all selected articles, as studies have shown that the validity and reliability of the PEDro scale are well established [44,45,46]. The scale consists of 11 items: eligibility criteria specified, randomized allocation, concealed allocation, baseline similarity, blinded subjects, blinded therapists, blinded assessors, adequate follow-up, intention-to-treat analysis (an analysis performed as if the subjects had received the treatment as allocated, even if they received a different treatment), between-group comparison, and point estimates and variability [45]. One point is awarded when a criterion is clearly satisfied, except for the first criterion, ‘eligibility criteria specified’, which is not counted towards the score; the total score is therefore out of 10. PEDro scores are interpreted as follows: 6–10 indicates high methodological quality, 4–5 corresponds to fair quality, and less than 4 indicates poor quality [47]. The two reviewers (L.Y.L. and Y.L.) rated each of the selected studies separately, and the agreement between the two was calculated using Cohen’s kappa for each of the eleven items of the PEDro scale. The reviewers then compared and discussed their scores to decide the final score for each article.
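For context, Cohen’s kappa expresses the agreement between two raters corrected for the agreement expected by chance. A minimal sketch of the standard calculation follows; the proportions used in the example are placeholders, not the values reported in the Results.

```python
def cohens_kappa(p_observed, p_expected):
    """Chance-corrected agreement between two raters.

    p_observed: proportion of items on which the two raters agreed.
    p_expected: proportion of agreement expected by chance alone.
    """
    return (p_observed - p_expected) / (1.0 - p_expected)

# Placeholder proportions for illustration only
print(round(cohens_kappa(p_observed=0.80, p_expected=0.60), 2))  # 0.5
```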

Risk of bias assessment

The risk of bias was evaluated by reviewer L.Y.L. using the Cochrane Collaboration’s risk of bias tool [48]. This tool was developed in 2005 by the Cochrane Collaboration’s Methods Group as a new strategy for addressing the quality of randomized trials [49]. The tool involves assessing the risk of bias arising from each of six domains: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, and selective reporting, as well as other biases [48, 49].

Data extraction

The studies selected were divided into three categories based on their interventions and comparisons: (1) EA compared to training without error modification, (2) ER compared to training without error modification and (3) EA compared to ER. For each category of studies, a description table was used. The following data were abstracted from the selected studies:

  • Study design and, in the case of a clinical trial, whether the trial is registered in ClinicalTrials.gov (run by the United States National Library of Medicine)

  • Number of participants in the experimental and control groups

  • Demographic and clinical information of the participants

  • Equipment used

  • Experimental protocol including the parameters of training

  • Main outcome measures and assessment tools used

  • Results of the study, including significance levels and interpretations

  • Effect sizes of the results

  • Methodological quality scores of the study calculated using the PEDro scale.

Data analysis and synthesis

Outcomes were considered significant if: (1) the reported p-value was less than 0.05 or (2) the 95% confidence interval did not contain 0. Effect sizes were calculated using Cohen’s d: d = (mean of group 1 − mean of group 2) / pooled standard deviation. An effect size between 0.2 and 0.5 was considered small, between 0.5 and 0.8 medium, and above 0.8 large [50]. If the numerical values of the results were not reported in a particular study, a textual explanation is given in the results or effect size column of the tables. In order to synthesize the results, ratings of levels of evidence from Evidence Based Medicine were used (Appendix 3) [51].
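A minimal sketch of this effect-size calculation, assuming the commonly used sample-size-weighted pooled standard deviation (the group values below are placeholders, not data from the reviewed studies):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d between two groups, using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Placeholder group statistics for illustration only
d = cohens_d(mean1=12.0, sd1=4.0, n1=10, mean2=9.0, sd2=5.0, n2=10)
print(round(d, 2))  # 0.66 -> medium effect by the thresholds above
```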

Results

Study selection

Figure 1 illustrates the selection process of the studies included in this paper using the PRISMA 2009 flowchart. The overall search results consisted of 259 articles from the databases, and 10 from the manual search. Among the 269 studies, 80 duplicates were removed using EndNote X7, and 138 were excluded based on title and abstract screening. Furthermore, following full text reviews, 44 studies were excluded (see Appendix 2) such that 13 remaining articles were retained for the data extraction and synthesis. Among these 13 articles, 6 compared the effects of EA to training without error modification, 3 compared the effects of ER to training without error modification, and 4 compared EA to ER.

Fig. 1 The selection process of studies using the PRISMA 2009 flowchart

Study designs

Table 1 (EA only), Table 2 (ER only) and Table 3 (EA vs. ER) describe all 13 studies as well as their results. Among the 13 selected studies, there are six randomized controlled trials (RCTs) [52,53,54,55,56,57], five crossover studies [52, 54, 58,59,60], one quasi-experimental study [24], two randomized comparative studies [61, 62] and one pilot study [63]; note that studies [52] and [54] appear in both the RCT and crossover categories. Among all thirteen studies, only four could be found in a clinical trials registry [52, 56, 57, 63].

Table 1 Summary of studies that compared EA to training without error modification
Table 2 Summary of studies that compared ER to training without error modification
Table 3 Summary of studies that compared EA to ER

Participants

Apart from two studies that included healthy subjects in their control groups [24, 60], all other studies included only stroke survivors [52,53,54,55,56,57,58,59, 61,62,63]. Eleven of the thirteen selected studies [24, 52,53,54,55,56,57, 59,60,61,62] recruited participants with chronic stroke (i.e. more than six months post-stroke [64]), with a mean time post-stroke of 74.5 ± 46.8 months; one study [63] recruited participants with acute stroke (i.e. less than one month post-stroke [65]), and one study did not specify the stroke stage of its participants [58]. The number of participants varied from study to study, ranging from 7 [63] to 34 [61] with a mean of 19.6. The age of the participants varied greatly among studies, and almost every study included younger adults as well as seniors aged 65 and above. The mean age across all 13 studies was 55.04 ± 11.3 years. In terms of baseline clinical assessments, nine studies used Fugl-Meyer (FM) assessment scores [24, 52, 53, 56, 57, 60,61,62,63], three studies used Chedoke-McMaster Stroke Assessment (CM) scores [54, 55, 58], and one did not include any baseline clinical information [59]. Among the studies that used FM scores, four included stroke survivors with average FM scores between 30 and 40 (the lowest being 15 and the highest being 50) [24, 52, 53, 60]. One study [63] recruited participants with a mean FM score around 53–54, which indicates a higher functional level. Four other studies used the Arm Motor Fugl-Meyer (AMFM) score, which corresponds to the upper extremity section of the FM, and reported mean AMFM scores between 30 and 40 [62], 40–50 [56], 50–60 [57], and 60–66 [61]. Among the studies that used the CM (scores ranging from 1 (lowest) to 7 (highest)), the mean CM stages were 3.3 [55], 4.4 [58], and 4.56 [54], respectively.

Experiment protocols

Among the five crossover studies, two [52, 60] involved a protocol in which participants crossed between the experimental intervention (both related to the EA paradigm) and the control intervention (no distorted error feedback); two studies [54, 58] had participants crossing between EA and ER interventions; and one study had participants crossing between EA force alone and EA combined with positive limb inertia [59]. One study [56] divided the participants into two groups, the first receiving ER throughout the experiment and the second receiving the control intervention (no assistance) for the first half of the experiment and ER for the second half. In the study of Patton and colleagues (2006) there were three groups: stroke experimental, stroke control and healthy experimental. Half of the stroke experimental group experienced EA and the other half experienced ER, but it was unclear which intervention the healthy experimental group received [24]. Rozario and colleagues (2009) also recruited healthy subjects as a control group, but likewise, it was unclear which interventions the healthy subjects received [60]. The duration of the experiment varied greatly among the studies. Eight studies had protocols that involved multiple sessions over three to eight weeks [52, 54, 56,57,58, 60, 62, 63]. Four studies had only one single session [24, 53, 55, 61] and one study had three sessions in total [59].

Outcomes measures

All studies included clinical outcome assessments except two (Patton et al. 2006; Huang and Patton 2013 [59, 61]). The AMFM and the CM impairment inventory were the most frequently used clinical assessment scales, as they were used in nine [24, 52, 53, 55,56,57, 60, 62, 63] of the eleven studies that included clinical outcome measures. The Box and Blocks Test was used in three studies [52, 56, 60], the Wolf Motor Function Test (WMFT - functional ability scale (FAS) and time measures) in two studies [52, 60], range of motion (ROM) in two studies [52, 58], the Motor Status Score (MSS) in two studies [54, 58], the Modified Ashworth Scale (MAS) in two studies [24, 58] and the Action Research Arm Test (ARAT) in two studies [56, 57]. For data analysis and synthesis purposes, clinical scales were prioritized in the following way: (1) for motor impairments, AMFM > CM > MSS > MAS > ROM; (2) for motor disabilities, WMFT > MAL > Motor Assessment Scale > ARAT > Box and Blocks. Eight studies [24, 53,54,55, 58, 59, 61, 63] further included kinematic outcomes. While the kinematic outcomes differed from study to study, most were related to spatial, timing or velocity deviation errors [24, 53, 59, 61, 63]. One study used movement accuracy and smoothness as its main kinematic outcomes [58], and one study included trajectory of movement [54]. Other kinematic outcomes such as distance of reach [55] and speed of movement [55] were also used. It is to be noted that Takahashi and colleagues (2008) included electromyography (EMG) and functional magnetic resonance imaging (fMRI) as outcome assessment tools, but the results of these techniques were not the focus of this review and will not be discussed.

Methodological quality of trials

Information on the agreement between the two reviewers, as assessed with Cohen’s kappa, can be found in Table 4. The mean ± 1 standard error of Cohen’s kappa across all items of the PEDro scale was 0.423 ± 0.202, which can only be considered “moderate” [66], although the mean observed agreement percentage (Po) was high (78.32%). This could be due to the fact that the mean expected agreement percentage (Pe) was 63.15%, which is also considered medium-high. Table 5 summarizes the final PEDro scores of the selected studies after a comparison of results and discussion between the two reviewers. Five studies [52, 54, 57, 61, 62] were considered of ‘high quality’, corresponding to a score of 6/10 or above [47]. Four studies [53, 55, 56, 60] were considered of ‘fair quality’, corresponding to a score between 4/10 and 5/10 [47]. Finally, four studies [24, 58, 59, 63] were considered of ‘poor quality’, having a score of less than 4/10 [47]. The items that received the lowest scores were ‘blinded therapists’ (satisfied in one out of fourteen studies), ‘concealed allocation’ (two out of fourteen studies), and ‘intention-to-treat analysis’ (three out of fourteen studies). Total scores on the quality of trials are also included in Tables 1, 2 and 3.

Table 4 Assessment of agreement among the reviewers on the ratings of PEDro scale using Cohen’s kappa
Table 5 Methodological quality assessment of the studies using PEDro scale

Assessment of risk of bias

The risk of bias of the selected studies was assessed using the Cochrane Collaboration’s risk of bias tool (Table 6). It is to be noted that two studies [24, 63] had a high risk of bias in four of the six domains, and four studies [55, 56, 58, 59] were considered to have a high risk of bias in three of the six domains. The domain with the highest risk of bias was ‘allocation concealment’ (twelve out of fourteen studies). In the domain of ‘other bias’, the two most common biases were ‘small sample size’, present in seven of the thirteen studies [53,54,55,56, 58, 60, 63], and ‘short training protocol’, found in five of the thirteen studies [24, 53, 55, 59, 61].

Table 6 Assessment of risk of bias of the studies using Cochrane Collaboration’s risk of bias tool

Data analysis and synthesis

EA compared to training without error modification

As shown in Table 1, two high quality [52, 62], two fair quality [53, 60] and two poor quality [59, 63] studies investigated the effectiveness of EA compared to standard repetitive practice. In the first high quality RCT, by Abdollahi and colleagues (2014) [52], the EA group showed significantly greater improvement than the control group in AMFM score during the first phase of training, with a medium effect size. In the second phase, the difference was of low effect size and not significant [52]. When examining the results of the WMFT FAS, the EA group showed greater improvement in the first phase, but the opposite was seen in the second phase [52], which might be due to the EA training having a stronger cross-over effect. The effect sizes for both phases were medium, but the levels of significance were unknown. The results of the WMFT timing measures favored the EA group in both phases, but the effect sizes were low or very low and the levels of significance were unknown. In the Box and Block Test, no significant difference was found [52]. In the second high quality study, by Majeed and colleagues (2015) [62], the AMFM scores were not found to differ between the EA and control groups. It is to be noted that in this study the training period was considerably shorter than in Abdollahi et al. (2014). However, the EA group showed significantly better retention in AMFM at the one-week follow-up, with a medium effect size [62].

In the two fair quality studies, by Patton and colleagues (2006) and Rozario and colleagues (2009) [53, 60], the EA group showed greater improvement than the control group in movement and ROM errors. The effect sizes were medium, but the levels of significance were unknown (possibly non-significant because the sample sizes of the two studies were small: 15 and 10).

In the pilot study of Givon-Mayo and colleagues (2014) [63], the EA group showed greater improvement, with a medium effect size, over the control group in Motor Assessment Scale scores, but the level of significance was unknown (possibly non-significant because the sample size was very small: 7). The EA group also improved markedly over the control group in velocity deviation error (a measure of velocity error expressed as deviation from the optimal smooth acceleration), a result with a very large effect size that was significant [63]. In the study of Huang and Patton (2013), the EA group was the only group to show a significant improvement in radial deviation (a measure of movement error expressed as the distance between the handle and the template track in a circular movement task) compared to the control and EA-combined-with-inertia groups, though the effect size was small [59].

In summary, the following conclusions were drawn:

  1. There is moderate evidence (Level 1b) from one high quality study [52] that the EA training paradigm is more effective than standard repetitive practice without error modification at improving upper extremity motor impairments (as measured by AMFM) among people with chronic stroke.

  2. There is moderate evidence (Level 1b) from one high quality study [62] that the EA training paradigm yields better retention of improvement than standard repetitive practice without error modification for upper extremity motor impairments (as measured by AMFM) among people with chronic stroke.

  3. There is moderate evidence (Level 1b) from one high quality study [52] and one pilot study [63] that the EA training paradigm is more effective than standard repetitive practice without error modification at improving upper extremity functional disability (as measured by WMFT and Motor Assessment Scale) among people with chronic stroke.

  4. There is limited evidence (Level 2a) from two fair quality studies [53, 60], one pilot study [63], and one poor quality study [59] that the EA training paradigm is more effective than standard repetitive practice without error modification at improving reaching trajectory deviation and control (measured by kinematic outcomes such as movement errors, velocity errors, etc.) among people with chronic stroke.

ER compared to training without error modification

One high quality RCT [57] and two fair quality RCTs [55, 56] were included in the comparison of ER to training without error modification (Table 2). In the high quality study of Timmermans and colleagues (2014) [57], the control group consistently showed more improvement than the ER group on every outcome measure (AMFM, ARAT, and Motor Activity Log), but the differences in scores between the two groups were not significant and the effect sizes were small or very small.

In the fair quality study of Kahn and colleagues (2006) [55], the ER group showed more improvement than the control group in supported fraction of range (the reaching range of the affected arm, while supported by the robotic device, normalized to the same measure of the unaffected side) and supported fraction of speed (the reaching speed of the affected arm normalized to the same measure of the unaffected side), but the opposite result was seen in unsupported fraction of speed (the reaching speed of the affected arm without the support of the robotic device) and the CM assessment. All results in the study had small or very small effect sizes, and none was significant [55]. However, in another fair quality study, by Takahashi and colleagues (2008), the full ER group showed greater improvement, with a very large effect size, over the half ER/half control group on ARAT and AMFM scores, and the differences were significant [56]. In that same study, no change was found in the Box and Block Test.

The following conclusions were drawn:

  1. There is moderate evidence (Level 1b) from one high quality study [57] that the ER training paradigm is not more effective than standard repetitive practice without error modification at improving upper extremity motor impairments (as measured by AMFM) or at improving upper extremity functional disability (as measured by ARAT and MAL) among people with chronic stroke.

  2. There is limited evidence (Level 2a) from one fair quality study [55] that the ER training paradigm is not more effective than standard repetitive practice without error modification at improving reaching trajectory control (measured by kinematic outcomes such as supported range and supported speed) among people with chronic stroke.

EA compared to ER

Two high quality studies [54, 61] as well as two poor quality studies [24, 58] were included in the analysis (Table 3). In the high quality study of Bouchard and colleagues (2016) [61], the ER group showed an improvement in absolute timing errors while the EA group showed a deterioration, but the difference between the two groups was not significant and the effect size was small. In the high quality study of Tropea and colleagues (2013) [54], the difference in improvement between the ER and EA groups on the Modified Ashworth Scale (MAS) and the Motor Status Score (MSS) was not significant, and the effect sizes were small to medium. However, the EA group produced significantly smoother and straighter trajectories than the ER group [54].

In the study of Cesqui and colleagues (2008) [58], the differences between the EA and ER groups in MAS and MSS were similar to those reported by Tropea et al. (2013). In the quasi-experimental study of Patton and colleagues (2006), the EA group showed a very large effect size for the improvement in initial direction error over the ER group, and the result was significant [24].

The following conclusions were drawn:

  1. There is moderate evidence (Level 1b) from one high quality study [54] that the EA training paradigm is not more effective than the ER training paradigm at improving upper extremity spasticity (as measured by MAS) and motor impairment (as measured by MSS) among people with chronic stroke. It is to be noted, however, that in this study the baseline stroke severity differed between the two groups.

  2. There is moderate evidence (Level 1b) from one high quality study [61] that the EA training paradigm is not more effective than the ER training paradigm at improving movement timing (measured by absolute timing error) during a wrist flexion movement among people with chronic stroke.

  3. There is moderate evidence (Level 1b) from one high quality study [54] and one quasi-experimental study [24] that the EA training paradigm is more effective than the ER training paradigm at improving reaching trajectory control (as measured by kinematic outcomes such as trajectory smoothness, straightness and initial direction errors) among people with chronic stroke.

Overall, results suggested that EA induces larger improvement in clinical and kinematic outcomes compared to standard repetitive practice without error modification. Furthermore, results also unveiled the new findings that (i) there is a lack of evidence supporting the superiority of ER over standard repetitive practice in terms of improvement in clinical and kinematic outcomes; and (ii) EA is only superior to ER at improving kinematic outcomes. These findings were supported, globally, with a moderate level of evidence.

Discussion

This study completed, for the first time, a systematic review of intervention studies that compared the effectiveness of the EA training paradigm to standard repetitive practice without error modification, the ER paradigm to standard repetitive practice, and EA to ER at enhancing upper extremity motor recovery and performance in individuals with stroke. Thirteen studies were included in the review. The reason why EA was found to be more effective than standard repetitive practice while ER was not could be that haptic guidance and assistive therapy are more effective in the initial stage of motor learning, whereas error-based learning is relied upon more in later stages of learning. Indeed, it has been shown that in the initial stage of motor learning, motivation and positive reinforcement are believed to play a much more important role than the ability to identify errors [28]. Since most participants in the reviewed studies were in the chronic stage of stroke, they had presumably already gone through the initial stage of motor relearning.

While some differences in clinical outcomes between training paradigms were statistically significant, it is also important to assess their clinical relevance and effect size in order to address the objectives of this review. Amongst clinical tests that assess motor recovery, the AMFM shows a minimal detectable change (MDC) of 5.2 [67] and a minimally clinically important difference (MCID) ranging from 4.25 to 7.25 [68]. None of the reviewed studies on EA presented intervention gains that met the MDC or MCID for this test. In fact, only Takahashi and colleagues (2008) [56], who compared ER to standard practice, reported results that met the MDC and MCID for the AMFM, and this in both intervention groups. For the WMFT FAS and the WMFT time measure, which reflect motor abilities in functional and timed tasks, none of the studies reviewed met the MCID (WMFT FAS ranging from 0.2 to 0.4 point; WMFT time measure ranging from 1.5 to 2.0 s [69]). The MCID for the ARAT (5.7 [70]) was attained only in Timmermans and colleagues’ study (2014) [57], by both the ER and standard practice groups. It is to be noted that no established MCID was found for the Motor Assessment Scale, the Motor Activity Log or the Motor Status Score. Spasticity, as measured by the MAS, showed intervention-induced changes that reached the MCID (1 point [71]) for both ER and EA in the two studies that compared these approaches [54, 58]. The Box and Blocks Test and ROM did not show any significant change in any of the intervention groups in the thirteen studies reviewed, presumably because arm trajectory control, as opposed to manual dexterity and joint mobility, was specifically targeted by the interventions. In addition, the effect sizes of the differences in clinical outcomes were, for the most part, moderate or small across the thirteen studies. Collectively, these observations suggest that while EA was found to have superior effects over standard repetitive practice for improving upper extremity motor impairments and functional disability, it has yet to demonstrate that it can yield clinically meaningful changes in clinical outcomes of motor impairment and function. Such observations also raise important questions, namely whether the intervention was delivered optimally (e.g. in terms of training intensity, duration, feedback sensory modality, stroke chronicity and baseline level of motor recovery) and whether the selected outcomes were actually best suited to capture the improvements brought about by the intervention.

To that effect, the EA training paradigm was further found to be more effective at improving kinematic outcomes that measure reaching trajectory control compared to both ER and standard repetitive practice. Indeed, two studies showed very large effect sizes on the difference between EA and standard repetitive practice, and between EA and ER [24, 63]. Furthermore, when comparing EA to ER, the only statistically significant difference that emerged was in the kinematic outcomes which were in favor of the EA group. In fact, although EA showed larger improvement than standard practice and although ER did not show significant difference compared to standard practice in terms of clinical outcomes, EA surprisingly did not appear to be better than ER at improving clinical outcomes. It has been shown that kinematic variables are highly responsive to changes in motor performance following training intervention [72] and that they can capture the quality of the movement which is another important aspect of motor abilities [73]. In the context of this study, this could suggest that EA is actually better than ER at improving the quality of movement which is mostly measured by the kinematic outcomes, but such improvement could not be detected by most of the examined clinical outcomes. From a broader perspective, these observations emphasize the need to deeply understand the mechanisms of action of error modification interventions and select outcome measures accordingly.

Besides factors related to the intervention itself (intensity, duration, etc.), person-related factors such as the site of lesion, stroke severity and chronicity may also have influenced the results of the studies reviewed in this manuscript and the ensuing conclusions. Unfortunately, most studies did not provide information on brain lesion location. Among the three studies that did provide this information [24, 52, 53], participants had suffered strokes in a variety of areas (e.g. cortical, sub-cortical, thalamus, basal ganglia, brain stem) and the distribution of the different lesion sites amongst groups was not reported, making it impossible to analyse the effects of lesion location. As for stroke severity, among the studies that compared EA to repetitive practice, baseline AMFM scores did not seem to influence the results, because participants with AMFM scores ranging from 15 to 55 [52, 53, 60, 62, 63] all demonstrated larger improvements with the EA training. However, it was difficult to draw definite conclusions on ER vs. standard repetitive practice and EA vs. ER, as the number of studies in these two categories was small and the studies used different outcome measures to assess stroke severity. Lastly, most of the studies recruited only chronic stroke survivors, making it difficult to appraise the effects of stroke chronicity and limiting the generalization of findings mainly to chronic stroke survivors.

Results of this review also highlighted contradictions across studies which could be due to an influence of participants’ personal factors on intervention outcomes. For instance, Takahashi and colleagues (2008) [56] suggested that full ER practice was better than half ER/half standard repetitive practice at improving AMFM and ARAT scores, a finding that contradicts those of other studies [55, 57]. The full ER group, however, had an average time since stroke onset of 1.2 years compared to 4.8 years for the other intervention group, which suggests that time since stroke onset might be a factor influencing motor recovery [56]. Moreover, the full ER group also had a baseline average AMFM score nine points lower than the other group [56], possibly leaving more room for improvement in the former group. We therefore suggest that, at this point in time, a deeper investigation of the influence of patient-related factors on intervention outcomes is warranted.

This systematic review has some limitations. The risk of bias among the selected studies is high, as most of them had either a short training period or a small sample size. Another limitation lies in the fact that many studies did not provide numerical values for the standard deviations of their results, or the standard deviations had to be estimated from tables or figures, which may have affected the calculation of some effect sizes. Only one of the 13 studies [57] reported the effects of the intervention on arm use, which is an important predictor of upper extremity motor recovery. It should also be noted that 6 of the 10 studies involving EA training may come from the same research group [24, 52, 53, 59, 60, 62]. Moreover, the main methodological quality assessment was done using the PEDro scale. Like many checklist-style appraisal tools, PEDro has the disadvantage of giving the same weighting (1 point) to every category of source of bias; however, depending on the type of study, not all sources of bias affect internal validity equally. Finally, before starting this systematic review, the authors had planned to conduct experimental studies on the use of EA and ER in motor learning in the future, which could act as a source of bias, although an unintentional one.

Conclusion

In response to the research questions posed in this paper, the following conclusions were drawn with regard to the chronic stroke population: (1) Interventions involving an EA paradigm were more effective than interventions without error modification at improving upper extremity impairments, disabilities and reaching trajectory control; (2) Interventions involving an ER paradigm were not more effective than interventions without error modification at improving upper extremity impairments and disabilities; and (3) Interventions involving an EA paradigm were more effective than interventions involving an ER paradigm at improving reaching trajectory control. While these conclusions hold true at a statistical level, this review further demonstrates that EA and ER, like standard repetitive practice, induced changes in clinical outcomes of motor recovery and function that did not reach the minimal clinically important difference. Nevertheless, this review showed that the EA paradigm has promising effects for post-stroke upper extremity rehabilitation.

In the future, clinical trials of strong methodological quality, which include sensitive outcomes that capture changes in movement quality and patient functioning in activities of daily living, are needed to demonstrate the effects of error-modification therapies with a stronger level of evidence and to possibly achieve clinically meaningful changes. The influence of intervention-related factors such as training intensity and duration, as well as personal factors such as the site of lesion, severity of stroke and stroke chronicity, on error-modification intervention paradigms should be explored further. Finally, the emergence of virtual reality makes other modalities, namely visual and auditory feedback, potential alternatives to haptic feedback. These modalities could be cheaper and easier to implement than robotics, and a growing number of studies have begun to examine the effects of these forms of feedback on motor learning. Therefore, the use of different modalities of feedback, such as visual, auditory and/or a combination of multiple sensory modalities, could also be investigated.

Abbreviations

ADLs:

Activities of daily living

AMFM:

Arm motor Fugl-Meyer

ARAT:

Action Research Arm Test

CM:

Chedoke-McMaster scale score

CVA:

Cerebrovascular accident

EA:

Error augmentation

ER:

Error reduction

FM:

Fugl-Meyer Assessment of Motor Recovery after Stroke

ICF:

International Classification of Functioning, Disability and Health

MAL:

Motor Activity Log

MAS:

Modified Ashworth Scale

MCID:

Minimally clinically important difference

MDC:

Minimal detectable change

MSS:

Motor Status Score

ROM:

Range of motion

WMFT FAS:

Wolf Motor Function Test Functional Ability Scale

References

  1. Hodgson C. Prevalence and disabilities of community-living seniors who report the effects of stroke. Can Med Assoc J. 1998;6:9–14.


  2. Verbrugge LM, Lepkowski JM, Imanaka Y. Comorbidity and its impact on disability. Milbank Q. 1989;67(3–4):450–84.


  3. Wielgosz A, Arango M, Bancej C, Bienek A, Johansen H, Lindsey P, Luo W, Luteyn A, Nair C, Quan P, Stewart P, Walsh P, Webster G. Tracking heart disease and stroke in Canada. Public Health Agency Canada. 2009;e10-13.

  4. Benjamin EJ, Blaha MJ, Chiuve SE. Heart disease and stroke statistics—2017 update: a report from the American Heart Association. In: Vol. 135: American Heart Association statistics committee and stroke statistics subcommittee; 2017. p. e229–445.


  5. Saka Ö, McGuire A, Wolfe C. Cost of stroke in the United Kingdom. Age Ageing. 2009;38(1):27–32.


  6. Mayo NE, Wood-Dauphinee S, Ahmed S, Gordon C, Higgins J, McEwen S, Salbach N. Disablement following stroke. Disabil Rehabil. 1999;21(5–6):258–68.


  7. Hebert D, Lindsay MP, McIntyre A, Kirton A, Rumney PG, Bagg S, Bayley M, Dowlatshahi D, Dukelow S, Garnhum M, et al. Canadian stroke best practice recommendations: stroke rehabilitation practice guidelines, update 2015. Int J Stroke. 2016;11(4):459–84.


  8. Duncan PW, Goldstein LB, Horner RD, Landsman PB, Samsa GP, Matchar DB. Similar motor recovery of upper and lower extremities after stroke. Stroke. 1994;25(6):1181–8.


  9. Parker VM, Wade DT, Langton Hewer R. Loss of arm function after stroke: measurement, frequency, and recovery. Int Rehabil Med. 1986;8(2):69–73.


  10. Heller A, Wade DT, Wood VA, Sunderland A, Hewer RL, Ward E. Arm function after stroke: measurement and recovery over the first three months. J Neurol Neurosurg Psychiatry. 1987;50(6):714–9.


  11. Sunderland A, Tinson D, Bradley L, Hewer RL. Arm function after stroke. An evaluation of grip strength as a measure of recovery and a prognostic indicator. J Neurol Neurosurg Psychiatry. 1989;52(11):1267–72.


  12. Wade DT, Langton-Hewer R, Wood VA, Skilbeck CE, Ismail HM. The hemiplegic arm after stroke: measurement and recovery. J Neurol Neurosurg Psychiatry. 1983;46(6):521–4.


  13. Olsen TS. Arm and leg paresis as outcome predictors in stroke rehabilitation. Stroke. 1990;21(2):247–51.


  14. Krakauer JW. Motor learning: its relevance to stroke recovery and neurorehabilitation. Curr Opin Neurol. 2006;19(1):84–90.


  15. Schmidt R. Motor control and learning: a behavioral emphasis, 2nd edn. Champaign: Human Kinetics Publishers; 1988.


  16. Kleim JA, Jones TA. Principles of experience-dependent neural plasticity: implications for rehabilitation after brain damage. J Speech Lang Hear Res. 2008;51(1):S225–39.


  17. Sage GH. Motor learning and control: a neuropsychological approach. Dubuque: William C Brown Pub; 1984.

  18. Young DE, Schmidt RA, Lee TD. Human Motor Learning & Human Learning, 1st edn. In: International encyclopedia of ergonomics and human factors; 2001.


  19. Gilmore PE, Spaulding SJ. Motor control and motor learning: implications for treatment of individuals post stroke. Phys Occup Ther Geriatr. 2001;20(1):1–15.


  20. Norouzi-Gheidari N, Archambault PS, Fung J. Effects of robot-assisted therapy on stroke rehabilitation in upper limbs: systematic review and meta-analysis of the literature. J Rehabil Res Dev. 2012;49(4):479–96.


  21. Levin MF, Kleim JA, Wolf SL. What do motor “recovery” and “compensation” mean in patients following stroke? Neurorehabil Neural Repair. 2009;23(4):313–9.


  22. Dobkin BH. Strategies for stroke rehabilitation. Lancet Neurol. 2004;3(9):528–36.


  23. Kao PC, Srivastava S, Agrawal SK, Scholz JP. Effect of robotic performance-based error-augmentation versus error-reduction training on the gait of healthy individuals. Gait Posture. 2013;37(1):113–20.


  24. Patton JL, Stoykov ME, Kovic M, Mussa-Ivaldi FA. Evaluation of robotic training forces that either enhance or reduce error in chronic hemiparetic stroke survivors. Exp Brain Res. 2006;168(3):368–83.


  25. Milot MH, Marchal-Crespo L, Green CS, Cramer SC, Reinkensmeyer DJ. Comparison of error-amplification and haptic-guidance training techniques for learning of a timing-based motor task by healthy individuals. Exp Brain Res. 2010;201(2):119–31.


  26. Molenberghs P, Cunnington R, Mattingley JB. Is the mirror neuron system involved in imitation? A short review and meta-analysis. Neurosci Biobehav Rev. 2009;33(7):975–80.


  27. Rizzolatti G, Fabbri-Destro M, Cattaneo L. Mirror neurons and their clinical relevance. Nat Clin Pract Neurol. 2009;5(1):24–34.


  28. Sidarta A, Vahdat S, Bernardi NF, Ostry DJ. Somatic and reinforcement-based plasticity in the initial stages of human motor learning. J Neurosci. 2016;36(46):11682–92.


  29. Latash ML. The bliss (not the problem) of motor abundance (not redundancy). Exp Brain Res. 2012;217(1):1–5.


  30. Diedrichsen J, White O, Newman D, Lally N. Use-dependent and error-based learning of motor behaviors. J Neurosci. 2010;30(15):5159–66.


  31. Shadmehr R, Mussa-Ivaldi F. Adaptive representation of dynamics during learning of a motor task. J Neurosci. 1994;14(5):3208–24.


  32. Dancause N, Ptito A, Levin MF. Error correction strategies for motor behavior after unilateral brain damage: short-term motor learning processes. Neuropsychologia. 2002;40(8):1313–23.


  33. Fine MS, Thoroughman KA. Trial-by-trial transformation of error into sensorimotor adaptation changes with environmental dynamics. J Neurophysiol. 2007;98(3):1392–404.


  34. Franklin DW, Burdet E, Tee KP, Osu R, Chew CM, Milner TE, Kawato M. CNS learns stable, accurate, and efficient movements using a simple algorithm. J Neurosci. 2008;28(44):11165–73.


  35. Halsband U, Lange RK. Motor learning in man: a review of functional and clinical studies. J Physiol Paris. 2006;99(4–6):414–24.


  36. Thoroughman KA, Shadmehr R. Learning of action through adaptive combination of motor primitives. Nature. 2000;407(6805):742–7.


  37. Weeks DL, Aubert MP, Feldman AG, Levin MF. One-trial adaptation of movement to changes in load. J Neurophysiol. 1996;75(1):60–74.


  38. Shadmehr R, Mussa-Ivaldi FA. Adaptive representation of dynamics during learning of a motor task. J Neurosci. 1994;14(5):3208.


  39. Alexoulis-Chrysovergis AC, Weightman A, Hodson-Tole E, Deconinck FJA. Error augmented robotic rehabilitation of the upper limb a review, Neurotechnix: proceedings of the international congress on Neurotechnology, electronics and informatics; 2013. p. 167–78.

  40. Israely S, Carmeli E. Error augmentation as a possible technique for improving upper extremity motor performance after a stroke - a systematic review. Top Stroke Rehabil. 2016;23(2):116–25.

  41. Malouin F, Pichard L, Bonneau C, Durand A, Corriveau D. Evaluating motor recovery early after stroke: comparison of the Fugl-Meyer assessment and the motor assessment scale. Arch Phys Med Rehabil. 1994;75(11):1206–12.

  42. Gowland C, Stratford P, Ward M, Moreland J, Torresin W, Van Hullenaar S, Sanford J, Barreca S, Vanspall B, Plews N. Measuring physical impairment and disability with the Chedoke-McMaster stroke assessment. Stroke. 1993;24(1):58–63.

  43. Sherrington C, Herbert RD, Maher CG, Moseley AM. PEDro. A database of randomized trials and systematic reviews in physiotherapy. Man Ther. 2000;5(4):223–6.

  44. Foley NC, Bhogal SK, Teasell RW, Bureau Y, Speechley MR. Estimates of quality and reliability with the physiotherapy evidence-based database scale to assess the methodology of randomized controlled trials of pharmacological and nonpharmacological interventions. Phys Ther. 2006;86(6):817–24.

  45. Maher CG, Sherrington C, Herbert RD, Moseley AM, Elkins M. Reliability of the PEDro scale for rating quality of randomized controlled trials. Phys Ther. 2003;83(8):713–21.

  46. Olivo SA, Macedo LG, Gadotti IC, Fuentes J, Stanton T, Magee DJ. Scales to assess the quality of randomized controlled trials: a systematic review. Phys Ther. 2008;88(2):156–75.

  47. Foley NC, Teasell RW, Bhogal SK, Speechley MR. Stroke rehabilitation evidence-based review: methodology. Top Stroke Rehabil. 2003;10(1):1–7.

  48. Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

  49. Savovic J, Weeks L, Sterne JA, Turner L, Altman DG, Moher D, Higgins JP. Evaluation of the Cochrane Collaboration's tool for assessing the risk of bias in randomized trials: focus groups, online survey, proposed recommendations and their implementation. Syst Rev. 2014;3:37.

  50. Cohen J. Statistical power analysis for the behavioral sciences, 2nd edn. Abingdon: Routledge; 1988.

  51. Sackett DL, Straus SE, Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: how to practice and teach EBM, 2nd edn. London: Churchill Livingstone; 2000.

  52. Abdollahi F, Case Lazarro ED, Listenberger M, Kenyon RV, Kovic M, Bogey RA, Hedeker D, Jovanovic BD, Patton JL. Error augmentation enhancing arm recovery in individuals with chronic stroke: a randomized crossover design. Neurorehabil Neural Repair. 2014;28(2):120–8.

  53. Patton JL, Kovic M, Mussa-Ivaldi FA. Custom-designed haptic training for restoring reaching ability to individuals with poststroke hemiparesis. J Rehabil Res Dev. 2006;43(5):643–56.

  54. Tropea P, Cesqui B, Monaco V, Aliboni S, Posteraro F, Micera S. Effects of the alternate combination of error-enhancing and active assistive robot-mediated treatments on stroke patients. IEEE J Transl Eng Health Med. 2013;1:2100109.

  55. Kahn LE, Zygman ML, Rymer WZ, Reinkensmeyer DJ. Robot-assisted reaching exercise promotes arm movement recovery in chronic hemiparetic stroke: a randomized controlled pilot study. J Neuroeng Rehabil. 2006;3:12.

  56. Takahashi CD, Der-Yeghiaian L, Le V, Motiwala RR, Cramer SC. Robot-based hand motor therapy after stroke. Brain. 2008;131(Pt 2):425–37.

  57. Timmermans AAA, Lemmens RJM, Monfrance M, Geers RPJ, Bakx W, Smeets R, Seelen HAM. Effects of task-oriented robot training on arm function, activity, and quality of life in chronic stroke patients: a randomized controlled trial. J Neuroeng Rehabil. 2014;11:45.

  58. Cesqui B, Aliboni S, Mazzoleni S, Carrozza MC, Posteraro F, Micera S. On the use of divergent force fields in robot-mediated neurorehabilitation. In: Proceedings of the IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob); 2008. p. 942–9.

  59. Huang FC, Patton JL. Augmented dynamics and motor exploration as training for stroke. IEEE Trans Biomed Eng. 2013;60(3):838–44.

  60. Rozario SV, Housman S, Kovic M, Kenyon RV, Patton JL. Therapist-mediated post-stroke rehabilitation using haptic/graphic error augmentation. Conference proceedings, Annual international conference of the IEEE engineering in medicine and biology society. IEEE engineering in medicine and biology society. Conference; 2009. p. 1151–6.

  61. Bouchard AE, Corriveau H, Milot MH. A single robotic session that guides or increases movement error in survivors post-chronic stroke: which intervention is best to boost the learning of a timing task? Disabil Rehabil. 2016;39(16):1607–14.

  62. Majeed YA, Abdollahi F, Awadalla S, Patton J. Multivariate outcomes in a three week bimanual self-telerehabilitation with error augmentation post-stroke. Conf Proc IEEE Eng Med Biol Soc. 2015;2015:1425–31.

  63. Givon-Mayo R, Simons E, Ohry A, Karpin H, Israely S, Carmeli E. A preliminary investigation of error enhancement of the velocity component in stroke patients’ reaching movements. J Isr Phys Ther Soc. 2014;16(2):36.

  64. Chronic stage of recovery. http://www.strokengine.ca/glossary/chronic-stage-of-recovery/. Accessed June 2017.

  65. Acute stage of recovery. http://www.strokengine.ca/glossary/acute-stage-of-recovery/. Accessed June 2017.

  66. Altman DG. Practical statistics for medical research. London: Chapman and Hall; 1991.

  67. Wagner JM, Rhodes JA, Patten C. Reproducibility and minimal detectable change of three-dimensional kinematic analysis of reaching tasks in people with hemiparesis after stroke. Phys Ther. 2008;88(5):652–63.

  68. Page SJ, Fulk GD, Boyne P. Clinically important differences for the upper-extremity Fugl-Meyer scale in people with minimal to moderate impairment due to chronic stroke. Phys Ther. 2012;92(6):791–8.

  69. Lin KC, Hsieh YW, Wu CY, Chen CL, Jang Y, Liu JS. Minimal detectable change and clinically important difference of the Wolf Motor function test in stroke patients. Neurorehabil Neural Repair. 2009;23(5):429–34.

  70. Van der Lee JH, De Groot V, Beckerman H, Wagenaar RC, Lankhorst GJ, Bouter LM. The intra- and interrater reliability of the action research arm test: a practical test of upper extremity function in patients with stroke. Arch Phys Med Rehabil. 2001;82(1):14–9.

  71. Shaw L, Rodgers H, Price C, van Wijck F, Shackley P, Steen N, Barnes M, Ford G, Graham L, Ti B. BoTULS: a multicentre randomised controlled trial to evaluate the clinical effectiveness and cost-effectiveness of treating upper limb spasticity due to stroke with botulinum toxin type A. Health Technol Assess. 2010;14(26):1–113. iii-iv.

  72. Platz T, Prass K, Denzler P, Bock S, Mauritz KH. Testing a motor performance series and a kinematic motion analysis as measures of performance in high-functioning stroke patients: reliability, validity, and responsiveness to therapeutic intervention. Arch Phys Med Rehabil. 1999;80(3):270–7.

  73. Subramanian SK, Yamanaka J, Chilingaryan G, Levin MF. Validity of movement pattern kinematics as measures of arm motor impairment poststroke. Stroke. 2010;41(10):2303–8.

  74. World Health Organization. Towards a common language for functioning, disability and health: ICF beginner's guide. Geneva: WHO; 2002.

  75. Abdollahi F, Rozario SV, Kenyon RV, Patton JL, Case E, Kovic M, Listenberger M. Arm control recovery enhanced by error augmentation. IEEE Int Conf Rehab Robot. 2011;2011:5975504.

  76. Agostini M, Turolla A, Cocco L, Daud OA, Oboe R, Piron L. Haptic interface for hand rehabilitation in persons with a stroke. Physiotherapy (United Kingdom). 2011;97:eS34–5.

  77. Arab Baniasad M, Farahmand F, Nakhostin Ansari N. Multidisciplinary clinical rehabilitation neurorehabilitation of the upper extremities in stroke patients using a new robotic device; wrist-robohab robot. Int J Stroke. 2014;9:239.

  78. Badia SBI, Verschure PFMJ. Virtual reality based upper extremity rehabilitation following stroke: a review. J Cyber Ther Rehabil. 2008;1(1):63–74.

  79. Basteris A, Nijenhuis SM, Stienen AH, Buurke JH, Prange GB, Amirabdollahian F. Training modalities in robot-mediated upper limb rehabilitation in stroke: a framework for classification based on a systematic review. J Neuroeng Rehabil. 2014;11:111.

  80. Beling J, Zondervan D, Snyder B, Jiggs G, Reinkensmeyer D. Use of a mechanically passive rehabilitation device as a training tool in Vietnam: impact on upper extremity rehabilitation after stroke. Physiotherapy. 2015;101:e138–9.

  81. Broeren J, Rydmark M, Björkdahl A, Sunnerhagen KS. Assessment and training in a 3-dimensional virtual environment with haptics: a report on 5 cases of motor rehabilitation in the chronic stage after stroke. Neurorehabil Neural Repair. 2007;21(2):180–9.

  82. Cameirão MS, Badia SBI, Duarte E, Frisoli A, Verschure PFMJ. The combined impact of virtual reality neurorehabilitation and its interfaces on upper extremity functional recovery in patients with chronic stroke. Stroke. 2012;43(10):2720–8.

  83. Cameirão MS, Badia SBI, Verschure PFMJ. Virtual reality based upper extremity rehabilitation following stroke: a review. J Cyber Ther Rehabil. 2008;1(1):63–74.

  84. Casadio M, Giannoni P, Morasso P, Sanguineti V. A proof of concept study for the integration of robot therapy with physiotherapy in the treatment of stroke patients. Clin Rehabil. 2009;23(3):217–28.

  85. Chemuturi R, Amirabdollahian F, Dautenhahn K. Performance based upper extremity training: a pilot study evaluation with the GENTLE/a rehabilitation system. IEEE Int Conf Rehab Robot. 2013;2013:6650380.

  86. Chemuturi R, Amirabdollahian F, Dautenhahn K. Adaptive training algorithm for robot-assisted upper-arm rehabilitation, applicable to individualised and therapeutic human-robot interaction. J Neuroeng Rehabil. 2013;10:102.

  87. Coote S, Murphy B, Harwin W, Stokes E. The effect of the GENTLE/s robot-mediated therapy system on arm function after stroke. Clin Rehabil. 2008;22(5):395–405.

  88. Crocher V, Sahbani A, Robertson J, Roby-Brami A, Morel G. Constraining upper limb synergies of hemiparetic patients using a robotic exoskeleton in the perspective of neuro-rehabilitation. IEEE Trans Neural Syst Rehabil Eng. 2012;20(3):247–57.

  89. De Santis D, Zenzeri J, Casadio M, Masia L, Riva A, Morasso P, Squeri V. Robot-assisted training of the kinesthetic sense: enhancing proprioception after stroke. Front Hum Neurosci. 2015;8:1037.

  90. Fasoli SE, Krebs HI, Stein J, Frontera WR, Hughes R, Hogan N. Robotic therapy for chronic motor impairments after stroke: follow-up results. Arch Phys Med Rehabil. 2004;85(7):1106–11.

  91. Fischer HC, Triandafilou KM, Thielbar KO, Ochoa JM, Lazzaro ED, Pacholski KA, Kamper DG. Use of a portable assistive glove to facilitate rehabilitation in stroke survivors with severe hand impairment. IEEE Trans Neural Syst Rehabil Eng. 2016;24(3):344–51.

  92. Fluet G, Qiu Q, Lafond I, Soha S, Merians A, Adamovich S. Is integrated upper extremity training more effective than isolated training of the hand and fingers in persons with hemiparesis? Physiotherapy (United Kingdom). 2011;97:eS347–8.

  93. Fluet GG. Robotically facilitated virtual rehabilitation of arm transport integrated with finger movement versus isolated training of the arm and hand in persons with hemiparesis. Newark: University of Medicine and Dentistry of New Jersey; 2012.

  94. Fluet GG, Merians AS, Qiu Q, Davidow A, Adamovich SV. Comparing integrated training of the hand and arm with isolated training of the same effectors in persons with stroke using haptically rendered virtual environments, a randomized clinical trial. J Neuroeng Rehabil. 2014;11:126.

  95. Hachisuka K, Wada F, Ochi M, Hachisuka A, Saeki S, Nakatsuru M, Hamada M, Yamamoto I, Matsui M, Inagawa N. Multidisciplinary clinical rehabilitation development and clinical trial of a simple training-assistance robot with motion angle assistance for the upper extremities in stroke patients: a preliminary study. Int J Stroke. 2014;9:224.

  96. Housman S, Kovic M, Kenyon RV, Patton JL. Therapist-mediated post-stroke rehabilitation using haptic/graphic error augmentation. Conference proceedings, Annual international conference of the IEEE engineering in medicine and biology society. IEEE engineering in medicine and biology society. Conference; 2009. p. 1151–6.

  97. Huang FC, Patton JL. Evaluation of negative viscosity as upper extremity training for stroke survivors. IEEE Int Conf Rehab Robot. 2011;2011:5975514.

  98. Krebs HI, Mernoff S, Fasoli SE, Hughes R, Stein J, Hogan N. A comparison of functional and impairment-based robotic training in severe to moderate chronic stroke: a pilot study. NeuroRehabilitation. 2008;23(1):81–7.

  99. Lam P, Hebert D, Boger J, Lacheray H, Gardner D, Apkarian J, Mihailidis A. A haptic-robotic platform for upper-limb reaching stroke therapy: preliminary design and evaluation results. J NeuroEng Rehab. 2008;5:15.

  100. Lambercy O, Dovat L, Yun H, Wee SK, Kuah CW, Chua KS, Gassert R, Milner TE, Teo CL, Burdet E. Effects of a robot-assisted training of grasp and pronation/supination in chronic stroke: a pilot study. J Neuroeng Rehabil. 2011;8:63.

  101. Lemmens R, Timmermans A, Smeets R, Seelen H. Transfer of motor learning in (robotic) task-oriented arm-hand training after stroke. Neurorehabil Neural Repair. 2012;26(6):747.

  102. Liao WW, Wu CY, Hsieh YW, Lin KC, Chang WY. Effects of robot-assisted upper limb rehabilitation on daily function and real-world arm activity in patients with chronic stroke: a randomized controlled trial. Clin Rehabil. 2012;26(2):111–20.

  103. Lin CY, Tsai CM, Shih PC, Wu HC. Development of a novel haptic glove for improving finger dexterity in poststroke rehabilitation. Technol Health Care. 2015;24(Suppl 1):S97–103.

  104. Milot MH, Hamel M, Provost PO, Bernier-Ouellet J, Dupuis M, Letourneau D, Briere S, Michaud F. Exerciser for rehabilitation of the arm (ERA): development and unique features of a 3D end-effector robot. Conf Proc IEEE Eng Med Biol Soc. 2016;2016:5833–6.

  105. Oblak J, Cikajlo I, Matjacic Z. Universal haptic drive: a robot for arm and wrist rehabilitation. IEEE Trans Neural Syst Rehabil Eng. 2010;18(3):293–302.

  106. Orihuela-Espina F, Roldan GF, Sanchez-Villavicencio I, Palafox L, Leder R, Sucar LE, Hernandez-Franco J. Robot training for hand motor recovery in subacute stroke patients: a randomized controlled trial. J Hand Ther. 2016;29(1):51–7. quiz 57.

  107. Patton JL, Mussa-Ivaldi FA. Robot-assisted adaptive training: custom force fields for teaching movement patterns. IEEE Trans Biomed Eng. 2004;51(4):636–46.

  108. Perry JC, Oblak J, Je HJ, Cikajlo I, Veneman JF, Goljar N, Bizovičar N, Matjačić Z, Keller T. Variable structure pantograph mechanism with spring suspension system for comprehensive upper-limb haptic movement training. J Rehabil Res Dev. 2011;48(4):317–33.

  109. Phyo ST, Kheng LK, Kumar S. Design and development of robotic rehabilitation device for post stroke therapy. Int J Pharm Med Biol Sci. 2016;5(1):31–7.

  110. Squeri V, Masia L, Giannoni P, Sandini G, Morasso P. Wrist rehabilitation in chronic stroke patients by means of adaptive, progressive robot-aided therapy. IEEE Trans Neural Syst Rehabil Eng. 2014;22(2):312–25.

  111. Stein J, Krebs HI, Frontera WR, Fasoli SE, Hughes R, Hogan N. Comparison of two techniques of robot-aided upper limb exercise training after stroke. Am J Phys Med Rehabil. 2004;83(9):720–8.

  112. Timmermans A, Lemmens R, Pulles S, Smeets R, Seelen H. Effectiveness of haptic master supported task-oriented arm training in chronic stroke patients. Neurorehabil Neural Repair. 2012;26(6):751–2.

  113. Timmermans A, Smeets R, Seelen H. Transfer of motor learning in (robotic) task-oriented arm-hand training after stroke. Neurorehabil Neural Repair. 2012;26(6):747.

  114. Turolla A, Daud Albasini OA, Oboe R, Agostini M, Tonin P, Paolucci S, Sandrini G, Venneri A, Piron L. Haptic-based neurorehabilitation in poststroke patients: a feasibility prospective multicentre trial for robotics hand rehabilitation. Comput Math Methods Med. 2013;2013:895492.

  115. Waldner A, Tomelleri C, Hesse S. Transfer of scientific concepts to clinical practice: recent robot-assisted training studies. Funct Neurol. 2009;24(4):173–7.

  116. Ziherl J, Novak D, Olensek A, Mihelj M, Munih M. Evaluation of upper extremity robot-assistances in subacute and chronic stroke subjects. J Neuroeng Rehabil. 2010;7:52.

  117. Zondervan DK, Palafox L, Hernandez J, Reinkensmeyer DJ. The resonating arm exerciser: design and pilot testing of a mechanically passive rehabilitation device that mimics robotic active assistance. J Neuroeng Rehabil. 2013;10:39.

Acknowledgements

We would like to thank Dr. Mindy Levin, Dr. Sandeep Subramanian, Dr. Noémi Dahan-Oliel, Dr. Aliki Thomas and Dr. André Bussières for their knowledge input and feedback on this review paper. We would also like to thank Tatiana Ogourtsova for her assistance in the literature searching process.

Funding

The research in this publication was funded by the Canadian Institutes of Health Research (CIHR, MOP-77548).

Author information

Authors and Affiliations

Authors

Contributions

LYL participated in keyword selection, literature review and selection, literature appraisal, data analysis and synthesis, manuscript writing, manuscript revision, and coordination of the paper submission. YL participated in keyword selection, literature review and selection, literature appraisal, and manuscript revision. AL participated in manuscript writing and manuscript revision while providing direction and support as the supervisor. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Le Yu Liu.

Ethics declarations

Ethics approval and consent to participate

Not applicable to this review article.

Consent for publication

Not applicable to this article.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1

Fig. 2 ICF model. Figure modified from WHO [74]

Appendix 2

Table 7 Articles excluded following full-text review

Appendix 3

Table 8 Ratings of level of evidence from Evidence Based Medicine

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Liu, L.Y., Li, Y. & Lamontagne, A. The effects of error-augmentation versus error-reduction paradigms in robotic therapy to enhance upper extremity performance and recovery post-stroke: a systematic review. J NeuroEngineering Rehabil 15, 65 (2018). https://doi.org/10.1186/s12984-018-0408-5

Keywords