Continuous decoding of grasping tasks for a prospective implantable cortical neuroprosthesis

Abstract

Background

In the recent past, several invasive cortical neuroprostheses have been developed. Signals recorded from the primary motor cortex (area M1) have been decoded and used to control computer cursors and robotic devices. Nevertheless, few attempts have been made to predict different grip types.

A classifier based on Support Vector Machines (SVMs) has been trained for the continuous decoding of four/six grip types using signals recorded from motor neurons of the ventral premotor cortex (area F5) in two monkeys during a reach-to-grasp task.

Findings

The results showed that four/six grip types could be extracted with a classification accuracy higher than 96% using window widths of 75–150 ms.

Conclusions

These results open new and promising possibilities for the development of invasive cortical neural prostheses for the control of reaching and grasping.

Findings

Introduction

In the recent past, many efforts have been devoted to the development of artificial devices that restore sensorimotor functions in people who have lost them due to amputation, spinal cord injury, stroke, etc. [1–3]. The possibility of connecting the peripheral and central nervous systems with artificial devices by means of invasive neural interfaces [1–5] is currently being investigated in order to increase the number of possible functional connections between patients with impaired sensory/motor functions and the artificial device (e.g., a hand prosthesis or a robotic arm) and to control it in a simple and intuitive way. Electrodes have been implanted invasively a) in peripheral nerves, to achieve a bidirectional control of hand prostheses in amputees [6, 7]; b) in the cortex, to extract the user’s motor commands from movement-related cortical signals [2, 8, 9] or to deliver sensory feedback by stimulating selected sectors of the somatosensory cortex [10]. Thus, invasive cortical neural prostheses (ICNPs) could help subjects affected by deficits caused by spinal cord injury, stroke, amyotrophic lateral sclerosis, cerebral palsy, and multiple sclerosis to re-establish some degree of autonomy by controlling an output on a computer or a robotic system. In most cases, research groups have focused their efforts on the extraction of information from the primary motor cortex (area M1) to drive a robotic arm: signals recorded from ensembles of M1 cortical neurons have been processed through different algorithms to predict reaching directions or trajectories of a robotic arm end-effector [11–14]. This approach has been tested with very promising results in animal models [2, 12] and, recently, in selected highly disabled subjects [8, 9, 15]. Moreover, data from individual [16] or ensembles of M1 neurons [17] have been used to predict hand or forearm muscle activity for brain-controlled functional electrical stimulation (FES).

The situation becomes more challenging when several degrees of freedom need to be controlled (e.g., dexterous hand prostheses). In this case, even though recent results have shown that information related to finger movements can be obtained from M1 activity [18, 19], it seems quite difficult to extract the kinematics of all the fingers simultaneously with this kind of approach. Therefore, the possibility of decoding higher-level information (e.g., the specific grip type instead of the trajectories of the hand joints) may offer several advantages. Previous experiments have shown that neurons of the ventral premotor area F5 (located in the posterior bank of the inferior limb of the arcuate sulcus and in the cortical convexity immediately adjacent to it) do not specify/encode a given pattern of movements (as M1 neurons do), but rather an end-state, like the goal of a motor act (e.g., grasping as a whole) [20]. Motor neural activity, recorded using single microelectrodes [21], multi-electrode arrays [22], ECoG [23], and local field potentials [24], changes according to the properties of the object to be handled and, thus, to the grasping task. Recent studies demonstrated that it is possible to distinguish between power and precision grips [22, 23], or among up to six grips considering different portions of the whole movement phase [25]. Starting from this neurophysiological rationale, a pattern recognition algorithm for the continuous decoding of four/six grip types from F5 neurons during the execution of a reach-to-grasp task is presented in this manuscript.

Materials and methods

The data analyzed in this paper were collected from area F5, in the posterior bank of the inferior limb of the arcuate sulcus, in three hemispheres (contralateral to the moving forelimb) of two awake monkeys (Macaca nemestrina). The behavioral apparatus and task, animal training, and data collection have been described in [21]. Briefly, the monkey sat in front of a rotating turntable subdivided into six sectors, each containing a different object. The monkey had to fixate one object, press a key and then release it, reach for and grasp the object, pull it, hold it, and finally release it. The different objects were presented to the monkey in random order (8 repetitions for each object). Two sets of six geometric objects eliciting different grip types were used (i.e., an original and a special set, composed of objects differing in size and shape; see Table 1 and [21]).

Table 1 The objects of the original and special set and the grips used by the monkeys during grasping [21, 25]

The dataset used in this paper consisted of 46 F5 neurons that were classified as purely motor grasping neurons. The activity of these neurons was not related to individual finger movements, but to the grasping action as a whole [21]. Thirty-six neurons were tested with the objects of the original turntable and 10 with the objects of the special turntable. The properties of these neurons have been analyzed in [21], whereas the results of grip classification during different portions of the movement phase have been reported in [25].

Decoding algorithm

In order to investigate whether a reliable grip classification can be obtained during the object presentation and movement phases, the normalized firing rate (nFR) has been extracted from F5 motor units during a period extending from 1800 ms before to 200 ms after the key release event (i.e., the start of the movement phase). In particular, the nFR was extracted for each unit of the F5 population and in each trial using analysis windows of varying duration (bin widths from 25 to 200 ms) that progressively slid over the reference period with a moving step of 10 ms. Normalization was done with respect to the maximum firing rate across all trials. The onset of the F5 neural population response (tonset) has been detected using a threshold algorithm [26]: the onset corresponded to the moment when the F5 population nFR in an analysis window exceeded the baseline mean by more than 2.5 standard deviations, with the baseline calculated over the first 400 ms of the object presentation phase (from −1800 to −1400 ms). The sub-threshold activity before key release (object presentation) was considered as baseline. The baseline and the above-threshold activity recorded during execution of the different grips were labeled and used as examples to train the classifier or to test its generalization skills. The nFRs of the different neurons have been classified using SVMs (ν-SVMs with a radial basis kernel function) by means of the open-source library LIBSVM [27]. Training and cross-validation have been done by splitting the data by trials and using a random selection of 25% of the trials for training the classifier and the remaining 75% of the trials for testing. Two different classification methods have been used: i) direct discrimination of 5/7 classes (baseline plus four/six grip types); and ii) hierarchical discrimination (first baseline versus above-threshold activity, and then selection among the four/six grip types). In the case of the four-grip classification, the first three objects of the original set (i.e., cube, sphere, and cone) as well as the first three objects of the special set (sphere in groove, large cylinder in container, and small sphere) were clustered together because the grips used for their prehension share common features (i.e., side grip or thumb/finger opposition) [21, 25]. The accuracy of the object classification has been assessed using the recognition ratio (RR), defined as the proportion of grips correctly identified with respect to those classified.
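
As an illustration of this pipeline, the following is a minimal Python sketch, not the authors' code: it extracts sliding-window firing rates, normalizes them, detects the population response onset with a threshold rule, labels windows as baseline or grip, and trains a ν-SVM with an RBF kernel using scikit-learn's NuSVC (which wraps LIBSVM). The trial data layout, the helper names, and parameter values such as ν = 0.1 are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import NuSVC  # nu-SVM with an RBF kernel, built on top of LIBSVM

T_START, T_END, STEP_MS = -1800, 200, 10   # analysis period (ms, relative to key release) and sliding step

def extract_rates(spike_times_ms, bin_ms):
    """Firing rate (spikes/s) of one unit in sliding windows over [T_START, T_END]."""
    starts = np.arange(T_START, T_END - bin_ms + 1, STEP_MS)
    counts = np.array([np.sum((spike_times_ms >= s) & (spike_times_ms < s + bin_ms)) for s in starts])
    return starts, counts / (bin_ms / 1000.0)

def detect_onset(pop_nfr, starts, sd_factor=2.5):
    """Population response onset: first window whose nFR exceeds the baseline mean by
    sd_factor standard deviations (baseline: first 400 ms of object presentation)."""
    base = pop_nfr[(starts >= -1800) & (starts < -1400)]
    above = np.where(pop_nfr > base.mean() + sd_factor * base.std())[0]
    return starts[above[0]] if above.size else None

def build_dataset(trials, bin_ms):
    """trials: list of dicts {'grip': 1..6, 'spikes': list of per-unit spike-time arrays}.
    Returns one sample per (trial, window); features are the nFRs of all units."""
    rates, starts = [], None
    for tr in trials:
        per_unit = [extract_rates(np.asarray(sp), bin_ms) for sp in tr['spikes']]
        starts = per_unit[0][0]
        rates.append(np.stack([r for _, r in per_unit]))          # units x windows
    unit_max = np.max(np.stack(rates), axis=(0, 2)) + 1e-9        # per-unit maximum over all trials
    X, y = [], []
    for tr, r in zip(trials, rates):
        nfr = r / unit_max[:, None]                               # normalized firing rate
        onset = detect_onset(nfr.mean(axis=0), starts)
        for w, s in enumerate(starts):
            label = 0 if onset is None or s < onset else tr['grip']   # 0 = baseline class
            X.append(nfr[:, w]); y.append(label)
    return np.array(X), np.array(y)

def train_and_test(trials, bin_ms=100, train_frac=0.25, seed=0):
    """Random 25%/75% trial split, nu-SVM training, and recognition ratio on the test set.
    Note: for brevity, normalization is computed separately within each split here."""
    idx = np.random.default_rng(seed).permutation(len(trials))
    n_train = max(1, int(train_frac * len(trials)))
    X_tr, y_tr = build_dataset([trials[i] for i in idx[:n_train]], bin_ms)
    X_te, y_te = build_dataset([trials[i] for i in idx[n_train:]], bin_ms)
    clf = NuSVC(nu=0.1, kernel='rbf', gamma='scale').fit(X_tr, y_tr)
    rr = np.mean(clf.predict(X_te) == y_te)                       # correctly identified / classified
    return clf, rr
```

Under these assumptions, a call such as `train_and_test(trials, bin_ms=100)` would return the trained classifier and the recognition ratio for a 100 ms window.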

Statistical analysis

A Friedman test (p ≤ 0.01) has been used to compare the results obtained from (i) the neurons tested with the two sets of objects (original vs special) and (ii) the two classification schemes (direct vs hierarchical discrimination). Moreover, a Kruskal-Wallis test (p ≤ 0.01) has been used to verify the influence of the different bin widths on the classification accuracy.
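
For illustration, these tests can be run with SciPy as sketched below; the RR arrays are random placeholders standing in for the measured recognition ratios per condition and bin width, and the condition names are hypothetical.

```python
import numpy as np
from scipy.stats import friedmanchisquare, kruskal

bin_widths = [25, 50, 75, 100, 125, 150, 175, 200]
rng = np.random.default_rng(0)
rr = {cond: 90 + 8 * rng.random(len(bin_widths))        # placeholder RR values (%)
      for cond in ("direct_original", "direct_special", "hier_original", "hier_special")}

# (i)-(ii): Friedman test on related samples (conditions paired across the same bin widths)
stat, p = friedmanchisquare(*rr.values())
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.4f} (significance threshold p <= 0.01)")

# influence of bin width: Kruskal-Wallis test, one group of RR values per bin width
groups = [np.array([rr[c][i] for c in rr]) for i in range(len(bin_widths))]
stat, p = kruskal(*groups)
print(f"Kruskal-Wallis: H = {stat:.2f}, p = {p:.4f}")
```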

Results

In Figure 1, the mean nFR of the F5 motor neuron population is shown for each object of the original and special sets. The response has been plotted from 1800 ms before to 1000 ms after the onset of the movement (the key release event, corresponding to 0 ms). The mean tonset and the mean end of the movement (tmov) are marked by red and green vertical lines, respectively.

Figure 1

The mean normalized firing rate (nFR) ± standard deviation, during grasping of different objects, of the F5 neurons tested with the original (left) and the special (right) set of objects (bin width = 25 ms). t = 0 corresponds to the key release event (start of the movement phase). The mean tonset and tmov for each object are indicated by red and green vertical lines.

In the case of the original set of objects, the onsets of the neural population response, calculated with a threshold method (see Materials and methods), preceded the start of the movement, whereas, in the case of the special set of objects, the onsets either preceded or followed the start of the movement. tmov was 334 ± 113 ms for the original set of objects and 407 ± 192 ms for the special set. In both cases, the prediction of grip type was accomplished within the first 200 ms of the movement phase, well before its end.

The results of the classification as a function of the window width (from 25 to 200 ms) are given in Figure 2 for all the different classification analyses (four or six grips, direct or hierarchical, original or special set).

Figure 2

Performance of the SVM classifier as a function of the number of grips to be recognized (RR = recognition ratio). Left panel: original set; right panel: special set.

Window widths between 75 and 150 ms seem to be sufficient to obtain the highest and most stable RR values. More specifically, with the original set of objects and a window width of 100 ms, RR values were 99.11 ± 0.52% (direct) and 98.70 ± 1.16% (hierarchical) for the classification of 4 grips, and 96.62 ± 1.16% (direct) and 95.74 ± 1.32% (hierarchical) for the classification of 6 grips. With the special set of objects and a window width of 150 ms, RR values were 97.79 ± 1.01% (direct) and 97.30 ± 1.17% (hierarchical) for the classification of 4 grips, and 97.18 ± 0.95% (direct) and 96.51 ± 1.09% (hierarchical) for the classification of 6 grips.
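
Under the same assumptions as the decoding sketch above, the sweep over bin widths summarized in Figure 2 could be reproduced along the following lines, reusing the hypothetical train_and_test() helper and the trials list from that sketch.

```python
import numpy as np

summary = {}
for bin_ms in range(25, 201, 25):
    # repeat the random 25%/75% trial split to estimate mean and SD of the RR
    rrs = [train_and_test(trials, bin_ms=bin_ms, seed=s)[1] for s in range(10)]
    summary[bin_ms] = (100 * np.mean(rrs), 100 * np.std(rrs))

for bin_ms, (mean_rr, sd_rr) in sorted(summary.items()):
    print(f"{bin_ms:3d} ms: RR = {mean_rr:5.2f} +/- {sd_rr:4.2f} %")
```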

The performance of the two populations of neurons was significantly different (p < 0.01) and this is likely due to the number of neurons belonging to each population (i.e., 36 and 10 neurons tested with the original and the special set of objects, respectively). Nevertheless, the differences between the mean accuracy obtained from the neurons tested with the original and special set of objects were limited (direct classification of 4 and 6 grips: 3.14% and 2.31%, respectively; hierarchical classification of 4 and 6 grips: 4.03% and 2.88%, respectively). These differences decreased below 2% using bins greater than 150 ms. Concerning the classification schemes, even if the direct approach was significantly better than the hierarchical one (p < 0.01), the differences between the two schemes were less than 2% (1% using window width greater than 125 ms). Finally, window length influenced significantly the accuracy (p < 0.01). The use of small bins (25 ms) resulted in deteriorated performance (i.e., differences worse than 30% as compared to the mean accuracy obtained with the use of 100 ms bins) whereas the use of bins larger than 150 ms did not result in any significant improvement (i.e., 0.34% and 0.72% differences in the mean accuracy between bins of 150 and 200 ms with the original and the special set of objects, respectively).

An example of the classification accuracy during the direct discrimination of 5/7 classes obtained with a 100 ms window width is given in Figure 3.

Figure 3

Direct discrimination of 5/7 classes simultaneously (baseline plus four/six grip types) using a window width of 100 ms. Grip numbers according to Table 1, bl = baseline. X axis: actual grips (sorted). Y axis: grips predicted by the classifier. Correct classification results are shown as superimposition between the actual grip (black ovals) and predicted grip (red stripes). Classification errors are shown as isolated red crosses.

Actual grips are represented by black ovals, whereas predicted grips are represented by red markers. Correct classifications appear as a superimposition of the actual and predicted grips (black ovals with red stripes). Isolated red crosses represent errors made by the classifier and are plotted at the level of the predicted class. Most of the errors occur during the classification of 6 grips. In the case of the original set, the classifier assigns part of the features extracted during the grasping of objects 1 and 3 to grip 2, confirming that objects 1, 2, and 3 are grasped in a similar way [21]. For the special set of objects, a similar behavior occurs for objects 1 and 2 and for objects 5 and 6 (hook grip) [21].
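
As a numeric counterpart to this kind of plot, a row-normalized confusion matrix can summarize how often each actual class is assigned to each predicted class. The sketch below uses randomly generated placeholder labels; in practice, y_true and y_pred would be the test labels and the classifier outputs from a pipeline like the one sketched in the Decoding algorithm section.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
y_true = rng.integers(0, 7, size=1000)                              # 0 = baseline, 1..6 = grips (placeholders)
y_pred = np.where(rng.random(1000) < 0.96, y_true, rng.integers(0, 7, size=1000))

labels = list(range(7))
cm = confusion_matrix(y_true, y_pred, labels=labels)
cm_pct = 100 * cm / cm.sum(axis=1, keepdims=True)                   # percentage of each actual class

print("actual\\pred " + " ".join(f"{l:>6}" for l in labels))
for l, row in zip(labels, cm_pct):
    print(f"{l:>11} " + " ".join(f"{v:6.1f}" for v in row))
```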

Discussion

The extraction of grip types (e.g., precision grip, finger prehension, whole-hand prehension) from F5 could be a very attractive solution to be applied in a complementary way to M1-based ICNPs. In fact, if the goal of the ICNP is to control both reaching and manipulation with a dexterous robotic arm-hand system, signals recorded from M1 can be used to decode information about the reaching phase and grasp timing (as already shown by the very interesting results achieved so far [13, 14]), while F5 signals can be used for the detection of the desired grasping task (see Figure 4).

Figure 4

A possible approach for the control of ICNP based on the combination of information about reaching and timing from M1 together with information about the grip type from F5. Black circle from [13], red circle from [21], green circle from [24], and yellow circle from [28].

Recent papers [22, 23] demonstrated the possibility of reliably distinguishing between precision and power grips, whereas in [25] the decoding of 4–6 grips during the reaching phase was analyzed in different normalized windows (e.g., from 25% to 100% of the reaching phase).

In this paper, an SVM classifier has been used to continuously predict different grips (4 and 6) from the activity of F5 motor neurons recorded during a reach-to-grasp task. This classifier was already used in a previous paper from our group for the decoding of different grips [25]. However, in the current paper a population of purely motor neurons was analyzed, whereas in [25] a population of motor and visuomotor units was employed. Secondly, in the present paper we investigated the effect of using different time windows, while in the previous work a single fixed window was used. As shown in Figure 1, the activity of F5 neurons precedes or is contemporaneous with the onset of the movement. During the hand transport phase and using a continuous classification scheme, it is possible to predict four/six grips with high RR using firing rate activity calculated over bins of 75 ms (for the original set of objects) and 150 ms (for the special set of objects). These results seem to be compatible with a real-time control of manipulation tasks (i.e., reaching and grasping) performed with ICNPs. In fact, the delay introduced by the grip decoding is similar to the maximum values proposed for an EMG-based control of prostheses (e.g., 100–125 ms [29]). Moreover, the use of F5 visuomotor neurons, which start discharging during object presentation before the motor-related discharge, may allow the correct discrimination among grips even earlier than reported in the present study [21, 25].
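
To make the latency argument concrete, the sketch below shows one possible way a continuous decoder could run online: with a 100 ms window updated every 10 ms, a grip decision becomes available roughly one window width after the neural response rises above baseline. The spike-buffer interface, the trained classifier clf, and the per-unit maxima are assumptions layered on top of an offline analysis like the one sketched above; they are not part of the original study.

```python
import time
import numpy as np

BIN_MS, STEP_MS = 100, 10

def decode_grip_online(read_spike_counts, clf, unit_max):
    """read_spike_counts(): per-unit spike counts over the most recent BIN_MS (hypothetical interface)."""
    while True:
        rates = read_spike_counts() / (BIN_MS / 1000.0)       # spikes/s in the current window
        nfr = rates / unit_max                                # normalize as in the offline analysis
        grip = int(clf.predict(nfr.reshape(1, -1))[0])        # 0 = baseline, 1..6 = grip type
        if grip != 0:
            return grip                                       # hand the selected grip to the prosthesis controller
        time.sleep(STEP_MS / 1000.0)                          # wait for the next analysis window
```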

It must be taken into account that the decoding algorithm has been tested offline on single-cell recordings. Real-time ICNPs based on chronic multi-electrode arrays may perform worse, even if this limitation could be reduced thanks to the progress of micro- and nanotechnologies [4] and to the learning-induced tuning of the cells during real-time experiments [30]. Another issue is the limited number of decoded grips. Nevertheless, four or six grips should ensure a good grasping dexterity and a significant increase in the number of activities of daily living that can be carried out.

In conclusion, an SVM-based algorithm has been used for the continuous decoding of grip types from F5 motor neurons during the execution of reach-to-grasp tasks. The results show that four/six grip types can be extracted with a classification accuracy higher than 96% using window widths of 75–150 ms. These results introduce new and promising scenarios for the development of ICNPs. In fact, the possibility not only of controlling computer cursors and robotic devices but also of selecting actions or grips could represent a real improvement in functionality for neuroprosthetic devices.

Abbreviations

ICNPs:

Invasive cortical neuroprostheses

M1:

Primary motor cortex

nFR:

Normalized firing rate

RR:

Recognition ratio

SVMs:

Support vector machines.

References

  1. Micera S, Carpaneto J, Raspopovic S: Control of hand prostheses using peripheral information. IEEE Rev Biomed Eng 2010, 3: 48-68.

  2. Hatsopoulos NG, Donoghue JP: The science of neural interface systems. Annu Rev Neurosci 2009, 32: 249-266. 10.1146/annurev.neuro.051508.135241

  3. Ohnishi K, Weir RF, Kuiken TA: Neural machine interfaces for controlling multifunctional powered upper-limb prostheses. Expert Rev Med Devices 2007, 4: 43-53. 10.1586/17434440.4.1.43

  4. Grill WM, Norman SE, Bellamkonda RV: Implanted neural interfaces: biochallenges and engineered solutions. Annu Rev Biomed Eng 2009, 11: 1-24. 10.1146/annurev-bioeng-061008-124927

  5. Navarro X, Krueger TB, Lago N, Micera S, Stieglitz T, Dario P: A critical review of interfaces with the peripheral nervous system for the control of neuroprostheses and hybrid bionic systems. J Peripher Nerv Syst 2005, 10: 229-258. 10.1111/j.1085-9489.2005.10303.x

  6. Dhillon GS, Horch KW: Direct neural sensory feedback and control of a prosthetic arm. IEEE Trans Neural Syst Rehabil Eng 2005, 13: 468-472. 10.1109/TNSRE.2005.856072

  7. Micera S, Rossini PM, Rigosa J, Citi L, Carpaneto J, Raspopovic S, Tombini M, Cipriani C, Assenza G, Carrozza MC, et al.: Decoding of grasping information from neural signals recorded using peripheral intrafascicular interfaces. J Neuroeng Rehabil 2011, 8: 53. 10.1186/1743-0003-8-53

  8. Kim SP, Simeral JD, Hochberg LR, Donoghue JP, Friehs GM, Black MJ: Point-and-click cursor control with an intracortical neural interface system by humans with tetraplegia. IEEE Trans Neural Syst Rehabil Eng 2011, 19: 193-203.

  9. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A, Chen D, Penn RD, Donoghue JP: Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 2006, 442: 164-171. 10.1038/nature04970

  10. O’Doherty JE, Lebedev MA, Ifft PJ, Zhuang KZ, Shokur S, Bleuler H, Nicolelis MA: Active tactile exploration using a brain-machine-brain interface. Nature 2011, 479: 228-231. 10.1038/nature10489

  11. Fetz EE: Volitional control of neural activity: implications for brain-computer interfaces. J Physiol 2007, 579: 571-579. 10.1113/jphysiol.2006.127142

  12. Scherberger H: Neural control of motor prostheses. Curr Opin Neurobiol 2009, 19: 629-633. 10.1016/j.conb.2009.10.008

  13. Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB: Cortical control of a prosthetic arm for self-feeding. Nature 2008, 453: 1098-1101. 10.1038/nature06996

  14. Shenoy KV, Kaufman MT, Sahani M, Churchland MM: A dynamical systems view of motor preparation: implications for neural prosthetic system design. Prog Brain Res 2011, 192: 33-58.

  15. Hochberg LR, Bacher D, Jarosiewicz B, Masse NY, Simeral JD, Vogel J, Haddadin S, Liu J, Cash SS, van der Smagt P, Donoghue JP: Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 2012, 485: 372-375. 10.1038/nature11076

  16. Moritz CT, Perlmutter SI, Fetz EE: Direct control of paralysed muscles by cortical neurons. Nature 2008, 456: 639-642. 10.1038/nature07418

  17. Ethier C, Oby ER, Bauman MJ, Miller LE: Restoration of grasp following paralysis through brain-controlled stimulation of muscles. Nature 2012, 485: 368-371. 10.1038/nature10987

  18. Saleh M, Takahashi K, Amit Y, Hatsopoulos NG: Encoding of coordinated grasp trajectories in primary motor cortex. J Neurosci 2010, 30: 17079-17090. 10.1523/JNEUROSCI.2558-10.2010

  19. Vargas-Irwin CE, Shakhnarovich G, Yadollahpour P, Mislow JM, Black MJ, Donoghue JP: Decoding complete reach and grasp actions from local primary motor cortex populations. J Neurosci 2010, 30: 9659-9669. 10.1523/JNEUROSCI.5443-09.2010

  20. Rizzolatti G, Camarda R, Fogassi L, Gentilucci M, Luppino G, Matelli M: Functional organization of inferior area 6 in the macaque monkey. II. Area F5 and the control of distal movements. Exp Brain Res 1988, 71: 491-507. 10.1007/BF00248742

  21. Raos V, Umiltà MA, Murata A, Fogassi L, Gallese V: Functional properties of grasping-related neurons in the ventral premotor area F5 of the macaque monkey. J Neurophysiol 2006, 95: 709-729.

  22. Townsend BR, Subasi E, Scherberger H: Grasp movement decoding from premotor and parietal cortex. J Neurosci 2011, 31: 14386-14398. 10.1523/JNEUROSCI.2451-11.2011

  23. Pistohl T, Schulze-Bonhage A, Aertsen A, Mehring C, Ball T: Decoding natural grasp types from human ECoG. NeuroImage 2011, 59: 248-260.

  24. Spinks RL, Kraskov A, Brochier T, Umilta MA, Lemon RN: Selectivity for grasp in local field potential and single neuron activity recorded simultaneously from M1 and F5 in the awake macaque monkey. J Neurosci 2008, 28: 10961-10971. 10.1523/JNEUROSCI.1956-08.2008

  25. Carpaneto J, Umilta MA, Fogassi L, Murata A, Gallese V, Micera S, Raos V: Decoding the activity of grasping neurons recorded from the ventral premotor area F5 of the macaque monkey. Neuroscience 2011, 188: 80-94.

  26. Hodges PW, Bui BH: A comparison of computer-based methods for the determination of onset of muscle contraction using electromyography. Electroencephalogr Clin Neurophysiol 1996, 101: 511-519.

  27. Chang C-C, Lin C-J: LIBSVM: A library for support vector machines. ACM Trans Intell Syst Technol 2011, 2: 1-27.

  28. Rizzolatti G, Luppino G, Matelli M: The organization of the cortical motor system: new concepts. Electroencephalogr Clin Neurophysiol 1998, 106: 283-296. 10.1016/S0013-4694(98)00022-4

  29. Farrell TR, Weir RF: The optimal controller delay for myoelectric prostheses. IEEE Trans Neural Syst Rehabil Eng 2007, 15: 111-118.

  30. Ganguly K, Dimitrov DF, Wallis JD, Carmena JM: Reversible large-scale modification of cortical networks during neuroprosthetic control. Nat Neurosci 2011, 14: 662-667. 10.1038/nn.2797

Acknowledgements

This work was partially supported by the EU within the NEUROBOTICS Integrated Project (IST-FET Project 2003–001917: The fusion of NEUROscience and roBOTICS).

Author information

Corresponding author

Correspondence to Jacopo Carpaneto.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JC and SM developed the decoding algorithm. VR, MAU, LF, AM, and VG designed the experimental protocol and performed the animal experiments. All authors wrote, read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Carpaneto, J., Raos, V., Umiltà, M.A. et al. Continuous decoding of grasping tasks for a prospective implantable cortical neuroprosthesis. J NeuroEngineering Rehabil 9, 84 (2012). https://doi.org/10.1186/1743-0003-9-84
