
Real-Time Evaluation of a Noninvasive Neuroprosthetic Interface for Control of Reach

Elaine A. Corbett, Member, IEEE, Konrad P. Körding, and Eric J. Perreault, Member, IEEE

Abstract—Injuries of the cervical spinal cord can interrupt the neural pathways controlling the muscles of the arm, resulting in complete or partial paralysis. For individuals unable to reach due to high-level injuries, neuroprostheses can restore some of the lost function. Natural, multidimensional control of neuroprosthetic devices for reaching remains a challenge. Electromyograms (EMGs) from muscles that remain under voluntary control can be used to communicate intended reach trajectories, but when the number of available muscles is limited, control can be difficult and unintuitive. We combined shoulder EMGs with target estimates obtained from gaze. Natural gaze data were integrated with EMG during closed-loop robotic control of the arm, using a probabilistic mixture model. We tested the approach with two different sets of EMGs, as might be available to subjects with C4- and C5-level spinal cord injuries. Incorporating gaze greatly improved control of reaching, particularly when there were few EMG signals. We found that subjects naturally adapted their eye-movement precision as we varied the set of available EMGs, attaining accurate performance in both tested conditions. The system performs a near-optimal combination of both physiological signals, making control more intuitive and allowing a natural trajectory that reduces the burden on the user.

Index Terms—Electromyography, eye tracking, Kalman filter, mixture model, motor neuroprostheses.

I. INTRODUCTION

CERVICAL spinal cord injuries (SCIs) can cause paralysis of muscles in the arm, complicating reaching and everyday tasks such as feeding and grooming. Prosthetic devices can restore coordinated movement and greatly improve quality of life using robotic [1] or functional electrical stimulation (FES) approaches [2], [3]. To control either system, intent must first be determined from physiological signals that remain under voluntary control, often by using a neural-machine interface (NMI).

The available control signals vary widely among people with

tetraplegia. Intracortical recordings can provide access to plentiful control signals, allowing effective control of robotic arms

Manuscript received August 22, 2012; revised January 05, 2013; accepted February 17, 2013. Date of publication March 15, 2013; date of current version July 02, 2013. This work was supported by the National Science Foundation under Grant 0939963.

E. A. Corbett is with the Department of Biomedical Engineering, Northwestern University, Evanston, IL 60208 USA (e-mail: [email protected]).

K. P. Kording is with the Department of Physiology and the Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL 60611 USA (e-mail: [email protected]).

E. J. Perreault is with the Department of Biomedical Engineering and the Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL 60611 USA (e-mail: [email protected]).

Color versions of some of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TNSRE.2013.2251664

[1], [4], FES [5]–[7] and computer interfaces [8]. Currently, however, only a small number of patients have been able to make use of this signal source, partly due to its invasive nature [1], [8]. Electroencephalography (EEG) is a noninvasive means to access cortical signals [9] but has not seen widespread use in the control of neuroprostheses, possibly due to the need for the user to regularly don and doff a set of recording electrodes. Previous FES systems for reaching have been controlled through switching mechanisms activated by body signals unrelated to the reach, such as contralateral shoulder movement [10], respiration [11] or voice [12], and have allowed a restricted set of movements through preprogrammed patterns of activation. Electromyograms (EMGs) from voluntarily controlled muscles are a viable signal source for control; however, control of reaching movements is extremely difficult when the required signals are unintuitive and dissociated from the reach [13]. This is especially true for high-level SCIs, in which very few EMG signals may be available for use in a neuroprosthetic interface. Hence, there is a clear need to make prosthetic control more intuitive.

Eye movements can provide a great deal of information about a person's intentions and are rarely paralyzed. For this reason they have been widely employed in computer interfaces, mostly for cursor pointing and selection tasks [14]–[16]. The recent development of lighter and cheaper eye-tracking systems [17], which can, by monitoring both eyes, produce 3-D estimates of the point of regard, has led to some interest for neuroprosthetic interfaces [18]. However, gaze can be a problematic input signal, as it can be difficult to tell which saccades are intended as commands [19]. A system based on gaze alone might require restrictions on the user's normal eye and head behavior to avoid eliciting unintended movements. While it may be acceptable to impose strict behavioral rules when performing computer-based tasks, a motor neuroprosthesis must be usable in complex environments and should not restrict the remaining healthy functions of the user.

As the brain encodes both target and trajectory information, NMI researchers have investigated incorporating both types of neural signals in decoding models [20]–[25]. Bayesian decoding approaches combine the predictions of an observation model, which is the probabilistic mapping between the state and the physiological or other signals under the user's control, and a trajectory model. The trajectory model describes the probabilistic evolution of the state over time, defining our prior beliefs about the nature of the movement. We previously presented an algorithm drawing on these approaches to incorporate target estimates from gaze into the trajectory model. With knowledge of the target the trajectory model provides richer prior knowledge



about the intended reach, possibly taking some of the burden from the user. We used a mixture model to account for the fact that there may be multiple potential targets, since people may look at a number of locations before a reach [26]. The neural data are thus used to resolve ambiguity in the target estimates, and we have shown that the approach is effective in the face of high target uncertainty [27]. This combination of sources may enable the use of eye tracking in a safe interface generating reaching movements, where the user's intent can be verified through their neural signals.

Evaluations of decoding algorithms are often performed without testing how a user would interact with the system. This generally involves recording the neural signals as natural arm movements are made, and then performing offline evaluations of how well those movements can be reconstructed. This is an important step in the development of a decoder, allowing rapid comparisons of multiple conditions on the same dataset. However, a number of groups have shown that offline accuracy does not necessarily predict online performance [28]–[30]. When users can interact with the system in closed loop they can compensate for some, though clearly not all, decoder errors. An offline evaluation of our algorithm, combining trajectory information from shoulder EMGs and target information from gaze, produced promising results, suggesting that gaze may be an effective way to improve trajectory estimates and reduce the burden on the user [26]. The algorithm improved decoding relative to one using EMG alone; however, it was unclear whether the results would hold in closed loop. If subjects could sufficiently compensate for decoder errors using their EMG, the additional complexity and expense of adding gaze may not be justified. While similar state-space models had been tested online by other groups [22], [31], the mixture model, a potentially nonstationary approach incorporating eye tracking, had not. It was unknown how subjects would respond, as the relative difficulty of using the interfaces could not be ascertained from an offline analysis. An online evaluation was clearly necessary to account for user interaction and to test the viability of the approach in a realistic setting.

In this work we developed a neuroprosthetic user interface combining gaze and EMG data to enable closed-loop control of reaching. Using a variety of signals, all noninvasive and accessible to individuals with SCI, subjects used the decoder to control their arm position, guided by a robot in a 3-D workspace. We tested shoulder EMG control both with and without incorporating gaze, allowing us to evaluate whether the improvements seen in offline analyses translated to online control. We found that, even with EMG alone, with the possibility of error correction the user could move far better than predicted from offline behavior. Nonetheless, incorporating the gaze data dramatically improved control, in particular when the EMG data were low dimensional. Parts of this work were reported in a conference proceeding [32].

II. METHODS

By combining target information from gaze data with continuous EMG signals we aimed to generate a natural user interface for robot-assisted reaching that would reduce the burden on the user. To determine its effectiveness the decoder was compared

Fig. 1. Graphical representation of algorithms: (a) Kalman filter (driven by EMG alone); (b) model with single target estimate (KFT); and (c) mixture of KFTs, using multiple potential targets.

to one with only EMG inputs. Furthermore, to evaluate the paradigm in the best-case scenario, we also tested the decoder with full knowledge of the correct target location. While some uncertainty about the target location is almost inevitable in real neuroprosthesis use, it is instructive to compare the performance with eye tracking to that with perfect target information as well as that with none. These models have previously been evaluated in an offline analysis, where the algorithm development was described in detail [26]. The algorithm is summarized again in the following for completeness.

A. Decoding Algorithms

For continuous trajectory decoding from neural signals, the Kalman filter (KF) [see KF, Fig. 1(a)] is a popular state-estimation algorithm [33]. The KF performs Bayesian inference based on two linear models. The observation model defines a linear mapping with Gaussian noise between the state (hand kinematics) and the control signals that the user may voluntarily activate to control the neuroprosthetic device (observations). Similarly, the trajectory model assumes that the evolution of the state of the hand is linear with integrated Gaussian noise

$$x_{t+1} = A x_t + w_t \qquad (1)$$

where $x_t$ is the state vector at time $t$ and represents the hand kinematics, $w_t$ is the process noise with $w_t \sim \mathcal{N}(0, W)$, and $W$ is the state covariance matrix.

To create a directional trajectory model, we assume that the effect of the target on the hand kinematics is linear and does not change over the course of the reach. Under these assumptions the best estimate is achieved by adding the target position to the state space [KFT, Fig. 1(b)], thereby linearly incorporating it into the trajectory model [22], [23]

$$\begin{bmatrix} x_{t+1} \\ T \end{bmatrix} = \begin{bmatrix} A & B \\ 0 & I \end{bmatrix} \begin{bmatrix} x_t \\ T \end{bmatrix} + \begin{bmatrix} w_t \\ 0 \end{bmatrix} \qquad (2)$$

where $T$ is the target position vector, with dimensionality equal to that of the hand position, and $B$ describes its linear effect on the kinematics.

When the target estimates were based on eye movements, we needed to account for multiple potential targets. The target information source provided a prior distribution over the possible targets, $p(T_i)$, and as the neural data $z_{1:t}$ were integrated over the course of the reach, they informed us about the likelihood of each of the possible resultant trajectories. This was realized


using a probabilistic mixture model [see mKFT, Fig. 1(c)] over each of the potential targets

$$p(x_t \mid z_{1:t}) = \sum_i \frac{p(z_{1:t} \mid T_i)\, p(T_i)}{p(z_{1:t})}\; p(x_t \mid z_{1:t}, T_i) \qquad (3)$$

The KF recursion was performed for each potential target $T_i$, and the predicted state was a weighted sum of the state estimates from each model, where the weights were proportional to the prior for the associated target, $p(T_i)$, and the likelihood of the model given that target, $p(z_{1:t} \mid T_i)$. The evidence $p(z_{1:t})$ was independent of the target and was therefore used as a scaling factor to ensure that the weights summed to 1.
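To make the weighting in (3) concrete, the following minimal sketch (illustrative Python, not the authors' implementation) assumes a helper `kf_update` that runs one KF recursion for a target-conditioned model and returns the updated state, covariance, and observation log-likelihood, and per-target filter objects holding a state, covariance, accumulated log-likelihood, and prior; all of these names are assumptions.

```python
# Minimal sketch of the mKFT weighting step (illustrative, not the authors'
# code). `kf_update` is an assumed helper performing one KF recursion for a
# target-conditioned model and returning (state, covariance, log-likelihood).
import numpy as np

def mkft_step(filters, z_t):
    """Advance each target-conditioned KF one step and mix the estimates."""
    states, logw = [], []
    for f in filters:
        f.x, f.P, loglik = kf_update(f.x, f.P, z_t, f.target)
        f.loglik += loglik          # accumulates log p(z_{1:t} | T_i)
        states.append(f.x)
        logw.append(f.loglik + np.log(f.prior))
    # Normalize prior x likelihood so the weights sum to 1; the evidence
    # p(z_{1:t}) cancels in this normalization, as noted in the text.
    logw = np.array(logw)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Predicted state: weighted sum of the per-target state estimates.
    return sum(wi * xi for wi, xi in zip(w, states)), w
```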

B. Experimental Methods

Subjects: Six able-bodied, right-arm-dominant subjects (three female, three male) participated in this study. Each subject provided informed written consent to a protocol approved by Northwestern University's Institutional Review Board. The use of able-bodied subjects allowed us to make a more thorough comparison of algorithms and signal sources than would have been possible with subjects who had sustained an SCI, due to the wide variation of abilities within that population. As we were not limited by the control signals available from our subjects, we could compare two different sets of EMGs, allowing us to assess the individual contributions of different signals. These experiments provided us with a proof of concept and valuable insight for future tests with the target population.

Robotic Neuroprosthesis: To test our approach for combining voluntary control signals with target information obtained from eye tracking, we developed a robotic system to assist reaching. This system allowed us to compare decoding approaches and signal sources in an online control setting while avoiding many of the complications involved in implementing an FES neuroprosthesis, where user intent must additionally be translated into stimulation patterns producing coordinated arm movements. The robotic system accurately positioned the arm based on the decoded kinematics, thereby isolating performance issues relevant to decoders and signal sources and providing a wider subject population on which to evaluate the approaches. Much of this could be achieved with a virtual interface; however, the use of a robotic system provided more realistic feedback to the subject, ensuring that their arm was positioned appropriately as the control signals were generated.

Each subject gripped a 3-degree-of-freedom HapticMaster

robot (Moog FCS, The Netherlands), mounted at 90° to the wall, which moved the subject's hand throughout a reaching workspace [Fig. 2(a)]. A spring-loaded stylus attached to the robot handle allowed for soft contact with targets that were displayed on two touch-screen monitors (Planar PT19, Beaverton, OR) within reach, at variable distances from the subjects. The distance between the monitors was approximately 3–7 cm in the outward (Z) direction; positioning depended on the range of motion of the subject. The monitors were both 37 cm × 30 cm, of which approximately 75% of the total area was within the robot's workspace. The subject was comfortably seated, restrained with lap and shoulder straps, and the starting position of the

Fig. 2. (a) Experimental setup and (b) muscles recorded to simulate SCI levels.

robot was directly in front of his/her chest. The goal of the task was to guide the robot to position the stylus in the center of the targets that appeared on the monitors. The velocities predicted by the decoders were sent as kinematic control signals to the robot at a rate of 60 Hz. A low-gain PID feedback on the position error was used to prevent drift and maintain position fidelity. Due to the high stiffness of the robot (20 000 N/m), the output of the decoder completely controlled the position of the hand; subjects were unable to alter the hand position by more than a few millimeters by generating voluntary forces opposed to the decoded hand trajectory.

We simulated the signals that would be available at different levels of SCI by evaluating each of the decoders (KF, KFT, and mKFT) with two sets of EMG control signals. To simulate an injury below the fourth cervical vertebra (C4) we used just the upper trapezius, and for C5 we also included the anterior, middle, and posterior heads of the deltoid [Fig. 2(b)]. The EMG signals were amplified and band-pass filtered between 10 and 1000 Hz using a Bortec AMT-8 (Bortec Biomedical Ltd., Canada), anti-alias filtered using fifth-order Bessel filters with a cutoff frequency of 500 Hz, and sampled at 2400 Hz. The monitor and HapticMaster positions were recorded at 60 Hz using an Optotrak motion analysis system (Northern Digital Inc., Canada) so that positions on the monitors could be transformed into the HapticMaster coordinate system. We recorded eye movements with an EYETRAC-6 head-mounted eye tracker (Applied Science Laboratories, Bedford, MA), whose position was also monitored with the Optotrak. The position of the eye was digitized relative to the eye tracker before its use, so that the gaze data could be projected onto the screen planes and transformed into the appropriate coordinate systems. All signals were recorded simultaneously and processed at 60 Hz, so as to generate a real-time command signal to control the robot position.
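To make this command pipeline concrete, here is a minimal sketch of such a loop. It is an illustration under stated assumptions, not the authors' controller: `decode_step`, `read_robot_pos`, and `send_robot_cmd` are hypothetical callables, and the PID gains are placeholders.

```python
# Illustrative 60-Hz command loop: integrate the decoded velocity into a
# position command and apply a low-gain PID correction on position error to
# prevent drift. All names and gains are assumptions for this sketch.
import numpy as np

def command_loop(decode_step, read_robot_pos, send_robot_cmd,
                 kp=0.5, ki=0.05, kd=0.01, dt=1.0 / 60.0, n_steps=600):
    cmd = read_robot_pos()                  # start at the current position
    integ = np.zeros(3)
    prev_err = np.zeros(3)
    for _ in range(n_steps):
        v = decode_step()                   # decoded 3-D hand velocity
        cmd = cmd + v * dt                  # velocity -> position command
        err = cmd - read_robot_pos()        # drift between command and robot
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        send_robot_cmd(cmd + kp * err + ki * integ + kd * deriv)
```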

Training the Decoders: To estimate the decoder parameters, training reaches needed to be performed while EMGs were recorded. Because we wanted control to be intuitive, it was important that the EMGs controlling the reaches corresponded as closely as possible to those a subject would naturally make when attempting to reach. However, as the target population will be unable to generate unassisted reaches, the most practical approach was to have the robot move along an ideal trajectory as the subject attempted to move along with the reach. Least-squares regression was then used to estimate the parameters from the EMG and stylus kinematic data. The state vector thus consisted of the 3-D stylus position, velocity and acceleration.
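A minimal sketch of this least-squares fit is shown below, assuming the training data have been arranged as an array of states and an array of EMG features; the array layouts and function name are illustrative, not from the paper.

```python
# Sketch of least-squares estimation of the KF parameters from training data.
# X: (T x d) stylus kinematics (position, velocity, acceleration);
# Z: (T x m) EMG features. Illustrative only; array layouts are assumptions.
import numpy as np

def fit_kf_params(X, Z):
    # Trajectory model x_{t+1} = A x_t + w_t: regress X[1:] on X[:-1].
    A = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T
    W = np.cov(X[1:].T - A @ X[:-1].T)      # process-noise covariance
    # Observation model z_t = C x_t + q_t: regress features on the state.
    C = np.linalg.lstsq(X, Z, rcond=None)[0].T
    Q = np.cov(Z.T - C @ X.T)               # observation-noise covariance
    return A, W, C, Q
```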


Fig. 3. Sample training reach: (a) kinematics, generated by (4), and (b) EMG produced by one subject assisting the reach.

In the case of the KFT parameter estimation, the final recorded position of the stylus, representing the target, was also included.

During training, 18 targets spanning the reachable area of the two monitors each appeared twice in random order. The reach began with the robot in the starting position in front of the subject's chest, and after an audible go cue, it moved towards the target automatically while EMGs were recorded. The subject held the robot handle and was instructed to gently assist the movement. After the reach was completed the robot returned to the start position. The training reaches were generated using a trajectory model that was linear in the kinematics and the target

$$\begin{bmatrix} p_{t+1} \\ T \end{bmatrix} = \begin{bmatrix} A(\alpha, \beta) & I - A(\alpha, \beta) \\ 0 & I \end{bmatrix} \begin{bmatrix} p_t \\ T \end{bmatrix} \qquad (4)$$

Here, $p_t$ was the stylus position at time $t$ and $T$ was the target; $I$ and $0$ were the identity and zero matrices, and $A(\alpha, \beta)$ governed the pull of the stylus toward the target. The parameters $\alpha$ and $\beta$ determined the velocity profile of the trajectory and were both set to 3 for these experiments. The resulting trajectories accelerated when the target was distant and decelerated when it was close, allowing slower movements as the stylus approached the target (Fig. 3).
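For intuition, the sketch below generates a target-directed reach of this flavor. Since the exact parameterization of (4) is not recoverable here, it uses an assumed second-order linear pull toward the target (gains chosen arbitrarily for a smooth profile), which likewise accelerates while the target is distant and decelerates on approach.

```python
# Illustrative generation of an "ideal" target-directed training reach.
# NOTE: this is an assumed second-order linear pull toward the target, not
# the paper's exact parameterization of (4); gains are picked for a smooth
# 60-Hz, roughly 2-s reach with a bell-shaped speed profile.
import numpy as np

def ideal_reach(p0, target, n_steps=120, dt=1.0 / 60.0, k_p=40.0, k_d=12.0):
    p = np.asarray(p0, dtype=float)
    target = np.asarray(target, dtype=float)
    v = np.zeros_like(p)
    path = [p.copy()]
    for _ in range(n_steps):
        a = k_p * (target - p) - k_d * v   # linear in kinematics and target
        v = v + a * dt                      # accelerate while far ...
        p = p + v * dt                      # ... decelerate on approach
        path.append(p.copy())
    return np.array(path)                   # (n_steps + 1) x 3 positions
```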

Two features were extracted from each 16-ms window of EMG for use as observations of the state: the root mean square (RMS) value of the window and the number of zero-crossings, a frequency-related feature that has been shown to be useful in prosthetic control [34], [35]. These features were square-root transformed to produce more Gaussian-like distributions.
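A sketch of this per-window feature computation (a 16-ms window at 2400 Hz is roughly 38 samples); the function name and layout are illustrative:

```python
# Sketch of the two per-window EMG features (a 16-ms window at 2400 Hz is
# about 38 samples), with the square-root transform applied to both.
import numpy as np

def emg_features(window):
    rms = np.sqrt(np.mean(window**2))                    # root mean square
    # Zero-crossings: sign changes between consecutive samples.
    zc = np.sum(np.signbit(window[:-1]) != np.signbit(window[1:]))
    return np.sqrt(rms), np.sqrt(zc)   # square-root-transformed (RMS, ZC)
```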

Subjects could use different strategies during the training reaches, for example, gently pushing on the handle or attempting to exert as little pressure as possible. As this could potentially affect performance, it was sometimes useful to repeat the training procedure after some practice test reaches had been performed. After the experience of online control, subjects could sometimes alter their behavior during the training reaches to improve the models. Specifically, at least ten practice control

Fig. 4. Example control reaches, kinematics and square-root-transformed RMS of EMGs: (a) decoder with C5 EMGs alone (KF) and (b) gaze combined with the single C4 EMG (mKFT).

evaluation reaches (see the next section) were performed after the training session. If either fewer than 8/10 targets were attained, or the subject felt that performance could be improved by retraining, the training procedure was repeated. Ten practice reaches were then performed with the new model and the best model was selected. If control was improving, further practice reaches were performed with the selected model until there was a learning plateau between successive sets of ten reaches (no improvement in score), or at least 8/10 targets were attained and the subject was comfortable with the model.

Decoder Control Evaluation: The decoders were evaluated with a target-acquisition task. For each trial a target randomly appeared in the reachable area of the monitors, 1 s before an auditory go cue. The goal was to place the stylus as close to the center of the target as possible. After the go cue, the reach was initiated when the square-root-transformed RMS value of any EMG channel increased above twice its level prior to the go cue. When the C4 injury level was being simulated, the contralateral upper trapezius was also recorded to allow subjects to initiate reaches where they would not normally activate the ipsilateral muscle, by shrugging their left shoulder. However, this muscle was not included as part of the decoder, as it was not involved in the natural reach. After initiation, the decoded velocity and position were used to control the robot's reach (Fig. 4).

Upon initiation of a reach, the decoder was provided with the initial state vector, including the robot's current position. When testing the KFT and mKFT, target estimates were also initialized in the state vector. In the case of the KFT, the actual location of the target center was provided. For the mKFT, the gaze data from the 1-s period prior to initiation were used to estimate three potential targets with which to initialize a corresponding mixture component. The 3-D location of the eye gaze was calculated by projecting its direction onto the monitors.


Fig. 5. Experimental protocols for (a) the mKFT experiment at either simulated injury level, and (b) the KF experiment including both simulated injury levels. Retraining was performed if subjects could not attain at least eight targets in a set of ten or were unhappy with control after the practice reaches.

The first, middle, and last gaze samples were selected as seeds, and all other samples were assigned to a group according to which of the three was closest. The means of these three groups were used to initialize three KFTs in the mixture model, and their priors were assigned proportionally to the number of samples in each group. If the subject looked at multiple positions prior to reaching, including the target, the correct target would be accounted for in one of the mixture components.
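A minimal sketch of this initialization step, assuming the gaze samples have already been projected into robot coordinates as an (n × 3) array; illustrative only:

```python
# Sketch of the target-initialization step: cluster 1 s of pre-reach gaze
# samples around the first, middle, and last samples, then form one mixture
# component per cluster (ties go to the lowest-index seed). `gaze` is an
# assumed (n x 3) array of gaze points in robot coordinates.
import numpy as np

def gaze_targets(gaze):
    seeds = gaze[[0, len(gaze) // 2, -1]]
    # Assign every sample to the nearest of the three seed points.
    dists = np.linalg.norm(gaze[:, None, :] - seeds[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    targets = np.array([gaze[labels == k].mean(axis=0) for k in range(3)])
    priors = np.array([np.mean(labels == k) for k in range(3)])
    return targets, priors   # priors proportional to samples per cluster
```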

Forty test reaches were performed for each decoding model. Each target consisted of a green circle of 1-cm radius surrounded by five rings, each 1 cm thick, of various colors. When the target was attained, its color changed to that of the ring corresponding to where the stylus touched. For a missed target, or if the reach timed out (after 10 s), the target turned red. For attaining the green circle the subject received a score of 10 points, and for the outer rings they received 9, 8, 7, 6, and 5 points. Feedback of the cumulative total of their most recent ten reaches was displayed to them to increase motivation.

Protocols: The decoders and simulated injury levels were

evaluated over three experimental sessions performed by each subject. One session tested the KF at both simulated injury levels, involving no target information whatsoever. In the other two sessions the KFT (perfect target information) was tested, followed by the mKFT (with eye tracking), at either the C4 or C5 simulated injury level. Half of the subjects were randomly assigned to perform the KF session first, while the other half performed it last. The order of simulated injury levels was also randomized across subjects, but kept consistent for each subject across algorithms.

We performed each experiment with target information at a single simulated injury level, so that the same model was used in both the KFT and mKFT parts of the experiment. The experiment began with a set of training reaches, followed by practice and retraining if necessary, as described previously [Fig. 5(a)]. Test reaches were then performed on the KFT, after which the eye tracker was placed on the subject and calibrated. As these algorithms used the same trajectory and observation models, the KFT reaches ensured that subjects were familiar with the basic model before adding the complexity of the mixture. After a few reaches to ensure that the subject was comfortable with the system, the test reaches were performed for the mKFT. In the KF experiments, two different EMG sets were used [Fig. 5(b)]. Following the training and testing of the first set of EMGs, practice reaches were performed for the second set. If the criteria for retraining were met at this point, it was performed, and the two EMG sets were tested on models derived from different training reaches.

C. Analysis

Performance was evaluated primarily on the position of the stylus at the end of the reach, relative to the target. We computed the target acquisition rate, which was the proportion of trials for which any of the rings of the target were obtained, and the distance to the target center. However, the targets were not distributed symmetrically—due to the constraints of the robot workspace there was greater variance in the vertical direction—so the results may have been biased if the performance depended on the direction of movement. Therefore, to quantify the level of control in the horizontal (X) and vertical (Y) directions separately, the proportion of target variance accounted for (VAF) was calculated in the two dimensions by normalizing the final distance to the target by the total variance of the targets. As all of the targets were on the two monitors there were only two target positions in the outward (Z) dimension, so this was not a useful metric. We used an analysis of variance (ANOVA) to look at the effect of the interaction of algorithm and simulated injury level on the performance metrics, with subject as a random effect. Tukey tests were performed for post hoc comparisons, and all statistical comparisons used a significance level of 0.05.

To consider kinematic characteristics of the trajectories, we also computed the path efficiency (PE) of the reaches. This was calculated as the ratio of the straight-line distance from the start to the end of the reach to the cumulative distance travelled

$$\mathrm{PE}\,(\%) = 100 \times \frac{\lVert p_{t_f} - p_0 \rVert}{\sum_{t=1}^{t_f} \lVert p_t - p_{t-1} \rVert} \qquad (5)$$

where $t_f$ is the final reach time. This metric did not take the accuracy of the reach into account, and therefore was only calculated for reaches where the target had been attained. For all of the reaches, we calculated the online $R^2$—a metric incorporating both the kinematic characteristics and the accuracy of the reaches. To compute this we used the method for generating training reaches to simulate an "ideal" trajectory between the start position and the target, and then evaluated the multiple $R^2$ [36] between that trajectory and the executed reach.

How precise should we expect a system that uses eye movements and EMGs at the same time to be? To a good approximation, the control signals from eye tracking and EMGs should be conditionally independent given the target. In other words, while eye movements and EMGs obviously relate to the target, eye movements will, to a first approximation, relate to the target position and not to the dynamics of the EMG or hand movement. Clearly, in closed-loop mKFT control this will not hold exactly, as the gaze estimates influence the trajectory, thereby influencing the subject's EMGs. However, if we make this assumption of conditional independence, and the generic assumption of Gaussian error distributions for the individual sources, we can use simple Bayesian methods to predict how well a combined system could theoretically do.


To estimate the expected error in the eye-tracking information we calculated the expected error across the three possible targets, weighted by their probabilities. To estimate the expected variance of the EMG-based KF (no eye-tracking data), we calculated the errors of the final KF-based estimates. Standard rules of Gaussian cue combination, which assume Gaussian, unbiased distributions, then yield that the standard deviation of the optimal combined estimator should be

$$\sigma_{\mathrm{combined}} = \sqrt{\frac{\sigma_{\mathrm{gaze}}^2\, \sigma_{\mathrm{EMG}}^2}{\sigma_{\mathrm{gaze}}^2 + \sigma_{\mathrm{EMG}}^2}} \qquad (6)$$

We then tested (using a t-test) whether the mean error for the mKFT was significantly different from this estimate; if the estimate correctly predicted mKFT performance, this would indicate that the algorithm was making optimal use of the available signals. As the errors in the gaze data were not considered in designing the models (they were trained using perfect target information), this was not a foregone conclusion.

A central question in decoding research is how online and offline performance correspond. We wanted to see if the offline accuracy of the training data predicted the performance of the models in closed loop. The offline $R^2$ of the training data was calculated by using the models to predict the training trajectories from the recorded EMG and comparing the reconstructions to the actual training trajectories. We used leave-one-out cross-validation; the prediction of each reach was generated with a model trained using the remaining reaches. This training $R^2$ was then compared to the online $R^2$ for the closed-loop reaches. The average $R^2$, both online and offline, was calculated for the KF and mKFT at C4 and C5, for each subject. As no eye tracking was available from the training data, we used perfect target information to compute the offline $R^2$, which we compared to the online $R^2$ for the gaze-based mKFT. The online KFT was not considered in this analysis, as perfect target information cannot be expected in a practical system.
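For reference, the combination rule in (6) is simple to evaluate numerically. The sketch below is purely illustrative: the helper name and the error magnitudes (chosen to be of the order reported in Section III) are assumptions, not values from the analysis.

```python
# Sketch of the optimal Gaussian cue-combination benchmark in (6).
import numpy as np

def combined_sigma(sigma_gaze, sigma_emg):
    """Standard deviation of the optimal combination of two unbiased,
    conditionally independent Gaussian estimates."""
    return np.sqrt((sigma_gaze**2 * sigma_emg**2) /
                   (sigma_gaze**2 + sigma_emg**2))

# Hypothetical example: a 1.5-cm gaze error combined with a 1.8-cm EMG-only
# error predicts roughly a 1.15-cm combined error.
print(combined_sigma(1.5, 1.8))
```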

III. RESULTS

A. Signals and Training

The extent of the required practice and retraining varied depending on the decoder and subject. Retraining was always performed if subjects could not attain at least eight targets in a set of ten reaches, or if subjects were unhappy with control after performing the practice reaches. For the target-dependent decoders, we used perfect target information at this point, and in all cases subjects immediately learned to use the decoder accurately [Fig. 6(a), (b)]. With the KF at C4, where there was only a single EMG for control, subjects never achieved the required 8/10 targets and thus retraining was always performed. However, this seldom resulted in a substantial improvement, as subjects quickly learned this simple control interface as well as possible; after brief learning, performance remained steady without attaining 8/10 targets [Fig. 6(c)]. The KF at C5 involved a more complicated mapping using four EMGs. Some subjects required many practice sets to learn the model while others adapted more quickly [Fig. 6(d)]. Retraining was performed for this model in four out of the six subjects. Through this process subjects learned to deal with each decoder, allowing us a meaningful comparison where performance had reached a plateau.

Fig. 6. Accuracy of practice reaches for an example subject with (a) C4 KFT, (b) C5 KFT, (c) C4 KF (where retraining was performed after two practice sets of ten reaches), and (d) C5 KF.

Fig. 7. Performance measures for target accuracy, mean and standard errors of (a) acquisition rate, (b) performance feedback displayed to subjects, and target VAF in (c) X and (d) Y.

B. Target Accuracy

The models incorporating target information unsurprisingly produced reaches that were precise at target acquisition. When the decoder had perfect target information (KFT) the reaches were highly accurate, regardless of the amount of EMG available; the target acquisition rate, scores, and target variance accounted for were all close to ideal and not statistically different across simulated injury levels (all $p > 0.05$, ANOVA; Fig. 7). Perfect target information is unlikely to be available in a practical neuroprosthesis implementation; however, its performance here represents a best-case scenario for the system. Encouragingly, the gaze-based model (mKFT) was almost as accurate, and the differences were not statistically significant (all $p > 0.05$, ANOVA). The mKFT was also unaffected by the amount of available EMG information (all $p > 0.05$, ANOVA). In the presence of high-quality target information both the KFT and mKFT do a very good job of getting the stylus to the right target.

Without access to the target position, the KF performance was more dependent on the quantity of available EMG. At a simulated C5 injury, where four muscles were available, the KF was not significantly different from the mKFT in target acquisition


Fig. 8. Quantification of the average target error for EMG alone (KF); EMG + perfect target information (KFT); EMG + gaze (mKFT); errors of initial target estimates from gaze data in the mKFT case; and the predicted error from a Bayesian combination of the EMG and gaze data.

rate, score, and target VAF in X and Y (all $p > 0.05$). However, the target acquisition rate went from 94% at C5 to 43% at C4, when only a single muscle was available ($p < 0.05$). It can be clearly seen from the target VAF that, when using the KF at C4, subjects had no control in the X direction (VAF of 0%) but were able to effectively control Y (VAF of 99%). This stands to reason, since the upper trapezius muscle is primarily activated during upward reaching movements. The KF was significantly more accurate at C5 than at C4 according to all performance metrics (all $p < 0.05$), except for the target VAF in Y. When an adequate number of muscles is available the KF can do a good job at target acquisition, but performance becomes very poor when only one muscle is available.

These results were also reflected in the target errors (Fig. 8).

The KFT average error was below 1 cm, while the mKFT averaged below 1.5 cm for both simulated injury levels. Errors of 1 cm or less, the radius of the center target, represented perfect task performance. Because this measure was more sensitive than the score or target acquisition rate, we found a significant difference between the KFT and mKFT at C4 ($p < 0.05$), but not at C5 ($p > 0.05$). The KF at C5 had an average error of 1.8 cm, which increased to 6.5 cm at C4. However, this error at C4 was again limited by the target distribution in the workspace since, as evidenced by the target VAF, subjects were unable to move horizontally and all of their control was in the vertical (Y) direction. Clearly, a single EMG is insufficient to decode intent in our setting.

Eye tracking enabled accurate reaching in all areas of the workspace, whereas the KF errors were more directionally dependent. This is illustrated by the distribution of final target errors based on the target position on the monitors (Fig. 9). The errors were evenly distributed for the models with eye tracking; average errors from all areas of the screen were less than 2 cm. By splitting the two monitors evenly into left, middle, and right sections, we found that the only statistically significant differences between sections for the models with eye tracking were that at C4 the error in the middle of the top screen was 0.4 cm higher than that on the right of the bottom screen ($p < 0.05$, one-way ANOVA with subject as a random effect and Tukey post hoc), and at C5 the right side of the top screen was 0.9 cm higher than the left side of the bottom screen ($p < 0.05$). A problem

Fig. 9. Average final errors as a function of target location on the pair of monitors for: (a) C4 mKFT, (b) C5 mKFT, (c) C4 KF, and (d) C5 KF.

area was also evident for some subjects using the KF at C5, resulting in an average error on the left of the bottom screen of 3.3 cm, which was significantly higher than all other areas of the screens (by an average of 1.8 cm, all $p < 0.05$), except the right side of the top screen, which was not significantly different from any other region ($p > 0.05$). It was clear, however, that only targets in the center of the monitors were attainable at C4 using the KF; average errors at the edges of the screens were 8.7 cm, while in the center they were 3.3 cm ($p < 0.05$). As noted above, subjects could control the vertical direction but were unable to move horizontally.

C. Trajectory Kinematic Characteristics

The decoders with target information produced straighter paths than those with EMG alone. However, as only 40% of targets were attained for the KF at C4, the majority of those reaches were excluded from this analysis. This selection, if anything, would make the KF look better than it actually performed. We found the path efficiency of the KF to be lower than for the models with target information [$p < 0.05$ for both C4 and C5, Fig. 10(a)], while the mKFT and KFT showed no significant difference (both $p > 0.05$). As illustrated in the example reaches [Fig. 10(b) and (c)], the path was very direct with target information, whereas the KF at C5 often produced movements in the wrong direction that were subsequently corrected by the subject. Target information helped dramatically in trajectory decoding, by reducing the need for error correction.

Overall, the online reaches closely replicated the simulated ideal trajectories; the online $R^2$ was very high [Fig. 10(d)]. Importantly, this metric was generally high due to the constrained movement in the outward (Z) direction—all reaches were towards the two monitors. While the subjects were not constrained to follow any particular trajectory, the "ideal" straight-line reach path was very closely followed, particularly for the models with target information, which again were not statistically different [both $p > 0.05$, Fig. 10(d) and (e)]. Similar to the path-efficiency result, the KF at C5 had a somewhat lower $R^2$ (both $p < 0.05$), indicating less regular trajectories and a greater need for corrections using the EMG signals. Both the path efficiency and online $R^2$ were slightly lower at C5 than at C4 for the models with target information, most likely due to the influence of the additional EMG. However, these differences were not statistically significant (all $p > 0.05$).


Fig. 10. Kinematic characteristics: (a) path-efficiency results for successful reaches (including only 40% of the reaches for the KF at C4); (b) example reach path for the mKFT at C5 (PE = —%, monitor not to scale); (c) example reach path for the KF at C5 (PE = —%, monitor not to scale). (d) Comparison to the ideal trajectory for all reaches; (e) example reach for the mKFT at C5 ($R^2$ = —); and (f) example reach for the KF at C4 ($R^2$ = —).

D. Adaptation of Eye Movements to Available EMG Information

Subjects, motivated to produce accurate control, produced more precise eye movements when they had few EMGs available. The errors in the target estimates at C4 were significantly lower than those at C5 (Fig. 8, $p < 0.05$). This suggests that when subjects had fewer EMGs to compensate for errors in their gaze information they tried to minimize those errors, while they were content to use the EMG when it was available. The subjects essentially manipulated the accuracy of the gaze information based on the accuracy possible from the available EMG. At C5 the mKFT performance was significantly more accurate than the target estimates used ($p < 0.05$), demonstrating that the EMG enhanced performance.

E. Efficiency of Target and EMG Integration in Decoder

If we assume that the EMG and gaze are conditionally independent, we find that the assumption of an optimal combination almost perfectly explains the precision of the mKFT (compare red and dashed pink in Fig. 8). Interestingly, as noted previously, we can also see a clear sign of subjects' eye movements being less precise in the C5 case. The prediction based on an optimal combination of the C5 EMGs and the same eye-movement precision as in C4 would have been an average error of 1.1 cm, significantly lower than what was observed ($p < 0.05$). However, this difference in precision would not have had a substantial negative effect on expected scores (scores for the KFT and mKFT differed by only 0.35), indicating that the accuracy attained was sufficient for the task. Hence, the decoder appears to combine eye-movement and EMG signals in a near-optimal fashion.

F. Comparison of Online and Offline Accuracies

Performance in closed loop was overall much better than predicted by the offline accuracy. We found that the online $R^2$ was

Fig. 11. Comparison of average offline and online accuracy for each subject and experimental condition.

consistently higher than the training $R^2$ (Fig. 11). In particular, the KF at C4 produced a wide range of training accuracies, as the extent to which the upper trapezius was activated during training varied across subjects. However, all subjects were able to perform highly accurate control in the Y direction, as noted. Closed-loop and offline performance do appear to be quite related; a linear regression gave an $R^2$ of 0.7. However, the regression line had a shallow slope of 0.19, and the relation appears to be driven by a general distribution across conditions. There is little evidence to suggest a strong relationship within any of the algorithm and simulated injury level conditions. Thus the primary insight from this comparison was that the online accuracy was far better than what was predicted by the accuracy of the training data.

IV. DISCUSSION

To achieve robust control of neuroprostheses for individuals with high tetraplegia, there is a need for more natural user interfaces. As EMGs from the available proximal muscles cannot provide sufficiently intuitive control, the use of additional sensors, such as eye trackers, can provide information about the context of the reach with little cost to the user in terms of effort. We demonstrated that target information obtained from gaze could dramatically improve control of assisted reaching with a robot arm using a single shoulder-EMG channel. This could equally have been achieved using a number of other trajectory-target combining methods [21], [37], with a necessary extension such as the mixture model described here to account for target uncertainty, enabling safer usage. Adding target information into the trajectory model naturally enhances control; the target allows a well-defined reach profile to be generated.

Gaze information is extremely useful when predicting desired trajectories, and we have shown here that it is effective and practical for use in a real-time system. While accurate reaches were performed in the C5 case using only EMG, this often required more effort from the user. Qualitatively, subjects reported that the decoders incorporating target information were much easier to control than those without; the model with perfect target information often seemed effortless. While


the KF reaches were almost as accurate at the target (Fig. 7), the trajectories produced were less consistent, as illustrated by the kinematic characteristics (Fig. 10). The lower path efficiencies indicated that the subjects were using their EMG to correct for errors in the trajectories, suggesting that even when there was sufficient neural data to control the reach, the gaze information helped by reducing the burden on the user and producing a more natural trajectory. At the C4 level, which is more representative of the high-tetraplegia population, the target information was necessary to produce functional control in all directions. Gaze information makes control both more precise and more natural.

Closed-loop interface comparisons can provide usability information not evident from offline evaluations. In this study both offline and online accuracy with target information were very high, although the offline evaluations used perfect target information and may therefore have overestimated the accuracy. More interestingly, KF performance without target information, when the subjects could interact with the decoder, was dramatically better than the offline accuracy predicted. Even though subjects were in no way constrained to follow the "ideal" reach path, it was more closely replicated in online control than in reconstructing the training reaches where that path had been enforced. This demonstrated that offline accuracy can be a poor predictor of how well a decoder can be utilized in closed loop, and is in agreement with recent studies that have produced similar findings [28], [29].

This closed-loop comparison also presented the opportunity

to consider the balance between system accuracy and the cognitive burden placed on the user when interacting with the decoder. In this study, once an acceptable performance was attained, subjects were happy to sacrifice some precision to avoid the additional effort of providing highly accurate gaze data. Subjects improved their accuracy relative to the target estimates using their EMG, rather than producing more accurate target estimates, demonstrating that the EMG was a valuable part of the interface. Only when the EMG was insufficient did they put in the extra effort with their gaze, so that equal performance could be achieved. On the other hand, without target information (KF) the addition of the deltoid EMGs provided far greater accuracy, but the mapping became more complicated and subjects often required more practice to become accustomed to it. While the addition of signal sources can greatly help decoding, it is important to combine the information in such a way as to simplify the user's task as much as possible. In this study the gaze provided potential targets that were complementary to the EMG information about the trajectory, and the mapping was immediately intuitive.

Our approach to collecting training data was an important part of this work, not only because it would be practical for use with paralyzed patients, but also because it ensured that the experimental conditions during training were as similar as possible to those of real-time control. Specifically, the subject's arm was under the control of the robot, which was equally stiff in both cases. By automatically generating the training reaches, as opposed to, for example, using natural reaches generated by the subjects, we could avoid discrepancies between the training signals and those in closed-loop control.

In this work, we have evaluated the decoders while emulating real neuroprosthesis use as closely as possible. The use of the robot to move the subject's arm accounted for any effect that this had on the control signals and helped to provide realistic feedback absent from virtual interfaces. However, while the EMGs used would be available in the SCI population, their volitional control may not be perfect and may be affected by the paralysis of synergistic muscles. Sensory feedback in the hand and arm, unavoidable with able-bodied subjects, may also be altered; to truly evaluate the system it will need to be tested by individuals with SCI. Furthermore, in a clinical FES system, where the desired kinematics must be translated into appropriate muscle stimulation patterns while accounting for the effects of fatigue and gravity, additional error will inevitably be introduced, and a feed-forward system such as this would not suffice. Nonetheless, the use of able-bodied subjects and an idealized robotic interface facilitated a thorough comparison of the signal sources and algorithms, providing a valuable proof of concept that can be developed to study more relevant populations in the future.

We believe that this decoding approach could provide a natural and intuitive user interface that will enable individuals with high tetraplegia to control a reaching neuroprosthesis. Under realistic environmental conditions, where there is potential for uncertainty in the target estimates, the probabilistic combination of signal sources provides the user with the ability to correct for errors over the course of the trajectory, making possible a safe and effective interface.

ACKNOWLEDGMENT

The authors would like to thank T. Haswell and B. Walker for their work developing the data acquisition and robot control systems, and Dr. N. Sachs for experimental assistance and helpful discussions.

REFERENCES

[1] L. R. Hochberg, D. Bacher, B. Jarosiewicz, N. Y. Masse, J. D. Simeral, J. Vogel, S. Haddadin, J. Liu, S. S. Cash, P. van der Smagt, and J. P. Donoghue, "Reach and grasp by people with tetraplegia using a neurally controlled robotic arm," Nature, vol. 485, no. 7398, pp. 372–375, May 2012.
[2] A. M. Bryden, K. L. Kilgore, R. F. Kirsch, W. D. Memberg, P. H. Peckham, and M. W. Keith, "An implanted neuroprosthesis for high tetraplegia," Top. Spinal Cord Inj. Rehabil., vol. 10, no. 3, pp. 38–52, 2005.
[3] K. L. Kilgore, H. A. Hoyen, A. M. Bryden, R. L. Hart, M. W. Keith, and P. H. Peckham, "An implanted upper-extremity neuroprosthesis using myoelectric control," J. Hand Surgery, vol. 33, no. 4, pp. 539–550, 2008.
[4] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, no. 7198, pp. 1098–1101, 2008.
[5] E. A. Pohlmeyer, E. R. Oby, E. J. Perreault, S. A. Solla, K. L. Kilgore, R. F. Kirsch, and L. E. Miller, "Toward the restoration of hand use to a paralyzed monkey: Brain-controlled functional electrical stimulation of forearm muscles," PLoS One, vol. 4, no. 6, p. e5924, 2009.
[6] C. Ethier, E. R. Oby, M. J. Bauman, and L. E. Miller, "Restoration of grasp following paralysis through brain-controlled stimulation of muscles," Nature, vol. 485, no. 7398, pp. 368–371, 2012.
[7] C. T. Moritz, S. I. Perlmutter, and E. E. Fetz, "Direct control of paralysed muscles by cortical neurons," Nature, vol. 456, no. 7222, pp. 639–642, Oct. 2008.
[8] L. R. Hochberg, M. D. Serruya, G. M. Friehs, J. A. Mukand, M. Saleh, A. H. Caplan, A. Branner, D. Chen, R. D. Penn, and J. P. Donoghue, "Neuronal ensemble control of prosthetic devices by a human with tetraplegia," Nature, vol. 442, no. 7099, pp. 164–171, 2006.


[9] D. J. McFarland, W. A. Sarnacki, and J. R. Wolpaw, “Electroencephalographic (EEG) control of three-dimensional movement,” J. Neural Eng., vol. 7, p. 036007, 2010.

[10] B. T. Smith, M. J. Mulcahey, and R. R. Betz, “Development of an upper extremity FES system for individuals with C4 tetraplegia,” IEEE Trans. Rehab. Eng., vol. 4, no. 4, pp. 264–270, Jul. 1996.

[11] N. Hoshimiya, A. Naito, M. Yajima, and Y. Handa, “A multichannel FES system for the restoration of motor functions in high spinal cord injury patients: A respiration-controlled system for multijoint upper extremity,” IEEE Trans. Biomed. Eng., vol. 36, no. 7, pp. 754–760, Jul. 1989.

[12] R. Nathan and A. Ohry, “Upper limb functions regained in quadriplegia: A hybrid computerized neuromuscular stimulation system,” Arch. Phys. Med. Rehabil., vol. 71, no. 6, p. 415, 1990.

[13] K. L. Kilgore and R. F. Kirsch, “Upper and lower extremity neuroprostheses,” in Neuroprosthetics: Theory and Practice. River Edge, NJ, USA: World Scientific, 2004.

[14] S. Brunner, S. Hanke, S. Wassertheuer, and A. Hochgatterer, “EOG pattern recognition trial for a human computer interface,” in Universal Access in Human-Computer Interaction. Ambient Interaction, 2007, pp. 769–776.

[15] A. Duchowski, Eye Tracking Methodology: Theory and Practice. New York, NY, USA: Springer, 2007, vol. 373.

[16] T. E. Hutchinson, K. P. White, Jr., W. N. Martin, K. C. Reichert, and L. A. Frey, “Human-computer interaction using eye-gaze input,” IEEE Trans. Syst., Man Cybern., vol. 19, no. 6, pp. 1527–1534, 1989.

[17] M. L. Mele and S. Federici, “Gaze and eye-tracking solutions for psychological research,” Cognitive Processing, pp. 1–5, 2012.

[18] W. W. Abbott and A. A. Faisal, “Ultra-low-cost 3D gaze estimation: An intuitive high information throughput compliment to direct brain-machine interfaces,” J. Neural Eng., vol. 9, no. 4, p. 046016, 2012.

[19] R. J. Jacob and K. S. Karn, “Eye tracking in human-computer interaction and usability research: Ready to deliver the promises (section commentary),” in The Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research. Oxford, U.K.: Elsevier Science, 2003, pp. 573–605.

[20] B. M. Yu, C. Kemere, G. Santhanam, A. Afshar, S. I. Ryu, T. H. Meng, M. Sahani, and K. V. Shenoy, “Mixture of trajectory models for neural decoding of goal-directed movements,” J. Neurophysiol., vol. 97, no. 5, p. 3763, 2007.

[21] L. Srinivasan, U. T. Eden, A. S. Willsky, and E. N. Brown, “A state-space analysis for reconstruction of goal-directed movements using neural signals,” Neural Computation, vol. 18, no. 10, pp. 2465–2494, 2006.

[22] G. H. Mulliken, S. Musallam, and R. A. Andersen, “Decoding trajectories from posterior parietal cortex ensembles,” J. Neurosci., vol. 28, no. 48, pp. 12913–12926, 2008.

[23] C. Kemere and T. Meng, “Optimal estimation of feed-forward-controlled linear systems,” in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, 2005, vol. 5, pp. 353–356.

[24] V. Lawhern, W. Wu, N. Hatsopoulos, and L. Paninski, “Population decoding of motor cortical activity using a generalized linear model with hidden states,” J. Neurosci. Methods, vol. 189, no. 2, pp. 267–280, 2010.

[25] M. M. Shanechi, G. W. Wornell, Z. M. Williams, and E. N. Brown, “Feedback-controlled parallel point process filter for estimation of goal-directed movements from neural signals,” 2012.

[26] E. A. Corbett, E. J. Perreault, and K. P. Körding, “Decoding with limited neural data: A mixture of time-warped trajectory models for directional reaches,” J. Neural Eng., vol. 9, no. 3, p. 036002, 2012.

[27] E. A. Corbett, N. A. Sachs, K. P. Kording, and E. J. Perreault, “Dealing with noisy gaze information for a target-dependent neural decoder,” in Proc. 33rd Annu. Int. Conf. IEEE Engineering in Medicine and Biology Soc., 2011, pp. 5428–5431.

[28] S. M. Chase, A. B. Schwartz, and R. E. Kass, “Bias, optimal linear estimation, and the differences between open-loop simulation and closed-loop performance of spiking-based brain-computer interface algorithms,” Neural Netw., vol. 22, no. 9, pp. 1203–1213, Nov. 2009.

[29] S. Koyama, S. M. Chase, A. S. Whitford, M. Velliste, A. B. Schwartz, and R. E. Kass, “Comparison of brain-computer interface decoding algorithms in open-loop and closed-loop control,” J. Comput. Neurosci., pp. 1–15, 2010.

[30] J. P. Cunningham, P. Nuyujukian, V. Gilja, C. A. Chestek, S. I. Ryu, and K. V. Shenoy, “A closed-loop human simulator for investigating the role of feedback control in brain-machine interfaces,” J. Neurophysiol., vol. 105, no. 4, pp. 1932–1949, 2011.

[31] S. P. Kim, J. D. Simeral, L. R. Hochberg, J. P. Donoghue, and M. J. Black, “Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia,” J. Neural Eng., vol. 5, no. 4, pp. 455–476, Dec. 2008.

[32] E. A. Corbett, K. P. Kording, and E. J. Perreault, “Real-time fusion of gaze and EMG for a reaching neuroprosthesis,” in Proc. 34th Annu. Int. Conf. IEEE Engineering in Medicine and Biology Soc., 2012, pp. 739–742.

[33] R. E. Kalman, “A new approach to linear filtering and prediction problems,” J. Basic Eng., vol. 82, no. 1, pp. 35–45, 1960.

[34] B. Hudgins, P. Parker, and R. N. Scott, “A new strategy for multifunction myoelectric control,” IEEE Trans. Biomed. Eng., vol. 40, no. 1, pp. 82–94, Jan. 1993.

[35] D. Tkach, H. Huang, and T. A. Kuiken, “Study of stability of time-domain features for electromyographic pattern recognition,” J. Neuroeng. Rehab., vol. 7, no. 1, p. 21, 2010.

[36] L. Ljung, System Identification: Theory for the User. Englewood Cliffs, NJ, USA: Prentice-Hall, 1987.

[37] V. Lawhern, N. G. Hatsopoulos, and W. Wu, “Coupling time decoding and trajectory decoding using a target-included model in the motor cortex,” Neurocomputing, 2011.

Elaine A. Corbett (M’12) received the B.E. degree in electronic engineering from University College Dublin, Ireland, in 2006, and the M.S. and Ph.D. degrees in biomedical engineering from Northwestern University, Evanston, IL, USA, in 2008 and 2012, respectively.

She is a Postdoctoral Fellow in the Sensory Motor Performance Program, Rehabilitation Institute of Chicago. Her research involves Bayesian modeling of human movement and the development of user interfaces for neuroprosthetic control.

Konrad P. Körding received the Ph.D. degree in physics from the Federal Institute of Technology (ETH), Zurich, Switzerland, where he worked on cat electrophysiology and neural modeling.

He received postdoctoral training in London, where he worked on motor control and Bayesian statistics. Another postdoctoral position followed at the Massachusetts Institute of Technology, where he worked on cognitive science and natural language processing and deepened his knowledge of Bayesian methods. Since 2006, he has worked for the Rehabilitation Institute of Chicago and Northwestern University, Chicago, IL, where he received tenure and a promotion to Associate Professor in 2011. His group is broadly interested in uncertainty, using Bayesian approaches to model human behavior and for neural data analysis.

Eric J. Perreault (M’03) received the B.Eng. and M.Eng. degrees in electrical engineering from McGill University and the Ph.D. degree in biomedical engineering from Case Western Reserve University.

He is an Associate Professor at Northwestern University with appointments in Biomedical Engineering and Physical Medicine and Rehabilitation, and in the Sensory Motor Performance Program at the Rehabilitation Institute of Chicago, Chicago, IL, USA. He completed a Postdoctoral Fellowship in physiology at Northwestern University, and was a Visiting Professor at ETH Zürich. His current research focuses on the neural and biomechanical factors involved in the control of multi-joint movement and posture, as well as changes following injury.