Exp Brain Res (1999) 125:97–107 © Springer-Verlag 1999

RESEARCH ARTICLE

S.F.W. Neggers · H. Bekkering

Integration of visual and somatosensory target information in goal-directed eye and arm movements

Received: 15 June 1998 / Accepted: 21 September 1998

S.F.W. Neggers (✉) · H. Bekkering
Max-Planck Institute for Psychological Research, Department of Cognition and Action, Leopoldstrasse 24, D-80802 Munich, Germany
e-mail: [email protected]
Tel.: +49-89-38602277, Fax: +49-89-342473

Abstract In this study, we compared separate and coordinated eye and hand movements towards visual or somatosensory target stimuli in a dark room, where no visual position information about the hand could be obtained. Experiment 1 showed that saccadic reaction times (RTs) were longer when directed to somatosensory targets than when directed to visual targets in both single- and dual-task conditions. However, for hand movements, this pattern was only found in the dual-task condition and not in the single-task condition. Experiment 1 also showed that correlations between saccadic and hand RTs were significantly higher when directed towards somatosensory targets than when directed towards visual targets. Importantly, experiment 2 indicated that this was not caused by differences in processing times at a perceptual level. Furthermore, hand-pointing accuracy was found to be higher when subjects had to move their eyes as well (dual task) compared to a single-task hand movement. However, this effect was more pronounced for movements to visual targets than to somatosensory targets. A schematic model of sensorimotor transformations for saccadic eye and goal-directed hand movements is proposed, and possible shared mechanisms of the two motor systems are discussed.

Key words Saccade · Hand movement · Sensorimotor integration · Reaction time · Accuracy

Introduction

In everyday life we spend much of our time reaching for and manipulating objects. When making a goal-directed hand movement towards a target, precise information about this target has to be obtained. Target information can be multimodal, including visual, auditory and somatosensory sources of information. Since humans can make precise aiming movements under rapidly changing conditions, a mechanism dynamically integrating all these types of information is necessary to initiate and guide ongoing goal-directed movements.

During the last few decades, many researchers have investigated the role of eye movements and visual information in the control of limb movements. Studies on eye-hand coordination to visual targets (e.g., Abrams et al. 1990; Bekkering et al. 1994, 1995; Frens and Erkelens 1991; Biguer et al. 1982; Mather and Fisk 1985; Prablanc et al. 1979; Pélisson et al. 1986) have shown that saccadic eye movements are much shorter and quicker than goal-directed hand movements. For instance, for coordinated eye/hand responses to a visual target at 20° eccentricity, eye movements are usually initiated about 150 ms before hand movements (e.g., Biguer et al. 1982), and eye movement durations to such targets are much shorter than hand movement durations, about 50 ms versus 350 ms, respectively (Pélisson et al. 1986). This has led some researchers to infer that, besides retinal information about the target position, an eye movement is also needed to guide a limb accurately to the target (e.g., Paillard 1982; Prablanc et al. 1979). Indeed, several researchers have found that limb movement accuracy suffered when subjects were not allowed to move their eyes (e.g., Abrams et al. 1990; Mather and Fisk 1985; Prablanc et al. 1979).

An unsolved problem in eye/hand coordination is how motor commands for the arm are constructed from sensorial (visual) target information and afferent proprioceptive signals. The present experiments were designed to gain more insight into the role and organization of sensorial information transformations in the control of limb and eye movements. To investigate how movement latencies are influenced by the recoding of sensorial information about the spatial properties of the target into useful target representations, visual or somatosensory targets were used to define a target in space.


Visual targets are projected on the retina, and the incoming position information is therefore retinotopic, implying that the visual input changes when the eye and/or head is reoriented in space. Retinotopically organized visual maps are also found in several cortical and visuomotor areas of the mammalian brain (Hubel and Wiesel 1959; Sparks 1989). Early models of saccadic control assumed that saccadic eye movements were executed relative to the actual gaze direction. Saccadic motor commands can thus be described in a retinotopic frame of reference, coding only the movement direction and amplitude, which is close to the difference between actual gaze and target orientation (Robinson 1973). The idea of a retinotopic oculomotor organization is also motivated by electrophysiological single-cell recordings in the monkey superior colliculus, which consists of superficial layers with neurons whose discharge rates depend on the retinocentric location of visual stimuli, and deep layers with neurons that show motor-related activity dependent on the direction and amplitude of the saccade that will be executed, even when no visual target is present (Schiller and Koerner 1971; Robinson 1972). More recent studies and models show that oculomotor commands also take the position of the eyeball in its orbit into account (Hallet and Lightstone 1976a, 1976b; Robinson 1975; Mays and Sparks 1980; Zee et al. 1976). Mays and Sparks (1980) evoked a saccade by electrostimulating the deep layers of the superior colliculus of macaque monkeys, just before or during the execution of a saccade to a visual target that was already present before the electrostimulation occurred. They observed that after the electrostimulation the eye successfully moved, or continued to move, to the visual target, now starting from another position because of the electrostimulated saccade, and not with the direction and amplitude it would have moved with had the original motor program been retinotopic (which would have produced a saccade missing the target, parallel to the originally intended one). Head-centered saccadic motor commands were then assumed to reposition the eye in the head, and not relative to the fovea. Eye position signals are known to exist, shortly after an eye movement, in the brainstem abducens, oculomotor and trochlear nuclei (Fuchs et al. 1985), which directly innervate the eye muscles and obtain their commands from the deep layers of the superior colliculus.
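
The compensation observed by Mays and Sparks can be made explicit in the notation introduced later in Fig. 9 (a one-dimensional sketch of our own, not an equation from the cited papers): with rR the retinal target eccentricity and e0 the eye-in-head orientation at stimulus onset, the target in head coordinates is

\[ r_H = r_R + e_0 , \]

and the saccade command is specified against the current eye orientation e,

\[ \Delta e = r_H - e . \]

A purely retinotopic program (\( \Delta e = r_R \), independent of e) would instead have produced the parallel, target-missing saccade that was not observed.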

Arm movements, however, are conducted relative to the initial set of limb angles, positioned relative to the trunk, and most likely depend on a target representation in trunk-referenced coordinates. For arm movements it is thus necessary that the retinotopic visual target coordinates are transformed into a trunk-referenced target representation, in order to compensate for eye and head orientation (Soechting et al. 1991; Flanders et al. 1992). Parietal neurons have been found to encode visual targets in body-centered space by the use of a head-position signal (Brotchie et al. 1995).

Somatosensory stimulus information (here offered at the knee, at the same position as the visual targets) that reaches the sensorimotor system is invariant under head and eye movements, and is in that way trunk-referenced. As described above, goal-directed arm movements are likely to be commanded in such a trunk-referenced coordinate system. Evidence for a shoulder (or trunk)-centered frame of reference for pointing motor commands towards kinesthetic targets can be found in recent work by Baud-Bovy and Viviani (1998), who also provided a model for pointing towards such targets. However, in order to initiate saccadic eye movements, the somatosensory target information presumably needs to be coded in at least a head-centered frame of reference (for a discussion on this topic, see Groh and Sparks 1996c).

The influence of the necessary sensorimotor transformations is expected to change motor preprocessing times, and therefore saccadic and/or hand movement latencies. To be able to distinguish between the mechanisms that regulate the sequential movement order of eye and arm on the one hand, and signal transformation on the other, a dual-task paradigm was used. Subjects are instructed to make only a saccade to the target in one condition, to fixate a fixation dot and move just their hand to the target in another condition, or to perform a coordinated eye/arm movement to the target in a third condition. This design makes it possible to study the role of stimulus information on saccadic eye movements or hand movements separately, or when they are part of an integrated movement sequence. Precise observation of saccadic eye and hand movements can reveal more detailed information on how the ocular and manual motor systems integrate stimulus information and afferent information into aiming and saccadic eye movements.

In summary, the sensorimotor transformations described above predict longer movement reaction times (RTs) for eye movements toward somatosensory targets than for eye movements towards visual targets. As mentioned before, retinotopic target coordinates are related to oculomotor coordinates, probably only after integrating eye position information, which, however, might be rapidly available. To construct an oculomotor program from trunk-referenced (somatosensory) target coordinates, however, preceding transformations taking posture, head and eye orientation into account have to take place, which presumably consumes more time. No specific differences for manual/arm movement RTs can be predicted from the proposed functional pathways, because arm movements require spatial somatosensory transformations taking body posture into account when moving towards somatosensory targets, and transformations taking head and eye position into account when moving towards visual stimuli. Both processes presumably consume considerable preprocessing time.

These RT characteristics are expected principally for the single-task saccades and hand movements. Although similar transformations presumably take place in coordinated eye-hand movements, their impact on movement RTs might be lost because of a strong demand to perform both motor tasks sequentially; e.g., the typically found eye-first, hand-second movement order might occur partially independently of the sensorimotor transformations that take place for the separate motor systems.

A comparison between latencies of visually and somatosensorily evoked movements might therefore shed some light on the influence of sensorimotor transformations, on the one hand, and on the differences between dual-task and single-task latencies in the coordination of goal-directed eye-hand movements, on the other. In addition, the correlation between eye and hand RTs (in the dual task) will be analyzed with respect to target stimulus modality. When the eye and hand motor systems share many information-processing stages (i.e., eye and hand movements to somatosensory targets), a high correlation coefficient can be expected. When only a few stages of movement preprocessing are used by both systems (i.e., eye and hand movements to visual targets), one can expect a relatively low correlation coefficient. Also, in order to investigate the effectiveness of the proposed preprocessing of target information, pointing accuracy is analyzed with respect to the single- and dual-task hand movements to both the visual and the somatosensory target stimuli.
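
As an illustration of this correlation logic, the sketch below (ours; the data are synthetic, with condition means taken from the results reported later) computes Pearson correlations between per-trial eye and hand RTs under a model in which only the somatosensory condition contains a shared preprocessing stage:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 20  # trials per condition, as in the experiment

# Shared-stage noise: variance common to eye and hand preparation.
shared = rng.normal(0.0, 40.0, n)

eye_som = 380 + shared + rng.normal(0.0, 20.0, n)   # saccadic RTs, somatosensory (ms)
hand_som = 446 + shared + rng.normal(0.0, 20.0, n)  # manual RTs, somatosensory (ms)
eye_vis = 285 + rng.normal(0.0, 45.0, n)            # saccadic RTs, visual (ms)
hand_vis = 371 + rng.normal(0.0, 45.0, n)           # manual RTs, visual (ms)

for label, eye, hand in [("visual", eye_vis, hand_vis),
                         ("somatosensory", eye_som, hand_som)]:
    r, p = pearsonr(eye, hand)
    print(f"{label:>13}: r = {r:+.2f} (P = {p:.3f})")
```

With a shared stage, r approaches the ratio of shared to total variance; without one, it hovers around zero.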

Experiment 1

In the first experiment, eye and hand movements were studied that were directed towards a somatosensory target at the left or the right knee, or towards a visual target at the same location. In this way, visual or somatosensory spatial target information represents the same target position. In one sequence, referred to as the eye-only condition or single task, subjects had to perform a single saccade towards the target, and were not required to point to it. In another sequence, the hand-only condition or single task, subjects had to point towards the target while fixating the midline. In a third sequence, the eye-and-hand condition or dual task, subjects were required to make a combined eye-hand movement towards the target. The tasks were designed to study the effect of stimulus modality on eye and hand movements separately (eye only, hand only) or on a combined eye-hand movement sequence (dual task).

Experiment 2

In the second experiment, the setup of experiment 1 was used again, but now subjects always had to make a combined eye/hand movement towards the visual or somatosensory targets. The current through the electrostimulation electrodes on the knees was varied in three steps, and subjects had to perform a block of trials for each level of stimulation. The correlation between saccadic and hand movement RTs was calculated for each level of electrostimulation. If the observed high correlation coefficients are indeed caused by a perceptual process delayed by a low stimulus-background contrast, one would expect the correlation coefficients to decrease when a higher level of electrostimulation is used. If the correlation is caused by sensorimotor spatial transformations that are shared by eye and hand movement preparation processes at a later stage, the correlations would not be influenced by somatosensory stimulation intensity.
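
The logic of this manipulation can be made concrete with a toy simulation (ours, not the authors' analysis): eye and hand RTs share a perceptual stage and a spatial-transformation stage, and raising stimulus intensity is modelled as shrinking the perceptual stage's mean and spread. Under hypothesis A (the shared variance lives in perception), r falls with intensity; under hypothesis B (it lives in the shared transformations), r stays roughly constant while mean RTs still drop.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40  # trials per stimulation level

def dual_task_rts(perc_mean, perc_sd, transform_sd):
    """Eye/hand RT pairs sharing a perceptual and a transformation stage."""
    perception = rng.normal(perc_mean, perc_sd, n)  # shared perceptual stage (ms)
    transform = rng.normal(90.0, transform_sd, n)   # shared spatial recoding (ms)
    eye = perception + transform + rng.normal(170.0, 15.0, n)
    hand = perception + transform + rng.normal(230.0, 15.0, n)
    return eye, hand

# Stronger electrostimulation -> faster, less variable perception.
for label, transform_sd in [("A: shared variance in perception", 5.0),
                            ("B: shared variance in transformations", 40.0)]:
    print(label)
    for level, (mu, sd) in enumerate([(140, 50), (120, 25), (100, 5)], start=1):
        eye, hand = dual_task_rts(mu, sd, transform_sd)
        r = np.corrcoef(eye, hand)[0, 1]
        print(f"  level {level}: mean eye RT {eye.mean():.0f} ms, r = {r:.2f}")
```

Only hypothesis B reproduces the pattern experiment 2 reports below: mean RTs fall with intensity while the eye-hand correlation remains high.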

Materials and methods

Experiment 1

Subjects

The experiment was conducted on six healthy subjects, mostly students.

Apparatus

The subjects were seated in a dentist's chair, with their legs lifted about 20° in order to obtain a frontal view of the stimulus board at the knees. A foam headrest restricted head movements. Subjects were required to wear shorts to facilitate somatosensory stimulation. The room in which the experiments were conducted was completely darkened. As visual stimuli, green light-emitting diodes (LEDs) were used, positioned 2 cm above the knees at a horizontal visual angle of approximately 20°, varying with subject height. They were positioned on a stimulus board of 500×80×15 mm, 150 mm left or right from the midline of the board. The fixation point, a red LED situated on the midline of the board, was illuminated throughout the whole experiment. An electrical stimulation electrode was connected to each knee. These were placed on the reverse side of the stimulus board, 20 mm under the green LEDs. The stimulus consisted of a continuous sequence (5 Hz) of electrical pulses (300 V, 10–30 µA) with a duration of 100 ms.

Fig. 1 Experimental setup. The subject is seated with an LED board on the knees, containing a red fixation LED and two green target LEDs. Two centimeters under the target LEDs, tactile stimulation electrodes are connected to the skin on the knees. Ocular movements are measured by two infrared cameras (250 Hz) mounted on a helmet (SMI system), and the OPTOTRAK system measures the position of infrared markers on the board and the fingertip (250 Hz)

To measure hand position and eye orientation, two infrared (IR) tracking devices were used. The position in three-dimensional space was measured by the OPTOTRAK system from Northern Digital. Six cameras were installed to sample raw position data of an active IR marker at a sampling rate of 250 Hz. One marker was situated at the index fingertip, and three others on the stimulus board. In this way the system always had access to the positions of the stimulus LEDs in terms of stimulus-board coordinates (calibrated before the experiments, expressed relative to three board LEDs).

Eye orientation was also measured with an IR tracking system, the EyeLink system of SensoMotoric Instruments. For these experiments only the left eye was tracked, at a rate of 250 Hz. The azimuthal and elevational angles of the eyeball in the head were calculated using the EyeLink PC, and collected by the OPTOTRAK Data Acquisition Unit (ODAU). The ODAU sampling unit was completely synchronized with the OPTOTRAK, to ensure that a hand position sample and eye position angles were obtained at the same moment (Fig. 1).

The ODAU and OPTOTRAK sample data were stored and then transferred to a Compaq Pentium 200 Pro PC, which also controlled the timing of the trials, the stimulus onset and the online recording of the subject's movements.

Procedure

The experiment consisted of three blocks of trials, and for each block the subjects received a different set of instructions. The order of the blocks was balanced between subjects. Subjects were able to practice the instructions for each block before the experiment started. Before a block started, the visual angle of each target was calibrated. The current through the skin electrodes for somatosensory stimulation was turned up in small steps until the subject reported feeling the pulses clearly. The currents used differed between subjects, owing to the individual electrical resistance of each subject's skin.

Each block contained 80 trials with four stimulus types equally divided: in 20 trials visual stimulation occurred on the left knee and in 20 trials on the right knee; in 20 trials somatosensory stimulation occurred on the left knee, and in 20 on the right knee. Stimulus modality and direction were completely randomized throughout the block. After each trial a random delay was built in, to ensure that subjects had no information about when the next target would appear.

In the eye-only block, subjects were asked to make an eye movement to the emerging stimulus as quickly as possible. In the hand-only condition, subjects were asked to make a hand movement to the emerging target as quickly as possible while fixating the red fixation LED in the middle of the board. In the dual-task condition, subjects were asked to make an eye and a hand movement to the target. Eye and hand movements always started in the middle of the board, at the red fixation LED. The target LED was switched off 50 ms after ocular gaze came within 5° of visual angle of the LED, or the hand came within 2 cm of the LED. The target offset was triggered this way in order to signal to the subjects that they had completed the task and could return their gaze and/or fingertip to the fixation LED.
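
A minimal sketch of this completion criterion (our reconstruction; the function name and the use of azimuth-only gaze are assumptions, and the real system additionally waited 50 ms before switching the LED off):

```python
import numpy as np

def target_reached(gaze_azimuth_deg, target_azimuth_deg,
                   finger_xyz_mm, target_xyz_mm):
    """True once the trial's completion criterion from the text is met:
    gaze within 5 deg of the target's visual angle, or the fingertip
    within 2 cm (20 mm) of the target LED."""
    gaze_ok = abs(gaze_azimuth_deg - target_azimuth_deg) < 5.0
    finger_ok = np.linalg.norm(np.asarray(finger_xyz_mm, dtype=float)
                               - np.asarray(target_xyz_mm, dtype=float)) < 20.0
    return gaze_ok or finger_ok
```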

Data analysis

To analyze the performance of the subjects, six movement parameters were calculated. For each of the two-dimensional eye movement trials (azimuth and elevation) and three-dimensional hand movement trials (X, Y, Z), the tangential velocity was calculated along the trajectory. For detection of movement onset and offset times, a relative velocity threshold was used (10% of peak velocity for eye movements, 5% for hand movements). Onset of movement was defined as the moment the tangential velocity exceeded this threshold and a minimum displacement (50% of movement amplitude for eye movements, 25% for hand movements) took place within a maximum amount of time after this onset (30 ms for eye movements, 300 ms for hand movements). The offset time of a trial was found by the reverse of this procedure, taking the first sample time after the detected movement onset at which the velocity decreased below the threshold (as in Meyer et al. 1988; Abrams et al. 1990).
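
These criteria translate into a compact detection routine; the sketch below is our reading of them (in particular, movement amplitude is approximated by the start-to-end displacement of the analyzed segment), not the authors' original code:

```python
import numpy as np

def detect_onset_offset(pos, dt, v_frac, disp_frac, max_window):
    """Velocity-threshold movement detection as described in the text.

    pos        : (n_samples, n_dims) float trajectory; deg for eye, mm for hand
    dt         : sample interval in s (1/250 for both tracking systems)
    v_frac     : relative velocity threshold (0.10 eye, 0.05 hand)
    disp_frac  : required displacement fraction (0.50 eye, 0.25 hand)
    max_window : time window for that displacement in s (0.03 eye, 0.30 hand)
    """
    vel = np.linalg.norm(np.gradient(pos, dt, axis=0), axis=1)  # tangential velocity
    thresh = v_frac * vel.max()
    amplitude = np.linalg.norm(pos[-1] - pos[0])  # approximate movement amplitude
    win = int(round(max_window / dt))

    # Onset: first threshold crossing followed by sufficient displacement.
    for i in np.flatnonzero(vel > thresh):
        j = min(i + win, len(pos) - 1)
        if np.linalg.norm(pos[j] - pos[i]) >= disp_frac * amplitude:
            onset = i
            break
    else:
        return None, None

    # Offset: first later sample where velocity drops below the threshold.
    below = np.flatnonzero(vel[onset + 1:] < thresh)
    offset = onset + 1 + below[0] if below.size else None
    return onset, offset
```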

The accuracy of a pointing movement was the two-dimensional spatial distance between the x,y-coordinates of the marker on the index finger at movement offset time and the x,y-coordinates of the target position. Eye movement accuracy is not available, because head position was not measured. Head movements were restricted, but the drift of the head was too large to determine the distance from saccadic end positions to the precalibrated target positions with an accuracy exceeding the criterion for target offset (5° of visual angle), which would have been necessary to compare eye movement accuracies between conditions.

These movement parameters (such as reaction time, duration and accuracy) were grouped according to the experimental conditions "task" (single or dual), "stimulation" (visual or somatosensory), "direction" (left or right) and "type" (eye or hand). The movement parameters for each condition (for example, visually stimulated single-task hand movements to the right) in one subject form a group of 20 measurements per movement parameter. For each movement parameter, a large tolerance window was then defined by a minimum and a maximum value, and values outside this domain were discarded. The window for latencies was 150–1000 ms, and for pointing accuracy 0–50 mm. Only a small number of measurements were affected.
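
In code, the error measure and the tolerance screening amount to a few lines (a sketch using the values quoted above; the function names are ours):

```python
import numpy as np

def pointing_error_mm(finger_xy_at_offset, target_xy):
    """2-D distance on the stimulus board between fingertip landing
    position and target position, both in board coordinates (mm)."""
    return float(np.linalg.norm(np.asarray(finger_xy_at_offset, dtype=float)
                                - np.asarray(target_xy, dtype=float)))

def within_tolerance(latency_ms, error_mm):
    """Tolerance windows used to discard implausible trials."""
    return 150.0 <= latency_ms <= 1000.0 and 0.0 <= error_mm <= 50.0
```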

To compare the results across subjects, an analysis of variance (ANOVA) was performed. The influence of the conditions "type" (eye or hand), "stim" (visual or tactile) and "task" (dual or single) on movement RTs and pointing accuracy was tested. The variable "subject" was a random factor in the analysis.
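
Such a design maps onto a standard repeated-measures ANOVA; a sketch with statsmodels (our choice of tool, with synthetic placeholder cell means) could look like this:

```python
import itertools
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)

# One mean RT per subject x type x stim x task cell (placeholder values).
cells = list(itertools.product(range(6), ["eye", "hand"],
                               ["visual", "tactile"], ["single", "dual"]))
df = pd.DataFrame(cells, columns=["subject", "type", "stim", "task"])
df["rt"] = 350.0 + rng.normal(0.0, 30.0, len(df))

# 'subject' enters as the random factor; 'type', 'stim' and 'task' are
# the within-subject conditions named in the text.
res = AnovaRM(df, depvar="rt", subject="subject",
              within=["type", "stim", "task"]).fit()
print(res.summary())
```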

Experiment 2

Subjects

The experiments were conducted on four healthy subjects, all students.

Apparatus

The same equipment and setup were used as in experiment 1. Again, the room in which the experiments were conducted was completely darkened.

Procedure

First, three levels of electrical stimulation were determined for each subject separately. The first level was a threshold level, determined by turning up the current from zero in small steps to a level at which subjects noticed all the stimulations in a test sequence. The other two levels were determined by increasing the current in steps of about 20 µA, depending on the subject's sensitivity to electrical stimulation. If subjects did not notice a difference at the second level, we increased the current until they felt a difference in stimulation compared to the previous level. At the highest level, subjects noticed the pulses clearly, but none of them reported painful or uncomfortable stimulation.

In each block of trials, one of the previously determined levels of electrical stimulation was used. In all three blocks subjects had to execute a combined eye and hand movement (dual task), as described in experiment 1. Again, 80 trials were performed per block, evenly distributed over visual and somatosensory target stimuli on the left and the right side of the fixation dot. The intensity of the visual targets was held constant. The order of the blocks (levels 1–3) was randomized between subjects.

Data analysis

Data analysis was similar to that in experiment 1.

Results

Experiment 1

The sampled data of a trial in which both eye and hand moved to the target and back are presented in Fig. 2. In the figure, the x-coordinate (along the stimulus board) of the fingertip position and the azimuthal angle of the left eye are plotted. Dotted lines denote stimulus onset and offset.

Latencies

A main effect was found for stimulus modality (STIM: F(5,1)=28.7, P=0.003): latencies were on average shorter to visual target stimuli (342 ms) than to somatosensory target stimuli (412 ms). A main effect for movement type was also found (TYPE: F(5,1)=24.3, P=0.004), implying that eye movements were on average initiated earlier (334 ms) than hand movements (420 ms). No main effect was found for task condition; single-task movements were initiated on average 385 ms after stimulus onset and dual-task movements 369 ms (TASK: F(5,1)=1.2, P=0.33) (see Fig. 3).

Importantly, an interaction effect was found between target stimulus modality and movement type (F(5,1)=89.5, P<0.001), indicating that the RT difference between eye movements directed to visual targets (285 ms) and those directed to somatosensory targets (380 ms) was larger than the corresponding difference for hand movements to visual and somatosensory targets (376 ms and 420 ms, respectively).

In addition, we performed separate ANOVAs for eye and hand movements. Interestingly, eye movements had much shorter latencies when stimulated visually than when stimulated somatosensorily, for both dual- and single-task movements (STIM: F(5,1)=44.6, P<0.002), 285 ms and 383 ms respectively. No effect was found for the task condition (single or dual; F(5,1)<1), nor for the interaction between task condition and stimulus modality (STIM×TASK: F(5,1)=1.7, P=0.25).

For hand movements, an interaction between movement task (single or dual) and stimulus modality was found (STIM×TASK: F(5,1)=5.4, P=0.067). Dual-task hand movement RTs were larger when stimulated somatosensorily (446 ms) than when stimulated visually (371 ms). Single-task hand movements, however, resulted in statistically similar latencies towards visual (382 ms) and somatosensory (395 ms) targets (STIM: F(5,1)=4.2, P=0.1).

Fig. 2 Upper graph shows the azimuthal angle of the eye over time. Lower graph shows the changes in the x-coordinate (along the board) of the position of the fingertip. Dotted lines denote stimulus onset and offset

Fig. 3 a In the upper graph, the RTs of single-task and dual-task saccades are plotted, towards visual or somatosensory target stimuli. Error bars represent 1 SD. For both single- and dual-task saccades the RT was prolonged when the saccade was directed to somatosensory targets. b In the lower graph, the RTs of single- and dual-task hand movements are plotted. Hand movement RTs towards somatosensory targets are only prolonged in the dual task; in the single task the RT is comparable to the mean RTs of hand movements directed towards visual targets

Correlation coefficients

For all subjects, a correlation between eye and hand latencies was found for somatosensory stimulation (r=0.65±0.13). The correlation between eye and hand RTs for visually evoked movements (r=0.30±0.45) was lower than that observed for somatosensory targets (t-test: P<0.05), and varied widely between subjects. In Fig. 4, the RTs of eye and hand movements in a combined eye/hand movement task are plotted for one subject, with circles depicting right-knee and crosses left-knee electrical (right graph) or visual (left graph) stimulation. In theory, a difference in sensitivity to somatosensory stimulation between the left and the right knee could cause a pseudo-correlation, forming two distinguishable groups in RT space. This was not the case for these subjects, correlation also being found within each side of stimulation. The correlation coefficients for all subjects are given in Table 1, calculated for leftward and rightward movements separately or taken together.

Accuracy

For all trials, the accuracy of the pointing movement was calculated as described in "Data analysis". Accuracy in general varied between 0 and 30 mm.

A main effect was found for stimulus modality (F(5,1)=7.4, P=0.041), indicating that pointing accuracy was higher to visual stimuli than to somatosensory stimuli. However, there was an interaction trend between stimulus modality and task condition (STIM×TASK: F(5,1)=4.3, P=0.09), which indicated that only dual-task hand movements to visual targets were more accurate than hand-only movements; this accuracy improvement in the dual task was much smaller when somatosensory stimuli were used. An overview is plotted in Fig. 5.

In Fig. 6 the probability density of the landing positions of pointing movements is plotted: in Fig. 6a for dual- and single-task movements towards visual targets, and in Fig. 6b for movements towards somatosensory targets. A gray value at position x,y in the graph represents the probability of a pointing movement ending within a square surface unit at position x,y. The pointing accuracy towards somatosensory targets is clearly unaffected by the presence of an eye movement (compare dual- and single-task distributions); towards visual targets, the fingertip lands closer to the target when the pointing movements are executed in a dual task. This pattern is most obvious for movements to the right target.

Fig. 4 Each movement executed by one subject (C.E.) in the dual-task block is represented as a point in two-dimensional space, the x-coordinate being the manual RT corresponding to this movement, the y-coordinate the saccadic RT of this particular movement. In the left graph all movements directed towards visual targets are plotted, and in the right graph all movements towards somatosensory targets. The correlation between eye and hand movement RTs is significantly higher when directed towards tactile targets. Similar behavior is observed for the other subjects, as can be seen in Table 1

Table 1 For all subjects, the correlation coefficients between the reaction times of saccades and hand movements towards visual or somatosensory (Som.) targets are given: in the left two columns for leftward and rightward movements taken together, in the other four columns for both directions separately. The correlation coefficient between saccadic and manual reaction times is higher when somatosensory targets are used

Subject   Visual    Som.    Visual right   Visual left   Som. right   Som. left
D.K.       0.63     0.74        0.50           0.76          0.55        0.79
J.L.       0.68     0.74        0.48           0.66          0.90        0.61
K.K.      –0.48     0.51       –0.62          –0.30          0.13        0.53
C.E.       0.33     0.68        0.24           0.24          0.34        0.94
F.B.       0.63     0.49        0.76           0.43          0.57        0.43
E.D.       0.04     0.79       –0.23           0.49          0.59        0.89

Fig. 5 The tangential distance in millimeters between the landing position (fingertip position at movement offset time) on the stimulus board and the target position is plotted, as a measure of pointing accuracy, for dual- and single-task pointing towards visual and somatosensory targets. Pointing accuracy is improved when an eye movement is made (dual vs single task), but only when visual targets are used. See also Fig. 6

Fig. 6a, b Probability density for a pointing movement to end within a surface unit, i.e., the number of end positions within each two-dimensional bin divided by the total number of pointing trials towards this particular target, plotted as gray-scale values (dark: high probability; light: low probability). Left and right, visual and somatosensory targets are analyzed separately. The pointing end positions of all subjects are taken together. a The probability density plot for all pointing movements towards visual targets in the dual task (upper graph) and in the single task (lower graph). The bin size was 6×6 mm. b The probability density for movements towards somatosensory targets, for dual- (upper graph) and single-task pointing movements (lower graph), respectively. Here the bin size is 8×8 mm. Pointing movements end closer to the target in the dual task than in the single task, but only when visual targets are used
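
The density plots of Fig. 6 boil down to a normalized 2-D histogram; a sketch of that computation (ours, with bin sizes from the caption; the extent of the plotted region is an assumption):

```python
import numpy as np

def landing_density(end_xy_mm, target_xy_mm, bin_mm=6.0, extent_mm=60.0):
    """Probability of a pointing movement ending within each square bin:
    counts per 2-D bin divided by the total number of trials towards
    this target (cf. Fig. 6; 6-mm bins for visual, 8-mm for somatosensory)."""
    rel = np.asarray(end_xy_mm, dtype=float) - np.asarray(target_xy_mm, dtype=float)
    edges = np.arange(-extent_mm, extent_mm + bin_mm, bin_mm)
    counts, _, _ = np.histogram2d(rel[:, 0], rel[:, 1], bins=[edges, edges])
    return counts / len(rel)  # relative frequency per bin
```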

Experiment 2

The correlation coefficient of eye and hand movement RTs towards somatosensory targets clearly did not change across the three levels of electrostimulation used in this experiment (F(3,1)<1). Correlation coefficients between eye and hand movement RTs towards somatosensory or visual (constant-intensity) targets are plotted in Fig. 7, for each level separately, averaged over the four subjects.

Although the correlation between eye and hand movement RTs was not affected by electrostimulation intensity, the saccadic and hand movement RTs themselves decreased with increasing somatosensory target intensity (t-test between levels 1 and 3: t=2.6, P<0.05 for eye movements; hand movement RTs not significantly different). See Fig. 8 for an overview.

Discussion

Experiment 1

The main findings of experiment 1 can be summarized as follows:

1. Saccadic eye RTs were longer when directed to somatosensory targets than when directed to visual targets, in both single- and dual-task conditions.

2. Hand RTs were longer when directed towards somatosensory target stimuli only in the dual-task condition, while in the single-task condition they were not significantly different.

3. Correlations between saccadic and hand RTs were significantly higher when directed towards somatosensory targets than when directed towards visual targets.

4. Pointing accuracy was higher when subjects had to move their eyes as well (dual task), compared to a single hand movement. This effect, however, was found to be stronger for visual targets than for somatosensory targets.

Many researchers have measured correlation coefficients between saccadic eye and hand movement RTs towards visual targets. Gielen et al. (1984) and Frens and Erkelens (1991) found a weak correlation of about 0.5. In addition, Mather and Fisk (1985) found that the correlation depends on target presentation duration. The correlation coefficients between saccadic and hand movement RTs for visual targets found in this study support the weak correlation, implying that eye and hand movements towards stationary visual targets do not share much preprocessing, but rely on parallel processes early after visual perception.

However, the correlation found for eye-hand movement RTs towards somatosensory targets is significantly higher. As stated in the "Introduction", somatosensory signals that arrive at one of the knees probably need to be represented in a trunk-referenced coordinate frame (because of the obvious anatomical fact that both the head and the arms are attached to the trunk), taking leg position and body posture into account, before arm movements or a head-referenced representation for a saccadic motor command can be constructed. The saccadic and manual motor systems then rely on the same spatial signal transformation processes, causing a relatively high correlation.

High correlation coefficients of eye and hand movement RTs towards somatosensory targets, however, might also be caused by a somatosensory stimulus-background intensity contrast that is low compared with the visual stimulus contrast, causing a delay and variance at the perceptual level that is shared by eye and hand movement preparation. In order to test this hypothesis, a second experiment was conducted. In experiment 2, the level of somatosensory stimulus intensity was varied, and correlation coefficients of eye-hand movement RTs were calculated, in order to see whether they were affected. The values of the eye and hand RTs themselves were also analyzed, in order to see whether perception was affected by the manipulation at all.

Fig. 7 The correlation coefficients between eye and hand movement RTs are plotted for each of the three levels of electrostimulation (black line), averaged over all four subjects. Error bars denote 1 SD. The correlation coefficients for visual targets are also plotted (gray line). Correlation coefficients between saccadic and hand movement RTs are invariant under somatosensory intensity changes

Fig. 8 The RTs are given for eye and hand movements towards visual and somatosensory targets, for each of the three levels of electrostimulation. The manual and saccadic RTs decrease gradually, although the correlation between them is invariant (Fig. 7)

Experiment 2

The main findings were:

1. Correlation between saccadic RTs and hand movement RTs was not influenced by a changing electrical stimulation intensity.

2. The values of saccadic and hand movement RTs decreased when electrical stimulation intensity increased.

Together, this implies that the differences in eye-hand RT correlation between movements directed to somatosensory and visual targets were not caused by differences in processing times at a perceptual level: the increasing electrical stimulation intensity does influence the eye and hand RTs towards somatosensory targets, but not the correlation between them. The variance shared by saccadic and hand movement RTs must therefore have arisen elsewhere in sensorimotor processing.

General discussion

In the present study we were interested in two issues regarding sensorimotor transformations in the control of limb and eye movements. First, the influence of the kind of stimulus information, i.e., visual or somatosensory target stimuli, on the initiation and execution of eye and hand movements was investigated. Second, the interaction between eye and hand movements, and the role of saccades in accurate pointing, was studied; i.e., subjects needed to initiate either eye or hand movements in the single task, or both in a coordinated way in the dual-task condition. Here also, visual and somatosensory targets were used, in order to investigate whether integrative processes are influenced by stimulus modality.

The first issue was addressed in experiment 1, which showed that saccadic RTs were longer when directed to somatosensory targets than when directed to visual targets, in both single- and dual-task conditions. However, for hand movements this pattern was only found in the dual-task condition and not in the single-task condition. These results are discussed in the section "Sensorimotor transformations" below.

The second issue was also addressed in experiment 1, which showed that correlations between saccadic and hand RTs were significantly higher when the movements were directed towards somatosensory targets than when they were directed towards visual targets. Importantly, experiment 2 indicated that this was not caused by differences in processing times at a perceptual level. Furthermore, hand-pointing accuracy was found to be higher when subjects had to move their eyes as well (dual task) compared to a single-task hand movement. However, this effect was more pronounced for movements to visual targets than to somatosensory targets. In the section "Eye-hand coordination" these results are discussed in more detail.

Sensorimotor transformations

A functional sensorimotor pathway is proposed, integrating efferent information about limb, eye and head angles with sensorial signals and constructing appropriate goal-directed hand and eye movements. This graphical representation of the proposed coordinate transformations is able to explain the main findings qualitatively, with a set of transformations and a few assumptions.

Oculomotor transformations. In accordance with the findings of Groh and Sparks (1996a), saccadic latencies were found to be longer for saccades initiated to somatosensory targets than for saccades initiated to visual targets.

The visual information on the retina is represented as a visual motor error: the distance and direction of the visual target projected on the retina, relative to the fovea. The oculomotor system converts this motor error, with use of the position of the eye in the orbit (Mays and Sparks 1980), into an eye movement that minimizes the motor error by producing a saccade to the target. Somatosensory signals, however, deriving from receptors at the knee, are presumably represented spatially in relation to the trunk (as proposed in a study on pointing to kinesthetic targets; Baud-Bovy and Viviani 1998), by taking leg position and body/trunk posture into account. The trunk-referenced target position is probably transformed into a head-referenced oculomotor representation by taking head and eye orientation into account, which then results in the desired eye displacement. The number of sensorimotor transformations¹ needed to prepare a saccade to the retinotopically coded visual targets is therefore probably smaller than for the more complex recoding of leg-referenced somatosensory coordinates into head-referenced motor commands; see also Fig. 9. In a recent study, Groh and Sparks (1996b, 1996c) report neurophysiological findings supporting a convergence of somatosensory and visual sensory signals in the monkey superior colliculus, which is known to generate oculomotor commands corresponding to the size and direction of the desired saccadic eye movement.

¹ The postulated transformations are able to explain the differences in ocular RTs for visual and somatosensory stimulation, under the assumption that more transformation processes require more response preparation time. In general, this is not necessarily true, because neural systems are very well capable of processing information in parallel. Most spatial transformations, however, rely on parameters holding information about the spatial status of the reacting subject, such as eye and head orientation and arm/trunk posture, which is probably available as efferent signals related to muscle tension and limb kinesthetics (Gandevia et al. 1983; Burgess et al. 1982). This supports the assumption that accumulating spatial transformations require accumulating preparation time, because status parameters have to be updated continuously for each transformation.

Manual-motor transformations. Hand latencies in the single-task condition were found to be equal for somatosensory and visual targets. The visual pathway leading to the arm motor system presumably takes eye and head position into account in order to transform the motor error into a trunk (or body)-referenced target position (rT) (Soechting et al. 1991; Flanders et al. 1992). Together with the actual orientation of the arm (limb angles of forearm, upper arm and hand), this signal can be used to prepare the proper muscle contractions in order to move the hand to the desired location. Somatosensory signals coming in at receptors at the knee are represented spatially in relation to the trunk (rT) by taking leg position and trunk posture into account. This trunk-referenced representation can then be used directly by the motor system responsible for arm movements (Baud-Bovy and Viviani 1998).
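
The two pathways of Fig. 9 can be summarized in schematic code (our sketch; one-dimensional angles and plain addition stand in for the real, nonlinear transformations, and the function names are ours):

```python
def retinal_to_head(r_R, eye):
    """r_H: target relative to the head, from retinal vector + eye-in-head."""
    return r_R + eye

def head_to_trunk(r_H, head):
    """r_T: target relative to the trunk, from head-referenced target + head-on-trunk."""
    return r_H + head

def knee_to_trunk(s_knee, leg, posture):
    """r_T for a somatosensory target, from knee receptor signal + leg/posture state."""
    return s_knee + leg + posture

def trunk_to_head(r_T, head):
    """r_H recovered from a trunk-referenced target, needed for a saccade."""
    return r_T - head

# Visual target: the eye can act on r_H directly; only the hand needs the
# extra head_to_trunk step, so eye and hand share just one stage.
# Somatosensory target: knee_to_trunk is shared by both effectors, and the
# eye additionally needs trunk_to_head -- more shared (and more total)
# transformations, matching the longer saccadic RTs and the higher eye-hand
# RT correlations reported above.
```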

Eye-hand coordination

Coupling of movements. Interestingly, only the dual-task manual RTs were delayed to somatosensory target stimuli compared to visual target stimuli. A possible underlying mechanism for this observation might be the general tendency to plan an eye movement before a hand movement. A motivation for this eye-first, hand-second order might be found in the fact that single-task pointing movements towards visual targets are less accurate than dual-task pointing movements (e.g., Fig. 5 of this study; Mather and Fisk 1985; Abrams et al. 1990; Prablanc et al. 1979). If the sensorimotor system indeed enforces eye movements to precede hand movements, it can be expected that manual dual-task movement RTs show the same dependence on stimulus modality as the dual-task eye movement RTs, as is indeed observed in this study (Fig. 3). For somatosensory targets, the initiation of eye movements in the present study is delayed with respect to visually evoked saccades, to the same level as can be observed for the hand movements in the single task. Thus, an interesting consequence of the suggested coupled eye-first, hand-second sequence (see also Bekkering et al. 1994, 1995 for a discussion on this topic) might then be that the initiation of hand movements is also slowed to somatosensory target stimuli, but only when the eyes move. In the scheme in Fig. 9, a saccade might "trigger" the hand movement preparation process, forcing the hand movement to wait for the saccade to be completed and to use retinal target information only after the saccade is performed, in order to obtain foveal information about the target and its surroundings.

In agreement with this notion is the observation that hand movements are more accurate when the eye has also moved, but only to visual targets; for tactile targets there is hardly any accuracy benefit in the dual task (Fig. 6a, b). In other words, the tendency to move the eyes first is general (observed for both somatosensory and visual targets), but only when the target is visual does the manual system profit from the execution of the preceding saccadic eye movement. That is, it is likely that the visual information is acquired at different times in the eye-hand sequence for the eye and the hand: only peripheral target information is used to execute the saccade, whereas foveal visual information is then used to produce the hand movement. The somatosensory target information, however, is invariant under eye movements; no improvement in somatosensory target representation can be expected from making a saccade. Therefore it seems likely that the same information has been used to prepare the eye and the hand movement. This assumption might explain the observation that pointing accuracy to somatosensory targets was not improved in the dual-task condition compared to the single-task pointing condition. It also has implications for the interpretation of the different correlation coefficients between saccadic and arm movement reaction times with respect to stimulus modality; see the end of the next section.

Fig. 9 A general scheme of information flow and spatial sensorimotor transformations, to explain the movement characteristics observed in this study. In the upper graph a scheme is drawn for visual information processing, in the lower graph for somatosensory information processing. The signal rR represents the target position in retinal coordinates (i.e., relative to the fovea); rH represents the target with respect to the head; and rT with respect to the trunk

Shared pathways. In Fig. 9, the pathways from visual sensory input signals to the two motor commands driving the eye and the arm separate after the spatial coding of the target in head-referenced coordinates. With respect to the necessary spatial transformations, eye and hand movements thus do not share much processing. The pathways conducting somatosensory stimulus information to the two motor systems separate only after the transformation of somatosensory signals from leg- to trunk-referenced coordinates, taking leg orientation and trunk posture into account. In short, in the presented scheme the number (and perhaps complexity) of shared spatial transformations between eye and hand is higher for somatosensory targets than for visual targets. This corresponds with the observed low correlation between eye and hand movement RTs for visual target stimuli, and the high correlation of movement RTs for somatosensory target stimuli; i.e., the more transformations the eye and hand systems share, the higher the correlation between the eye and hand RTs will be.

An alternative explanation for the differences in correlation between eye and hand RTs can be derived from the conclusion of the previous section, i.e., that the ocular and manual motor systems use similar somatosensory stimulus information, whereas the oculomotor system might use the visual information that is available before a saccade has been made, and the manual system might use the visual information available after the saccade has been executed. From the experiments in this study it cannot be determined which of these explanations is the more likely.

Acknowledgements This study was supported by grant 1873/1 from the German Science Foundation (DFG) as part of the interdisciplinary project "Sensomotoric Integration". We thank Marcel Brass for preliminary reviewing of the manuscript. We thank Fiorello Banci and Karl-Heinz Honsberg for technical support.

References

Abrams RA, Meyer DE, Kornblum S (1990) Eye-hand coordination: oculomotor control in rapid aimed limb movements. J Exp Psychol Hum Percept Perform 16:248–267

Baud-Bovy G, Viviani P (1998) Pointing to kinesthetic targets in space. J Neurosci 18:1528–1545

Bekkering H, Adam JJ, Kingma H, Huson A, Whiting HTA (1994) Reaction time latencies of eye and hand movements in single- and dual-task conditions. Exp Brain Res 97:471–476

Bekkering H, Adam JJ, Kingma H, Van den Aarssen A, Whiting HTA (1995) Interference between saccadic eye and goal-directed hand movements. Exp Brain Res 106:475–484

Biguer B, Jeannerod M, Prablanc C (1982) The coordination of eye, head, and arm movements during reaching at a single visual target. Exp Brain Res 46:301–304

Brotchie PR, Andersen RA, Snyder LH, Goodman SJ (1995) Head position signals used by parietal neurons to encode location of visual stimuli. Nature 375:232–235

Burgess PR, Wei JY, Clark FJ, Simon J (1982) Signaling of kinesthetic information by peripheral sensory receptors. Annu Rev Neurosci 5:171–187

Flanders M, Helms Tillery SI, Soechting JF (1992) Early stages in a sensorimotor transformation. Behav Brain Sci 15:309–362

Frens MA, Erkelens CJ (1991) Coordination of hand movements and saccades: evidence for a common and a separate pathway. Exp Brain Res 85:682–690

Fuchs AF, Kaneko CRS, Scudder CA (1985) Brainstem control of saccadic eye movements. Annu Rev Neurosci 8:307–337

Gielen CCAM, Van den Heuvel PJM, Van Gisbergen JAM (1984) Coordination of fast eye and arm movements in a tracking task. Exp Brain Res 56:156–161

Groh JM, Sparks DL (1996a) Saccades to somatosensory targets. I. Behavioral characteristics. J Neurophysiol 75:412–427

Groh JM, Sparks DL (1996b) Saccades to somatosensory targets. II. Motor convergence in primate superior colliculus. J Neurophysiol 75:428–438

Groh JM, Sparks DL (1996c) Saccades to somatosensory targets. III. Eye-position-dependent somatosensory activity in primate superior colliculus. J Neurophysiol 75:439–453

Hallet PE, Lightstone AD (1976a) Saccadic eye movements towards stimuli triggered by prior saccades. Vision Res 16:99–106

Hallet PE, Lightstone AD (1976b) Saccadic eye movements to flashed targets. Vision Res 16:107–114

Hubel DH, Wiesel TN (1959) Receptive fields of single neurons in the cat's striate cortex. J Physiol 160:106–154

Mather JA, Fisk JD (1985) Orienting to targets by looking and pointing: parallels and interactions in ocular and manual performance. Q J Exp Psychol [A] 37:315–338

Mays LE, Sparks DL (1980) Saccades are spatially, not retinocentrically, coded. Science 208:1163–1165

Meyer DE, Abrams RA, Kornblum S, Wright CE, Smith JEK (1988) Optimality in human motor performance: ideal control of rapid aimed movements. Psychol Rev 95:340–370

Paillard J (1982) The contribution of peripheral and central vision to visually guided reaching. In: Ingle D, Goodale M, Mansfield R (eds) Analysis of visual behaviour. MIT Press, Cambridge, pp 367–385

Pélisson D, Prablanc C, Goodale MA, Jeannerod M (1986) Visual control of reaching without vision of the limb. Exp Brain Res 62:303–311

Prablanc C, Echallier JF, Komilis E, Jeannerod M (1979) Optimal response of eye and hand motor systems in pointing. I. Spatio-temporal characteristics of eye and hand movements and their relationships when varying the amount of visual information. Biol Cybern 53:113–124

Robinson DA (1972) Eye movements evoked by collicular stimulation in the alert monkey. Vision Res 12:1795–1808

Robinson DA (1973) Models of the saccadic eye movement control system. Kybernetik 14:71–83

Robinson DA (1975) Basic mechanisms of ocular motility and their clinical implications. In: Bach-y-Rita P, Lennerstrand G (eds) Pergamon, Oxford, p 337

Schiller PH, Koerner FJ (1971) Discharge characteristics of single units in superior colliculus of the alert rhesus monkey. J Neurophysiol 34:920–936

Soechting JF, Flanders M, Helms Tillery SI (1991) Transformation from head- to shoulder-centered representation of target direction in arm movements. J Cognit Neurosci 2:32–43

Sparks DL (1989) The neural encoding of the location of targets for saccadic eye movements. J Exp Biol 146:195–207

Zee DS, Optican LM, Cook JD, Robinson DA (1976) Slow saccades in spinocerebellar degeneration. Arch Neurol 33:243–251
