
ABSTRACTS - Posters
Simulated roll and pursuit affect perceived heading differently
Jaap A Beintema (Rotterdam)
Comparison of horizontality and verticality judgements obtained with a visual and an oculomotor paradigm
Anton van Beuzekom (Nijmegen)
PARIETAL CORTEX AND PERCEPTION OF SELF-GENERATED MOVEMENTS
Elena Daprati (Trieste)
Altered perception of space in patients with homonymous paracentral scotoma.
Florence Duret (Geneva)
Untitled
Sonja Gabel (Nijmegen)
Active and passive cues to translation
Laurence Roy Harris (Toronto)
Spatial attention improves visual acuity in non-human and human primates
Alla Ignashchenkova (Tübingen)
Modeling of otolith mechanics
Rudi Jaeger (Tuebingen)
Untitled
John Jewell (Philadelphia)
Vestibular response parameters in macaque posterior parietal cortex neurons
François Klam (Paris)
A simulation-based comparison of the spatial and displacement models of the saccade generator
Ansgar Koene (Utrecht)
Untitled
Bart Krekelberg (Bochum)
Floccular Purkinje-cell responses to transparent optokinetic stimulation in the rabbit.
Anil Lekhram Mathoera (Rotterdam)
Aspects of multimodal integration in artificial systems and neuroscience
Giorgio Metta (Genova)
Visual search for a saccade-pointing sequence versus single saccades
Bas Neggers (Munich)
Patients with neglect show no direction-specific deficits in saccadic eye movements during exploration of space.
Matthias Niemeier (Tuebingen)
The lesion of the nucleus basalis magnocellularis impairs the ability to learn the task requirements but not the spatial navigation.
Francisco Antonio Nieto Escamez (Almeria)
A 6 DoF head tracking device for experiments in active vision
Francesco Panerai (Paris)
Responses to continuously changing optic flow in area MST.
Monica Paolini (Bochum)
3D structure from motion: an fMRI study.
Anne-Lise Paradis (Orsay)
Coordination of eye and hand movements during pointing to a target
Mirjam Pijnappels (Maastricht)
Ipsilesional saccade biases from lesions of the human inferior parietal lobule
Tony Ro (London)
A comparison of the effects of hip and pelvic control on the transport phase of reaching in infants 5 - 6 months of age.
Louise Rönnqvist (Umeå)
Saccadic and arm reach responses to visual double simultaneous stimulation follow a head-centered rather than a trunk-centered reference frame in the monkey
Hansjörg Scherberger (Pasadena)
Binaural-, Ipsilateral-Only-, Contralateral-Only-, Up To One Hz-, Up To Ten Hz- and Sharp Tuning Neurons Integrating Vestibular Information in Caudal Nucleus Fastigii
Hans-Georg Schlosser (Berlin)
Dynamic Properties of the Human Ocular Torsion Induced by AC Galvanic Vestibular Stimulation
Erich Schneider (Muenchen)
Limits in measuring torsional eye-position using VOG-devices
Kai Schreiber (Tübingen)
Control of the upper limb perturbation in a catching task
Patrice Senot (Paris)
Gaze and body orientation in the dark
Isabelle Siegler (Paris)
Short-latency ocular following and its associated neuronal activity in medial superior temporal area: dependence on horizontal disparity.
Aya Takemura (Tsukuba)
Mapping the oculomotor system: the power of transneuronal tracing
Gabriella Ugolini (Gif-sur-Yvette)
Oculomotor strategies and orientation in space
Isabelle Viaud-Delmon (Paris)
An evaluation of underlying hypotheses in the perception of surface inclination from static monocular texture cues
Mark Wexler (Paris)

Jaap A Beintema (Rotterdam)

Simulated roll and pursuit affect perceived heading differently

J.A. Beintema, A.V. van den Berg Erasmus Universiteit Rotterdam, The Netherlands

The retinal flow that arises during observer translation is changed by eye, head or body rotation. If the brain does not compensate for this change, the observer might wrongly perceive his direction of heading. Indeed, shifts of perceived heading occur if the eye at rest is presented with rotational flow corresponding to a horizontal pursuit movement. Moreover, the shift is found to increase when the depth range in the scene is reduced. We investigated whether the amount of compensation for rotational flow and the effect of depth range are the same for simulated roll about the line of sight.
We simulated rotation and translation (2 m/s) through a scene of dots (average distance 7.5 m) for two depth ranges (0.5 and 10 m). The rotational flow was consistent with horizontal pursuit (freely moving fixation point), head roll, or the combination thereof. The simulated heading directions were always at a 12 deg angle to the line of sight, but otherwise randomly oriented. Subjects indicated their perceived heading at the end of the trial (1 s) with a 2D pointer. Without compensation, horizontal pursuit (2 deg/s) would cause a 7.5 deg horizontal shift, while line-of-sight rotation (8 deg/s) would cause a 6 deg torsional shift (in visual angle, along the direction perpendicular to the heading direction and the line of sight).
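The predicted shifts quoted above can be reproduced with a small worked computation. This is our reconstruction from the standard flow geometry (rotational flow of magnitude omega at mean dot depth d versus translational flow of speed T), not a formula stated by the authors; the sin(eccentricity) factor for roll is likewise our assumption.

```python
import math

def predicted_shift_deg(omega_deg_s, speed_m_s, depth_m, ecc_deg=None):
    """Predicted heading shift (deg) if rotational flow is not compensated.

    The shift angle is atan(omega * depth / speed) for dots at the mean
    depth; for roll about the line of sight the relevant flow component is
    assumed to scale with the sine of the heading eccentricity.
    """
    omega = math.radians(omega_deg_s)
    ratio = omega * depth_m / speed_m_s
    if ecc_deg is not None:          # roll about the line of sight
        ratio *= math.sin(math.radians(ecc_deg))
    return math.degrees(math.atan(ratio))

# Horizontal pursuit: 2 deg/s, 2 m/s, mean depth 7.5 m
print(round(predicted_shift_deg(2, 2, 7.5), 1))   # ~7.5 deg, as quoted
# Roll: 8 deg/s, heading at 12 deg eccentricity
print(round(predicted_shift_deg(8, 2, 7.5, ecc_deg=12), 1))  # ~6 deg, as quoted
```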
In the horizontal pursuit condition, we found horizontal shifts as large as 7.5 deg, and a significant effect of depth. In contrast, during line-of-sight rotation, we found only small torsional shifts (<3 deg) and no significant effect of depth. Furthermore, in the combined rotation, torsional flow did not affect the amount of horizontal compensation, nor vice versa.
We conclude that a visual compensation mechanism also exists for torsional flow. Unlike the visual compensation for horizontal eye movements, however, the compensation mechanism for torsional flow does not require depth differences in the scene.


Anton van Beuzekom (Nijmegen)

Comparison of horizontality and verticality judgements obtained with a visual and an oculomotor paradigm

A.D. van Beuzekom and J.A.M. van Gisbergen

Department of Medical Physics and Biophysics, University of Nijmegen, The Netherlands

The constancy of orientation perception relative to gravity in tilted subjects has been studied extensively with the luminous-line method, concentrating mainly on the quality of verticality judgements. Remarkably, use of an oculomotor pointing paradigm yielded clear differences in performance depending on whether the task required verticality or horizontality judgements (Pettorossi et al. 1998; Wood et al. 1998). If substantiated, these nonorthogonal responses are of interest since they may reflect a distorted representation of external space. This interpretation would be strengthened if similar nonorthogonalities could be shown to occur in visual-line experiments. This has recently been investigated by Betts and Curthoys (1998), who compared subjective vertical and subjective horizontal settings in tilted subjects. While the phenomenon of nonorthogonal horizontality and verticality judgements was also apparent with the visual-line paradigm, its manifestation was less robust and appeared to be qualitatively different from that in the oculomotor studies. Apart from discrepancies in the horizontal results from both oculomotor studies, a major difficulty in interpreting these various results is that the experimental conditions in all three studies were different.
Our major goal was to assess to what extent orientation constancy performance is really task-dependent (horizontal versus vertical) and to find out whether the results are similar for the two paradigms (visual versus motor). To isolate these factors, we used both paradigms and both tasks in the same set of subjects (n = 6) under otherwise identical experimental conditions. Rather than concentrating on a limited set of tilt angles (up to 45 or 90 deg), as in the above-mentioned studies, we investigated these issues for the entire range of tilt angles (from -180 to 180 deg, in steps of 10 deg).
In order to compare the results from the different tasks and paradigms, and to characterise intersubject differences, we applied a principal components analysis on the visual-line and oculomotor data. Deviations of individual responses from the overall mean could be described quite well (mean R^2 = 0.86) using only the first two principal components.
The first principal component was negligible for tilt angles up to 90 deg and thus accounted for the variability in the strength of the A-effect for larger tilts. By contrast, the second principal component captured the variability in small tilt behaviour, which ranged from small A-effects to modest E-effects. The main results can be summarised as follows.
The contribution of the first component varied among subjects but was largely independent of task and paradigm. By contrast, the second component showed a very clear task-related behaviour. Subjects made errors of the A-type for modest tilts in the horizontal task. In the vertical task these A-effects were smaller or even reversed to take the form of E-effects. These task-dependent differences were most manifest in the oculomotor experiments and were qualitatively similar, in both paradigms, to those obtained by Pettorossi et al. (1998).
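A two-component reconstruction of this kind can be sketched with synthetic data. The tilt grid below matches the methods (-180 to 180 deg in 10-deg steps, six subjects); the sinusoidal response shapes, the weights, and the noise level are purely illustrative assumptions, not the authors' data.

```python
import numpy as np

# Synthetic "deviation from the overall mean" curves: one row per subject,
# one column per tilt angle, built from two underlying response shapes.
rng = np.random.default_rng(0)
tilts = np.arange(-180, 181, 10)                       # 37 tilt angles
basis = np.vstack([np.sin(np.radians(tilts)),
                   np.cos(np.radians(tilts))])         # two latent shapes
weights = rng.normal(size=(6, 2))                      # 6 subjects
data = weights @ basis + 0.1 * rng.normal(size=(6, tilts.size))

# Principal components via SVD of the (already mean-free) data matrix
u, s, vt = np.linalg.svd(data, full_matrices=False)
recon = u[:, :2] * s[:2] @ vt[:2]                      # first two components
r2 = 1 - np.sum((data - recon) ** 2) / np.sum(data ** 2)
print(f"variance explained by 2 PCs: {r2:.2f}")
```

With two latent shapes plus noise, two principal components recover most of the variance, analogous to the mean R^2 = 0.86 reported above.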

Betts and Curthoys (1998) Vision Research 38: 1989-1999. Pettorossi et al. (1998) Experimental Brain Research 121: 46-50. Wood et al. (1998) Experimental Brain Research 121: 51-58.


Elena Daprati (Trieste)

PARIETAL CORTEX AND PERCEPTION OF SELF-GENERATED MOVEMENTS

Elena Daprati1§, Angela Sirigu2, Nicolas Franck2, Pascale Pradat-Diehl3, Marc Jeannerod2

1Università di Parma, Parma, I, 2Institut des Sciences Cognitives, Lyon, F,
3Hôpital de la Salpetrière, Paris, F
§SISSA-ISAS, Trieste, I

In the present study we required three apraxic patients with left parietal lesions to execute finger movements of increasing complexity with either hand. We used a device which allowed us to present on a screen either the patient's own hand (subject condition) or the examiner's hand performing either an identical (congruent condition) or a different movement (incongruent condition). In each trial, patients were required to decide whether the hand shown on the screen was their own or not.
Apraxic patients failed to execute movements correctly when gestures required complex hand-finger configurations. Incomplete or clumsy movements were frequent for both the right (18%) and the left hand (22%). Patients produced a number of correct ownership judgements comparable to that of six normal controls when the image presented on the screen was that of their own hand (subject condition), or that of the examiner's hand performing a different movement (incongruent condition). On the contrary, when required to recognise the examiner's hand pantomiming their own movement (congruent condition), patients were significantly more impaired than controls (mean correct responses: patients 3.0, controls 12.7; U=0.00001, Z=-2.012, p<0.04). When groups were compared according to gesture type, the number of correct responses was significantly lower for patients than for controls only for complex gestures (correct responses: patients 0.5, controls 6.0; U=0.00001, Z=-2.024, p<0.04). Indeed, even when gestures were incorrectly executed (80% of trials), patients failed to recognise the alien hand, and claimed that the neat movement presented on the screen was their own.
Our data favour the hypothesis that apraxia is a disorder which affects specifically the more representational aspects of skilled hand movements and suggest that parietal cortex plays an important role in retrieving and updating hand motor representations.


Florence Duret (Geneva)

Altered perception of space in patients with homonymous paracentral scotoma.

Florence Duret(1) Christophe Mermoud(1), Philippe Vindras(2), Paolo Viviani(2), Theodor Landis(3), Avinoam B Safran(1)

From the Neuro-ophthalmology Unit, Ophthalmology Clinic, Geneva University Hospitals, Geneva, Switzerland (1) ; the Department of Psychobiology, Faculty of Psychology, Geneva University, Geneva, Switzerland (2) ; and from the Neuronal Clinic, Geneva University Hospitals, Geneva, Switzerland (3).

Purpose. To investigate the visual perception of space in patients with homonymous paracentral scotomas.

Methods. The study included two patients (aged 51 and 69 years) with right homonymous paracentral scotoma, and ten normal subjects age-matched to the younger patient. Each tested individual was asked to fixate a cross displayed in the center of a computer screen while a series of spots 0.5 deg in diameter were successively presented in pairs, one spot being located to the left of the fixation cross and the other to the right. In each pair, one spot had a fixed position and served as reference, while the other spot could be moved horizontally and vertically by the subject, using a joystick. The task required was to position the mobile spot symmetrically to the reference spot, relative to the fixation cross. A total of 26 reference spots for each subject were tested. They were located according to a regular grid with a 4 deg spacing covering an area of 20 deg horizontally by 16 deg vertically. The final location of each mobile spot was recorded.

Results. Using tolerance ellipses and binomial-law analysis, it appeared that both patients' distance estimates differed from those of normal subjects. Linear regression analysis showed that both patients underestimated distances between the central fixation point and tested locations in the right hemifield, which contained the scotoma, in comparison with values measured in the opposite hemifield. These distortions in spatial perception reflected a tendency to shift the image towards the scotoma, a tendency which increased with the distance from the scotoma.

Discussion. Distortion of spatial perception may be related to reorganization in the visual cortex, including a change in the size and location of the receptive fields corresponding to the cortical regions onto which the scotoma and surrounding areas are projected.


Sonja Gabel (Nijmegen)

In monkey cortex, several areas are involved in computing egomotion through a dynamic environment, using both retinal and extraretinal inputs. One of these is the parietal area VIP. It is sensitive to optic flow cues (Schaafsma and Duysens, J. Neurophysiol., 1996), which enable one to estimate the speed and direction of heading, and it is expected to be able to account for flow components in the flow pattern that are due to eye or head movements.
It has recently been shown (by Bradley et al, Science, 1996) that area MST, which provides input to area VIP, is able to account for eye rotations. We are now investigating if similar effects can be shown in area VIP.
An awake macaque monkey was presented with two types of stimuli: 1) uniformly translating optic flow patterns while the animal was fixating a stationary target; and 2) a stationary dot pattern while the animal was tracking a moving target. At the retinal level, these two types of stimulus are nearly identical. Preliminary results show, however, that the neuronal responses differ, a finding which makes area VIP more likely to contribute to the calculation of egomotion.


Laurence Roy Harris (Toronto)

Active and passive cues to translation

Laurence R. Harris, Fara P. Redlick, Michael Jenkin

Depts Biology, Psychology and Computer Science, York University, Toronto, Ontario.

Self-motion is generally accompanied by many cues that can potentially be used to monitor performance and update one's position and orientation in space. These cues include vision, vestibular signals, proprioception and the efference copy generated by an active movement. This study examines the relative contributions of vision and of the proprioception and efference copy associated with active leg motion (without vestibular cues) to the perception of translation. We compare the perceived distance of travel created by active and passive self-motion simulated on a stationary exercise bicycle.

Subjects sat on a stationary bicycle on which they were trained to pedal at a constant acceleration of 0.05 m/s/s or 0.1 m/s/s. They wore a virtual reality helmet that displayed a simulation of a corridor (w=2, h=2.5, l=50 m). Forward motion along the virtual corridor was driven by rotation of the bicycle's wheels. Subjects were shown a virtual target at various distances from 2 to 20 m. The target then disappeared and subjects either (i) moved only visually along the corridor at the trained acceleration: "vision-only, passive condition"; (ii) pedalled actively at the trained acceleration along the virtual visual corridor: "vision + proprioception, active condition"; or (iii) pedalled actively at the trained acceleration in the dark: "proprioception-only, active condition". Their task was to push a button when they perceived that they had reached the position previously occupied by the target.

Subjects needed to travel significantly further to match a given visual target distance when actively pedalling compared to the passive, vision-only condition. This was found for pedalling in the light or dark. Visual motion was associated with a larger perceptual amplitude of displacement when it was experienced without pedalling than when it was experienced with pedalling. Subjects needed to cycle further, either in the light or dark, to feel they had travelled through the same distance as that percept generated by visual motion alone.

A model of how visual, proprioceptive and efference copy cues to translation might interact to explain these data will be presented. Supported by: the Natural Science and Engineering Research Council (NSERC) of Canada and the Centre for Research in Earth and Space Technology (CRESTech) of Ontario.


Alla Ignashchenkova (Tübingen)

Spatial attention improves visual acuity in non-human and human primates

A. Ignashchenkova, P. Thier and T. Haarmeier. Sektion fuer Visuelle Sensomotorik, Neurologische Universitätsklinik Tübingen, 72076 Tübingen, Germany

One of the open questions in the study of visual attention is whether spatial attention is able to improve performance on visual discrimination tasks (for review, see Pashler, "The Psychology of Attention", The MIT Press, 1998). In order to contribute to this debate, we measured visual acuity using a paradigm requiring the discrimination of the two possible orientations of a conventional Landolt "C" optotype. To be able to later address the neuronal mechanisms underlying the attentional modification of discrimination performance in a monkey model, we performed a comparative psychophysical study of humans and monkeys.
Subjects had to indicate the orientation of a Landolt "C", whose size was varied according to a PEST staircase procedure. The "C" was presented at locations along the horizontal and vertical axes with eccentricities of 3, 9 or 15 degrees of visual angle. We compared acuity thresholds, defined as the minimum size of the "C"-gap yielding 75% correct responses. Three types of trials, differing with respect to whether a spatial cue was absent, present and valid, or present but invalid, were presented randomly interleaved. The spatial cue appeared 500 ms after the onset of maintained fixation of a small spot in the center of the display and stayed on for 150 ms; after a gap of 100 ms, the "C" was presented for 150 ms in one of two mirror-symmetric positions on one of the cardinal axes (choice of axes and eccentricities blocked), while the subject maintained fixation. Monkeys indicated their perceptual decision by eye movements, well after the presentation of the "C", and human subjects used response buttons.
As expected, we found that acuity thresholds increased with increasing eccentricity. We also observed a strong influence of perceptual learning, reflected by the fact that acuity thresholds kept declining asymptotically over about 5 weeks of daily measurements. However, even after having settled to a more or less constant threshold, spatial cueing remained able to modify acuity thresholds.
Shifting the monkey's attention to the correct location by spatial cueing improved acuity thresholds on average by 20%, independent of eccentricity. Conversely, shifting spatial attention to the wrong location raised thresholds by a comparable amount, largely independent of eccentricity. While absolute acuity thresholds for the rhesus monkey studied most extensively so far were significantly higher than those of human subjects tested on the same task, the modulatory influence of spatial cueing on performance seems to be the same in the two primate species explored. We conclude that spatial attention is able to enhance spatial visual resolution in monkeys and man, based on a mechanism which might be independent of the mechanism(s) underlying perceptual learning.


Rudi Jaeger (Tuebingen)

Modeling of otolith mechanics

Jäger R, Haslwanter T, Fetter M

To gain a better understanding of the information that is provided by the otoliths for central processing, we want to perform simulations of the 3-dimensional static and dynamic displacement properties of the otoliths, using numerical methods. Several issues will be addressed. The first aim is to identify the effect of the curved shape of the otoliths on the signal elicited by accelerations along different directions. The second aim is to simulate the effects of hypogravity and hypergravity, as well as different static and dynamic conditions, on the displacement of the otoliths. The third goal is to investigate the pattern and dynamics of hair-cell activity at different areas of the otolith epithelium under different static and movement paradigms. We further plan to integrate the results of the simulations into an existing model of otolith-canal interaction developed by D. Merfeld. This will give us a better understanding of the perception of orientation on the ground and under microgravity, and of eye movements elicited by complex stimuli.
Verification of the theoretical predictions will be based on existing experimental data, as well as on new experiments. In a first set of experiments we will investigate the orientation of the subjective vertical in three dimensions during static whole-body pitch- and combined pitch-roll displacements. This study will help to clarify the physiological mechanisms of vestibular gravity receptors. It will hopefully give new insight into otolith activity during the initial period in space, as well as during early post-flight stages.


John Jewell (Philadelphia)

Most current computational models of spatial orientation are based on visual processing mechanisms and have a difficult time accounting for the possible influences of other sensory information. In a series of three studies, we examined the contribution of vision and somatosensory input to judgments of perceived body tilt. In the first study, participants estimated the angle of body tilt when tipped backward, both with vision present and with vision absent (blindfolded). In both conditions, participants overestimated body tilt, with no differences between eyes-open and eyes-closed estimates. Perceived body tilt also varied linearly with the compactness of body posture. Perceived body tilt was overestimated most when participants were tilted with the body retracted, an intermediate amount when tilted while seated, and least, although still inaccurately, when tilted while standing. These findings support the idea that the representation of perceived body tilt is principally influenced by somatosensory information. In the second study, participants were tilted in multiple directions starting from both vertical and horizontal orientations. Again, perceived body tilt was overestimated in all directions, and the presence of vision had little effect. Participants also exhibited greater sensitivity to deviation from upright when tilted sideways and obliquely, the directions in which the body has a decreased range of movement. The pattern of perceived tilt found for vertical versus horizontal starting positions supports the idea that participants principally judge body tilt using an egocentric coordinate system rather than a world-based one. In the final study, participants were tilted backwards while simultaneously viewing dynamic "virtual tilt" that was either consistent or inconsistent with body movement.
In this "virtual environment", tilt judgments did not differ between the consistent and inconsistent conditions, supporting the idea that perceived body tilt is based largely on somatosensory information. Our findings support the idea that perceived body tilt is principally judged based on somatosensory cues. We propose that these dramatic distortions in the perception of body tilt have functional significance and should be modeled as part of a general representation of spatial orientation that can incorporate multisensory information.


François Klam (Paris)

Vestibular response parameters in macaque posterior parietal cortex neurons

F. Klam and W. Graf. CNRS - Collège de France, 75270 Paris Cedex 06, France.

The presence of vestibular signals in cortical areas involved in the processing of movement detection seems to be more widespread than anticipated earlier. We had previously described vestibular signals in the ventral intraparietal area (VIP). The investigated neurons also carried direction-selective visual and somatosensory signals that, however, were non-complementary with reference to the vestibular on-direction, i.e., the preferred direction of a visual stimulus was the same as the vestibular on-direction, although the visual world typically moves on the retina in the direction opposite to a given head movement. We have now further analyzed the vestibular responses of neurons in and around area VIP in the posterior parietal cortex of macaque monkeys.
Single-cell responses were recorded extracellularly in one awake macaque monkey (Macaca fascicularis) that was trained to perform a fixation task. Visual stimuli were backprojected onto a translucent tangent screen. Vestibular stimuli were delivered as random vertical-axis rotations in darkness or in light. An LED was fixed to the turntable to test neuronal responses under VOR suppression conditions. Neuronal discharges were plotted as functions of turntable angular position and velocity, and of velocity and acceleration. A total of 52 neurons were recorded, of which 37 underwent full testing and analysis.
Several response types were found with respect to vestibular parameters. A small number of neurons coded only one signal, either acceleration or velocity. The vast majority of cells carried two or more parameters, including a possible position signal (head in space). A given cell coding position and velocity in light had the same response under VOR suppression in darkness. In some other cases, the position response was lost in darkness. Vestibular signals in the posterior parietal cortex may thus contain information about head position in space. Such signals may contribute to the processing of constant spatial coordinates during movement via population coding. A final answer to these questions will have to come from recordings in head-free animals.


Ansgar Koene (Utrecht)

A simulation-based comparison of the spatial and displacement models of the saccade generator

Ansgar R. Koene and Casper J. Erkelens. Helmholtz Institute, Utrecht University, The Netherlands

As a first step in developing a comprehensive model of binocular eye motion control, we investigated the long-standing question of whether the saccade generator [SG] is best described by a spatial model or a displacement model. We first investigated the effects on the two models of lesioning the integrator that generates the tonic signal. We also expanded both models to three dimensions to see how well they could accommodate Listing's law. Since we felt that neither of these studies conclusively determined which model is physiologically correct, we then investigated what kind of lesion or stimulation experiment would be necessary to determine the nature of the feedback signal. This led us to propose a new experiment in which the saccade target is altered during the saccade. In all of the investigations our models of the SG were based on the work of Crawford and Guitton [1] and of Gancarz and Grossberg [2].
In the case of lesioning the neural integrator, our simulations indicate that in the spatial model the elastic restoring forces of the eye plant, which are no longer compensated by the tonic signal, will cause the saccade to fall short of its target. However, since the phasic signal is not decreased by efference feedback, the eye will remain fixated at the maximum achieved excitation until the input to the SG is terminated. For the displacement model, removal of the neural integrator generating the tonic signal does not affect the feedback that decreases the phasic signal. As a result, the peak saccade amplitude will be slightly less than that achieved by the spatial model, and the eye will immediately begin to drift back to the primary position. The data in the literature concerning this experiment, however, are not sufficiently detailed for us to assign them to one of the two models.
By expanding the models to three dimensions, we investigated whether the requirements of Listing's law might help us determine which model more closely describes the physiological reality. In order to abide by Listing's law, the SG must use information concerning the orientation of the eye at the beginning of the saccade. While Crawford and Guitton assumed that this information is not available in the displacement model, we believe that, since the initiation of a saccade requires visual information about the target location, the SG may be provided with information on the current orientation of the eye via similar channels. Thus, without introducing a feedback signal from the tonic neurons, the displacement model can still be provided with information concerning the eye orientation at the start of the saccade. We therefore conclude that both the spatial and displacement models can successfully incorporate Listing's law.
Since the previous two experiments proved to be inconclusive, we now propose a new experiment aimed at determining the nature of the feedback signal. By changing the target input to the SG during the execution of a saccade, we should be able to determine whether the feedback signal used in the SG carries spatial or displacement information. Since the spatial model uses the actual eye orientation for both the input and the feedback, it manages to move the eye to the second target even though the initial saccade has not been completed. For the displacement model, where the feedback signal communicates the eye displacement from the start of the saccade, the outcome depends on the way in which the RNI (resettable neural integrator) is reset. The displacement model will reach the desired target only if the act of switching the saccade targets also resets the RNI.
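The predicted difference can be illustrated with a toy discrete-time simulation. This is our own drastic simplification of the two feedback schemes, not the authors' implementation; the gain, time step, target positions and switch time are arbitrary illustrative choices.

```python
def simulate(model, reset_rni=False):
    """First-order sketch of a saccade to 10 deg with a mid-flight target
    jump to 15 deg. 'spatial': feedback is the actual eye position;
    'displacement': feedback is the displacement integrated since saccade
    onset (the RNI)."""
    eye, rni = 0.0, 0.0
    target = 10.0
    desired = target - eye          # desired displacement at saccade onset
    gain, dt = 5.0, 0.01
    for step in range(2000):
        if step == 20:              # target jumps mid-saccade
            target = 15.0
            desired = target - eye  # new retinal error drives the SG
            if reset_rni:
                rni = 0.0
        error = (target - eye) if model == "spatial" else (desired - rni)
        v = gain * error            # phasic (velocity) command
        eye += v * dt
        rni += v * dt               # integrator tracks displacement
    return round(eye, 2)

print(simulate("spatial"))                       # reaches 15.0
print(simulate("displacement"))                  # undershoots the new target
print(simulate("displacement", reset_rni=True))  # reaches 15.0 after RNI reset
```

The spatial model reaches the second target automatically, while the displacement model does so only when the target switch also resets the RNI, mirroring the prediction stated above.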
While most of the experimental objections to the spatial model can be refuted by adaptations of the original model proposed by Robinson in the mid-1970s, these adaptations tend to introduce a great deal of complexity. Comparing these added complexities with the elegance of the displacement model as presented by Gancarz and Grossberg, the latter model appears to be the more satisfying. Consequently, we will base our further work on the three-dimensional expansion of this model.
[1] Crawford JD, Guitton D. Visual-motor transformations required for accurate and kinematically correct saccades. J. Neurophysiol. 78: 1447-1467, 1997.
[2] Gancarz G, Grossberg S. A neural model of the saccade generator in the reticular formation. Neural Networks 11: 1159-1174, 1998.


Bart Krekelberg (Bochum)

Responses of MST cells to optic flow stimulation are complex in many ways. First, there is considerable structure in the temporal evolution of the spike rates. Secondly, the spatial composition of receptive fields is far from trivial. That is, given the response to a whole field optic flow stimulus, it is not trivial to predict the response to stimulation of only a part of the receptive field. The goal of our project is to determine how much of this spatiotemporal complexity is related to the computation of behaviourally significant quantities that go beyond the direction of heading in three dimensional space. To determine this, we first need to extract those parts of the responses that can be explained as a consequence of the computation of the direction of heading together with the complexity already present in the stimulus.

Using data from macaque MST cells, we develop a model that aims to capture the complexity in the spike rates with minimal assumptions about the network of which a particular MST cell forms a part. An important assumption entering this model is the connectivity between MT and MST cells. Different approaches are clearly possible, of which a simple linear combination of MT cells to form an MST receptive field is only one; we will discuss the merits of this and of more complicated schemes. As the primary model contains no feedback dynamics, the temporal complexity of the model's spike rates is solely due to the inherent spatiotemporal complexity of optic flow stimuli. This allows us to ascertain how much of the complexity commonly seen in MST cells has to be ascribed to non-stimulus-driven factors.
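As a hypothetical illustration of the simplest connectivity scheme mentioned above (a linear combination of MT afferents, not the authors' actual model), the following builds an expansion-selective MST unit from rectified cosine-tuned MT units; all weights and tunings are made up:

```python
import math

def mt_response(pref_dir, stim_dir):
    """Rectified cosine tuning: maximal for the preferred direction,
    zero for the opposite direction."""
    return max(0.0, math.cos(pref_dir - stim_dir))

def mst_response(weights, flow_field):
    """MST rate as a weighted linear sum of MT afferents.
    weights:    {(x, y, pref_dir): w}  hypothetical connectivity
    flow_field: {(x, y): local motion direction in radians}"""
    return sum(w * mt_response(d, flow_field[(x, y)])
               for (x, y, d), w in weights.items())

# Expansion-selective unit: each afferent prefers motion pointing
# away from the receptive-field centre.
locs = [(x, y) for x in (-1, 1) for y in (-1, 1)]
weights = {(x, y, math.atan2(y, x)): 1.0 for (x, y) in locs}
expansion   = {(x, y): math.atan2(y, x) for (x, y) in locs}
contraction = {(x, y): math.atan2(-y, -x) for (x, y) in locs}
```

Because such a model is purely feedforward and linear in its afferents, any temporal structure in its output can only come from temporal structure in the flow field itself, which is the property exploited in the analysis described above.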

After subtracting the stimulus-driven complexity from the responses of MST cells, a certain amount of temporal structure in the spike rates remains. We see this structure as the signature of an information-processing task being performed by the cell. The next challenge is to relate this residual structure to behaviourally relevant factors and to capture it by adding complexity to the model in the form of non-linear dynamics.


Anil Lekhram Mathoera (Rotterdam)

Floccular Purkinje-cell responses to transparent optokinetic stimulation in the rabbit.

A.L. Mathoera, M.A. Frens, J. van der Steen. Dept. of Physiology, Erasmus University, the Netherlands.

We recorded floccular Purkinje cell activity together with eye position in awake Dutch belted rabbits. Transparent visual stimuli were used to study simple spike (SS) and complex spike (CS) properties and their interactions. The paradigm consisted of 12 conditions. In one condition the visual world was oscillated about the yaw axis (0.1 Hz, amplitude = 2.5°), one was a static visual scene, and in ten conditions two simultaneous transparent visual patterns were presented: one static and one moving sinusoidally about the yaw axis, at ten different Normalized Luminances (NL). NL was defined as: (luminance moving - luminance static) / (luminance moving + luminance static). In preceding experiments [1] the NL dependency of the optokinetic reflex (OKR) gain was demonstrated: the OKR gain increased sigmoidally from zero to maximum as the NL increased.
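For concreteness, the NL definition and the reported sigmoidal gain dependency can be written down directly; the logistic slope and midpoint below are placeholders, not fitted values:

```python
import math

def normalized_luminance(lum_moving, lum_static):
    """NL = (Lm - Ls) / (Lm + Ls): +1 when only the moving pattern is
    visible, -1 when only the static one is, 0 at equal luminance."""
    return (lum_moving - lum_static) / (lum_moving + lum_static)

def okr_gain(nl, max_gain=1.0, slope=5.0, midpoint=0.0):
    """Sigmoidal NL dependency of OKR gain; slope and midpoint are
    hypothetical placeholders, not the values measured in [1]."""
    return max_gain / (1.0 + math.exp(-slope * (nl - midpoint)))
```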
Vertical axis (VA) Purkinje cells were identified by determining the optimal response axis of the complex spike response with a hand-held target, and on the basis of the typical response [2] to optokinetic stimulation about the yaw axis. Preliminary analysis of six VA cells was performed off-line using level discrimination for SS and CS detection.
Instantaneous SS and CS firing frequencies as well as eye position were averaged over ten cycles of stimulation, from which phase, modulation depth and eye movement amplitude were determined by a sine-fit procedure for each stimulus condition.
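The sine-fit step can be sketched as follows: over a whole number of uniformly sampled cycles, the least-squares fit of a sinusoid at the known stimulus frequency reduces to computing the first Fourier coefficients (a sketch of the method, not the authors' analysis code):

```python
import math

def sine_fit(samples, cycles=1):
    """Fit r(k) = mean + a*sin(theta_k) + b*cos(theta_k), with
    theta_k = 2*pi*cycles*k/n, to uniformly sampled data.
    Returns (mean, modulation depth, phase in radians)."""
    n = len(samples)
    mean = sum(samples) / n
    a = 2.0 / n * sum(s * math.sin(2 * math.pi * cycles * k / n)
                      for k, s in enumerate(samples))
    b = 2.0 / n * sum(s * math.cos(2 * math.pi * cycles * k / n)
                      for k, s in enumerate(samples))
    return mean, math.hypot(a, b), math.atan2(b, a)
```

Modulation depth is the amplitude of the fitted sinusoid and phase is its offset relative to the stimulus cycle, which is all that is needed to compare SS and CS modulation with the eye movement.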
The results showed that the modulation of the SS frequency increased monotonically with the amplitude of the eye movement, showing a stronger relation with motor than with sensory parameters. Furthermore, the SS modulation turned out to be phase-locked to the eye movements under all conditions. The CS modulation, however, showed a phase shift as NL decreased, resulting in a loss of CS-SS reciprocity in 5 out of 6 cells.
These results suggest that the SS signal of Purkinje cells in the rabbit predominantly carries motor information. The reciprocity between CS and SS that is observed during pure optokinetic stimulation breaks down under transparent conditions, and is therefore probably not a result of CS-SS interactions.

[1] Mathoera A.L., Frens M.A., van der Steen J. (1997). Vestibular influence on transparent motion responses in the rabbit. Soc. Neurosci. Abstr. 23: 471.
[2] Graf W., Simpson J.I., Leonard C.S. (1988). Spatial organization of visual messages of the rabbit cerebellar flocculus. II. Complex and simple spike responses of Purkinje cells. J. Neurophysiol. 60: 2091-2121.


Giorgio Metta (Genova)

Aspects of multimodal integration in artificial systems and neuroscience

G. Metta, R. Manzotti, G. Sandini, X.M. Sauvan, LIRA-Lab - DIST - University of Genova, Italy; F. Panerai, LPPA - Collège de France, Paris, France

We shall discuss whether the implementation of plausible biological models on artificial systems is a suitable tool for gaining deeper knowledge of brain functions, especially those related to multimodal sensory-motor integration. There are several reasons why we believe this could indeed be the case. First, the present data from neuroscience studies are still patchy and, in spite of recent advances (e.g. fMRI, PET), need to be coherently integrated. Second, besides the possibility of building more efficient robots, experiments with real artificial agents can be useful for testing hypotheses and designing further experiments (Lambrinos et al., 1997). Third, as we will show, some of the problems to be solved are common to both humans and robots.
In particular, in order to integrate results gathered in different research fields, we devised experiments based on a developmental paradigm, which might provide a more unitary perspective on otherwise fragmentary data. All computational modules, both processing and control, are combined into a real robot system. It is worth stressing that successful integration of these modules is not just a matter of combining their outputs in a linear fashion. This is particularly true if the system has learning capabilities and adapts its behavior on the basis of repeated interactions with the environment.
To demonstrate the feasibility of the developmental approach, we shall present results of experiments on multimodal sensory-motor integration performed on our robotic setup. The artificial system is composed of a multi-joint arm (6 degrees of freedom, DoF) and a binocular head (4 DoF) equipped with two retina-like color cameras and an artificial vestibular system. The two space-variant color cameras have sensing elements arranged to resemble the distribution of photoreceptors in the retina (Questa and Sandini, 1996). Visual processing based on these retina-like sensors provides information about binocular disparity, optic flow and the position of objects in the environment; their effective use, however, requires the ability to dynamically modify gaze direction. The motion of the cameras is accomplished by the head, which is composed of a neck, a common tilt and independent vergence. Motor control is based on the so-called force-field approach, proposed in the biological literature as a model of spinal cord networks (Mussa-Ivaldi et al., 1993; Metta et al., 1998).
The artificial vestibular system measures linear and angular motion and is mounted on the robot head. It has been used to improve image stabilization and tracking performance both during head motion and under external disturbances (Panerai and Sandini, 1998). This mirrors the solution adopted by biological systems, where visual and vestibular control are integrated (Sauvan, 1999) to cope with both low- and high-frequency stimuli. The results of our experiments show what has been learned about multimodal integration from the robotic perspective.

Lambrinos, D., Maris M., Kobayashi H., Labhart T., Pfeifer R., and Wehner R. (1997). An autonomous agent navigating with a polarized light compass. Adaptive Behavior, Vol 6, No 1, p. 131-161
Metta G., Sandini G. and Konczak J. (1998). A developmental approach to sensori-motor coordination in artificial systems. System Man and Cybernetics IEEE 1998 Conference Proceedings, 3388-3393.
Mussa-Ivaldi F.A., Giszter S.F., and Bizzi E. (1993). Convergent force fields organized in the frog's spinal cord. The Journal of Neuroscience, 13(2): 467-491.
Panerai F., and Sandini G. (1998). Oculo-motor stabilization reflexes: Integration of inertial and visual information. Neural Networks, 11, 1191-1204.
Questa P., and Sandini G. (1996). Time to contact computation with a space-variant retina-like CMOS sensor. In Proc. Int. Conference in Intelligent Robots and Systems. Osaka, Japan. JRS-IEEE.
Sauvan X.M. (1999). Early integration of retinal and extra-retinal information: Recent results and hypotheses. Reviews in the Neurosciences, 10, in press.


Bas Neggers (Munich)

Visual Search for a saccade-pointing sequence versus single saccades

Humans are remarkably good at selecting a single item in a heterogeneous visual environment. In order to pick up a pencil from a desk full of other items that may share certain properties with the pencil, such as shape or color, an extensive analysis of the visual field has to be performed. The search for a visual object among others has been an extensive field of research over the past three decades. In visual search studies, subjects usually have to detect a target item in a display that also contains distractor items. The number of distractors is varied systematically, as are the properties they share with the target. Early stages of processing of the visual field are evidently massively parallel (Julesz, 1984; Treisman, 1988). This conclusion is based on the observation that the time between presentation of the display and the identification response does not depend on the number of distractor items, as long as the target differs from the distractors in a basic feature, like one red target among green distractors, or a horizontal line among vertical distractors. A second, serial search mechanism under attentional control is assumed to follow this parallel processing (Neisser, 1967; Treisman & Gelade, 1980). When target items differ only slightly and in a more complex way from the distractor items, the identification reaction time increases almost linearly with set size. An example is the search for a target line of 20 deg slope among distractors of 30 deg and 50 deg orientation. The slope of these graphs, i.e. the search time per item, is taken as a measure of the seriality of target selection. High slopes (~50 ms/item) imply a high degree of serial processing. Other measures of selection performance are error rates, the proportion of selection responses directed at a non-target item. Error rates are also known to increase with set size and to vary with target and distractor features.
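For illustration, the slope measure is simply the least-squares regression of reaction time on set size (the reaction times in the example below are invented):

```python
def search_slope(set_sizes, rts_ms):
    """Least-squares slope of reaction time vs. set size, in ms per
    item: the usual index of search seriality."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts_ms) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts_ms))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den
```

Applied to made-up reaction times that grow by 50 ms per added distractor, the function recovers a slope of 50 ms/item, i.e. a strongly serial search.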
This study asks whether the search process is confined to perception. Selection processes might apply different criteria to perceptual information depending on the demands that the planned action (grasping, an eye movement, etc.) places on the selection. We tested whether visual-search characteristics such as reaction times and error rates differ when subjects select an item among others for single saccades or for saccade-grasping sequences. Participants made a speeded saccade towards a target among distractors. Targets differed uniquely in color (orange and green) and shape (cylinder or bar) from the distractor items. In another task, subjects made a speeded saccade and a subsequent grasping movement to a target in the same target-distractor sets. Performance of the saccades in the single and dual tasks was compared; this made it possible to compare the search characteristics of the different tasks directly by means of the same motor system. Saccades are usually performed before grasping movements and can be expected to take place after visual selection. Preliminary analysis showed significantly fewer shape errors in the dual-task saccades than in the single-task saccades. That is, when subjects also needed to grasp the object, the eyes were more likely to land on the correct shape than when no grasping movement was required. This finding suggests an early, detailed, and task-dependent visual analysis. More experiments are planned to elaborate this task-dependent visual search paradigm.


Matthias Niemeier (Tuebingen)

Patients with neglect show no direction-specific deficits in saccadic eye movements during exploration of space.

M. Niemeier & H.-O. Karnath. Dept. of Neurology, University of Tübingen, Hoppe-Seyler-Str. 3, D-72076 Tübingen, Germany

Oculographic studies requiring patients with neglect to perform eye movements towards targets in the left and right periphery have found that saccades in the contralesional direction (a) showed increased reaction times, (b) were hypometric, and (c) were increased in number. These deficits might reflect a general impairment in disengaging attention before shifts in the contralesional direction, which has been discussed as the mechanism leading to neglect. If so, affected contralesional saccades should be observed not only upon the sudden appearance of peripheral targets but also during spontaneous visual search. In the present study we analysed combined eye and head movements of neglect patients and control subjects during exploration of space. Subjects were required to search (a) for a laser spot in darkness and (b) for a letter within an array of distractors surrounding the subject. Neglect patients showed no direction-specific differences, either in the number of saccades or in the average duration of fixation. Saccade amplitude was reduced regardless of direction. In addition, neglect patients did not employ head movements to compensate for deficient saccades. In conclusion, we found no indication of a direction-specific deficit of saccadic eye movements in neglect. The results argue against an interpretation of neglect as a basic deficit in disengaging from an attentional focus towards a target located in the contralesional direction.


Francisco Antonio Nieto Escamez (Almeria)

Francisco Nieto-Escamez; Juan Borrego; Rocio Beltran; Miguel Rodriguez; Miguel Cortes; Gerardo Nieto-Montes; Eduarda Aguilera; Alicia Nieto-Escamez; Ana Isabel Nieto-Escamez.

The lesion of the nucleus basalis magnocellularis impairs the ability to learn the task requirements but not the spatial navigation.

Lesions of the nucleus basalis magnocellularis impair the execution of an allocentric spatial task in the Morris water maze. Several hypotheses based on working-memory, attentional or spatial-processing deficits have been proposed to explain this impairment. Here we report an experiment with four phases. Initially, we observed that this lesion impairs the execution of a navigation task in the Morris water maze using a proximal cue (placed on the pool rim). The lesioned animals were, however, able to locate a visible platform in the second experimental phase. In the third phase we had the animals perform a standard navigation task in the swimming pool, which the lesioned animals performed perfectly. In the fourth phase we tested the ability of these animals to learn a spatial delayed task in a T-maze; the lesioned animals did not learn the task.

We postulate that this lesion does not produce a spatial navigation deficit but rather an impairment in learning the requirements of the task.


Francesco Panerai (Paris)

A 6 DoF head tracking device for experiments in active vision

F. Panerai, LPPA, Collège de France, Paris

Several studies of 3D perception in humans have shown better discrimination performance in 3D estimation tasks when subjects are allowed to actively move their head. To investigate the role of active head movements in the perception of three-dimensional variables, a head-tracking apparatus has been designed. The apparatus will be used in a series of psycho-physiological experiments in which the position and orientation of the head must be measured in real time with high accuracy, high reliability and minimal interference with the subject's movements. Rotations and translations of the head are tracked using a light helmet linked to a multi-joint mechanical structure. The use of the apparatus requires i) a geometric model of the multi-joint structure and ii) a calibration procedure to correctly identify the model's parameters. These development stages have been completed and the accuracy of the apparatus has been tested. In particular, the expected accuracy, derived from the pure geometric model and the sensor resolution, was compared with the real accuracy, obtained by performing repeated measurements on a calibration fixture. The outcome of the comparison confirms the validity of the proposed solution. Head position can be measured at a sampling frequency above 1 kHz with an overall accuracy of 0.5 mm. Resolutions in translation and rotation are on the order of 0.2 mm and 0.036 degrees, respectively.
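As a deliberately simplified, planar stand-in for the real 6 DoF linkage, step i) can be sketched as forward kinematics that map the measured joint angles through calibrated link lengths to a pose; calibration (step ii) then amounts to estimating the link lengths from measurements on a fixture:

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Planar joint chain: pose of the end point (the helmet) given
    measured joint angles (rad) and calibrated link lengths."""
    x = y = theta = 0.0
    for ang, length in zip(joint_angles, link_lengths):
        theta += ang                     # angles accumulate along the chain
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y, theta                   # end position and final orientation
```

The real apparatus chains six such transforms in 3D, but the principle is the same: sensor resolution at each joint propagates through this map, which is what allows the expected accuracy to be derived analytically and compared with fixture measurements.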


Monica Paolini (Bochum)

Responses to continuously changing optic flow in area MST.

Monica Paolini, Frank Bremmer, Markus Lappe, Klaus-Peter Hoffmann. Zoology and Neurobiology Lab, Ruhr-Universität Bochum, D-44780 Bochum, Germany

In primate area MST, a large proportion of cells respond vigorously to optic flow stimuli. We studied the temporal properties of the response in MST cells by presenting a series of optic flow stimuli (a) separated by an interstimulus interval (ISI), during which there is no visual stimulation; (b) without ISI (with an abrupt change in the motion pattern) or (c) separated by a transition (the current stimulus is gradually morphed into the following one by linearly decreasing the motion component of the current stimulus and increasing the motion component of the following one).

Methods. We recorded responses from 122 MST cells in an awake monkey (Macaca mulatta) engaged in a fixation task. Each stimulus, transition and ISI lasted 834 msec. Stimuli included translations and rotations in six directions (up, down, left, right, forward and backward).

Results. Stimulus presentation with and without ISI. The non-selective onset peak observed previously (Duffy and Wurtz, 1997, Exp Brain Res, 114, 472-482) was present only when stimuli were preceded by an ISI, and disappeared when the ISI was removed. In addition, the peak was also present, although in weaker form, in some cells during the ISI itself, often following a non-preferred stimulus. Constant flow vs. transitions (morphing). We observed three types of responses during transitions. In the majority of cells the firing rate gradually decreased after a preferred optic flow and gradually increased before a preferred optic flow. Such temporal changes are consistent with a cell model with a temporally linear firing rate and linear input summation. Other cells were genuinely selective for a small number of specific transitions: the firing rate during the preferred transition was significantly higher than during the preceding and following stimuli. Finally, in a few cells the firing rate remained high throughout the transition as long as one of the preferred components of the optic flow was present, i.e. when the transition followed or preceded a preferred stimulus.

Conclusions. The lack of an onset peak when the ISI was removed suggests that the initial peak obtained with stimuli preceded by an ISI signals motion onset/offset, but is not an integral part of the response to optic flow stimuli. The three types of responses to transitions (linear summation, selectivity for specific transitions, and non-linear selectivity for motion components) indicate that while a large proportion of cells respond linearly, many others respond in a non-linear fashion that needs further investigation.


Anne-Lise Paradis (Orsay)

3D structure from motion: an fMRI study.

A.-L. Paradis *,#, V. Cornilleau-Pérès #, J. Droulez #, P.-F. van de Moortele *, E. Lobel *,#, A. Berthoz #, D. Le Bihan *, J.-B. Poline *.

* Service Hospitalier Frédéric Joliot, CEA, Orsay, France. # Laboratoire de Physiologie de la Perception et de l'Action, CNRS - Collège de France, Paris.

The pattern of velocity projected on the retina during translatory motion of an observer depends on the distances of the various visual scene components from the observer. These spatial variations of the retinal velocity field, called motion parallax, have proven to be a powerful cue for the perception of the environment's 3D structure [1]. However, little is known about the functional anatomy underlying motion parallax processing.

To investigate the cortical areas involved in the visual perception of 3D structure through motion parallax, we used functional magnetic resonance imaging (fMRI). Stimuli consisted of 300 dots randomly scattered within a circular viewing window subtending 16° of visual angle. The pattern of dots was either stationary (ST), moving randomly (RM), expanding and contracting (EX), or representing a spherical surface oscillating around one of its fronto-parallel tangents (SP3D). EX and SP3D both display a coherent pattern of motion, but EX elicits little depth information compared to SP3D. In the SP3D stimulus, motion parallax was the only depth cue, yet subjects could readily perceive the structure of the object represented. Nine subjects were presented with three stimulation schemes that alternated epochs of 1) ST and RM stimuli, 2) EX and RM stimuli, and 3) SP3D and RM stimuli. The first experiment aimed at highlighting the areas involved in motion processing, in order to use them as functional landmarks. Data were analysed both groupwise and individually, to obtain global statistical inference and individual anatomy-based localisation.

As expected [2], visual areas V1/V2 and the V5+ complex were found to be activated in the RM vs. ST experiment. However, they did not generally show a preference for coherent motion (EX or SP3D) as compared to incoherent motion (RM).

Overall, the brain areas more activated by EX or by SP3D than by RM were highly similar. The areas sensitive to both coherent-motion stimuli mainly comprised the superior occipital gyrus and the junction between the posterior parietal and superior occipital gyri, consistent with previous findings [3]. Both areas showed a higher level of BOLD (Blood Oxygenation Level Dependent) signal for coherent than for incoherent motion, but the former area was sensitive to motion in the RM vs. ST experiment (presumptive area V3/V3A), while the latter was not.

Finally, the ventral aspect of the left occipito-temporal junction, along the collateral sulcus, appeared to be sensitive to all three motion stimuli, with slightly stronger activation for the 3D sphere than for random motion. Parts of the fusiform gyrus have already been shown to be selectively activated by the presentation of 3D objects when static depth cues were used [4,5]. The present results suggest that part of this region may also be selective for 3D stimuli when motion parallax is the only depth cue.

[1] - Rogers BJ & Graham M (1979) Perception 8: 125-134.
[2] - Dupont P et al. (1994) J. Neurophysiol., 72, 3, 1420-1424.
[3] - De Jong BM et al. (1994) Brain 117, 1039-1054.
[4] - Schacter DL et al. (1995) Nature 376: 587-590.
[5] - Malach R et al. (1995) PNAS, USA, 92: 8135-8139.


Mirjam Pijnappels (Maastricht)

Coordination of eye and hand movements during pointing to a target

Pointing toward a visual target in space involves two basic processes: first, the target needs to be localized perceptually; second, a goal-directed hand movement has to be executed. The coordination of perception and action has to be controlled both temporally and spatially.
Using a touch screen and a video eye-tracking system, we studied the temporal and spatial aspects of both eye and hand movements during pointing to a target. In particular, we measured movement initiation and termination times of both responses and their spatial end points. We manipulated stimulus duration, movement velocity and target location to assess how these variables mediate the temporal order and spatial accuracy of eye and hand movements. We discuss the implications of the results for theoretical notions of eye-hand coordination.


Tony Ro (London)

Ipsilesional saccade biases from lesions of the human inferior parietal lobule

Tony Ro 1, Chris Rorden 1, Jon Driver 1, and Robert Rafal 2

1 Institute of Cognitive Neuroscience, University College London, London, UK

and

2 School of Psychology, University of Wales, Bangor, UK

When two visual events occur simultaneously, one may nevertheless be perceived as preceding the other. Previous studies show that attention modulates temporal order judgments (TOJs), with attended events perceived as occurring sooner. Here we tested patients with unilateral lesions involving the frontal eye fields (FEF) or the inferior parietal lobule (IPL) to determine the contribution of these regions to a TOJ task. Two visual events, one in each hemifield, were presented at various stimulus onset asynchronies. Patients pressed a button (manual report) or moved their eyes (oculomotor report) to indicate the side perceived as appearing first. An oculomotor bias towards the ipsilesional event was found in the IPL patients, but not in the FEF patients. No biases were apparent for either group in manual report, suggesting that the parietal saccade deficit was not due to perceptual deficits. While the choice of saccade direction was normal in the FEF patients, their contralesional saccades tended to be faster, which was not true of the IPL group. Thus, lesions of the IPL in humans can impair oculomotor behavior independently of stimulus perception.


Louise Rönnqvist (Umeå)

A comparison of the effects of hip and pelvic control on the transport phase of reaching in infants 5 - 6 months of age.

Using modified versus unmodified seating, we studied the reaching movements of a group of infants who could not yet sit independently without support. Kinematic recordings of arm and head movements were made with the 3-D MaxReflex automated motion system. The transport components of the arm movements and the head movements during the reaching phase were analysed in relation to the two seating conditions. Kinematic parameters of interest from this study include the number of movement units, the straightness of the reach, and the speed-curvature relationship.


Hansjörg Scherberger (Pasadena)

Saccadic and arm reach responses to visual double simultaneous stimulation follow a head-centered rather than a trunk-centered reference frame in the monkey

Hansjörg Scherberger1, Melvyn A Goodale2, Richard A Andersen1.

1California Institute of Technology, Pasadena, USA, and 2University of Western Ontario, London, Ontario, Canada.

Double simultaneous stimulation (DSS) is a common psychometric test for visual extinction, a form of neglect in which patients fail to report the contralesional one of two visual stimuli presented simultaneously, while being able to report either stimulus presented singly. Here we asked whether the response to visual DSS depends on the modality of report (eye vs. left-arm vs. right-arm movement), whether the response varies with the spatial location of the stimuli, and, if so, in what kind of reference frame (e.g. head- vs. body-centered) that response pattern is embedded.
Monkeys were seated in a primate chair in front of a switch panel containing a row of 9 pushbuttons, each 16° apart. Each button had an integrated red and green LED that served as a visual stimulus. The monkey was trained to interpret the red light as a command to visually fixate the pushbutton, whereas the green light required reaching for and pressing that pushbutton. During the experiment, the monkey's head, trunk, and left or right arm were restrained, while monocular eye position was recorded with the search-coil method. In each trial, the monkey was first asked to visually fixate and reach to a fixation point (FP). Then a green (red) stimulus appeared either 16° to the left or to the right, and the monkey, in order to be rewarded, had to reach to (saccade to) the stimulus while keeping the FP visually (manually) fixated. In 20% of all trials (the interesting DSS cases), however, two green (red) stimuli appeared 16° bilateral to the FP, and the monkey was rewarded for reaching to (making a saccade to) either of the two stimuli. The horizontal position of the FP was varied from straight ahead to lateralities of up to ±32°, and the monkey's choice in the DSS trials (left vs. right target) was analyzed as a function of FP position. Finally, head and trunk position were varied, each by up to ±32°.
Results in one monkey showed a strong dependence of both the saccadic and the reach response to DSS on horizontal stimulus location. The saccadic response was equally often to the left and to the right when the FP was straight ahead, whereas for an eccentric FP the response showed a strong tendency towards the (head-centered) midline. When responding with arm movements, this was also true for very large FP eccentricities. Around straight ahead, however, responding with the left arm was associated with a leftward bias, while the right-arm response had a rightward bias. When head but not trunk position was varied, these response patterns remained invariant in a head-centered reference frame. Surprisingly for arm responses, varying trunk position with the head stationary in space did not change the spatial response distribution.
These results suggest that the decision to respond to either the left or the right stimulus in DSS is predominantly made in terms of a head-centered, as opposed to a body-centered, reference frame. Different response modalities (saccades vs. left vs. right arm reaches) modify these responses. In particular, the ipsilateral bias for reach responses could be due to the lateralized biomechanics of the left and the right arm.
Supported by a Gosney Fellowship of the Caltech Division of Biology.


Hans-Georg Schlosser (Berlin)


Erich Schneider (Muenchen)

Dynamic Properties of the Human Ocular Torsion Induced by AC Galvanic Vestibular Stimulation

Erich Schneider, Stefan Glasauer, Marianne Dieterich

DC galvanic vestibular stimulation induces a tonic torsion of both eyes. This study investigates the dynamic properties of the ocular torsion induced by AC galvanic vestibular stimulation in 6 healthy subjects. Using binocular three-dimensional video-oculography at a sampling rate of 30 Hz, we measured eye movements while the subjects' heads were in erect or supine positions and they fixated a distant target. Depending on the subject's motivation, 5 or 10 sinusoidal AC stimuli from a logarithmically spaced frequency range of 0.005 Hz to 1.67 Hz were applied. The stimuli had peak amplitudes of 3 mA and lasted from 30 to 200 sec.

At the lowest frequencies the ocular torsional response reached a mean peak amplitude of 3.5 deg (SD 0.89). No disconjugate torsional eye movements and no differences between the erect and supine positions were observed. Over the measured frequency range the gain and phase relationships of the ocular torsion show a lowpass characteristic with a time constant (TC) of about 1.25 sec. This TC lies an order of magnitude above the mechanical TC of the eye plant [3] and an order of magnitude below the TC of the neural integrator located in the otolith pathways [2].

Under the assumption that ocular torsion is induced by the galvanic stimulus acting on otolith afferents, we should expect to find the lowpass characteristic (TC 10 sec) contained in the OOR for pure otolith stimulation [1]. This lowpass helps to discriminate between tilt and translation [2]. However, the dynamics we observed in this study are rather similar to the dynamics of the velocity-to-position neural integrator located in the canal pathways, first introduced by Robinson in a model of the torsional VOR [3]. The lowpass characteristic of that integrator has a TC of 2 sec [3]. Simulating a DC galvanic stimulus on the afferent canal input of Robinson's model induces a tonic torsion of the eyes, even though only canal pathways are involved; such tonic torsion has been described previously [4]. A small modification of the model, adding an integrating otolith pathway (TC = 10 sec, gain = 0.3) in parallel to the canal integrator (TC = 1 sec, gain = 0.7), produces a slightly better fit to our data.
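The modified model's frequency response can be written directly as two first-order lowpass pathways in parallel, using the gains and time constants quoted above (a frequency-domain sketch, not the authors' simulation):

```python
import math

def pathway(gain, tc, f):
    """First-order lowpass H(jw) = gain / (1 + jw*TC) at f in Hz."""
    w = 2.0 * math.pi * f
    return gain / complex(1.0, w * tc)

def torsion_transfer(f, canal=(0.7, 1.0), otolith=(0.3, 10.0)):
    """Parallel canal (gain 0.7, TC 1 s) and otolith (gain 0.3,
    TC 10 s) integrator pathways; parameters taken from the text.
    Returns (gain, phase lag in degrees) at frequency f (Hz)."""
    h = pathway(*canal, f) + pathway(*otolith, f)
    return abs(h), -math.degrees(math.atan2(h.imag, h.real))
```

With these parameters the combined gain is near unity at the lowest stimulus frequency (0.005 Hz) and falls by more than an order of magnitude at 1.67 Hz, reproducing the lowpass behaviour described above.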

Thus we conclude that galvanic vestibular stimulation has an effect on both otolith and canal afferents. The observed dynamics of ocular torsion, however, favours the idea that the effect on the canal afferents is dominant.

References

[1] Clark, A.H., et al. Abstracts for the Satellite Symposium to the XXth Meeting of the Barany Society. Freiburg, Germany. 1998; 15.
[2] Merfeld, D.M., et al. Exp Brain Res. 1996; 110(2): 315-321.
[3] Seidman, S.H., et al. Vision Res. 1994; 5: 679-689.
[4] Zink, R., et al. Neurosci Lett. 1997; 232: 171-174.

UP

Kai Schreiber (Tübingen)

Limits in measuring torsional eye-position using VOG-devices

Recent improvements in computer hardware and software have made video-oculography (VOG) devices for measuring 3D eye position widely available. While 2D-VOG algorithms are mostly based on pupil detection and subsequent calculation of the angular position of the eye, torsional measurement is usually carried out using iral signature patterns. This makes it much harder to determine the limits of the method and its sources of error, and leaves the experimenter with the problem of choosing an iral signature of sufficient quality without any but intuitive criteria for that quality.

In particular, recording moving eyes with VOG delivers blurred images and thus blurred iral signatures, spoiling the measurement of torsion. A criterion is suggested for estimating the quality of an iral signature with respect to measurement, as well as for estimating the effect of blurred images on torsion measurement. In addition, simulations of measurements on moving eyes are presented, proposing a limit of eye velocity (in relation to the exposure time of the cameras used) up to which measurements of torsion are likely to succeed.
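The velocity limit has a simple back-of-the-envelope form: the motion blur accumulated during one camera exposure must stay below the size of the smallest iral features used for the torsion match. A minimal sketch of that relation (the function name and all numeric values are illustrative assumptions, not figures from the abstract):

```python
def max_measurable_velocity(exposure_s, feature_size_deg, max_blur_frac=1.0):
    """Eye velocity (deg/s) at which the motion blur accumulated during
    one exposure equals max_blur_frac times the smallest resolvable
    iral feature; above this, the iral signature washes out."""
    return max_blur_frac * feature_size_deg / exposure_s

# Illustrative numbers: a 10 ms exposure and 0.5-deg iral features
# suggest torsion measurement degrades above roughly 50 deg/s.
print(max_measurable_velocity(0.010, 0.5))  # → 50.0
```

Halving the exposure time doubles the tolerable eye velocity, which is why the proposed limit is expressed relative to camera exposure time.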

UP

Patrice Senot (Paris)

Control of the upper limb perturbation in a catching task

P. Senot, J. McIntyre, P. Prevost, A. Berthoz LPPA, CNRS-Collège de France, Paris

Catching an object gives rise to perturbations that the central nervous system must control in order to ensure postural stability. To control the impact between the body and the object efficiently, the central nervous system must develop muscular activity fitted to the perturbation.
D. Lee (1976) and F. Lacquaniti (1989) have already demonstrated that the central nervous system is able to predict the time remaining before the impact between the hand and the object under constant-velocity and constant-acceleration conditions, respectively. Lacquaniti has also shown that subjects are able to develop an anticipatory muscular activity related to the momentum of the object at impact. But the nature of the parameters taken into account to calibrate this activity has not been specified.
Seven healthy subjects were recruited to catch an object attached to the extremity of a pendulum. The apparatus was built so that acceleration varied over time, with different acceleration profiles for different trials. Electromyographic data were recorded on flexor carpi and extensor carpi. These data reveal that an anticipatory muscular activity developed 120 ms before impact and that its intensity was correlated with the object's velocity at impact. The central nervous system was therefore able to estimate the final velocity of the object from visual information obtained a few hundred milliseconds before impact. This result suggests that the central nervous system is able to take the object's acceleration into account.
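The prediction problem described above can be made concrete with elementary kinematics: under constant acceleration, the time remaining before impact and the velocity at impact follow from the current distance, closing velocity, and acceleration. The sketch below is purely illustrative of that computation, not a model of what the CNS actually does or of the authors' analysis:

```python
import math

def time_to_contact(d, v, a):
    """Time for an approaching object to cover remaining distance d,
    given closing velocity v and constant acceleration a (positive
    root of d = v*t + a*t**2 / 2)."""
    if abs(a) < 1e-12:
        return d / v                      # constant-velocity case (Lee's tau)
    return (-v + math.sqrt(v * v + 2.0 * a * d)) / a

def impact_velocity(d, v, a):
    """Closing velocity at the moment of impact."""
    return v + a * time_to_contact(d, v, a)

# Example: an object released 1 m away with v = 0 under gravity-like
# acceleration (9.81 m/s^2) arrives after ~0.45 s at ~4.43 m/s.
print(round(time_to_contact(1.0, 0.0, 9.81), 2))   # → 0.45
print(round(impact_velocity(1.0, 0.0, 9.81), 2))   # → 4.43
```

Note that predicting impact velocity, unlike first-order tau, requires the acceleration term: dropping `a` from `impact_velocity` gives the wrong answer whenever the approach is accelerated, which is the point the abstract's result turns on.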

UP

Isabelle Siegler (Paris)

Gaze and body orientation in the dark

I.Siegler, I.Israël

LPPA, Collège de France, 11 place M.Berthelot, 75005 Paris, France

The aim of this study was to investigate how different instructions concerning gaze orientation (head+eye) toward a previously seen target could influence the performance of subjects in a task of self-controlled whole-body rotations in the dark. Subjects were seated on a mobile robot they could drive themselves by means of a joystick. They were asked to perform complete clockwise turns (360 deg) in four different conditions. The instructions they were given were the following:
Condition No Target (NT): subjects were simply asked to keep their eyes open and to look far ahead in front of them in the dark.
Condition Imagined Target (IT): before each rotation, subjects were shown a target on a wall in front of them; they could not see it anymore when performing the rotation, but they were asked to visualise it mentally and to orient themselves during the rotation using it.
Condition Gaze on Target (GT): before each rotation, subjects were shown the same target as in IT. In this case, they were not only asked to visualise the target in the room mentally, but also to follow with their gaze, as far as possible, the estimated position of this target, i.e. to maintain the gaze fixed in space. Subjects were explicitly told that they had to move the eyes and the head to accomplish the task.
Condition Head Fixed Target (HFT): subjects had to maintain gaze on a real head-fixed visual target while performing the rotations.

We measured robot, eye and head position during rotations. Preliminary results (10 subjects) show that gaze orientation toward an imagined earth-fixed target can influence the way subjects perform the 360-deg whole-body rotations. Indeed, subjects largely undershot the expected angle in NT (284±49 deg) and performed much better in GT (341±72 deg). Analysis of eye and head movements still needs to be performed to understand this main result and to see how subjects followed the instructions.

UP

Aya Takemura (Tsukuba)

Short-latency ocular following and its associated neuronal activity in medial superior temporal area : dependence on horizontal disparity.

A. Takemura1*, Y. Inoue1, K. Kawano1 and F. A. Miles2. 1Neuroscience Section, Electrotechnical Lab., Tsukuba, Ibaraki, 305-8568, Japan, and 2Lab. of Sensorimotor Research, National Eye Institute, Bethesda, MD 20892, USA.

Movements of the visual scene evoke tracking movements of the eyes (ocular following responses, OFR) at short latencies (Miles et al., 1986). It has been reported that the very earliest OFR are sensitive to the binocular disparity of the images. We have sought to determine if the direction-selective neurons in MST that have been implicated in the generation of the earliest OFR show a similar dependence on binocular disparity.
The dependence of OFR on horizontal disparity steps was studied in four monkeys (Macaca fuscata), and its associated unit discharges in the MST were studied in one of these. Animals faced a translucent tangent screen onto which two identical, random dot patterns were backprojected. Orthogonal polarizing filters in the two projection paths and in front of each eye ensured that each of the two eyes saw only one of the patterns, movements of which were produced by mirror galvanometers. We used step-ramp motions of the two projected images, and analyzed only the very earliest OFR and neuronal activities ("open-loop" responses), before eye movements had had any chance to affect the visual stimuli on the retina. The steps were disconjugate (horizontal disparities ranging from 0 deg to 4 deg) and served to position the binocular image of the random-dot patterns in a new depth plane nearer (crossed disparity) or farther (uncrossed disparity) than the screen. The ramps were conjugate (both images moved together at constant speed for durations of 150 ms) and served to elicit OFR by moving the patterns in the new depth plane. The steps were applied during a centering saccade, and the ramps commenced 50 ms after the end of that saccade. In this way we assessed the effect of disparity on OFR, using an electromagnetic induction technique to record eye movements, and on the activity of 29 MST neurons that showed OFR-related modulations of their discharges.
Based on the change in eye position over the period 60-93 ms (measured from stimulus onset), the initial OFR showed clear dependence on the disparity imposed during the preceding centering saccade, the disparity tuning curves being S-shaped with a peak at crossed disparities and a trough at uncrossed disparities. All OFR-related MST neurons showed significant sensitivity to disparity steps. The early neuronal responses of many units (17/29, 58.6%) had disparity tuning curves resembling those for OFR, peaking with crossed disparities, but a few peaked with uncrossed disparities while some showed the largest or the smallest response to stimuli moving at zero disparity (tuned neurons). For the majority of MST neurons, the OFR-related activities were correlated with the changes of eye position. These data are consistent with our previous suggestion (Kawano et al., 1994) that the earliest OFR are mediated at least in part by MST. (Supported by the Human Frontier Science Program and CREST of Japan Science and Technology Corporation.)
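The S-shaped tuning described above (peak at crossed, trough at uncrossed disparities) is the kind of curve a logistic function of disparity captures. A sketch under stated assumptions — the sign convention (crossed disparities taken as negative) and every parameter value are illustrative choices, not fits to the reported data:

```python
import math

def s_shaped_tuning(disparity_deg, r_max=1.0, r_min=-1.0, slope=2.0, d0=0.0):
    """Logistic disparity tuning: response approaches r_max for crossed
    (negative, near) disparities and r_min for uncrossed (positive,
    far) disparities, crossing the midpoint at d0."""
    return r_min + (r_max - r_min) / (1.0 + math.exp(slope * (disparity_deg - d0)))

# Crossed (near) disparity -> strong response, uncrossed (far) -> weak:
print(round(s_shaped_tuning(-4.0), 2))  # → 1.0
print(round(s_shaped_tuning(4.0), 2))   # → -1.0
```

A curve of this shape has no preferred depth plane; it signals near-versus-far, which is consistent with the OFR being boosted by crossed and suppressed by uncrossed disparity rather than tuned to a single plane.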

UP

Gabriella Ugolini (Gif-sur-Yvette)

Mapping the oculomotor system : the power of transneuronal tracing

G. UGOLINI1, N. YATIM2, N.M. GERRITS3 AND W. GRAF2

1Lab Génétique des Virus, CNRS, 91198 Gif-sur-Yvette, France, 2Lab. Physiologie de la Perception et de l'Action, CNRS-Collège de France, 75231 Paris Cedex 05, France, and 3Dept. Anatomy, Erasmus University, 3000 DR Rotterdam, The Netherlands

Retrograde transneuronal tracing with rabies virus allows specific labeling of synaptically connected neurons (Ugolini, 1995, J. Comp. Neurol. 356:457). This method was applied to the oculomotor system to test the hypothesis of a modular organization of eye movement circuits. In guinea pigs, rabies virus (CVS strain ; 1 1) was injected into one medial rectus (MR) muscle. This muscle was chosen because of its special innervation pattern (ascending tract of Deiters, abducens internuclear pathway). The CNS distribution of the virus was studied immunohistochemically at sequential 12 hr intervals from 2 to 6.5 days post-inoculation (p.i.). Specificity of uptake was demonstrated by double immunofluorescence for rabies and choline acetyl-transferase (CAT ; a motoneuron marker) in sections of the oculomotor, trochlear and abducens nuclei.
Transneuronal transfer was time-dependent. Initially (2 days p.i.), labeling involved only first-order MR motoneurons in the ipsilateral oculomotor nucleus. At 2.5 days p.i., the onset of transfer visualized abducens internuclear neurons contralaterally, and other cell groups of the horizontal system that project directly to medial rectus motoneurons, i.e. ipsilateral prepositus hypoglossi neurons, ipsilateral medial vestibular nucleus neurons of the ascending tract of Deiters, bilaterally distributed interneurons (CAT-negative) in the oculomotor and trochlear nuclei and the supraoculomotor regions, and neurons associated with saccade generation in the nucleus reticularis cuneiformis. At 3 days p.i., additional labeling consistently involved many neurons in the contralateral medial vestibular nucleus, caudal parts of the ipsilateral medial vestibular nucleus, the superior vestibular nucleus (mainly ipsilaterally), the paramedian pontine reticular formation, paramedian tract neurons, and also some neurons in the contralateral Y group and in the interstitial nucleus of Cajal. Inconsistent labeling of a few neurons in the latter cell groups already occurred in some cases at 2.5 days p.i. From 3.5 days p.i. onwards, transfer to higher-order neurons occurred in the descending vestibular nucleus, Scarpa's ganglion, the ipsilateral cerebellar flocculus (FL) and deep cerebellar nuclei, and several other cortical and subcortical cell groups. Additional labeling appeared at longer survival times.
Purkinje cells (PCs) were labeled at 3.5 days p.i. in the ipsilateral FL in a single band that ran diagonally from caudomedial to rostrolateral in an intermediate position. This band corresponds to the so-called « horizontal zone ». In some cases, labeling continued laterally across the posterolateral fissure into the ventral paraflocculus. At longer survival times (between 3.5 and 4 days p.i.), the initial band became slightly broader and additional separate bands of labeled PCs appeared rostromedially and caudolaterally in the FL. At these times, an intermediately positioned band also appeared in the contralateral FL, mirroring the one initially labeled in the ipsilateral FL; this time difference in labeling between the ipsilateral and contralateral FL reflects a similar time difference in the labeling of ipsi- versus contralateral magnocellular medial vestibular nucleus neurons. A few hours later, additional bands developed in the contralateral FL, as observed earlier in the ipsilateral FL. After 4 days, the areas adjacent to the « horizontal zone » in both FL gradually became filled with labeled PCs, but even at 6.5 days p.i. some parts of the FL remained unlabeled. This increase suggests an involvement of PCs belonging to « vertical zones » in horizontal eye movement circuits, via neurons in the vestibular nuclei or the deep cerebellar nuclei and possibly the Y group.
These first results suggest a modular organization at the level of the basic neuronal circuits involved in spatial coordination of eye movements, but also indicate a much more complex network for the control of horizontal eye movements than anticipated earlier, one that involves multiple diverging and converging connectivities.

UP

Isabelle VIAUD DELMON (Paris)

Oculomotor strategies and orientation in space

Isabelle Viaud-Delmon and Isabelle Siegler

LPPA, Collège de France 11 place Marcelin Berthelot, Paris 75005 France

Numerous studies have investigated a potential link between anxiety and the vestibular system, but few of them have addressed the specific topic of spatial representation. Passive whole-body rotations in the horizontal plane were imposed on two groups of subjects who differed in their level of trait anxiety. Subjects were seated on a mobile robot in darkness. After each passive rotation, subjects were asked to reproduce the stimulus by driving the robot with a joystick and to perform a rotation of the same magnitude. Eye movements were recorded and analyzed.
No difference in either perception (accuracy in the reproduction task) or VOR gain was found between the two groups of subjects. Mean eye deviation, caused by fast phases of the nystagmus, differed between the two groups: it was typically in the anticompensatory direction in the non-anxious group, and in the compensatory direction in the anxious group. Such compensatory movement may be explained by an egocentric orientation strategy (Siegler et al. 1998), which may in turn indicate a lack of interest toward the visual surroundings.
An egocentric strategy for self-orientation exhibited at an unconscious level could reveal the existence of a physiological mode of processing leading to agoraphobic avoidance.

UP

Mark Wexler (Paris)

An evaluation of underlying hypotheses in the perception of surface inclination from static monocular texture cues

Mark Wexler and Jacques Droulez

It has often been claimed that the visual system reconstructs a 3D surface from a static monocular grid-like texture based on the hypothesis that the grid is square (i.e., right angles, equal lengths) on the target surface--the "square grid" hypothesis. We checked this hypothesis directly by means of an experiment in which the subjects indicated the 3D surface inclination at various points on a curved grid seen at a wide viewing angle. We find that the two components of the square grid hypothesis--previously considered as undifferentiated--have different status, in that subjects prefer the sub-hypothesis of right angles over that of equal lengths. Finally, we discuss a different hypothesis, according to which the observer chooses a target surface in which the grid elements are parallelograms. It appears that this last, parallelogram hypothesis accounts for our data better than either element of the square grid hypothesis.
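For intuition about the texture cue at stake: under orthographic projection, a length lying along the tilt direction of a surface slanted by σ is compressed by cos σ in the image, and inverting that compression is what any grid-based hypothesis must do to recover inclination. The sketch below illustrates only this basic geometry (assumed orthographic projection; it is not the authors' stimulus or analysis model):

```python
import math

def projected_compression(slant_deg):
    """Orthographic foreshortening: a unit length along the tilt
    direction of a surface slanted by slant_deg projects to
    cos(slant) in the image."""
    return math.cos(math.radians(slant_deg))

def slant_from_compression(ratio):
    """Invert the cue: recover slant (deg) from the measured
    compression of a grid element assumed square on the surface."""
    return math.degrees(math.acos(ratio))

print(round(projected_compression(60.0), 2))   # → 0.5
print(round(slant_from_compression(0.5), 1))   # → 60.0
```

The inversion step is where the hypotheses diverge: assuming equal side lengths licenses using the compression ratio directly, whereas a parallelogram assumption constrains only the angles, so the two sub-hypotheses make different predictions from the same image measurements.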



UP




| Cortical neuronal mechanisms and psychophysics of orientation and motion in three-dimensional space |
| Description of the meeting | Program | Abstracts | Organization | Various |