Visual Motion in Animals (Neuroscience)

Essay by suutje, August 2004

Visual motion information plays a crucial part in the lives of animals, insects included. It is essential for successful navigation, for avoiding predators and for catching prey. An animal receives motion information whenever it perceives a moving object in its field of vision; this, however, is not the only source. Motion stimuli are also processed continually during self-motion, which produces a self-generated retinal flowfield that is continually displaced according to the animal's trajectory and the three-dimensional structure of the visual environment (Egelhaaf & Borst, 1992).

There are three main types of visual flow pattern, which encode three possible motion situations. Rotatory large-field motion signals any unintended deviation from the animal's course and triggers a negative-feedback response to compensate for it, as in the hovering behaviour of bees. Image expansion signals that the animal is heading towards an obstacle; this is useful both for avoiding obstacles and for correctly timing a smooth landing.

The third type of flowfield is produced by discontinuities in the retinal motion field and by small-field motion, which indicate the presence of a stationary or moving object (Egelhaaf & Borst, 1992).

There are four good reasons for researchers to analyse the mechanisms underlying motion detection in the fly (Fig. 1):

Fig. 1: The compound eyes of a fly.

· Flies have relatively "compressed" and simplified neural circuitries

· They also have relatively large and fixed-focused compound eyes

· They have acquired a highly specialised visual system for motion vision (due to their poor visual resolution)

· The nervous system of at least the relatively large blowfly Calliphora erythrocephala is amenable to analysis on the basis of morphologically distinct nerve cells that can easily be identified individually (Egelhaaf & Borst, 1992).

In recent years, combined anatomical and behavioural analyses, aided by computer modelling, have uncovered the neuronal computations that underlie visuo-motor processing in flies. On the basis of this research it is now possible to model these computations, previously defined exclusively in formal terms, at the level of synaptic interactions and transmitter release. Outputs from the model were then monitored while specific components were functionally disabled. This allowed scientists to build an accurate picture of the different computations performed by the fly in vivo, and to identify and characterise elementary local motion detectors. Each of these processes motion information from a discrete area of the visual field, and together they are organised in two-dimensional retinotopic arrays covering the animal's entire visual field (Egelhaaf & Borst, 1992).

In the 1950s Hassenstein and Reichardt proposed the correlation model, based on the computation of flowfields in flies, in which each elementary motion detector (EMD) is made up of two mirror-symmetrical subunits, each responsible for detecting motion across its own discrete sample of the visual field. Within each subunit the input signals, received through two separate channels, interact non-linearly before the outputs of the two subunits are subtracted from each other to give the final response. The output of each EMD is then integrated with those of populations of EMDs tuned to the same preferred direction of motion, providing a representation of visual flow across the whole field of vision. Depending on the direction of movement of the object being detected, each EMD will either fire strongly, when the motion correlates well with its preferred direction, or produce a negligible output when the correlation is weak (Osorio, 2001).
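
To make the correlation scheme concrete, here is a minimal sketch of a single EMD in Python; the delay value and the Gaussian test stimulus are illustrative assumptions, not part of the original model description:

```python
import numpy as np

def reichardt_emd(left, right, delay=1):
    """Minimal sketch of one Hassenstein-Reichardt EMD.

    left, right: luminance signals sampled at two neighbouring points
    in the visual field; delay: the temporal delay (in samples)
    applied to one channel of each mirror-symmetrical subunit.
    """
    delayed_left = np.roll(left, delay)    # delayed channel, subunit A
    delayed_right = np.roll(right, delay)  # delayed channel, subunit B
    delayed_left[:delay] = 0.0             # discard wrap-around samples
    delayed_right[:delay] = 0.0
    # Non-linear (multiplicative) interaction in each subunit,
    # followed by subtraction of the two subunit outputs.
    return delayed_left * right - delayed_right * left

# A bright spot drifting across the two sampling points, left first:
t = np.arange(100)
left = np.exp(-((t - 40) / 3.0) ** 2)   # passes the left point at t = 40
right = np.exp(-((t - 43) / 3.0) ** 2)  # reaches the right point at t = 43
print(reichardt_emd(left, right).sum() > 0)   # True: preferred direction
print(reichardt_emd(right, left).sum() > 0)   # False: null direction
```

The sign of the summed output encodes direction: the delayed signal of one subunit lines up in time with the undelayed signal of its neighbour only for motion in the preferred direction, so that subunit's product dominates the subtraction.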

Fig. 2: Each compound eye is composed of up to several thousand individual visual organs called ommatidia. The surface of each ommatidium is a hexagonal lens, below which is a second, conical lens. Light entering the ommatidium is focused by these lenses down a central structure called the rhabdom, where an inverted image forms on light-sensitive retinular cells. Pigment cells surrounding the rhabdom keep light from other ommatidia from entering. Optic nerve fibres transmit information from each rhabdom separately to the brain, where it is combined to form a single image of the outside world (Encarta, 1999).

Elementary motion detectors of a similar nature have also been discovered in locusts, pigeons and rabbits, where they serve collision avoidance, and in rhesus monkeys, where they serve the detection of general motion in the visual field. Electrophysiological recordings from individual neurons in the dorsal posterior zone of the nucleus rotundus of the pigeon's brain have been made in an attempt to discover the mechanisms that allow the animal to time its reactions appropriately, so as to avoid crashing into objects or being caught by an approaching predator. These studies revealed that 24 of the neurons recorded in this area were specifically tuned to respond to objects moving on a direct collision course with the animal itself. Each neurone started firing at a particular time before collision, according to the part of the visual field for which it was signalling, and regardless of the absolute size of the object; onset times varied between 800 and 1400 ms before collision (Rind & Simmons, 1999).
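
One way such size-invariant timing could arise is if these neurons respond to the ratio of an object's angular size to its rate of expansion, a quantity that approximates the time remaining to collision independently of the object's absolute size and speed. The sketch below illustrates this idea only; the threshold, object radii and speed are illustrative assumptions, not measurements from the pigeon:

```python
import numpy as np

def tau_estimate(theta, dt):
    """Ratio of angular size to its rate of expansion ("tau").

    For an object of radius R closing at constant speed v, the angular
    size is theta = 2*arctan(R / (v * t_remaining)); the ratio
    theta / (dtheta/dt) approximates t_remaining independently of R
    and v, so a threshold on it is crossed at a roughly fixed time
    before collision whatever the object's absolute size.
    """
    return theta / np.gradient(theta, dt)

dt = 0.01
t_remaining = np.arange(2.0, 0.05, -dt)    # seconds left before impact

# Two objects of very different sizes on the same collision course:
for radius in (0.05, 0.2):                 # metres; speed fixed at 1 m/s
    theta = 2.0 * np.arctan(radius / (1.0 * t_remaining))
    tau = tau_estimate(theta, dt)
    onset = t_remaining[tau < 1.0][0]      # first moment tau dips below 1 s
    print(f"radius {radius} m: response onset {onset:.2f} s before impact")
```

Despite a four-fold difference in size, both objects cross the threshold at almost the same time before impact, which is the behaviour reported for the nucleus rotundus neurons.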

In the locust's central nervous system, a pair of synaptically connected interneurons, known as the lobula giant movement detector (LGMD) and the descending contralateral movement detector (DCMD), perform functions analogous to those just described for motion detection in pigeons (Blanchard et al, 2000). Unlike the looming-sensitive neurons of the pigeon, the LGMD produces a response that increases throughout the approach of an object, according to its speed and size. The specificity of the LGMD for objects that are approaching the animal, rather than receding from it, appears to be generated by a race between the excitatory signals, which travel through the dendrites of the LGMD in the optic lobe, and the lateral inhibition mediated by synapses from neurones in the medulla. The largest responses of the LGMD and DCMD are triggered when the excitatory signals arrive before the inhibitory ones. Evidence that this is the mechanism employed by the locust's nervous system has come from two separate studies. One implemented a computational model incorporating all known features of the input to the system, and provided evidence for waves of inhibition spreading across the medullary units as well as for the directional discrimination and timing of the response. The other examined the synaptic arrangements in the LGMD, and confirmed the existence of lateral interactions among units presynaptic to the LGMD (Rind & Simmons, 1999).
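
A toy simulation makes the race intuitive; the delays and inhibitory weight below are illustrative assumptions rather than measured values:

```python
import numpy as np

def lgmd_drive(receptors_per_step, exc_delay=1, inh_delay=3, inh_weight=0.9):
    """Net drive reaching the LGMD from a moving edge.

    receptors_per_step[t] is the number of new photoreceptors the edge
    crosses at step t.  Each crossing sends excitation that arrives
    exc_delay steps later and lateral inhibition that arrives
    inh_delay steps later; the detector sums excitation minus
    inhibition at every step.
    """
    horizon = len(receptors_per_step) + inh_delay + 1
    excitation = np.zeros(horizon)
    inhibition = np.zeros(horizon)
    for t, n in enumerate(receptors_per_step):
        excitation[t + exc_delay] += n
        inhibition[t + inh_delay] += inh_weight * n
    return excitation - inhibition

steps = np.arange(20)
looming = 1.15 ** steps          # edge velocity accelerates as the object nears
translating = np.full(20, 3.0)   # edge velocity stays constant

# Skip the first few steps, before any inhibition has had time to arrive:
print(lgmd_drive(looming)[5:].max())       # large: excitation wins the race
print(lgmd_drive(translating)[5:].max())   # small: inhibition catches up
```

For the accelerating (looming) edge, excitation from newly recruited receptors keeps arriving before the inhibition triggered by earlier ones, so the net drive grows; for a constant-velocity edge the delayed inhibition catches up and nearly cancels it.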

In the 1960s, attempts to understand how motion information is computed by the nervous system were carried out by Barlow and Levick on the rabbit's retina. They found that if they stimulated the ganglion cells of the animal by shining a single beam of light across two slits, in a sequence that mimicked motion in the preferred direction, a series of action potentials was elicited, whereas motion in the null direction elicited none. They also found that the response to illuminating a single slit was greater than the response to light shone sequentially across the two slits in the null direction, which suggested that directional sensitivity in these cells must be achieved through a veto operation. According to the model proposed by Barlow and Levick, photoreceptors connected to direction-selective cells are arranged so that stimuli in the preferred direction reach the excitatory cells before they reach the inhibitory cells. If, however, motion occurs in the null direction, the inhibitory signal, which is slower, arrives at the ganglion cell at the same time as the excitatory signal, and the generation of an action potential is prevented. It is because of this mechanism that a stationary light in the visual field produces a greater response than a light moving in the null direction (Poggio & Koch, 1987).
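
The veto operation can be captured in a few lines; the delay and the binary flash stimuli below are illustrative assumptions:

```python
import numpy as np

def barlow_levick(a, b, delay=2):
    """Minimal sketch of the Barlow-Levick veto (AND-NOT) scheme.

    a, b: activity at two neighbouring receptive-field points, with
    the null direction running from a to b.  The ganglion cell is
    excited by b; inhibition from a is delayed so that, for motion in
    the null direction, it reaches the cell at the same time as the
    excitation and vetoes it.
    """
    inhibition = np.roll(a, delay)
    inhibition[:delay] = 0.0          # discard wrap-around samples
    return np.maximum(0.0, b - inhibition)

t = np.arange(12)
flash = (t == 4).astype(float)

# Null direction: the light reaches a first, then b two steps later.
print(barlow_levick(flash, np.roll(flash, 2)).sum())   # 0.0: vetoed

# Preferred direction: the light reaches b first, then a.
print(barlow_levick(np.roll(flash, 2), flash).sum())   # 1.0: response

# Stationary flash on b alone: larger than the null-direction response.
print(barlow_levick(np.zeros_like(t, dtype=float), flash).sum())  # 1.0
```

Note that the single stationary flash drives the cell more strongly than null-direction motion, reproducing the observation that motivated the veto model.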

In the next part of this essay, some of the most significant and interesting neural networks that have been implemented on autonomous vehicles to perform visually guided tasks will be discussed.

During the last decade, a number of autonomous vehicles have been built on the basis of biological data. The principles identified for the computation of flowfields in nature have served as a model for the extraction of useful motion information in autonomous vehicles. This has been achieved by implementing algorithms deduced from neurophysiological or behavioural data, together with neural architectures and specific neural circuitries inspired by the brain's physiological properties and internal connectivity (Osorio, 2001).

One of the most recent implementations was modelled on the bee's ability to perform smooth landings on flat horizontal surfaces. From the analysis of a series of landing trajectories, two key principles were recognised as particularly relevant to the computation of a smooth landing. Firstly, bees landing on a horizontal surface approach it at a relatively shallow descent angle (on average about 28 degrees). Secondly, during landing bees tend to hold the angular velocity of the image of the ground constant as they approach it. This means that horizontal speed is directly proportional to height, so that both values are very close to zero at touchdown.

These two principles were incorporated into an algorithm implemented on a computer-controlled gantry robot carrying a visual system: a camera mounted on a gantry head that could be translated in three dimensions (x, y and z). The translatory motion of the camera was, however, restricted to the forward (x) and downward (-z) directions, to avoid unnecessary complications for the purpose of implementing a smooth landing. The landing surface was covered with a random black-and-white pattern to facilitate the measurement of image motion. Landings performed on flat surfaces were successful, provided the surface had some visual texture to supply the vehicle with sufficiently reliable optic-flow signals, but the system was not very successful under other circumstances (Srinivasan et al, 2001).
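
The control rule recognised in the bee trajectories can be sketched as follows; the set-point, time step and simulation details are illustrative assumptions, not the parameters of the gantry robot:

```python
import numpy as np

def land(h0=2.0, omega_set=0.5, descent_deg=28.0, dt=0.05, steps=400):
    """Descend while holding the image angular velocity of the ground.

    At forward speed v and height h, the ground directly below sweeps
    across the eye at omega = v / h rad/s.  Holding omega at omega_set
    forces v = omega_set * h, so forward speed falls in proportion to
    height and both approach zero together at touchdown.
    """
    sink_per_forward = np.tan(np.radians(descent_deg))  # fixed descent angle
    h = h0
    heights, speeds = [h0], [omega_set * h0]
    for _ in range(steps):
        v = omega_set * h                 # control law: keep omega constant
        h = max(0.0, h - v * sink_per_forward * dt)
        heights.append(h)
        speeds.append(omega_set * h)
    return np.array(heights), np.array(speeds)

heights, speeds = land()
print(heights[-1], speeds[-1])   # both close to zero at the end of the run
```

The attraction of this rule is that it needs no explicit measurement of height or speed: regulating a single optic-flow quantity automatically couples the two.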

The implementation just illustrated is, however, no more than a simple emulation of the animal's behaviour, and tells us nothing further about the animal at the neuronal level. Implementations of this kind are of limited value from a biological point of view, whereas implementations that aim to reproduce both the animal's behaviour and its underlying circuitry are far more useful for testing the consistency of current biological models or for developing possible alternatives.

Mark Blanchard and his team (2000) built an autonomous vehicle for collision avoidance based closely on the neurophysiological, anatomical and behavioural data collected from studies of collision avoidance in the locust's optic lobe, illustrated in the first part of this essay. Building a robot that resembled the animal's underlying neuronal interactions as closely as possible was also needed in an attempt to resolve a discrepancy within the model itself, concerning the timing of the neuronal response signalling an approaching object, for which separate studies gave conflicting results: one study showed that the spiking rate of the LGMD increases continually during the approach of an object, while another showed that the spike rate may peak some time before collision (Blanchard et al, 2000).

The implemented network was made up of three main layers: an input photoreceptive layer, designed to detect the edges of approaching objects; a processing layer, which retinotopically passed on excitatory signals from the first layer while delaying the inhibitory signals; and an output layer, representing the LGMD. An additional feedforward mechanism was added to inhibit the processing of wide-field visual changes caused by ego-motion. The robot had a so-called reactive control structure, capable of controlling it using only input from its infrared sensors. The robot performed three specific behaviours: exploratory activity, produced by the Noise group of random spiking cells (forward translation); simple collision avoidance in response to the infrared sensors (rotation); and avoidance of more distant obstacles in response to LGMD input (rotation). The sensory inputs all converged onto the MotorOut cell group (a 10x10 array of linear threshold cells), which represented a motor map, with each set of cells within the array producing a specific motor output (e.g. the cells in the upper half of the array produced forward motion).
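
A schematic version of such a three-layer network might look like the sketch below; the retina size, delays, weights and the way feed-forward inhibition is triggered are illustrative assumptions rather than the published parameters of the Blanchard model:

```python
import numpy as np

class LGMDNetwork:
    """Sketch of a three-layer looming detector with feed-forward inhibition."""

    def __init__(self, shape=(16, 16), inh_weight=0.4, ffi_fraction=0.5):
        self.prev_frame = np.zeros(shape)   # last input frame (P-layer memory)
        self.delayed_inh = np.zeros(shape)  # inhibition held back by one frame
        self.inh_weight = inh_weight
        self.ffi_fraction = ffi_fraction

    def step(self, frame):
        # P layer: photoreceptors respond to local luminance change (edges).
        p = np.abs(frame - self.prev_frame)
        self.prev_frame = frame.copy()

        # Processing layer: excitation is passed on retinotopically at once;
        # inhibition is delayed by one frame and spread laterally.
        inhibition = self.delayed_inh
        self.delayed_inh = self._lateral_spread(p)
        s = np.maximum(0.0, p - self.inh_weight * inhibition)

        # Feed-forward inhibition: if most of the field changes at once
        # (ego-motion rather than a looming object), gate the output off.
        if (p > 0.1).mean() > self.ffi_fraction:
            return 0.0
        return s.sum()    # LGMD output: summed retinotopic excitation

    @staticmethod
    def _lateral_spread(x):
        # Average each unit with its four neighbours (lateral spread).
        return (x + np.roll(x, 1, 0) + np.roll(x, -1, 0)
                  + np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 5.0

# Toy looming stimulus: a dark square whose edges expand every frame.
net = LGMDNetwork()
for half in range(1, 8):
    frame = np.zeros((16, 16))
    frame[8 - half:8 + half, 8 - half:8 + half] = 1.0
    print(net.step(frame))    # output grows as the edges move faster
```

Run on the expanding square, the output grows from frame to frame, consistent with the result described below that spiking activity increases over time in response to a looming object.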

The results showed that when the robot was far from an obstacle, the movement of edges across its visual field was slow, and the lateral inhibition outweighed the excitatory signals triggered by the detection of those edges. As an obstacle drew closer, on the other hand, the edges of the object moved faster than the lateral inhibition could spread, and the stimulatory (S) cells were activated. A number of trials were carried out to test the spiking rate of the model LGMD in response to changes in the visual field. The resulting traces show that the rate of spiking activity does indeed increase over time in response to a looming object, and that there is a minimum distance threshold for initiating this spiking activity. The spikes did not, however, appear to reach a peak before collision in these trials; nevertheless, this cannot be ruled out completely, as the result could be attributable to technical inconsistencies within the model.

Franceschini (1992) implemented another biologically based automaton, with a purpose-specific algorithm, to carry out obstacle-avoidance behaviour. This wheeled, synchro-drive mobile platform was equipped with a panoramic compound eye inspired by the fly's (an array of electro-optical EMDs) and a sensor that evaluated angular speed in order to assess the distance between the vehicle and an approaching obstacle. Two sets of visual sensors make up the robot's visual system: the "frontal sensors", responsible for movement towards a target, and the "compound eye" array of photoreceptors, which provides a panoramic view of the robot's surroundings and relies on the integration of input from correlation-type EMDs. The implemented network integrates the signals delivered by the EMDs and controls the steering responses of the vehicle in real time. The only major problem the robot encountered was deadlock, when the attraction towards a target and the avoidance of an obstacle cancelled each other out. The advantage of dealing only with local data, within a tight feedback loop between perception and action, is that it allows the robot to genuinely explore its environment. It also spares the robot any form of learning process that might eventually alter the "synaptic weights" in the circuits of the network and bias the robot's path (Franceschini et al, 1992).

Conclusion

Research has shown that motion detection is necessary to provide an animal with information about its own motion, as well as to gather information about the environment and react to it accordingly. The aim of this field of AI is to strive towards a complete understanding of the neural mechanisms that control the motor activities performed by animals. It may one day be possible to infer from the cellular morphology of a neuronal circuit the operations it can perform, and eventually to gain an in-depth understanding of the sophisticated processing carried out by the human brain in real time.

This kind of research, developing biologically realistic models of sensory processing applied to mobile platforms (of which the LGMD model by Blanchard is a significant example), looks very promising with regard to a more constructive cooperation between the various fields of science. The study of the principles at work in simple invertebrate nervous systems is an important step in the attempt to simulate complex behaviour with simplified mechanisms and fewer processing units. With the aid of artificial evolution we may be able to achieve a previously unimaginable degree of behavioural complexity with relatively simple systems. The first attempts in this direction have already been made in the analysis of long-term sensorimotor adaptations, using a genetic algorithm (GA) in which the visual morphology of the selected agents was allowed to evolve along with the rest of the control network (Husbands et al, 1997).

Fig. 3: Wabot-2, at the Tokyo Exposition. Building this kind of robot is a challenging task, because the dexterity of the human hand is perhaps the most difficult function to recreate mechanically. Although Wabot-2's performance may not be emotional, with an electronic scanning eye and high-quality components its technical accuracy is extremely high (Encarta, 1999).

References

· Egelhaaf M and Borst A (1992). Motion computation and visual orientation in flies (mini review). Comp. Biochem. Physiol. Vol 104A, No 4, pp. 659-673

· Poggio T and Koch C (1987). Synapses that compute motion. Scientific American, May 1987, pp. 42-48

· Rind C and Simmons P J (1999). Seeing what is coming: building collision-sensitive neurones. Trends Neurosci. Vol 22, pp. 215-220

· Srinivasan M V, Zhang S and Chahl J (2001). Landing Strategies in Honeybees, and Possible Applications to Autonomous Airborne Vehicles. Biol. Bull. Vol 200, pp. 216-221

· Blanchard M, Rind C and Verschure P F M J (2000). Collision avoidance using a model of the locust LGMD neuron. Robotics and Autonomous Systems Vol 30, pp. 17-38

· Franceschini N, Pichon J M and Blanes C (1992). From insect vision to robot vision. Phil. Trans. R. Soc. Lond. B Vol 337 pp. 283-294

· Osorio D (2001). Motion Detection & Synaptic Computation. AMI lecture handout for lectures 7 and 8

· Osorio D (2001). Robotic collision avoidance and smooth landings. AMI lecture handout for lecture 9

· Husbands P, Harvey I, Cliff D and Miller G (1997). Artificial Evolution: A New Path for Artificial Intelligence? Brain and Cognition. Vol 34, pp. 130-159

· Microsoft Encarta Encyclopedia 1999