Computational Models of the Fly Visual System

26 July 2010

Biological organisms navigate through complex visual environments, locating items of interest under unconstrained, real-world conditions. Flying insects can pursue and intercept targets with capture rates exceeding 97%, a capability vastly superior to that of current artificial systems. Underlying this behaviour are two main classes of visual neurons. Small target motion detecting neurons respond to objects smaller than the field of view of a single receptor, even as those objects move within cluttered surrounds. Wide-field motion neurons support navigation, accurately encoding angular velocity independently of environmental structure and in the presence of significant noise. The neural mechanisms that underlie target detection and velocity estimation have now been emulated and may be exploited in the design of new artificial vision systems. These intelligent cameras make better use of limited bandwidth and operate in complex lighting environments that cause other systems to fail. They can encode the ego-motion of a moving platform whilst simultaneously enhancing salient features. Used as pre-processing stages, these bio-inspired algorithms can support systems that perform higher-order tasks such as target tracking, optical flow calculation, camouflage breaking and pattern matching across multiple view ports. At its core, this technology is a process for extracting maximum information from 2D data sources. It can be fused with existing sensor technology and need not be limited to visual wavelengths; indeed, its strong performance under low-resolution constraints makes it well suited for integration into both visual and IR applications.
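The directional motion signal that feeds fly wide-field neurons is classically modelled by the Hassenstein-Reichardt elementary motion detector: the signal from one photoreceptor is delayed (low-pass filtered) and correlated with the undelayed signal of its neighbour, and the mirror-symmetric arm is subtracted to yield a direction-selective output. The sketch below is a minimal toy illustration of that general principle, not the specific model referred to above; the function name, filter time constant, and test stimulus are my own assumptions.

```python
import numpy as np

def reichardt_correlator(left, right, tau=5.0, dt=1.0):
    """Toy Hassenstein-Reichardt elementary motion detector (EMD).

    left, right : 1-D luminance signals from two neighbouring
                  photoreceptors, sampled every `dt` time units.
    tau         : time constant of the first-order low-pass "delay" filter.
    Returns the correlator output over time; positive values indicate
    motion in the left-to-right direction.
    """
    alpha = dt / (tau + dt)  # discrete first-order low-pass coefficient

    def lowpass(x):
        y = np.zeros_like(x, dtype=float)
        for i in range(1, len(x)):
            y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
        return y

    # Each arm correlates a delayed signal with its undelayed neighbour;
    # subtracting the mirror-symmetric arm gives direction selectivity.
    return lowpass(left) * right - lowpass(right) * left

# A bright edge sweeping left -> right across the two receptors:
t = np.arange(200)
left = (t > 50).astype(float)   # edge reaches the left receptor first...
right = (t > 60).astype(float)  # ...and the right receptor 10 steps later
out = reichardt_correlator(left, right)
print(out.sum() > 0)  # net positive response: left-to-right motion
```

Swapping the two inputs (an edge moving right-to-left) negates the output, which is the antisymmetry that lets opponent pairs of such detectors encode motion direction.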