Internal Event
15 July 2009
09:15 – 09:45  Dirk Köster, Thomas Schack, Robert Haschke, Thomas Hermann, Matthias Weigelt, Helge Ritter: Sensory-motor representations and error learning – experimental analysis of manual intelligence in first-order reality, virtual reality and augmented reality

09:55 – 10:25  Christian Peters, Sven Wachsmuth, Thomas Hermann: Task assistance for persons with cognitive disabilities

10:35 – 11:05  Rebecca Förster, Elena Carbone, Hendrik Kösling, Thomas Hermann, Bettina Bläsing: 'Speed stacking': A scenario for studying learning, automatization, and cooperation in a complex bimanual task

11:15 – 12:00  James Bonaiuto, USC, LA: Neural Models of the Mirror System in Action Recognition and Production

Abstract: The role of mirror neurons in action recognition and imitation is commonly emphasized. However, it has recently been suggested that they evolved for feedback-based control during action production and were later exapted for recognition of actions performed by others. I will present the MNS2 model of the monkey mirror system for action recognition, and augmented competitive queuing, a model of opportunistic action scheduling that utilizes the mirror system for learning. I will also report progress on the Integrated Learning of Grasping and Affordances (ILGA) model, which will provide the basis for an integration of the mirror system and online control of manual action.

12:10 – 12:55  Simone Frintrop, Universität Bonn: Visual attention for mobile robots

Abstract: Visual attention is a mechanism of human perception that determines relevant regions in a scene and provides these regions to higher-level processing. This mechanism enables humans to act efficiently in their visually highly complex world. Cognitive systems face similar problems: a large amount of information has to be processed, usually within a limited time, often even in real time. Many approaches in recent years have therefore been concerned with a pre-selection of relevant data. Computational attention systems have turned out to be an especially useful way to let a system decide autonomously which parts of the data are currently relevant within a scene. This talk presents the computational attention system VOCUS and several applications for cognitive systems. VOCUS is equipped with a bottom-up, data-driven module and a top-down, target-directed module. The system is thus able to deal with unknown environments and situations on the one hand, and can utilize prior knowledge for the visual search of regions and objects on the other. Since the system is real-time capable and largely robust to illumination changes, transformations, and clutter, it is well suited for mobile systems such as robots or hand-held cameras. We have used VOCUS in several real-world applications such as object recognition, robot localization and visual object tracking, and present various experiments and results.

13:05 – 13:15  Short plenary session
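The bottom-up, data-driven module described in the VOCUS abstract builds on local feature contrast: regions that differ strongly from their surround are flagged as salient. As a generic illustration only (a minimal sketch of center-surround contrast on an intensity channel, not the actual VOCUS implementation, which combines several feature channels and scales), such a saliency map can be computed like this:

```python
import numpy as np

def box_blur(img, k):
    """Box-filter blur of a 2-D array with an odd window size k,
    using the cumulative-sum (integral image) trick; edges are
    handled by replicating border pixels."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col so window sums index cleanly
    h, w = img.shape
    return (c[k:k + h, k:k + w] - c[:h, k:k + w]
            - c[k:k + h, :w] + c[:h, :w]) / (k * k)

def bottom_up_saliency(intensity, scales=(3, 7)):
    """Center-surround contrast: absolute difference between a fine
    (center) and a coarse (surround) blur, normalized to [0, 1]."""
    center = box_blur(intensity, scales[0])
    surround = box_blur(intensity, scales[1])
    s = np.abs(center - surround)
    return s / s.max() if s.max() > 0 else s

# A dark image with one bright blob: the saliency peak falls on the blob.
img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0
sal = bottom_up_saliency(img)
r, c = np.unravel_index(np.argmax(sal), sal.shape)
print(int(r), int(c))
```

A top-down, target-directed module (as in VOCUS) would additionally weight the feature channels by how well they discriminate a search target from the background, rather than relying on contrast alone.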