Where do goals come from? A Generic Approach to Autonomous Goal-System Development

01 July 2014
Begin time: 
CITEC 1.016


Goals are abstractions that express agents' intentions and allow them to organize their behavior appropriately. They are essential for processes like planning as well as motor control and learning, and critically important in robotics, neuroscience, and psychology. How agents that start with no goals at hand can develop goals autonomously is an intriguing, long-standing, and fundamentally unsolved research question. I will propose a detailed conceptual and computational account of this problem. I will argue that goals can only be learned if the learning of the relevant effects of the agent's own actions (self-detection) is considered at the same time. Further, I will argue that goals, as high-level abstractions of intention, should be regarded as abstractions of rewards and values, i.e., lower-level intention mechanisms. Both goals and self-detection can then be learned as latent variables underlying rewards and otherwise raw sensors and actuators. Thereby a reward-based (i.e., reinforcement) problem is turned into a self-supervised motor control problem, which I show is universally possible. Experiments show that the proposed method is highly effective for dimensionality reduction in a reward-based recommender system application. Another experiment investigates intrinsic rewards induced by visual saliency. It shows that such task-unspecific, information-seeking rewards lead to self and goal representations corresponding to goal-directed reaching. Goals and action outcomes are thereby already exploited for action selection and learning by Goal Babbling within a fully closed action-perception-learning loop.
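
The core idea of treating a goal as a latent variable underlying observed rewards can be illustrated with a minimal sketch. Note that the toy setup below (a noise-free quadratic reward over raw sensor states, recovered by least squares) is an assumption for illustration only, not the talk's actual method or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: rewards are generated by an unseen goal g in
# sensor space via r(x) = -||x - g||^2; the agent sees only (x, r) pairs.
g_true = np.array([1.5, -0.7, 0.3])
X = rng.normal(size=(200, 3))                 # raw sensor samples
r = -np.sum((X - g_true) ** 2, axis=1)        # observed rewards only

# Treating the goal as latent: expanding the reward gives
# r + ||x||^2 = 2 g.x - ||g||^2, which is linear in x, so the
# latent goal can be recovered by ordinary least squares.
y = r + np.sum(X ** 2, axis=1)
A = np.hstack([2 * X, -np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
g_hat = coef[:3]                              # recovered latent goal

print(np.round(g_hat, 3))  # close to g_true
```

Once the goal is made explicit in this way, the reward-based problem becomes a self-supervised one: reaching `g_hat` is now an ordinary motor-control target rather than a reinforcement signal to be maximized.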