Motivation Driven Learning of Action Affordances
Survival in the animal realm often depends on the ability to discern the possibilities for action that each situation offers. This paper argues that affordance learning is a powerful ability for adaptive, embodied, situated agents, and presents a motivation-driven method for learning affordances. The proposed method treats the agent and its environment as a single unit, intrinsically relating the agent's interactions to fluctuations in its internal motivation. Since the motivational state is an expression of the agent's physiology, the causal link between interactions and their effects on the motivational state is exploited as a principle for learning object affordances. The hypothesis is tested in the Webots 4.0 simulator with a Khepera robot.
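The abstract describes affordance learning driven by the effect of interactions on the agent's internal motivational state. The following is a minimal sketch of that idea, not the paper's actual implementation: the class names, the `energy`/`damage` variables, and the scalar `wellbeing` summary are all illustrative assumptions, and the learning rule shown (a running average of the change in wellbeing per object-action pair) is one simple way such causality could be exploited.

```python
# Minimal sketch (assumed, not the paper's algorithm): affordances are learned as the
# expected effect of (object, action) interactions on the agent's motivational state.
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class MotivationalState:
    # Hypothetical physiological variables underlying motivation.
    energy: float = 1.0
    damage: float = 0.0

    def wellbeing(self) -> float:
        # A simple scalar summary of the motivational state.
        return self.energy - self.damage


@dataclass
class AffordanceLearner:
    # Running average of how each (object, action) pair changes wellbeing.
    estimates: dict = field(default_factory=lambda: defaultdict(float))
    counts: dict = field(default_factory=lambda: defaultdict(int))

    def update(self, obj: str, action: str, before: float, after: float) -> None:
        # The change in wellbeing caused by the interaction is the learning signal.
        delta = after - before
        key = (obj, action)
        self.counts[key] += 1
        self.estimates[key] += (delta - self.estimates[key]) / self.counts[key]

    def best_action(self, obj: str, actions: list[str]) -> str:
        # Prefer the action whose learned affordance most improves wellbeing.
        return max(actions, key=lambda a: self.estimates[(obj, a)])


# Usage with made-up interactions: "eating" a food object raises energy,
# so the learner comes to associate food with the "eat" affordance.
state = MotivationalState()
learner = AffordanceLearner()
before = state.wellbeing()
state.energy += 0.3
learner.update("food", "eat", before, state.wellbeing())
print(learner.best_action("food", ["eat", "push"]))  # -> "eat"
```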
| Item Type | Other |
|---|---|
| Divisions | sbu_scs |
| Date Deposited | 18 Nov 2024 11:39 |
| Last Modified | 18 Nov 2024 11:39 |
Full text: 902180.pdf