Improving Learning for Embodied Agents in Dynamic Environments by State Factorisation

Jacob, D., Polani, D. and Nehaniv, C.L. (2004) Improving Learning for Embodied Agents in Dynamic Environments by State Factorisation. ISSN 2945-9133

A new reinforcement learning algorithm designed specifically for robots and embodied systems is described. Conventional reinforcement learning methods intended for learning general tasks suffer from a number of disadvantages in this domain, including slow learning speed, an inability to generalise between states, reduced performance in dynamic environments, and a lack of scalability. Factor-Q, the new algorithm, uses factorised states and actions, coupled with multiple structured rewards, to address these issues. Initial experimental results demonstrate that, unlike conventional methods, Factor-Q learns as efficiently in dynamic environments as in static ones. Further, in the specimen task, obstacle avoidance is improved by over two orders of magnitude compared with standard Q-learning.
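The abstract does not give the algorithm's pseudocode, so the following is a minimal sketch of one plausible reading of "factorised state and action, coupled with multiple structured rewards": each state factor maintains its own Q-table and learns from its own reward channel, and action values are combined across factors at decision time. All names and hyperparameters (FactoredQLearner, ACTIONS, ALPHA, GAMMA, EPSILON) are illustrative assumptions, not the paper's published method.

```python
import random
from collections import defaultdict

# Illustrative constants (assumptions, not taken from the paper).
ACTIONS = ["forward", "left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

class FactoredQLearner:
    """Sketch of factored Q-learning with per-factor reward channels."""

    def __init__(self, n_factors):
        # One Q-table per state factor: q[i][(factor_state, action)] -> value.
        self.q = [defaultdict(float) for _ in range(n_factors)]

    def combined_value(self, factored_state, action):
        # Combine per-factor estimates by summation (one simple choice;
        # the paper may combine them differently).
        return sum(q[(s_i, action)] for q, s_i in zip(self.q, factored_state))

    def choose_action(self, factored_state):
        # Epsilon-greedy over the combined value.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.combined_value(factored_state, a))

    def update(self, state, action, rewards, next_state):
        # Each factor is updated from its own structured reward channel,
        # using only its own component of the factored state.
        for q, r, s_i, s_i_next in zip(self.q, rewards, state, next_state):
            best_next = max(q[(s_i_next, a)] for a in ACTIONS)
            td_target = r + GAMMA * best_next
            q[(s_i, action)] += ALPHA * (td_target - q[(s_i, action)])
```

Because each factor's table is indexed only by that factor's component of the state, experience generalises across all states sharing that component, which is one way such a scheme could speed up learning relative to a monolithic Q-table.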


902207.pdf
Available under Creative Commons 4.0
