A reward driven connectionist model of cognitive development
Children learn many skills under self-supervision, where exemplars of target responses are not available. Connectionist models that rely on supervised learning are therefore not appropriate for modelling all forms of cognitive development. A task in this class, for which considerable data have been gathered in relation to Karmiloff-Smith's model of Representational Redescription (RR) (Karmiloff-Smith, 1973, 1992), is one in which children learn through trial and error to balance objects. Data from these studies have been used to derive a training set, and a new approach to modelling cognitive development has been taken in which learning through a dual backpropagation network (Munro, 1987) is reward-driven. Results show that the model can successfully learn and simulate aspects of children's behaviour without explicit training information being defined. This approach, however, is incapable of modelling all levels of the RR model.
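The abstract names Munro's (1987) dual backpropagation scheme as the learning mechanism: one network proposes an action, a second network learns to predict the scalar reward, and the action network is then trained by backpropagating predicted reward through the reward model, so no target responses are ever required. The following is a minimal sketch of that idea under stated assumptions — the toy "balancing" task, the network sizes, learning rates, and exploration noise here are illustrative choices, not the paper's actual setup.

```python
import numpy as np

def reward(x, a):
    # Hypothetical toy task: for input x, the "balancing" action is 0.5 * x.
    # Only this scalar feedback is ever observed -- no target action is given.
    return 1.0 - (a - 0.5 * x) ** 2

def avg_reward(w, b, n=41):
    # Mean reward of the linear actor a = w*x + b over a grid of inputs.
    xs = np.linspace(-1.0, 1.0, n)
    return float(np.mean(reward(xs, w * xs + b)))

def train(steps=4000, lr=0.05, noise=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # Action network (actor): a simple linear map a = w*x + b.
    w, b = rng.normal(), 0.0
    # Reward-prediction network (critic): a tiny tanh MLP over (x, a).
    W1 = rng.normal(size=(8, 2)) * 0.5
    b1 = np.zeros(8)
    W2 = rng.normal(size=8) * 0.5
    b2 = 0.0
    for _ in range(steps):
        x = rng.uniform(-1.0, 1.0)
        a = w * x + b + noise * rng.normal()  # exploratory action
        r = reward(x, a)
        # Forward pass of the reward model at the tried (x, a).
        z = np.array([x, a])
        h = np.tanh(W1 @ z + b1)
        r_hat = W2 @ h + b2
        # Gradient of *predicted* reward w.r.t. the action -- this is the
        # "dual" backward pass that reaches the action network.
        dh = W2 * (1.0 - h ** 2)
        dr_da = dh @ W1[:, 1]
        # Train the reward model on the observed scalar reward (squared error).
        err = r_hat - r
        gh = err * dh
        W2 -= lr * err * h
        b2 -= lr * err
        W1 -= lr * np.outer(gh, z)
        b1 -= lr * gh
        # Gradient ascent on predicted reward for the actor's parameters.
        w += lr * dr_da * x
        b += lr * dr_da
    return w, b
```

The exploration noise is essential: without it the critic only ever sees actions on the actor's current policy and cannot estimate how reward varies with the action.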
Item Type | Other
---|---
Divisions | sbu_scs, ri_st
Date Deposited | 18 Nov 2024 11:31
Last Modified | 18 Nov 2024 11:31
903610.pdf