1.6.14 LIMOMAN

LIMOMAN (developmental Learning of Internal MOdels for robotic MANipulation based on motor primitives and multisensory integration) addresses the key challenge of improving the dexterous manipulation abilities of current robots, drawing inspiration from three main aspects of the human motor control system: internal models, learning, and multisensory integration.

Internal models that encode the robot's sensorimotor capabilities and the dynamics of its interaction with the environment are exploited to generate predictions that improve both robot perception and motion control. These models are acquired from scratch or progressively refined and adapted through learning and optimization techniques, using sensorimotor data collected by the robot during goal-directed actions. Combining different sensory modalities (vision, proprioception, touch) with probabilistic (Bayesian) techniques makes the resulting robot behaviors effective, robust, and safe.
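As a concrete illustration of the probabilistic integration described above, the sketch below fuses a forward-model prediction of the robot's hand position with noisy visual and tactile measurements via precision-weighted Gaussian fusion. This is a minimal, hypothetical example rather than code from the LIMOMAN repositories; all function names and noise values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not from the LIMOMAN repo): 1-D Bayesian fusion of a
# forward-model prediction with vision and touch measurements.
# All names and noise levels below are illustrative assumptions.

def fuse_gaussians(means, variances):
    """Precision-weighted fusion of independent Gaussian estimates."""
    precisions = 1.0 / np.asarray(variances)
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * np.asarray(means)).sum()
    return fused_mean, fused_var

# Forward model: predict the hand position from the previous estimate
# and the motor command (the prediction acts as the prior).
prev_pos, motor_cmd = 0.10, 0.05        # metres, commanded displacement
pred_mean = prev_pos + motor_cmd        # internal-model prediction
pred_var = 0.004                        # grows with model uncertainty

# Noisy observations of the same quantity from two modalities.
vision_mean, vision_var = 0.16, 0.002   # camera-based hand localisation
touch_mean, touch_var = 0.14, 0.010     # contact-based estimate

post_mean, post_var = fuse_gaussians(
    [pred_mean, vision_mean, touch_mean],
    [pred_var, vision_var, touch_var],
)
print(f"posterior: {post_mean:.4f} m (var {post_var:.5f})")
```

Because each source is weighted by its precision, a degraded modality (e.g., occluded vision) automatically contributes less to the posterior, which is one way such probabilistic models support the robustness mentioned above.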
 
We demonstrate our solutions on the iCub humanoid robot in complex manipulation tasks with several requirements: the robot acts on common, unmodeled objects (adaptability and robustness), in different contexts (flexibility and creativity), and at different levels of complexity (scalability). Part of this research has also contributed to the POETICON++ project (http://www.poeticon.eu/).
The code is open and available on GitHub:
LIMOMAN repo - https://github.com/lorejam/limoman
POETICON repo - https://github.com/lorejam/poeticon 

The project was funded under the European Community's 7th Framework Programme through a Marie Skłodowska-Curie IEF grant [PIEF-GA-2013-628315]. The funding officially started in June 2014 and ended in May 2016; however, the research continues...