Implicit Sensorimotor Mapping of the Peripersonal Space by Gazing and Reaching
Chinellato, E., Antonelli, M., Grzyb, B. J., del Pobil, A. P. IEEE 2011 IEEE Transactions on Autonomous Mental Development Vol.3 No.1
<P>Primates often perform coordinated eye and arm movements, contextually fixating and reaching towards nearby objects. This combination of looking and reaching to the same target is used by infants to establish an implicit visuomotor representation of the peripersonal space, useful for both oculomotor and arm motor control. In this work, taking inspiration from such behavior and from primate visuomotor mechanisms, a shared sensorimotor map of the environment, built on a radial basis function framework, is configured and trained by the coordinated control of eye and arm movements. Computational results confirm that the approach is especially suitable for the problem at hand, and for implementation on a real humanoid robot. Through exploratory gazing and reaching actions, either free or goal-based, the artificial agent learns to perform direct and inverse transformations between stereo vision, oculomotor, and joint-space representations. The integrated sensorimotor map, which contextually represents the peripersonal space through different vision and motor parameters, is never made explicit; rather, it emerges through the interaction of the agent with the environment.</P>
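The core idea of the abstract above, learning a transformation between motor and spatial coordinates with a radial basis function network trained on exploratory movements, can be sketched minimally as follows. This is not the paper's implementation: the planar two-link "arm", link lengths, number of centers, and RBF width are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical planar 2-DOF arm: forward kinematics from joint
# angles to a 2-D end-point position (link lengths are illustrative).
def forward_kinematics(q):
    l1, l2 = 1.0, 0.8
    x = l1 * np.cos(q[:, 0]) + l2 * np.cos(q[:, 0] + q[:, 1])
    y = l1 * np.sin(q[:, 0]) + l2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# Gaussian radial basis features over the joint space.
def rbf_features(q, centers, width):
    d2 = ((q[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# "Exploratory" samples: random joint configurations and the
# spatial positions they produce.
q_train = rng.uniform([0.0, 0.0], [np.pi / 2, np.pi / 2], size=(500, 2))
x_train = forward_kinematics(q_train)

centers = rng.uniform([0.0, 0.0], [np.pi / 2, np.pi / 2], size=(64, 2))
Phi = rbf_features(q_train, centers, width=0.3)

# Linear readout trained by least squares: the learned "direct"
# joint-space -> spatial transformation.
W, *_ = np.linalg.lstsq(Phi, x_train, rcond=None)

# Evaluate on held-out joint configurations.
q_test = rng.uniform([0.0, 0.0], [np.pi / 2, np.pi / 2], size=(100, 2))
pred = rbf_features(q_test, centers, width=0.3) @ W
err = np.abs(pred - forward_kinematics(q_test)).mean()
print(f"mean abs error: {err:.4f}")
```

The inverse transformation (spatial position to joint angles) can be learned the same way by swapping inputs and targets, which is what makes the shared RBF map usable for both gaze and reach control.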
Pose Estimation Through Cue Integration: A Neuroscience-Inspired Approach
Chinellato, E., Grzyb, B. J., del Pobil, A. P. IEEE 2012 IEEE Transactions on Cybernetics Vol.42 No.2
<P>The aim of this paper is to improve the skills of robotic systems in their interaction with nearby objects. The basic idea is to enhance visual estimation of objects in the world through the merging of different visual estimators of the same stimuli. A neuroscience-inspired model of stereoptic and perspective orientation estimators, merged according to different criteria, is implemented on a robotic setup and tested in different conditions. Experimental results suggest that the integration of multiple monocular and binocular cues can make robot sensory systems more reliable and versatile. The same results, compared with simulations and data from human studies, show that the model is able to reproduce some well-recognized neuropsychological effects.</P>
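One standard criterion for merging estimates of the same stimulus, widely used in the neuroscience literature the abstract draws on, is reliability-weighted (maximum-likelihood) cue combination: each cue is weighted by its inverse variance, and the fused estimate is more reliable than either cue alone. The sketch below illustrates that criterion; the specific cue names and numbers are illustrative, not taken from the paper.

```python
def integrate_cues(mu_a, var_a, mu_b, var_b):
    """Reliability-weighted (maximum-likelihood) combination of two
    noisy estimates of the same quantity, e.g. a stereoptic and a
    perspective estimate of surface orientation."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    mu = w_a * mu_a + w_b * mu_b            # fused estimate
    var = 1 / (1 / var_a + 1 / var_b)       # fused (lower) variance
    return mu, var

# Example: stereo cue says 30 deg (variance 4); the less reliable
# perspective cue says 36 deg (variance 8).
mu, var = integrate_cues(30.0, 4.0, 36.0, 8.0)
print(mu, var)  # fused estimate leans toward the more reliable cue
```

Note that the fused variance is always below the smaller of the two input variances, which is the formal sense in which integrating multiple monocular and binocular cues makes the sensory system more reliable.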
When humanoid robots become human-like interaction partners: Corepresentation of robotic actions
Stenzel, Anna, Chinellato, Eris, Bou, Maria A. Tirado, del Pobil, Ángel P., Lappe, Markus, Liepelt, Roman American Psychological Association 2012 Journal of Experimental Psychology: Human Perception and Performance Vol.38 No.5
<P>In human-human interactions, corepresenting a partner's actions is crucial to successfully adjust and coordinate actions with others. Current research suggests that action corepresentation is restricted to interactions between human agents facilitating social interaction with conspecifics. In this study, we investigated whether action corepresentation, as measured by the social Simon effect (SSE), is present when we share a task with a real humanoid robot. Further, we tested whether the believed humanness of the robot's functional principle modulates the extent to which robotic actions are corepresented. We described the robot to participants either as functioning in a biologically inspired human-like way or in a purely deterministic machine-like manner. The SSE was present in the human-like but not in the machine-like robot condition. These findings suggest that humans corepresent the actions of nonbiological robotic agents when they start to attribute human-like cognitive processes to the robot. Our findings provide novel evidence for top-down modulation effects on action corepresentation in human-robot interaction situations.</P>
A Hierarchical System for a Distributed Representation of the Peripersonal Space of a Humanoid Robot
Antonelli, Marco, Gibaldi, Agostino, Beuth, Frederik, Duran, Angel J., Canessa, Andrea, Chessa, Manuela, Solari, Fabio, del Pobil, Angel P., Hamker, Fred, Chinellato, Eris, Sabatini, Silvio P. IEEE 2014 IEEE Transactions on Autonomous Mental Development Vol.6 No.4
<P>Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the arm motors to reach it. While research usually tackles the development of such abilities individually, in this work we integrate a number of computational models into a unified framework, and demonstrate in a humanoid torso the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal, and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at, and reaching target objects, which can be performed separately or cooperatively to support more structured and effective behaviors.</P>