A Novel Overactuated Quadrotor Unmanned Aerial Vehicle: Modeling, Control, and Experimental Validation
Ryll, Markus,Bulthoff, Heinrich H.,Giordano, Paolo Robuffo IEEE 2015 IEEE Transactions on Control Systems Technology Vol. No.
Standard quadrotor unmanned aerial vehicles (UAVs) possess limited mobility because of their inherent underactuation, that is, the availability of only four independent control inputs (the four propeller spinning velocities) versus the 6 degrees of freedom parameterizing the quadrotor position/orientation in space. Thus, the quadrotor pose cannot track arbitrary trajectories in space (e.g., it can hover on the spot only in a horizontal attitude). Because UAVs are increasingly employed as service robots for interaction with the environment, this loss of mobility due to underactuation can be a limiting factor. In this paper, we present a novel design for a quadrotor UAV with tilting propellers that is able to overcome these limitations. Indeed, the additional set of four control inputs actuating the propeller tilting angles is shown to yield full actuation of the quadrotor position/orientation in space, thus allowing it to behave as a fully actuated flying vehicle. We then develop a comprehensive modeling and control framework for the proposed quadrotor, and subsequently illustrate the hardware and software specifications of an experimental prototype. Finally, the results of several simulations and real experiments are reported to illustrate the capabilities of the proposed novel UAV design.
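The actuation argument in the abstract (four inputs versus 6 degrees of freedom, recovered by four extra tilt inputs) can be checked with a toy rank computation. The geometry, arm length, and drag coefficient below are assumed for illustration and are not taken from the paper:

```python
import numpy as np

def rot(axis, angle):
    """Rodrigues rotation matrix about a unit axis."""
    x, y, z = axis
    K = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

L, kd = 0.25, 0.02                      # assumed arm length and drag/thrust ratio
e_z = np.array([0.0, 0.0, 1.0])
r = [np.array(p, float) for p in [(L, 0, 0), (0, L, 0), (-L, 0, 0), (0, -L, 0)]]
tilt_axes = [ri / L for ri in r]        # each propeller tilts about its own arm
spin = [1, -1, 1, -1]                   # alternating propeller spin directions

def wrench(f, alpha):
    """Total body-frame force/torque for thrusts f and tilt angles alpha."""
    W = np.zeros(6)
    for i in range(4):
        n = rot(tilt_axes[i], alpha[i]) @ e_z      # tilted thrust direction
        F = f[i] * n
        tau = np.cross(r[i], F) + spin[i] * kd * f[i] * n
        W += np.concatenate([F, tau])
    return W

def input_jacobian(f, alpha, eps=1e-6):
    """Finite-difference Jacobian of the wrench w.r.t. all 8 inputs."""
    u0 = np.concatenate([f, alpha])
    W0 = wrench(f, alpha)
    J = np.zeros((6, 8))
    for j in range(8):
        u = u0.copy()
        u[j] += eps
        J[:, j] = (wrench(u[:4], u[4:]) - W0) / eps
    return J

J = input_jacobian(np.full(4, 2.0), np.zeros(4))   # hover-like setpoint
print(np.linalg.matrix_rank(J[:, :4]), np.linalg.matrix_rank(J))  # 4 6
```

With thrusts alone the wrench map has rank 4 (the underactuated case); adding the four tilt inputs raises it to 6, i.e., full actuation of position and orientation.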
Human Areas V3A and V6 Compensate for Self-Induced Planar Visual Motion
Fischer, E.,Bulthoff, Heinrich H.,Logothetis, Nikos K.,Bartels, A. Cell Press 2012 Neuron Vol.73 No.6
Little is known about mechanisms mediating a stable perception of the world during pursuit eye movements. Here, we used fMRI to determine to what extent human motion-responsive areas integrate planar retinal motion with nonretinal eye movement signals in order to discard self-induced planar retinal motion and to respond to objective ("real") motion. In contrast to other areas, V3A lacked responses to self-induced planar retinal motion but responded strongly to head-centered motion, even when retinally canceled by pursuit. This indicates a near-complete multimodal integration of visual with nonvisual planar motion signals in V3A. V3A could be mapped selectively and robustly in every single subject on this basis. V6 also reported head-centered planar motion, even when 3D flow was added to it, but was suppressed by retinal planar motion. These findings suggest a dominant contribution of human areas V3A and V6 to head-centered motion perception and to perceptual stability during eye movements.
The MPI CyberMotion Simulator: A Novel Research Platform to Investigate Human Control Behavior
Nieuwenhuizen, Frank M.,Bulthoff, Heinrich H. Korean Institute of Information Scientists and Engineers 2013 Journal of Computing Science and Engineering Vol.7 No.2
The MPI CyberMotion Simulator provides a unique motion platform, as it features an anthropomorphic robot with a large workspace, combined with an actuated cabin and a linear track for lateral movement. This paper introduces the simulator as a tool for studying human perception, and compares its characteristics to conventional Stewart platforms. Furthermore, an experimental evaluation is presented in which multimodal human control behavior is studied by identifying the visual and vestibular responses of participants in a roll-lateral helicopter hover task. The results show that the simulator motion allows participants to increase tracking performance by changing their control strategy, shifting from reliance on visual error perception to reliance on simulator motion cues. The MPI CyberMotion Simulator has proven to be a state-of-the-art motion simulator for psychophysical research to study humans with various experimental paradigms, ranging from passive perception experiments to active control tasks, such as driving a car or flying a helicopter.
Shared Control: Balancing Autonomy and Human Assistance with a Group of Quadrotor UAVs
Franchi, A.,Secchi, C.,Ryll, M.,Bulthoff, H. H.,Giordano, P. R. IEEE 2012 IEEE robotics & automation magazine Vol.19 No.3
Robustness and flexibility are the main advantages of multiple-robot systems over single-robot ones, as the recent literature shows. The use of multiple unmanned aerial vehicles (UAVs) combines these benefits with the agility and pervasiveness of aerial platforms [1], [2]. The degree of autonomy of the multi-UAV system should be tuned according to the specificities of the situation under consideration. For regular missions, fully autonomous UAV systems are often appropriate, but, in general, the use of semiautonomous groups of UAVs, supervised or partially controlled by one or more human operators, is the only viable solution for dealing with the complexity and unpredictability of real-world scenarios, e.g., in search and rescue missions or in the exploration of large/cluttered environments [3]. In addition, human presence is also mandatory for taking responsibility for critical decisions in high-risk situations [4].
Active In-Hand Object Recognition on a Humanoid Robot
Browatzki, Bjorn,Tikhanoff, Vadim,Metta, Giorgio,Bulthoff, Heinrich H.,Wallraven, Christian IEEE 2014 IEEE TRANSACTIONS ON ROBOTICS Vol.30 No.5
For any robot, the ability to recognize and manipulate unknown objects is crucial for working successfully in natural environments. Object recognition and categorization is a very challenging problem, as 3-D objects often give rise to ambiguous 2-D views. Here, we present a perception-driven exploration and recognition scheme for in-hand object recognition implemented on the iCub humanoid robot. In this setup, the robot actively seeks out object views to optimize the exploration sequence. This is achieved by regarding the object recognition problem as a localization problem: we search for the most likely viewpoint position on the viewsphere of all objects. This problem can be solved efficiently using a particle filter that fuses visual cues with associated motor actions. Based on the state of the filter, we can predict the next best viewpoint after each recognition step by searching for the action that leads to the highest expected information gain. We conduct extensive evaluations of the proposed system in simulation as well as on the actual robot and show the benefit of perception-driven exploration over passive, vision-only processes at discriminating between highly similar objects. We demonstrate that objects are recognized faster and, at the same time, with higher accuracy.
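The "recognition as localization on a viewsphere" idea can be sketched with a tiny discrete Bayes filter standing in for the paper's particle filter. The two objects, their per-view signatures, and the Gaussian observation model below are all invented for illustration:

```python
import numpy as np

views = np.array([[0.0, 1.0, 2.0, 3.0],   # object A: signature of each viewpoint
                  [0.0, 1.0, 2.0, 9.0]])  # object B: identical except view 3
SIGMA = 1.0                                # assumed observation noise

def likelihood(z):
    return np.exp(-0.5 * ((views - z) / SIGMA) ** 2)

def entropy(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def update(belief, z):
    post = belief * likelihood(z)
    return post / post.sum()

def rotate(belief, a):
    # Rotating the object in hand shifts every viewpoint hypothesis by a.
    return np.roll(belief, a, axis=1)

def expected_info_gain(belief, a):
    b = rotate(belief, a)
    z_candidates = np.unique(views)        # coarse discretization of observations
    pz = np.array([np.sum(b * likelihood(z)) for z in z_candidates])
    pz /= pz.sum()
    exp_post_h = sum(p * entropy(update(b, z)) for p, z in zip(pz, z_candidates))
    return entropy(b) - exp_post_h

belief = np.full(views.shape, 1.0 / views.size)   # uniform prior over (object, view)
belief = update(belief, 0.0)                      # first glance: an ambiguous view
best = max(range(1, 4), key=lambda a: expected_info_gain(belief, a))
print(best)   # the rotation that reaches the one view where the objects differ
```

After the ambiguous first observation, maximizing expected information gain selects the rotation toward the single discriminative view, which is the "next best viewpoint" behavior the abstract describes.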
Bilateral Teleoperation of Groups of Mobile Robots With Time-Varying Topology
Franchi, A.,Secchi, C.,Son, Hyoung Il,Bulthoff, H. H.,Giordano, P. R. IEEE 2012 IEEE transactions on robotics Vol.28 No.5
In this paper, a novel decentralized control strategy for bilaterally teleoperating heterogeneous groups of mobile robots from different domains (aerial, ground, marine, and underwater) is proposed. By using a decentralized control architecture, the group of robots, which is treated as the slave side, is able to navigate in a cluttered environment, avoiding obstacles and interrobot collisions while following the human motion commands. Simultaneously, the human operator acting on the master side is provided with suitable force feedback informative of the group response and of the interaction with the surrounding environment. Using passivity-based techniques, we allow the behavior of the group to be as flexible as possible, with arbitrary split and join events (e.g., due to interrobot visibility/packet losses or specific task requirements), while guaranteeing the stability of the system. We provide a rigorous analysis of the system stability and steady-state characteristics and validate performance through human/hardware-in-the-loop simulations, considering a heterogeneous fleet of unmanned aerial vehicles (UAVs) and unmanned ground vehicles as a case study. Finally, we also provide an experimental validation with four quadrotor UAVs.
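The role of passivity across split/join events can be illustrated with a one-dimensional energy-tank sketch: a stiffness change caused by a topology event is admitted only if previously dissipated energy can pay for it. This is a generic energy-tank construction, not the paper's controller, and every constant below is invented:

```python
# 1-D sketch: the master holds position, the slave-group centroid is coupled
# to it by a spring; a "join" event doubles the coupling stiffness at t = 2 s.
dt, b, m = 0.001, 1.0, 1.0    # time step, slave damping, slave mass
k, k_join = 5.0, 10.0         # coupling stiffness before/after the join event
tank = 2.0                    # tank primed with an initial energy budget [J]
xm = 0.0                      # master (operator) position, held fixed here
xs, vs = 1.0, 0.0             # slave-group centroid state

for step in range(5000):
    if step == 2000:                               # topology change: group joins
        dE = 0.5 * (k_join - k) * (xs - xm) ** 2   # energy jump of the spring
        if tank > dE:                              # switch only if tank can pay
            tank -= dE
            k = k_join
    F = k * (xm - xs)             # coupling force (fed back to the operator)
    vs += dt * (F - b * vs) / m   # unit-mass slave with viscous damping
    xs += dt * vs
    tank += dt * b * vs ** 2      # harvest dissipated power into the tank
```

Because every stiffness switch is gated by the tank, the energy injected into the switched system is bounded by what was previously dissipated, which is the essence of the passivity argument for arbitrary split/join events.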