RISS Research Information Sharing Service

      • Temporal processing of self-motion: modeling reaction times for rotations and translations

        Soyka, Florian; Bülthoff, Heinrich H.; Barnett-Cowan, Michael. Springer-Verlag, 2013. Experimental Brain Research Vol.228 No.1

        In this paper, we show that differences in reaction times (RT) to self-motion depend not only on the duration of the profile, but also on the actual time course of the acceleration. We previously proposed models that described direction discrimination thresholds for rotational and translational motions based on the dynamics of the vestibular sensory organs (otoliths and semicircular canals). As these models have the potential to describe RT for different motion profiles (e.g., trapezoidal versus triangular acceleration profiles or varying profile durations), we validated these models by measuring RTs in human observers for a direction discrimination task using both translational and rotational motions varying in amplitude, duration and acceleration profile shape in a within-subjects design. In agreement with previous studies, amplitude and duration were found to affect RT, and importantly, we found an influence of the profile shape on RT. The models are able to fit the measured RTs with an accuracy of around 5 ms, and the best-fitting parameters are similar to those found from identifying the models based on threshold measurements. This confirms the validity of the modeling approach and links perceptual thresholds to RT. By establishing a link between vestibular thresholds for self-motion and RT, we show for the first time that RTs to purely inertial motion stimuli can be used as an alternative to threshold measurements for identifying self-motion perception models. This is advantageous, since RT tasks are less challenging for participants and make assessment of vestibular function less fatiguing. Further, our results provide strong evidence that the perceived timing of self-motion stimulation is largely influenced by the response dynamics of the vestibular sensory organs.
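
        The modeling approach lends itself to a compact illustration: pass the acceleration profile through a filter standing in for vestibular dynamics and read RT off as the time at which the filtered signal first crosses a detection threshold, plus a fixed motor latency. The sketch below is a minimal version of that idea, assuming a first-order high-pass filter and illustrative values for the time constant, threshold and latency; the paper's actual models use validated transfer functions of the otoliths and semicircular canals with fitted parameters.

        ```python
        import numpy as np

        def reaction_time(accel, dt=0.001, tau=5.7, threshold=0.1, motor_latency=0.2):
            # First-order high-pass filter as a stand-in for canal dynamics;
            # tau, threshold and motor_latency are illustrative values, not
            # the fitted parameters reported in the paper.
            filtered = np.zeros_like(accel)
            alpha = tau / (tau + dt)  # discrete-time high-pass coefficient
            for i in range(1, len(accel)):
                filtered[i] = alpha * (filtered[i - 1] + accel[i] - accel[i - 1])
            above = np.nonzero(np.abs(filtered) >= threshold)[0]
            if len(above) == 0:
                return None  # stimulus never reaches the detection threshold
            return above[0] * dt + motor_latency

        # Two 1 s profiles with equal peak acceleration (1 m/s^2)
        t = np.arange(0.0, 1.0, 0.001)
        trapezoid = np.clip(np.minimum(t, 1.0 - t) / 0.2, 0.0, 1.0)  # ramp-plateau-ramp
        triangle = 1.0 - np.abs(2.0 * t - 1.0)                       # linear up, linear down
        print(reaction_time(trapezoid))  # steeper onset crosses threshold sooner
        print(reaction_time(triangle))
        ```

        With these toy numbers the trapezoidal profile yields a shorter predicted RT than the triangular one of equal duration and peak, reproducing the qualitative profile-shape effect described above.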

      • Nonlinear ego-motion estimation from optical flow for online control of a quadrotor UAV

        Grabe, Volker; Bülthoff, Heinrich H.; Scaramuzza, Davide; Giordano, Paolo Robuffo. SAGE Publications, 2015. The International Journal of Robotics Research Vol.34 No.8

        For the control of unmanned aerial vehicles (UAVs) in GPS-denied environments, cameras have been widely exploited as the main sensory modality for addressing the UAV state estimation problem. However, the use of visual information for ego-motion estimation presents several theoretical and practical difficulties, such as data association, occlusions, and lack of direct metric information when exploiting monocular cameras. In this paper, we address these issues by considering a quadrotor UAV equipped with an onboard monocular camera and an inertial measurement unit (IMU). First, we propose a robust ego-motion estimation algorithm for recovering the UAV scaled linear velocity and angular velocity from optical flow by exploiting the so-called continuous homography constraint in the presence of planar scenes. Then, we address the problem of retrieving the (unknown) metric scale by fusing the visual information with measurements from the onboard IMU. To this end, two different estimation strategies are proposed and critically compared: the first exploiting the classical extended Kalman filter (EKF) formulation, and the second based on a novel nonlinear estimation framework. The main advantage of the latter scheme lies in the possibility of imposing a desired transient response on the estimation error when the camera moves with a constant acceleration norm with respect to the observed plane. We indeed show that, when compared against the EKF on the same trajectory and sensory data, the nonlinear scheme yields considerably superior performance in terms of convergence rate and predictability of the estimation. The paper is concluded by an extensive experimental validation, including onboard closed-loop control of a real quadrotor UAV, meant to demonstrate the robustness of our approach in real-world conditions.
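
        Under one common sign convention, the continuous homography constraint mentioned above states that the optical flow of a planar scene at a normalized image point p = (x, y, 1) is flow(p) = -(I - p e3^T) H p, with H = [omega]_x + (v/d) n^T; this is linear in the entries of H. The toy example below, using made-up motion values, generates noiseless flow from this relation and recovers H by least squares. The flow is invariant to adding multiples of the identity to H, so only the traceless part is compared; the paper's full pipeline further decomposes H into the angular and scaled linear velocities and fuses IMU measurements to recover the metric scale.

        ```python
        import numpy as np

        def skew(w):
            # Matrix [w]_x such that skew(w) @ x == np.cross(w, x)
            return np.array([[0, -w[2], w[1]],
                             [w[2], 0, -w[0]],
                             [-w[1], w[0], 0.0]])

        # Illustrative ground-truth motion (not values from the paper)
        omega = np.array([0.02, -0.01, 0.03])   # angular velocity, rad/s
        v_over_d = np.array([0.3, 0.1, -0.2])   # scaled linear velocity v/d
        n = np.array([0.0, 0.0, 1.0])           # plane normal in the camera frame
        H_true = skew(omega) + np.outer(v_over_d, n)  # continuous homography matrix

        # Noiseless synthetic flow at a grid of normalized image points
        e3 = np.array([0.0, 0.0, 1.0])
        pts = [np.array([x, y, 1.0]) for x in np.linspace(-0.5, 0.5, 5)
               for y in np.linspace(-0.5, 0.5, 5)]
        flows = [-(np.eye(3) - np.outer(p, e3)) @ H_true @ p for p in pts]

        # Recover H by stacking the linear constraints A @ vec(H) = b
        A, b = [], []
        for p, u in zip(pts, flows):
            M = -(np.eye(3) - np.outer(p, e3))
            for row in range(2):              # the third row is identically zero
                A.append(np.kron(M[row], p))  # coefficients of vec(H), row-major
                b.append(u[row])
        H_est = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0].reshape(3, 3)

        # Flow cannot distinguish H from H + c*I, so compare traceless parts
        traceless = lambda M: M - np.trace(M) / 3.0 * np.eye(3)
        print(np.allclose(traceless(H_est), traceless(H_true), atol=1e-8))  # True
        ```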

      • Multisensory integration in the estimation of walked distances.

        Campos, Jennifer L.; Butler, John S.; Bülthoff, Heinrich H. Springer-Verlag, 2012. Experimental Brain Research Vol.218 No.4

        When walking through space, both dynamic visual information (optic flow) and body-based information (proprioceptive and vestibular) jointly specify the magnitude of distance travelled. While recent evidence has demonstrated the extent to which each of these cues can be used independently, less is known about how they are integrated when simultaneously present. Many studies have shown that sensory information is integrated using a weighted linear sum, yet little is known about whether this holds true for the integration of visual and body-based cues for travelled distance perception. In this study, using virtual reality technologies, participants first travelled a predefined distance and subsequently matched this distance by adjusting an egocentric, in-depth target. The visual stimulus consisted of a long hallway and was presented in stereo via a head-mounted display. Body-based cues were provided either by walking in a fully tracked free-walking space (Exp. 1) or by being passively moved in a wheelchair (Exp. 2). Travelled distances were provided either through optic flow alone, body-based cues alone, or through both cues combined. In the combined condition, visually specified distances were either congruent (1.0) or incongruent (0.7 or 1.4) with distances specified by body-based cues. Responses reflect a consistent combined effect of both visual and body-based information, with an overall higher influence of body-based cues when walking and a higher influence of visual cues during passive movement. When comparing the results of Experiments 1 and 2, it is clear that both proprioceptive and vestibular cues contribute to travelled distance estimates during walking. These observed results were effectively described using a basic linear weighting model.
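
        The weighted linear sum mentioned above has a standard form in the cue-combination literature: the combined estimate is a convex combination of the single-cue estimates, often with weights set by the relative reliability (inverse variance) of each cue. A minimal sketch with illustrative numbers, not the weights fitted in this study:

        ```python
        import numpy as np

        def combine_estimates(d_visual, d_body, sigma_visual, sigma_body):
            # Weighted linear sum of two distance cues. The inverse-variance
            # weighting is the standard reliability rule from the literature;
            # the study fits weights empirically, so these are illustrative.
            w_visual = sigma_body**2 / (sigma_visual**2 + sigma_body**2)
            w_body = 1.0 - w_visual
            return w_visual * d_visual + w_body * d_body

        # Incongruent trial: visual gain 0.7 relative to the body-based distance
        print(combine_estimates(d_visual=7.0, d_body=10.0,
                                sigma_visual=1.5, sigma_body=1.0))  # ~9.1, closer to 10
        ```

        With the noisier visual cue, the combined estimate is pulled toward the body-based distance, mirroring the higher influence of body-based cues reported during walking.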

      • Render me real? Investigating the effect of render style on the perception of animated virtual humans

        McDonnell, Rachel; Breidt, Martin; Bülthoff, Heinrich H. Association for Computing Machinery, 2012. ACM Transactions on Graphics Vol.31 No.4

        The realistic depiction of lifelike virtual humans has been the goal of many filmmakers in the last decade. Recently, films such as Tron: Legacy and The Curious Case of Benjamin Button have produced highly realistic characters. In the real-time domain, there is also a need to deliver realistic virtual characters, with the increase in popularity of interactive drama video games (such as L.A. Noire™ or Heavy Rain™). There have been mixed reactions from audiences to lifelike characters used in movies and games, with some saying that the increased realism highlights subtle imperfections, which can be disturbing. Some developers opt for a stylized rendering (such as cartoon shading) to avoid a negative reaction [Thompson 2004]. In this paper, we investigate some of the consequences of choosing realistic or stylized rendering in order to provide guidelines for developers creating appealing virtual characters. We conducted a series of psychophysical experiments to determine whether render style affects how virtual humans are perceived. Motion capture with synchronized eye-tracked data was used throughout to animate custom-made virtual model replicas of the captured actors.

      • Generation of a 3D morphable face model based on implicit surfaces

        Ahyoung Shin (신아영); Christian Wallraven; Heinrich Bülthoff; Seong-Whan Lee (이성환). Korean Institute of Information Scientists and Engineers (KIISE), 2010. KIISE Conference Proceedings Vol.37 No.2C

        3D morphable face models are used in a wide range of applications, including face recognition robust to illumination and pose variation, expression synthesis, face reconstruction, and animation. Building such a model requires a correspondence step that aligns multiple face scans and brings them to a common vertex count. Existing methods, however, produce inaccurate correspondences in highly curved regions such as the ears, lose expressiveness in fine detail, and are limited in representing the facial surface smoothly. In this paper, we propose a method for generating a 3D morphable face model based on implicit-surface correspondence, which enables accurate correspondence as well as a smooth surface representation. We also conducted experiments on realistic variation of facial attributes by adjusting the parameters of the generated model.
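
        One common way to build a smooth implicit surface representation of the kind referred to above is radial-basis-function (RBF) interpolation with off-surface constraints, in the style of Carr et al. (2001). The sketch below fits a signed implicit function to a toy point cloud on a sphere; it illustrates the general technique only, not this paper's specific formulation, and all values are made up.

        ```python
        import numpy as np

        # Toy stand-in for a face scan: points on a unit sphere with outward normals
        rng = np.random.default_rng(0)
        pts = rng.normal(size=(80, 3))
        pts /= np.linalg.norm(pts, axis=1, keepdims=True)
        normals = pts.copy()  # on a unit sphere the outward normal equals the position

        # Constrain f = 0 on the surface and f = +/-eps at points offset along the
        # normals, then interpolate with the biharmonic kernel phi(r) = r (as in
        # Carr et al. 2001; the low-order polynomial term is omitted for brevity).
        eps = 0.05
        X = np.vstack([pts, pts + eps * normals, pts - eps * normals])
        y = np.concatenate([np.zeros(len(pts)),
                            np.full(len(pts), eps),
                            np.full(len(pts), -eps)])
        K = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # kernel matrix, phi(r) = r
        w = np.linalg.solve(K, y)

        def f(q):
            # Signed implicit function; the surface is the zero level set f(q) = 0
            return np.linalg.norm(X - q, axis=1) @ w

        print(f(np.array([1.1, 0.0, 0.0])))  # > 0: just outside the surface
        print(f(np.array([0.9, 0.0, 0.0])))  # < 0: just inside the surface
        ```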

      • Walking improves your cognitive map in environments that are large-scale and large in extent

        Ruddle, Roy A.; Volkova, Ekaterina; Bülthoff, Heinrich H. Association for Computing Machinery, 2011. ACM Transactions on Computer-Human Interaction Vol.18 No.2

        This study investigated the effect of body-based information (proprioception, etc.) when participants navigated large-scale virtual marketplaces that were either small (Experiment 1) or large in extent (Experiment 2). Extent refers to the size of an environment, whereas scale refers to whether people have to travel through an environment to see the detail necessary for navigation. Each participant was provided with full body-based information (walking through the virtual marketplaces in a large tracking hall or on an omnidirectional treadmill), just the translational component of body-based information (walking on a linear treadmill, but turning with a joystick), just the rotational component (physically turning, but using a joystick to translate), or no body-based information (joysticks to translate and rotate). In both large and small environments, translational body-based information significantly improved the accuracy of participants' cognitive maps, measured using estimates of direction and relative straight-line distance, but rotational body-based information on its own had no effect. In environments of small extent, full body-based information also improved participants' navigational performance. The experiments show that locomotion devices such as linear treadmills would bring substantial benefits to virtual environment applications in which large spaces are navigated, and that theories of human navigation need to reconsider the contribution made by body-based information and distinguish between environmental scale and extent.

      • Decentralized rigidity maintenance control with range measurements for multi-robot systems

        Zelazo, Daniel; Franchi, Antonio; Bülthoff, Heinrich H.; Robuffo Giordano, Paolo. SAGE Publications, 2015. The International Journal of Robotics Research Vol.34 No.1

        This work proposes a fully decentralized strategy for maintaining the formation rigidity of a multi-robot system using only range measurements, while still allowing the graph topology to change freely over time. In this direction, a first contribution of this work is an extension of rigidity theory to weighted frameworks and the rigidity eigenvalue, which when positive ensures the infinitesimal rigidity of the framework. We then propose a distributed algorithm for estimating a common relative position reference frame amongst a team of robots with only range measurements, in addition to one agent endowed with the capability of measuring the bearing to two other agents. This first estimation step is embedded into a subsequent distributed algorithm for estimating the rigidity eigenvalue associated with the weighted framework. The estimate of the rigidity eigenvalue is finally used to generate a local control action for each agent that both maintains the rigidity property and enforces additional constraints such as collision avoidance and sensing/communication range limits and occlusions. As an additional feature of our approach, the communication and sensing links among the robots are also left free to change over time while preserving rigidity of the whole framework. The proposed scheme is then experimentally validated with a robotic testbed consisting of six quadrotor unmanned aerial vehicles operating in a cluttered environment.
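
        The rigidity eigenvalue admits a compact concrete definition: build the rigidity matrix R with one row per edge, carrying the edge direction with opposite signs in the two endpoint blocks, and take the seventh-smallest eigenvalue of the symmetric matrix R^T R. In 3D the six smallest eigenvalues are always zero, corresponding to rigid-body translations and rotations, so positivity of the seventh certifies infinitesimal rigidity. A minimal sketch on an octahedron, with an optional constant per-edge weight as a stand-in for the weighted frameworks introduced in the paper (there the weights are state-dependent):

        ```python
        import numpy as np
        from itertools import combinations

        def rigidity_eigenvalue(positions, edges, weights=None):
            # Seventh-smallest eigenvalue of R^T R for a (weighted) framework
            # in 3D. Positivity certifies infinitesimal rigidity; the six zero
            # eigenvalues correspond to rigid-body translations and rotations.
            n = len(positions)
            R = np.zeros((len(edges), 3 * n))
            for row, (i, j) in enumerate(edges):
                d = positions[i] - positions[j]
                w = 1.0 if weights is None else weights[row]
                R[row, 3 * i:3 * i + 3] = w * d
                R[row, 3 * j:3 * j + 3] = -w * d
            return np.sort(np.linalg.eigvalsh(R.T @ R))[6]

        # Octahedron: 6 vertices, 12 edges (all pairs except the 3 antipodal ones)
        p = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                      [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
        edges = [(i, j) for i, j in combinations(range(6), 2)
                 if not np.allclose(p[i], -p[j])]

        print(rigidity_eigenvalue(p, edges) > 1e-9)        # True: rigid
        print(rigidity_eigenvalue(p, edges[:-1]) > 1e-9)   # False: flexible with an edge removed
        ```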

      • Learning to recognize face shapes through serial exploration.

        Wallraven, Christian; Whittingstall, Lisa; Bülthoff, Heinrich H. Springer-Verlag, 2013. Experimental Brain Research Vol.226 No.4

        Human observers are experts at visual face recognition due to specialized visual mechanisms for face processing that evolve with perceptual expertise. Such expertise has long been attributed to the use of configural processing, enabled by fast, parallel encoding of the visual information in the face. Here we tested whether participants can learn to efficiently recognize faces that are serially encoded, that is, when only partial visual information about the face is available at any given time. For this, ten participants were trained in gaze-restricted face recognition, in which face masks were viewed through a small aperture controlled by the participant. Tests comparing trained with untrained performance revealed (1) a marked improvement in speed and accuracy, (2) a gradual development of configural processing strategies, and (3) participants' ability to rapidly learn and accurately recognize novel exemplars. This performance pattern demonstrates that participants were able to learn new strategies to compensate for the serial nature of information encoding. The results are discussed in terms of expertise acquisition and relevance to other sensory modalities that rely on serial encoding.

      • It is all me: the effect of viewpoint on visual-vestibular recalibration.

        Schomaker, Judith; Tesch, Joachim; Bülthoff, Heinrich H.; Bresciani, Jean-Pierre. Springer-Verlag, 2011. Experimental Brain Research Vol.213 No.2

        Participants performed a visual-vestibular motor recalibration task in virtual reality. The task consisted of keeping the extended arm and hand stable in space during a whole-body rotation induced by a robotic wheelchair. Performance was first quantified in a pre-test in which no visual feedback was available during the rotation. During the subsequent adaptation phase, optical flow resulting from body rotation was provided. This visual feedback was manipulated to create the illusion of a smaller rotational movement than actually occurred, thereby altering the visual-vestibular mapping. The effects of the adaptation phase on hand stabilization performance were measured during a post-test identical to the pre-test. Three different groups of subjects were exposed to different perspectives on the visual scene: first-person, top view, or mirror view. Sensorimotor adaptation occurred for all three viewpoint conditions, with performance in the post-test session showing a marked under-compensation relative to the pre-test. In other words, all viewpoints gave rise to a remapping between vestibular input and the motor output required to stabilize the arm. Furthermore, first-person and mirror-view adaptation induced a significant decrease in the variability of stabilization performance. Such variability reduction was not observed for top-view adaptation. These results suggest that even though all three viewpoints can evoke substantial adaptation aftereffects, the more naturalistic first-person view and the richer mirror view should be preferred when reducing motor variability is an important goal.
