RISS Academic Research Information Service

      • Optimization of Multiple Sensor Data Pipeline for Real-time 3D Terrain Reconstruction

        Seoungjae Cho, Seongjo Lee, Kyhyun Um, Kyungeun Cho, Sungdae Sim, Yong Woon Park Institute of Control, Robotics and Systems (ICROS) 2015 Proceedings of the ICROS International Conference Vol.2015 No.10

        Remote-control technology is required so that an unmanned vehicle can replace humans in executing tasks in various extreme environments. In particular, a remote scene must be reconstructed using 3D meshes to enable a user to control an unmanned vehicle remotely, easily, and intuitively. To this end, a large amount of multiple-sensor data must be processed using various algorithms in real time. Considering the limited hardware specifications available in extreme environments, it is difficult to achieve high-quality 3D terrain reconstruction in real time. This paper proposes an optimization of the architecture of a multiple-sensor-data pipeline. The improved performance resulting from the optimized architecture was analyzed through experimental comparison with a non-optimized system.

      • SCIE, SCOPUS

        Simulation framework of ubiquitous network environments for designing diverse network robots

        Cho, Seoungjae; Fong, Simon; Park, Yong Woon; Cho, Kyungeun North-Holland 2017 Future Generation Computer Systems Vol.76 No.-

        Abstract: Smart homes provide residents with services that offer convenience using sensor networks and a variety of ubiquitous instruments. Network robots based on such networks can perform direct services for these residents. Information from various ubiquitous instruments and sensors located in smart homes is shared with network robots. These robots effectively help residents in their daily routine by accessing this information. However, the development of network robots in an actual environment requires significant time, space, labor, and money. A network robot that has not been fully developed may cause physical damage in unexpected situations. In this paper, we propose a framework that allows the design and simulation of network robot avatars and a variety of smart homes in a virtual environment to address the above problems. This framework activates a network robot avatar based on information obtained from various sensors mounted in the smart home; these sensors identify the daily routine of the human avatar residing in the smart home. Algorithms that include reinforcement learning and action planning are integrated to enable the network robot avatar to serve the human avatar. Further, this paper develops a network robot simulator to verify whether the network robot functions effectively using the framework.

        Highlights:
        • We proposed a framework to simulate a network robot in a virtual smart home.
        • A network robot agent identifies the daily routines of a resident and executes services.
        • The framework shows that a network robot could help and reduce the tasks of a human agent.
        • The simulator verified that the framework reduces the costs of developing network robots.

      • Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

        Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae Hindawi Publishing Corporation 2014 The Scientific World Journal Vol.2014 No.-

        A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
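        The voxel quantization and lowermost-heightmap steps described in this abstract can be sketched briefly. This is an illustrative sketch, not the authors' implementation: the 0.2 m voxel size, the dictionary-based column reduction, and the use of NumPy are assumptions.

        ```python
        import numpy as np

        def lowermost_heightmap(points, voxel_size=0.2):
            """Quantize a sparse 3D point cloud into voxels, drop redundant points,
            and keep only the lowest voxel in each (x, y) column (a lowermost heightmap).

            points: (N, 3) array of x, y, z coordinates from a LiDAR scan.
            Returns a dict mapping (ix, iy) -> lowest voxel index observed in that column.
            """
            # Quantize to integer voxel indices; identical indices are overlapping points.
            voxels = np.unique(np.floor(points / voxel_size).astype(np.int32), axis=0)

            heightmap = {}
            for ix, iy, iz in voxels:
                # Reduce the non-overlapping voxels to 2D by keeping the lowermost height.
                if (ix, iy) not in heightmap or iz < heightmap[(ix, iy)]:
                    heightmap[(ix, iy)] = iz
            return heightmap
        ```

        A subsequent pass would group neighbouring heightmap cells and decide the ground area from the number of voxels in each group, as the abstract describes.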

      • KCI-indexed

        A Fast Ground Segmentation Method for 3D Point Cloud

        Phuong Chu, Seoungjae Cho, Sungdae Sim, Kiho Kwak, Kyungeun Cho Korea Information Processing Society 2017 Journal of Information Processing Systems Vol.13 No.3

        In this study, we proposed a new approach to segment ground and nonground points gained from a 3D laser range sensor. The primary aim of this research was to provide a fast and effective method for ground segmentation. In each frame, we divide the point cloud into small groups. All threshold points and start-ground points in each group are then analyzed. To determine threshold points we depend on three features: gradient, lost threshold points, and abnormalities in the distance between the sensor and a particular threshold point. After a threshold point is determined, a start-ground point is then identified by considering the height difference between two consecutive points. All points from a start-ground point to the next threshold point are ground points. Other points are nonground. This process is then repeated until all points are labelled.
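        A minimal sketch of the labelling loop this abstract describes, applied to one ordered scan line: ground runs extend from a start-ground point to the next threshold point. Only the gradient and consecutive-height-difference cues are used here, and the threshold values are assumptions for illustration rather than the paper's parameters.

        ```python
        def label_scan_line(points, grad_thresh=0.5, height_thresh=0.15):
            """Simplified ground/nonground labelling of one ordered scan line.

            points: list of (distance_from_sensor, height) tuples ordered along the scan.
            """
            labels = ["nonground"] * len(points)
            in_ground = False
            for i in range(1, len(points)):
                d0, h0 = points[i - 1]
                d1, h1 = points[i]
                gradient = abs(h1 - h0) / max(abs(d1 - d0), 1e-6)
                if in_ground and gradient > grad_thresh:
                    in_ground = False   # threshold point: end of the current ground run
                elif not in_ground and abs(h1 - h0) < height_thresh:
                    in_ground = True    # start-ground point: begin a new ground run
                if in_ground:
                    labels[i] = "ground"
            return labels
        ```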

      • A Dynamic Object Region Detection Method for Moving-Obstacle Avoidance by Unmanned Vehicles

        Seongjo Lee, Seoungjae Cho, Sungdae Sim, Kiho Kwak, Yong Woon Park, Kyhyun Um, Kyungeun Cho Korea Information Processing Society 2016 Proceedings of the Korea Information Processing Society Conference Vol.23 No.1

        For autonomous driving of unmanned vehicles, techniques such as obstacle avoidance and drivable-road detection are being studied. To apply this research to autonomous driving in real environments, the positions of dynamically moving obstacles in the surroundings must be taken into account. This study proposes a method that detects regions around the vehicle containing dynamic obstacles by using changes in the distribution of points acquired from a vehicle-mounted LiDAR. The method estimates regions containing dynamic objects from statistics computed over the points, allowing dynamic object regions to be searched at high speed.
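        One way to realize the point-distribution comparison described above is to compare per-cell point counts between consecutive LiDAR frames and flag cells whose counts change sharply. The 2D cell size, the count statistic, and the change threshold below are illustrative assumptions, not the statistics used in the paper.

        ```python
        import numpy as np
        from collections import Counter

        def dynamic_cells(prev_points, curr_points, cell=0.5, min_change=10):
            """Flag 2D grid cells whose LiDAR point counts change sharply between frames.

            prev_points, curr_points: (N, 3) arrays from two consecutive frames.
            Returns the (ix, iy) indices of cells that likely contain moving objects.
            """
            def histogram(points):
                # Count points falling into each ground-plane cell.
                keys = map(tuple, np.floor(points[:, :2] / cell).astype(np.int32))
                return Counter(keys)

            prev_h, curr_h = histogram(prev_points), histogram(curr_points)
            changed = []
            for key in set(prev_h) | set(curr_h):
                if abs(curr_h.get(key, 0) - prev_h.get(key, 0)) >= min_change:
                    changed.append(key)   # candidate dynamic-object region
            return changed
        ```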

      • Automated Space Classification for Network Robots in Ubiquitous Environments

        Choi, Jiwon; Cho, Seoungjae; Chu, Phuong; Vu, Hoang; Um, Kyhyun; Cho, Kyungeun Hindawi Limited 2015 Journal of Sensors Vol.2015 No.-

        Network robots provide services to users in smart spaces while being connected to ubiquitous instruments through wireless networks in ubiquitous environments. For more effective behavior planning of network robots, it is necessary to reduce the state space by recognizing a smart space as a set of spaces. This paper proposes a space classification algorithm based on automatic graph generation and naive Bayes classification. The proposed algorithm first filters spaces in order of priority using automatically generated graphs, thereby minimizing the number of tasks that need to be predefined by a human. The filtered spaces then induce the final space classification result using naive Bayes space classification. The results of experiments conducted using virtual agents in virtual environments indicate that the performance of the proposed algorithm is better than that of conventional naive Bayes space classification.
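        A compact sketch of the naive Bayes stage follows, assuming the features are the instrument labels observed in a space and that the graph-based filter supplies a shortlist of candidate space types; both assumptions are for illustration and are not taken from the paper.

        ```python
        import math
        from collections import defaultdict

        class NaiveBayesSpaceClassifier:
            """Multinomial naive Bayes over instrument labels observed in a space."""

            def __init__(self):
                self.space_counts = defaultdict(int)                        # prior counts
                self.feature_counts = defaultdict(lambda: defaultdict(int))

            def fit(self, samples):
                # samples: iterable of (space_type, [instrument, ...]) pairs
                for space, instruments in samples:
                    self.space_counts[space] += 1
                    for inst in instruments:
                        self.feature_counts[space][inst] += 1

            def classify(self, instruments, candidates=None):
                # 'candidates' is the shortlist produced by the graph-based filter.
                candidates = candidates or list(self.space_counts)
                vocab = {i for counts in self.feature_counts.values() for i in counts}
                total = sum(self.space_counts.values())
                best, best_score = None, float("-inf")
                for space in candidates:
                    # Log prior plus Laplace-smoothed log likelihoods.
                    score = math.log(self.space_counts[space] / total)
                    denom = sum(self.feature_counts[space].values()) + len(vocab)
                    for inst in instruments:
                        score += math.log((self.feature_counts[space].get(inst, 0) + 1) / denom)
                    if score > best_score:
                        best, best_score = space, score
                return best
        ```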

      • Persim 3D: Context-Driven Simulation and Modeling of Human Activities in Smart Spaces

        Jae Woong Lee, Seoungjae Cho, Sirui Liu, Kyungeun Cho, Sumi Helal IEEE 2015 IEEE Transactions on Automation Science and Engineering Vol.12 No.4

        Automated understanding and recognition of human activities and behaviors in a smart space (e.g., smart house) is of paramount importance to many critical human-centered applications. Recognized activities are the input to the pervasive computer (the smart space) which intelligently interacts with the users to maintain the application's goal, be it assistance, safety, child development, entertainment, or other goals. Research in this area is fascinating but severely lacks adequate validation, which often relies on datasets that contain sensory data representing the activities. Availing adequate datasets that can be used in a large variety of spaces, for different user groups, and aiming at different goals is very challenging. This is due to the prohibitive cost and the human capital needed to instrument physical spaces and to recruit human subjects to perform the activities and generate data. Simulation of human activities in smart spaces has therefore emerged as an alternative approach to bridge this deficit. Traditional event-driven approaches have been proposed. However, the complexity of human activity simulation was proved to be challenging to these initial simulation efforts. In this paper, we present Persim 3D, an alternative context-driven approach to simulating human activities capable of supporting complex activity scenarios. We present the context-activity-action nexus and show how our approach combines modeling and visualization of actions with context and activity simulation. We present the Persim 3D architecture and algorithms, and describe a detailed validation study of our approach to verify the accuracy and realism of the simulation output (datasets and visualizations) and the scalability of the human effort in using Persim 3D to simulate complex scenarios. We show positive and promising results that validate our approach.

      • Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

        Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun Hindawi Publishing Corporation 2014 The Scientific World Journal Vol.2014 No.-

        A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
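        The voxel-based flag map used to drop redundant points can be illustrated with a short sketch; the voxel size and the set-based storage are assumptions for illustration rather than the paper's implementation.

        ```python
        class VoxelFlagMap:
            """Flag map for incrementally registering large-scale point clouds.

            Incoming points are quantized to 3D grid indices; a point whose voxel is
            already flagged is treated as redundant and dropped.
            """

            def __init__(self, voxel_size=0.1):
                self.voxel_size = voxel_size
                self.flags = set()          # voxel indices already occupied

            def add_points(self, points):
                """Return only the points that fall into voxels not seen before."""
                kept = []
                for x, y, z in points:
                    key = (int(x // self.voxel_size),
                           int(y // self.voxel_size),
                           int(z // self.voxel_size))
                    if key not in self.flags:
                        self.flags.add(key)
                        kept.append((x, y, z))
                return kept
        ```

        Points that survive the flag map would then be registered into the nonground point database or the node-based terrain mesh, as the abstract describes.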

      • An A* Path Planning Method Using a 2D Camera

        Seungyoub Ssin, Seoungjae Cho, Yeji Kim, Sohyun Sim, Kyhyun Um, Kyungeun Cho Korea Information Processing Society 2013 Proceedings of the Korea Information Processing Society Conference Vol.20 No.2

        In this paper, a robot moving within a predefined closed area makes use of images from a camera that films the area from above. Moving the robot to a specific position requires a server system that controls the camera and a marker for recognizing the robot's position. The server recognizes the robot's position from the camera using the color value of the marker attached to the robot, and the server, which issues movement commands to the robot, and the robot perform path planning over a network. In this study, planning is implemented using the humanoid robot NAO, a camera that captures the robot's position, OpenCV for image processing, and A* as the path-planning algorithm.
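        For reference, a standard grid-based A* search of the kind named in this abstract is sketched below. The occupancy grid, 4-connectivity, unit step cost, and Manhattan heuristic are assumptions; in the described system, marker detection with OpenCV and the camera server would supply the grid and the start and goal cells.

        ```python
        import heapq

        def a_star(grid, start, goal):
            """A* on a 2D occupancy grid (0 = free, 1 = blocked) with 4-connected moves."""
            def heuristic(a, b):
                return abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan distance

            open_set = [(heuristic(start, goal), 0, start, [start])]
            visited = set()
            while open_set:
                _, cost, node, path = heapq.heappop(open_set)
                if node == goal:
                    return path                              # list of grid cells
                if node in visited:
                    continue
                visited.add(node)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = node[0] + dx, node[1] + dy
                    if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                        heapq.heappush(open_set, (cost + 1 + heuristic((nx, ny), goal),
                                                  cost + 1, (nx, ny), path + [(nx, ny)]))
            return None                                      # goal unreachable
        ```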

      • SCOPUS, KCI-indexed

        A Fast Ground Segmentation Method for 3D Point Cloud

        Chu, Phuong; Cho, Seoungjae; Sim, Sungdae; Kwak, Kiho; Cho, Kyungeun Korea Information Processing Society 2017 Journal of Information Processing Systems Vol.13 No.3

        In this study, we proposed a new approach to segment ground and nonground points gained from a 3D laser range sensor. The primary aim of this research was to provide a fast and effective method for ground segmentation. In each frame, we divide the point cloud into small groups. All threshold points and start-ground points in each group are then analyzed. To determine threshold points we depend on three features: gradient, lost threshold points, and abnormalities in the distance between the sensor and a particular threshold point. After a threshold point is determined, a start-ground point is then identified by considering the height difference between two consecutive points. All points from a start-ground point to the next threshold point are ground points. Other points are nonground. This process is then repeated until all points are labelled.
