Junyong Yun, Jeongin Park, Seokyeong Baek, Woosuk Sung. The Institute of Electronics and Information Engineers (IEIE), 2019 IEIE Conference, Vol.2019 No.11
In this paper, we propose virtual lane generation schemes that enable our mobile robots to traverse intersections where one of the two line markings does not exist. The proposed schemes commonly detect a line on one side and, based on its characteristics, generate a virtual line on the other side. However, the detailed schemes for virtualizing a line differ considerably. Three different schemes are comparatively validated in our test environment. It is demonstrated that the first two schemes are limited to relatively low curvatures. In comparison, the final scheme yields better virtual lines irrespective of the radius of curvature at the intersection.
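One common way to realize such a scheme is to offset the detected line along its local normal by the known lane width. The sketch below is illustrative, not the paper's exact method; the function name, polynomial order, and lane-width parameter are assumptions:

```python
import numpy as np

def generate_virtual_line(detected_xy, lane_width):
    """Generate a virtual line by offsetting each detected point along
    the local normal of the fitted line by the lane width.
    detected_xy: (N, 2) points of the detected line (robot frame, meters).
    """
    x, y = detected_xy[:, 0], detected_xy[:, 1]
    # Fit a 2nd-order polynomial y = f(x) to the detected line.
    coeffs = np.polyfit(x, y, 2)
    slope = np.polyval(np.polyder(coeffs), x)   # dy/dx at each sample
    # Unit normal pointing toward the side with the missing marking.
    norm = np.sqrt(1.0 + slope ** 2)
    nx, ny = -slope / norm, 1.0 / norm
    return np.column_stack([x + lane_width * nx, y + lane_width * ny])
```

For a straight detected line this reduces to a pure lateral shift; the normal-based offset is what lets the virtual line follow a curved marking.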
A Two-Stage Multi-Modal Vehicle Future-Trajectory Prediction Technique Using the Local Environment and Inter-Vehicle Interaction
Sehwan Choi, Junyong Yun, Jungho Kim, Jun Won Choi. Korean Society of Automotive Engineers (KSAE), 2023 KSAE Conference, Vol.2023 No.5
In this paper, we propose a two-stage multi-modal future trajectory prediction framework designed to effectively utilize significant inter-agent interaction and local scene context. This two-stage motion prediction architecture, referred to as GL-Pred, consists of two networks: the proposal trajectory network and the refinement trajectory network. The proposal trajectory network produces multi-modal trajectory proposals by leveraging past trajectories and global environmental information. The refinement trajectory network enhances each of the trajectory proposals using group-query attention and local-query attention mechanisms. The group-query attention mechanism further enhances the trajectory proposals by modeling inter-agent interaction, grouping the proposal trajectories of the neighboring agents. The local-query attention mechanism is used to aggregate local scene context features collected around the trajectory proposals. Finally, we combine the group-query and local-query attention features to produce the multi-modal future trajectory. Experiments conducted on the Argoverse dataset demonstrate that the proposed GL-Pred outperforms existing motion prediction methods.
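The idea of refining one proposal against two feature sets can be illustrated with a minimal scaled dot-product attention sketch. The function names, feature shapes, and the plain concatenation at the end are assumptions for illustration only, not GL-Pred's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    # Scaled dot-product attention: one query vector against a key set.
    scores = keys @ query / np.sqrt(query.shape[-1])
    return softmax(scores) @ values

def refine_proposal(proposal_feat, neighbor_feats, local_scene_feats):
    """Hypothetical sketch of the two attention branches: group-query
    attention over neighboring agents' proposal features and local-query
    attention over scene features gathered around the proposal, with the
    two results concatenated into the refined feature."""
    group_feat = attend(proposal_feat, neighbor_feats, neighbor_feats)
    local_feat = attend(proposal_feat, local_scene_feats, local_scene_feats)
    return np.concatenate([group_feat, local_feat])
```

The refined feature would then feed a decoder head that regresses the final multi-modal trajectories.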
Validation of an Obstacle Avoidance Algorithm for a Mobile Robot Using the Gazebo Simulator
Jeongin Park, Seokyeong Baek, Junyong Yun, Woosuk Sung. Korean Society of Automotive Engineers (KSAE), 2019 KSAE Conference, Vol.2019 No.5
This paper deals with the validation of an obstacle avoidance algorithm, the dynamic window approach, in different indoor navigation environments. The navigation environments come from the AutoRace challenge, in which a ROS-enabled TurtleBot3 completes missions while self-driving. Among the six missions, we selected the two requiring obstacle avoidance, called roadworks and tunnel, respectively. They differ in that the roadworks feature static obstacles, while the tunnel is filled with obstacles whose number and positions are not fixed in advance. In order for the TurtleBot3 to cope with the many different cases encountered while navigating through obstacles, a ROS-compatible robot simulator, Gazebo, is used to validate fine-tuned parameters in the ROS navigation package. By applying Gazebo prior to actual tests, the validation can be done in a time-effective way. This enables the TurtleBot3 to pass through the roadworks and the tunnel, specifically with 3 obstacles, in 13 and 14 seconds, respectively.
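The dynamic window approach samples admissible velocity pairs, forward-simulates each for a short horizon, and scores the resulting arcs. A minimal sketch, with illustrative limits and weights rather than the tuned ROS navigation parameters:

```python
import numpy as np

def dwa_select(v_now, w_now, goal, obstacles, dt=0.1, horizon=1.0):
    """Pick the (linear, angular) velocity pair whose simulated arc best
    trades off heading to the goal, obstacle clearance, and speed.
    Limits, weights, and the endpoint-only clearance check are
    simplifications for illustration."""
    best, best_score = (0.0, 0.0), -np.inf
    # Dynamic window: velocities reachable within one control cycle.
    for v in np.linspace(max(0.0, v_now - 0.2), v_now + 0.2, 5):
        for w in np.linspace(w_now - 1.0, w_now + 1.0, 11):
            # Forward-simulate a circular arc from the current pose.
            x = y = th = 0.0
            for _ in range(int(round(horizon / dt))):
                th += w * dt
                x += v * np.cos(th) * dt
                y += v * np.sin(th) * dt
            clearance = min((np.hypot(x - ox, y - oy)
                             for ox, oy in obstacles), default=np.inf)
            if clearance < 0.15:      # arc endpoint would collide
                continue
            heading = -np.hypot(goal[0] - x, goal[1] - y)
            score = heading + 0.5 * min(clearance, 1.0) + 0.2 * v
            if score > best_score:
                best_score, best = score, (v, w)
    return best
```

Tuning amounts to adjusting the sampling ranges and the three score weights, which is exactly what simulating in Gazebo makes cheap to iterate on.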
Jeongin Park, Seokyeong Baek, Junyong Yun, Woosuk Sung. The Institute of Electronics and Information Engineers (IEIE), 2019 IEIE Conference, Vol.2019 No.11
We propose a lane detection pipeline that compensates for the degradation in lane detection performance caused by image blurring during the perspective transform. The proposed pipeline combines the advantages of two different lane detection pipelines. Before the image is blurred by the perspective transform, we first detect lanes using their color features. In the top view, we subsequently perform additional image processing, such as applying an ROI (region of interest) and generating virtual lanes. Experimental results show that the proposed pipeline improves detection performance, specifically for distant lanes in dark environments.
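The key ordering, color thresholding on the sharp original image before any warping, then ROI filtering in the (already warped) top view, can be sketched as follows. The thresholds and ROI fraction are illustrative, not the paper's values:

```python
import numpy as np

def detect_lane_mask(rgb):
    """First stage: color-feature lane detection on the original image,
    before the perspective transform blurs distant pixels.
    Returns a binary mask of white/yellow lane-marking pixels."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    white = (r > 200) & (g > 200) & (b > 200)
    yellow = (r > 180) & (g > 160) & (b < 120)
    return white | yellow

def apply_roi(mask, top_frac=0.5):
    """Second stage (after warping the mask to the top view in the full
    pipeline): zero out everything above the region of interest."""
    out = np.zeros_like(mask)
    cut = int(mask.shape[0] * top_frac)
    out[cut:] = mask[cut:]
    return out
```

Because thresholding happens before warping, distant markings are classified while still sharp; only the already-binary mask is subjected to the blur-inducing transform.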
Seokyeong Baek, Jeongin Park, Junyong Yun, Woosuk Sung. The Institute of Electronics and Information Engineers (IEIE), 2019 IEIE Conference, Vol.2019 No.11
In this work, we incorporate lidar point clouds into a camera image in order to map obstacles in the lane. To this end, two different processes are required: spatial fusion and temporal fusion. The spatial fusion transforms the lidar coordinates into the camera coordinates so that the lidar point clouds can be projected onto the camera image. By doing this, we are able to combine the lidar point clouds representing the obstacles with the camera image indicating the lane marking. The temporal fusion down-samples the lidar point clouds, attaining time synchronization between them and the camera image. Through these two processes, the lidar point clouds are fused with the camera image, thereby generating a virtual lane that enables our mobile robot to avoid obstacles while following the lane.
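The spatial fusion step is the standard extrinsic-then-intrinsic projection. A minimal sketch, where the calibration matrices are assumed inputs (in practice they come from lidar-camera calibration):

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project lidar points onto the camera image.
    points_lidar: (N, 3) points in the lidar frame.
    T_cam_lidar:  (4, 4) extrinsic transform, lidar frame -> camera frame.
    K:            (3, 3) camera intrinsic matrix.
    Returns (M, 2) pixel coordinates of the points in front of the camera.
    """
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous (N, 4)
    cam = (T_cam_lidar @ homo.T).T[:, :3]               # camera-frame points
    cam = cam[cam[:, 2] > 0]                            # keep points ahead
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]                     # perspective divide
```

Temporal fusion is then a matter of picking, for each camera frame, the down-sampled lidar sweep closest in timestamp before projecting it.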
Seokyeong Baek, Jeongin Park, Junyong Yun, Woosuk Sung. Korean Society of Automotive Engineers (KSAE), 2019 KSAE Conference, Vol.2019 No.5
In this work, we deploy an object detection algorithm on a TurtleBot3, a small, low-cost robot that is well known as a ROS (robot operating system) standard platform. The goal is the real-time detection of traffic signs on the AutoRace track, where the TurtleBot3 attempts to complete missions while self-driving. To help guarantee real-time operation, YOLO (you only look once) is selected as a unified object detection algorithm. To run this deep neural network-based algorithm in real time, an Nvidia Jetson TX2 is employed as the single-board computer on the TurtleBot3. While training the YOLO network, we suffer from much lower recall in distinguishing between left-turn and right-turn signs than for the other classes. It turns out that this stems from horizontal flipping, one of YOLO's built-in data augmentation methods. By disabling horizontal flipping, we finally obtain a recall of over 90% across the 12 traffic sign classes at a speed of 10 fps. The achieved performance is good enough for the TurtleBot3 to complete missions in real time.
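Why horizontal flipping hurts exactly these two classes can be seen directly: mirroring a left-turn sign produces a right-turn sign, so the augmented image contradicts its label. A toy illustration:

```python
import numpy as np

def horizontal_flip(img):
    """Mirror an image left-right, as flip augmentation does."""
    return img[:, ::-1]

# Toy 3x3 "left-turn" glyph: the arrow mass sits in the left column.
left_turn = np.array([[1, 0, 0],
                      [1, 1, 1],
                      [1, 0, 0]])
right_turn = left_turn[:, ::-1]

# Flipping a left-turn sample yields the right-turn glyph while the
# label still says "left-turn": the training signal for these two
# classes is poisoned until flip augmentation is disabled.
assert np.array_equal(horizontal_flip(left_turn), right_turn)
```

The same failure mode applies to any class pair that are mirror images of each other, which is why flip augmentation must be disabled (or replaced with label-swapping flips) for such datasets.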