RISS Academic Research Information Service

      • KCI-indexed

        A new visual tracking approach based on salp swarm algorithm for abrupt motion tracking

        Huanlong Zhang, Junfeng Liu, Zhicheng Nie, Jie Zhang, Jianwei Zhang. 한국인터넷정보학회, 2020. KSII Transactions on Internet and Information Systems Vol.14 No.3

        Salp Swarm Algorithm (SSA) is a new nature-inspired swarm optimization algorithm that mimics the swarming behavior of salps navigating and foraging in the oceans. SSA has been shown to avoid local optima and to improve convergence speed, benefiting from its adaptive nonlinear mechanism and salp chains. In this paper, visual tracking is considered a process of locating the optimal position through the interaction between leaders and followers in successive images. A novel SSA-based tracking framework is proposed, and the analysis and adjustment of its parameters are discussed experimentally. In addition, qualitative and quantitative analyses are performed to demonstrate the tracking performance of the proposed approach in comparison with ten classical tracking algorithms. Extensive comparative experimental results show that the algorithm performs well in visual tracking, especially for abrupt motion tracking.
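        The abstract casts tracking as swarm optimization over candidate image positions. As a rough illustration only, the sketch below applies the standard SSA leader/follower position updates to search for a 2-D position that maximizes an appearance score; `appearance_score` is a hypothetical stand-in for the tracker's real similarity measure, not the paper's implementation.

```python
import numpy as np

def ssa_search(appearance_score, lb, ub, n_salps=30, n_iter=50, seed=0):
    """Search for the position in [lb, ub] that maximizes appearance_score."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    salps = rng.uniform(lb, ub, size=(n_salps, lb.size))
    scores = np.array([appearance_score(s) for s in salps])
    food, best = salps[scores.argmax()].copy(), scores.max()   # best position found so far

    for l in range(1, n_iter + 1):
        c1 = 2 * np.exp(-(4 * l / n_iter) ** 2)       # adaptive nonlinear coefficient
        for i in range(n_salps):
            if i == 0:                                # leader explores around the food source
                c2, c3 = rng.random(lb.size), rng.random(lb.size)
                step = c1 * ((ub - lb) * c2 + lb)
                salps[i] = np.where(c3 < 0.5, food + step, food - step)
            else:                                     # followers form a chain behind the leader
                salps[i] = 0.5 * (salps[i] + salps[i - 1])
        salps = np.clip(salps, lb, ub)
        scores = np.array([appearance_score(s) for s in salps])
        if scores.max() > best:
            best, food = scores.max(), salps[scores.argmax()].copy()
    return food, best
```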

      • SCIE/Scopus

        Robust visual tracking based on global-and-local search with confidence reliability estimation

        Fang, Yang; Ko, Seunghyun; Jo, Geun-Sik. Elsevier, 2019. Neurocomputing Vol.367

        Visual object tracking is an open and challenging problem: an online tracker must keep track of the target object over a long time period even in complex scenarios such as target drift and background occlusion. Discriminative correlation filters (DCF) have shown excellent performance in short-term target tracking thanks to their circular dense sampling mechanism and fast computation with the discrete Fourier transform. However, they tend to drift from the target when it undergoes drastic deformation, fast motion, or background occlusion. This can result in a bad model update, since the tracker searches for the target in a local region centered at the position where the target was located in the previous frame, and there is no recovery mechanism for target re-identification and re-location. To handle this issue, this paper proposes a global-and-local search technique that applies a DCF-based tracking model together with a novel target-aware detector in a collaborative way. The tracking model performs the local search process when tracking confidence is high, and the target-aware detector is executed to re-identify and locate the target via global search over the entire frame when model instability and confidence fluctuation are detected by the proposed tracking system. Additionally, an enhanced peak-to-sidelobe ratio (EPSR) is designed for confidence estimation, indicating system instability and the degree of fluctuation. Thus, the local tracking model and the target-aware detector are applied collaboratively for both final target state estimation and online model updates. This not only avoids model corruption from bad updates but also prevents the tracker from drifting during long-term tracking. Experiments on the OTB-100 and VOT2016 benchmarks demonstrate that the proposed tracking method achieves state-of-the-art performance in terms of accuracy and robustness, with a tracking speed of 22 fps (close to real time) on a single GPU.
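        The confidence measure described above builds on the peak-to-sidelobe ratio of the correlation response. The sketch below shows only the classic PSR, the building block it extends; the paper's enhanced PSR (EPSR) is not specified here, and the exclusion-window size is an assumption.

```python
import numpy as np

def psr(response, exclude=5):
    """Peak-to-sidelobe ratio of a 2-D correlation response map."""
    response = np.asarray(response, float)
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(py - exclude, 0):py + exclude + 1,
         max(px - exclude, 0):px + exclude + 1] = False   # exclude a window around the peak
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

# A low PSR (below a tuned threshold) would trigger the global re-detection stage.
```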

      • Real-time visual tracking by deep reinforced decision making

        Choi, Janghoon; Kwon, Junseok; Lee, Kyoung Mu. Elsevier, 2018. Computer Vision and Image Understanding Vol.171

        One of the major challenges of the model-free visual tracking problem is the difficulty originating from unpredictable and drastic changes in the appearance of the target object. Existing methods tackle this problem by updating the appearance model online to adapt to appearance changes. Despite the success of these methods, however, inaccurate and erroneous updates of the appearance model result in tracker drift. In this paper, we introduce a novel real-time visual tracking algorithm based on a template selection strategy constructed by deep reinforcement learning methods. The tracking algorithm uses this strategy to choose the appropriate template for tracking a given frame. The template selection strategy is self-learned by applying a simple policy gradient method to numerous training episodes randomly generated from a tracking benchmark dataset. The proposed reinforcement learning framework is generally applicable to other confidence-map-based tracking algorithms. Experiments show that the tracking algorithm runs at a real-time speed of 43 fps and that the proposed policy network effectively decides the appropriate template for successful visual tracking. Highlights: a deep reinforced template update strategy is beneficial for visual tracking; it yields a noteworthy performance gain over other naive update strategies; and it runs at a real-time speed of 43 fps while maintaining competitive performance compared to other real-time algorithms.
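        As a hedged illustration of the template-selection idea (not the authors' network), the sketch below picks a template with a linear softmax policy and applies a REINFORCE-style policy gradient update; the per-template feature summaries and the reward signal are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_template(theta, features):
    """features: (n_templates, d) summaries of each stored template vs. the current frame."""
    logits = features @ theta
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    action = rng.choice(len(probs), p=probs)          # sample a template from the policy
    return action, probs

def reinforce_update(theta, features, action, probs, reward, lr=0.01):
    """One REINFORCE step: grad log pi(a) = phi(a) - E[phi] for a linear softmax policy."""
    grad_log_pi = features[action] - probs @ features
    return theta + lr * reward * grad_log_pi
```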

      • KCI-indexed

        Dynamic Search Area Feature Reinforcement for Discriminative Model Prediction Trackers Based on Multi-Domain Data

        이준하, 원홍인, 김병학. 대한임베디드공학회, 2021. 대한임베디드공학회논문지 Vol.16 No.6

        Visual object tracking is a challenging area of study in computer vision due to many difficult problems, including fast variation of target shape, occlusion, and arbitrary ground-truth object designation. In this paper, we focus on reinforcing the features of the dynamic search area to obtain better performance than conventional discriminative model prediction trackers under conditions where accuracy deteriorates due to low feature discrimination. We propose a reinforced input-feature method that acts like a spotlight effect on the dynamic search area of the tracked target. This method can be used to improve performance for deep-learning-based discriminative model prediction trackers, as well as for various other trackers that infer the center of the target during visual object tracking. The proposed method shows improved tracking performance over the baseline trackers, achieving a relative gain of 38% (F-score improved from 0.433 to 0.601) in the visual object tracking evaluation.
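        The exact formulation of the spotlight-style reinforcement is not given in the abstract; as one plausible reading (an assumption, not the paper's method), the sketch below weights the search-region feature map with a Gaussian mask centered on the predicted target position.

```python
import numpy as np

def spotlight_weight(features, center, sigma):
    """features: (H, W, C) search-area feature map; center: (row, col) predicted target cell."""
    h, w = features.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2) / (2 * sigma ** 2))
    return features * mask[..., None]                 # emphasize cells near the prediction
```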

      • KCI-indexed

        Adjusting the Re-detection Area Size in Correlation-Filter-Based Visual Tracking

        박가영. 한국컴퓨터정보학회, 2020. 韓國컴퓨터情報學會論文誌 Vol.25 No.7

        In this paper, we propose scalable re-detection for correlation-filter-based visual tracking. In the real world, targets frequently disappear and reappear during tracking, so failure-detection and re-detection methods are needed. An important requirement for re-detection is that the search area must be large enough to find the missing target. For robust visual tracking, we adopt the kernelized correlation filter as a baseline. Correlation filters have been studied extensively for visual object tracking in recent years; however, conventional correlation filters detect the target in an area the same size as the trained filter, which is only 2 to 3 times larger than the target. When the target has been lost for a long time, a wide area must be searched to re-detect it. The proposed algorithm searches a scalable area: the search area is expanded by 2% in every frame after target loss, which raises the re-detection success rate. Four aerial video datasets are used for the experiments, and both qualitative and quantitative results are presented. Our algorithm succeeds in re-detecting the target on challenging datasets where the conventional correlation filter fails.
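        A minimal sketch of the growth rule described above: once the target is declared lost, grow the re-detection window by 2% every frame until the filter response is confident again. The window/threshold handling around it is assumed, not taken from the paper.

```python
def search_size(base_size, frames_since_loss, growth=1.02):
    """Side length of the re-detection window after `frames_since_loss` frames of target loss."""
    return base_size * (growth ** frames_since_loss)

# e.g. a 100-pixel window grows to about 164 pixels after 25 frames of loss:
# search_size(100, 25) ≈ 164.06
```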

      • KCI-indexed

        Robust human tracking via key face information

        Weisheng Li, Xinyi Li, Lifang Zhou. 한국인터넷정보학회, 2016. KSII Transactions on Internet and Information Systems Vol.10 No.10

        Tracking the human body is an important problem in the computer vision field. Tracking failures caused by occlusion can lead to wrong rectification of the target position. In this paper, a robust human tracking algorithm is proposed to address the problems of occlusion and rotation and to improve tracking accuracy. It is based on the Tracking-Learning-Detection (TLD) framework. Key auxiliary information is used in the framework, motivated by the fact that a tracking target is usually embedded in context that provides useful information. First, a face localization method is used to find key face location information. Second, the relative position relationship is established between this auxiliary information and the target location. With this relational model, the key face information yields the current target position when the target has disappeared, so the target can be tracked stably even when it is partially or fully occluded. Experiments are conducted on various challenging videos. In conjunction with online updating, the results demonstrate that the proposed method outperforms the traditional TLD algorithm and has relatively better tracking performance than other state-of-the-art methods.
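        A hypothetical sketch of the relative-position idea: store the offset and scale of the body box with respect to the detected face box, then recover the body box from the face alone when the body is occluded. The box layout and function names are assumptions for illustration.

```python
def learn_offset(face_box, target_box):
    """Boxes as (x, y, w, h); returns the target-center offset from the face center plus scales."""
    fx, fy = face_box[0] + face_box[2] / 2, face_box[1] + face_box[3] / 2
    tx, ty = target_box[0] + target_box[2] / 2, target_box[1] + target_box[3] / 2
    return (tx - fx, ty - fy, target_box[2] / face_box[2], target_box[3] / face_box[3])

def recover_target(face_box, offset):
    """Re-locate the target box from a face detection and the learned offset/scale."""
    fx, fy = face_box[0] + face_box[2] / 2, face_box[1] + face_box[3] / 2
    dx, dy, sw, sh = offset
    w, h = face_box[2] * sw, face_box[3] * sh
    return (fx + dx - w / 2, fy + dy - h / 2, w, h)
```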

      • KCI-indexed

        Visual Attention Characteristics for Differences in Cafe Space Finishing Materials Using Eye Tracking

        최진경 (Choi, Jin-Kyung), 김주연 (Kim, Ju-Yeon). 한국실내디자인학회, 2018. 한국실내디자인학회논문집 Vol.27 No.2

        This study investigates whether eye gaze on cafe-space images changes intentionally with the floor finishing material. In Yarbus' experiment, he argued that changing the information an observer is asked to obtain from an image changes the pattern of eye movements. Based on this scan-path evidence, the study asks: (1) whether visual attention differs across floor-finishing-material stimuli, (2) how visual attention develops over the initial activity time and what movement paths occur across AOIs, and (3) how the floor area relates visually to the other AOIs. Eye movements were recorded with the SMI REDn Scientific, which sampled eye position at 30 Hz; each trial lasted 2 minutes (120 s). Although viewing was binocular, only the right eye was tracked. Sixty-six observers (mean age 22 years, standard deviation ±1.82) participated in the experiment, with four-point calibration and validation procedures performed at the start of the tasks. Fixation counts and durations were analyzed on four AOIs in each stimulus (AOI I: floor, AOI II: wall, AOI III: ceiling, AOI IV: counter). The results are as follows. First, the difference in the average number of fixations on the AOIs between the spatial image with wood-tile flooring and the one with polished tile was significant: the wood-tile stimulus drew more fixations on AOI II, AOI III, and AOI IV than the polished-tile stimulus, while AOI I received more attention in the polished-tile stimulus. Second, observers examined AOI II intensively in both stimuli; beyond that, visual intensity was highest on AOI IV and AOI I in the wood-tile stimulus, and on AOI I and AOI IV in the polished-tile stimulus. Third, visual attention on each AOI was divided into 5-second time ranges for both images: in the wood-tile stimulus the movement path proceeded horizontally through AOI II, AOI IV, and AOI II, whereas in the polished-tile stimulus it moved vertically through AOI II, AOI I, and AOI II. This study characterizes visual attention according to different viewing intentions and shows, through eye-tracking experiments, the relational pathways of the visual mechanism.
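        For illustration only, the sketch below aggregates fixation count and dwell time per area of interest (AOI) from a list of fixations; the rectangular AOI layout and data format are assumptions, not the study's actual processing pipeline.

```python
def aoi_stats(fixations, aois):
    """fixations: list of (x, y, duration_ms); aois: dict name -> (x, y, w, h) rectangles."""
    stats = {name: {"count": 0, "duration_ms": 0.0} for name in aois}
    for x, y, dur in fixations:
        for name, (ax, ay, aw, ah) in aois.items():
            if ax <= x < ax + aw and ay <= y < ay + ah:   # fixation falls inside this AOI
                stats[name]["count"] += 1
                stats[name]["duration_ms"] += dur
                break
    return stats
```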

      • KCI-indexed

        A Comparative Study of Visual Perception of Artificial Intelligence (AI) Creations and Human Creations Using Eye Tracking

        황미경, 주이모, 박민희, 권만우. 한국멀티미디어학회, 2022. 멀티미디어학회논문지 Vol.25 No.2

        This study analyzes, through eye tracking, the visual perceptual differences of observers viewing artworks created by human artists and by artificial intelligence (AI). More specifically, the study analyzes the degree of visual attention through a fixation experiment on non-linguistic sources such as the composition and expression of the artworks. As a result, the subjects guessed that one out of four artworks was created by AI (in actuality, 61.1% of the artworks were created by The Next Rembrandt). This demonstrates that most of the subjects hardly recognized the difference between artworks by human artists and by AI. The comparative analysis of visual perceptual differences found through eye tracking showed that more visual attention was demanded for catching the details of more stimulating visuals than of less stimulating visuals. In the gender analysis, both female and male subjects tended to stare more intently at the flowers in the still-life paintings (Deep Dream & Vincent van Gogh) and at the eyes in the portrait paintings (Rembrandt & The Next Rembrandt), demonstrating no significant gender differences. Various opinions on AI and art creation arise from different perspectives; this research is therefore meaningful in that it suggests an objective examination, through experiments, from an artistic perspective.

      • Vision-Based Drone Tracking Control System

        이동희 (Donghee Lee), 양원석 (Wonseok Yang), 이준학 (Junhak Yi), 박우룡 (Wooryong Park), 김현우 (Hyeonwoo Kim), 남우철 (Woochul Nam). 국방로봇학회, 2023. 국방로봇학회 논문집 Vol.2 No.4

        Unmanned aerial vehicles (UAVs) are widely used across various fields. Their swiftness enables them to track moving objects by processing their vision data. Tracking a ground object is achievable with relatively simple control, owing to the UAV's wide field of view of the ground from above and the gradual movement of the target. However, tracking objects with nimble and unexpected movements, such as other UAVs, is difficult: the tracking UAV must be controlled swiftly in response to the sudden movements of the target, and abrupt movement of the target can also provoke errors in visual tracking. Thus, we developed a new tracking system that enables a UAV to track another UAV. A state-of-the-art visual detector, YOLO-V5, was adopted for visual tracking, and different control schemes were applied to track the target UAV. The system was verified in real flight experiments with a micro air vehicle (DJI TELLO).
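        As a minimal sketch (an assumption, not the authors' controller), the snippet below turns a detected bounding box into yaw and climb commands that keep the target UAV centered in the image; the gains and frame dimensions are placeholder values.

```python
def center_error(bbox, frame_w, frame_h):
    """bbox as (x, y, w, h) in pixels; returns normalized horizontal and vertical error."""
    cx, cy = bbox[0] + bbox[2] / 2, bbox[1] + bbox[3] / 2
    return (cx - frame_w / 2) / (frame_w / 2), (cy - frame_h / 2) / (frame_h / 2)

def p_control(bbox, frame_w, frame_h, k_yaw=60.0, k_up=40.0):
    """Simple proportional commands from the bounding-box center error."""
    ex, ey = center_error(bbox, frame_w, frame_h)
    yaw_rate = k_yaw * ex          # turn toward the target horizontally
    up_down = -k_up * ey           # climb if the target is above the image center
    return yaw_rate, up_down
```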

      • KCI indexing candidate

        Configuration of Projective Modular Active Shape Model for Object Tracking

        김원. 한국정보기술학회, 2008. 한국정보기술학회논문지 Vol.6 No.5

        Boundary contour tracking is useful in visual analysis tasks such as motion estimation and 3-D reconstruction. Because the active contour model (Snake) is fast and simple, it has been widely used for such tracking. Tracking performance depends on stable preservation of correspondence relations and on robustness to local minima around targets; however, Snake is weak and sensitive to noise and local minima because it is an edge-based tracking system. This paper shows that the active shape model can be applied to track two-dimensional objects while retaining their shape information. The point distribution models (PDMs) are generated projectively, taking into account the projective relations of the camera system to world coordinates. For each PDM analysis, the corresponding eigenvector is obtained, which contains contour variational information in image space due to camera motion. The active shape model is then modularly composed of these eigenvectors to construct the projective modular active shape model (MASM). This model can cover contour motion generated by 6-DOF camera motion and can overcome edge noise. Moreover, it performs well in preserving correspondence relations since it includes the boundary shape and its variational information in the model. Feasibility is shown by experimental results on object tracking with strong edge disturbances around the target.
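        The sketch below shows only the basic PDM construction behind active shape models (PCA over aligned contours); the projective generation and modular composition described in the abstract are not reproduced, and the data layout is an assumption.

```python
import numpy as np

def build_pdm(shapes, n_modes=5):
    """shapes: (N, 2K) aligned contours, each flattened as (x1, y1, ..., xK, yK)."""
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    cov = centered.T @ centered / (len(shapes) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigen-decomposition of the shape covariance
    order = np.argsort(eigvals)[::-1][:n_modes]       # keep the largest variation modes
    return mean_shape, eigvecs[:, order], eigvals[order]

def synthesize(mean_shape, modes, b):
    """Generate a contour from shape parameters b (one weight per retained mode)."""
    return mean_shape + modes @ b
```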
