RISS Academic Research Information Service

      • KCI-indexed

        A study on unmanned combat vehicle path planning for collision avoidance with enemy forces in dynamic situations

        Ahn Jisoo, Jung Sewoong, Kim Hansom, Hwang Ho-Jin, Jeon Hongbae · Society for Computational Design and Engineering 2023 Journal of Computational Design and Engineering Vol.10 No.6

        This study focuses on the path planning problem for unmanned combat vehicles (UCVs), where the goal is to find a viable path from the starting point to the destination while avoiding collisions with moving obstacles, such as enemy forces. The objective is to minimize the overall cost, which encompasses factors like travel distance, geographical difficulty, and the risk posed by enemy forces. To address this challenge, we have proposed a heuristic algorithm based on D* Lite. This modified algorithm considers not only travel distance but also other military-relevant costs, such as travel difficulty and risk. It generates a path that navigates around both fixed unknown obstacles and dynamically moving obstacles (enemy forces) that change positions over time. To assess the effectiveness of our proposed algorithm, we conducted comprehensive experiments, comparing and analyzing its performance in terms of average pathfinding success rate, average number of turns, and average execution time. Notably, we examined how the algorithm performs under two UCV path search strategies and two obstacle movement strategies. Our findings shed light on the potential of our approach in real-world UCV path planning scenarios.
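The composite cost described in the abstract (travel distance plus terrain difficulty plus enemy risk) can be sketched with an ordinary grid search. The code below is a simplified A*-style stand-in, not the paper's D* Lite variant; the cost maps, the risk weight, and the 4-neighbor movement model are illustrative assumptions.

```python
import heapq

def plan_path(difficulty, risk, start, goal, w_risk=2.0):
    """Grid path search minimizing distance + terrain difficulty + enemy risk.

    `difficulty` and `risk` are per-cell cost maps (same shape);
    a cell whose difficulty is None is an impassable obstacle.
    """
    rows, cols = len(difficulty), len(difficulty[0])

    def h(cell):  # Manhattan distance: admissible since every step costs >= 1
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    counter = 0  # tie-breaker so the heap never has to compare path lists
    frontier = [(h(start), 0.0, counter, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, _, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if difficulty[nxt[0]][nxt[1]] is None:  # obstacle cell
                continue
            # cost of entering nxt: unit distance + difficulty + weighted risk
            ng = g + 1.0 + difficulty[nxt[0]][nxt[1]] + w_risk * risk[nxt[0]][nxt[1]]
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                counter += 1
                heapq.heappush(frontier, (ng + h(nxt), ng, counter, nxt, path + [nxt]))
    return None  # no collision-free path exists
```

In the paper's dynamic setting, D* Lite would incrementally repair the search tree as enemy positions change instead of replanning from scratch as this sketch would.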

      • KCI-indexed

        Environmental Profiling of a Plant Factory-Type Seedling Production System with Artificial Lighting and Evaluation of Cucumber Growth

        An Sewoong, Lee Hye Jin, Sim Ha Seon, Ahn Su Ran, Kim Sung Tae, Kim Sung Kyeom · Korean Society for Bio-Environment Control 2021 Journal of Bio-Environment Control Vol.30 No.2

        Because climate change has made abnormal weather more frequent, a new system is needed to produce grafted vegetable seedlings of uniform quality year-round. A plant factory-type seedling production system with artificial lighting can produce uniform seedlings in all four seasons regardless of outside weather, and commercial transplant nurseries are considering its adoption. To profile the environment of such a system, light intensity (vertical distribution: 255, 205, and 105 mm from the light source; horizontal distribution: 45 points at 150 × 150 mm intervals), light quality, air temperature, and relative humidity (vertical distribution: 615, 980, 1,345, and 1,710 mm from the floor; 12 points in total) were measured. During cucumber seedling cultivation and environmental profiling, the seedling module was set to a light intensity of 150 μmol·m⁻²·s⁻¹, a 16/8 h photoperiod, 25/20°C day/night air temperature, and 70/85% relative humidity. To evaluate whether the system produces uniform seedlings, growth of 'Joeun Baekdadagi' cucumber was measured 8 days after sowing (n = 20). The light intensity of the module at 255 mm from the light source was 167.2 ± 35.7 μmol·m⁻²·s⁻¹, close to the setpoint. As the distance to the light source decreased, light intensity increased by 11 and 23%, respectively, but the standard deviation increased 1.8-fold. The red/far-red ratio of the artificial light source was 3.6. Day/night air temperatures at 615, 980, 1,345, and 1,710 mm from the floor were 24.7/19.5, 24.6/19.5, 24.7/19.4, and 24.7/19.6°C, respectively. Air temperature did not differ by height, although the day/night temperatures deviated from the setpoints by 0.3 and 0.5°C, respectively. Relative humidity likewise did not differ by position (71/84%) and was controlled very precisely, within 1% of the setpoint. Eight days after sowing, plant height, leaf area, fresh weight, and dry weight of the cucumber seedlings were 4.1 ± 0.1 cm, 24.1 ± 3.7 cm², 0.7 ± 0.13 g, and 0.05 ± 0.008 g, respectively; with a coefficient of variation of plant height below about 2.4%, the system produced very uniform seedlings. When the environmental factors that most affect seedling production were profiled vertically and horizontally, air temperature and relative humidity were controlled precisely and accurately, and light intensity and quality were also adequate for producing cucumber seedlings. If this system is disseminated, it is expected to enable year-round cultivation of scions and rootstocks for uniform, high-quality grafted seedlings.

        (Authors' English abstract) Due to climate change, such as high temperatures in summer and low sunlight in winter, vegetable seedling growers have faced difficulties in producing uniform seedlings in all four seasons. A plant factory with artificial lighting (PFAL) is an effective alternative in that it can control environmental conditions and produce uniform seedlings regardless of outside weather. This study therefore investigated environmental parameters, such as light uniformity, temperature, and relative humidity, and the uniformity of seedlings cultivated in a PFAL, to evaluate a plant factory transplant production system with artificial lighting. Cucumber seedlings were grown in a PFAL at a light intensity of 150 μmol·m⁻²·s⁻¹, a 16/8 h photoperiod, 25/20°C temperature, and 70/85% relative humidity. In the light-intensity uniformity measurements, moving closer to the light source from 255 to 105 mm increased the amount of light by 11 and 23%, respectively, but the standard deviation increased 1.8-fold. For temperature and relative humidity at four height positions (615, 980, 1,345, and 1,710 mm from the floor), temperature showed little difference by location: 24.7/19.5, 24.6/19.5, 24.7/19.4, and 24.7/19.6°C, respectively. Relative humidity also did not differ by location (71/84%). Additionally, plant height, leaf area, fresh weight, and dry weight of cucumber seedlings 8 days after sowing showed highly uniform quality: 4.1 ± 0.1 cm, 24.1 ± 3.7 cm², 0.7 ± 0.13 g, and 0.05 ± 0.008 g, respectively. Considering the environmental profiling results and the cucumber seedling uniformity, vegetable seedling production in a PFAL can be a promising tool in the era of climate change.
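The uniformity figure quoted in the abstract can be checked directly: the coefficient of variation is the standard deviation divided by the mean, so for plant height 4.1 ± 0.1 cm it lands near the stated 2.4%. A minimal sketch:

```python
def coefficient_of_variation(mean, sd):
    """CV (%) = standard deviation / mean * 100."""
    return sd / mean * 100

# Plant height from the abstract: 4.1 ± 0.1 cm
cv_height = coefficient_of_variation(4.1, 0.1)  # about 2.4%
```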

      • SCI/SCIE/Scopus

        Visual Preference Assessment on Ultra-High-Definition Images

        Kim, Haksub; Ahn, Sewoong; Kim, Woojae; Lee, Sanghoon · Institute of Electrical and Electronics Engineers 2016 IEEE Transactions on Broadcasting Vol.62 No.4

        With the recent evolution of ultra-high-definition (UHD) display technology, viewers can enjoy high-resolution content more realistically on TVs, virtual reality, portable, and wearable devices. To increase the visual attraction viewers perceive, post-processing of video content has been applied ever more aggressively in such commercial devices. In this paper, we define a new term, visual preference, to quantify viewer perceptual preferences in a given viewing environment with UHD images processed using sharpness and contrast enhancements. Viewers' visual preference for UHD images depends on the spatial resolution afforded by the UHD display, which in turn depends on the viewing geometry of the display resolution, display size, and viewing distance. In addition, viewers can perceive different degrees of quality and sharpness according to the content enhancement type and level, which leads to variation in the statistical dynamics of spatial image information. In this paper, we explore a novel methodology called the visual preference assessment model (VPAM) that accounts for content enhancement features, diverse viewing geometry, and statistical dynamics variation. The VPAM is a no-reference assessment method designed using an elaborate subjective preference assessment with support vector regression as the machine learning algorithm. The VPAM far outperforms previous methods, achieving correlations of 0.45-0.56 with subjective visual preference assessments.
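The viewing-geometry dependence mentioned in the abstract is commonly expressed as angular resolution, i.e. pixels per degree of visual angle, which ties display resolution, display size, and viewing distance together. The formula below is the standard geometric definition; the UHD display numbers in the example are illustrative, not taken from the paper.

```python
import math

def pixels_per_degree(h_resolution, display_width, viewing_distance):
    """Angular resolution of a display: horizontal pixel count divided by
    the horizontal visual angle (in degrees) subtended at the viewer.
    display_width and viewing_distance must share the same unit."""
    visual_angle = 2 * math.degrees(math.atan(display_width / (2 * viewing_distance)))
    return h_resolution / visual_angle

# Example (illustrative): a 3840-px-wide UHD panel, 1.2 m wide, viewed from 1.8 m
ppd = pixels_per_degree(3840, 1.2, 1.8)  # roughly 104 pixels per degree
```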

      • Blind Deep S3D Image Quality Evaluation via Local to Global Feature Aggregation

        Heeseok Oh, Sewoong Ahn, Jongyoo Kim, Sanghoon Lee · IEEE 2017 IEEE Transactions on Image Processing Vol.26 No.10

        Previously, no-reference (NR) stereoscopic 3D (S3D) image quality assessment (IQA) algorithms have been limited to the extraction of hand-crafted features based on an incomplete understanding of the human visual system or natural scene statistics. Furthermore, compared with full-reference (FR) S3D IQA metrics, it is difficult to achieve competitive quality score predictions using such features, which are not optimized with respect to human opinion. To cope with this limitation of the conventional approach, we introduce a novel deep learning scheme for NR S3D IQA in terms of local-to-global feature aggregation. A deep convolutional neural network (CNN) model is trained in a supervised manner through two-step regression. First, to overcome the lack of training data, local patch-based CNNs are modeled, and an FR S3D IQA metric is used to approximate a reference ground truth for training the CNNs. The automatically extracted local abstractions are aggregated into global features by inserting an aggregation layer into the deep structure. The locally trained model parameters are then updated iteratively using supervised global labeling, i.e., subjective mean opinion score (MOS). In particular, the proposed deep NR S3D image quality evaluator does not estimate depth from a pair of S3D images. The S3D image quality scores predicted by the proposed method represent a significant improvement over those of previous NR S3D IQA algorithms. Indeed, the accuracy of the proposed method is competitive with FR S3D IQA metrics, achieving roughly 91% correlation with MOS.
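The local-to-global aggregation idea can be sketched without a real CNN: score each non-overlapping patch with some local regressor (here a trivial stand-in), then mean-pool the local outputs into one global score. The function names, the mean-intensity stand-in, and the mean-pooling choice are all assumptions for illustration, not the paper's trained model.

```python
def split_into_patches(img, p):
    """Split a 2-D image (list of rows) into non-overlapping p x p patches."""
    rows, cols = len(img), len(img[0])
    patches = []
    for r in range(0, rows - p + 1, p):
        for c in range(0, cols - p + 1, p):
            patches.append([row[c:c + p] for row in img[r:r + p]])
    return patches

def local_score(patch):
    """Stand-in for the patch-level CNN regressor: here, mean intensity."""
    vals = [v for row in patch for v in row]
    return sum(vals) / len(vals)

def global_score(img, p=2):
    """Aggregate local patch scores into one global score (mean pooling)."""
    scores = [local_score(pt) for pt in split_into_patches(img, p)]
    return sum(scores) / len(scores)
```

In the paper, the aggregation layer and the patch-level network are trained jointly and then fine-tuned against subjective MOS; this sketch only mirrors the data flow.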

      • Deep Visual Discomfort Predictor for Stereoscopic 3D Images

        Oh, Heeseok; Ahn, Sewoong; Lee, Sanghoon; Bovik, Alan Conrad · IEEE 2018 IEEE Transactions on Image Processing Vol.27 No.11

        Most prior approaches to the problem of stereoscopic 3D (S3D) visual discomfort prediction (VDP) have focused on the extraction of perceptually meaningful handcrafted features based on models of visual perception and of natural depth statistics. Toward advancing performance on this problem, we have developed a deep learning-based VDP model named deep visual discomfort predictor (DeepVDP). The DeepVDP uses a convolutional neural network (CNN) to learn features that are highly predictive of experienced visual discomfort. Since a large amount of reference data is needed to train a CNN, we develop a systematic way of dividing the S3D image into local regions defined as patches and model a patch-based CNN using two sequential training steps. Since it is very difficult to obtain human opinions on each patch, a proxy ground-truth label generated by an existing S3D visual discomfort prediction algorithm called 3D-VDP is instead assigned to each patch. These proxy ground-truth labels are used in the first stage of training the CNN. In the second stage, the automatically learned local abstractions are aggregated into global features via a feature aggregation layer. The learned features are iteratively updated via supervised learning on subjective 3D discomfort scores, which serve as ground-truth labels on each S3D image. The patch-based CNN model pretrained on proxy ground-truth labels is subsequently retrained on true global subjective scores. The global S3D visual discomfort scores predicted by the trained DeepVDP model achieve state-of-the-art performance compared with previous VDP algorithms.
