RISS Academic Research Information Service

A Study on Real-Time Estimation of Significant Wave Height and Wave Direction Using Convolutional LSTM

노영빈(Youngbin Ro), 최희정(Heejeong Choi), 이정호(Jungho Lee), 서승완(Seungwan Seo), 강필성(Pilsung Kang). Journal of the Korean Institute of Industrial Engineers (대한산업공학회지), Vol. 46, No. 6, 2020.

Real-time estimation of wave conditions is essential for improving sailing efficiency. However, existing methodologies are uneconomical due to expensive radar equipment and high computational complexity. To address this, we propose a neural network model capable of real-time estimation of significant wave height and wave direction using raw ocean images collected from operating vessels. In the proposed method, multiple consecutive ocean images are concatenated into a single clip. A Convolutional Long Short-Term Memory (ConvLSTM) network, which combines a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), is then trained on the clips. The final estimation is performed through regression or classification using the extracted spatiotemporal feature map. On datasets collected from two different ships, the proposed method achieved an absolute error of 8 cm and a relative error of 5% for significant wave height estimation, and an absolute error of 6° for wave direction estimation.
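The following is a minimal sketch, not the authors' code, of the pipeline the abstract describes: consecutive ocean images are stacked into a clip, a ConvLSTM cell extracts a spatiotemporal feature map, and a pooled regression head predicts significant wave height. All layer sizes, clip length, and image shape are illustrative assumptions.

```python
# Sketch of the ConvLSTM-based estimator described above (assumptions: grayscale
# frames, 8-frame clips, 32 hidden channels; not the authors' implementation).
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A single ConvLSTM cell: LSTM gates computed with convolutions instead of matmuls."""
    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        # One convolution produces all four gates (input, forget, cell, output) at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size,
                               padding=kernel_size // 2)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)        # update cell state
        h = o * torch.tanh(c)                # new hidden state = spatiotemporal feature map
        return h, c

class WaveHeightEstimator(nn.Module):
    """ConvLSTM over an image clip, followed by a pooled regression head."""
    def __init__(self, in_ch=1, hid_ch=32):
        super().__init__()
        self.cell = ConvLSTMCell(in_ch, hid_ch)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(hid_ch, 1))

    def forward(self, clip):                 # clip: (batch, time, channels, H, W)
        b, t, _, h, w = clip.shape
        hid = clip.new_zeros(b, self.cell.hid_ch, h, w)
        cell = torch.zeros_like(hid)
        for step in range(t):                # unroll over the consecutive ocean images
            hid, cell = self.cell(clip[:, step], (hid, cell))
        return self.head(hid).squeeze(-1)    # estimated significant wave height

# Illustrative usage: a batch of two clips, each with 8 grayscale 64x64 frames.
model = WaveHeightEstimator()
dummy_clip = torch.randn(2, 8, 1, 64, 64)
print(model(dummy_clip).shape)               # torch.Size([2])
```

Wave direction could be handled the same way by swapping the single-output regression head for a classification head over direction bins, as the abstract's "regression or classification" wording suggests.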

Improving Korean Emotion Classification Performance through Colloquial-Adaptive Pretraining

이정훈(Junghoon Lee), 김동화(Donghwa Kim), 노영빈(Youngbin Ro), 강필성(Pilsung Kang). Journal of the Korean Institute of Industrial Engineers (대한산업공학회지), Vol. 47, No. 4, 2021.

Language models (LMs) pretrained on a large text corpus and fine-tuned on task data achieve remarkable performance on document classification tasks. Recently, an adaptive pretraining method, which re-pretrains a pretrained LM on an additional dataset from the same domain as the target task to bridge the domain discrepancy, has reported significant performance improvements. However, current adaptive pretraining methods focus only on the domain gap between the pretraining data and the fine-tuning data. The writing style also differs: pretraining data such as Wikipedia is written in a literary style, whereas task data such as customer reviews is usually written in a colloquial style. In this work, we propose a colloquial-adaptive pretraining method that re-pretrains the pretrained LM on informal sentences to generalize the LM to the colloquial style. We verify the proposed method on multi-emotion classification datasets. The experimental results show that the proposed method yields improved classification performance on both low- and high-resource data.
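The following is a minimal sketch, not the authors' code, of the colloquial-adaptive pretraining idea: continue masked-language-model training of an already-pretrained Korean LM on informal sentences, then fine-tune the adapted encoder for emotion classification. The base model name, file path, number of labels, and hyperparameters are assumptions for illustration only.

```python
# Sketch of colloquial-adaptive pretraining with Hugging Face transformers.
# "klue/bert-base", "colloquial_sentences.txt", and num_labels=7 are illustrative choices.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          AutoModelForSequenceClassification,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

base = "klue/bert-base"                      # assumed pretrained Korean LM
tokenizer = AutoTokenizer.from_pretrained(base)

# --- Step 1: colloquial-adaptive pretraining (continued MLM on informal text) ---
mlm_model = AutoModelForMaskedLM.from_pretrained(base)
colloquial = load_dataset("text", data_files={"train": "colloquial_sentences.txt"})["train"]
colloquial = colloquial.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128), batched=True)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="colloquial-adapted", num_train_epochs=1,
                           per_device_train_batch_size=32),
    train_dataset=colloquial,
    data_collator=collator,
).train()
mlm_model.save_pretrained("colloquial-adapted")
tokenizer.save_pretrained("colloquial-adapted")

# --- Step 2: fine-tune the colloquially adapted encoder for emotion classification ---
clf = AutoModelForSequenceClassification.from_pretrained("colloquial-adapted", num_labels=7)
# ... fine-tune `clf` on the labeled multi-emotion dataset with a second Trainer as usual.
```

The key design point the abstract makes is that Step 1 changes only the style of the re-pretraining corpus (formal to colloquial), while the task head and fine-tuning procedure in Step 2 stay the same as in standard adaptive pretraining.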
