RISS Academic Research Information Service


Indexed in SCOPUS and KCI

Deep Learning Based on Geometric Feature Information for Airborne LiDAR Point Cloud Classification (항공 라이다 점군 분류를 위한 기하학적 특성정보 기반 딥러닝)


      https://www.riss.kr/link?id=A109374531


Multilingual Abstract

The use of photogrammetry is rapidly increasing in many fields due to the development of various types of advanced sensors that can be mounted on a variety of payloads, along with state-of-the-art data processing technology, including DL (Deep Learning), which can efficiently produce user-oriented spatial information products. The performance of DL is affected by various factors, including the neural network architecture, the training data, and the training method. This paper presents multimodal DL for LiDAR (Light Detection and Ranging) point cloud classification by utilizing geometric features derived from the 3D coordinates of LiDAR data. In particular, omnivariance, eigenentropy, anisotropy, surface variation, sphericity, and verticality, as geometric features computed from the eigenvalues of the point clouds, were utilized for training the DL model. Each feature represents unique intrinsic information about the objects. By revealing these characteristics inherent in the 3D coordinates of LiDAR data, a synergy effect in DL model training can be achieved to improve DL performance. Additionally, fusion is an important issue in multimodal DL. In this paper, we analyzed classification results from early fusion and from a hybrid method based on late fusion. The overall accuracy of the classification improved by up to 35% for test data by utilizing geometric features with early fusion. Therefore, multimodal DL could be an effective training strategy by utilizing intrinsic feature information.
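The abstract describes per-point geometric features computed from the eigenvalues of local point-cloud covariance matrices, concatenated with the raw coordinates as an early-fusion input. The sketch below illustrates that idea under stated assumptions: a k-nearest-neighbor neighborhood and the commonly used eigenvalue-based feature definitions. The neighborhood size, exact normalization, and the network architecture are not given in the abstract, so every name and parameter here is illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def eigenvalue_features(points, k=20):
    """Per-point geometric features from the eigenvalues of each local
    k-neighborhood covariance matrix: omnivariance, eigenentropy,
    anisotropy, surface variation, sphericity, verticality."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)              # k nearest neighbors per point
    feats = np.zeros((len(points), 6))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)              # 3x3 local structure tensor
        evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
        l3, l2, l1 = np.clip(evals, 1e-12, None)  # l1 >= l2 >= l3 > 0
        s = l1 + l2 + l3
        e1, e2, e3 = l1 / s, l2 / s, l3 / s       # normalized eigenvalues
        omnivariance = (e1 * e2 * e3) ** (1.0 / 3.0)
        eigenentropy = -(e1 * np.log(e1) + e2 * np.log(e2) + e3 * np.log(e3))
        anisotropy = (e1 - e3) / e1
        surface_variation = e3                    # lambda_3 / sum(lambda)
        sphericity = e3 / e1
        normal = evecs[:, 0]                      # eigenvector of smallest eigenvalue
        verticality = 1.0 - abs(normal[2])
        feats[i] = (omnivariance, eigenentropy, anisotropy,
                    surface_variation, sphericity, verticality)
    return feats

# Early fusion: concatenate raw coordinates and geometric features into a
# single per-point input vector before feeding the classification network.
points = np.random.rand(1000, 3)                  # placeholder airborne LiDAR XYZ
geom = eigenvalue_features(points, k=20)
fused_input = np.hstack([points, geom])           # shape (N, 9)
```

In an early-fusion setup the fused (N, 9) array would be fed to the classifier directly; a late-fusion or hybrid variant, as mentioned in the abstract, would instead process the coordinates and the geometric features in separate branches and merge their intermediate features or predictions.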
