RISS Academic Research Information Service

      • Semantic Segmentation of Outcrop Images using Deep Learning Networks Toward Realization of Carbon Capture and Storage

        Kodai Sato, Hirokazu Madokoro, Takeshi Nagayoshi, Shun Chiyonobu, Paolo Martizzi, Stephanie Nix, Hanwool Woo, Takashi K. Saito, Kazuhito Sato, Institute of Control, Robotics and Systems (ICROS), 2021, ICROS International Conference Proceedings, Vol.2021 No.10

        This study was conducted to classify outcrop images using semantic segmentation methods based on deep learning algorithms. Carbon capture and storage (CCS) is an epoch-making approach to reducing greenhouse gases in the atmosphere. This study specifically examines outcrops because geological layer measurements can yield a highly accurate geological model for feasible CCS inspections. Using a digital monocular RGB camera, we obtained 13 outcrop images annotated with four classes along the strata. Subsequently, we compared segmentation accuracies across three input image sizes and four backbones: SegNet, U-Net, ResNet-18, and Xception-65. The ResNet-18 and Xception-65 backbones were implemented using DeepLabv3+. Experimentally obtained results demonstrated that data expansion with random sampling improved accuracy. Regarding evaluation metrics, global accuracy and local accuracy were higher than the mean intersection over union (mIoU) for our outcrop image dataset, which has unequal numbers of pixels in the respective classes. These results also revealed that resizing of input images is unnecessary for our method.
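The gap the abstract reports between global accuracy and mIoU on class-imbalanced data can be illustrated with a toy example. The sketch below is not the paper's code; it is a hypothetical, minimal implementation of the two metrics on a flattened label list, showing how a dominant class inflates global accuracy while mIoU is penalized on the rare class.

```python
# Hypothetical illustration (not the paper's code): global accuracy vs. mIoU
# on a class-imbalanced segmentation result.

def global_accuracy(y_true, y_pred):
    """Fraction of all pixels labeled correctly."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def mean_iou(y_true, y_pred, num_classes):
    """Mean intersection-over-union, averaged over classes present in either map."""
    ious = []
    for c in range(num_classes):
        inter = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        union = sum(t == c or p == c for t, p in zip(y_true, y_pred))
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy "image": class 0 dominates (18 px), class 1 is rare (2 px);
# one of the two rare pixels is mislabeled as class 0.
y_true = [0] * 18 + [1] * 2
y_pred = [0] * 19 + [1] * 1

print(global_accuracy(y_true, y_pred))   # high: dominated by class 0
print(mean_iou(y_true, y_pred, 2))       # lower: IoU of the rare class is only 0.5
```

A single mislabeled rare-class pixel costs 5% of global accuracy here but halves the rare class's IoU, which is exactly the asymmetry the abstract observes.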

      • Development of Micro Air Vehicle Using Aerial Photography for Safe Rowing and Coaching

        Hirokazu Madokoro, Kazuhito Sato, Nobuhiro Shimoi, Institute of Control, Robotics and Systems (ICROS), 2016, ICROS International Conference Proceedings, Vol.2016 No.10

        This study was undertaken to establish basic technologies and knowledge of aerial photography and its application to support safe rowing. In the water sport of rowing, managers and coaches use a motorboat to follow a rowing boat for coaching and safety observation. Using a motorboat gives rise to numerous problems: wake waves, narrow visual ranges, the limited number of boats that can be tracked at any one time, fuel consumption, and maintenance costs. Moreover, rowing boats risk collision with other rowing boats or obstacles floating on the water, especially coxless boats, because rowers face opposite to the direction of motion. The aim of this study is to actualize rowing aerial photography using a Micro Air Vehicle (MAV): a radio-controlled small multi-rotor helicopter that has recently come into popular use for numerous applications. We obtained rowing movies using three camera compositional patterns with changing altitudes and tilt angles. We examined the benefits of rowing aerial photography compared with movies obtained from a motorboat, with consideration of safety improvement.

      • Detection of Distracted State Based on Head Posture and Facial Expression

        Tomoki Washizu, Kazuhito Sato, Yuma Matsui, Hanwool Woo, Hirokazu Madokoro, Sakura Kadowaki, Institute of Control, Robotics and Systems (ICROS), 2019, ICROS International Conference Proceedings, Vol.2019 No.10

        Research and development of automated driving are progressing actively. Level 3 autonomous driving requires shifting of driving activity between the system and a driver. Such shifting poses a high risk of a severe traffic accident if the driver is in a distracted state. Nevertheless, no distracted-state detection method has been established. In our earlier study, we extracted characteristic driving behavior patterns in distracted states by quantifying eye movement. Results indicated a challenging task for further development: detecting gaze information with high accuracy. For this study, time-series changes of head posture and facial expressions in a concentrated driving state and a distracted state are quantified using a hierarchical growth-type recurrent SOM and a U-matrix. We assess the possibility of detecting driving behavior patterns involving head posture and facial expressions that characterize a distracted driver state.

      • Automatic Calibration of Bed-Leaving Sensor Signals Based on Genetic Evolutionary Learning

        Daiju Hiramatsu, Hirokazu Madokoro, Kazuhito Sato, Kazuhisa Nakasho, Nobuhiro Shimoi, Institute of Control, Robotics and Systems (ICROS), 2018, ICROS International Conference Proceedings, Vol.2018 No.10

        This paper presents a method to generate filters for shaping sensor signals using genetic network programming (GNP) for automatic calibration that absorbs individual differences. In our previous study, we developed a prototype that incorporates bed-leaving detection sensors using piezoelectric films and a machine-learning-based behavior recognition method using counter-propagation networks (CPNs). The system can learn topology and relations between input features and teaching signals. However, our CPN-based method was insufficient to address individual differences in parameters such as weight and height used for bed-leaving behavior recognition. For this study, we actualize automatic calibration of sensor signals for invariance relative to these body parameters. This paper presents two experimental results based on sensor signals collected in our previous study. In the preliminary experiment, we optimized the original sensor signals to approximate high-accuracy ideal sensor signals using generated filters, with fitness assessing the difference between original signal patterns and ideal signal patterns. In the application experiment, we used fitness calculated from the recognition accuracy of CPNs. The experimentally obtained results reveal that the mean accuracy improved by 6.53 percentage points across three datasets.
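The calibration loop the abstract describes — evolving a filter so a raw signal approximates an ideal reference, with fitness defined as the matching error — can be sketched in miniature. The code below is not GNP and not the authors' system; it is a hypothetical (1+1)-style evolutionary loop that calibrates a single gain parameter, just to make the fitness-driven idea concrete.

```python
# Toy sketch (a simple (1+1) evolutionary loop, not GNP) of fitness-driven
# calibration: evolve a filter gain so the raw sensor signal approximates
# an ideal reference signal.
import random

def fitness(gain, raw, ideal):
    """Sum of squared errors between the scaled signal and the ideal one (lower is better)."""
    return sum((gain * r - t) ** 2 for r, t in zip(raw, ideal))

def calibrate(raw, ideal, generations=200, seed=0):
    rng = random.Random(seed)
    gain = 1.0                              # uncalibrated starting point
    best = fitness(gain, raw, ideal)
    for _ in range(generations):
        child = gain + rng.gauss(0, 0.1)    # mutate the filter parameter
        f = fitness(child, raw, ideal)
        if f < best:                        # keep the child only if it is fitter
            gain, best = child, f
    return gain

# A sensor whose output is 2.5x too weak relative to the ideal signal.
raw   = [0.4, 0.8, 1.2, 0.8, 0.4]
ideal = [1.0, 2.0, 3.0, 2.0, 1.0]
print(calibrate(raw, ideal))  # converges near 2.5, the true scale factor
```

In the paper the evolved object is a signal-shaping filter program and the fitness can also come from downstream CPN recognition accuracy; here both are collapsed to one gain and a squared error for brevity.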

      • Classification and Visualization of Long-Term Life-monitoring Sensor Signals Using Topological Characteristics of Category Maps

        Kazuya Iguchi, Hirokazu Madokoro, Kazuhito Sato, Kazuhisa Nakasho, Nobuhiro Shimoi, Institute of Control, Robotics and Systems (ICROS), 2018, ICROS International Conference Proceedings, Vol.2018 No.10

        This paper presents a novel extraction and visualization method for human behavior patterns as life rhythms, from sensor signals obtained using our originally developed life-monitoring system. Our method visualizes categorical relations and distribution characteristics on category maps and their fired units. To create category maps that preserve data topology, we optimized three main parameters: vigilance thresholds, mapping size, and learning iterations. The mapping size, which relates to classification granularity and expressive ability, must be changed according to the data length. Experimentally obtained results reveal that the distribution of burst units spreads evenly when the number of learning iterations is set greater than the data size. This characteristic indicates that learning iterations must be increased when the mapping size is increased. Moreover, we demonstrate characteristics of integration and division of categories, for the relation between fired units and category maps, as the vigilance parameter changes.

      • Semantic Indoor Scene Recognition of Time-Series Aerial Images from a Micro Air Vehicle Mounted Monocular Camera

        Hirokazu Madokoro, Shinya Ueda, Kazuhito Sato, Institute of Control, Robotics and Systems (ICROS), 2018, ICROS International Conference Proceedings, Vol.2018 No.10

        This paper presents a semantic scene recognition method for indoor aerial time-series images obtained using a micro air vehicle (MAV). Using category maps, topologies of image features are mapped into a low-dimensional space based on competitive and neighborhood learning. The proposed method comprises two phases: a codebook feature description phase and a recognition phase using category maps. In the former phase, codebooks are created automatically as visual words using self-organizing maps (SOMs) after extracting part-based local features from time-series scene images with a part-based descriptor. In the latter phase, category maps are created using counter propagation networks (CPNs), with category boundaries extracted using a unified distance matrix (U-Matrix). With manual MAV operation, we obtained five aerial time-series image datasets for two flight routes: a round flight route and a zigzag flight route. The experimentally obtained results with leave-one-out cross-validation (LOOCV) for datasets divided into 10 zones revealed mean recognition accuracies of 71.7% for the round flight datasets and 65.5% for the zigzag flight datasets. The created category maps captured the complexity of the scenes through segmented categories in both flight datasets.
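The "competitive and neighborhood learning" that underlies the SOM and CPN category maps in this and the neighboring abstracts can be shown in a few lines. The sketch below is not the authors' implementation; it is a minimal, deterministic 1-D SOM with a hypothetical two-cluster dataset: each input selects its best-matching unit (competition), and that unit plus its grid neighbors move toward the input (neighborhood learning).

```python
# Minimal SOM sketch (not the authors' implementation): competitive and
# neighborhood learning on a 1-D grid of units.
import math

def train_som(data, map_size=4, dim=2, epochs=50, lr0=0.5, radius=1.0):
    # Units on a 1-D grid, initialized along the diagonal of the unit
    # square so the sketch is deterministic.
    weights = [[i / (map_size - 1)] * dim for i in range(map_size)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)   # learning rate decays toward 0
        for x in data:
            # Competition: the best-matching unit (BMU) is the nearest unit.
            bmu = min(range(map_size),
                      key=lambda i: sum((w - v) ** 2
                                        for w, v in zip(weights[i], x)))
            # Neighborhood learning: the BMU and its grid neighbors move
            # toward the input, weighted by a Gaussian of grid distance.
            for i in range(map_size):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                for k in range(dim):
                    weights[i][k] += lr * h * (x[k] - weights[i][k])
    return weights

# Two well-separated clusters; after training, units settle near each
# cluster while preserving their order on the grid (data topology).
data = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.0)]
weights = train_som(data)
```

A CPN adds a supervised output layer on top of this topological layer, which is how the papers turn the maps into labeled category maps.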

      • Occlusion-Robust Segmentation for Multiple Objects using a Micro Air Vehicle

        Asahi Kainuma, Hirokazu Madokoro, Kazuhito Sato, Nobuhiro Shimoi, Institute of Control, Robotics and Systems (ICROS), 2016, ICROS International Conference Proceedings, Vol.2016 No.10

        This paper presents a novel object extraction method using a micro air vehicle (MAV) for improving robustness to occlusion. The proposed method is based on the saliency of objects, extracting regions of interest (RoIs) using scale invariant feature transform (SIFT) features and segmenting target objects using GrabCut, which requires advance learning. We obtained original aerial photographic time-series image datasets using a MAV. Experimental results revealed that object extraction accuracies measured using precision, recall, and F-measure improved with MAV movement, for images with changing rates of occlusion between two objects: a chair and a table. Especially for images of the chair, which is smaller than the table, our method extracted object regions well. To improve extraction accuracy, given the results for the table, an advanced mechanism combined with flight patterns is necessary to adjust to a suitable distance between the MAV and a target object.

      • Semantic Scene Recognition and Zone Labeling for Mobile Robot Benchmark Datasets based on Category Maps

        Ryoma Fukushi, Hirokazu Madokoro, Kazuhito Sato, Institute of Control, Robotics and Systems (ICROS), 2018, ICROS International Conference Proceedings, Vol.2018 No.10

        For this study, we focus on autonomous locomotion based on visual landmarks, recognizing surrounding environments from saliency characteristics. This paper presents a feature extraction method combining saliency maps (SMs), histograms of oriented gradients (HOG) features, and accelerated KAZE (AKAZE) descriptors to describe image features as visual landmarks, without removing human regions as dynamic objects. For semantic scene recognition, we used a method combining self-organizing maps (SOMs), based on bag of features, for creating codebooks as visual words, with counter propagation networks (CPNs), based on topological learning of neighborhood and competition, for creating category maps (CMs) that convert input features into a low-dimensional space. We used a mobile robot to obtain clockwise datasets (CWDs) and counterclockwise datasets (CCWDs). The experimentally obtained results revealed recognition accuracies (RAs) of 70.76% for CWDs (26 categories) and 72.24% for CCWDs (25 categories). Using this result as the original ground truth (GT) pattern, we changed five types of label patterns (LPs) according to the mapping results on the CMs for selection.

      • Development of Octo-Rotor UAV Prototype with Night-vision Stereo Camera System Used for Nighttime Visual Inspection

        Hirokazu Madokoro, Hanwool Woo, Kazuhito Sato, Nobuhiro Shimoi, Institute of Control, Robotics and Systems (ICROS), 2019, ICROS International Conference Proceedings, Vol.2019 No.10

        This paper presents an octo-rotor unmanned air vehicle (UAV) prototype and its vision system used for nighttime visual infrastructure inspection. After developing a stereo vision system using two inexpensive night-vision cameras to obtain depth information, we conducted a comprehensive evaluation experiment to assess its practical use, to support the design and manufacture of our prototype, to build a camera system including a dedicated camera mount, and to compare and evaluate stereo matching algorithms. For nighttime inspection, we produced depth images from parallax images of nighttime aerial photography using stereo matching algorithms of four types, and evaluated which performed best on nighttime aerial photographs. The experimentally obtained results revealed that the contrast between structure outlines and depth information was extracted clearly by the highest-accuracy stereo matching result. These results show that our system concept can open up a new field of inspecting structures using nighttime aerial photography.
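The core of the stereo matching step the abstract evaluates — recovering depth from the horizontal offset between two camera views — can be illustrated on one scanline. The sketch below is a hypothetical toy, not one of the four algorithms the paper compares; it shows plain block matching with a sum-of-absolute-differences (SAD) cost, from which disparity (and hence depth, via the camera baseline and focal length) is read off.

```python
# Hypothetical sketch (not the paper's pipeline) of block-matching stereo:
# for each pixel in the left scanline, slide a small window along the same
# row of the right scanline and keep the horizontal shift (disparity) with
# the smallest sum of absolute differences (SAD).

def disparity_row(left, right, window=1, max_disp=4):
    """Per-pixel disparity for one scanline (1-D lists of intensities)."""
    n = len(left)
    disp = [0] * n
    for x in range(n):
        best_cost, best_d = float("inf"), 0
        for d in range(min(max_disp, x) + 1):
            # SAD over a window centered at x (left) and x - d (right).
            cost = 0
            for k in range(-window, window + 1):
                xl, xr = x + k, x - d + k
                if 0 <= xl < n and 0 <= xr < n:
                    cost += abs(left[xl] - right[xr])
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp

# A bright feature at x=5 in the left view appears at x=3 in the right
# view: its disparity is 2 (closer objects shift more between views).
left  = [0, 0, 0, 0, 0, 9, 0, 0, 0, 0]
right = [0, 0, 0, 9, 0, 0, 0, 0, 0, 0]
print(disparity_row(left, right)[5])  # → 2
```

Textureless regions are ambiguous under this cost, which is one reason the low-contrast imagery of night-vision cameras makes the choice of matching algorithm matter in the paper's setting.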

      • Unrestrained Sensors Using Piezoelectric Elements for Bed-Leaving Prediction

        Hirokazu Madokoro, Nobuhiro Shimoi, Kazuhito Sato, Institute of Control, Robotics and Systems (ICROS), 2013, ICROS International Conference Proceedings, Vol.2013 No.10

        This paper presents a sensor system that predicts the behavior patterns that occur when a patient leaves a bed. We originally developed plate-shaped sensors using piezoelectric elements. Existing sensors, such as clip sensors and mat sensors, require that patients be restrained. Our sensors require no power supply and do not restrain patients, which also mitigates privacy concerns. Moreover, we developed machine-learning algorithms to predict behavior patterns without setting thresholds. We evaluated our system with three subjects in an experimental environment constructed in reference to a clinical site. The mean recognition accuracy was 78.6% for seven behavior patterns. In particular, the recognition accuracies for lateral sitting and terminal sitting were both 94.4%. We consider these capabilities useful for bed-leaving prediction in practical use.
