RISS 학술연구정보서비스 (Academic Research Information Service)

      • Obstacle Detection using Stereo Camera for Combine Robot

        Ryo Asada, Michihisa Iida, Masahiko Suguri, Ryohei Masuda. 한국농업기계학회 (Korean Society for Agricultural Machinery), 2018 학술발표논문집 (Conference Proceedings), Vol. 23, No. 1

        In Japan, autonomous agricultural vehicles are attracting considerable attention and are being developed in response to the declining and aging farming population. For the autonomous operation of agricultural vehicles, it is critical to detect obstacles in the field and take appropriate safety actions, and many researchers have studied various sensing technologies for detecting obstacles and avoiding collisions. Stereo vision sensors can obtain the range between an object and the cameras from the disparity between the images captured by two cameras. Compared with laser and ultrasonic sensors, stereo vision offers higher spatial resolution and a wider field of view without the need for scanning. This research aims to develop a collision avoidance system that detects obstacles using a stereo camera. The detection algorithm consists of stereo image processing and point cloud analysis. Stereo processing establishes correspondences between image features in different views of the scene and calculates the disparity value of each pixel. The three-dimensional location of each pixel is then determined and 3D point cloud data are generated. The point cloud data include obstacle points, rice plant points, weed points, and noise. After transforming from the camera coordinate system to the world coordinate system, most rice plant points can be eliminated based on the height information. To distinguish obstacle areas from noise, the space is divided into small cells and obstacle areas are extracted based on the point density of each cell. A 2D grid map indicating obstacle areas and free areas in the view is then generated. In stationary tests, obstacles could be detected well from the grid map. The RMSE between the real range and the measured range was 0.104 m when the obstacle was placed at distances of 1 m to 10 m from the stereo camera, confirming that high distance accuracy could be achieved. We then mounted the stereo camera on the combine robot and conducted field tests. The combine robot took actions according to the position of the detected obstacle: if an obstacle was detected in the stop area or in the slow-down area, the combine was controlled to stop or to slow down, respectively. The results indicated that the system could detect obstacles and avoid collisions while the combine was harvesting. Nevertheless, the system had blind areas in the vicinity of the vehicle because of the limited angle of view, so using multiple stereo cameras, or combining the stereo camera with other sensors, would achieve a safer detection and collision avoidance system.
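
        The abstract does not include code; as a rough illustration of the height-filtering and cell-density step described above, the following Python sketch marks grid cells as obstacles when enough above-crop points fall into them. The function name and all thresholds (crop height, cell size, minimum point count) are assumed for illustration and are not values from the paper.

```python
import numpy as np

def extract_obstacle_cells(points_world, crop_height=1.2, cell_size=0.2, min_points=30):
    """Hypothetical sketch of the cell-based obstacle extraction described above.

    points_world : (N, 3) array of XYZ points already transformed to world coordinates.
    crop_height  : assumed rice-plant height threshold [m]; lower points are discarded.
    cell_size    : edge length of each 2D grid cell [m] (assumed).
    min_points   : point-density threshold that marks a cell as an obstacle (assumed).
    Returns a dict mapping (ix, iy) grid indices to True for obstacle cells.
    """
    # Remove points attributable to rice plants using the height information.
    above_crop = points_world[points_world[:, 2] > crop_height]

    # Accumulate the remaining points into 2D grid cells on the x-y plane.
    grid = {}
    for x, y, _ in above_crop:
        key = (int(np.floor(x / cell_size)), int(np.floor(y / cell_size)))
        grid[key] = grid.get(key, 0) + 1

    # Cells whose point count exceeds the threshold are treated as obstacle areas;
    # sparse cells are assumed to be noise.
    return {key: count >= min_points for key, count in grid.items()}

# Example with a synthetic point cloud standing in for stereo-derived data.
cloud = np.random.rand(5000, 3) * [10.0, 10.0, 2.0]
obstacle_map = extract_obstacle_cells(cloud)
```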

      • Human Detection in a Paddy Field by using Thermal Images

        Kazuya Arai, Ryohei Masuda, Masahiko Suguri, Michihisa Iida. 한국농업기계학회 (Korean Society for Agricultural Machinery), 2018 학술발표논문집 (Conference Proceedings), Vol. 23, No. 1

        At present, agriculture in Japan faces the problems of labor shortage and aging. In order to ensure a stable food supply in the future, it is necessary to reduce the workload, expand the scale of management, and secure new farmers. In the development of high-efficiency and labor-saving technology for paddy field work, the robotization of various agricultural machines has been studied. At Kyoto University, a robotic combine harvester that determines its traveling route from navigation data acquired by GNSS and a GPS compass is being studied. However, since this robotic combine harvester merely runs on a predetermined route, there is a safety problem: it cannot stop if a human intrudes in its traveling direction. To solve this problem, several human detection methods have been studied. In this study, we propose a method using thermal images as a new approach to human detection in paddy fields based on image processing. Thermal images have properties that are advantageous for human detection compared with visible images: they are not affected by ambient light, and human body temperature usually lies within a certain range, so it is relatively easy to extract human regions. For these reasons, thermal imaging is used in the traffic and security fields, and here we examined whether it can be applied to agriculture. We focused on human detection during harvesting by a combine harvester in a paddy field. We mounted a thermographic camera on a combine harvester, recorded video showing crops and humans, and captured images from it. We then performed image processing and extracted high-temperature regions as regions of interest (ROIs). Finally, feature values were calculated for each ROI, and an artificial neural network (ANN) classifier judged whether each ROI contained a human. We generated a confusion matrix from the judgment results for all acquired images and calculated the accuracy rate. By setting the temperature range and the classification categories in the output of the ANN classifier appropriately, the accuracy rate was 99.3% for images in which a human is present, 94.9% for images in which no human is present, and 97.1% on average. Given this high accuracy, the proposed human detection method using thermal images can be considered effective. In this work, we recorded video in the field and conducted human detection afterwards; in the future, we plan to detect humans who intrude during harvesting in real time. Formulating a procedure for determining the temperature range is also a subject for future work.
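
        As a rough illustration of the ROI extraction step, the following Python sketch thresholds a thermal image to an assumed human temperature range, labels the connected high-temperature regions, and computes a few simple per-ROI features. The temperature range, minimum area, and feature set are assumptions; the paper does not specify them here.

```python
import numpy as np
from scipy import ndimage

def extract_roi_features(thermal_img, t_low=28.0, t_high=40.0, min_area=50):
    """Hypothetical sketch: extract high-temperature ROIs and simple per-ROI features.

    thermal_img : 2D array of per-pixel temperatures [deg C].
    t_low/t_high: assumed human temperature range, not the paper's values.
    min_area    : minimum ROI size in pixels used to reject small noise blobs (assumed).
    Returns a list of (bounding_slices, feature_vector) pairs.
    """
    mask = (thermal_img >= t_low) & (thermal_img <= t_high)
    labels, _ = ndimage.label(mask)          # connected high-temperature regions
    rois = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        region = labels[sl] == i
        area = int(region.sum())
        if area < min_area:
            continue
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        mean_t = float(thermal_img[sl][region].mean())
        # Example features: area, aspect ratio, mean temperature. An ANN classifier
        # would take such a vector and output human / non-human for each ROI.
        rois.append((sl, np.array([area, h / w, mean_t])))
    return rois
```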

      • Improvement of Human Detecting Accuracy by 3D-LIDAR for Combine Robot

        Kyoeung Lee, Michihisa Iida, Masahiko Suguri, Ryohei Masuda. 한국농업기계학회 (Korean Society for Agricultural Machinery), 2018 학술발표논문집 (Conference Proceedings), Vol. 23, No. 1

        Rice farming tasks, including 'sirokaki' (puddling) and harvesting, require burdensome labor. After World War II, various agricultural machines were introduced in Japan, which resulted not only in reducing the manpower required for farming but also in increasing productivity. In recent years, however, the number of farmers has been decreasing while the ratio of farmers over 65 years old has been increasing, which is a serious threat to Japanese farming. Automating farming with unmanned agricultural robots is an effective solution to this problem, and safety is one of the most important issues when developing agricultural robots. In previous research, various sensors have been used to detect obstacles on the path, including stereo cameras, thermographic cameras, and 3D-LIDAR (laser range finder) sensors. In this paper, a 3D-LIDAR sensor is installed on the combine robot and used to detect human presence: when the sensor detects a human during harvesting, the robot automatically slows down and stops. Because of the presence of obstacles (e.g., rice plants and weeds) in rice fields, however, the sensor often fails to distinguish humans from obstacles. To solve this problem, clustering and thresholding methods are introduced. In the former method, all points inside the detection area were treated as a single group, which made it difficult to identify humans when multiple objects were detected. The clustering method divides the point cloud into multiple groups, each indicating a different object: points within 20 cm of each other are grouped by a recursive function, and the resulting point clouds are labeled either as 'Human' or as 'Weed'. To distinguish 'Human' from 'Weed', threshold features (e.g., width, depth, point density, and curvature) of each point cloud are calculated. To verify that the chosen features are effective, they were collected from 254 point cloud samples of 'Human' and 'Weed' to plot a '(width)/(depth)' versus '(point density) × (curvature)' graph. To classify the 'Human' and 'Weed' samples, the plotted samples were trained with an SVM (support vector machine). As a result, 179 out of 186 'Human' samples and 42 out of 68 'Weed' samples were correctly classified, giving a detection rate of 87.0%.
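
        The 20 cm grouping step can be pictured with the following Python sketch, which clusters LIDAR points whose mutual distance is below a radius. The paper describes a recursive function; this version uses an explicit stack for the same flood-fill idea, and the helper name is hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.20):
    """Hypothetical sketch of the 20 cm grouping step: points closer than `radius`
    to any member of a cluster end up in the same cluster.

    points : (N, 3) array of LIDAR points.
    Returns a list of index arrays, one per cluster (i.e., one per candidate object).
    """
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        stack, members = [seed], [seed]
        while stack:
            idx = stack.pop()
            # Grow the cluster with all neighbors within the radius.
            for nb in tree.query_ball_point(points[idx], r=radius):
                if nb in unvisited:
                    unvisited.remove(nb)
                    stack.append(nb)
                    members.append(nb)
        clusters.append(np.array(members))
    return clusters

# Per-cluster features such as width, depth, point density, and curvature would then
# be computed and fed to an SVM classifier (e.g. sklearn.svm.SVC) as in the paper.
```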

      • Cooperative Operation of Two Combine Robots for Rice Harvesting

        Michihisa Iida, Syota Harada, Ryosuke Sasaki, Masahiko Suguri, Ryohei Masuda. 한국농업기계학회 (Korean Society for Agricultural Machinery), 2018 학술발표논문집 (Conference Proceedings), Vol. 23, No. 1

        In order to address the labor shortage and aging of the workforce in Japanese agriculture, the automation of agricultural machinery is being promoted and is currently being merged with information and communication technology (ICT). Two robots, which are 4-row head-feeding combines, have therefore been developed at Kyoto University to automate rice harvesting. They are equipped with a multi-GNSS receiver, a GPS compass, and an IMU as navigation sensors. To increase the efficiency of the farming operation, a cooperative harvesting system combining the two robots was reported at ISMAB 2016; in that system, the two robots could harvest rice side by side along a target spiral path in the same field. As the next step, a block harvesting method using two robots is proposed in this study. After a human-driven combine harvests the rice crop to make turning space at the headland, the two robots independently harvest two blocks of rice divided by 'Nakawari', a harvesting pattern that splits the rice area in two. In this operation, the combine robot turns by a 180-deg turn (U-turn) at the headland, and a 180-deg-turn control for the combine robot was developed for this purpose. To evaluate the performance of the cooperative harvesting operation by the two combine robots, a harvesting test was conducted in a rice paddy field. As a result, the efficiency of the harvesting operation increased by up to 24% compared with the side-by-side harvesting method.

      • Laser-range-finder based Field Ridge Detecting for Transplanter

        Jiajun Zhu, Michihisa Iida, Hoang-son Le, Masahiko Suguri, Ryohei Masuda, Kouji Miyake, Satoru Konishi. 한국농업기계학회 (Korean Society for Agricultural Machinery), 2018 학술발표논문집 (Conference Proceedings), Vol. 23, No. 1

        When a rice transplanter runs automatically in a paddy field, field ridge detection is an important function because it provides the stop position for the transplanter. In this paper, a method for detecting the field ridge with a single laser range finder (LRF) mounted on the transplanter is presented. The base machine is an 8-row rice transplanter that can be operated at speeds of up to 1.85 m/s. The LRF determines the distance from the transplanter to the ridge in real time, and depending on this distance, the transplanter decreases its speed and stops automatically. A field test was conducted to evaluate the performance of the LRF-based field ridge detection. The results show that the detection accuracy is high enough for practical application.
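
        As an illustration of the distance-based slow-down and stop behavior described above, the following Python sketch maps the LRF range to a speed command. The slow-down and stop distances and the linear ramp are assumptions, not values from the paper.

```python
def speed_command(distance_to_ridge_m, cruise=1.85, slow_dist=5.0, stop_dist=1.0):
    """Hypothetical sketch of distance-based speed control for the transplanter.

    distance_to_ridge_m : range to the field ridge measured by the LRF [m].
    cruise              : normal working speed [m/s] (1.85 m/s is the machine's maximum).
    slow_dist/stop_dist : assumed slow-down and stop thresholds [m].
    Returns the commanded speed [m/s].
    """
    if distance_to_ridge_m <= stop_dist:
        return 0.0                                   # stop at the ridge
    if distance_to_ridge_m <= slow_dist:
        # Ramp the speed down linearly between the two thresholds.
        ratio = (distance_to_ridge_m - stop_dist) / (slow_dist - stop_dist)
        return cruise * ratio
    return cruise                                    # far from the ridge: full speed
```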

      • Branch Detection with Deep Learning -Developing Branch Angle Detector-

        Ryoma Otake, Ryohei Masuda, Michihisa Iida, Masahiko Suguri. 한국농업기계학회 (Korean Society for Agricultural Machinery), 2018 학술발표논문집 (Conference Proceedings), Vol. 23, No. 1

        In Japan, the decreasing number of farmers and their aging are serious problems, so the mechanization and automation of agriculture have been promoted. In fruit growing, tree height and stem diameter are measured in order to calculate annual growth or to decide the proper amounts of fertilizer and chemicals, but this consumes much time and labor because it is done manually. We therefore address the automation of tree mensuration using image processing. If it can be applied to tree mensuration easily, it will contribute to precision agriculture and improve the quality and quantity of agricultural products. Our proposed method for tree mensuration consists of three steps: first, detect the tree parts in pictures of an orchard tree; second, build a 3D model from the pictures of the tree parts; finally, calculate the tree volume from the 3D model. In this study, we focused on the first step, especially branch detection in pictures. However, it is difficult to detect branches directly from a picture because the branches must be distinguished from the background and from other trees. We therefore first developed a method for detecting branch angles with deep learning, in order to make subsequent branch detection easier. The aim of this study is branch angle detection and classifying branch pictures by their angles; once the pictures are classified by branch angle, it becomes easy to extract the branches from the orchard tree image. We adopted a convolutional neural network (CNN), which has achieved many remarkable results in image recognition, for the classification system. We developed an 8-layer CNN classifier for four classes (0°, 45°, 90°, 135°), which achieved a recall of 93.43%. The result shows that machine learning can be used for detecting branches. To find out which parts of the image contribute to the judgment, we applied Grad-CAM, a visualization tool for CNN systems. From the Grad-CAM results, we consider that the system did not pay attention to the branches but may instead have looked at the picture as a whole.
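
        For orientation, the following PyTorch sketch shows a small CNN classifier for the four branch-angle classes (0°, 45°, 90°, 135°). The input resolution and layer sizes are assumptions; the paper only states that an 8-layer CNN was used.

```python
import torch
import torch.nn as nn

class BranchAngleCNN(nn.Module):
    """Hypothetical sketch of a small CNN for 4-class branch-angle classification.

    The 3x128x128 input size and the channel widths are illustrative assumptions,
    not the architecture reported in the paper.
    """
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes),            # logits for the four angle classes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: a batch of two 128x128 RGB images yields a (2, 4) logit tensor.
logits = BranchAngleCNN()(torch.randn(2, 3, 128, 128))
```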
