RISS (Academic Research Information Service)

      • Development of a Robot Boat for Aquatic Weed Management in Shallow Ponds

( Takeshi Yusa ), ( Yutaka Kaizu ), ( Kenji Imou ) Korean Society for Agricultural Machinery, 2018, Proceedings of the Korean Society for Agricultural Machinery Conference, Vol.23 No.1

There are 50 Ramsar sites in Japan; although efforts to conserve the wetlands and lakes are underway, deterioration of the ecosystem is a problem. Specifically, growth of the lotus plant tends to interfere with river improvement works. Lake Izunuma-Uchinuma, Miyagi prefecture, Japan, is a shallow lake with considerable levels of eutrophication, wherein lotus plants grow to cover 85% of the lake’s surface annually. These lotus plants are cut as they adversely affect the surrounding ecosystems and landscapes. However, the existing cutting methods require manual labor. Therefore, to decrease the cost of vegetation management work, we developed a robotic boat to cut the lotus plants. We used an open-source system to reduce the cost of the robot system. The robot boat was developed by modifying a 2.4-m-long and 1.2-m-wide plastic boat. The boat was equipped with an electric clipper and an electric paddle propulsion system that can navigate on the surface of vegetated water. We used Pixhawk/ArduPilot, an open-source flight controller used in drones, as a navigation controller based on GNSS and IMU. We conducted lotus-cutting experiments in June and August by autonomous navigation in Lake Izunuma-Uchinuma. The experimental areas were 30 × 100 m with 25 target paths at 1.2 m intervals. The experiment in June was completed within 70 min for the entire area, and the experiment in August was completed within 69 min for one-third of the experimental area. Although the navigation accuracy was not very high, a safe and labor-saving vegetation management method using a robot boat was achieved.

Semantic Segmentation with RGBD Camera and Real-time 2D Mapping in Fields for Robot Mower

( Masahiro Moriya ), ( Yutaka Kaizu ), ( Kenji Imou ) Korean Society for Agricultural Machinery, 2018, Proceedings of the Korean Society for Agricultural Machinery Conference, Vol.23 No.1

Mowing is burdensome, and automation using robot mowers is anticipated. For autonomous operation of a robot mower, accurate information about fields containing various types of objects is necessary. In this research, we developed a real-time outdoor 2D mapping method based on semantic segmentation using an RGBD camera for autonomous operation of a robot mower. In our method, a pixel-wise classification image is first obtained from the RGB-D image using semantic segmentation, a technique of image processing by deep convolutional neural networks (DCNN); using the classification image and the depth image, environmental information about the location and type of surrounding objects is obtained. At the same time, the vehicle state (position, attitude, etc.) is estimated from the output of a GNSS receiver and an Inertial Measurement Unit (IMU) using an Extended Kalman Filter (EKF). By combining the environmental information and the state, a 2D map is created in real time. To verify our method, we used a differentially driven four-wheel vehicle (two driving wheels, two driven wheels) imitating an actual mower (four driving wheels). The vehicle carried an RGBD camera, a GNSS receiver, an IMU, and a control computer. As a result, the vehicle properly classified the surrounding objects, and a 2D map was created in real time. Using the map obtained by our method, autonomous operation of the robot mower can be performed even in fields containing various types of objects.
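The core fusion step of the method above — combining a per-pixel class label with depth and the estimated vehicle pose to place object classes into a 2D grid map — can be sketched minimally. All parameter names, the pinhole-camera simplification, and the single-scanline input are assumptions for illustration; the authors' actual pipeline is not published here.

```python
import numpy as np

# Minimal sketch (assumed interfaces): project classified depth pixels into
# a 2D grid map using the camera pose estimated by the EKF.
def update_map(grid, labels, depth, pose, fx, cx, cell_m):
    """grid: HxW int map of class ids; labels/depth: per-pixel class/depth
    images; pose: (x, y, yaw) of the camera in the map frame."""
    px, py, yaw = pose
    h, w = depth.shape
    for v in range(h):
        for u in range(w):
            d = depth[v, u]
            if d <= 0:                         # skip invalid depth pixels
                continue
            x_cam = (u - cx) * d / fx          # lateral offset, pinhole model
            # Rotate into the map frame and translate by the vehicle pose
            mx = px + d * np.cos(yaw) - x_cam * np.sin(yaw)
            my = py + d * np.sin(yaw) + x_cam * np.cos(yaw)
            i = int(np.floor(my / cell_m))
            j = int(np.floor(mx / cell_m))
            if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                grid[i, j] = labels[v, u]
    return grid

grid = np.zeros((10, 10), dtype=int)
labels = np.full((1, 5), 3)        # one scanline, every pixel in class "3"
depth = np.full((1, 5), 5.0)       # everything 5 m straight ahead
update_map(grid, labels, depth, (0.0, 0.0, 0.0), fx=100.0, cx=2.0, cell_m=1.0)
print(int(grid[0, 5]))             # the cell 5 m ahead now holds class 3
```

A real system would accumulate evidence per cell (e.g. majority vote) rather than overwrite labels, but the projection geometry is the same.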

      • Development of a Sensor System for Agricultural Machines using Stereo Vision and Deep Learning

( Kosuke Inoue ), ( Yutaka Kaizu ), ( Kenji Imou ) Korean Society for Agricultural Machinery, 2018, Proceedings of the Korean Society for Agricultural Machinery Conference, Vol.23 No.1

Autonomous navigation of agricultural machines employing the global navigation satellite system (GNSS) has developed rapidly in recent times. However, a machine based only on GNSS cannot detect obstacles such as humans, increasing the risk of collision. Further, conventional distance sensors cannot accurately determine the distances to obstacles when grasses or crops lie between the sensor and the obstacle, blocking the sensor's line of sight. To overcome this problem, we developed a sensor system that can precisely determine the distance from the sensor to a human even in such circumstances. We combine human detection based on deep learning with distance detection by means of a stereo camera. Human detection with deep learning uses an RGB image from the stereo camera to classify obstacles and detect their locations in the image. When a human is detected, the detection image is compared with a depth image, and the corresponding location in the distance image is determined. The median of the distance values at the pixels of the detected location is then calculated. Using this sensor system, we measured the distances from the sensor to a human standing in a vegetated region. The errors were 2.2, 4.9, and 14.5 cm for distances of 2, 3, and 4 m from the camera, respectively. The results indicate that this sensor system exhibits sufficient accuracy for agricultural machines.
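The median-of-depth step described above can be sketched in a few lines; taking the median over the detection box is what makes the distance robust to grass pixels partially covering the person. The function name, bounding-box convention, and the use of zero for invalid stereo pixels are assumptions for this sketch, not the authors' code.

```python
import numpy as np

# Sketch (assumed interface): robust distance to a detected human, taken as
# the median of valid depth values inside the detection bounding box.
def distance_to_detection(depth_m, bbox):
    """depth_m: HxW depth image in metres; bbox: (x0, y0, x1, y1) pixel box.
    Returns the median depth in metres, or None if no valid pixels."""
    x0, y0, x1, y1 = bbox
    patch = depth_m[y0:y1, x0:x1]
    valid = patch[patch > 0]          # drop pixels with no stereo match
    return float(np.median(valid)) if valid.size else None

depth = np.full((100, 100), 3.0)      # person about 3 m away
depth[40:60, 40:45] = 0.8             # grass blades close to the camera
depth[50, 50] = 0.0                   # one invalid (unmatched) pixel
print(distance_to_detection(depth, (30, 30, 70, 70)))  # 3.0
```

A mean over the same box would be dragged toward 0.8 m by the grass pixels; the median ignores them as long as they are a minority of the box.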

      • Image Recognition of Position and Orientation of Persimmons for Automatic Peeling Machine

( Xuefeng Wang ), ( Yutaka Kaizu ), ( Kenji Imou ) Korean Society for Agricultural Machinery, 2018, Proceedings of the Korean Society for Agricultural Machinery Conference, Vol.23 No.1

Peeling in fruit processing has been a painstaking and time-consuming manual task; automating it could effectively improve productivity, especially for persimmons, which have a limited storage life. Feeding the peeling machine with persimmons via a robot arm made by corporate partners requires data on the position and orientation of each persimmon with a certain accuracy. To meet this demand, we designed two systems: one using OpenCV in Visual C++ to measure the current persimmon orientation by detecting its symmetry axis, and one using deep learning with the Keras library in Python to recognize the center point of the persimmon's pedicel from above. So far, we have achieved 92% accuracy in detecting the symmetry axis with a processing time of 1 second, and 85% accuracy in recognizing pedicel center points. The former is satisfactory, and we are still working on improving the accuracy of the deep learning performance.
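One standard way to estimate a fruit's symmetry axis from a segmented image, as in the first system above, is principal component analysis of the mask pixels: the axis of largest variance approximates the axis of symmetry for an elongated, roughly symmetric shape. This sketch uses NumPy rather than the authors' OpenCV/Visual C++ pipeline, and the representation (a binary mask) is an assumption.

```python
import numpy as np

# Sketch (assumed input): estimate a symmetry axis from a binary fruit mask
# via PCA on the pixel coordinates.
def symmetry_axis(mask):
    """mask: HxW binary array. Returns (centroid, unit axis direction),
    both as (x, y) pairs in pixel coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)           # 2x2 covariance of (x, y)
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    axis = eigvecs[:, np.argmax(eigvals)]      # direction of largest variance
    return centroid, axis

mask = np.zeros((50, 50), dtype=np.uint8)
mask[10:40, 22:28] = 1                 # tall, narrow blob: axis ~ vertical
c, a = symmetry_axis(mask)
print(abs(a[1]) > abs(a[0]))           # True: dominant direction is vertical
```

In OpenCV the equivalent would typically go through image moments or `cv2.PCACompute` on the contour points; the geometry is identical.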

      • Three-Dimensional Mapping of Agricultural Fields Using 3D LiDAR and Simultaneous Localization and Mapping Algorithm

( Sho Igarashi ), ( Yutaka Kaizu ), ( Kenji Imou ) Korean Society for Agricultural Machinery, 2018, Proceedings of the Korean Society for Agricultural Machinery Conference, Vol.23 No.1

In recent years, robotic technology has been adopted in agriculture to improve productivity and competitiveness. Precise and robust environmental perception is an important requirement for resolving issues such as safe interaction with obstacles and localization of autonomous robot vehicles. Accordingly, we propose the use of a Simultaneous Localization and Mapping (SLAM) algorithm to generate local maps of agricultural fields. In our study, the 3D data required for the mapping process were collected using a 3D LiDAR (Velodyne VLP-16). The resulting 3D map is formed from 3D point cloud data, and its distance accuracy was evaluated by comparison with absolute-coordinate positioning data measured by a dual-frequency Global Navigation Satellite System (GNSS) receiver. The results show that the mean deviation error and RMSE are approximately 0.03 m and 0.13 m, respectively. We conclude that our 3D point cloud maps have acceptable quality and can support automation of field processes by autonomous robot vehicles.
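The accuracy evaluation reported above (mean deviation and RMSE of map positions against GNSS reference coordinates) amounts to simple statistics over per-point deviations. The data and function name below are illustrative assumptions, not the study's measurements.

```python
import numpy as np

# Sketch (assumed data): mean deviation and RMSE between map-derived
# positions and GNSS reference coordinates.
def deviation_stats(estimated, reference):
    """Both arguments: Nx2 arrays of planar coordinates in metres."""
    d = np.linalg.norm(estimated - reference, axis=1)  # per-point deviation
    return d.mean(), np.sqrt((d ** 2).mean())          # mean dev, RMSE

est = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.1]])
ref = np.array([[0.0, 0.1], [1.0, 1.0], [2.0, 2.0]])
mean_dev, rmse = deviation_stats(est, ref)
print(round(mean_dev, 3), round(rmse, 3))
```

RMSE weights large deviations more heavily than the mean deviation does, which is why the study can report 0.03 m mean deviation alongside a 0.13 m RMSE.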

Image Recognition of Natural Scenes Using Deep Learning for an Autonomous Vegetation Management Robot Boat in a Lake

( Keishiro Kuma ), ( Takeshi Yusa ), ( Yutaka Kaizu ), ( Kenji Imou ) Korean Society for Agricultural Machinery, 2018, Proceedings of the Korean Society for Agricultural Machinery Conference, Vol.23 No.1

Degradation of wetlands due to the overgrowth of aquatic plants is a problem in various areas; hence, vegetation management using robot boats is under development. Herein, we propose a method to recognize aquatic plants via real-time image processing to enable the automation of a robot boat. We adopted a segmentation method using deep learning for image processing and performed training and testing on our own dataset. An NVIDIA Jetson TX2 embedded AI computing device achieved a processing rate of 1.31 fps (image size: 576 × 324 px). The traveling speed of the robot boat is quite slow, at 0.3 m/s; hence, the boat can be operated as a real-time system even at a processing rate of 1.31 fps.
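The real-time claim above can be checked with one line of arithmetic: at the stated boat speed and segmentation rate, the boat advances well under a metre between processed frames.

```python
# Quick check of the real-time claim: distance travelled per processed frame
# at the abstract's stated boat speed and segmentation rate.
speed_m_s = 0.3   # boat speed
fps = 1.31        # segmentation throughput on the Jetson TX2
advance_per_frame = speed_m_s / fps
print(round(advance_per_frame, 2))  # ~0.23 m per frame
```

At roughly 0.23 m of travel per segmentation result, the perception loop comfortably keeps pace with the boat, which is the basis of the abstract's conclusion.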
