Performance Comparison of Gas Leak Region Segmentation Based on Transfer Learning
Marshall, Jang-Sik Park, Seong-Mi Park, The Korean Society of Industry Convergence, 2020, Journal of the Korean Society of Industry Convergence (한국산업융합학회 논문집) Vol.23 No.3
Safety and security during the handling of hazardous materials is a great concern for anyone in the field. One driving point in the security field is the ability to detect the source of a danger and act against it as quickly as possible. A fully convolutional network (FCN) can produce a label map of an input image, indicating which object occupies each region of the image. Instead of the original FCN, this research employs U-Net, an architecture originally developed for biomedical image segmentation (cell segmentation). One challenge this research faces is the limited availability of precisely labeled ground truth for the dataset. Testing the trained network produced some images in which the network renders even finer detail than the expected label map. Whether a more detailed label map would enable the network to produce better segmentation remains to be studied in further research.
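The abstract contrasts U-Net with the original FCN; U-Net's distinguishing feature is the skip connection that concatenates encoder features with upsampled decoder features, which is what lets fine spatial detail survive into the output label map. The sketch below is a minimal, weight-free illustration of that mechanism only; the pooling factor, shapes, and function names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def max_pool2x(x):
    """2x2 max pooling over an (H, W, C) feature map."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_skip_forward(x):
    """One encoder/decoder level with a U-Net-style skip connection.

    No learned convolutions are included -- this only demonstrates how
    encoder features are carried across to the decoder.
    """
    enc = x                        # encoder features, kept for the skip path
    bottleneck = max_pool2x(enc)   # spatial resolution halves
    dec = upsample2x(bottleneck)   # back to the input resolution
    # Skip connection: channel-wise concatenation, as in U-Net.
    return np.concatenate([enc, dec], axis=-1)

x = np.random.rand(8, 8, 3)
out = unet_skip_forward(x)
print(out.shape)  # (8, 8, 6): original channels + upsampled channels
```

In a real U-Net each level would also apply convolutions before and after the skip concatenation, and the concatenated features are what allow the decoder to reproduce sharp region boundaries in the segmentation map.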