RISS Academic Research Information Service

      • KCI-indexed

        Development of an EfficientNet-Based Model for Diagnosing and Visualizing Intracranial Hemorrhage in Computed Tomography (CT) Images

        Youn, Yebin; Kim, Mingeon; Kim, Jiho; Kang, Bongkeun; Kim, Ghootae. 대한의용생체공학회 2021 의공학회지 Vol.42 No.4

        Intracranial hemorrhage (ICH) refers to acute bleeding inside the intracranial vault. Not only does this devastating disease record a very high mortality rate, but it can also cause serious chronic impairment of sensory, motor, and cognitive functions. Therefore, a prompt and professional diagnosis of the disease is highly critical. Noninvasive brain imaging data are essential for clinicians to efficiently diagnose the locus of the brain lesion, the volume of bleeding, and subsequent cortical damage, and to take clinical interventions. In particular, computed tomography (CT) images are used most often for the diagnosis of ICH. Diagnosing ICH from CT images not only requires medical specialists with sufficient diagnostic experience; even when this condition is met, there are many cases where bleeding cannot be successfully detected due to factors such as the low signal ratio and artifacts of the image itself. In addition, discrepancies between interpretations or even misinterpretations may occur, causing critical clinical consequences. To resolve these clinical problems, we developed a diagnostic model that predicts intracranial bleeding and its subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and epidural) by applying deep learning algorithms to CT images. We also constructed a visualization tool highlighting the regions of a CT image that are important for predicting ICH. Specifically, 1) 27,758 CT brain images from RSNA were pre-processed to minimize the computational load. 2) Three different CNN-based models (ResNet, EfficientNet-B2, and EfficientNet-B7) were trained on a training image set. 3) The diagnostic performance of each of the three models was evaluated on an independent test image set: EfficientNet-B7's performance (classification accuracy = 91%) was far greater than that of the other models. 4) Finally, based on the EfficientNet-B7 results, we visualized the internal bleeding lesions using Grad-CAM. Our research suggests that artificial intelligence-based diagnostic systems can help diagnose and treat brain diseases, resolving various problems in clinical situations.
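
The Grad-CAM visualization step described in this abstract can be illustrated with a short PyTorch sketch. This is only a minimal outline under assumed settings: the torchvision EfficientNet-B7 below is untrained, the six-class head and the 600x600 input are placeholders, and the authors' actual preprocessing and weights are not reproduced.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Untrained stand-in for the paper's EfficientNet-B7; the six-class head
# (ICH present plus five subtypes) and input size are assumptions.
model = models.efficientnet_b7(weights=None)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 6)
model.eval()

store = {}

def capture(module, inputs, output):
    # Keep the last conv block's activations and hook their gradients.
    store["act"] = output
    output.register_hook(lambda grad: store.update(grad=grad))

model.features[-1].register_forward_hook(capture)

x = torch.randn(1, 3, 600, 600)            # dummy CT slice
score = model(x)[0].max()                   # score of the top predicted class
score.backward()

# Grad-CAM: gradient-averaged channel weights, weighted sum, ReLU, upsample.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heatmap in [0, 1]
```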

      • KCI-indexing candidate

        Automatic Ripeness Classification of 'Mihwang' Peaches Using Deep Learning

        Lee Sang Jun,신미희,Jayasooriya L. Sugandhi Hirushika,Wijethunga W.M. Upeksha Darshani,Lee Seul Ki,Cho Jung Gun,Jang Si Hyeong,Cho Byoung-Kwan,김진국 한국원예학회 2024 원예과학기술지 Vol.42 No.1

        Peaches must be delivered to market at their proper ripeness, as fruit quality declines quickly after harvest; it is therefore necessary to consider suitable ripeness for consumption and distribution. However, research on ripeness judgment for peaches in the orchard is scarce. This study used deep learning technology to develop a ripeness classification model for 'Mihwang' peaches. A dataset was prepared using 2,800 images each taken in a peach orchard (outside dataset) and in a laboratory (inside dataset) of the same fruit. The dataset was constructed based on the harvest date of the peaches and the skin color (a* value) at the peach apex, and each dataset uses three classes (immature, ripe, and overripe) according to its classification criteria. The model was trained with a 7:2:1 ratio of training, validation, and test data, and image data augmentation was carried out to improve the diversity of the data and to address class imbalance. Among EfficientNet, YOLOv5, and Vision Transformer, EfficientNet recorded the best classification performance. The harvest-date-based classification model achieved 100% accuracy on the classification performance metrics, and the classification model based on the apex color a* value showed high accuracy, with a minimum of 94.7% and a maximum of 98.2%. The peach ripeness classification model developed in this study can be used to determine the proper time for the mechanical harvesting of fruit from an orchard.
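
A minimal sketch of the 7:2:1 split and augmentation step mentioned above, assuming an ImageFolder-style directory of peach images with one subfolder per ripeness class; the transform choices are illustrative, not the authors' exact pipeline.

```python
import torch
from torchvision import datasets, transforms

# Illustrative augmentations to increase diversity; the paper's exact
# augmentation set is not specified here.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Assumed directory: peach_images/{immature,ripe,overripe}/*.jpg
full = datasets.ImageFolder("peach_images", transform=train_tf)
n = len(full)
n_train, n_val = int(0.7 * n), int(0.2 * n)
train_set, val_set, test_set = torch.utils.data.random_split(
    full, [n_train, n_val, n - n_train - n_val],
    generator=torch.Generator().manual_seed(42))
# In practice the validation and test subsets would use a plain
# resize-and-tensor transform rather than the augmenting one above.
```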

      • KCI-indexed

        Development of an Online Specimen Classification Control System Using EfficientNet and ONNX

        김성웅,황영배 한국차세대컴퓨팅학회 2022 한국차세대컴퓨팅학회 논문지 Vol.18 No.6

        In this study, to introduce artificial intelligence technology into the field of diagnostic testing, online image-based classification is applied to specimen classification equipment, a pre-processing tool for examination. For this purpose, EfficientNet implemented in Python was applied to the C# program that serves as the actual control software. The image classification algorithm is implemented in the C#-based control program by converting the PyTorch model to an ONNX model and calling the deep learning model through a pipeline. After conversion, classification was verified in the real environment to confirm that the algorithm performs properly. The existing PyTorch model showed a high accuracy of 99% in the offline test, but during online verification, 9 errors occurred due to trigger-timing error among a total of 500 verification samples, and there were 16 actual classification errors, giving a classification accuracy of 95.91%. To achieve product-level classification accuracy in various medical environments, it will be necessary to collect and train on more image data under varying lighting conditions.
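
The PyTorch-to-ONNX hand-off described above can be sketched as follows; an untrained EfficientNet-B0 stands in for the trained classifier, the input size and file name are assumptions, and onnxruntime plays the role of the C# side that actually consumes the model in the paper.

```python
import torch
import onnxruntime as ort
from torchvision import models

# Stand-in for the trained specimen classifier.
model = models.efficientnet_b0(weights=None)
model.eval()

# Export the PyTorch model to ONNX so a non-Python runtime can call it.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "specimen_classifier.onnx",
                  input_names=["input"], output_names=["logits"],
                  opset_version=13)

# Load and run the exported model (the paper does this from C#).
session = ort.InferenceSession("specimen_classifier.onnx")
logits = session.run(["logits"], {"input": dummy.numpy()})[0]
predicted_class = logits.argmax(axis=1)
```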

      • Comparison of Video Classification Performance across EfficientNet Model Variants

        김찬민,박운상 한국차세대컴퓨팅학회 2023 한국차세대컴퓨팅학회 학술대회 Vol.2023 No.06

        Unlike earlier approaches that improved model accuracy by manually adjusting a model's depth, width, and input image size, EfficientNet, which is widely used for image classification and object detection, identified the correlation among these three factors and expressed it as an equation. It is important to find an appropriate approach by applying this to real-world data, the UCF-Crime dataset, and searching for the model configuration that yields the best accuracy.
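
The depth/width/resolution relationship this abstract refers to is EfficientNet's compound scaling rule. The sketch below uses the base coefficients reported in the original EfficientNet paper (Tan and Le, 2019); they are quoted from that paper, not from this abstract.

```python
# Compound scaling: one coefficient phi scales depth, width, and input
# resolution together, with alpha * beta^2 * gamma^2 kept close to 2 so
# that FLOPs roughly double with each increment of phi.
alpha, beta, gamma = 1.2, 1.1, 1.15     # base coefficients from Tan & Le (2019)
assert abs(alpha * beta ** 2 * gamma ** 2 - 2.0) < 0.1

def compound_scale(phi: int) -> tuple[float, float, float]:
    """Return (depth, width, resolution) multipliers for scaling step phi."""
    return alpha ** phi, beta ** phi, gamma ** phi

for phi in range(4):                     # phi = 0 corresponds to the baseline
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```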

      • Hyperspectral Image Classification using EfficientNet-B4 with Search and Rescue Operation Algorithm

        S.Srinivasan,K.Rajakumar International Journal of Computer ScienceNetwork S 2023 International journal of computer science and netw Vol.23 No.12

        In recent years, the popularity of deep learning (DL) has increased due to its ability to extract features from hyperspectral images. A lack of discriminative power in the features produced by traditional machine learning algorithms has resulted in poor classification results. How to obtain excellent classification results from limited samples without overfitting is also an open research question for hyperspectral images (HSIs). These issues can be addressed by the new learning network structure developed in this study, an EfficientNet-B4-based convolutional network (EN-B4), in which it is critical to maintain a constant ratio between network resolution, width, and depth in order to achieve a balance. The weights of the proposed model are optimized by Search and Rescue Operations (SRO), which is inspired by the explorations carried out by humans during search-and-rescue processes. Tests were conducted on two datasets, Indian Pines (IP) and the University of Pavia (UP), to verify the efficacy of EN-B4. Experiments show that EN-B4 outperforms other state-of-the-art approaches in terms of classification accuracy.
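
The abstract above couples EfficientNet-B4 with a metaheuristic (SRO) for weight optimization. SRO itself is not reproduced here; the sketch below uses a plain random-perturbation hill climb on a tiny linear classifier simply to show the general shape of such gradient-free weight optimization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 100 samples, 20-d features, 3 classes (placeholders, not HSI data).
X = rng.normal(size=(100, 20))
y = rng.integers(0, 3, size=100)

def accuracy(W):
    return float((np.argmax(X @ W, axis=1) == y).mean())

# Gradient-free weight search: keep a perturbation only if accuracy improves.
W = rng.normal(scale=0.1, size=(20, 3))
best = accuracy(W)
for step in range(500):
    candidate = W + rng.normal(scale=0.05, size=W.shape)
    score = accuracy(candidate)
    if score > best:
        W, best = candidate, score

print(f"best training accuracy found: {best:.2f}")
```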

      • KCI-indexed

        Transfer-learning-based classification of pathological brain magnetic resonance images

        Serkan Savas,Cagri Damar 한국전자통신연구원 2024 ETRI Journal Vol.46 No.2

        Different diseases occur in the brain. For instance, hereditary and progressive diseases affect and degenerate the white matter. Although addressing, diagnosing, and treating complex abnormalities in the brain is challenging, different strategies have been presented alongside significant advances in medical research. With state-of-the-art developments in artificial intelligence, new techniques are being applied to brain magnetic resonance images. Deep learning has recently been used for the segmentation and classification of brain images. In this study, we classified normal and pathological brain images using pretrained deep models through transfer learning. The EfficientNet-B5 model reached the highest accuracy of 98.39% on real data, 91.96% on augmented data, and 100% on pathological data. To verify the reliability of the model, fivefold cross-validation and a two-tier cross-test were applied. The results suggest that the proposed method performs reasonably well on the classification of brain magnetic resonance images.
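
A minimal sketch of the fivefold cross-validation protocol mentioned above, using scikit-learn; the random feature matrix and a logistic-regression classifier stand in for the paper's MRI data and transfer-learned models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

# Placeholder data: 200 samples of 512-d features, binary labels
# (0 = normal, 1 = pathological).
rng = np.random.default_rng(0)
X = rng.random((200, 512))
y = rng.integers(0, 2, size=200)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_scores = []
for train_idx, test_idx in skf.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    fold_scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean accuracy over 5 folds: {np.mean(fold_scores):.3f}")
```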

      • KCI-indexed

        Design of a Convolutional Neural Network (CNN)-Based Medicine Recognizer for Nursing Robots

        Kim, Hyun-Don; Kim, Dong Hyeon; Seo, Pil Won; Bae, Jongseok. 대한임베디드공학회 2021 대한임베디드공학회논문지 Vol.16 No.5

        Our final goal is to implement nursing robots that can recognize patients' faces and their prescribed medicines. Such robots can help patients take their medicine on time and prevent misuse, helping them recover their health sooner. As a first step, we propose a medicine classifier with a low-computation network that can run on embedded PCs without a GPU, so that it can be applied to general-purpose nursing robots. We confirm that our proposed model, called MedicineNet, achieves 99.99% accuracy in classifying 15 kinds of medicines and background images. Moreover, the inference time of our MedicineNet is about 8 times faster than that of EfficientNet-B0, which is well known for combining high ImageNet classification performance with excellent computational efficiency.
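
The inference-time comparison reported above can be illustrated roughly as below; since MedicineNet is not publicly specified here, a small hypothetical CNN stands in for it, and absolute timings depend entirely on the hardware.

```python
import time
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical lightweight classifier standing in for MedicineNet
# (15 medicine classes + background = 16 outputs, an assumption).
small_cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 16),
)
effnet_b0 = models.efficientnet_b0(weights=None)

x = torch.randn(1, 3, 224, 224)
for name, net in [("small_cnn", small_cnn), ("efficientnet_b0", effnet_b0)]:
    net.eval()
    with torch.no_grad():
        net(x)                                   # warm-up pass
        start = time.perf_counter()
        for _ in range(20):
            net(x)
    elapsed_ms = (time.perf_counter() - start) / 20 * 1000
    print(f"{name}: {elapsed_ms:.1f} ms per CPU inference")
```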

      • A Remote Sensing Scene Classification Model Based on EfficientNetV2L Deep Neural Networks

        Aljabri, Atif A.,Alshanqiti, Abdullah,Alkhodre, Ahmad B.,Alzahem, Ayyub,Hagag, Ahmed International Journal of Computer ScienceNetwork S 2022 International journal of computer science and netw Vol.22 No.10

        Scene classification of very high-resolution (VHR) imagery can attribute semantics to land cover in a variety of domains. Real-world application requirements have not been addressed by conventional techniques for remote sensing image classification. Recent research has demonstrated that deep convolutional neural networks (CNNs) are effective at extracting features due to their strong feature-extraction capabilities. In order to improve classification performance, these approaches rely primarily on semantic information. Since abstract, global semantic information makes it difficult for a network to correctly classify scene images with similar structures and high interclass similarity, it achieves low classification accuracy. We propose a VHR remote sensing image classification model that extracts global features from the original VHR image using an EfficientNetV2-L CNN pre-trained to detect similar classes. The image is then classified using a multilayer perceptron (MLP). This method was evaluated using two benchmark remote sensing datasets: the 21-class UC Merced and the 38-class PatternNet. Compared to other state-of-the-art models, the proposed model significantly improves performance.
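
A minimal PyTorch sketch of the architecture described above: a pretrained EfficientNetV2-L backbone used as a frozen global feature extractor followed by an MLP classifier. The hidden-layer size, dropout, and input resolution are assumptions; only the 38-class PatternNet output follows the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

# Backbone: EfficientNetV2-L with its classification head removed.
# (Load ImageNet weights here in practice; omitted to keep the sketch offline.)
backbone = models.efficientnet_v2_l(weights=None)
backbone.classifier = nn.Identity()          # expose the 1280-d global feature
for p in backbone.parameters():
    p.requires_grad = False                  # use the backbone as a fixed extractor

# MLP head for the 38 PatternNet scene classes (layer sizes are assumptions).
mlp_head = nn.Sequential(
    nn.Linear(1280, 512), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(512, 38),
)

x = torch.randn(4, 3, 480, 480)              # dummy batch of VHR scene images
with torch.no_grad():
    features = backbone(x)                   # shape (4, 1280)
logits = mlp_head(features)
```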

      • KCI-indexed

        Comparison of Breast Ultrasound Image Classification Performance across Deep Convolutional Neural Network Models

        박주영(Juyoung Park),김이삭(Yisak Kim),유창완(Chang-Wan Ryu),김형석(Hyungsuk Kim) 대한전기학회 2021 전기학회논문지 Vol.70 No.1

        Breast ultrasound has been widely utilized for classifying tumors as benign or malignant. The limitations of traditional breast ultrasound are the handcrafted features obtained by well-trained sonographers and subjective decisions that vary with individual experience. Recently, CNN-based deep learning techniques have exhibited better performance on medical images. However, most deep learning research in medical ultrasound adopts CNN models developed for natural images due to the lack of a common standard and dataset. In this paper, we compare six DCNN models that exhibit good performance on natural images, including VGGNet, ResNet, InceptionNet, DenseNet, and EfficientNet. Our classification results demonstrate that CNN models with relatively lower performance on natural images show better performance on gray-scale ultrasound images, and that further study of CNN models focusing on the features of medical images is needed.
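
A minimal sketch of the kind of model comparison described above: several torchvision CNNs with their final layers replaced for benign/malignant classification. Grayscale ultrasound frames are simply repeated to three channels here, which is an assumption rather than the authors' preprocessing.

```python
import torch
import torch.nn as nn
from torchvision import models

def binary_head(model):
    """Replace the final classifier layer with a 2-class output."""
    if isinstance(model, models.ResNet):
        model.fc = nn.Linear(model.fc.in_features, 2)
    elif isinstance(model, models.DenseNet):
        model.classifier = nn.Linear(model.classifier.in_features, 2)
    else:  # VGG / EfficientNet style: classifier ends in a Linear layer
        model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 2)
    return model

candidates = {
    "vgg16": binary_head(models.vgg16(weights=None)),
    "resnet50": binary_head(models.resnet50(weights=None)),
    "densenet121": binary_head(models.densenet121(weights=None)),
    "efficientnet_b0": binary_head(models.efficientnet_b0(weights=None)),
}

gray = torch.randn(2, 1, 224, 224)           # dummy grayscale ultrasound batch
x = gray.repeat(1, 3, 1, 1)                   # replicate to 3 channels
for name, net in candidates.items():
    net.eval()
    with torch.no_grad():
        print(name, net(x).shape)             # each yields (2, 2) logits
```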

      • KCI-indexed

        A Strawberry Disease Diagnosis Service Using EfficientNet

        이창준,심춘보,김진성,김준영,박준,박성욱,정세훈 (사)한국스마트미디어학회 2022 스마트미디어저널 Vol.11 No.5

        In this paper, we propose a service that automatically acquires images to control early-stage diseases of strawberries, a greenhouse-cultivated crop, performs disease analysis using an EfficientNet model to inform farmers of the disease status, and provides disease diagnosis by experts. Images of the strawberry growth stages are acquired and analyzed with the trained EfficientNet model, and after the diagnosis results are transmitted to the farmer's application, expert feedback can be received quickly. For the dataset, farmers actually operating greenhouse cultivation were recruited and images were acquired using the system, and the shortage of data was addressed by also using preliminary images taken with a mobile phone. Experimental results show that the accuracy of EfficientNet B0 through B7 is similar, so we adopt B0, which has the fastest inference speed. For performance improvement, fine-tuning was performed using a model pre-trained on ImageNet, and a rapid performance improvement was observed from epoch 100. The proposed service is expected to increase production by detecting early-stage diseases quickly.
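
The fine-tuning step described above can be sketched as follows: an ImageNet-pretrained EfficientNet-B0 whose classification head is replaced for the strawberry disease classes. The class count, learning rate, and dummy batch are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights and swap the head for the disease classes.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
num_classes = 4                              # assumed number of disease/healthy classes
model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # small LR for fine-tuning
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
model.train()
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```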
