RISS Academic Research Information Service


      • KCI-indexed

        Automatic Gas Meter Reading System Using Character-String-of-Interest Recognition Technology

        Kyohyuk Lee, Taeyeon Kim, Wooju Kim  Korea Intelligent Information Systems Society 2020 Journal of Intelligence and Information Systems Vol.26 No.2

        In this paper, we propose an application system architecture that provides accurate, fast, and efficient automatic gas meter reading. The system captures a gas meter image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount using selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them; some applications, however, need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gas meter reading system only needs to extract the device ID and gas usage amount from gas meter images in order to bill users. Character strings that are not of interest, such as the device type, manufacturer, manufacturing date, and specification, carry no value for the application. The application therefore has to analyze only the regions of interest and the specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the regions of interest.

        We built three neural networks for the application system. The first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bidirectional long short-term memory network that converts this sequential information into character strings via time-series mapping from feature vectors to characters. In this research, the character strings of interest are the device ID, consisting of 12 Arabic numerals, and the gas usage amount, consisting of 4-5 Arabic numerals. All system components are implemented on Amazon Web Services with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The architecture adopts a master-slave processing structure for efficient, fast parallel processing that copes with about 700,000 requests per day. A mobile device captures a gas meter image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request into a FIFO (First In, First Out) input queue. The slave process consists of the three deep neural networks that perform character recognition and runs on the NVIDIA GPU. The slave process continuously polls the input queue for recognition requests. When a request arrives, the slave process converts the image into the device ID string, the gas usage amount string, and the positions of those strings, returns this information to the output queue, and goes back to polling the input queue. The master process takes the final information from the output queue and delivers it to the mobile device.

        We used a total of 27,120 gas meter images for training, validation, and testing of the three deep neural networks. 22,985 images were used for training and validation, and 4,135 images for testing. For each training epoch, we randomly split the 22,985 images 8:2 into training and validation sets. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant). Normal data are clean images; noise means images with noise; reflex means images with light reflection in the gas meter region; scale means images with small objects due to long-distance capturing
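
The master-slave queue structure described above can be sketched with Python's standard `queue` and `threading` modules. This is a minimal illustration, not the authors' implementation: the `recognize` stub stands in for the three-network pipeline, and all names and returned values are made up.

```python
import queue
import threading

def recognize(image_bytes):
    """Stub for the slave's three-network pipeline (ROI detector, CNN
    feature extractor, bidirectional LSTM decoder). Returns fixed,
    made-up values purely for illustration."""
    return {"device_id": "123456789012", "usage": "04215"}

input_q = queue.Queue()   # FIFO: master pushes reading requests here
output_q = queue.Queue()  # slave returns recognition results here

def slave_worker():
    # Slave: poll the input queue, recognize, push results, repeat.
    while True:
        image = input_q.get()    # blocks until a request (or sentinel) arrives
        if image is None:        # sentinel tells the worker to stop
            break
        output_q.put(recognize(image))

slave = threading.Thread(target=slave_worker)
slave.start()

# Master: enqueue one captured gas meter image (stubbed as bytes),
# then collect the result for delivery back to the mobile device.
input_q.put(b"fake-image-bytes")
result = output_q.get()
input_q.put(None)   # shut the worker down
slave.join()
```

In a deployment, the two queues would live in a shared broker so that one CPU-bound master can feed several GPU-bound slaves in parallel.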

      • KCI-indexed

        Low-Quality Banknote Serial Number Recognition Based on Deep Neural Network

        장운수, Kun Ha Suh, 이의철  Korea Information Processing Society 2020 Journal of Information Processing Systems Vol.16 No.1

        Recognition of the banknote serial number is an important function of intelligent banknote counter implementation and can be used for various purposes. However, previous character recognition methods are limited in use due to the font type of the banknote serial number, variation caused by the soiled status, and recognition speed. In this paper, we propose an aspect-ratio-based character region segmentation and a convolutional neural network (CNN) based banknote serial number recognition method. To detect the character region, the character area is determined based on the aspect ratio of each character in the serial number candidate area after banknote area detection and de-skewing are performed. Then, we designed and compared four types of CNN models and determined the best model for serial number recognition. Experimental results showed that the recognition accuracy of each character was 99.85%. In addition, we confirmed that recognition performance improves as a result of data augmentation. The banknotes used in the experiment are Indian rupees, which are badly soiled and use an unusual character font, so the method can be regarded as performing well. The recognition speed was also sufficient to run in real time on a device that counts 800 banknotes per minute.
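
The aspect-ratio segmentation idea can be illustrated with a minimal sketch. The function name, parameters, and the equal-character-width assumption are ours; the paper's actual segmentation handles variable spacing within the candidate area.

```python
def segment_by_aspect_ratio(region_width, region_height, char_aspect, n_chars=None):
    """Split a de-skewed serial-number region into equal character boxes.
    char_aspect is the assumed character width/height ratio; if n_chars is
    not given, it is inferred from the region width. Returns (x0, x1)
    spans. Names and the equal-width assumption are illustrative only."""
    char_width = region_height * char_aspect
    if n_chars is None:
        n_chars = round(region_width / char_width)
    step = region_width / n_chars
    return [(round(i * step), round((i + 1) * step)) for i in range(n_chars)]
```

For a 100x20 px region with characters about half as wide as they are tall, this yields ten 10 px-wide boxes, each of which would then be passed to the CNN classifier.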

      • Handwritten Devanagari Characters and Numeral Recognition using Multi-Region Uniform Local Binary Pattern

        Prabhanjan S, R Dinesh  Security Engineering Research Support Center 2016 International Journal of Multimedia and Ubiquitous Engineering Vol.11 No.3

        Automated offline handwritten character recognition of the Devanagari script is a growing area of research in the field of pattern recognition. A new approach for Devanagari handwritten character/digit recognition is proposed in this paper. This approach employs the Uniform Local Binary Pattern (ULBP) operator as the feature extraction method. This operator performs very well in research areas such as texture classification and object recognition, but it had not been used for the Devanagari handwritten character/digit recognition problem. The proposed method extracts both local and global features in two steps. In the first step, the image is preprocessed to remove noise, converted to a binary image, and resized to a fixed size of 48x48. In the second step, the ULBP operator is applied to the whole image to extract global features; the input image is then divided into 9 blocks and the ULBP operator is applied to each block to extract local features. Finally, the global and local features are used to train a Support Vector Machine (SVM). The proposed method has been tested on a large set of handwritten character and numeral databases, and empirical results reveal that it yields very good accuracy (98.77%). To establish the superiority of the proposed method, it has also been compared with contemporary algorithms; the comparative analysis shows that it outperforms existing methods.
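
A uniform LBP histogram of the kind the abstract describes can be sketched in plain Python. This is a generic 8-neighbor ULBP descriptor (a code is "uniform" if its circular bit string has at most two 0/1 transitions), not the authors' exact implementation.

```python
def is_uniform(pattern, bits=8):
    # Uniform if the circular bit string has at most 2 transitions.
    transitions = 0
    for i in range(bits):
        b1 = (pattern >> i) & 1
        b2 = (pattern >> ((i + 1) % bits)) & 1
        transitions += b1 != b2
    return transitions <= 2

UNIFORM = [p for p in range(256) if is_uniform(p)]  # the 58 uniform codes
BIN = {p: i for i, p in enumerate(UNIFORM)}

def ulbp_histogram(img):
    """59-bin uniform LBP histogram of a 2-D grayscale image given as a
    list of rows: the 58 uniform codes get individual bins and all
    non-uniform codes share the last bin."""
    hist = [0] * (len(UNIFORM) + 1)
    rows, cols = len(img), len(img[0])
    # 8 neighbors, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            code = 0
            for k, (dr, dc) in enumerate(offsets):
                if img[r + dr][c + dc] >= img[r][c]:
                    code |= 1 << k
            hist[BIN.get(code, len(UNIFORM))] += 1
    return hist
```

Applying this once to the full 48x48 image and once to each of the 9 blocks, then concatenating the histograms, reproduces the global-plus-local feature vector fed to the SVM.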

      • KCI-indexed

        Improved Lexicon-driven based Chord Symbol Recognition in Musical Images

        Cong Minh Dinh, Luu Ngoc Do, Hyung-Jeong Yang, Soo-Hyung Kim, Guee-Sang Lee  The Korea Contents Association 2016 International Journal of Contents Vol.12 No.4

        Although extensively developed, optical music recognition systems have mostly focused on musical symbols (notes, rests, etc.) while disregarding chord symbols. The process becomes difficult when the images are distorted or slurred, although this can be addressed with optical character recognition. Moreover, the appearance of outliers (lyrics, dynamics, etc.) increases the complexity of chord recognition. We therefore propose a new approach addressing these issues. After binarization, distortion correction, and stave and lyric removal in a musical image, a rule-based method is applied to detect potential regions of chord symbols. Next, a lexicon-driven approach is used to optimally and simultaneously separate and recognize characters. The score returned from the recognition process is used to detect outliers. The effectiveness of our system is demonstrated by the high accuracy of experimental results on two datasets with a variety of resolutions.
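
The lexicon-driven step can be approximated by scoring a raw OCR string against a chord lexicon and keeping the closest entry; using edit distance as the score, and the toy lexicon below, are our simplifications of the paper's method.

```python
CHORD_LEXICON = ["C", "Cm", "C7", "Cmaj7", "G", "G7", "Am", "F#dim"]  # toy lexicon

def edit_distance(a, b):
    # Levenshtein distance, single-row dynamic programming.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def recognize_chord(raw):
    """Pick the lexicon entry closest to the raw OCR string. The distance
    itself can serve as an outlier score: a large distance suggests the
    region is a lyric or dynamic marking rather than a chord symbol."""
    best = min(CHORD_LEXICON, key=lambda w: edit_distance(raw, w))
    return best, edit_distance(raw, best)
```

A misread such as "Cmai7" is thus corrected to "Cmaj7" with score 1, while a lyric fragment would score far from every lexicon entry and be rejected as an outlier.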

      • KCI-indexed

        Vehicle License Plate Character Extraction Algorithm Using Object Detection and Handwritten Hangul Recognition

        Min Won Na, Ha Na Choi, Yun Young Park  Korea Society of IT Services 2021 Journal of the Korea Society of IT Services Vol.20 No.6

        Recently, with the development of IT technology, unmanned systems are being introduced in many industrial fields, and one of the most important components for introducing unmanned systems in the automotive field is vehicle license plate recognition (VLPR). Existing VLPR algorithms use image processing tailored to a specific type of license plate to segment the individual character areas within the plate and recognize each character. However, Korean vehicle license plates have grown in variety: the law is amended over time, old-style and new-style plates coexist, and different plate types are used for each type of vehicle. The VLPR system therefore has to be updated every time, which incurs costs. In this paper, we use an object detection algorithm to detect characters regardless of the license plate format, and apply a handwritten Hangul recognition (HHR) algorithm to enhance the recognition accuracy of a single Hangul character, called a Hangul unit. Since a Hangul unit is recognized by combining an initial consonant, a medial vowel, and a final consonant, it is possible to use Hangul units beyond the 40 used on Korean vehicle license plates.
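
The jamo-combination property the abstract relies on follows the standard Unicode Hangul composition formula, which a short sketch makes concrete:

```python
# Unicode Hangul composition: a precomposed syllable code point is
# 0xAC00 + (initial * 21 + medial) * 28 + final, with 19 initial
# consonants, 21 medial vowels, and 28 finals (index 0 = no final).
MEDIALS, FINALS = 21, 28

def compose_hangul(initial, medial, final=0):
    """Combine initial/medial/final jamo indices into one precomposed
    Hangul syllable."""
    return chr(0xAC00 + (initial * MEDIALS + medial) * FINALS + final)
```

Because the recognizer outputs jamo rather than whole syllables, any of the 11,172 composable syllables can be produced this way, not just the 40 units that appear on current plates.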

      • Text and Sign Recognition for Indoor Localization

        Arpan Ghosh, Jeongwon Pyo, Tae-Yong Kuc  Institute of Control, Robotics and Systems (ICROS) 2020 ICROS International Conference Proceedings Vol.2020 No.10

        In this paper, we propose a modular approach to estimate the position and rotation of a mobile robot more precisely in an indoor environment using text and sign recognition. The modular approach is performed in a twofold manner, shown in Figure 1. The first part is the detection of regions containing text and various signs in the image, carried out by an object detection system. The second part is character recognition, where the detected textual region is passed to an optical character recognition (OCR) engine. This modular approach can be modified at any point for any mobile robot operating in an indoor environment with texts and signs, to help localize its position and rotation.

      • KCI-indexed

        Extracting Electrical Equipment Status Information via ESRGAN-Based Image Super-Resolution

        송현제, 문재현, 이기연, 채동주, 임승택  Korean Institute of Electrical Engineers 2022 The Transactions of the Korean Institute of Electrical Engineers Vol.71 No.10

        Electrical equipment performs external communication for monitoring. When a communication malfunction occurs, a way to extract information from the electrical equipment is needed. One approach is to capture the display of the equipment with a device such as a CCTV camera and then extract the information from the image using optical character recognition. However, the captured images are low-resolution, so optical character recognition does not work well on them. This paper proposes a simple method to improve optical character recognition performance with a super-resolution model. The proposed method converts the low-resolution image to a high-resolution image through a super-resolution model trained on a suitable electrical equipment image dataset, so that optical character recognition can extract information from the high-resolution image. Experiments on a real-world electrical equipment image dataset show that the proposed method helps to extract information from electrical equipment images.

      • KCI-indexed

        Bill Form Recognition Using FPN (Feature Pyramid Network)

        김대진, 황치곤, 윤창표  Korea Institute of Information and Communication Engineering 2021 Journal of the Korea Institute of Information and Communication Engineering Vol.25 No.4

        In the era of the Fourth Industrial Revolution, technological change is being applied in various fields, and automation, digitization, and data management are reaching the field of bills as well. Tens of thousands of bill forms circulate in society, and bill recognition is essential for their automation, digitization, and data management. Currently, OCR (Optical Character Recognition) technology is used to manage the various bills. Accuracy can be increased by first recognizing the form of the bill and then recognizing its contents. In this paper, a logo that can serve as an index for classifying the bill form is recognized as an object. Since the logo is small relative to the entire bill, FPN (Feature Pyramid Network), a deep learning technique, is used for small object detection. As a result, the proposed algorithm reduces resource waste and increases OCR recognition accuracy.
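
The FPN top-down pathway that makes small objects like logos detectable reduces to a repeated upsample-and-add merge of feature maps. This dependency-free sketch illustrates only that merge pattern; the 1x1 lateral projections and 3x3 smoothing convolutions of a real FPN are omitted.

```python
def upsample2x(feat):
    # Nearest-neighbor 2x upsampling of a 2-D feature map (list of rows).
    out = []
    for row in feat:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def fpn_merge(coarser, lateral):
    """One top-down FPN step: upsample the coarser, semantically richer
    map and add the higher-resolution lateral map elementwise, giving
    small objects both context and spatial detail."""
    up = upsample2x(coarser)
    return [[u + l for u, l in zip(ur, lr)] for ur, lr in zip(up, lateral)]
```

Chaining this step from the deepest backbone level down to the shallowest produces the pyramid of merged maps on which small-logo detection heads operate.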

      • KCI Excellence-indexed

        Error-Pattern-Based OCR Error Correction

        김나라, 조용석, 박호현  Korean Institute of Information Scientists and Engineers 2024 Journal of KIISE Vol.51 No.3

        The development of Optical Character Recognition (OCR) has made it possible to digitize analog documents, with very high recognition accuracy for standardized documents. However, OCR errors still occur frequently in complex documents, so an OCR error correction procedure is required. The majority of OCR errors repeat for the same characters; OCR error information is therefore important in error correction work, yet few studies utilize it. To identify patterns, this study examines OCR error data and then proposes an OCR error correction technique based on neural machine translation. Experiments were carried out on the English dataset from the ICDAR 2017/2019 Post-OCR text correction competition to validate the proposed method. The experimental results showed that the model using OCR error information achieved a higher improvement rate than the model without it, improving results by up to 8 percentage points over the existing state of the art.
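
A rule-based reduction of the repeated-error idea: harvest character confusion patterns from aligned OCR output and ground truth, then apply them as candidate substitutions checked against a lexicon. The patterns and helper below are hypothetical; the paper itself feeds the error information into a neural machine translation model.

```python
# Hypothetical confusion patterns (OCR output -> intended text); a real
# system would mine these from aligned OCR/ground-truth pairs.
ERROR_PATTERNS = {"l1": "h", "0": "o", "vv": "w", "cl": "d"}

def correct_token(token, lexicon):
    """Return the token if it is already a word; otherwise try each
    error-pattern substitution and keep the first candidate found in the
    lexicon. A rule-based stand-in for the paper's NMT-based model."""
    if token in lexicon:
        return token
    for err, truth in ERROR_PATTERNS.items():
        candidate = token.replace(err, truth)
        if candidate in lexicon:
            return candidate
    return token
```

Tokens the patterns cannot repair are left unchanged, so the correction never makes already-correct text worse.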

      • Low Complexity Orientation Compensation Algorithm for Orientation-Invariant Character Recognition

        M. Y. Abbass, HyungWon Kim  Institute of Electronics and Information Engineers (IEIE) 2017 Proceedings of the IEIE Conference Vol.2017 No.1

        This paper introduces a novel orientation estimation technique for character image recognition. Conventional methods of rotating images often use interpolation with a number of iterations of image rotation and adjustment. The accuracy of a character recognition algorithm depends on the quality of the features and their invariance to deformation. The proposed method ensures that the extracted features are invariant to rotational deformation of character images. In addition, it has low complexity and is thus well suited for low-power embedded applications. The key idea behind the low complexity lies in accurately estimating the orientation of a rotation-deformed image by calculating the center of mass of the bipartite image. We demonstrate the performance of the proposed method using simulations with character images captured by a rotated camera.
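
The center-of-mass idea can be sketched as follows: split the binary character image into two halves, compute each half's centroid, and read the orientation from the angle of the line joining them. This is our interpretation of "bipartite"; the paper's exact partitioning may differ.

```python
import math

def orientation_from_bipartite(img):
    """Estimate the rotation (degrees) of a binary character image given
    as a list of rows of 0/1 values, from the centroids of its left and
    right halves. Image y grows downward, so a positive angle means the
    character tilts down to the right."""
    rows, cols = len(img), len(img[0])

    def centroid(c0, c1):
        n = sx = sy = 0
        for r in range(rows):
            for c in range(c0, c1):
                if img[r][c]:
                    n += 1
                    sx += c
                    sy += r
        return sx / n, sy / n

    x1, y1 = centroid(0, cols // 2)      # left half
    x2, y2 = centroid(cols // 2, cols)   # right half
    return math.degrees(math.atan2(y2 - y1, x2 - x1))
```

No interpolation or iterative rotation is needed: one pass over the pixels yields the angle, which is what keeps the complexity low.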
