Ayed Ahmad Hamdan Al-Radaideh, Mohd Shafry bin Mohd Rahim, Wad Ghaban, Majdi Bsoul, Shahid Kamal, Naveed Abbas. Korean Society for Internet Information (한국인터넷정보학회), 2023. KSII Transactions on Internet and Information Systems, Vol.17 No.7
Innovation and the rapidly increasing functionality of user-friendly smartphones have encouraged shutterbugs to capture picturesque images at work or during travel. Formal signboards are placed with marketing objectives and are enriched with text to attract people. Extraction and recognition of text from natural images is an emerging research issue that needs consideration. Compared to conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of these scene-text photos make this problem more difficult, and Arabic scene-text extraction and recognition adds further complications. The method described in this paper uses a two-phase approach to extract Arabic text, with word-boundary awareness, from scene images with varying text orientations. The first stage uses a convolutional auto-encoder, and the second uses Arabic Character Segmentation (ACS) followed by a traditional two-layer neural network for recognition. This study presents how an Arabic synthetic training dataset can be created to exemplify text superimposed on different scene images. For this purpose, a dataset of 10K cropped images in which Arabic text was found was created for the detection phase, and a 127K Arabic character dataset for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15K quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach is used to detect complex Arabic scene text with high flexibility, including text that is arbitrarily oriented, curved, or deformed. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy.
We believe that future researchers will advance the field of image processing for text images in any language, improving noise reduction in scene images by enhancing the functionality of the VGG-16-based model with neural networks.
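The recognition phase described above uses a traditional two-layer neural network over segmented character crops. As a minimal sketch of that kind of recognizer (not the authors' implementation; the input size of 32x32 pixels, hidden width of 128, and a hypothetical 28-class output are illustrative assumptions), a two-layer softmax classifier can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Element-wise rectified linear activation
    return np.maximum(0.0, x)

def softmax(z):
    # Numerically stable row-wise softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class TwoLayerNet:
    """Minimal two-layer classifier: input -> hidden (ReLU) -> softmax."""

    def __init__(self, n_in, n_hidden, n_classes):
        self.W1 = rng.normal(0.0, 0.01, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.01, (n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)

    def forward(self, x):
        h = relu(x @ self.W1 + self.b1)
        return softmax(h @ self.W2 + self.b2)

# Hypothetical setup: 32x32 grayscale character crops, 28 output classes
net = TwoLayerNet(32 * 32, 128, 28)
probs = net.forward(rng.random((4, 32 * 32)))
print(probs.shape)  # (4, 28): one probability row per character crop
```

Each output row is a probability distribution over character classes; in the paper's pipeline the crops would come from the ACS segmentation stage rather than random input.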
Automation Monitoring With Sensors For Detecting Covid Using Backpropagation Algorithm
Pravin R. Kshirsagar, Hariprasath Manoharan, Vineet Tirth, Mohd Naved, Ahmad Tasnim Siddiqui, Arvind K. Sharma. Korean Society for Internet Information (한국인터넷정보학회), 2021. KSII Transactions on Internet and Information Systems, Vol.15 No.7
This article focuses on providing remedial solutions for COVID disease through a data collection process. Recently, in India, sudden human losses have occurred due to the spread of infectious viruses, and people cannot readily determine the number of affected individuals or their locations. Therefore, the proposed method integrates robotic technology for monitoring the health condition of different people. If an individual is affected by the infectious disease, data is collected and, within a short span of time, reported to the control center. Once the information is collected, all individuals can access it through an application platform. The platform is developed based on certain parametric values, and the location of each individual is retained. For precise application development, the parametric values related to the identification process, such as sub-interval points and detection intensity, must be established. To check the effectiveness of the proposed robotic technology, an online monitoring system is employed and the output is realized using MATLAB. From the simulated values, it is observed that the proposed method outperforms the existing method in terms of data quality, with an observed percentage of 82.
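The title names backpropagation as the detection algorithm over sensor readings. As a minimal sketch of that idea (not the authors' system; the two features, body temperature and SpO2, the labels, and all network sizes and hyperparameters are illustrative assumptions), a tiny network trained by hand-derived backpropagation looks like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensor data: [temperature in Celsius, SpO2 in percent]
X = np.array([[36.5, 98.0], [39.2, 91.0], [36.8, 97.0], [38.9, 89.0]])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # normalize features
y = np.array([[0.0], [1.0], [0.0], [1.0]])  # 1 = likely affected

# One hidden layer of 4 tanh units, one sigmoid output
W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of mean binary cross-entropy
    dz2 = (p - y) / len(X)
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * (1.0 - h ** 2)  # tanh derivative
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)
    # Gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(pred.ravel())
```

After training, the network separates the toy "affected" and "unaffected" readings; a deployed system would instead stream readings from the monitoring sensors into such a trained model.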