An Energy-Efficient 64-bit Prefix Adder based on Semidynamic and Bypassing Structures
Jaemin Hwang, Seongrim Choi, Byeong-Gyu Nam. The Institute of Electronics Engineers of Korea, 2015. Journal of Semiconductor Technology and Science Vol.15 No.1
An energy-efficient 64-bit prefix adder is proposed for micro-server processors based on both semidynamic and bypassing structures. Prefix adders consist of three main stages, i.e., the propagate-generate (PG) stage, the carry-merge (CM) tree, and the sum generators. In this architecture, the PG and CM stages consume most of the power because they are based on domino circuits. This letter proposes a semidynamic PG stage for energy efficiency. In addition, we adopt a bypassing structure in the CM tree to reduce its switching activity. Experimental results show a 19.1% improvement in energy efficiency over prior art.
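The three stages named in the abstract can be illustrated at the bit level. The sketch below is a minimal software model of a Kogge-Stone-style prefix adder, assuming that specific prefix topology for illustration; it is not the paper's circuit, and all names are illustrative.

```python
# Bit-level model of the three prefix-adder stages described above:
# propagate-generate (PG), carry-merge (CM) tree, and sum generation.

def prefix_add(a: int, b: int, width: int = 64) -> int:
    """Add two `width`-bit integers via a Kogge-Stone-style prefix structure."""
    mask = (1 << width) - 1
    a &= mask
    b &= mask
    # PG stage: per-bit generate and propagate signals.
    g = a & b          # generate: both input bits are 1
    p = a ^ b          # propagate: exactly one input bit is 1
    # CM tree: log2(width) levels of group carry merging.
    gp, pp = g, p
    d = 1
    while d < width:
        gp = (gp | (pp & (gp << d))) & mask
        pp = (pp & (pp << d)) & mask
        d <<= 1
    # Sum generation: XOR the propagate bits with the incoming carries.
    carries = (gp << 1) & mask
    return (p ^ carries) & mask
```

Each loop iteration doubles the span of the merged generate/propagate groups, so a 64-bit add completes in six CM levels, which is the depth advantage prefix adders trade area and switching activity for.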
Efficient Large Dataset Construction using Image Smoothing and Image Size Reduction
Jaemin Hwang, Sac Lee, Hyunwoo Lee, Seyun Park, Jiyoung Lim. Korean Association of Artificial Intelligence, 2023. Korean Journal of Artificial Intelligence (KJAI) Vol.11 No.1
With the continuous growth in the amount of data collected and analyzed, deep learning has become increasingly popular for extracting meaningful insights in various fields. However, hardware limitations pose a challenge to achieving meaningful results with limited data. To address this challenge, this paper proposes an algorithm that leverages the characteristics of convolutional neural networks (CNNs) to reduce the size of image datasets by 20%, smoothing images and shrinking them based on their color elements. The proposed algorithm reduces the learning time and, as a result, the computational load on the hardware. The experiments conducted in this study show that the proposed method achieves effective learning, with similar or slightly higher accuracy than the original dataset, while reducing computational and time costs. This color-centric dataset construction method using image smoothing techniques can lead to more efficient learning on CNNs. It can be applied in applications such as image classification and recognition and can contribute to more efficient and cost-effective deep learning. This paper presents a promising approach to reducing the computational load and time costs associated with deep learning while providing meaningful results with limited data, enabling deep learning to be applied to a broader range of applications.
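The smooth-then-shrink preprocessing the abstract describes can be sketched as follows. The paper does not specify its exact filters, so the 3x3 box blur, nearest-neighbor resize, and the 0.8 pixel-count target are assumptions made for illustration.

```python
# Illustrative sketch: smooth each image, then shrink it so it holds
# roughly 20% fewer pixels, mirroring the dataset reduction described above.

def box_blur(img):
    """3x3 box blur on a 2D list of per-pixel intensities (edges clamped)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out

def shrink(img, pixel_ratio=0.8):
    """Nearest-neighbor downscale so the pixel count drops to ~pixel_ratio."""
    h, w = len(img), len(img[0])
    scale = pixel_ratio ** 0.5   # per-axis factor, ~0.894 for 20% fewer pixels
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    return [[img[int(y * h / nh)][int(x * w / nw)] for x in range(nw)]
            for y in range(nh)]

def preprocess(img):
    return shrink(box_blur(img))
```

Smoothing before shrinking discards high-frequency detail that a CNN's early convolutional layers would otherwise spend capacity on, which is one plausible reason accuracy can survive the pixel reduction.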
Development of a QEMU and GPGPU-Sim Based Simulation Framework for CUDA Computing on Non-x86 Platforms
Jaemin Hwang, Jong-Wook Choi, Seongrim Choi, Byeong-Gyu Nam. Korea Society of Industrial Information Systems, 2014. Journal of the Korea Industrial Information Systems Research Vol.19 No.2
This paper proposes a CUDA simulation framework for non-x86 computing platforms based on QEMU and GPGPU-Sim. Previous simulators for heterogeneous CPU-GPU computing platforms either supported only x86 CPU models or did not support the CUDA computing platform. In this work, we combined QEMU, which can model non-x86 as well as x86 CPUs, with GPGPU-Sim, a GPU simulator that supports CUDA. This approach provides a simulation framework for CUDA computing on non-x86 CPU models.
Design and Empirical Study of a Military Obstacle Classification Model Using Google Teachable Machine
Jaemin Hwang, Jungmok Ma. Society for Computational Design and Engineering, 2022. Korean Journal of Computational Design and Engineering Vol.27 No.2
With the recent development of the Obstacle Clearance Tank (K-600), which can overcome minefields, rockfalls, and road craters, the ROK Army can shorten the time required to overcome obstacles and increase operational efficiency. However, to cope with the future shortage of military service resources and to guarantee operator survivability, an unmanned obstacle clearance tank should be introduced along with artificial intelligence technologies. In developing an unmanned obstacle clearance tank, the initial recognition stage is critical among the "recognition-control-action" stages. This study aims to build an obstacle recognition and classification model based on Google Teachable Machine and to verify the model in a real RC-car camera test environment.