GPUedHadoop : Hadoop as Parallel Processing Framework for GPU Cluster
Bongen Gu,Seokil Song,Yoonsik Kwak 한국정보기술학회 2016 한국정보기술학회논문지 Vol.14 No.6
Many research groups have tried to use GPGPU to enhance the performance of Hadoop. In this paper, we propose a new approach that enhances the performance of the Hadoop Map task and Combiner by using GPGPU on a Hadoop cluster. In our approach, a whole HDFS block, called a split, is passed to the Map task for GPU processing, and the result of the GPU-enabled Mapper is then passed to the Combiner for GPU processing. In other words, the steps accelerated by the GPU are the Mapper and the Combiner. A GPU-enabled Hadoop adopting our approach retains the characteristics of native Hadoop while adding high performance. To show that our approach effectively enhances the performance of Hadoop by using the GPU, we ran experiments on GPU-accelerated Hadoop. Our experimental results show a speedup factor between 3.27 and 4.19, so we conclude that our approach to GPU-enabled Hadoop is effective in enhancing performance.
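The pipeline the abstract describes can be sketched as follows. This is an illustrative model, not the authors' implementation: a plain batch function stands in for each GPU kernel, and the word-count workload is an assumed example.

```python
# Illustrative sketch (not the authors' implementation): models the proposed
# pipeline in which a whole HDFS split is passed to a GPU-enabled Mapper,
# whose entire output is then passed to a GPU-enabled Combiner.
# A real GPU kernel is replaced here by a plain batch function.

def gpu_map(split_records):
    # Stand-in for a GPU kernel launched once over the whole split:
    # emit (word, 1) pairs for every record in a single batch call.
    return [(word, 1) for record in split_records for word in record.split()]

def gpu_combine(mapped_pairs):
    # Stand-in for a GPU kernel that locally aggregates the Mapper output.
    counts = {}
    for key, value in mapped_pairs:
        counts[key] = counts.get(key, 0) + value
    return counts

split = ["hadoop gpu hadoop", "gpu cluster"]   # one whole HDFS split
combined = gpu_combine(gpu_map(split))
print(combined)   # {'hadoop': 2, 'gpu': 2, 'cluster': 1}
```

The key design point is that both stages operate on whole batches (the full split, then the full Mapper output), which matches the abstract's claim that the Mapper and Combiner are the two GPU-accelerated steps.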
WSP: Whole-Split Passing Technique for Accelerating the Hadoop Map Task by Using GPU
Bongen Gu 한국정보기술학회 2014 한국정보기술학회논문지 Vol.12 No.6
MapReduce is a programming model for distributed computing over large-scale data. Hadoop is one implementation of the MapReduce framework and includes a distributed file system called HDFS. Compute-intensive MapReduce applications are an important class of applications, and many research projects try to accelerate the MapReduce tasks of such applications by using a GPU; our work is one of them. In particular, we focus on accelerating the Map task with a GPU. In this paper, we propose the whole-split passing technique, in which the Map task transfers the whole split block to the GPU instead of one or a small number of records. Our technique therefore reduces the communication overhead between the Hadoop Map task running on the CPU and the kernel executed on the GPU, and fully exploits the GPU's parallel computing power by launching as many threads as there are records in the split. To validate our technique, we performed experiments; the results show that the whole-split passing technique reduces the execution time of the Map task.
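The overhead argument can be made concrete by counting host-to-GPU transfer calls. A minimal sketch, with hypothetical record counts (not the paper's measurements):

```python
# Illustrative sketch: contrasts a per-record (or small-batch) baseline with
# the whole-split passing technique by counting host-to-GPU transfer calls,
# which is the communication overhead the technique reduces.

def per_record_transfers(num_records, batch=1):
    # Baseline: each call moves `batch` records across the host-GPU boundary.
    return (num_records + batch - 1) // batch

def whole_split_transfers(num_records):
    # WSP: the entire split crosses the boundary once; the kernel can then
    # launch one GPU thread per record inside the split.
    return 1

records_in_split = 10_000           # assumed split size for illustration
print(per_record_transfers(records_in_split))    # 10000 transfer calls
print(whole_split_transfers(records_in_split))   # 1 transfer call
```

Since each transfer call carries fixed latency, collapsing thousands of calls into one is where the reduction in Map task execution time comes from.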
A Live-Face Test Method Using Facial Depth Information to Decide Whether a Face Is Real
구본근(Bongen Gu) 한국정보기술학회 2021 한국정보기술학회논문지 Vol.19 No.12
Because previous face recognition systems analyze a face image captured by a camera, they successfully carry out identification or verification even when presented with a non-live face, such as an image printed on paper or shown on a smart device screen; in this case, a face recognition system can be deceived. To solve this problem, in this paper we propose a live-face test method that decides whether a captured face is live by using the distribution of depth information across the face. We implemented our live-face test method and experimentally ran our implementation using a live face, an image printed on paper, and a slightly warped printed image. The experimental results show that our method is effective and can serve as a preprocessing step of a face recognition system.
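The intuition behind a depth-distribution test can be sketched as follows. The threshold, units, and sample values are assumptions for illustration, not the paper's parameters: a flat print yields nearly uniform depth, while a live face has real relief (the nose is closer to the camera than the ears).

```python
# Illustrative sketch (threshold and data are assumptions, not the paper's):
# decide liveness from the spread of depth values sampled on the face.

def is_live_face(depth_values_mm, min_spread_mm=15.0):
    # A live face spans a noticeable depth range across facial points;
    # a flat printed photo stays well below the minimum spread.
    return (max(depth_values_mm) - min(depth_values_mm)) >= min_spread_mm

live_face_depths = [412.0, 398.0, 431.0, 405.0, 389.0]   # varied facial relief
printed_photo_depths = [500.0, 501.5, 499.0, 500.5]      # nearly flat surface

print(is_live_face(live_face_depths))       # True
print(is_live_face(printed_photo_depths))   # False
```

A slightly warped print would raise the spread somewhat, which is presumably why the paper tests that case separately; a robust threshold must sit above the spread a bent sheet of paper can produce.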
구본근(Bongen Gu) 한국정보기술학회 2023 한국정보기술학회논문지 Vol.21 No.2
A RAM disk, a storage device built on volatile memory, offers high performance. However, it has a serious problem: data is lost whenever the power supply fails. In this paper, we propose a recovery method that uses a remote backup to solve the data-loss problem of the RAM disk. Our method transfers the blocks of each disk write request to a remote system while the RAM disk driver processes disk I/O requests, and recovers the whole RAM disk file system from the remote system when the device driver module for the RAM disk is loaded. Therefore, our proposed backup and recovery method for the RAM disk can effectively restore the file system in a new hardware environment when a host change is required because of a hardware fault or a job relocation within an organization. To show that our recovery method is valid, we implemented it experimentally and verified that our implementation successfully recovered the RAM disk file system from the remote system.
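The write-mirroring and recovery flow described above can be sketched in miniature. This is a simplified model, not the authors' driver: a dict stands in for the remote backup system, and the block size is an assumption.

```python
# Illustrative sketch (simplified, not the authors' implementation): every
# block written to the RAM disk is also mirrored to a remote store, and the
# whole disk image is rebuilt from that store when the "driver" is reloaded.

BLOCK_SIZE = 4  # assumed tiny block size for illustration

class RamDisk:
    def __init__(self, num_blocks, remote_store):
        self.blocks = [b"\x00" * BLOCK_SIZE for _ in range(num_blocks)]
        self.remote = remote_store   # stand-in for the remote backup host

    def write_block(self, index, data):
        # Serve the write request and mirror the block to the remote system.
        self.blocks[index] = data
        self.remote[index] = data

    def recover(self):
        # Rebuild the volatile disk contents from the remote copy, as done
        # when the RAM disk device driver module is loaded.
        for index, data in self.remote.items():
            self.blocks[index] = data

remote = {}
disk = RamDisk(4, remote)
disk.write_block(0, b"DATA")
disk.write_block(2, b"BLKX")

fresh = RamDisk(4, remote)   # power loss: a new, empty RAM disk, same remote
fresh.recover()
print(fresh.blocks[0], fresh.blocks[2])   # b'DATA' b'BLKX'
```

Because mirroring happens inside the write path, the remote copy is always as fresh as the last completed write, which is what makes recovery on a brand-new host possible.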