Automatic Transformation of Korean Fonts using Unbalanced U-net and Generative Adversarial Networks
Pang Jia(방가), Seunghyun Ko(고승현), Yang Fang(방양), Geun-sik Jo(조근식), Korean Institute of Information Scientists and Engineers, 2019, Journal of KIISE (정보과학회논문지) Vol.46 No.1
In this paper, we study the typography transfer problem: transferring a source font to an analogous font with a specified style. We treat it as an image-to-image translation problem and propose an unbalanced U-net architecture based on Generative Adversarial Networks (GANs). Unlike the traditional balanced U-net architecture, the architecture we propose consists of two subnets: (1) an unbalanced U-net responsible for transferring a specified font style to another while maintaining semantic and structural information, and (2) an adversarial net. Our model uses a compound loss function that includes an L1 loss, a constant loss, and a binary GAN loss to facilitate generating the desired target fonts. Experiments demonstrate that, compared with a balanced U-net, our proposed network yields a more stable training loss with faster convergence in the cheat loss, and avoids falling into a degradation problem in the generating loss.
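The compound generator objective described above (an L1 pixel loss, a constant loss on encoded features, and a binary GAN "cheat" loss) can be sketched as a single function. This is a minimal NumPy illustration, not the paper's implementation: the weights `w_l1` and `w_const` and all argument names are hypothetical placeholders.

```python
import numpy as np

def compound_loss(fake, target, enc_source, enc_fake, d_fake_prob,
                  w_l1=100.0, w_const=15.0):
    """Sketch of a compound generator loss: L1 + constant + binary GAN loss.

    fake, target    : generated and ground-truth glyph images
    enc_source,
    enc_fake        : encoder features of the source and generated glyphs
    d_fake_prob     : discriminator's probability that `fake` is real
    w_l1, w_const   : hypothetical loss weights (not from the paper)
    """
    # L1 loss: pixel-wise difference between generated and target glyphs
    l1 = np.mean(np.abs(fake - target))
    # constant loss: the generated glyph should keep the source's encoded
    # semantic/structure information (features stay roughly constant)
    const = np.mean((enc_source - enc_fake) ** 2)
    # binary GAN ("cheat") loss: the generator wants D(fake) -> 1
    eps = 1e-8
    gan = -np.mean(np.log(d_fake_prob + eps))
    return w_l1 * l1 + w_const * const + gan
```

In training, this scalar would be minimized with respect to the generator's parameters while the discriminator is trained with the opposing binary objective.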
Backbone Network for Object Detection with Multiple Dilated Convolutions and Feature Summation
Vani Natalia Kuntjono(바니 나탈리아 쿤트조노), Seunghyun Ko(고승현), Yang Fang(방양), Geun-sik Jo(조근식), Korean Institute of Information Scientists and Engineers, 2018, Journal of KIISE (정보과학회논문지) Vol.45 No.8
The advancement of CNNs has led to a trend of using very deep convolutional neural networks, containing more than 100 layers, not only for object detection but also for image segmentation and object classification. However, deep CNNs require substantial resources and are therefore not suitable for users with limited resources or real-time requirements. In this paper, we propose a new backbone network for object detection with multiple dilated convolutions and feature summation. Feature summation enables an easier flow of gradients and minimizes the loss of spatial information caused by convolving. By using multiple dilated convolutions, we can widen the receptive field of individual neurons without adding more parameters. Furthermore, by using a shallow neural network as the backbone, our network can be trained and used in an environment with limited resources, without pre-training on the ImageNet dataset. Experiments demonstrate that we achieved 71% and 38.2% accuracy on the Pascal VOC and MS COCO datasets, respectively.
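The two ideas in the abstract, dilated convolution (a kernel of size k with dilation d covers (k-1)·d+1 inputs without extra parameters) and feature summation (adding branch outputs plus an identity shortcut), can be illustrated with a 1-D NumPy sketch. This is a hedged toy example, not the paper's backbone: the function names and the dilation set (1, 2, 4) are assumptions for illustration.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D dilated convolution for a single channel.

    The same kernel spans (len(kernel)-1)*dilation + 1 input positions,
    widening the receptive field without adding parameters.
    """
    k = len(kernel)
    span = (k - 1) * dilation          # effective receptive-field span - 1
    pad = span // 2
    xp = np.pad(np.asarray(x, dtype=float), (pad, span - pad))
    out = np.zeros(len(x))
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * dilation]
    return out

def multi_dilated_block(x, kernel, dilations=(1, 2, 4)):
    """Feature summation over parallel dilated branches plus an
    identity shortcut, which keeps spatial size and eases gradient flow."""
    x = np.asarray(x, dtype=float)
    return x + sum(dilated_conv1d(x, kernel, d) for d in dilations)
```

Because every branch reuses the same kernel weights and the outputs are summed element-wise (not concatenated), the block enlarges the receptive field while the parameter count and feature-map size stay fixed.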