Hayoung Eom (엄하영), Jeonghwan Kim (김정환), Seungyun Ji (지승윤), Heeyoul Choi (최희열) — Digital Contents Society, 2020, Journal of Digital Contents Society, Vol. 21, No. 2
With the advances in deep learning algorithms, reinforcement learning has achieved considerable success in tasks such as games and physics-based models that require continuous actions. Many platforms and toolkits, such as OpenAI Gym, were devised to evaluate and compare reinforcement learning algorithms, and thus made significant contributions to the deep learning community. Beyond these developments, given the increasing demand for autonomous vehicles and for rule-based parking assistance systems built on attached sensors, a parking simulator to which reinforcement learning can be applied is needed. In this paper, we develop a new autonomous car parking simulator that allows the learning agent to be trained with reinforcement learning algorithms. The results show the agent being successfully trained in the simulator with the Deep Deterministic Policy Gradient (DDPG) algorithm.
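The abstract does not include code, but the Gym-style interaction loop such a simulator exposes can be sketched. The toy 1-D environment below is entirely hypothetical (the paper's simulator models full car parking dynamics), and a hand-coded proportional-derivative controller stands in for the DDPG actor, since training DDPG is out of scope for a sketch:

```python
import numpy as np

class ToyParkingEnv:
    """Hypothetical 1-D stand-in for a parking simulator: the agent applies
    a continuous acceleration to bring the car to rest at position x = 0.
    Exposes the usual Gym-style reset()/step() interface."""

    def reset(self):
        self.x = np.random.uniform(2.0, 5.0)  # random initial position
        self.v = 0.0                          # initial velocity
        return np.array([self.x, self.v])

    def step(self, action):
        a = float(np.clip(action, -1.0, 1.0))  # bounded continuous action
        self.v += 0.1 * a                      # simple Euler integration, dt = 0.1
        self.x += 0.1 * self.v
        reward = -(self.x ** 2 + 0.1 * self.v ** 2)   # dense shaping reward
        done = abs(self.x) < 0.05 and abs(self.v) < 0.05  # "parked"
        return np.array([self.x, self.v]), reward, done

env = ToyParkingEnv()
state = env.reset()
total = 0.0
for _ in range(200):
    # A DDPG actor network would map state -> action here; a simple
    # proportional-derivative policy is used as an illustrative stand-in.
    action = -0.5 * state[0] - 1.0 * state[1]
    state, reward, done = env.step(action)
    total += reward
    if done:
        break
```

A DDPG agent would replace the hand-coded policy with an actor network and learn from `(state, action, reward, next_state)` transitions stored in a replay buffer; the environment interface stays the same.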
Alpha-Integration Pooling for Convolutional Neural Networks
Hayoung Eom (엄하영), Heeyoul Choi (최희열) — Korean Institute of Information Scientists and Engineers, 2021, Journal of KIISE, Vol. 48, No. 7
Convolutional neural networks (CNNs) have achieved remarkable performance in many applications, especially in image recognition tasks. As a crucial component of CNNs, sub-sampling plays an important role in efficient training and in the invariance property, and max-pooling and arithmetic average-pooling are the most commonly used sub-sampling methods. Beyond these two, however, there are many other pooling types, such as the geometric and harmonic averages. Since it is not easy for algorithms to find the best pooling method, the pooling type is usually predefined, which might not be optimal across different tasks. Like other parameters in deep learning, however, the pooling type can be driven by the data for a given task. In this paper, we propose α-integration pooling (αI-pooling), which has a trainable parameter α that determines the type of pooling. αI-pooling is a general pooling method that includes max-pooling and arithmetic average-pooling as special cases, depending on the parameter α. Experiments show that αI-pooling outperforms other pooling methods in image recognition tasks. It also turns out that each layer has a different optimal pooling type.
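The α-mean underlying αI-pooling can be sketched in a few lines. The sketch below assumes the standard α-integration representation f(x) = x^((1-α)/2) for α ≠ 1 (log x for α = 1), applied to positive activations; the paper's pooling layer would make α a trainable parameter and backpropagate through this function, which is omitted here:

```python
import numpy as np

def alpha_mean(x, alpha):
    """Alpha-integration of positive values x: f^{-1}(mean(f(x)))
    with f(x) = x^((1-alpha)/2), or log x when alpha = 1.

    Special cases: alpha = -1 gives the arithmetic mean, alpha = 1
    the geometric mean, alpha = 3 the harmonic mean, and
    alpha -> -inf approaches max (alpha -> +inf approaches min).
    """
    x = np.asarray(x, dtype=float)
    if np.isclose(alpha, 1.0):
        # f(x) = log x, so the alpha-mean is the geometric mean
        return float(np.exp(np.mean(np.log(x))))
    p = (1.0 - alpha) / 2.0
    return float(np.mean(x ** p) ** (1.0 / p))
```

Applied over each pooling window of a feature map, a single scalar α per layer thus interpolates smoothly between average-like and max-like pooling, and can be learned jointly with the network weights.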