A Collapse and Fall Detection Method Using Depth Estimation Techniques
오승진(Seung-Jin Oh), 라승탁(Seung-Tak Ra), 이태윤(Tae-Yoon Lee), 오준혁(Jun-Hyeok Oh), 신인영(In-Young Shin), 이승호(Seung-Ho Lee). The Institute of Electronics and Information Engineers (IEIE), 2023 IEIE Conference, Vol. 2023, No. 6
In this paper, we propose a fall detection method using depth estimation techniques[1]. The proposed method consists of four processes: depth estimation using Depthformer[2], selection of three points on the floor plane from the generated depth map and derivation of the plane equation, calculation of three-dimensional relative-distance coordinates, and fall detection using the calculated relative-distance coordinates. In the experiment, falls were detected in 9 out of 10 fall videos distributed by the Korea Internet & Security Agency (KISA), demonstrating the effectiveness of the method.
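As a rough illustration of the geometric steps summarized in this abstract, the sketch below (not the authors' code; the point coordinates, threshold value, and function names are illustrative assumptions) derives a floor-plane equation from three 3D points recovered from a depth map and measures how far a tracked body point lies above that plane.

    import numpy as np

    def plane_from_points(p1, p2, p3):
        """Return (normal, d) of the plane n.x + d = 0 through three 3D points."""
        p1, p2, p3 = map(np.asarray, (p1, p2, p3))
        normal = np.cross(p2 - p1, p3 - p1)
        normal = normal / np.linalg.norm(normal)
        d = -np.dot(normal, p1)
        return normal, d

    def distance_to_plane(point, normal, d):
        """Unsigned distance from a 3D point to the plane n.x + d = 0."""
        return abs(np.dot(normal, np.asarray(point)) + d)

    # Hypothetical usage: three floor points and one head point, all assumed to be
    # back-projected from the estimated depth map into camera coordinates.
    floor_pts = [(0.0, 1.6, 2.0), (1.0, 1.6, 3.0), (-1.0, 1.6, 3.5)]
    normal, d = plane_from_points(*floor_pts)
    head_height = distance_to_plane((0.2, 0.4, 2.5), normal, d)
    FALL_HEIGHT_THRESHOLD = 0.3  # metres above the floor; illustrative value only
    is_fallen = head_height < FALL_HEIGHT_THRESHOLD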
라승탁(Seung-Tak Ra), 오승진(Seung-Jin Oh), 이태윤(Tae-Yoon Lee), 오준혁(Jun-Hyeok Oh), 신인영(In-Young Shin), 이승호(Seung-Ho Lee). The Institute of Electronics and Information Engineers (IEIE), 2023 IEIE Conference, Vol. 2023, No. 6
In this paper, loitering and intrusion detection algorithms for intelligent CCTV were developed. First, object detection was performed with Yolo_X, a deep learning model capable of real-time object detection. Second, a false-detection removal algorithm was applied to discard falsely detected objects. Finally, the event situation was determined by applying the loitering and intrusion algorithms to each remaining object. In the experiment, loitering and intrusion events were detected at the correct time in 59 out of 60 videos.
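A minimal sketch of the event logic summarized above, under assumed details not given in the abstract (a rectangular zone, a fixed dwell threshold, and per-object track IDs supplied by an external tracker):

    from collections import defaultdict

    ZONE = (100, 200, 500, 600)          # x_min, y_min, x_max, y_max (illustrative)
    LOITER_FRAMES = 30 * 10              # e.g. 10 s at 30 fps (illustrative)
    frames_in_zone = defaultdict(int)    # track_id -> consecutive frames inside zone

    def center_in_zone(box, zone):
        """True if the box center lies inside the rectangular zone."""
        x1, y1, x2, y2 = box
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        zx1, zy1, zx2, zy2 = zone
        return zx1 <= cx <= zx2 and zy1 <= cy <= zy2

    def update_events(track_id, box):
        """Return the event names triggered by this detection on this frame."""
        events = set()
        if center_in_zone(box, ZONE):
            events.add("intrusion")
            frames_in_zone[track_id] += 1
            if frames_in_zone[track_id] >= LOITER_FRAMES:
                events.add("loitering")
        else:
            frames_in_zone[track_id] = 0
        return events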
A Study on Deep Learning Networks and Layers for Improving Accuracy in a Multi-Camera Inline Inspection System for Pharmaceutical Containers
이태윤(Tae-Yoon Lee), 라승탁(Seung-Tak Ra), 오승진(Seung-Jin Oh), 오준혁(Jun-Hyeok Oh), 신인영(In-Young Shin), 이승호(Seung-Ho Lee). The Institute of Electronics and Information Engineers (IEIE), 2023 IEIE Conference, Vol. 2023, No. 6
In this paper, we study the effect of deep learning networks and layers on the accuracy of an inline inspection system for pharmaceutical containers. The base network was varied among a CNN, ResNet50, and a Vision Transformer. In the experiment, the training cost (loss) of ResNet50 was lower than that of the CNN and the Vision Transformer. Therefore, ResNet50 appears to be the appropriate choice for improving the accuracy of multi-camera inline inspection.
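As a hedged illustration of how the three base networks might be swapped behind a common classification head, the sketch below uses torchvision models; the two-class defect/non-defect output and the small CNN baseline are assumptions, not the paper's configuration.

    import torch.nn as nn
    from torchvision import models

    def build_model(backbone: str, num_classes: int = 2) -> nn.Module:
        """Build one of the three compared base networks with a shared-size output head."""
        if backbone == "cnn":  # small hand-rolled CNN baseline (assumed architecture)
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                nn.Flatten(), nn.Linear(64, num_classes))
        if backbone == "resnet50":
            model = models.resnet50(weights=None)
            model.fc = nn.Linear(model.fc.in_features, num_classes)
            return model
        if backbone == "vit":
            model = models.vit_b_16(weights=None)
            model.heads.head = nn.Linear(model.heads.head.in_features, num_classes)
            return model
        raise ValueError(backbone)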
A Case of Coronary Intramural Hematoma Presenting as ST-Segment Elevation Myocardial Infarction in an Elderly Man
한인섭(In Sub Han), 이혜원(Hye Won Lee), 박진섭(Jin Sup Park), 오준혁(Jun Hyeok Oh), 최정현(Jung Hyeon Choi), 이한철(Han Cheol Lee), 차광수(Kwang Soo Cha). The Korean Association of Internal Medicine, 2015, The Korean Journal of Medicine, Vol. 89, No. 4
An intramural hematoma is a rare, challenging cause of myocardial infarction, generally seen in middle-aged women without atherosclerotic risk factors. Intravascular ultrasound is useful in diagnosing and managing intramural hematomas. Here, we present an intramural hematoma presenting as ST-elevation myocardial infarction without definite intimal dissection in an elderly man, who was diagnosed using intravascular ultrasound and managed accordingly. (Korean J Med 2015;89:444-447)
A Comparative Study of the Auto Encoder and Variational Auto Encoder for Generating the UV Position Map Used in NeRF
김홍직(Hong-Jik Kim), 이희열(Hee-Yeol Lee), 라승탁(Seung-Tak Ra), 김정윤(Jeong-Yoon Kim), 오승진(Seung-Jin Oh), 김기범(Gi-Beom Kim), 유하영(Ha-Young Yoo), 이태윤(Tae-Yoon Lee), 오준혁(Jun-Hyeok Oh), 이승호(Seung-Ho Lee). The Institute of Electronics and Information Engineers (IEIE), 2022 IEIE Conference, Vol. 2022, No. 11
In this paper, the Auto Encoder and Variational Auto Encoder were compared for generating the UV Position Map, one of the important components of 3D face reconstruction. Both models were trained on the same MNIST data, and the Variational Auto Encoder performed better. This appears to be the effect of the reparameterization trick, which the Auto Encoder lacks: because the encoder extracts the mean and variance of the input data, the decoder knows the distribution of the input and can generate more refined images. Accordingly, by feeding the flow field of the continuous UV position map generated by the VAE to NeRF as a new input, novel views can be rendered that are more natural and cover more varied angles than those of the original NeRF.
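A minimal sketch of the reparameterization trick mentioned above, with assumed layer sizes; it shows the mean/log-variance bottleneck and KL term that distinguish the Variational Auto Encoder from a plain Auto Encoder, which maps the input to a single deterministic latent code.

    import torch
    import torch.nn as nn

    class VAEBottleneck(nn.Module):
        def __init__(self, in_dim: int, latent_dim: int):
            super().__init__()
            self.to_mu = nn.Linear(in_dim, latent_dim)
            self.to_logvar = nn.Linear(in_dim, latent_dim)

        def forward(self, h: torch.Tensor):
            mu = self.to_mu(h)
            logvar = self.to_logvar(h)
            eps = torch.randn_like(mu)                 # noise sampled outside the gradient path
            z = mu + torch.exp(0.5 * logvar) * eps     # reparameterization trick
            # KL regularizer that the plain Auto Encoder does not have
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
            return z, kl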
A Study on Face Texture Restoration Using a GAN in the Process of Generating a 3D Face from a 2D Image
이희열(Hee-Yeol Lee), 이영지(Young-Ji Lee), 라승탁(Seung-Tak Ra), 김정윤(Jeong-Yoon Kim), 이태윤(Tae-Yoon Lee), 오준혁(Jun-Hyeok Oh), 신인영(In-Young Shin), 권용우(Yong-Woo Kwon), 이승호(Seung-Ho Lee). The Institute of Electronics and Information Engineers (IEIE), 2024 IEIE Conference, Vol. 2024, No. 6
In this paper, we proposed a method for face texture restoration using a GAN in the process of generating 3D faces from 2D images. 3D face object creation was performed in three steps: 3D landmark extraction, 3D face shape construction, and 3D face texture synthesis. During the texture synthesis step, parts of the generated UV texture image were lost in areas invisible at the given face angle. We therefore proposed regenerating the lost regions of the UV texture image with a GAN and applying the result to the final output. In experiments with the proposed method, the distortions produced by the existing approach, which copies gray shades or overlapping regions, were corrected by the GAN restoration, yielding natural results. However, the GAN restoration also caused side effects such as an overall increase in brightness, and further research is needed to address this.
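A conceptual sketch of the restoration step described above, assuming a visibility mask and a hypothetical GAN generator (neither specified in the abstract): lost UV regions are filled in by the generator and composited back so that visible texture is preserved.

    import torch

    def restore_uv_texture(uv_texture: torch.Tensor,
                           visibility_mask: torch.Tensor,
                           generator: torch.nn.Module) -> torch.Tensor:
        """uv_texture: (1, 3, H, W); visibility_mask: (1, 1, H, W) with 1 = visible."""
        # Feed the masked texture plus the mask itself to the (hypothetical) generator.
        masked_input = torch.cat([uv_texture * visibility_mask, visibility_mask], dim=1)
        generated = generator(masked_input)
        # Keep visible pixels as-is; take only the lost regions from the generated image.
        return uv_texture * visibility_mask + generated * (1 - visibility_mask)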
A Study on the Effect of Knowledge Transfer via Feature Matching with L1 Loss on Speech Emotion Recognition
김정윤(Jeong-Yoon Kim), 이희열(Hee-Yeol Lee), 이영지(Young-Ji Lee), 라승탁(Seung-Tak Ra), 이태윤(Tae-Yoon Lee), 오준혁(Jun-Hyeok Oh), 신인영(In-Young Shin), 권용우(Yong-Woo Kwon), 이승호(Seung-Ho Lee). The Institute of Electronics and Information Engineers (IEIE), 2024 IEIE Conference, Vol. 2024, No. 6
In this paper, we study the effect on speech emotion recognition of knowledge transfer via feature matching with an L1 loss. Knowledge is transferred from a teacher, a Vision Transformer (ViT) that includes a relatively large-scale CNN, to a relatively small-scale student ViT that omits even positional embedding, by matching features with an L1 loss. As a result, the student network trained with the feature-matching step, which mimics the features of the teacher network, performed significantly better than a student network trained for classification from scratch. The accuracy of the teacher network was 94.17%, while the student trained with feature matching reached 94.65%, achieving higher accuracy with a smaller structure.
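A minimal sketch of the feature-matching step described above, with assumed function names and tensor shapes: intermediate features of a frozen teacher are mimicked by the student using an L1 loss.

    import torch
    import torch.nn.functional as F

    def feature_matching_loss(student_feats: torch.Tensor,
                              teacher_feats: torch.Tensor) -> torch.Tensor:
        """L1 distance between student and teacher features of the same shape."""
        return F.l1_loss(student_feats, teacher_feats)

    def matching_step(student, teacher, spectrogram, optimizer):
        """One optimization step of the (assumed) feature-matching stage."""
        with torch.no_grad():
            t_feats = teacher(spectrogram)     # teacher features, no gradients needed
        s_feats = student(spectrogram)
        loss = feature_matching_loss(s_feats, t_feats)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()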