Characteristics and Prognostic Factors of Long-term Survivors of Borrmann Type 4 Gastric Cancer
류제호 (Je Ho Ryu), 육정환 (Jeong Hwan Yook), 김병식 (Byung Sik Kim), 오성태 (Sung Tae Oh), 정순재 (Soon Tae Jung), 최원용 (Won Yong Choi) · The Korean Society of Gastroenterology, 2003 · The Korean Journal of Gastroenterology Vol.41 No.1
Background/Aims: The prognosis of Borrmann type 4 gastric cancer is still poor. To improve the prognosis of patients with Borrmann type 4 gastric cancer, it is important to understand the clinicopathological features of patients with long-term survival. Thus, we compared the characteristics of patients with long-term survival (survival duration of more than 5 years) with those of patients with short-term survival. Methods: We retrospectively analyzed 370 patients who were diagnosed with Borrmann type 4 gastric cancer and underwent gastric resection between 1989 and 1997 at our hospital. Twenty-one percent of the patients survived longer than 5 years. Clinicopathological factors were compared using the chi-square test, and multivariate analysis was performed to identify prognostic factors. Results: The 5-year survival rate of the 370 patients was 21%. Significant differences were noted in the following variables: tumor location, size, peritoneal metastasis, hepatic metastasis, lymph node metastasis, depth of invasion, stage, and curability. In multivariate analysis, tumor location was the most significant independent prognostic factor. Conclusions: These results suggest that even in Borrmann type 4 gastric cancer, localized disease can be cured by radical resection. (Korean J Gastroenterol 2003;41:9-14)
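The chi-square comparison of clinicopathological factors described above can be sketched as follows. This is a minimal illustration only: the counts below are hypothetical (the abstract reports 370 patients and a 21% 5-year survival rate, but not the per-factor tables), and the split by tumor location is invented for the example.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: 78 long-term vs. 292 short-term survivors,
# split by a binary factor such as tumor location (localized vs. diffuse).
chi2 = chi_square_2x2(45, 33, 120, 172)
# With df = 1, a statistic above 3.84 is significant at the 0.05 level.
```

A statistic of about 6.86 here would indicate a significant association between the factor and long-term survival at the 0.05 level.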
A Method for Inferring 3D Pose Information from 2D Pose Feature Points in a Single Image
김희진, 류제호, 이승주, 이종훈 · Korea Multimedia Society, 2023 · Journal of Korea Multimedia Society Vol.26 No.12
With the growth of robotic automation and home training, recognizing people's 3D poses has become necessary, and many studies aim to estimate human poses from images. However, because images project joint information onto a plane, it is often difficult to recover all pose-related information from a given image due to self-occlusion or occlusion by objects. This study proposed an approach for 3D pose inference that restores feature points, defined as relationships between joint coordinates, in images where only incomplete joint information is available due to occlusion. The 3D pose is inferred from 71 feature values, comprising normalized joint coordinate positions and values defined as relative relationships between joint coordinates. By estimating the 3D pose from not only the 2D joints themselves but also feature points that capture the relationships between joints, pose estimation was possible without predefined standard 3D poses.
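The two kinds of features named in the abstract (normalized joint positions and relative relationships between joints) can be sketched as below. The exact composition of the 71 features is not specified in the abstract, so the skeleton edges and joint count here are hypothetical placeholders.

```python
import numpy as np

def pose_features(joints, pairs=((0, 1), (1, 2), (2, 3))):
    """Build a feature vector from 2D joint coordinates:
    normalized positions plus relative offsets between joint pairs.
    `pairs` is a hypothetical skeleton; the paper's 71-feature
    definition is not given in the abstract."""
    j = np.asarray(joints, dtype=float)
    # Normalize positions into [0, 1] by the pose bounding box.
    mn, mx = j.min(axis=0), j.max(axis=0)
    norm = (j - mn) / np.maximum(mx - mn, 1e-8)
    # Relative relationships: offset vectors between selected joints.
    rel = np.concatenate([norm[a] - norm[b] for a, b in pairs])
    return np.concatenate([norm.ravel(), rel])
```

Occluded joints would need to be restored before (or as part of) this step; this sketch only shows the feature layout for fully observed joints.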
A 2D Grid-based Wall Object Extraction Method for 3D Building Modeling
유형준, 이경로, 류제호, 이승주, 이종훈 · Korea Multimedia Society, 2023 · Journal of Korea Multimedia Society Vol.26 No.12
Today, there is growing interest in digital twin technology, and research on automating architectural 3D modeling from point clouds continues. To automate architectural 3D modeling, each architectural component must be segmented from the point cloud and information extracted from each object. In this paper, we proposed a method for extracting information about walls, one of the architectural components, from point clouds. We employed a 2D grid-based approach consisting of floor segmentation, 2D grid generation, and wall extraction for 3D modeling. The performance of the proposed method was evaluated by comparing and analyzing recall on two floors.
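The three-step pipeline named in the abstract (floor segmentation, 2D grid generation, wall extraction) can be sketched roughly as follows. This is a simplified assumption of how such a pipeline might work, not the paper's actual algorithm: the floor is removed by a naive height cut, and "wall" cells are simply grid cells hit by many points.

```python
import numpy as np

def extract_wall_cells(points, cell=0.1, floor_margin=0.2, min_hits=5):
    """Project a point cloud onto a 2D XY occupancy grid; densely
    occupied cells are candidate wall cells. All thresholds are
    illustrative placeholders."""
    pts = np.asarray(points, dtype=float)
    # Floor segmentation (simplified): drop points near the lowest z.
    floor_z = pts[:, 2].min()
    above = pts[pts[:, 2] > floor_z + floor_margin]
    # 2D grid generation: bin x, y coordinates into square cells.
    ij = np.floor((above[:, :2] - above[:, :2].min(axis=0)) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=int)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), 1)
    # Wall extraction: keep cells with many points (tall structures
    # accumulate hits across their whole vertical span).
    return np.argwhere(grid >= min_hits)
```

Because a vertical wall stacks many points over the same XY cell while the (removed) floor spreads points thinly, a simple occupancy threshold already separates the two in this toy setting.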
A Deep Learning-based Wall Structure Object Extraction Method for Automated 3D Building Modeling
유형준, 이경로, 류제호, 이승주, 이종훈 · Korea Multimedia Society, 2023 · Journal of Korea Multimedia Society Vol.26 No.8
To create a digital twin, 3D modeling data that represents the real world is essential. However, such modeling data is currently created manually from photographs or 3D scanning data. To avoid modeling by hand, the information required for 3D modeling must be extracted automatically from 3D scanning data. In this paper, we propose a method that combines deep learning-based 3D semantic segmentation with probability-based extraction of wall structure objects from point clouds. We validate the performance of the proposed method by comparing the wall structure object information extracted from the initial point cloud with the actual 3D model.
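Downstream of the segmentation network, the probability-based extraction step might look like the sketch below. The per-point wall probabilities are assumed to come from some 3D semantic segmentation model (not shown here), and the fixed threshold is a placeholder; the paper's actual extraction procedure is not detailed in the abstract.

```python
import numpy as np

def extract_wall_points(points, wall_probs, threshold=0.5):
    """Keep points that a (hypothetical) segmentation network scores
    as 'wall' with probability >= threshold, and return them together
    with their axis-aligned bounding box for downstream modeling."""
    pts = np.asarray(points, dtype=float)
    keep = np.asarray(wall_probs, dtype=float) >= threshold
    walls = pts[keep]
    bbox = (walls.min(axis=0), walls.max(axis=0))
    return walls, bbox
```

The bounding box is a stand-in for whatever geometric summary (planes, extents) the actual 3D modeling step would consume.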