Intelligent task robot system based on process recipe extraction from product 3D modeling file
Hyonyoung Han, Heechul Bae, Hyunchul Kang, Jiyon Son, Hyun Kim — Institute of Control, Robotics and Systems (ICROS), 2020, Proceedings of the ICROS International Conference, Vol.2020 No.10
This study introduces an intelligent task robot system based on process-recipe extraction from standard 3D model files. In small-quantity batch production and mixed-flow manufacturing, much time is spent on process planning and device control, such as path planning, in a robot system. If these processes could be automated, mixed-flow production of various products would run efficiently. This paper proposes a product registration subsystem with an automated process-recipe extraction module, and a visual-servoing-based intelligent assembly task robot subsystem. The recipe module extracts the list of parts, with each part's size and position, from a standard 3D model file (STEP) and analyzes the structure of the product between parts. The extracted product data are stored in the recipe knowledge base in a recipe format, along with a plan-view image of each part. The robot system consists of a real-time part recognition module, a part scheduling module, and a motion planner module. The part recognition module identifies parts by matching real-time RGB images against the plan-view images in the knowledge base. The part scheduling module plans the sequence of parts for a task using a decision-tree method. The motion planner module controls the assembly task robot according to the process recipe, depending on the task type. The performance of the system was tested with five types of sample products.
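The part recognition step described above — matching a live camera patch against stored plan-view images — can be sketched with plain normalized cross-correlation. This is an illustration of the general matching idea only, not the paper's implementation; the 8×8 templates and part names ("bracket", "plate") are made up for the example.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equal-sized grayscale patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def recognize_part(patch, templates):
    """Return the template name whose plan-view image best matches the patch."""
    scores = {name: ncc(patch, tmpl) for name, tmpl in templates.items()}
    return max(scores, key=scores.get), scores

# Hypothetical 8x8 plan-view templates for two parts
rng = np.random.default_rng(0)
templates = {
    "bracket": rng.random((8, 8)),
    "plate": rng.random((8, 8)),
}

# A camera patch: a lightly corrupted view of the "plate" template
patch = templates["plate"] + 0.05 * rng.standard_normal((8, 8))
best, scores = recognize_part(patch, templates)
print(best)  # the noisy patch correlates far more strongly with "plate"
```

In practice a template matcher would scan the template across the full RGB frame (e.g., OpenCV's `cv2.matchTemplate`) rather than compare fixed-size patches, but the scoring idea is the same.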
Product description Recipe Generation from 3D STEP model for Autonomous Task Planning
Hyonyoung Han, Hyun Kim, Jiyon Son — Institute of Control, Robotics and Systems (ICROS), 2021, Proceedings of the ICROS International Conference, Vol.2021 No.10
This paper introduces an autonomous product-description recipe generator for the knowledge base of a teaching-free robot system. Most of the work, such as defining missions and planning tasks, is the job of experienced human managers; automating these jobs could dramatically reduce the setup time for tasks. The proposed system analyzes the product structure, part specifications, and assembly step descriptions from the STEP model. The system consists of five modules: STEP analyzer, part analyzer, image analyzer, task analyzer, and recipe generator. The STEP analyzer extracts the bill of materials (BOM) from a STEP file and configures the components of each part separately. The part analyzer analyzes the characteristics of each part, such as feature size and position relative to the reference point. The image analyzer generates representative images of each part and analyzes the part's shape type. The task analyzer finds the pairs of parts that make up each joint and determines the assembly sequence of the joints. The recipe generator creates the product recipe with descriptions of parts and tasks, and generates assembly instructions for the product. The recipe could serve as a knowledge base for intelligent robots, as well as information for human managers to define tasks.
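The BOM-extraction step can be illustrated with a toy parser over a STEP (ISO 10303-21) fragment. The fragment below is fabricated for the example, and a real extractor would resolve quantities through `NEXT_ASSEMBLY_USAGE_OCCURRENCE` relationships between product definitions rather than counting duplicate `PRODUCT` entities as this sketch does.

```python
import re
from collections import Counter

# Toy STEP Part 21 fragment (illustrative only; real files declare each
# PRODUCT once and express quantity via assembly-usage occurrences)
SAMPLE_STEP = """\
#10=PRODUCT('BASE','base plate','',(#2));
#11=PRODUCT('PEG','round peg','',(#2));
#12=PRODUCT('PEG','round peg','',(#2));
#13=PRODUCT('COVER','top cover','',(#2));
"""

# Match "#<id>=PRODUCT('<name>'" and capture the part name
PRODUCT_RE = re.compile(r"#\d+\s*=\s*PRODUCT\('([^']*)'")

def extract_bom(step_text: str) -> Counter:
    """Count PRODUCT entity names as a flat bill of materials."""
    return Counter(PRODUCT_RE.findall(step_text))

bom = extract_bom(SAMPLE_STEP)
print(dict(bom))  # part name -> quantity
```

A production system would use a proper STEP library (or a CAD kernel) instead of regular expressions, but the pipeline — parse entities, collect part names and quantities, emit a recipe record — follows the same shape.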
Supervised Hierarchical Bayesian Model-Based Electromyographic Control and Analysis
Hyonyoung Han, Sungho Jo — IEEE, 2014, IEEE Journal of Biomedical and Health Informatics Vol.18 No.4
This work suggests a supervised hierarchical Bayesian model for surface electromyography (sEMG)-based motion classification and its strategy analysis. The proposed model unifies optimal feature extraction and classification through probabilistic inference and learning, by identifying the latent neural states (LNSs) that govern a collection of sEMG signals. In addition, the inference step provides an approach to identify distinct muscle activation strategies according to sEMG patterns based on LNSs. To validate the model, nine-class classification of limb motions using four sEMG sensors is tested. The model performance is evaluated with relatively high and low activation levels, generalized classification across subjects, and online classification. The model, based on LNSs to capture various motions, is assessed with respect to activation levels, individual subjects, and transitions during online classification. Our approach can not only classify sEMG patterns, but also provide an interpretation of sEMG strategic patterns. This work supports the potential of the proposed model for sEMG control-based applications.
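A much simpler probabilistic classifier conveys the flavor of Bayesian sEMG motion classification: fit per-class Gaussian parameters to feature vectors and pick the class with the highest log-likelihood. This is a minimal naive-Bayes-style sketch on synthetic four-channel RMS features; it does not implement the paper's hierarchical model or its latent neural states, and the class names and feature values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 4-channel RMS feature means for three motion classes (illustrative)
means = {"rest": [0.1, 0.1, 0.1, 0.1],
         "flex": [0.6, 0.2, 0.5, 0.1],
         "extend": [0.2, 0.7, 0.1, 0.6]}
train = {c: rng.normal(m, 0.05, size=(50, 4)) for c, m in means.items()}

# Fit per-class Gaussian parameters (diagonal covariance, small floor on variance)
params = {c: (x.mean(axis=0), x.var(axis=0) + 1e-6) for c, x in train.items()}

def log_likelihood(x, mu, var):
    """Diagonal-Gaussian log-likelihood of feature vector x."""
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var))

def classify(x):
    """Maximum-likelihood class for a feature vector."""
    return max(params, key=lambda c: log_likelihood(x, *params[c]))

# A test sample drawn near the "flex" mean
sample = rng.normal(means["flex"], 0.05)
print(classify(sample))
```

The hierarchical model in the paper goes further by learning shared latent states across signals, which is what enables the strategy analysis; this sketch only shows the probabilistic decision rule at the bottom of such a pipeline.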
Hyonyoung Han, Changmok Choi, Yunjoo Lee, Sungdo Ha, Jung Kim — The HCI Society of Korea, 2008, Proceedings of the HCI Society of Korea Conference, Vol.2008 No.2
This paper presents an EMG-based wireless, wearable computer interface. The band-type wearable device contains four-channel EMG sensors and extracts discriminable EMG signals using band-pass and notch filters with signal amplification. The acquired signals are transmitted to a host computer through wireless communication. Wrist movements are used for the interface: signals induced by volitional movements are acquired from four sites, and six classes of wrist movements are discriminated by a multilayer perceptron artificial neural network (ANN). Through this, the user can control mouse cursor movement, perform mouse-button clicks, and enter text via a user interface shown on a visual display, such as an on-screen phone keypad. This interface could help people with limb disabilities access computers and network environments directly, without conventional computer interfaces such as a keyboard and mouse.
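Classifiers like the ANN above typically operate on time-domain features computed over short windows of the filtered EMG signal. The sketch below computes three common features — mean absolute value (MAV), root mean square (RMS), and zero-crossing count — for one channel; the feature set and threshold are standard choices in the sEMG literature, not details taken from this paper, and the sine "burst" stands in for real EMG data.

```python
import numpy as np

def emg_features(window: np.ndarray, zc_threshold: float = 0.01):
    """Time-domain sEMG features for one channel window:
    mean absolute value (MAV), root mean square (RMS), and the number of
    zero crossings whose amplitude step exceeds a small noise threshold."""
    mav = float(np.mean(np.abs(window)))
    rms = float(np.sqrt(np.mean(window ** 2)))
    signs = np.sign(window)
    crossings = np.sum(
        (signs[:-1] * signs[1:] < 0)
        & (np.abs(window[:-1] - window[1:]) > zc_threshold)
    )
    return mav, rms, int(crossings)

# Synthetic one-channel window: 100 samples of a 5-cycle sine "burst"
t = np.linspace(0, 1, 100, endpoint=False)
window = 0.5 * np.sin(2 * np.pi * 5 * t)
mav, rms, zc = emg_features(window)
print(round(mav, 3), round(rms, 3), zc)
```

Stacking these features across the four channels yields the input vector the neural network classifies into one of the six wrist-movement classes.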