박수봉 中央大學校 遺傳工學硏究所 1989 遺傳工學硏究論集 Vol.2 No.1
The development of a system for in vitro maturation of follicular oocytes is necessary to acquire competent oocytes and early embryos in plentiful numbers, and at a reasonable cost, for studies of many new technologies. Bovine follicular oocytes matured in vitro achieved fertilization and subsequently cleaved, but the frequencies of both preimplantation development to the blastocyst stage and live young after transfer were low compared with oocytes matured in vivo. Hence, it has been concluded that deficient cytoplasmic maturation occurs in spontaneously maturing oocytes even though nuclear maturation appears normal. The abnormality of spontaneously matured oocytes may have been the result of a deficiency in the culture system. Alternatively, this problem may be due to unhealthy oocytes recovered from ovaries at random stages of the estrous cycle. This seems likely, since histological studies show that the majority of vesicular follicles on bovine ovaries are in some stage of atresia. These factors indicate that studies on the in vitro maturation of bovine follicular oocytes require improved culture conditions and a definition of the characteristics that might be indicative of a healthy oocyte.
Face Recognition Using Edge-Feature-Based Image Region Segmentation and DCT
박수봉,이인범 東新大學校 1998 論文集 Vol.10 No.-
In this paper, we propose a face recognition algorithm which extracts image characteristics using edges and the DCT (Discrete Cosine Transform). In this algorithm, the extracted data serve as the training vectors of the neural network. Human faces were captured with a fixed CCD camera under the same luminance and at the same distance. The edge characteristics of face images are concentrated around the eyebrows and mouth. Therefore, using these edge characteristics, each face image was segmented into a square region. We defined this area, which contains the eyebrows, eyes, nose, and mouth, as the characteristic region of the face image. After applying the DCT to this square region, we extracted a feature vector. This feature vector was normalized and used as the input vector of the neural network. Simulation results show a 100% recognition rate for 30 face images that were learned, and a 94% recognition rate for face images that were not learned. Also, by restricting DCT processing to the square region, the proposed algorithm reduced operation time by 55% compared with processing the entire face image.
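The DCT feature-extraction step described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the region size, the number of retained coefficients, and the normalization scheme are assumptions chosen for clarity.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(region, n_coeffs=64):
    """Extract a normalized low-frequency DCT feature vector from a
    cropped face region (sizes and coefficient count are illustrative)."""
    coeffs = dctn(region, norm="ortho")       # 2-D type-II DCT of the region
    k = int(np.sqrt(n_coeffs))
    feat = coeffs[:k, :k].flatten()           # keep the low-frequency block
    norm = np.linalg.norm(feat)
    return feat / norm if norm > 0 else feat  # unit-normalize for NN input

region = np.random.rand(64, 64)               # stand-in for a cropped face region
vec = dct_features(region)
print(vec.shape)                              # 64-dimensional feature vector
```

Keeping only the low-frequency DCT coefficients is what makes the reported speed-up plausible: the classifier sees a short vector instead of the full pixel array.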
The Effect of hCG on Pregnancy Rate in Hanwoo Recipient Cows and Its Relation to Plasma Urea Nitrogen Level
박수봉,임석기,우제석,김일화,최선호,이장희,김인철,손동수 韓國受精卵移植學會 2000 한국동물생명공학회지 Vol.15 No.2
This study was undertaken to test the hypothesis that treatment with hCG (5,000 IU) at the time of embryo transfer would enhance pregnancy rates in recipients, and that the concentration of plasma urea nitrogen (PUN) in recipients was related to the effect of hCG on reproductive performance. Blood samples were taken according to the experimental conditions for the assessment of endogenous plasma progesterone and plasma urea nitrogen concentrations. Plasma progesterone concentrations were higher in cows treated with hCG on day 7 (estrus = day 0) than in untreated cows during days 7∼43 after insemination. The pregnancy rates were 65.5 and 54.6% for the hCG-treated and untreated groups, respectively. In the recipient group with PUN concentrations of <12 mg/dl, the pregnancy rates were 68.8 and 46.7% for the hCG-treated and untreated groups, respectively. The results suggest that hCG treatment 7 days after insemination could be used to increase the pregnancy rate after embryo transfer, and that only recipients with PUN concentrations of <12 mg/dl were influenced by hCG treatment.
Colorimetry of Sunglass Lenses According to the CIE X_(10)Y_(10)Z_(10) Colorimetric System
박수봉,마기중 金泉大學 1992 논문집 Vol.11 No.1
Excluding 23 lenses whose luminous transmittance of 82% or higher disqualified them as sunglass lenses under the Korean standard, the dominant wavelength and excitation purity of the remaining 22 lenses were examined according to the CIE X_(10)Y_(10)Z_(10) colorimetric system, with the following results. 1. The dominant wavelengths of lenses tinted with different dyes were: brown, 582-596 nm (CIE A), 575-590 nm (CIE B), 572-587 nm (CIE D_(65)); pink, 592-595 nm (CIE A), 589-592 nm (CIE B), 588-591 nm (CIE D_(65)); blue, 481-485 nm (CIE A), 475-479 nm (CIE B), 474-477 nm (CIE D_(65)); green, 556-564 nm (CIE A), 550-556 nm (CIE B), 547-553 nm (CIE D_(65)). 2. The tinting times at which lenses exceeded the KS limit of 25% excitation purity were, by dye: brown, 30 seconds or more (CIE A & D_(65)) and 1 minute or more (CIE B); pink, 2 minutes or more (CIE A) and 5 minutes or more (CIE D_(65) & B); blue, 10 minutes or more (CIE D_(65)) and 20 minutes or more (CIE B); green, 20 minutes or more (CIE A, D_(65) & B). Meanwhile, the only lenses exceeding the ANSI limit for color shifts under average daylight (D_(65)) were those tinted with brown dye for 5 minutes or more and the pink lens tinted for 20 minutes. The transmittance properties of 45 dyed CR-39 lenses were examined to determine whether these lenses met the KS P-4404 (Lenses for Sunglasses) standard for excitation purity, and the lenses were colorimetered according to the CIE 1964 supplementary standard colorimetric system. Approximately 50% of the sampled dyed lenses failed to meet the KS requirement for luminous transmittance. Of the 22 lenses that passed the KS requirement for luminous transmittance, only 11 met the KS requirement for excitation purity.
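Excitation purity, the quantity tested against the KS 25% limit above, is the ratio of two distances in the chromaticity diagram: from the white point to the sample, and from the white point to the spectral locus at the dominant wavelength. A minimal sketch of that calculation follows; the chromaticity values are hypothetical illustrations, not measurements from the paper.

```python
import numpy as np

def excitation_purity(sample, white, locus):
    """Ratio of white-to-sample distance over white-to-locus distance,
    where `locus` is the spectral-locus point at the dominant wavelength."""
    sample, white, locus = (np.asarray(p, dtype=float) for p in (sample, white, locus))
    return float(np.linalg.norm(sample - white) / np.linalg.norm(locus - white))

# Illustrative chromaticity coordinates (assumed, not from the paper):
white = (0.3138, 0.3310)   # approximate D65 10-degree white point
sample = (0.40, 0.38)      # hypothetical tinted-lens chromaticity
locus = (0.50, 0.44)       # hypothetical locus point at the dominant wavelength
p = excitation_purity(sample, white, locus)
print(f"excitation purity ≈ {p * 100:.1f}%")
```

A lens fails the KS limit in the abstract's terms when this ratio exceeds 0.25 under the relevant illuminant.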
박수봉 東新大學校 1999 論文集 Vol.11 No.-
In this paper, we propose an image search algorithm using back-error propagation learning based on WMRV. We capture 256×256 images with 256 gray levels from input images having a monotonous background, under the same illumination and at the same distance. After removing noise in a pre-processing step and performing edge detection, we detect a characteristic region, from which we extract characteristic vectors using the DCT (Discrete Cosine Transform). In the case of human face images, edge lineaments are distributed largely around the eyebrows and eyes. Using this feature, we detect a square area including the eyebrows, eyes, nose, and mouth, which contains most of the facial information. We then use this square area as input data for a multilayer neural network, so that it recognizes a human face after the learning process. By restricting DCT processing to the extracted square area, the calculation time was reduced by 49% compared with processing the entire facial image. The simulation results show a marked improvement in convergence speed. In simulations with 30 persons and 10 images per person (300 images in total), we achieved a 100% recognition rate, and a 94% recognition rate for 50 persons who were not learned by the multilayer neural network.