RISS (Research Information Sharing Service)

      • KCI-indexed

        〈Rapid Communication〉 Corner Point Detection via Connectivity-Based Weight Accumulation in Image Space: Application to 2D Barcode Detection

        金廷泰(Jeongtae Kim),宋振永(Jinyoung Song) 대한전기학회 2007 전기학회논문지 Vol.56 No.10

        We propose a novel corner detection algorithm for locating a 2D Data Matrix barcode in an image. The proposed method accumulates a weight for each cross point defined by every combination of edge points in the image, and detects the corner point of the barcode L-pattern by determining the location with the highest accumulated weight. By designing the weight to consider the connectivity of the two lines around the cross point, we were able to detect the corner of the L-pattern even in cases where the lines of the L-pattern are short. In the experiments, the proposed method showed improved performance compared with the conventional Hough transform based method in terms of detectability and computation time.
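        The weight-accumulation idea can be illustrated with a short sketch. This is a hypothetical simplification, not the authors' code: it assumes a boolean edge map, a list of edge points, and their local edge (tangent) directions are already available, intersects the lines through every pair of edge points, and votes for each intersection with a weight based on how densely edge pixels connect the intersection to the two points.

```python
# A hypothetical simplification of connectivity-weighted corner voting (not the
# authors' code). Assumes: edge_map is a boolean (H, W) array, edge_points is a
# list of (x, y) edge pixels, and edge_dirs holds their local edge (tangent)
# directions as (dx, dy) vectors.
import numpy as np

def line_intersection(p, dp, q, dq):
    """Intersect the lines p + t*dp and q + s*dq; return None if near-parallel."""
    a = np.array([[dp[0], -dq[0]], [dp[1], -dq[1]]], dtype=float)
    if abs(np.linalg.det(a)) < 1e-6:
        return None
    t, _ = np.linalg.solve(a, np.asarray(q, float) - np.asarray(p, float))
    return np.asarray(p, float) + t * np.asarray(dp, float)

def connectivity(edge_map, a, b, samples=20):
    """Fraction of sample points on the segment a-b that fall on edge pixels."""
    h, w = edge_map.shape
    ts = np.linspace(0.0, 1.0, samples)[:, None]
    pts = np.round(a[None, :] * (1 - ts) + b[None, :] * ts).astype(int)
    inside = (pts[:, 0] >= 0) & (pts[:, 0] < w) & (pts[:, 1] >= 0) & (pts[:, 1] < h)
    if not inside.all():
        return 0.0
    return float(edge_map[pts[:, 1], pts[:, 0]].mean())

def detect_corner(edge_map, edge_dirs, edge_points):
    """Vote for every pairwise line intersection; return the (row, col) argmax."""
    h, w = edge_map.shape
    acc = np.zeros((h, w))
    for i in range(len(edge_points)):
        for j in range(i + 1, len(edge_points)):
            c = line_intersection(edge_points[i], edge_dirs[i],
                                  edge_points[j], edge_dirs[j])
            if c is None:
                continue
            x, y = int(round(c[0])), int(round(c[1]))
            if 0 <= x < w and 0 <= y < h:
                # The weight favours intersections joined to both edge points by
                # well-connected runs of edge pixels -- the two legs of an L.
                acc[y, x] += (connectivity(edge_map, c, np.asarray(edge_points[i], float))
                              * connectivity(edge_map, c, np.asarray(edge_points[j], float)))
    return np.unravel_index(np.argmax(acc), acc.shape)
```

        Because both legs of an L-pattern are continuous edge runs, their true corner receives the largest product of connectivity scores even when the legs are short.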

      • KCI-indexed

        The Phonology of 'X]vst + eoyo (어요)' in the Chungnam Dialect

        김정태(Kim Jeongtae) 한국언어문학회 2010 한국언어문학 Vol.75 No.-

        This study explores the phonology of 'X]vst + eoyo (어요)' in the Chungnam dialect. The dialects of four regions, Gongju, Cheonan, Boryeong, and Taean, are realized with the long ending [-yu], which is representative of the Chungnam dialect; this is caused by the mechanism of vowel raising 'o → u'. In addition, the 'X]vst + eoyo (어요)' forms differ depending on whether the stem-final sound is a vowel or a consonant, and it is confirmed that there are two types of Chungnam dialect according to region. Thus, the purpose of this study is to discover the phonological mechanisms that operate in the realization of these dialect forms. First, among the 'X]vst + eoyo (어요)' forms, '[-u]' represents the Chungnam dialect. Second, this Chungnam dialect can be broadly categorized into two types based on whether the stem-final sound is a vowel or a consonant. Third, within those two categories there are two regional dialects: the inland dialect of the Gongju·Cheonan region and the west-coast dialect of the Boryeong·Taean region. Fourth, in the case of the integrated type of a stem-final vowel + '-eoyo', a variety of phenomena are observed in the two regions. One is that '-eo' deletion occurs in common after stem-final vowels such as 'a, eo, ae, e, oe'. The other is that gliding is realized when the stem-final vowels 'o/u' and 'i' are integrated with '-eoyo' in the inland (Gongju·Cheonan) dialect, which is typical of the central dialect. However, replacement and deletion are realized due to the expansion of the vowel-raising rule 'eo → eu' when '-eoyo' is integrated with a stem-final vowel under the gliding condition; the irregular conjugations of 's (ㅅ)' and 'b (ㅂ)' show this realization as well. Another example is the stem-final vowel 'eu': the Gongju·Cheonan realization shows variation of '-eo' between the two stem-final vowels, as in 'kkeu + eoyo → kkeoyu', like the central dialect, whereas the Boryeong·Taean dialect shows variation of '-eo' as in 'kkeu + eoyo → kkeuu'. This tendency in the realization of the stem-final vowel + '-eoyo' integration type is about the same in the integration of '-eoya'. Finally, various phenomena are observed in the two regions in the case of the stem-final consonant + '-eoyo' integration type. One is that the Chungnam dialect of stem-final consonant + '-eoyo' is commonly realized as '[-u]', showing regional differences depending on the type of consonant. Another is that when a stem-final consonant is integrated with '-eoyo' in the inland area (Gongju·Cheonan), it is prolonged except for the final consonant 'ss', which is common in the central dialect. Additionally, when stem-final bilabials are integrated with '-eoyo' in the western region (Boryeong, Taean), they undergo replacement due to the expansion of the vowel-raising rule 'eo → eu' and even vowel rounding 'eu → u'. Also, in the case of stem-final alveolar, palatal, and velar consonants, the variation 'eo → eu', optional vowel fronting 'eu → i', and deletion of identical sounds are realized. Yet, in the case of the stem-final alveolar fricative 'ss', distinctive dialect forms are observed in which the changes 'eo → eu' and 'eu → i' and the deletion of identical sounds apply throughout the Chungnam area. In addition, the stem-final consonant 'h (ㅎ)' is classified with the stem-final-vowel forms, as it is deleted in the process of integration with '-eoyo'.

      • KCI-indexed

        A Line Detection Method Using the Hough Transform Considering Gradient Direction

        金廷泰(Jeongtae Kim) 대한전기학회 2007 전기학회논문지 Vol.56 No.1

        We propose a novel line detection method based on the estimated probability density function of the gradient directions of edges. By estimating peaks of the density function, we determine groups of edges that have the same gradient direction. For the edges in each group, we detect lines that correspond to peaks of the connectivity-weighted distribution of the distances from the origin. In experiments using Data Matrix barcode images and LCD images, the proposed method showed better performance than conventional methods in terms of processing speed and accuracy.
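        A rough sketch of this two-step grouping follows. It is my own simplification, not the paper's implementation: Sobel gradients and histogram peaks stand in for the density estimation, and the paper's connectivity weighting of the rho distribution is omitted.

```python
# A simplified sketch (my own, not the paper's implementation): edges are grouped
# by peaks of the gradient-direction histogram, then lines are read off as peaks
# of the rho distribution within each group. Thresholds are arbitrary.
import numpy as np
from scipy import ndimage
from scipy.signal import find_peaks

def detect_lines(image, edge_thresh=50.0, n_theta_bins=180, n_rho_bins=400):
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > edge_thresh)            # edge pixels
    theta = np.arctan2(gy[ys, xs], gx[ys, xs])        # gradient direction per edge

    # 1) Peaks of the gradient-direction histogram define the edge groups.
    hist, bins = np.histogram(theta, bins=n_theta_bins, range=(-np.pi, np.pi))
    peaks, _ = find_peaks(hist, height=0.05 * len(theta))

    lines, rho_max = [], np.hypot(*image.shape)
    width = bins[1] - bins[0]
    for p in peaks:
        t = 0.5 * (bins[p] + bins[p + 1])             # group direction = line normal
        in_group = np.abs(np.angle(np.exp(1j * (theta - t)))) < 2 * width
        # 2) Within a group, collinear edges share rho = x*cos(t) + y*sin(t).
        rho = xs[in_group] * np.cos(t) + ys[in_group] * np.sin(t)
        rho_hist, rho_bins = np.histogram(rho, bins=n_rho_bins, range=(-rho_max, rho_max))
        rho_peaks, _ = find_peaks(rho_hist, height=0.2 * rho_hist.max())
        lines += [(t, 0.5 * (rho_bins[r] + rho_bins[r + 1])) for r in rho_peaks]
    return lines                                      # (theta, rho) pairs
```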

      • KCI-indexed

        A Two-Stage Learning Method Using CNN and K-means RGB Clustering for Image Sentiment Classification

        김정태(Jeongtae Kim),박은비(Eunbi Park),한기웅(Kiwoong Han),이정현(Junghyun Lee),이홍주(Hong Joo Lee) 한국지능정보시스템학회 2021 지능정보연구 Vol.27 No.3

        The biggest reason for using deep learning models in image classification is that they can extract local features from the overall information of an image and consider the relationships among them. However, a CNN model may not be suitable for emotion image data that lack such local features. To address this difficulty in emotion image classification, many researchers propose CNN-based architectures suited to emotion images every year. Studies on the relationship between color and human emotion have also been conducted, with results showing that different colors induce different emotions. In deep learning research as well, color information has been applied to image sentiment classification, and using an image's color information in addition to the image itself achieved higher sentiment classification accuracy than training the classifier on the image alone. This study proposes a method for improving image sentiment classification accuracy by using color, which accounts for a large part of how people judge the emotion of an image. We implemented a two-stage learning method that applies K-means clustering to the RGB values of an image to extract its representative colors, transforms the probability of each color occurring in each sentiment class into a weighting expression, and applies it to the final layer of the CNN model. For image data we used Emotion6, which is classified into six emotions, and Artphoto, which is classified into eight. The CNN models used for training were Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19, and performance was evaluated with 5-fold cross-validation, comparing results before and after applying the two-stage learning method. Using the information extracted from color attributes together with the CNN architecture yielded better classification accuracy than using the CNN architecture alone.

        The biggest reason for using a deep learning model in image classification is that it is possible to consider the relationships between regions by extracting each region's features from the overall information of the image. However, the CNN model may not be suitable for emotion image data without such regional features. To solve the difficulty of classifying emotion images, many researchers propose CNN-based architectures suitable for emotion images each year. Studies on the relationship between color and human emotion have also been conducted, and the results show that different colors induce different emotions. In studies using deep learning, color information has been applied to image sentiment classification, and using an image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training the classification model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion; both improve accuracy by modifying the result value based on statistics over the colors of the picture. The two-color combinations most widely distributed over all training data were found, and during testing the two-color combination most distributed in each test image was found; the result values were then corrected according to the color-combination distribution. This method weights the result value obtained after the model classifies an image's emotion by constructing an expression based on the log and exponential functions. Emotion6, classified into six emotions, and Artphoto, classified into eight categories, were used as the image data. Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used for the CNN models, and performance was compared before and after applying the two-stage learning. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when building a model that classifies an image's sentiment. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black, each of which carries its own meaning. Using scikit-learn's clustering, the seven colors primarily distributed in an image are identified, and their RGB coordinates are compared with the RGB coordinates of the 16 reference colors above; that is, each is converted to the closest reference color.

        If three or more color combinations are selected, too many combinations occur and the distribution becomes scattered, so the combinations have less influence on the result value. To solve this problem, two-color combinations were found and used to weight the model. Before training, the most widely distributed color combinations were found for all training-data images, and the distribution of color combinations for each class was stored in a Python dictionary for use during testing. During testing, the two-color combination most distributed in each test image is found; we then check how that combination is distributed in the training data and correct the result accordingly. We devised several equations to weight the result value from the model based on the extracted colors as described above. The dataset was randomly split 80:20, and the model was verified using 20% of the data as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, and the model was trained five times using different validation sets. Finally, performance was checked on the previously separated test set. Adam was used as the optimizer, and the learning rate
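        The two-stage idea of combining a color prior with the CNN output can be sketched as follows. This is a loose, hypothetical illustration: the paper applies the color-probability weights at the CNN's final layer, whereas here, for brevity, a K-means-derived color prior is simply blended with the softmax output at prediction time; the function names and the blending weight alpha are illustrative, not from the paper.

```python
# A loose, hypothetical sketch of the two-stage idea (not the paper's code):
# a K-means color prior is blended with the CNN's softmax output at prediction
# time, whereas the paper applies the color weights at the CNN's final layer.
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(image_rgb, k=7):
    """Cluster the pixels of an (H, W, 3) image; return k RGB cluster centres."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    return km.cluster_centers_                                 # shape (k, 3)

def color_class_weights(centres, class_color_counts, palette):
    """class_color_counts: (n_classes, n_palette) counts of dominant palette
    colors per class, gathered from the training data; palette: (n_palette, 3)."""
    # Map each K-means centre to its nearest reference palette color.
    idx = np.argmin(((centres[:, None, :] - palette[None, :, :]) ** 2).sum(-1), axis=1)
    counts = class_color_counts[:, idx].sum(axis=1) + 1.0      # Laplace smoothing
    return counts / counts.sum()                               # color-based class prior

def two_stage_predict(cnn_probs, image_rgb, class_color_counts, palette, alpha=0.3):
    """Blend the CNN softmax output with the color prior (alpha is a guess)."""
    prior = color_class_weights(dominant_colors(image_rgb),
                                class_color_counts, palette)
    return int(np.argmax((1 - alpha) * cnn_probs + alpha * prior))
```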

      • KCI-indexed

        The Diachronic Change of '바위 (岩, rock)' and the Characteristics of Its Dialect Distribution

        김정태(Kim Jeongtae) 한국언어문학회 2009 한국언어문학 Vol.70 No.-

        This study examines the diachronic changes of paü (rock) and the distribution of its dialect forms. One underlying question is whether results from the study of the changes and characteristics of a single word, paü, can be generalized into a solid conclusion; to secure universality and generalization, common characteristics should also be found in other words. Nevertheless, archaic forms found in old books are very helpful for assuming the origin of paü and provide clues for tracing the procedure of its diachronic changes. Since paü appeared very frequently in Eonhae (諺解) literature, dialects, and geographical names after the creation of Hunminjeongeum (The Correct Sounds for the Instruction of the People), its diachronic changes and distributional characteristics can be reviewed with reference to those materials. In the old native song Heonhwaga, '…岩乎' was read as paho, and considering the phonological history of /k/, the original forms of 'rock', pako and pakvy, were reconstructed. The phonologically developed form pakvy is found in old geographical-name references such as the Samguksagi Jiriji. Pakvy meant 'rock' or 'hill', so it is clear the word was used polysemously. With meaning differentiation, the 'hill' sense developed into kokɛ (< kokay), and pakvy itself came to be used only for 'rock'. This form changed to pahoy around the period when Hunminjeongeum was created, and to pau in the modern period. Phonological phenomena such as /k/ weakening, /k/ deletion, and vowel raising applied in this historical process. As dialect forms of pau, various realizations such as paü, pauy, pagu, pau, pao, paŋgu, paŋu, phagu, phaŋgu, phaŋu, and pai occur. Each dialect form shows distributional characteristics involving monophthongization (the conversion of diphthongs to monophthongs), aspiration of the initial consonant, the addition of /ŋ/, and so on. For example, the monophthongization uy > u was realized across the country except in the central dialect, and forms retaining an archaic /k/ appeared in the southwestern, southeastern, and northeastern dialects; furthermore, the southeastern and northeastern dialects were characterized by /ŋ/ addition. Through this series of phonological phenomena, such as vowel raising, monophthongization, and /k/ weakening and deletion, a comparative study between central and non-central dialects can be performed and generalized conclusions can be drawn. In conclusion, the diachronic changes of paü (rock) and the distributional characteristics of its dialect forms involve Korean phonological history and phenomena such as /k/ weakening and deletion, the o > u vowel raising, monophthongization, aspiration, and /ŋ/ addition. This study demonstrates a characteristic dialect distribution, and its results can also be applied to classifying the existing dialect areas.

      • KCI-indexed

        Restoration of Binary Images Using Iterative Semi-blind Wiener Filtering

        金廷泰(Jeongtae Kim) 대한전기학회 2008 전기학회논문지 Vol.57 No.7

        We present a novel deblurring algorithm for bi-level images blurred by a parameterizable point spread function. The proposed method iteratively searches for the unknown parameters of the point spread function and the noise-to-signal ratio by minimizing an objective function based on the binariness of the restored image and the difference between its two intensity values. In simulations and experiments, the proposed method showed improved performance compared with the Wiener filtering based method in terms of bit error rate after segmentation.
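        A minimal sketch of the semi-blind search, assuming a Gaussian PSF parameterized by sigma (the paper only requires some parameterizable PSF), with a grid search standing in for the iterative search and a home-made binariness cost standing in for the paper's objective:

```python
# A minimal sketch of the semi-blind search (not the authors' implementation):
# a Gaussian PSF parameterised by sigma stands in for the generic parameterizable
# PSF, a grid search replaces the iterative search, and binariness_cost is my own
# stand-in for the paper's binariness / level-difference objective.
import numpy as np

def gaussian_psf(shape, sigma):
    h, w = shape
    y, x = np.mgrid[:h, :w]
    y = np.minimum(y, h - y)                 # wrap-around so the PSF is centred at (0, 0)
    x = np.minimum(x, w - x)
    psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_restore(blurred, psf, nsr):
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + nsr)))

def binariness_cost(img):
    """Low when pixels cluster tightly around two well-separated levels."""
    lo, hi = np.percentile(img, [10, 90])    # rough estimates of the two levels
    spread = np.minimum(np.abs(img - lo), np.abs(img - hi)).mean()
    return spread / (abs(hi - lo) + 1e-9)

def semi_blind_deblur(blurred, sigmas=np.linspace(0.5, 3.0, 11),
                      nsrs=np.logspace(-4, -1, 7)):
    best = None
    for sigma in sigmas:
        psf = gaussian_psf(blurred.shape, sigma)
        for nsr in nsrs:
            restored = wiener_restore(blurred, psf, nsr)
            cost = binariness_cost(restored)
            if best is None or cost < best[0]:
                best = (cost, restored, sigma, nsr)
    return best[1:]                          # restored image, sigma and NSR estimates
```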

      • KCI-indexed

        Restoration of Bi-level Waveforms via Edge Location Estimation

        金廷泰(Jeongtae Kim) 대한전기학회 2006 전기학회논문지 D Vol.55 No.7

        We propose a restoration method for bi-level waveforms whose number of edges is known. Based on this information, we parameterize a bi-level waveform by the locations of its edges and restore the waveform by estimating those parameters. The locations are estimated by maximizing the correlation coefficient between the bi-level waveform and the measured waveform. In experiments using two-dimensional barcode images of the PDF417 specification, the proposed method showed better performance than conventional methods in the sense that it was able to decode barcode images that the conventional methods could not.
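        A small sketch of the edge-location parameterization, with a simple coordinate-wise integer search standing in for whatever optimizer the paper uses to maximize the correlation coefficient:

```python
# A small sketch (not the paper's optimizer): a bi-level waveform is parameterized
# by its edge positions, and a coordinate-wise integer search refines them by
# maximizing the correlation coefficient with the measured waveform.
import numpy as np

def bilevel_from_edges(edges, length, low=0.0, high=1.0):
    """Waveform that starts low and toggles its level at each edge index."""
    w = np.full(length, low)
    level_high = True
    stops = list(edges[1:]) + [length]
    for start, stop in zip(edges, stops):
        w[start:stop] = high if level_high else low
        level_high = not level_high
    return w

def correlation(a, b):
    return np.corrcoef(a, b)[0, 1]

def estimate_edges(measured, init_edges, search=3, iters=10):
    """Refine strictly increasing integer edge positions within a +/- search window."""
    edges, n = list(init_edges), len(measured)
    for _ in range(iters):
        for i in range(len(edges)):
            lo = edges[i - 1] + 1 if i > 0 else 1
            hi = edges[i + 1] - 1 if i + 1 < len(edges) else n - 1
            candidates = [e for e in range(edges[i] - search, edges[i] + search + 1)
                          if lo <= e <= hi]
            edges[i] = max(candidates, key=lambda e: correlation(
                bilevel_from_edges(edges[:i] + [e] + edges[i + 1:], n), measured))
    return edges
```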
