尹賢植 忠州大學校 1993 한국교통대학교 논문집 Vol.28 No.-
This paper presents a design methodology for a neuron processor using the Residue Number System (RNS). Since RNS multiplication involves no carry propagation, a neuron processor designed with the RNS methodology can perform neural network operations at very high speed. We also present another high-speed technique: a look-up table that stores the values of the sigmoid function's activation range sampled at 2ⁿ points. Because the processing steps of a digital neural network can be represented as recursive matrix-vector operations, the network is well suited to an array-processor design. This paper presents a neuron processor with a systolic array architecture for high-speed neural network implementation. The proposed method is expected to be adopted in neural network applications because it can be realized with currently available VLSI technology.
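The carry-free property of RNS arithmetic described above can be sketched as follows; the moduli (7, 11, 13) and all function names are illustrative choices, not the paper's actual parameters:

```python
# Sketch of Residue Number System (RNS) arithmetic: a number is held as
# its residues modulo a set of pairwise-coprime moduli, so multiplication
# proceeds independently in each channel with no carry propagation
# between channels.

MODULI = (7, 11, 13)  # pairwise coprime; dynamic range = 7*11*13 = 1001

def to_rns(x):
    """Encode an integer as a tuple of residues."""
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    """Channel-wise multiplication -- no carries cross channels."""
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    """Decode via the Chinese Remainder Theorem."""
    M = 1
    for m in MODULI:
        M *= m
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)  # modular inverse (Python 3.8+)
    return x % M

a, b = 23, 31
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == a * b  # 713 < 1001
```

Because every channel is a small independent modulus, a hardware multiplier per channel runs in parallel at the speed of a single small multiplication, which is the source of the speed-up claimed above.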
尹賢植 忠州大學校 1987 한국교통대학교 논문집 Vol.21 No.-
As a processing method in pattern recognition, this paper presents a technique for recognizing shapes expressed as curved lines in the frequency domain using Fourier descriptors. We also introduce an algorithm that can be applied to other recognition methods.
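Fourier descriptors of a closed boundary can be sketched as below; the naive DFT, the toy square boundary, and the normalization scheme are illustrative assumptions, not the paper's exact formulation:

```python
import cmath

def fourier_descriptors(boundary):
    """Naive DFT of a closed boundary given as complex points z_k = x_k + j*y_k."""
    N = len(boundary)
    z = [complex(x, y) for x, y in boundary]
    return [sum(z[k] * cmath.exp(-2j * cmath.pi * u * k / N)
                for k in range(N)) / N
            for u in range(N)]

def invariant_signature(boundary):
    """Magnitudes of the non-DC coefficients, normalized by |C1|:
    dropping C0 removes translation, taking magnitudes removes rotation
    and starting point, and dividing by |C1| removes scale."""
    c = fourier_descriptors(boundary)
    ref = abs(c[1])
    return [abs(ci) / ref for ci in c[2:]]

# A toy square boundary and a translated copy give the same signature.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
shifted = [(x + 5, y - 3) for x, y in square]
sig1, sig2 = invariant_signature(square), invariant_signature(shifted)
assert all(abs(a - b) < 1e-9 for a, b in zip(sig1, sig2))
```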
A Study on Techniques for Writing Cross Assemblers for Small Computers
윤현식 忠州大學校 1985 한국교통대학교 논문집 Vol.18 No.2
Since most minicomputer systems and almost all timesharing systems have BASIC interpreters or compilers available, BASIC seems to be the most reasonable language choice for writing a cross assembler. This paper presents a method for writing cross assemblers that is modular and can be used for many different microprocessors with little modification.
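The retargetable, table-driven approach can be sketched in Python (standing in for BASIC); the toy Intel 8080 opcode subset and the two-pass structure are illustrative assumptions, not the paper's actual design:

```python
# Sketch of a table-driven two-pass cross assembler: the target
# microprocessor is described purely by an opcode table, so retargeting
# means swapping the table, not rewriting the assembler.

OPCODES_8080 = {            # assumed toy subset of an Intel 8080 table
    "NOP": (0x00, 0),       # mnemonic -> (opcode byte, operand byte count)
    "MVI_A": (0x3E, 1),
    "JMP": (0xC3, 2),
}

def assemble(lines, table, origin=0):
    # Pass 1: assign addresses to labels.
    symtab, addr, parsed = {}, origin, []
    for line in lines:
        line = line.split(";")[0].strip()   # strip comments
        if not line:
            continue
        if line.endswith(":"):              # label definition
            symtab[line[:-1]] = addr
            continue
        mnem, *ops = line.split()
        parsed.append((mnem, ops))
        addr += 1 + table[mnem][1]
    # Pass 2: emit machine code, resolving label operands.
    code = []
    for mnem, ops in parsed:
        opcode, nbytes = table[mnem]
        code.append(opcode)
        for op in ops:
            val = symtab[op] if op in symtab else int(op, 0)
            code += [val & 0xFF, (val >> 8) & 0xFF][:nbytes]  # little-endian
    return code

prog = ["start:", "MVI_A 0x41", "JMP start"]
assert assemble(prog, OPCODES_8080) == [0x3E, 0x41, 0xC3, 0x00, 0x00]
```

Supporting a different microprocessor only requires supplying a new opcode table, which is the modularity the abstract refers to.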
윤현식 忠州大學校 1986 한국교통대학교 논문집 Vol.20 No.1
This paper presents a data flow architecture with a paged memory system that holds both data flow programs and data structures. The token labeling mechanism is coupled with the memory management system in order to provide each token with a unique memory location. The instruction format allows instructions with multiple operands and multiple destinations for each result. Data structures are held in memory while pointers to the structures circulate as tokens.
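One common way to pair labeled operand tokens in a dataflow machine is a matching store, sketched below; this is a generic illustration of token matching, not necessarily the labeling-plus-paging mechanism the paper describes:

```python
# Sketch of token matching in a tagged-token dataflow machine: each
# token carries a tag (instruction id, activation label). A token waits
# in the matching store until its partner with the same tag arrives,
# and only then does the instruction fire with both operands.

class MatchingStore:
    def __init__(self):
        self.waiting = {}   # tag -> first-arrived operand value

    def arrive(self, tag, value):
        """Return (op_a, op_b) when the operand pair is complete, else None."""
        if tag in self.waiting:
            return (self.waiting.pop(tag), value)
        self.waiting[tag] = value
        return None

store = MatchingStore()
assert store.arrive(("mul_3", 0), 6) is None      # first operand waits
assert store.arrive(("mul_3", 0), 7) == (6, 7)    # partner arrives -> fire
```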
Design of a Back-Propagation Neural Network Using a Systolic Array Processor
尹賢植 忠州大學校 1996 한국교통대학교 논문집 Vol.31 No.2
The algorithms in neural network models are simple inner- and outer-product computations, not complex matrix inversions. The major operations in the processing elements are the multiplications and additions of matrix-vector products. Implementing such operations with binary number systems, such as conventional fixed- and floating-point notations, introduces complex hardware circuits and inefficient, time-consuming operations due to the significant imbalance in speed between multiplication and addition. Instead, RNS is proposed in this thesis for the basic operations of the processing elements. Considering that the input values of neural networks have small ranges, RNS proves to be a solution for simple, high-speed implementation of the basic operations. In RNS, the operations in each modulus channel are independent, which allows additions and multiplications to run at the same speed. The parallelism and the recursive structure of matrix-vector operations in neural network processing also lead to the design of a systolic array architecture. In this thesis, a 1-D systolic array architecture is proposed to allow a scalable number of nodes. Overflow may occur when the number of nodes exceeds the RNS range, which limits scalability. To design scalable architectures, the number of moduli must be increased; this, however, increases the size of the sigmoid table and decreases the speed. The recursive application of the 1-D systolic array offers a good solution to these problems. In this paper, an RNS-based systolic architecture is designed for the BP network. A special technique is proposed: a ROM look-up table method for fast computation of sigmoid functions. The ROM table contains 2ⁿ sampled data points in the activation range of the sigmoid function.
The simulation shows that a small ROM is sufficient to implement the sigmoid function, without requiring complex floating-point arithmetic operations. With 256 nodes, the expected processing time is 5.12 microseconds when a 20-nanosecond ROM is used for the multiplier with accumulator.
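The ROM look-up idea can be sketched as follows; the 256-entry table size and the [-8, 8] activation range are illustrative assumptions, not the paper's measured parameters:

```python
import math

# Sketch of the ROM look-up method for the sigmoid: the activation
# range is sampled at 2**n points and stored; at run time the
# activation is quantized to a ROM address and the stored value is
# read back, avoiding floating-point exponentiation in hardware.

N_BITS = 8                      # 2**8 = 256-entry ROM (assumed size)
X_MIN, X_MAX = -8.0, 8.0        # assumed active range of the sigmoid
SIZE = 2 ** N_BITS

ROM = [1.0 / (1.0 + math.exp(-(X_MIN + (X_MAX - X_MIN) * i / (SIZE - 1))))
       for i in range(SIZE)]

def sigmoid_lookup(x):
    """Quantize x to a ROM address and return the stored sample."""
    x = min(max(x, X_MIN), X_MAX)                       # clamp to table range
    addr = round((x - X_MIN) / (X_MAX - X_MIN) * (SIZE - 1))
    return ROM[addr]

# With 256 samples over [-8, 8] the quantization step is ~0.063, so the
# table tracks the true sigmoid to well under one percent.
assert abs(sigmoid_lookup(0.0) - 0.5) < 0.02
assert sigmoid_lookup(8.0) > 0.99 and sigmoid_lookup(-8.0) < 0.01
```

Note also that the quoted timing is consistent: 256 nodes at one 20 ns ROM access each gives 256 × 20 ns = 5.12 µs.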
尹賢植 충주대학교 1992 한국교통대학교 논문집 Vol.26 No.-
In this paper, we present an array processor for the implementation of digital neural networks. The back-propagation model can be formulated as consecutive matrix-vector multiplications with a prespecified thresholding operation. This structure is well suited to an array-processor design because the operation procedure can be executed recursively and repeatedly. A systolic array architecture using the Residue Number System is suggested to realize efficient arithmetic circuits for matrix-vector multiplication and sigmoid computation. The proposed design method is expected to be adopted in neural network applications because it can be realized with currently available VLSI technology.
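The matrix-vector step can be simulated cycle by cycle on a 1-D systolic array as sketched below; the skewed schedule and the tiny example matrix are illustrative assumptions:

```python
# Cycle-level sketch of a 1-D systolic array computing y = W @ x:
# each processing element (PE) holds one row's accumulator and performs
# one multiply-accumulate per cycle as the x values stream past it,
# one position later per PE (a skewed schedule).

def systolic_matvec(W, x):
    n_rows, n_cols = len(W), len(x)
    acc = [0] * n_rows                      # one accumulator per PE
    for t in range(n_rows + n_cols - 1):    # total pipeline cycles
        for pe in range(n_rows):
            k = t - pe                      # PE i sees x[k] at cycle t = i + k
            if 0 <= k < n_cols:
                acc[pe] += W[pe][k] * x[k]
    return acc

W = [[1, 2], [3, 4], [5, 6]]
x = [10, 1]
assert systolic_matvec(W, x) == [12, 34, 56]
```

The full product finishes in n_rows + n_cols - 1 cycles rather than n_rows × n_cols sequential steps, which is the pipelining benefit of the array.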
尹賢植 충주대학교 1990 한국교통대학교 논문집 Vol.24 No.-
Digital facsimile is an increasingly important component of office automation. Techniques for the compression of bi-level scanned image data are vital to the efficient storage, retrieval, and transmission of office documents. A scheme for the compression of typewritten and printed documents is described. It incorporates a new pattern-matching algorithm which can handle a variety of styles and sizes of text more efficiently than existing methods.
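The pattern-matching idea can be sketched as follows; the Hamming-distance matcher, the threshold, and the toy 4×4 bitmaps are illustrative assumptions rather than the paper's actual algorithm:

```python
# Sketch of pattern-matching compression for scanned text: each
# extracted character bitmap is compared against a library of
# previously seen patterns; a match is coded as a short library index,
# otherwise the full bitmap is stored and added to the library.

def hamming(a, b):
    return sum(p != q for p, q in zip(a, b))

def compress(symbols, threshold=1):
    library, stream = [], []
    for bitmap in symbols:
        for idx, proto in enumerate(library):
            if hamming(bitmap, proto) <= threshold:   # tolerant match
                stream.append(("ref", idx))           # cheap: index only
                break
        else:
            stream.append(("new", bitmap))            # costly: full bitmap
            library.append(bitmap)
    return stream

A  = (0,1,1,0, 1,0,0,1, 1,1,1,1, 1,0,0,1)   # toy 4x4 glyph bitmap
A2 = (0,1,1,0, 1,0,0,1, 1,1,1,1, 1,0,1,1)   # same glyph, one noisy pixel
B  = (1,1,1,0, 1,0,0,1, 1,1,1,0, 1,0,0,1)   # a different glyph

stream = compress([A, B, A2])
assert [kind for kind, _ in stream] == ["new", "new", "ref"]
```

Repeated characters in a typewritten page then cost only an index and a position, which is where the bulk of the compression comes from.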