We construct an output-layer-recurrent multi-layered neural network, a four-layer multi-layered perceptron in which the output of the output layer is fed back to the lower hidden layer, and compare its spoken digit recognition performance as we vary the number of neurons in the upper and lower hidden layers, the predictive order, the learning rate, and the self-recurrent coefficient of the state layer.
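A minimal sketch of this kind of output-feedback architecture is shown below. The layer sizes, weight names, and the value of the self-recurrent coefficient are illustrative assumptions, not the paper's actual configuration; the sketch only shows how the output-layer activations and a self-recurrent state can be fed back into the lower hidden layer at each time step.

```python
# Illustrative sketch (assumed sizes and parameters) of an output-layer-recurrent
# four-layer perceptron: input -> lower hidden -> upper hidden -> output, with the
# previous output carried in a state layer and fed back to the lower hidden layer.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_low, n_up, n_out = 12, 20, 10, 10   # assumed layer sizes
alpha = 0.0                                  # assumed self-recurrent coefficient of the state layer

W_in  = rng.normal(scale=0.1, size=(n_low, n_in))   # input -> lower hidden
W_fb  = rng.normal(scale=0.1, size=(n_low, n_out))  # state (fed-back output) -> lower hidden
W_up  = rng.normal(scale=0.1, size=(n_up, n_low))   # lower hidden -> upper hidden
W_out = rng.normal(scale=0.1, size=(n_out, n_up))   # upper hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(frames):
    """Run one utterance (a sequence of feature frames) through the network."""
    state = np.zeros(n_out)              # state layer holding the fed-back output
    for x in frames:
        h_low = sigmoid(W_in @ x + W_fb @ state)   # lower hidden layer sees input + feedback
        h_up  = sigmoid(W_up @ h_low)              # upper hidden layer
        y     = sigmoid(W_out @ h_up)              # output layer
        state = alpha * state + y                  # self-recurrent state update
    return y                              # final output used for classification

# Example: one random 30-frame utterance of 12-dimensional features
frames = rng.normal(size=(30, n_in))
print(forward(frames))
```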
The experimental results show that the network's recognition ability improves when the lower hidden layer has more neurons than the upper hidden layer or when the two layers have the same number of neurons. The predictive order does not contribute to an improvement in the recognition rate. As for the learning rate and the self-recurrent coefficient, the recognition rate improves when they are 0.0001 and 0, respectively.