Unlike the discriminative model, the generative model learns the distribution of the data and can therefore generate new data, and it has attracted attention for powerful capabilities that the discriminative model cannot provide. Despite its theoretical importance, the development of the generative model, which must solve more complex problems than the discriminative model, has been relatively slow. In recent years, deep learning has dramatically improved the performance of discriminative models, and research has been carried out to apply it to the learning of generative models. With these deep learning methods, a generative model can be built from neural networks, which are general-purpose function approximators, and trained with stochastic gradient descent (SGD) on large amounts of data; such a model is called a deep generative model.

GANs (Generative Adversarial Networks), a kind of deep generative model, set two neural networks in competition so that the data distribution can be learned indirectly while making minimal assumptions about it. Among deep generative models, GANs have drawn particular attention for their exceptional results. Although GAN training is generally unstable, the remarkable quality of the high-dimensional data they generate is not only useful in many applications but also helps solve many difficult problems in modern machine learning research, and GANs have become one of the core ideas in the field. Consequently, the need for research that addresses their biggest problem, training instability, while maintaining sample quality has grown.

In this paper, we propose a method to solve the training instability of GANs by combining them with the VAE (Variational Autoencoder), another deep generative model. We define a new objective function that combines GANs with the VAE, an autoencoder-based deep generative model that can be trained reliably, and thereby address the vanishing gradient problem, one of the key causes of GAN training instability. Our experiments show that VAE and GANs can be successfully combined and that the resulting structure simultaneously achieves the two key goals of a deep generative model: stable training and high-quality samples.
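To make the idea concrete, the following is a minimal PyTorch sketch of one way a VAE and a GAN can be combined: the VAE decoder doubles as the generator, a discriminator is trained to separate real data from reconstructions, and the generator objective adds the adversarial term to the VAE reconstruction and KL terms so that a useful gradient remains even when the adversarial gradient vanishes. The network sizes, the weighting factor `lam`, and the exact form of the combined loss are illustrative assumptions, not the objective function defined in this paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative dimensions (assumed, not from the paper)
x_dim, z_dim, h_dim = 784, 32, 256

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):  # also serves as the GAN generator
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                  nn.Linear(h_dim, x_dim), nn.Sigmoid())
    def forward(self, z):
        return self.body(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                  nn.Linear(h_dim, 1))
    def forward(self, x):
        return self.body(x)  # raw logits

enc, dec, disc = Encoder(), Decoder(), Discriminator()
opt_g = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
lam = 1.0  # assumed weight balancing the VAE term against the adversarial term

def train_step(x):
    # VAE part: reparameterised reconstruction plus KL regulariser
    mu, logvar = enc(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    x_rec = dec(z)
    recon = F.binary_cross_entropy(x_rec, x, reduction='sum') / x.size(0)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))

    # GAN part: discriminator separates real data from reconstructions
    d_real = disc(x)
    d_fake = disc(x_rec.detach())
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator/decoder: fool the discriminator while keeping the VAE objective,
    # whose reconstruction gradient keeps flowing even if the adversarial one vanishes.
    adv = F.binary_cross_entropy_with_logits(disc(x_rec), torch.ones_like(d_real))
    loss_g = adv + lam * (recon + kl)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    x = torch.rand(16, x_dim)  # stand-in batch; real training data would go here
    print(train_step(x))
```

In this sketch the discriminator sees reconstructions rather than samples from the prior; other couplings of the two models are equally possible, and the choice here is only meant to illustrate how a VAE term can stabilise the generator update.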