2023 International Conference on Information Networking (ICOIN)

Abstract

This paper compares and analyzes the classification performance of latent vectors in encoder-decoder models. A typical encoder-decoder model, such as an autoencoder, transforms the encoder input into a latent vector and feeds it into the decoder. In this process, the model learns to produce a decoder output similar to the encoder input, so the latent vector can be regarded as a compact abstraction that preserves the characteristics of the encoder input. Moreover, if the latent vectors maintain sufficient separation between clusters in the feature space, they can be applied to unsupervised learning. In this paper, the classification performance of latent vectors is analyzed as a basic study toward applying latent vectors of encoder-decoder models to unsupervised and continual learning. Latent vectors obtained from a stacked autoencoder and two types of CNN-based autoencoders are fed into six classifiers, including KNN and random forest. Experimental results show that latent vectors from the CNN-based autoencoder with a dense layer achieve classification performance up to 2% higher than those from the stacked autoencoder. Based on these results, the latent vector obtained from a CNN-based autoencoder with a dense layer can be extended to unsupervised learning.
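The pipeline the abstract describes (train an autoencoder on reconstruction, extract the latent vectors, then classify them) can be sketched as follows. This is not the paper's implementation: as a stand-in for the stacked and CNN-based autoencoders, it trains a single-hidden-layer dense autoencoder with scikit-learn's `MLPRegressor` on the digits dataset, treats the hidden activations as the latent vectors, and classifies them with KNN, one of the six classifiers mentioned. The latent dimension (32) and dataset are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Autoencoder as a regressor trained to reproduce its own input;
# the 32-unit hidden layer plays the role of the latent vector.
ae = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
ae.fit(X_tr, X_tr)

def encode(X):
    """Forward pass through the encoder half only (input -> latent)."""
    # MLPRegressor's default hidden activation is ReLU.
    return np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

# Classify the latent vectors rather than the raw inputs.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(encode(X_tr), y_tr)
print("latent-vector KNN accuracy:", knn.score(encode(X_te), y_te))
```

Swapping in the other classifiers the paper lists (e.g. random forest) only changes the last two lines, which is what makes the latent representation itself the object of comparison.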
