build an autoencoder with me! step-by-step guide on understanding autoencoders and training them on MNIST!

Dev Shah
13 min read · May 26, 2024


The field of Machine Learning has been booming as of late, but this article doesn’t focus on the NLP side of ML; instead, it covers a less popular subset of machine learning: autoencoders. Despite their lower profile compared to other machine learning techniques, autoencoders hold immense potential and versatility in various domains. They excel at tasks such as data compression, image denoising, and feature extraction, making them a powerful tool for data scientists and engineers. This article will dive into the theory / math behind autoencoders, how to implement them in Python, how to use them on a sample dataset, and the different types of autoencoders! 😁

breaking down autoencoders.

Before jumping into the Python implementation of autoencoders, let’s understand what they are on a theoretical level and how they work. Autoencoders are a type of neural network that is trained to copy its input to its output. To express this in other words, suppose our autoencoder is defined by the function f(x); then its output would be as follows: f(x) ≈ x. This is extremely helpful, as it allows the autoencoder to reconstruct things such as images, text, and other data from compressed versions of themselves. However, doing this isn’t as simple as returning the input data: there are many layers and components within the autoencoder that allow it to perform this operation.

The autoencoder architecture can be split into 2 key components: the encoder & the decoder. We can formally define the encoder as g(x) and the decoder as h(x), such that x ≈ h(g(x)). We’ll jump into the very specifics of the encoder and the decoder shortly, but to put it briefly, the encoder compresses the data into a latent space representation. The compressed data is a distorted version of the original data; to the human eye, it means nothing, and we can’t really draw any valuable conclusions from it. This compressed form of the data is then fed to the decoder. The decoder reverts what the encoder did: it decodes the data back to its original dimension. However, the decoded output is a lossy representation of the original piece of data; autoencoders are designed to be unable to copy perfectly. This forces the neural network to learn only the important / useful properties of the data.

However, it’s not as simple as it sounds: the network doesn’t exactly return the input. It performs additional transformations to learn the most critical features of the data; these features become helpful for applications such as anomaly detection. This process involves several steps and components that work together to optimize the autoencoder’s performance. There are 3 main components to the autoencoder, which we’ll jump into:

  1. Encoder
  2. Bottleneck
  3. Decoder
(figure: the entire autoencoder architecture put together)

Encoder

The encoder is a crucial component of an autoencoder, tasked with the job of compressing input data into a lower-dimensional space. This compression process involves multiple steps and utilizes various types of neural network layers to achieve the desired transformation.

Compression Process

The primary goal of the encoder is to reduce the dimensionality of the input data while preserving the essential features. This process can be broken down into several key steps:

  1. Input Layer: The encoder starts with the input layer, where the original data is fed into the network. For instance, if dealing with images, the input data might be a flattened array of pixel values.
  2. Hidden Layers: Following the input layer, the encoder contains a series of hidden layers. Each hidden layer typically reduces the dimensionality of the data further. These hidden layers can be either:
  • Fully Connected (Dense) Layers: Each neuron in a dense layer is connected to every neuron in the preceding layer. This type of layer is common in simple autoencoders and is represented mathematically as: z = σ(Wx + b), where W is the weight matrix, x is the input vector, b is the bias vector, and σ is an activation function such as ReLU.
  • Convolutional Layers: For image data, convolutional layers are often used. These layers apply convolutional filters to the input data, which helps in capturing spatial hierarchies and local patterns. A convolutional layer is represented as: z = σ(W ∗ x + b), where ∗ denotes the convolution operation.

3. Activation Functions: Each hidden layer is typically followed by an activation function, such as ReLU (Rectified Linear Unit). The purpose of the activation function is to introduce non-linearity into the model, allowing it to learn more complex patterns. ReLU is defined as: ReLU(x)=max(0,x) Other common activation functions include sigmoid and tanh.

Latent Space Representation

The output of the final hidden layer in the encoder is the latent space representation, denoted as z. This latent vector is a compressed version of the input data, capturing the most salient features necessary for reconstruction. Mathematically, the transformation performed by the encoder can be expressed as: z = g(x)

where:

  • z is the latent representation.
  • g is the encoder function, which includes the series of transformations (i.e., layers and activations) applied to the input data x.

The size of z is typically much smaller than the size of x, which forces the autoencoder to learn a compact and efficient encoding of the input data. We can implement this in code as follows:
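(This is a minimal sketch using TensorFlow’s Keras functional API; the variable names are illustrative, but the layer sizes match the example described below.)

```python
from tensorflow.keras.layers import Input, Dense

input_dim = 784      # e.g., a flattened 28x28 image
encoding_dim = 64    # size of the latent space

# encoder: progressively reduce dimensionality down to the latent vector z
input_layer = Input(shape=(input_dim,))
hidden = Dense(512, activation='relu')(input_layer)
hidden = Dense(256, activation='relu')(hidden)
latent = Dense(encoding_dim, activation='relu')(hidden)  # this is z
```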

In this example, we take in an input with an initial dimension of 784 and we compress it down to a 64-dimensional latent space. After defining the input layer, we pass it through 2 intermediate layers of 512 and 256 neurons, respectively. Each layer uses the ReLU activation function to introduce non-linearity, enabling the encoder to learn more complex features.

Bottleneck

The bottleneck is the intermediary portion between the encoder and decoder. It is a critical component of an autoencoder, positioned at the core of the network architecture. It serves as the compressed latent space representation of the input data and plays a crucial role in the overall functionality of the autoencoder.

The primary function of the bottleneck layer is to constrain the capacity of the autoencoder, forcing it to learn a compact and informative representation of the input data. By reducing the number of neurons in the bottleneck layer compared to the input and output layers, the autoencoder is encouraged to focus on the most salient features of the data while discarding noise and redundant information.

This dimensionality reduction is essential for several reasons:

  • Data Compression: The bottleneck layer effectively compresses the input data, making it more efficient to store and transmit.
  • Feature Extraction: By retaining only the most important features, the bottleneck layer facilitates tasks such as anomaly detection, where the critical characteristics of normal data are learned, and deviations from this norm can be detected.
  • Regularization: The bottleneck layer acts as a regularizer by preventing the network from simply learning an identity mapping. This encourages generalization and improves the network’s performance on unseen data.

Structure and Design

The bottleneck layer typically follows several hidden layers in the encoder that progressively reduce the dimensionality of the data. It is the narrowest point in the network, characterized by having fewer neurons than both the input and output layers.

For instance, in a simple autoencoder for image data, the bottleneck layer might be designed as follows:

  1. Input Layer: 784 neurons (for a flattened 28x28 image).
  2. Hidden Layer 1: 512 neurons with ReLU activation.
  3. Hidden Layer 2: 256 neurons with ReLU activation.
  4. Bottleneck Layer: 64 neurons.

In this example, the bottleneck layer compresses the data to a 64-dimensional representation, significantly reducing the original 784 dimensions.

Decoder

The decoder is a fundamental component of an autoencoder, responsible for reconstructing the original input data from the compressed latent space representation provided by the encoder. The decoder essentially performs the inverse operation of the encoder, transforming the low-dimensional latent representation back to its original high-dimensional form.

Function and Importance of the Decoder

The primary function of the decoder is to take the compressed data (latent representation) from the bottleneck layer and reconstruct it back to the original input data’s dimensions. The decoder ensures that the autoencoder can generate an output as close as possible to the original input, despite the compression that occurs in the bottleneck layer. This reconstruction process involves several steps and utilizes various types of neural network layers to achieve the desired transformation.

Key aspects of the decoder include:

  • Reconstruction: The decoder’s primary role is to reconstruct the input data from the latent space representation. This requires learning the inverse mapping of the encoder.
  • Dimensionality Expansion: The decoder gradually increases the dimensionality of the data from the compressed form to match the original input dimensions.
  • Error Minimization: During training, the decoder works in conjunction with the encoder to minimize the reconstruction error, ensuring the output is as close to the original input as possible.

Mathematical Representation

The transformation performed by the decoder can be mathematically represented as:

x’ = h(z)

where:

  • x’ is the reconstructed output.
  • h is the decoder function, which includes the series of layers and activation functions applied to the latent representation z.

Structure and Design

The decoder typically mirrors the architecture of the encoder, using similar types of layers but in reverse order. The structure of the decoder involves several key components:

  1. Latent Input: The decoder receives the latent representation z from the bottleneck layer as its input.
  2. Hidden Layers: The decoder contains a series of hidden layers that progressively increase the dimensionality of the data. These hidden layers can be:
  • Fully Connected (Dense) Layers: Each neuron in a dense layer is connected to every neuron in the preceding layer. This type of layer is common in simple autoencoders.
  • Deconvolutional (Transposed Convolution) Layers: For image data, deconvolutional layers are often used to increase the spatial dimensions of the data. These layers perform the opposite operation of convolutional layers.

3. Activation Functions: Each hidden layer in the decoder is typically followed by an activation function, such as ReLU (Rectified Linear Unit) or sigmoid. The activation functions introduce non-linearity, enabling the network to learn more complex mappings.

4. Output Layer: The final layer of the decoder reconstructs the data to match the original input dimensions. The activation function used in the output layer depends on the nature of the input data (e.g., sigmoid for normalized pixel values). Building off of our encoder example above, here is how the full code would look with the encoder + decoder architectures:
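(Again a sketch under the same Keras assumptions; the decoder mirrors the encoder, and variable names are illustrative.)

```python
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_dim = 784
encoding_dim = 64

# encoder: 784 -> 512 -> 256 -> 64
input_layer = Input(shape=(input_dim,))
encoded = Dense(512, activation='relu')(input_layer)
encoded = Dense(256, activation='relu')(encoded)
latent = Dense(encoding_dim, activation='relu')(encoded)

# decoder: 64 -> 256 -> 512 -> 784, mirroring the encoder
decoded = Dense(256, activation='relu')(latent)
decoded = Dense(512, activation='relu')(decoded)
output_layer = Dense(input_dim, activation='sigmoid')(decoded)

# tie the encoder and decoder together into one model
autoencoder = Model(input_layer, output_layer)
```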

In this example, the decoder takes the 64-dimensional latent representation and reconstructs it back to the original 784-dimensional input. The decoder uses two hidden layers with ReLU activation, followed by an output layer with sigmoid activation to match the normalized pixel values of the original input images. Now that we understand how the architecture works, let’s do an example on the famous MNIST dataset!

mnist autoencoder example

Let’s now train the autoencoder on the famous MNIST dataset of handwritten digits. The autoencoder learns to compress 28x28 images into a 64-dimensional latent space and then reconstructs the images from this compressed representation. The network’s architecture ensures that only the most essential features of the input data are retained, enabling efficient data compression, noise reduction, and potentially other applications like anomaly detection. Here is the code for it, and then let’s jump into explaining how it works:
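(What follows is a sketch of the full pipeline, repeating the model definition from above for completeness; it assumes TensorFlow/Keras, and the dimensions and hyperparameters match the walkthrough below.)

```python
import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# dimensions: flattened 28x28 input, 64-dimensional latent space
input_dim = 784
encoding_dim = 64

# encoder: 784 -> 512 -> 256 -> 64
input_layer = Input(shape=(input_dim,))
encoded = Dense(512, activation='relu')(input_layer)
encoded = Dense(256, activation='relu')(encoded)
latent = Dense(encoding_dim, activation='relu')(encoded)

# decoder: 64 -> 256 -> 512 -> 784, with sigmoid to match [0, 1] pixel values
decoded = Dense(256, activation='relu')(latent)
decoded = Dense(512, activation='relu')(decoded)
output_layer = Dense(input_dim, activation='sigmoid')(decoded)

# combine encoder and decoder into one model; compile with Adam + MSE
autoencoder = Model(input_layer, output_layer)
autoencoder.compile(optimizer='adam', loss='mse')

# load MNIST, normalize pixels to [0, 1], and flatten each image to 784 values
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
x_train = x_train.reshape((len(x_train), input_dim))
x_test = x_test.reshape((len(x_test), input_dim))

# train: input and target are both x_train, since we want to reconstruct it;
# the test set monitors reconstruction quality on unseen data
autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))
```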

The dimensions for the input and latent space are defined first. The input_dim is set to 784, corresponding to the flattened size of a 28x28 image, such as those from the MNIST dataset. The encoding_dim is set to 64, indicating the size of the latent space representation. This compressed representation is key to the autoencoder's ability to learn and generalize the most salient features of the input data.

The encoder part of the autoencoder is then defined. It starts with the Input layer, which accepts the input data of shape (784,). This is followed by a series of dense layers. The first dense layer has 512 neurons with a ReLU (Rectified Linear Unit) activation function, transforming the input data into a higher level of abstraction. The second dense layer reduces the dimensionality further with 256 neurons, also using ReLU activation. The final layer of the encoder, known as the bottleneck layer, has 64 neurons and uses ReLU activation. This layer is crucial as it holds the latent space representation of the input data, effectively compressing it to 64 dimensions.

Following the encoder, the decoder part of the autoencoder is defined. The decoder begins with a dense layer that takes the latent representation as input and increases its dimensionality to 256 neurons with ReLU activation. The next dense layer further increases the dimensionality to 512 neurons, again using ReLU activation. The final output layer of the decoder has 784 neurons with a sigmoid activation function. The use of the sigmoid activation function ensures that the output values are between 0 and 1, matching the normalized pixel values of the original input images. This layer reconstructs the input data from the compressed latent representation, aiming to produce an output as close to the original input as possible.

The encoder and decoder are then combined into a single autoencoder model using the Model class. This combined model takes the input data and outputs the reconstructed data. The model is compiled with the Adam optimizer and the mean squared error (MSE) loss function. The Adam optimizer is a popular choice for its efficient gradient-based optimization, and MSE is used here to measure the average squared difference between the input and reconstructed output, guiding the model's learning process.

To prepare for training, the MNIST dataset is loaded, and the input images are normalized by dividing by 255, converting pixel values to the range [0, 1]. The images are then reshaped into 1D arrays of 784 elements. The autoencoder is trained on this data for 50 epochs with a batch size of 256, using the training set for both input and target output. The validation set is also used to monitor the model’s performance on unseen data during training. Feel free to run this code on your own and play around with it 😁

types of autoencoders

Autoencoders aren’t limited to a single type; they come in many variants. The 6 main ones I’ll be focusing on are:

  1. Basic Autoencoder (already covered)
  2. Denoising Autoencoder
  3. Sparse Autoencoder
  4. Variational Autoencoder
  5. Convolutional Autoencoder
  6. Contractive Autoencoder

Denoising Autoencoders (DAE)

Denoising autoencoders are designed to remove noise from data. They are trained by corrupting the input data with noise and then trying to reconstruct the original, noise-free data. This encourages the autoencoder to learn robust features that capture the underlying structure of the data, rather than just memorizing the input.

Functionality:

  • Noise Addition: Corrupt the input data x with noise to obtain a noisy version x̃.
  • Encoder: Compresses x̃ into the latent representation z.
  • Decoder: Reconstructs the denoised output x’ from z.
  • Loss Function: Measures the difference between the original (clean) input x and the reconstructed output x’.

Denoising autoencoders are particularly useful for image denoising, feature extraction, and as a preprocessing step in various machine learning tasks.
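As a minimal sketch, reusing the autoencoder model and x_train data from the MNIST example above (the noise_factor value here is an illustrative choice):

```python
import numpy as np

# corrupt the clean training images with Gaussian noise
noise_factor = 0.3  # illustrative; controls corruption strength
x_train_noisy = x_train + noise_factor * np.random.normal(size=x_train.shape)
x_train_noisy = np.clip(x_train_noisy, 0.0, 1.0)  # keep pixels in [0, 1]

# train on noisy inputs with the clean images as targets,
# so the network learns to remove the noise
autoencoder.fit(x_train_noisy, x_train, epochs=50, batch_size=256)
```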

Sparse Autoencoders

Sparse autoencoders introduce a sparsity constraint on the hidden units in the latent space. This means that during training, only a few neurons in the latent representation are active at any given time. This constraint is typically enforced using a regularization term in the loss function, such as the Kullback-Leibler (KL) divergence.

Functionality:

  • Encoder: Transforms input x into a sparse latent representation z.
  • Sparsity Constraint: Applies a sparsity regularization term to ensure that only a few neurons in z are active.
  • Decoder: Reconstructs the input x from the sparse representation z.
  • Loss Function: Combines reconstruction loss with sparsity regularization term.

Sparse autoencoders are useful for learning interpretable features and can be employed in anomaly detection and other tasks where feature selection is important.
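In Keras, a simple stand-in for the KL-based sparsity term is an L1 activity penalty on the latent layer; here is a sketch of that variant (the penalty weight 1e-5 is an illustrative choice):

```python
from tensorflow.keras import regularizers
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# the L1 activity penalty pushes most latent activations toward zero,
# so only a few neurons fire for any given input
input_layer = Input(shape=(784,))
latent = Dense(64, activation='relu',
               activity_regularizer=regularizers.l1(1e-5))(input_layer)
output_layer = Dense(784, activation='sigmoid')(latent)

sparse_autoencoder = Model(input_layer, output_layer)
sparse_autoencoder.compile(optimizer='adam', loss='mse')
```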

Variational Autoencoders (VAE)

Variational autoencoders are a type of generative model that learns to approximate the distribution of the input data. Unlike basic autoencoders, VAEs encode the input data into a distribution over the latent space, rather than a single point. This involves learning the mean and variance of the latent distribution, allowing the model to generate new data by sampling from it.

Functionality:

  • Encoder: Transforms input x into parameters of a latent distribution (mean μ and variance σ^2).
  • Latent Space Sampling: Samples z from the latent distribution using the reparameterization trick to ensure differentiability.
  • Decoder: Reconstructs the input x from the sampled latent representation z.
  • Loss Function: Combines reconstruction loss with a KL divergence term to regularize the latent space distribution.

VAEs are widely used in generative tasks, such as generating new images, and in applications where learning a smooth latent space representation is beneficial.
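Here is a sketch of the two pieces that distinguish a VAE, following the common Keras pattern (a full VAE additionally needs an encoder producing z_mean and z_log_var and a matching decoder):

```python
import tensorflow as tf
from tensorflow.keras import layers

class Sampling(layers.Layer):
    """Reparameterization trick: z = mu + sigma * epsilon, epsilon ~ N(0, I).

    Sampling this way keeps the operation differentiable with respect
    to z_mean and z_log_var, so gradients can flow through it.
    """
    def call(self, inputs):
        z_mean, z_log_var = inputs
        epsilon = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon

def kl_loss(z_mean, z_log_var):
    # KL divergence between N(z_mean, exp(z_log_var)) and N(0, I);
    # added to the reconstruction loss to regularize the latent space
    return -0.5 * tf.reduce_mean(
        tf.reduce_sum(1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var),
                      axis=1))
```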

Convolutional Autoencoders (CAE)

Convolutional autoencoders are specifically designed for image data. They replace the fully connected layers with convolutional layers in the encoder and deconvolutional (transposed convolution) layers in the decoder. This allows them to better capture spatial hierarchies and local patterns in the data.

Functionality:

  • Encoder: Uses convolutional layers to transform the input image x into a lower-dimensional latent representation z.
  • Latent Space: Holds the compressed representation z.
  • Decoder: Uses deconvolutional layers to reconstruct the input image x from z.
  • Loss Function: Measures the difference between the original image and the reconstructed image.

Convolutional autoencoders are highly effective for tasks such as image compression, denoising, and feature extraction in computer vision.
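A sketch of a small convolutional autoencoder for 28x28 grayscale images (the filter counts and kernel sizes here are illustrative choices):

```python
from tensorflow.keras.layers import Input, Conv2D, Conv2DTranspose
from tensorflow.keras.models import Model

# encoder: strided convolutions shrink 28x28 -> 14x14 -> 7x7
input_img = Input(shape=(28, 28, 1))
x = Conv2D(16, 3, strides=2, activation='relu', padding='same')(input_img)
x = Conv2D(8, 3, strides=2, activation='relu', padding='same')(x)

# decoder: transposed convolutions expand 7x7 -> 14x14 -> 28x28
x = Conv2DTranspose(8, 3, strides=2, activation='relu', padding='same')(x)
x = Conv2DTranspose(16, 3, strides=2, activation='relu', padding='same')(x)
output_img = Conv2D(1, 3, activation='sigmoid', padding='same')(x)

conv_autoencoder = Model(input_img, output_img)
conv_autoencoder.compile(optimizer='adam', loss='mse')
```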

Contractive Autoencoders (CAE)

Contractive autoencoders aim to make the learned representation robust to small changes in the input data. They achieve this by adding a regularization term to the loss function that penalizes the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. This encourages the model to learn a smooth mapping from input to latent space.

Functionality:

  • Encoder: Transforms input x into a latent representation z.
  • Regularization: Adds a penalty term to the loss function based on the Jacobian of the encoder activations.
  • Decoder: Reconstructs the input x from z.
  • Loss Function: Combines reconstruction loss with the contractive penalty term.

Contractive autoencoders are useful for learning robust features and have applications in unsupervised feature learning and manifold learning.
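For a single sigmoid encoder layer, the Frobenius norm of the Jacobian has a closed form, which makes the penalty cheap to compute. A sketch of that loss (assuming h is the sigmoid latent activation and W is the encoder weight matrix in the x @ W + b convention; lam is an illustrative penalty weight):

```python
import tensorflow as tf

def contractive_loss(x, x_recon, h, W, lam=1e-4):
    """Reconstruction loss plus the contractive penalty.

    For a sigmoid encoder h = sigmoid(x @ W + b), the Jacobian dh/dx has
    entries h_j * (1 - h_j) * W_ij, so its squared Frobenius norm is
    sum_j (h_j * (1 - h_j))^2 * sum_i W_ij^2.
    """
    mse = tf.reduce_mean(tf.square(x - x_recon))
    dh = h * (1 - h)                              # (batch, hidden)
    w_sq = tf.reduce_sum(tf.square(W), axis=0)    # (hidden,): column norms of W
    penalty = tf.reduce_sum(tf.square(dh) * w_sq, axis=1)
    return mse + lam * tf.reduce_mean(penalty)
```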

That marks the end of understanding autoencoders on a theoretical level and using them on a sample dataset. I hope reading this article added value and gave you a clear understanding of how to use an autoencoder and the inner workings of the encoder & decoder architecture. That was a long article, but if you’ve read till the end, thank you so much for taking the time to read it, and I hope you walked away with more knowledge than you came in with :)

If you have any questions regarding this article or just want to connect, you can find me on LinkedIn or my personal website :)
