
Transformers

Transformers are a type of deep learning architecture that processes sequential data using *self-attention* mechanisms rather than the recurrent layers found in traditional sequence models such as Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks. Introduced in the 2017 paper *Attention Is All You Need* by Vaswani et al., Transformers have revolutionized natural language processing (NLP) by enabling models to capture long-range dependencies within sequences with improved efficiency. Transformers are designed with an *encoder-decoder structure*, making them versatile for a wide range of applications, from language translation to text generation and beyond.

Structure of Transformers

The Transformer model is built on a stack of encoder and decoder layers, each containing two main components: multi-head self-attention and feed-forward neural networks. Both the encoder and decoder modules are composed of identical layers repeated multiple times (often between 6 and 12), with each layer adding greater representational depth and improving the model's ability to learn complex relationships within the data.

  1. Encoder: The encoder’s role is to process the input sequence, generating context-rich representations that capture meaningful relationships across tokens. Each encoder layer includes:
    • Multi-Head Self-Attention: This mechanism allows each token in the input to attend to every other token, learning dependencies regardless of distance in the sequence. The *multi-head* aspect enables the model to learn different types of relationships simultaneously by using multiple attention heads.  
    • Feed-Forward Neural Network: Each encoder layer contains a feed-forward network (FFN) that applies two linear transformations separated by a non-linear activation function, typically ReLU, refining the attention outputs.  
    • Residual Connections and Layer Normalization: Residual connections add each sub-layer’s input to its output, allowing gradients to propagate effectively through deep stacks. Layer normalization is then applied to stabilize and accelerate training (a code sketch of a complete encoder layer follows this list).
  2. Decoder: The decoder produces the output sequence based on both the encoder’s output and the previously generated tokens in the target sequence. Each decoder layer includes:
    • Masked Multi-Head Self-Attention: The first attention mechanism is masked, meaning it only considers previous tokens to prevent “seeing” future tokens in the sequence. This masking is crucial for tasks like text generation, where causality must be preserved.  
    • Encoder-Decoder Attention: The second attention layer focuses on the encoder’s output, allowing the decoder to attend to relevant parts of the input sequence.  
    • Feed-Forward Neural Network: Like the encoder, the decoder includes a feed-forward network that refines the output, followed by residual connections and normalization.
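
To make the layer structure concrete, here is a minimal sketch of a single encoder layer in PyTorch. The class and variable names are illustrative rather than taken from any specific library, and the dimensions (`d_model = 512`, 8 heads, a 2048-unit feed-forward layer, 6 stacked layers) follow the base configuration of the original paper; real implementations add further details such as attention masks and dropout schedules.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One Transformer encoder layer: multi-head self-attention and a
    feed-forward network, each wrapped in a residual connection + layer norm."""
    def __init__(self, d_model=512, num_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        # Multi-head self-attention over the input sequence
        self.self_attn = nn.MultiheadAttention(d_model, num_heads,
                                               dropout=dropout, batch_first=True)
        # Position-wise feed-forward network: two linear maps with ReLU between
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Linear(d_ff, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        attn_out, _ = self.self_attn(x, x, x)            # every token attends to every token
        x = self.norm1(x + self.dropout(attn_out))       # residual connection + layer norm
        x = self.norm2(x + self.dropout(self.ffn(x)))    # residual connection + layer norm
        return x

# A full encoder stacks several identical layers (6 in the original paper)
encoder = nn.Sequential(*[EncoderLayer() for _ in range(6)])
```

A decoder layer follows the same pattern, with the self-attention masked and an additional encoder-decoder attention sub-layer inserted before the feed-forward network.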

Self-Attention Mechanism

Self-attention is the core mechanism of the Transformer, enabling it to process entire sequences by determining the importance of each token in relation to others. Self-attention involves creating *query (Q)*, *key (K)*, and *value (V)* vectors for each token in the sequence. For a token to attend to another, the similarity between the query and key vectors is calculated, determining the attention score. Each token’s output is a weighted sum of the values, scaled by these attention scores.

  1. Calculating Attention Scores: For a given sequence, each token is projected into Q, K, and V vectors. The attention scores are the scaled dot products between queries and keys, and each token’s output is the weighted sum of the values under those scores (a code sketch follows this list):
    `Attention(Q, K, V) = softmax((Q * K^T) / sqrt(d_k)) * V`
    Here:
    • `Q`, `K`, and `V` represent the query, key, and value matrices, respectively,  
    • `d_k` is the dimensionality of the key vectors, used to scale the dot product, and  
    • `softmax` normalizes the attention scores, allowing the model to focus selectively on relevant tokens.
  2. Multi-Head Attention: Multi-head attention allows the model to apply several independent attention mechanisms simultaneously, enhancing the ability to capture diverse types of relationships within the sequence. Each head generates a separate attention output, and these outputs are concatenated and linearly transformed into the final multi-head attention output, providing richer context and multiple perspectives.
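
The scaled dot-product formula above translates almost directly into NumPy. The sketch below assumes a single attention head with toy dimensions; multi-head attention would run this same computation on several independently projected `Q`, `K`, and `V` matrices and concatenate the results.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for one attention head.
    Q, K: (seq_len, d_k); V: (seq_len, d_v)."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # (seq_len, seq_len) attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ V                                    # weighted sum of the values

# Toy example: 4 tokens with 8-dimensional queries, keys, and values
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
output = scaled_dot_product_attention(Q, K, V)
print(output.shape)  # (4, 8): one context-weighted vector per token
```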

Positional Encoding

Since Transformers have no inherent notion of sequence order (unlike RNNs), positional encoding is introduced to capture each token’s position within the sequence. Positional encoding adds distinct values to each token embedding based on its position, allowing the model to recognize token order. The positional encoding for a token at position `pos` and embedding dimension index `i` is calculated as follows:

  • `PE(pos, 2i) = sin(pos / 10000^(2i / d_model))`
  • `PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))`

where `d_model` is the model dimension. The sine and cosine functions produce unique, continuous values that encode positional information effectively for all positions in a sequence.
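
A straightforward NumPy implementation of these two formulas is sketched below; the function name and the choice of `max_len` and `d_model` are illustrative, and an even `d_model` is assumed so that the sine and cosine dimensions pair up.

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """Build a (max_len, d_model) matrix of sinusoidal positional encodings:
    even dimensions use sine, odd dimensions use cosine (assumes even d_model)."""
    positions = np.arange(max_len)[:, None]                # (max_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]               # the 2i values: 0, 2, 4, ...
    angles = positions / np.power(10000, dims / d_model)   # pos / 10000^(2i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # PE(pos, 2i)
    pe[:, 1::2] = np.cos(angles)   # PE(pos, 2i+1)
    return pe

# The encoding is simply added to the token embeddings before the first layer
pe = sinusoidal_positional_encoding(max_len=50, d_model=512)
```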

Training Objective

The Transformer model is trained to predict the next token in the sequence based on the input sequence, often using a cross-entropy loss function to maximize the likelihood of generating correct tokens. In a language modeling context, for a target output sequence `Y`, the model aims to minimize the following objective:

`L = -Σ log P(y_t | y_1, ..., y_(t-1), X)`

where `y_t` is the token at position `t` in the output sequence, `X` is the input sequence, and the summation is over all tokens in `Y`. This objective ensures that each predicted token in `Y` aligns as closely as possible with the ground truth.
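
In practice this objective is usually computed as a token-level cross-entropy over the decoder’s output logits, as in the minimal PyTorch sketch below; the tensor shapes and random inputs are placeholders standing in for real model outputs and ground-truth token ids.

```python
import torch
import torch.nn.functional as F

# logits: decoder outputs for each target position, shape (batch, seq_len, vocab_size)
# targets: ground-truth token ids for the same positions, shape (batch, seq_len)
batch, seq_len, vocab_size = 2, 7, 1000
logits = torch.randn(batch, seq_len, vocab_size)
targets = torch.randint(0, vocab_size, (batch, seq_len))

# Cross-entropy is the negative log-likelihood -sum_t log P(y_t | y_1..y_(t-1), X),
# here averaged over all target tokens in the batch.
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
```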

Efficiency and Scalability

Transformers are more computationally efficient and scalable than RNN-based architectures for sequence processing. Because self-attention relates all tokens to one another in a single step, the Transformer can parallelize computation across the sequence during training rather than processing tokens one at a time. This parallelism allows the model to handle significantly larger datasets and longer sequences with improved speed, leading to advancements in large-scale NLP applications.

Transformers have become the standard for NLP tasks, leading to the development of widely used models like BERT, GPT, T5, and others that rely on this architecture. Their adaptability extends beyond NLP, impacting areas such as computer vision, speech recognition, and protein folding by capturing complex dependencies and patterns effectively. The Transformer’s self-attention mechanism and encoder-decoder structure continue to be foundational in research and applications across multiple fields.

