
Underfitting

Underfitting is a modeling error in machine learning where a model is too simplistic to capture the underlying structure and patterns of the data. It occurs when the model cannot adequately learn from the training data, resulting in high bias and poor predictive performance on both training and test data. Underfitting is typically caused by overly simple models, inadequate feature representation, or too little training, and it leaves the model unable to generalize to new data. This issue contrasts with overfitting, where the model is too complex and performs well on training data but poorly on unseen data because it has fit noise rather than the actual patterns.

Core Characteristics of Underfitting

  1. High Bias and Low Variance:
    • Underfitting is often associated with high bias—an error from incorrect assumptions in the model that limit its ability to learn patterns in data. High-bias models fail to represent the complexity of the data, producing consistently inaccurate predictions.  
    • Low variance in underfitting models indicates that the predictions do not change significantly in response to different data samples, as the model does not adapt flexibly to data variations. This combination of high bias and low variance results in consistently poor performance across datasets.
  2. Simplistic Model Selection:
    • Underfitting frequently arises from selecting a model that is too simple for the problem at hand. For example, fitting a linear model to data with a nonlinear relationship can cause underfitting, as the model cannot capture the curvature present (see the first code sketch after this list).
    • Common models prone to underfitting include linear regression for nonlinear data and shallow decision trees, which lack the depth to capture intricate data structures.
  3. Insufficient Training and Poor Data Representation:
    • Inadequate training time or too few epochs can cause underfitting, especially in neural networks, which require sufficient iterations to learn patterns. Stopping training too early, for example with an overly aggressive early-stopping criterion, can leave the model underfit because it has not yet learned the relationships in the data (see the second sketch after this list).
    • A limited or poorly representative dataset can also lead to underfitting, as the model lacks the necessary information to generalize patterns effectively. When features are missing or inadequate for capturing essential relationships, the model cannot learn from the data adequately.
  4. Mathematical Representation of Underfitting:
    • Underfitting shows up as a high training error, reflecting the model’s inability to learn the structure of the training data. For a regression model, the mean squared error (MSE) is calculated as:
      MSE = (1/n) * Σ (y_i - ŷ_i)²
      where y_i is the actual value, ŷ_i is the predicted value, and n is the total number of data points. The first sketch after this list computes this metric on both the training and test splits.
    • In underfitting, MSE remains high for both training and test sets, indicating that the model cannot capture the underlying patterns, regardless of the data sample.
  5. Diagnosing Underfitting:
    • Underfitting is typically identified by evaluating model performance on both training and test datasets. When both datasets exhibit high error, underfitting is likely present, as the model does not perform well even on data it has been trained on.  
    • Learning curves, which plot training and test error as a function of training set size or model complexity, can help diagnose underfitting. In an underfit model, both training and test error converge at a high level and do not improve with further training or added complexity (see the third sketch after this list).
  6. Impact on Model Performance and Generalization:
    • Underfitting results in a model that fails to generalize, leading to poor performance when making predictions on unseen data. Unlike overfitting, where the model has learned noise in the training data, an underfitted model misses essential patterns and relationships, limiting its predictive accuracy.  
    • The model’s inability to generalize stems from an oversimplified approach, where key features and interactions in the data are overlooked, resulting in significant errors across datasets.
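A minimal sketch of points 2 and 4 above, assuming scikit-learn and NumPy are available; the synthetic quadratic target is purely illustrative. A straight-line model is fit to curved data, and the MSE from the formula above is computed on both splits; both stay high, which is the signature of underfitting:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic nonlinear (quadratic) data: a straight line cannot capture it.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.5, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A plain linear model is too simple for this relationship.
model = LinearRegression().fit(X_train, y_train)

# MSE = (1/n) * Σ (y_i - ŷ_i)²; high on BOTH splits signals underfitting.
print("Train MSE:", mean_squared_error(y_train, model.predict(X_train)))
print("Test  MSE:", mean_squared_error(y_test, model.predict(X_test)))
```

Because the model cannot bend to follow the quadratic shape, the training error is nearly as large as the test error, unlike overfitting, where the two diverge.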
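A second sketch for point 3, again under illustrative assumptions (scikit-learn's MLPRegressor, a sine-shaped target, arbitrary iteration counts): stopping after a handful of iterations leaves the network underfit, while the same architecture trained longer typically reaches a much lower error:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Synthetic nonlinear target for the network to learn.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(2 * X[:, 0]) + rng.normal(scale=0.1, size=500)

# Stopping after very few iterations leaves the network underfit
# (scikit-learn will emit a ConvergenceWarning here, which is expected)...
underfit = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5, random_state=0).fit(X, y)
# ...while the same architecture trained much longer can learn the pattern.
trained = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

print("MSE after 5 iterations: ", mean_squared_error(y, underfit.predict(X)))
print("MSE after full training:", mean_squared_error(y, trained.predict(X)))
```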
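A third sketch for the learning-curve diagnosis in point 5, assuming scikit-learn's learning_curve utility and the same kind of synthetic quadratic data: with an underfit linear model, training and validation MSE converge at a similarly high level as the training set grows:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import learning_curve

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(400, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.5, size=400)

# Negative MSE is scikit-learn's convention for "higher is better" scoring.
sizes, train_scores, val_scores = learning_curve(
    LinearRegression(), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    scoring="neg_mean_squared_error", cv=5,
)

# Both curves flatten at a high error: more data does not help a model
# that is too simple to represent the relationship.
for n, tr, va in zip(sizes, -train_scores.mean(axis=1), -val_scores.mean(axis=1)):
    print(f"n={n:3d}  train MSE={tr:.2f}  validation MSE={va:.2f}")
```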

Underfitting is a fundamental concept in data science and machine learning model development, highlighting the need for a balance between model simplicity and complexity. It serves as a reminder of the trade-off between bias and variance, essential for achieving effective predictive performance. To address underfitting, practitioners often select more complex models, add relevant features, or adjust training parameters. Understanding and diagnosing underfitting are critical steps in ensuring that machine learning models capture meaningful patterns in data, enabling accurate, generalizable predictions.
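As one illustration of the "add relevant features" remedy, sketched under the same illustrative assumptions as the first example above: expanding the input with polynomial features gives the linear model enough capacity to capture the curvature, and both training and test MSE drop accordingly.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Same synthetic quadratic data as in the first sketch.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.5, size=300)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Adding polynomial features gives the linear model the capacity it was missing.
richer = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X_train, y_train)
print("Train MSE:", mean_squared_error(y_train, richer.predict(X_train)))
print("Test  MSE:", mean_squared_error(y_test, richer.predict(X_test)))
```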
