Gradient boosting is a machine learning technique for regression and classification that produces a prediction model as an ensemble of weak learners, typically decision trees. It builds the ensemble sequentially: each new model is trained to correct the errors of the models before it, which amounts to gradient descent on the loss function in function space (for squared-error loss, each new model simply fits the current residuals). Popular implementations include generic Gradient Boosting Machines (GBM), XGBoost, and LightGBM. Gradient boosting is known for high predictive accuracy and for handling a wide range of data types and structures, making it a powerful tool for many data science applications.
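The sequential error-correcting loop described above can be sketched in a few lines. This is a minimal illustration for the squared-error case, not a production implementation; it assumes scikit-learn's `DecisionTreeRegressor` as the weak learner, and names such as `n_rounds` and `learning_rate` are illustrative choices, not part of any library API.

```python
# Minimal gradient boosting for regression with squared-error loss.
# For this loss, the negative gradient at each step is just the residual,
# so each round fits a shallow tree to the residuals of the ensemble so far.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boosting(X, y, n_rounds=50, learning_rate=0.1, max_depth=2):
    # Start from a constant prediction; the mean minimizes squared error.
    base = float(y.mean())
    pred = np.full(y.shape, base)
    trees = []
    for _ in range(n_rounds):
        residual = y - pred              # negative gradient of squared error
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residual)
        # Shrink each tree's contribution by the learning rate.
        pred += learning_rate * tree.predict(X)
        trees.append(tree)
    return base, trees

def predict(base, trees, X, learning_rate=0.1):
    pred = np.full(X.shape[0], base)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred

# Tiny demo: learn y = x^2 on [0, 1].
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = X[:, 0] ** 2
base, trees = fit_gradient_boosting(X, y)
mse = float(np.mean((predict(base, trees, X) - y) ** 2))
```

In practice the loop structure is the same in XGBoost and LightGBM, but those libraries add regularized tree construction, second-order gradient information, and efficient histogram-based splitting.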