

The Complete Machine Learning Collection: Quickly Train and Evaluate 11 Machine Learning Regression Models with sklearn


About the Author

Author: 小白熊

Author bio: Proficient in Python, MATLAB, and C#; works on machine learning, deep learning, machine vision, object detection, image classification, pose recognition, semantic segmentation, path planning, intelligent optimization algorithms, data analysis, and creative combinations of these.

Contact email: xbx3144@163.com

For research tutoring, paid Q&A, custom development, or other collaborations, please contact the author.

I. Overview

In data science today, choosing the right machine learning model is a key step in turning data into decisions. As data volumes grow rapidly, machine learning models play a vital role not only in academic research but also in practical applications across finance, healthcare, e-commerce, manufacturing, and many other industries, where they have become core tools for solving complex problems. Every field is accumulating large and still-growing datasets, and extracting valuable insights from them to drive intelligent decisions is a challenge shared by modern enterprises and research institutions alike.

Faced with this wide range of candidate models, however, practitioners need to build suitable models and evaluate them accurately within limited time. The scikit-learn library, with its rich functionality and convenient interface, has therefore become one of the go-to tools for building machine learning models.

This article shows how to implement 11 popular machine learning regression models with scikit-learn. Starting from data preparation, we build, train, and evaluate each model in turn on the same dataset, so the concise code examples double as a side-by-side comparison of how the models behave.

II. Data Preparation

First, we need a dataset. Here we use the California housing dataset as an example and split it into training and validation sets in an 8:2 ratio with the `train_test_split` function.

from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

# Load the dataset
california = fetch_california_housing()
X = california.data
y = california.target

# Split into training and validation sets (80/20)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
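Each model section below repeats the same metric computation so that it can be run in isolation. If you prefer less duplication, a small helper along these lines would work (a sketch; the function name `report_metrics` is mine, not part of the original code):

import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

def report_metrics(name, model, X_train, y_train, X_val, y_val):
    # Compute MSE, MAE, RMSE, and R² on both splits and print them
    print(f'{name}:')
    for split, X, y in [("Train", X_train, y_train), ("Validation", X_val, y_val)]:
        pred = model.predict(X)
        mse = mean_squared_error(y, pred)
        mae = mean_absolute_error(y, pred)
        rmse = np.sqrt(mse)
        r2 = r2_score(y, pred)
        print(f"{split} MSE: {mse:.2f}, MAE: {mae:.2f}, RMSE: {rmse:.2f}, R²: {r2:.2f}")
    print("*" * 100)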

III. The 11 Machine Learning Regression Models

1. Linear Regression

from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
import numpy as np

# Initialize the model
lr_model = LinearRegression()

# Train the model
lr_model.fit(X_train, y_train)

# Predict
y_train_pred = lr_model.predict(X_train)
y_val_pred = lr_model.predict(X_val)

# Compute metrics
mse_train = mean_squared_error(y_train, y_train_pred)
mae_train = mean_absolute_error(y_train, y_train_pred)
rmse_train = np.sqrt(mse_train)
r2_train = r2_score(y_train, y_train_pred)
mse_val = mean_squared_error(y_val, y_val_pred)
mae_val = mean_absolute_error(y_val, y_val_pred)
rmse_val = np.sqrt(mse_val)
r2_val = r2_score(y_val, y_val_pred)

# Print metrics
print('Linear Regression:')
print(f"Train MSE: {mse_train:.2f}, MAE: {mae_train:.2f}, RMSE: {rmse_train:.2f}, R²: {r2_train:.2f}")
print(f"Validation MSE: {mse_val:.2f}, MAE: {mae_val:.2f}, RMSE: {rmse_val:.2f}, R²: {r2_val:.2f}")
print("*" * 100)

2. Ridge Regression

from sklearn.linear_model import Ridge

# Initialize the model
ridge_model = Ridge()

# Train the model
ridge_model.fit(X_train, y_train)

# Predict
y_train_pred = ridge_model.predict(X_train)
y_val_pred = ridge_model.predict(X_val)

# Compute metrics
mse_train = mean_squared_error(y_train, y_train_pred)
mae_train = mean_absolute_error(y_train, y_train_pred)
rmse_train = np.sqrt(mse_train)
r2_train = r2_score(y_train, y_train_pred)
mse_val = mean_squared_error(y_val, y_val_pred)
mae_val = mean_absolute_error(y_val, y_val_pred)
rmse_val = np.sqrt(mse_val)
r2_val = r2_score(y_val, y_val_pred)

# Print metrics
print('Ridge Regression:')
print(f"Train MSE: {mse_train:.2f}, MAE: {mae_train:.2f}, RMSE: {rmse_train:.2f}, R²: {r2_train:.2f}")
print(f"Validation MSE: {mse_val:.2f}, MAE: {mae_val:.2f}, RMSE: {rmse_val:.2f}, R²: {r2_val:.2f}")
print("*" * 100)

3. Lasso Regression

from sklearn.linear_model import Lasso

# Initialize the model
lasso_model = Lasso()

# Train the model
lasso_model.fit(X_train, y_train)

# Predict
y_train_pred = lasso_model.predict(X_train)
y_val_pred = lasso_model.predict(X_val)

# Compute metrics
mse_train = mean_squared_error(y_train, y_train_pred)
mae_train = mean_absolute_error(y_train, y_train_pred)
rmse_train = np.sqrt(mse_train)
r2_train = r2_score(y_train, y_train_pred)
mse_val = mean_squared_error(y_val, y_val_pred)
mae_val = mean_absolute_error(y_val, y_val_pred)
rmse_val = np.sqrt(mse_val)
r2_val = r2_score(y_val, y_val_pred)

# Print metrics
print('Lasso Regression:')
print(f"Train MSE: {mse_train:.2f}, MAE: {mae_train:.2f}, RMSE: {rmse_train:.2f}, R²: {r2_train:.2f}")
print(f"Validation MSE: {mse_val:.2f}, MAE: {mae_val:.2f}, RMSE: {rmse_val:.2f}, R²: {r2_val:.2f}")
print("*" * 100)

4. Decision Tree Regressor

from sklearn.tree import DecisionTreeRegressor

# Initialize the model
dt_model = DecisionTreeRegressor()

# Train the model
dt_model.fit(X_train, y_train)

# Predict
y_train_pred = dt_model.predict(X_train)
y_val_pred = dt_model.predict(X_val)

# Compute metrics
mse_train = mean_squared_error(y_train, y_train_pred)
mae_train = mean_absolute_error(y_train, y_train_pred)
rmse_train = np.sqrt(mse_train)
r2_train = r2_score(y_train, y_train_pred)
mse_val = mean_squared_error(y_val, y_val_pred)
mae_val = mean_absolute_error(y_val, y_val_pred)
rmse_val = np.sqrt(mse_val)
r2_val = r2_score(y_val, y_val_pred)

# Print metrics
print('Decision Tree Regressor:')
print(f"Train MSE: {mse_train:.2f}, MAE: {mae_train:.2f}, RMSE: {rmse_train:.2f}, R²: {r2_train:.2f}")
print(f"Validation MSE: {mse_val:.2f}, MAE: {mae_val:.2f}, RMSE: {rmse_val:.2f}, R²: {r2_val:.2f}")
print("*" * 100)

5. Random Forest Regressor

from sklearn.ensemble import RandomForestRegressor

# Initialize the model
rf_model = RandomForestRegressor(n_estimators=100, random_state=42)

# Train the model
rf_model.fit(X_train, y_train)

# Predict
y_train_pred = rf_model.predict(X_train)
y_val_pred = rf_model.predict(X_val)

# Compute metrics
mse_train = mean_squared_error(y_train, y_train_pred)
mae_train = mean_absolute_error(y_train, y_train_pred)
rmse_train = np.sqrt(mse_train)
r2_train = r2_score(y_train, y_train_pred)
mse_val = mean_squared_error(y_val, y_val_pred)
mae_val = mean_absolute_error(y_val, y_val_pred)
rmse_val = np.sqrt(mse_val)
r2_val = r2_score(y_val, y_val_pred)

# Print metrics
print('Random Forest Regressor:')
print(f"Train MSE: {mse_train:.2f}, MAE: {mae_train:.2f}, RMSE: {rmse_train:.2f}, R²: {r2_train:.2f}")
print(f"Validation MSE: {mse_val:.2f}, MAE: {mae_val:.2f}, RMSE: {rmse_val:.2f}, R²: {r2_val:.2f}")
print("*" * 100)

6. Support Vector Regression (SVR)

from sklearn.svm import SVR

# Initialize the model
svr_model = SVR()

# Train the model
svr_model.fit(X_train, y_train)

# Predict
y_train_pred = svr_model.predict(X_train)
y_val_pred = svr_model.predict(X_val)

# Compute metrics
mse_train = mean_squared_error(y_train, y_train_pred)
mae_train = mean_absolute_error(y_train, y_train_pred)
rmse_train = np.sqrt(mse_train)
r2_train = r2_score(y_train, y_train_pred)
mse_val = mean_squared_error(y_val, y_val_pred)
mae_val = mean_absolute_error(y_val, y_val_pred)
rmse_val = np.sqrt(mse_val)
r2_val = r2_score(y_val, y_val_pred)

# Print metrics
print('Support Vector Regressor:')
print(f"Train MSE: {mse_train:.2f}, MAE: {mae_train:.2f}, RMSE: {rmse_train:.2f}, R²: {r2_train:.2f}")
print(f"Validation MSE: {mse_val:.2f}, MAE: {mae_val:.2f}, RMSE: {rmse_val:.2f}, R²: {r2_val:.2f}")
print("*" * 100)

7. K-Neighbors Regressor (KNN)

from sklearn.neighbors import KNeighborsRegressor

# Initialize the model
knn_model = KNeighborsRegressor()

# Train the model
knn_model.fit(X_train, y_train)

# Predict
y_train_pred = knn_model.predict(X_train)
y_val_pred = knn_model.predict(X_val)

# Compute metrics
mse_train = mean_squared_error(y_train, y_train_pred)
mae_train = mean_absolute_error(y_train, y_train_pred)
rmse_train = np.sqrt(mse_train)
r2_train = r2_score(y_train, y_train_pred)
mse_val = mean_squared_error(y_val, y_val_pred)
mae_val = mean_absolute_error(y_val, y_val_pred)
rmse_val = np.sqrt(mse_val)
r2_val = r2_score(y_val, y_val_pred)

# Print metrics
print('K-Neighbors Regressor:')
print(f"Train MSE: {mse_train:.2f}, MAE: {mae_train:.2f}, RMSE: {rmse_train:.2f}, R²: {r2_train:.2f}")
print(f"Validation MSE: {mse_val:.2f}, MAE: {mae_val:.2f}, RMSE: {rmse_val:.2f}, R²: {r2_val:.2f}")
print("*" * 100)

8. AdaBoost Regressor

from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

# Initialize the model
# (the `base_estimator` argument was renamed to `estimator` in scikit-learn 1.2)
ada_model = AdaBoostRegressor(estimator=DecisionTreeRegressor(), n_estimators=50, random_state=42)

# Train the model
ada_model.fit(X_train, y_train)

# Predict
y_train_pred = ada_model.predict(X_train)
y_val_pred = ada_model.predict(X_val)

# Compute metrics
mse_train = mean_squared_error(y_train, y_train_pred)
mae_train = mean_absolute_error(y_train, y_train_pred)
rmse_train = np.sqrt(mse_train)
r2_train = r2_score(y_train, y_train_pred)
mse_val = mean_squared_error(y_val, y_val_pred)
mae_val = mean_absolute_error(y_val, y_val_pred)
rmse_val = np.sqrt(mse_val)
r2_val = r2_score(y_val, y_val_pred)

# Print metrics
print('AdaBoost Regressor:')
print(f"Train MSE: {mse_train:.2f}, MAE: {mae_train:.2f}, RMSE: {rmse_train:.2f}, R²: {r2_train:.2f}")
print(f"Validation MSE: {mse_val:.2f}, MAE: {mae_val:.2f}, RMSE: {rmse_val:.2f}, R²: {r2_val:.2f}")
print("*" * 100)

9. Gradient Boosting Regressor

from sklearn.ensemble import GradientBoostingRegressor

# Initialize the model
gb_model = GradientBoostingRegressor(n_estimators=100, random_state=42)

# Train the model
gb_model.fit(X_train, y_train)

# Predict
y_train_pred = gb_model.predict(X_train)
y_val_pred = gb_model.predict(X_val)

# Compute metrics
mse_train = mean_squared_error(y_train, y_train_pred)
mae_train = mean_absolute_error(y_train, y_train_pred)
rmse_train = np.sqrt(mse_train)
r2_train = r2_score(y_train, y_train_pred)
mse_val = mean_squared_error(y_val, y_val_pred)
mae_val = mean_absolute_error(y_val, y_val_pred)
rmse_val = np.sqrt(mse_val)
r2_val = r2_score(y_val, y_val_pred)

# Print metrics
print('Gradient Boosting Regressor:')
print(f"Train MSE: {mse_train:.2f}, MAE: {mae_train:.2f}, RMSE: {rmse_train:.2f}, R²: {r2_train:.2f}")
print(f"Validation MSE: {mse_val:.2f}, MAE: {mae_val:.2f}, RMSE: {rmse_val:.2f}, R²: {r2_val:.2f}")
print("*" * 100)

10. XGBoost Regressor

# Requires the xgboost package (pip install xgboost)
import xgboost as xgb

# Initialize the model
xgb_model = xgb.XGBRegressor(n_estimators=100, random_state=42)

# Train the model
xgb_model.fit(X_train, y_train)

# Predict
y_train_pred = xgb_model.predict(X_train)
y_val_pred = xgb_model.predict(X_val)

# Compute metrics
mse_train = mean_squared_error(y_train, y_train_pred)
mae_train = mean_absolute_error(y_train, y_train_pred)
rmse_train = np.sqrt(mse_train)
r2_train = r2_score(y_train, y_train_pred)
mse_val = mean_squared_error(y_val, y_val_pred)
mae_val = mean_absolute_error(y_val, y_val_pred)
rmse_val = np.sqrt(mse_val)
r2_val = r2_score(y_val, y_val_pred)

# Print metrics
print('XGBoost Regressor:')
print(f"Train MSE: {mse_train:.2f}, MAE: {mae_train:.2f}, RMSE: {rmse_train:.2f}, R²: {r2_train:.2f}")
print(f"Validation MSE: {mse_val:.2f}, MAE: {mae_val:.2f}, RMSE: {rmse_val:.2f}, R²: {r2_val:.2f}")
print("*" * 100)

11. LightGBM Regressor

# Requires the lightgbm package (pip install lightgbm)
import lightgbm as lgb

# Initialize the model
lgb_model = lgb.LGBMRegressor(n_estimators=100, random_state=42)

# Train the model
lgb_model.fit(X_train, y_train)

# Predict
y_train_pred = lgb_model.predict(X_train)
y_val_pred = lgb_model.predict(X_val)

# Compute metrics
mse_train = mean_squared_error(y_train, y_train_pred)
mae_train = mean_absolute_error(y_train, y_train_pred)
rmse_train = np.sqrt(mse_train)
r2_train = r2_score(y_train, y_train_pred)
mse_val = mean_squared_error(y_val, y_val_pred)
mae_val = mean_absolute_error(y_val, y_val_pred)
rmse_val = np.sqrt(mse_val)
r2_val = r2_score(y_val, y_val_pred)

# Print metrics
print('LightGBM Regressor:')
print(f"Train MSE: {mse_train:.2f}, MAE: {mae_train:.2f}, RMSE: {rmse_train:.2f}, R²: {r2_train:.2f}")
print(f"Validation MSE: {mse_val:.2f}, MAE: {mae_val:.2f}, RMSE: {rmse_val:.2f}, R²: {r2_val:.2f}")
print("*" * 100)

IV. Conclusion

In this article, we implemented 11 popular regression models and evaluated them with mean squared error (MSE), mean absolute error (MAE), root mean squared error (RMSE), and the coefficient of determination (R²). Based on these evaluation results, you can pick the regression model best suited to your task, and you can tune each model's parameters to squeeze out further performance, as sketched below.
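One common tuning approach is a cross-validated grid search; a sketch using the random forest from earlier (the parameter grid is illustrative, not a recommendation):

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Illustrative grid search over a few Random Forest hyperparameters
param_grid = {"n_estimators": [100, 200], "max_depth": [None, 10, 20]}
search = GridSearchCV(RandomForestRegressor(random_state=42), param_grid,
                      scoring="neg_mean_squared_error", cv=3, n_jobs=-1)
search.fit(X_train, y_train)
print("Best parameters:", search.best_params_)
print("Best CV MSE:", -search.best_score_)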

I hope this article helps you better understand and apply scikit-learn for machine learning regression tasks!
