Ordinary Least Squares (OLS) is a simple linear model in scikit-learn. In this tutorial, we will walk through an example showing beginners how to implement ordinary least squares linear regression.

## Import libraries

```python
import numpy as np
from sklearn.linear_model import LinearRegression
```

## Prepare data (X, y)

```python
X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
y = np.array([2, 4, 5, 7])
print("X = ")
print(X)
print("y = ")
print(y)
```

In this example, we use 4 samples, and each sample contains 2 features: **X** holds the samples and **y** holds the true values.

## Create an ordinary least squares model to estimate w and w_0

```python
reg = LinearRegression().fit(X, y)
```
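As an aside, the same least squares solution can be computed directly with NumPy, which makes it clear what the model is solving. This is a sketch using `np.linalg.lstsq`, where a column of ones is appended to `X` so the intercept is estimated along with the weights:

```python
import numpy as np

# Same training data as above
X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
y = np.array([2, 4, 5, 7])

# Append a column of ones so the intercept w_0 is estimated too
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])

# Solve the least squares problem: minimize ||X_aug @ params - y||^2
params, residuals, rank, sv = np.linalg.lstsq(X_aug, y, rcond=None)
w, w0 = params[:2], params[2]
print(w, w0)  # w is close to [1. 2.], w0 is close to -1
```

The result matches what scikit-learn stores in `reg.coef_` and `reg.intercept_`.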

We use the *fit()* function to minimize the ordinary least squares loss and obtain **w** and **w_0**.

## Print w and w_0

```python
coef_ = reg.coef_
print(coef_)
intercept_ = reg.intercept_
print(intercept_)
```

**w** is contained in *reg.coef_* and **w_0** is contained in *reg.intercept_*. In this example, they are:

```
[1. 2.]
-0.9999999999999982
```

This means each predicted value **y_{pre}** is:

**y_{pre} = 1*x_{1} + 2*x_{2} - 0.9999999999999982**
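As a sanity check, this formula reproduces the training targets. A small sketch, with the coefficient and intercept values taken from the output above:

```python
import numpy as np

X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
coef = np.array([1.0, 2.0])          # reg.coef_ from above
intercept = -0.9999999999999982      # reg.intercept_ from above

# y_pre = 1*x_1 + 2*x_2 + intercept for every sample
y_pre = X @ coef + intercept
print(y_pre)  # close to [2. 4. 5. 7.], the true y
```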

## How good are w and w_0?

We can compute the R² coefficient of determination to evaluate them.

```python
r2 = reg.score(X, y)
print(r2)
```

The R² coefficient is 1.0, which means **w** and **w_0** fit the true values perfectly.
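The same score can also be computed by hand from the definition R² = 1 - SS_res / SS_tot, which is what `reg.score()` does internally. A sketch, refitting the model from the data above:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
y = np.array([2, 4, 5, 7])
reg = LinearRegression().fit(X, y)

y_pred = reg.predict(X)
ss_res = np.sum((y - y_pred) ** 2)    # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)  # total sum of squares
r2 = 1 - ss_res / ss_tot
print(r2)  # 1.0, matching reg.score(X, y)
```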

## How to predict with X, w and w_0

We can use *X*, *w* and *w_0* to predict a value for a new sample.

```python
y_predict = reg.predict(np.array([[3, 5]]))
print(y_predict)
```

The predicted value is: 12
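Note that `predict()` also accepts several samples at once; each row of the input array is one sample. A short sketch:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
y = np.array([2, 4, 5, 7])
reg = LinearRegression().fit(X, y)

# Predict for two new samples at once
X_new = np.array([[3, 5], [0, 1]])
y_new = reg.predict(X_new)
print(y_new)  # approximately [12. 1.]
```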