Today I was curious why we stick the word ordinary in “Ordinary Least Squares.” OLS is the plain-vanilla linear-regression recipe: find the line (or hyperplane) that minimises the sum of squared residuals between actual and predicted values.

“Ordinary” flags that we're using the basic, unweighted least-squares setup:

  • every data point gets equal weight
  • errors are assumed uncorrelated with constant variance
  • no fancy covariance modelling (that's where WLS or GLS step in)

When those assumptions break — say, some observations are noisier, or errors are correlated — we can use Weighted or Generalised Least Squares instead.


Tiny code demo

Below are two ways to solve the exact same toy problem so you can see OLS in action.

1. Closed-form (normal equation)

import numpy as np

# Fake data: x drives y with slope≈0.7 and intercept≈1.5
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2.2, 2.9, 3.6, 4.3, 5.0])

# Design matrix with an intercept column
ones = np.ones(len(x))
X = np.column_stack([ones, x])
# array([[1., 1.],
#        [1., 2.],
#        [1., 3.],
#        [1., 4.],
#        [1., 5.]])

# β̂ = (XᵀX)⁻¹ Xᵀ y

# XTX is the product of X transpose and X
XTX = X.T @ X
# array([[ 5., 15.],
#        [15., 55.]])

# Solve the normal equations XᵀX β = Xᵀy; np.linalg.solve is more
# numerically stable than forming the inverse explicitly
beta_hat = np.linalg.solve(XTX, X.T @ y)

# Extract intercept and slope from beta_hat
intercept, slope = beta_hat

print(f'intercept: {intercept:.3f}, slope: {slope:.3f}')

Should print (the :.3f format spec pads to three decimal places):

intercept: 1.500, slope: 0.700
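
If you'd rather not form XᵀX at all, NumPy's np.linalg.lstsq solves the same minimisation directly via an SVD-based routine, which is the usual recommendation for real data. A self-contained sketch on the same toy problem:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2.2, 2.9, 3.6, 4.3, 5.0])
X = np.column_stack([np.ones(len(x)), x])

# lstsq minimises ||Xβ - y||² without explicitly inverting XᵀX;
# it also returns residuals, rank, and singular values
beta_hat, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)

print(f'intercept: {beta_hat[0]:.3f}, slope: {beta_hat[1]:.3f}')
# → intercept: 1.500, slope: 0.700
```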

2. Sanity-check with scikit-learn

from sklearn.linear_model import LinearRegression

model = LinearRegression().fit(x.reshape(-1, 1), y)
print(f'intercept: {model.intercept_:.3f}, slope: {model.coef_[0]:.3f}')

Should print the same thing:

intercept: 1.500, slope: 0.700

Takeaways

  • “Ordinary” = the basic unweighted, constant-variance, no-correlation flavour of least squares.
  • Those assumptions keep the math (and code) dead simple—often good enough.
  • When life gets messy (heteroskedasticity, correlated errors, etc.), swap in WLS or GLS.
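
To make that last bullet concrete, here's a minimal sketch of weighted least squares done by hand in NumPy. The perturbed data point and the weights are invented for illustration; the trick is that WLS is just OLS after rescaling each row by the square root of its weight, so the fit minimises Σ wᵢ rᵢ² instead of Σ rᵢ².

```python
import numpy as np

# Same toy data, but with the last point pushed off the line
# (a made-up "noisy" observation for illustration)
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2.2, 2.9, 3.6, 4.3, 6.0])
X = np.column_stack([np.ones(len(x)), x])

# Hypothetical inverse-variance weights: trust the last point far less
w = np.array([1.0, 1.0, 1.0, 1.0, 0.01])

# WLS = OLS on rows rescaled by sqrt(w)
sw = np.sqrt(w)
beta_wls = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

print(f'intercept: {beta_wls[0]:.3f}, slope: {beta_wls[1]:.3f}')
# The down-weighted outlier barely moves the fit away from (1.5, 0.7)
```

Plain OLS on the same perturbed data would land near slope 0.9; the weighting pulls it back toward the clean first four points.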