Aristides Zenonos

Gradient Descent vs. Normal Equation for Linear Regression Problems

Updated: Jun 26, 2019

Gradient Descent (GD) vs. the Normal Equation

Gradient descent gives one way of minimizing the cost function J, where J(θ) = (1/2m) Σ (θ^(T)x^(i) − y^(i))^2 is the usual least-squares cost, summed over the m training examples. The "normal equation" performs the minimization explicitly, without resorting to an iterative algorithm: we minimize J by taking its derivatives with respect to the θj's and setting them to zero. This gives the optimal θ in a single step, with no iteration. The normal equation is:

θ = (X^(T)X)^(-1) X^(T)y

where X is the design matrix whose rows are the training examples and y is the vector of their target values.

Advantages and disadvantages of the methods:

Gradient descent:

  • Works well even when the number of features n is large

  • Need to choose the learning rate alpha

  • Needs many iterations (see the sketch after this list)
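To make the role of the learning rate and the iteration count concrete, here is a minimal NumPy sketch of batch gradient descent for linear regression. The function name and the default values of alpha and num_iters are illustrative choices, not something prescribed by the method itself:

import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1000):
    # Batch gradient descent for linear regression.
    # X: (m, n) design matrix (include a column of ones for the intercept).
    # y: (m,) vector of targets.
    # alpha: the learning rate, which must be chosen by hand.
    # num_iters: the number of iterations, which may need to be large.
    m = X.shape[0]
    theta = np.zeros(X.shape[1])
    for _ in range(num_iters):
        # Gradient of J(theta) = (1/2m) * ||X @ theta - y||^2
        gradient = X.T @ (X @ theta - y) / m
        theta = theta - alpha * gradient
    return theta

Each iteration costs only O(mn), which is why this scales to a large number of features, at the price of tuning alpha and running many iterations.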

Normal Equation:

  • No need to choose the learning rate alpha

  • No need to iterate

  • O(n^3): needs to compute the inverse of X^(T)X

  • Slow if n is very large
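For comparison, a minimal sketch of the normal equation in NumPy. It uses np.linalg.solve rather than forming the inverse explicitly, which applies the same formula but is numerically safer; the toy data is illustrative:

import numpy as np

def normal_equation(X, y):
    # Solves theta = (X^(T)X)^(-1) X^(T)y: no learning rate, no loop.
    # Solving the linear system avoids forming the inverse explicitly,
    # but the cost still grows roughly as O(n^3) in the number of features n.
    return np.linalg.solve(X.T @ X, X.T @ y)

# Toy usage: fit y = 1 + 2x.
X = np.column_stack([np.ones(5), np.arange(5.0)])
y = 1.0 + 2.0 * np.arange(5.0)
print(normal_equation(X, y))  # approximately [1. 2.]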


With the normal equation, computing the inverse of X^(T)X has complexity O(n^3). So if we have a very large number of features, the normal equation will be slow. In practice, once n exceeds roughly 10,000 it may be a good time to switch from the normal equation to an iterative method such as gradient descent. If X^(T)X is noninvertible, the common causes are:

  • Redundant features, where two features are very closely related (i.e. they are linearly dependent)

  • Too many features (e.g. m ≤ n, i.e. at least as many features as training examples). In this case, delete some features or use "regularization" (see the sketch after this list).
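When X^(T)X is singular for either of these reasons, two common remedies are the pseudoinverse and ridge-style regularization. A minimal sketch of both, assuming NumPy; the function names and the default lam value are illustrative, not from the original post:

import numpy as np

def fit_pinv(X, y):
    # np.linalg.pinv uses the SVD, so it returns a least-squares solution
    # even when X^(T)X is singular (redundant features, or m <= n).
    return np.linalg.pinv(X) @ y

def fit_ridge(X, y, lam=1e-3):
    # Regularization: adding lam * I makes X^(T)X + lam*I invertible
    # for any lam > 0. (A common refinement leaves the intercept
    # column unpenalized.)
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)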


Andrew Ng


