What is the difference between Ridge Regression, the LASSO, and ElasticNet?

tl;dr: “Ridge” is a fancy name for L2-regularization, “LASSO” means L1-regularization, and “ElasticNet” is a weighted mix of L1 and L2 regularization. If you’re still confused, keep reading…

Logistic Regression

This article is about different ways of regularizing regressions. In the context of classification, we might use logistic regression, but these ideas apply just as well to any kind of regression.

With binary logistic regression, the goal is to find a way to separate your two classes. There are a number of ways of visualizing this.


No matter which of these you choose to think of, we can agree logistic regression defines a decision rule

h(x|theta) = sigmoid(x·theta + b)

and seeks a theta which minimizes some objective function, usually

loss(theta) = −∑ [ y·log(h(x|theta)) + (1−y)·log(1−h(x|theta)) ]

which is obfuscated by a couple of clever tricks. It is derived from the intuitive objective function:

loss(theta) = ∑ |y − h(x|theta)|

i.e. (approximately) the number of misclassified x, which makes sense to try to minimize.
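As a minimal sketch, here is that decision rule and loss in numpy; all function and variable names here are my own for illustration, not from any particular library:

```python
import numpy as np

def sigmoid(z):
    # Squash a real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def h(X, theta, b):
    # Decision rule: predicted probability that each row of X is class 1.
    return sigmoid(X @ theta + b)

def log_loss(y, p):
    # Negative log-likelihood of the labels under the predicted probabilities.
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy example: two 2-d points, one per class.
X = np.array([[1.0, 2.0], [-1.0, -2.0]])
y = np.array([1, 0])
theta = np.array([0.5, 0.5])
p = h(X, theta, b=0.0)
print(log_loss(y, p))
```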

Regularization

In many cases, you wish to regularize your parameter vector theta. This means you want to minimize the number of misclassified examples while also minimizing the magnitude of the parameter vector. These objectives are in opposition, so the data scientist needs to decide on the appropriate balance between them using their intuition, or via many empirical tests (e.g. by cross-validation).

Let’s rename our previous loss function

loss(theta) = −∑ [ y·log(h(x|theta)) + (1−y)·log(1−h(x|theta)) ]

the basic_loss(theta). Our new, regularized loss function will look like:

loss(theta) = basic_loss(theta) + k * magnitude(theta)

Recall we’re trying to minimize loss(theta) which means we’re applying downwards pressure on both the number of mistakes we make as well as the magnitude of theta. In the above loss function, k is a hyperparameter which modulates the tradeoff of how much downwards pressure we apply to the error of the classifier defined by theta versus the magnitude of theta. Therefore, k encodes our prior beliefs, our intuitions, as to how the process we’re modeling is most likely to behave.
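The tradeoff that k modulates can be seen numerically in a small sketch (names are mine; the stand-in “basic loss” below simply rewards a large theta, as an overfit model might):

```python
import numpy as np

def l2_magnitude(theta):
    return np.sum(theta ** 2)

def regularized_loss(theta, basic_loss, k, magnitude=l2_magnitude):
    # k modulates the tradeoff between classifier error and the size of theta.
    return basic_loss(theta) + k * magnitude(theta)

# A stand-in basic loss that rewards large theta.
basic = lambda theta: 1.0 / (1.0 + np.abs(theta).sum())

small, large = np.array([0.1]), np.array([10.0])
# With k = 0 the large theta achieves lower loss;
# with k = 0.1 the magnitude penalty flips the outcome.
for k in (0.0, 0.1):
    print(k, regularized_loss(small, basic, k), regularized_loss(large, basic, k))
```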

Norms

Now on to the interesting part. It turns out there is not one, but many ways of defining the magnitude (also called the norm) of a vector. The most commonly used norms are the p-norms, which have the following character:

Lp(v) = ( ∑ |v_i|^p )^(1/p)

For p = 1 we get the L1 norm (also called the taxicab or Manhattan norm), for p = 2 we get the L2 norm (also called the Euclidean norm), and as p approaches ∞ the p-norm approaches the maximum norm (also called the infinity or Chebyshev norm). The Lp nomenclature comes from Lp (Lebesgue) spaces.
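These norms are available directly in numpy, for example:

```python
import numpy as np

v = np.array([3.0, -4.0])

l1   = np.linalg.norm(v, ord=1)       # |3| + |-4| = 7
l2   = np.linalg.norm(v, ord=2)       # sqrt(3^2 + 4^2) = 5
linf = np.linalg.norm(v, ord=np.inf)  # max(|3|, |4|) = 4

print(l1, l2, linf)  # 7.0 5.0 4.0
```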

Returning to our loss function, if we choose L1 as our norm,

loss(theta) = basic_loss(theta) + k * L1(theta)

is called “the LASSO” (Least Absolute Shrinkage and Selection Operator). If we choose the L2 norm,

loss(theta) = basic_loss(theta) + k * L2(theta)

is called “Ridge Regression” (which also turns out to have other names, e.g. Tikhonov regularization). If we decide we’d like a little of both,

loss(theta) = basic_loss(theta) + k(j*L1(theta) + (1-j)L2(theta))

is called “ElasticNet”. Notice the addition of a second hyperparameter, j. Notice also that ElasticNet encompasses both the LASSO and Ridge: setting j to 1 recovers the LASSO, and setting j to 0 recovers Ridge.
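The three penalties can be sketched side by side (the names here are mine; note that, as is conventional for Ridge, I use the squared L2 norm). Setting the basic loss to zero isolates the penalty terms and shows how j interpolates between them:

```python
import numpy as np

def L1(theta): return np.sum(np.abs(theta))
def L2(theta): return np.sum(theta ** 2)   # squared Euclidean norm, as Ridge uses

def lasso_loss(theta, basic, k):           # L1 penalty
    return basic(theta) + k * L1(theta)

def ridge_loss(theta, basic, k):           # L2 penalty
    return basic(theta) + k * L2(theta)

def elasticnet_loss(theta, basic, k, j):   # j-weighted mix of both
    return basic(theta) + k * (j * L1(theta) + (1 - j) * L2(theta))

basic = lambda theta: 0.0  # zero basic loss isolates the penalty terms
theta = np.array([1.0, -2.0])
print(elasticnet_loss(theta, basic, 0.5, 1.0), lasso_loss(theta, basic, 0.5))  # both 1.5
print(elasticnet_loss(theta, basic, 0.5, 0.0), ridge_loss(theta, basic, 0.5))  # both 2.5
```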

On the Naming of Algorithms

Academia has a complicated incentive structure. One aspect of that incentive structure is that it is desirable to have a unique name for your algorithmic invention, even when that invention is a minor derivative of another idea, or even the same idea applied in a different context. Take, for example, Principal Component Analysis.

PCA was invented in 1901 by Karl Pearson,[1] as an analogue of the principal axis theorem in mechanics; it was later independently developed and named by Harold Hotelling in the 1930s.[2] Depending on the field of application, it is also named the discrete Kosambi-Karhunen–Loève transform (KLT) in signal processing, the Hotelling transform in multivariate quality control, proper orthogonal decomposition (POD) in mechanical engineering, singular value decomposition (SVD) of X (Golub and Van Loan, 1983), eigenvalue decomposition (EVD) of XTX in linear algebra, factor analysis, Eckart–Young theorem (Harman, 1960), or Schmidt–Mirsky theorem in psychometrics, empirical orthogonal functions (EOF) in meteorological science, empirical eigenfunction decomposition (Sirovich, 1987), empirical component analysis (Lorenz, 1956), quasiharmonic modes (Brooks et al., 1988), spectral decomposition in noise and vibration, and empirical modal analysis in structural dynamics.

That’s 14 unique names for PCA.

I’m writing this article because the question at the top of this piece was quite hard to find an answer to online. I ended up finding part of the answer in a textbook (written by the authors of the regularization methods above, in fact) and the rest from Karen Sachs.

I personally believe that the words Lasso, Ridge, and ElasticNet should not exist. We should call these for what they are: L1-regularization, L2-regularization, and mixed-L1-L2-regularization. A mouthful, sure, but dramatically more unambiguous.

The organization of scikit-learn may have been what caused my confusion in the first place. When looking through its linear_model module, Lasso is its own class, despite the fact that the logistic regression class also has an L1-regularization option (the same is true for Ridge/L2). This is unexpected from a Python library, since one of the core dogmas of Python is:

There should be one - and preferably only one - obvious way to do it
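For instance, scikit-learn’s Lasso class and its ElasticNet class with l1_ratio=1.0 fit the same model; a quick check on randomly generated data (a sketch assuming scikit-learn is installed):

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

# Synthetic regression problem with two truly irrelevant features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.0, -2.0, 0.0, 0.5]) + 0.1 * rng.normal(size=100)

lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=1.0).fit(X, y)  # pure-L1 ElasticNet

print(lasso.coef_)
print(enet.coef_)  # same coefficients, two differently named classes
```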

Comparing regularization techniques — Intuition

Now that we have disambiguated what these regularization techniques are, let’s finally address the question: What is the difference between Ridge Regression, the LASSO, and ElasticNet?

The intuition is as follows:

Consider the plots of the absolute-value and square functions.

[Figure: the absolute-value function (red) and the square function (blue)]

When minimizing a loss function with a regularization term, each of the entries in the parameter vector theta is “pulled” down towards zero. Think of each entry in theta lying on one of the above curves and being subjected to “gravity” proportional to the regularization hyperparameter k. In the context of L1-regularization, the entries of theta are pulled towards zero proportionally to their absolute values (they lie on the red curve); in the context of L2-regularization, proportionally to their squares (the blue curve).

At first glance, L2 seems more severe, but the caveat is that a different picture emerges as we approach zero:

[Figure: the same curves zoomed in near zero]

The result is that L2 regularization drives many of your parameters down, but will not necessarily eradicate them, since the penalty all but disappears near zero. By contrast, L1 regularization forces non-essential entries of theta all the way to zero.
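To make the “gravity” picture concrete, compare the derivatives of the two penalties at a few distances from zero (a small numpy sketch; the pull on an entry is the slope of its penalty curve):

```python
import numpy as np

# Entries of theta at various (positive) distances from zero.
x = np.array([1e-3, 0.5, 2.0])

l1_pull = np.ones_like(x)  # d|x|/dx = 1 for x > 0: a constant pull toward zero
l2_pull = 2 * x            # d(x^2)/dx = 2x: the pull vanishes as x -> 0

# L1's pull stays constant even near zero, while L2's vanishes there
# and dominates far from zero.
print(l1_pull)
print(l2_pull)
```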

Adding ElasticNet (with equal parts L1 and L2, i.e. j = 0.5) to the picture, we can see it functions as a compromise between the two. One can imagine bending the yellow curve towards either red or blue by tuning the hyperparameter j.

[Figure: abs (red), square (blue), and their equal-parts ElasticNet mix (yellow)]

Comparing regularization techniques — In Practice

There are a number of reasons to regularize regressions. Typically, the goal is to prevent overfitting, and in that case, L2 has some nice theoretical guarantees built into it. Another purpose for regularization is often interpretability, and in that case, L1-regularization can be quite powerful.

In my work, I deal with a lot of proteomics data: counts of some number of proteins for some number of patients, i.e. a matrix of patients by protein abundances. The goal is to understand which proteins play a role in separating your patients by label.

This is an Ovarian Cancer dataset. Let’s first perform logistic regression with an L2-penalty and try to understand how the cancer subtypes are distinct. This is a plot of the learned theta:

[Figure: learned theta, L2-regularized Logistic Regression]

You see that many, if not all, proteins register as significant.
Now consider the same approach, but with L1-regularization:

[Figure: learned theta, L1-regularized Logistic Regression]

A much clearer picture emerges of the relevant proteins to each Ovarian Cancer subtype. This is the power of L1-regularization for interpretability.
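This effect is easy to reproduce on synthetic data shaped like such a matrix (a sketch assuming scikit-learn is installed; the data here is randomly generated, not the Ovarian Cancer dataset):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the proteomics matrix: 200 "patients" x 50 "proteins",
# only 5 of which actually carry signal.
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           n_redundant=0, random_state=0)

l2_model = LogisticRegression(penalty="l2", C=0.1, solver="liblinear").fit(X, y)
l1_model = LogisticRegression(penalty="l1", C=0.1, solver="liblinear").fit(X, y)

l2_zeros = np.sum(l2_model.coef_ == 0)
l1_zeros = np.sum(l1_model.coef_ == 0)
print(l2_zeros, l1_zeros)  # L1 zeroes out far more coefficients than L2
```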

Conclusion

Regularization can be very powerful, but it’s somewhat under-appreciated, partly, I think, because the intuitions aren’t always well explained.
The ideas are mostly quite simple, yet not terribly well documented much of the time. I hope this article helps mend that deficit.

Thanks to Karen Sachs for explaining the intuitions behind these norms many years ago.

Thanks also to the scikit-learn developers. Despite the occasional unintuitive API, the code you make available is invaluable to data scientists like myself.
