Ridge (L2) and Lasso (L1) Regression
Goals
- Introduce ridge (L2) and Lasso (L1) regression
- As a complexity penalty
- As a tuneable hierarchy of models to be selected by cross-validation
- As a Bayesian posterior estimate
Reading
These notes are a supplement to Section 6.2 of Gareth et al. (2021).
Approximately collinear regressors
Sometimes it makes sense to penalize very large values of \(\hat{\beta}\).
Consider the following contrived example. Let
\[ \begin{aligned} y_n ={}& z_n + \varepsilon_n \quad\textrm{where}\quad z_n \sim{} \mathcal{N}\left(0,1\right) \quad\textrm{and}\quad \varepsilon_n \sim{} \mathcal{N}\left(0,1\right). \end{aligned} \]
If we regress \(y \sim \beta z\), then \(\hat{\beta}\sim \mathcal{N}\left(1, 1/N\right)\) with no problems. But suppose we actually also observe \(x_n = z_n + \epsilon_n\) with \(\epsilon_n \sim \mathcal{N}\left(0, \delta\right)\) for very small \(\delta \ll 1\). Then \(x_n\) and \(z_n\) are highly correlated:
\[ \boldsymbol{M}_{\boldsymbol{X}}= \mathbb{E}_{\,}\left[\begin{pmatrix}x_n \\ z_n\end{pmatrix} \begin{pmatrix}x_n & z_n\end{pmatrix}\right] = \begin{pmatrix} 1 + \delta & 1 \\ 1 & 1 \end{pmatrix} \quad\Rightarrow\quad \boldsymbol{M}_{\boldsymbol{X}}^{-1} = \frac{1}{\delta} \begin{pmatrix} 1 & -1 \\ -1 & 1 + \delta \end{pmatrix}. \]
Therefore, if we try to regress \(y\sim \beta_xx+ \beta_z z\), we get
\[ \mathrm{Cov}\left(\begin{pmatrix}\hat{\beta}_x \\ \hat{\beta}_z \end{pmatrix}\right) = \frac{1}{N} \boldsymbol{M}_{\boldsymbol{X}}^{-1} = \frac{1}{N} \frac{1}{\delta} \begin{pmatrix} 1 & -1 \\ -1 & 1 + \delta \end{pmatrix}. \]
Note that \(\mathrm{Var}\left(\hat{\beta}_x\right) = \frac{1}{N\delta}\), which is very large when \(\delta\) is small. With high probability, we will estimate a \(\hat{\beta}\) that has a very large magnitude, \(\left\Vert\hat{\beta}\right\Vert_2^2\), even though its two components should be nearly the negative of one another. This can be problematic in practice. For example, if \(x_n\) and \(z_n\) are not as well–correlated in our test set or application as they were in our training set, we will get wild and highly variable predicted values.
Does it make sense to permit such a large variance? Wouldn’t it be better to choose a slightly smaller \(\hat{\beta}\) that is, in turn, somewhat more “stable”?
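To make this concrete, here is a minimal simulation sketch of the example above (the values \(\delta = 0.01\), \(N = 100\), and the number of replications are illustrative choices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
N, delta, n_sims = 100, 0.01, 1000

beta_hats = []
for _ in range(n_sims):
    z = rng.normal(0.0, 1.0, N)
    x = z + rng.normal(0.0, np.sqrt(delta), N)  # x is nearly identical to z
    y = z + rng.normal(0.0, 1.0, N)
    X = np.column_stack([x, z])
    # OLS: beta_hat = (X^T X)^{-1} X^T y
    beta_hats.append(np.linalg.solve(X.T @ X, X.T @ y))
beta_hats = np.array(beta_hats)

print(beta_hats[:, 0].var())           # roughly 1 / (N delta), much bigger than 1 / N
print(np.corrcoef(beta_hats.T)[0, 1])  # close to -1: the two coefficients nearly cancel
```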
Penalizing large regressors with ridge regression
Recall that one perspective on regression is that we choose \(\hat{\beta}\) to minimize the loss
\[ \hat{\beta}:= \underset{\beta}{\mathrm{argmin}}\, \sum_{n=1}^N(y_n - \beta^\intercal x_n)^2 =: RSS(\beta). \]
We motivated this as an approximation to the expected prediction loss, \(L(\beta) = \mathbb{E}_{y_\mathrm{new}, x_\mathrm{new}}\left[(y_\mathrm{new}- \beta^\intercal x_\mathrm{new})^2\right]\). But that made sense when we had a fixed set of regressors, and we have shown that the correspondence breaks down when we are searching the space of regressors. In particular, \(RSS(\beta)\) always decreases as we add more regressors, but \(L(\beta)\) may not.
Instead, let us consider minimizing \(RSS(\beta)\), but with an additional penalty for large \(\hat{\beta}\). There are many ways to do this! But one convenient one is as follows. For now, pick a \(\lambda\), and choose \(\hat{\beta}\) to minimize:
\[ \hat{\beta}(\lambda) := \underset{\beta}{\mathrm{argmin}}\, L_{ridge}(\beta, \lambda) := RSS(\beta) + \lambda \left\Vert\beta\right\Vert_2^2 = RSS(\beta) + \lambda \sum_{p=1}^P \beta_p^2. \]
This is known as ridge regression, or L2–penalized regression. The latter name comes from the fact that the penalty \(\left\Vert\beta\right\Vert_2^2\) is the squared L2 norm of the coefficient vector; later in these notes we will study the L1 version, which is known as the Lasso.
The term \(\lambda \left\Vert\beta\right\Vert_2^2\) is known as a “regularizer,” since it imposes some “regularity” to the estimate \(\hat{\beta}(\lambda)\). Note that
- As \(\lambda \rightarrow \infty\), \(\hat{\beta}(\lambda) \rightarrow \boldsymbol{0}\).
- When \(\lambda = 0\), \(\hat{\beta}(\lambda) = \hat{\beta}\), the OLS estimator.
So the inclusion of \(\lambda\) is to “shrink” the estimate \(\hat{\beta}(\lambda)\) towards zero. Note that since the ridge loss has an extra penalty for the norm, it is impossible for the OLS solution to have a smaller norm than the ridge solution.
The ridge regression regularizer has the considerable advantage that the optimum is available in closed form, since
\[ \begin{aligned} L_{ridge}(\beta, \lambda) ={}& (\boldsymbol{Y}- \boldsymbol{X}\beta)^\intercal(\boldsymbol{Y}- \boldsymbol{X}\beta) + \lambda \beta^\intercal\beta \\={}& \boldsymbol{Y}^\intercal\boldsymbol{Y}-2 \boldsymbol{Y}^\intercal\boldsymbol{X}\beta + \beta^\intercal\boldsymbol{X}^\intercal\boldsymbol{X}\beta+ \lambda \beta^\intercal\beta \\={}& \boldsymbol{Y}^\intercal\boldsymbol{Y}-2 \boldsymbol{Y}^\intercal\boldsymbol{X}\beta + \beta^\intercal\left(\boldsymbol{X}^\intercal\boldsymbol{X}+ \lambda \boldsymbol{I}\right) \beta \quad \Rightarrow \\ \frac{\partial L_{ridge}(\beta, \lambda)}{\partial \beta} ={}& -2 \boldsymbol{X}^\intercal\boldsymbol{Y}+ 2 \left(\boldsymbol{X}^\intercal\boldsymbol{X}+ \lambda \boldsymbol{I}\right) \beta \quad\Rightarrow \\ \hat{\beta}(\lambda) ={}& \left(\boldsymbol{X}^\intercal\boldsymbol{X}+ \lambda \boldsymbol{I}\right)^{-1} \boldsymbol{X}^\intercal\boldsymbol{Y}. \end{aligned} \]
Note that \(\boldsymbol{X}^\intercal\boldsymbol{X}+ \lambda \boldsymbol{I}\) is always invertible if \(\lambda > 0\), even if \(\boldsymbol{X}\) is not full–column rank. In this sense, ridge regression can safely handle collinearity.
Prove that \(\boldsymbol{X}^\intercal\boldsymbol{X}+ \lambda \boldsymbol{I}\) is invertible if \(\lambda > 0\). Hint: using the fact that \(\boldsymbol{X}^\intercal\boldsymbol{X}\) is symmetric and positive semi–definite, find a lower bound on the smallest eigenvalue of \(\boldsymbol{X}^\intercal\boldsymbol{X}+ \lambda \boldsymbol{I}\).
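As a quick numerical check of the closed form above, we can compare it against scikit-learn's Ridge (a sketch assuming scikit-learn is available; its alpha parameter plays the role of \(\lambda\) when the intercept is disabled):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
N, P, lam = 50, 10, 2.0
X = rng.normal(size=(N, P))
Y = X @ rng.normal(size=P) + rng.normal(size=N)

# Closed form: beta_hat(lambda) = (X^T X + lambda I)^{-1} X^T Y
beta_closed = np.linalg.solve(X.T @ X + lam * np.eye(P), X.T @ Y)

# scikit-learn's Ridge minimizes ||Y - X beta||^2 + alpha ||beta||^2
beta_sklearn = Ridge(alpha=lam, fit_intercept=False).fit(X, Y).coef_

print(np.allclose(beta_closed, beta_sklearn))  # True
```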
Standardizing regressors
Suppose we re-scale one of the regressors \(x_{np}\) by some value \(\alpha\) with \(\alpha \ll 1\), i.e., we regress on \(x'_{np} = \alpha x_{np}\) instead of \(x_{np}\). As we know, the OLS minimizer becomes \(\hat{\beta}_p' = \hat{\beta}_p / \alpha\), and the fitted values \(\hat{\boldsymbol{Y}}\) are unchanged at \(\lambda = 0\). But for a particular \(\lambda > 0\), how does this affect the ridge solution? We can write
\[ \lambda \hat{\beta}_p'^2 = \frac{\lambda}{\alpha^2} \hat{\beta}_p^2. \]
That is, to achieve the same fitted contribution, the coefficient \(\hat{\beta}_p'\) must be larger by a factor of \(1/\alpha\), and so it is “punished” much more heavily than the corresponding \(\hat{\beta}_p\). In turn, for a particular \(\lambda\), the contribution of the re-scaled regressor will tend to be shrunk more aggressively towards zero (although this is not guaranteed).
The point is that re-scaling the regressors changes the meaning of \(\lambda\). Correspondingly, if different regressors have very different typical scales, such as age versus income, then ridge regression will shrink their coefficients towards zero to very different degrees.
Similarly, it often doesn’t make sense to penalize the constant, so we might take \(\beta_1\) to be the constant (\(x_{n1} = 1\)), and write
\[ \hat{\beta}(\lambda) := \underset{\beta}{\mathrm{argmin}}\, \left( RSS(\beta) + \lambda \sum_{p=2}^P \beta_p^2 \right). \]
But this gets tedious, and assumes we have included a constant.
Instead, we might invoke the FWL theorem, center the response and regressors at their mean values, and then do penalized regression.
For both these reasons, before performing ridge regression (or any other similar penalized regression), we typically standardize the regressors, defining
\[ x'_{np} := \frac{x_{np} - \bar{x}_p}{\sqrt{\frac{1}{N} \sum_{n=1}^N(x_{np} - \bar{x}_p)^2}} \quad\textrm{where}\quad \bar{x}_p := \frac{1}{N} \sum_{n=1}^Nx_{np}. \]
We then run ridge regression on \(\boldsymbol{x}'\) rather than \(x\), so that we
- Don’t penalize the constant term and
- Penalize every coefficient the same regardless of its regressor’s typical scale.
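In practice, one common way to do this (a sketch using scikit-learn, whose StandardScaler and Ridge are assumed available; the data and penalty here are illustrative) is to standardize the columns and let the model fit an unpenalized intercept in place of the constant term:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
N = 200
age = rng.uniform(20, 70, N)          # typical scale: tens
income = rng.uniform(2e4, 2e5, N)     # typical scale: tens of thousands
X = np.column_stack([age, income])
y = 0.1 * age + 1e-5 * income + rng.normal(size=N)

# StandardScaler centers and rescales each regressor; Ridge's intercept is not penalized.
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(X, y)
print(model.named_steps["ridge"].coef_)  # coefficients on the standardized scale
```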
A complexity penalty
Suppose that \(y_n = \beta^\intercal\boldsymbol{x}_n + \varepsilon_n\) for some \(\beta\). Note that, for a fixed \(\boldsymbol{x}_\mathrm{new}\), \(y_\mathrm{new}\), and fixed \(\boldsymbol{X}\),
\[ \begin{aligned} \mathbb{E}_{\,}\left[y_\mathrm{new}- \boldsymbol{x}_\mathrm{new}^\intercal\hat{\beta}(\lambda)\right] ={}& \boldsymbol{x}_\mathrm{new}^\intercal\left(\beta - \mathbb{E}_{\,}\left[\hat{\beta}(\lambda)\right]\right) \\={}& \boldsymbol{x}_\mathrm{new}^\intercal\left(\boldsymbol{I}- \left(\boldsymbol{X}^\intercal\boldsymbol{X}+ \lambda \boldsymbol{I}\right)^{-1} (\boldsymbol{X}^\intercal\boldsymbol{X})\right) \beta \end{aligned} \]
That is, as \(\lambda\) grows, \(\hat{\beta}(\lambda)\) becomes biased. However, by the same reasoning as in the standard case, under homoskedasticity,
\[ \mathrm{Cov}\left(\hat{\beta}(\lambda)\right) = \sigma^2 \left(\boldsymbol{X}^\intercal\boldsymbol{X}+ \lambda \boldsymbol{I}\right)^{-1} (\boldsymbol{X}^\intercal\boldsymbol{X}) \left(\boldsymbol{X}^\intercal\boldsymbol{X}+ \lambda \boldsymbol{I}\right)^{-1} , \]
which is smaller than \(\mathrm{Cov}\left(\hat{\beta}\right)\) in the sense that \(\mathrm{Cov}\left(\hat{\beta}\right) - \mathrm{Cov}\left(\hat{\beta}(\lambda)\right)\) is a positive definite matrix for \(\lambda > 0\) and full-rank \(\boldsymbol{X}\). In this sense, \(\hat{\beta}(\lambda)\) is a one-dimensional family of estimators that trades off bias and variance. We can thus use cross-validation to choose the \(\lambda\) that minimizes the estimated MSE.
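A sketch of what this looks like in practice, using scikit-learn's RidgeCV (the grid of candidate penalties below is an arbitrary illustrative choice):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
N, P = 100, 20
X = rng.normal(size=(N, P))
y = X[:, 0] - X[:, 1] + rng.normal(size=N)

# Try a log-spaced grid of penalty values and pick the one with the best CV error.
alphas = np.logspace(-3, 3, 25)
model = make_pipeline(StandardScaler(), RidgeCV(alphas=alphas, cv=5))
model.fit(X, y)
print(model.named_steps["ridgecv"].alpha_)  # the penalty selected by 5-fold CV
```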
Constrained optimization and the minimum norm interpolator
We can rewrite the ridge loss in a suggestive way. Fix \(\lambda = \hat{\lambda}\), write \(b= \left\Vert\hat{\boldsymbol{\beta}}(\hat{\lambda})\right\Vert_2^2\), and write
\[ \mathscr{L}_{ridge}'(\boldsymbol{\beta}, \lambda) = RSS(\boldsymbol{\beta}) + \lambda (\left\Vert\boldsymbol{\beta}\right\Vert_2^2 - b). \]
Since \(b\) is fixed, \(\hat{\boldsymbol{\beta}}(\hat{\lambda})\) still satisfies
\[ \left. \frac{\partial \mathscr{L}_{ridge}(\boldsymbol{\beta}, \lambda)}{\partial \boldsymbol{\beta}} \right|_{\hat{\boldsymbol{\beta}}, \hat{\lambda}} = \left. \frac{\partial RSS(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} \right|_{\hat{\boldsymbol{\beta}}} + \hat{\lambda}\left. \frac{\partial \left\Vert\boldsymbol{\beta}\right\Vert_2^2}{\partial \boldsymbol{\beta}} \right|_{\hat{\boldsymbol{\beta}}} = \boldsymbol{0}. \]
So the optimum is actually the same. The loss \(\mathscr{L}_{ridge}'(\boldsymbol{\beta}, \lambda)\) is the Lagrange multiplier version of the constrained optimization problem
\[ \hat{\boldsymbol{\beta}}(b) := \underset{\boldsymbol{\beta}: \left\Vert\boldsymbol{\beta}\right\Vert_2^2 \le b}{\mathrm{argmin}}\, RSS(\boldsymbol{\beta}). \]
Taking \(b= \left\Vert\hat{\boldsymbol{\beta}}(\lambda)\right\Vert_2^2\), we see that, for every \(\lambda\), there is a \(b\), and vice–versa. The ridge regression is thus equivalent to minimizing the squared error loss subject to the constraint that it lies within an L2 ball. Making the ball larger allows better fit to the data, but by using larger \(\boldsymbol{\beta}\). This intuition is particularly useful when contrasting ridge regression with the lasso.
This intuition is also useful for understanding what happens as \(\lambda \rightarrow 0\). Write \(r= RSS(\hat{\boldsymbol{\beta}}(\hat{\lambda}))\), and note that we could also have written
\[ \mathscr{L}_{ridge}''(\boldsymbol{\beta}, \lambda) = \frac{1}{\lambda} ( RSS(\boldsymbol{\beta}) - r) + \left\Vert\boldsymbol{\beta}\right\Vert_2^2. \]
As before, for fixed \(\lambda\) (and so fixed \(r\)), \(\mathscr{L}_{ridge}''(\boldsymbol{\beta}, \lambda)\) has the same ridge optimum. However, we can write
\[ \hat{\boldsymbol{\beta}}(r) := \underset{\boldsymbol{\beta}: RSS(\boldsymbol{\beta}) \le r}{\mathrm{argmin}}\, \left\Vert\boldsymbol{\beta}\right\Vert_2^2. \]
We can thus equivalently interpret the ridge estimator as the one that produces the smallest \(\boldsymbol{\beta}\) in the L2 norm, subject to the \(RSS\) being no larger than \(r\). As \(\lambda \rightarrow 0\), infinite weight is put on the \(RSS(\boldsymbol{\beta})\) term, and \(r\rightarrow 0\) whenever \(\boldsymbol{X}\) has rank \(N\) (which requires \(P \ge N\)). So we see that
\[ \lim_{r\rightarrow 0} \hat{\boldsymbol{\beta}}(r) = \underset{\boldsymbol{\beta}: RSS(\boldsymbol{\beta}) = 0}{\mathrm{argmin}}\, \left\Vert\boldsymbol{\beta}\right\Vert_2^2. \]
That is, when there are many \(\boldsymbol{\beta}\) that have \(RSS(\boldsymbol{\beta}) = 0\) — i.e., which “interpolate” the data — the limiting ridge estimator chooses the one with minimum norm. This is called the “ridgeless interpolator.” (See the “ridgeless” lecture in Tibshirani (2023).)
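A small numerical illustration (a sketch with made-up dimensions, taking \(P > N\) so that many \(\boldsymbol{\beta}\) interpolate the data): the minimum-norm interpolator computed via the pseudoinverse agrees with the ridge solution for a tiny \(\lambda\).

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 20, 50                 # more regressors than observations
X = rng.normal(size=(N, P))
Y = rng.normal(size=N)

# Minimum-norm solution of X beta = Y via the Moore-Penrose pseudoinverse.
beta_min_norm = np.linalg.pinv(X) @ Y

# Ridge solution with a very small lambda.
lam = 1e-6
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(P), X.T @ Y)

print(np.allclose(X @ beta_min_norm, Y))                  # True: RSS is (numerically) zero
print(np.max(np.abs(beta_ridge - beta_min_norm)) < 1e-4)  # True: ridge -> min-norm interpolator
```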
A Bayesian posterior
Another way to interpret the ridge penalty is as a Bayesian posterior mean. If
\[ \boldsymbol{\beta}\sim \mathcal{N}\left(\boldsymbol{0}, \sigma_\beta^2 \boldsymbol{I}\right) \quad\textrm{and}\quad \boldsymbol{Y}\vert \boldsymbol{\beta}, \boldsymbol{X}\sim \mathcal{N}\left(\boldsymbol{X}\boldsymbol{\beta}, \sigma^2 \boldsymbol{I}\right), \]
then
\[ \boldsymbol{\beta}\vert \boldsymbol{Y}, \boldsymbol{X}\sim \mathcal{N}\left( \left(\boldsymbol{X}^\intercal\boldsymbol{X}+ \frac{\sigma^{2}}{\sigma_\beta^{2}} \boldsymbol{I}\right)^{-1} \boldsymbol{X}^\intercal\boldsymbol{Y}, \sigma^{2}\left( \boldsymbol{X}^\intercal\boldsymbol{X}+ \frac{\sigma^{2}}{\sigma_\beta^{2}} \boldsymbol{I}\right)^{-1} \right). \]
One way to derive this is to recognize that, if \(\boldsymbol{\beta}\sim \mathcal{N}\left(\mu, \Sigma\right)\), then
\[ \log \mathbb{P}\left(\boldsymbol{\beta}| \mu, \Sigma\right) = -\frac{1}{2} \beta^\intercal\Sigma^{-1} \beta + \beta^\intercal\Sigma^{-1} \mu + \textrm{Terms that do not depend on }\beta. \]
We can write out the distribution of \(\mathbb{P}\left(\beta | \boldsymbol{Y}\right) = \mathbb{P}\left(\beta, \boldsymbol{Y}\right) / \mathbb{P}\left(\boldsymbol{Y}\right)\), gather terms that depend on \(\beta\), and read off the mean and covariance:
\[ \begin{aligned} \log \mathbb{P}\left(\beta, \boldsymbol{Y}\right) ={}& -\frac{1}{2} \sigma^{-2} (\boldsymbol{Y}- \boldsymbol{X}\beta)^\intercal(\boldsymbol{Y}- \boldsymbol{X}\beta) -\frac{1}{2} \sigma_\beta^{-2} \beta^\intercal\beta + \textrm{Terms that do not depend on }\beta \\={}& -\frac{1}{2} \sigma^{-2} \beta^\intercal\boldsymbol{X}^\intercal\boldsymbol{X}\beta + \sigma^{-2} \beta^\intercal\boldsymbol{X}^\intercal\boldsymbol{Y} -\frac{1}{2} \sigma_\beta^{-2} \beta^\intercal\beta + \textrm{Terms that do not depend on }\beta \\={}& -\frac{1}{2} \beta^\intercal\left(\sigma^{-2} \boldsymbol{X}^\intercal\boldsymbol{X}+ \sigma_\beta^{-2} \boldsymbol{I}\right) \beta + \sigma^{-2} \beta^\intercal\boldsymbol{X}^\intercal\boldsymbol{Y}+ \textrm{Terms that do not depend on }\beta. \end{aligned} \]
From this, we can read off \(\Sigma\) and \(\mu\), and get the above expression.
If we take \(\lambda = \sigma^2 / \sigma_\beta^2\), then we can see that
\[ \mathbb{E}_{\,}\left[\beta | \boldsymbol{Y}\right] = \left(\boldsymbol{X}^\intercal\boldsymbol{X}+ \lambda \boldsymbol{I}\right)^{-1} \boldsymbol{X}^\intercal\boldsymbol{Y}= \hat{\beta}(\lambda). \]
This gives the ridge procedure some interpretability. First of all, the use of the ridge penalty corresponds to a prior belief that \(\beta\) is not too large.
Second, the ridge penalty you use reflects the relative scale of the noise variance and prior variance in a way that makes sense:
- If \(\sigma \gg \sigma_\beta\), then the data is noisy (relative to our prior beliefs). We should not take fitting the data too seriously, and so should estimate a smaller \(\beta\) than \(\hat{\beta}\). And indeed, in this case \(\lambda\) is large, and a large \(\lambda\) shrinks the estimated coefficients.
- If \(\sigma_\beta \gg \sigma\), then we find it plausible that \(\beta\) is very large (relative to the variability in our data). We should not take our prior beliefs too seriously, and estimate a coefficient that matches \(\hat{\beta}\). And indeed, in this case, \(\lambda\) is small, and we do not shrink the coefficients much.
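As a quick numerical check of the correspondence \(\lambda = \sigma^2 / \sigma_\beta^2\) (a sketch; the variances and dimensions below are made-up illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5
sigma, sigma_beta = 2.0, 0.5

X = rng.normal(size=(N, P))
beta_true = rng.normal(scale=sigma_beta, size=P)   # a draw from the prior
Y = X @ beta_true + rng.normal(scale=sigma, size=N)

# Ridge estimate with lambda = sigma^2 / sigma_beta^2.
lam = sigma**2 / sigma_beta**2
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(P), X.T @ Y)

# Posterior mean computed from the precision (inverse covariance) form.
precision = X.T @ X / sigma**2 + np.eye(P) / sigma_beta**2
posterior_mean = np.linalg.solve(precision, X.T @ Y / sigma**2)

print(np.allclose(beta_ridge, posterior_mean))  # True
```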
Sparse regression with the L1 norm (lasso)
One problem with the L2 solution might be that the solution \(\hat{\boldsymbol{\beta}}_{L2}(\lambda)\) is still “dense”, meaning that, in general, every entry of it is nonzero, and we still have to invert a \(P\times P\) matrix.
For example, consider the highly correlated regressor example from earlier in these notes. The ridge regression will still include both regressors, and their coefficient estimates will still be highly negatively correlated, but both will be shrunk towards zero. Maybe it would make more sense to select only one of the two variables to include. Let us try to think of how we can change the penalty term to achieve this.
A “sparse” solution is an estimator \(\hat{\boldsymbol{\beta}}\) in which many of the entries are zero — that is, an estimated regression line that does not use many of the available regressors.
In a word — ridge regression estimates are not sparse. Let’s try to derive one that is by changing the penalty.
A very intuitive way to produce a sparse estimate is as follows: \[ \hat{\boldsymbol{\beta}}_{L0}(\lambda) := \underset{\boldsymbol{\beta}}{\mathrm{argmin}}\,\left( RSS(\boldsymbol{\beta}) + \lambda \sum_{p} 1\left(\beta_p \ne 0\right) \right) \quad\textrm{(practically difficult)} \]
This trades off the fit to the data against a penalty for using more regressors. It makes sense, but it is very difficult to compute. In particular, this objective is very non-convex. Bayesian statisticians do attempt to estimate models with a similar kind of penalty (they are called “spike and slab” models), but they are extremely computationally intensive and beyond the scope of this course.
A convex approximation to the preceding loss is the L1 or Lasso loss, leading to Lasso or L1 regression:
\[ \hat{\boldsymbol{\beta}}_{L1}(\lambda) := \underset{\boldsymbol{\beta}}{\mathrm{argmin}}\,\left( RSS(\boldsymbol{\beta}) + \lambda \sum_{p} \left|\beta_p\right| \right) = \underset{\boldsymbol{\beta}}{\mathrm{argmin}}\,\left( RSS(\boldsymbol{\beta}) + \lambda \left\Vert\boldsymbol{\beta}\right\Vert_1 \right). \]
This loss is convex (because it is the sum of two convex functions), and so is much easier to minimize. Furthermore, as \(\lambda\) grows, it does produce sparser and sparser solutions — though this may not be obvious at first.
Just as with ridge regression, you should standardize the variables before applying the Lasso.
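A sketch of the Lasso with scikit-learn (assumed available); note that scikit-learn's Lasso minimizes \(RSS/(2N) + \alpha\left\Vert\boldsymbol{\beta}\right\Vert_1\), so its \(\alpha\) is a rescaled version of the \(\lambda\) above, and the value used here is an arbitrary illustrative choice:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
N, P = 100, 20
X = rng.normal(size=(N, P))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=N)   # only 2 of 20 regressors matter

model = make_pipeline(StandardScaler(), Lasso(alpha=0.5))
model.fit(X, y)
coef = model.named_steps["lasso"].coef_
print(np.flatnonzero(coef))  # typically just [0 1]: the other coefficients are exactly zero
```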
The Lasso produces sparse solutions
One way to see that the Lasso produces sparse solutions is to start with a very large \(\lambda\) and see what happens as it is slowly decreased.
Start with \(\lambda\) very large, so that \(\hat{\boldsymbol{\beta}}_{L1}(\lambda) = \boldsymbol{0}\). If we move the entry \(\beta_p\) a small step of size \(\delta\) away from zero, then \(\lambda \left\Vert\boldsymbol{\beta}\right\Vert_1\) increases by \(\delta \lambda\), while the \(RSS\) can decrease by at most approximately \(\delta\) times the magnitude of its partial derivative with respect to \(\beta_p\),
\[ 2 \left| \sum_{n=1}^N(y_n - \hat{\boldsymbol{\beta}}(\lambda)^\intercal\boldsymbol{x}_n) x_{np} \right| = 2 \left| \sum_{n=1}^N\hat{\varepsilon}_n x_{np} \right| = 2 \left| \sum_{n=1}^Ny_n x_{np} \right| \quad \textrm{ (because $\hat{\boldsymbol{\beta}}(\lambda) = \boldsymbol{0}$)}. \]
As long as \(2 \left| \sum_{n=1}^Ny_n x_{np}\right| < \lambda\) for all \(p\), any such step increases the penalty by more than it can decrease the \(RSS\), so we cannot improve the loss by moving away from \(\boldsymbol{0}\). Since the loss is convex, that means \(\boldsymbol{0}\) is the minimum.
Eventually, we decrease \(\lambda\) until \(2 \left|\sum_{n=1}^Ny_n x_{np}\right| = \lambda\) for some \(p\). At that point, \(\beta_p\) moves away from zero as \(\lambda\) decreases further, and the residuals \(\hat{\varepsilon}_n\) also change. However, until \(2 \left|\sum_{n=1}^N\hat{\varepsilon}_n x_{nq}\right| = \lambda\) for some other \(q\), only \(\beta_p\) will be nonzero. As \(\lambda\) decreases, more and more variables tend to get added to the model, until \(\lambda = 0\), when of course \(\hat{\boldsymbol{\beta}}_{L1}(0) = \hat{\boldsymbol{\beta}}\), the OLS solution. Along the path, variables may enter and leave the regression in complex ways.
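We can check this entry threshold numerically with scikit-learn's lasso_path (a sketch; since scikit-learn minimizes \(RSS/(2N) + \alpha\left\Vert\boldsymbol{\beta}\right\Vert_1\), its largest useful \(\alpha\) corresponds to \(\max_p \left|\sum_{n=1}^Ny_n x_{np}\right| / N\)):

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
N, P = 100, 10
X = rng.normal(size=(N, P))
y = 2.0 * X[:, 0] + rng.normal(size=N)

# Center so that no intercept is needed.
X = X - X.mean(axis=0)
y = y - y.mean()

alpha_max = np.max(np.abs(X.T @ y)) / N   # where the first coefficient enters
alphas, coefs, _ = lasso_path(X, y)       # alphas are decreasing, starting at alpha_max

print(np.isclose(alphas[0], alpha_max))   # True
print(np.allclose(coefs[:, 0], 0.0))      # True: nothing is selected at alpha_max
print(np.count_nonzero(coefs[:, -1]))     # many coefficients enter as alpha shrinks
```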
The Lasso as a constrained optimization problem
See figure 6.7 from Gareth et al. (2021) for an interpretation of the Lasso as a constrained optimization problem. The shape of the L1 ball provides an intuitive way to understand the sparsity of the solution compared to ridge.
The Bayesian Lasso is not sparse
In contrast to the ridge case, it is not hard to show that if you take a prior
\[ \mathbb{P}\left(\boldsymbol{\beta}\right) \propto \exp\left( -\lambda \left\Vert\boldsymbol{\beta}\right\Vert_1 \right) \]
you do not recover a sparse solution for the posterior mean! The difference is that, for the ridge prior, the posterior remained normal, so that the “maximum a–posteriori” (MAP) estimator was equal to the mean. In the Lasso case, the MAP is sparse, but the mean is not, and the two do not coincide because the posterior is not normal.
A more Bayesian way to produce sparse posteriors is by setting a non–zero probability that \(\beta_p = 0\). These are called “spike and slab priors,” and are beyond the scope of this course.