Model setup
Consider a standard linear regression problem, in which for $i = 1, \ldots, n$ we specify the mean of the conditional distribution of $y_i$ given a $k \times 1$ predictor vector $\mathbf{x}_i$:

$$y_i = \mathbf{x}_i^\mathsf{T} \boldsymbol\beta + \varepsilon_i,$$

where $\boldsymbol\beta$ is a $k \times 1$ vector, and the $\varepsilon_i$ are independent and identically normally distributed random variables:

$$\varepsilon_i \sim N(0, \sigma^2).$$

This corresponds to the following likelihood function:

$$\rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2) \propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)\right).$$
The ordinary least squares solution is used to estimate the coefficient vector using the Moore–Penrose pseudoinverse:

$$\hat{\boldsymbol\beta} = (\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T}\mathbf{y},$$

where $\mathbf{X}$ is the $n \times k$ design matrix, each row of which is a predictor vector $\mathbf{x}_i^\mathsf{T}$; and $\mathbf{y}$ is the column $n$-vector $[y_1 \; \cdots \; y_n]^\mathsf{T}$.
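As a concrete illustration, the following sketch (assuming NumPy; the simulated data and variable names are ours, chosen only for illustration) computes the ordinary least squares estimate via the Moore–Penrose pseudoinverse:

import numpy as np

rng = np.random.default_rng(0)

# Simulated data: n observations, k predictors (illustrative values only)
n, k = 50, 3
X = rng.normal(size=(n, k))                        # design matrix, one predictor vector per row
beta_true = np.array([1.5, -2.0, 0.5])             # coefficients used to generate y
y = X @ beta_true + rng.normal(size=n)             # y = X beta + Gaussian noise

# Ordinary least squares via the Moore-Penrose pseudoinverse
beta_hat = np.linalg.pinv(X) @ y                   # equivalent to solving the normal equations
print(beta_hat)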
This is a frequentist approach, and it assumes that there are enough measurements to say something meaningful about $\boldsymbol\beta$. In the Bayesian approach,[1] the data is supplemented with additional information in the form of a prior probability distribution. The prior belief about the parameters is combined with the data's likelihood function according to Bayes' theorem to yield the posterior belief about the parameters $\boldsymbol\beta$ and $\sigma$. The prior can take different functional forms depending on the domain and the information that is available a priori.
Since the data comprises both $\mathbf{y}$ and $\mathbf{X}$, the focus only on the distribution of $\mathbf{y}$ conditional on $\mathbf{X}$ needs justification. In fact, a "full" Bayesian analysis would require a joint likelihood $\rho(\mathbf{y}, \mathbf{X} \mid \boldsymbol\beta, \sigma^2, \gamma)$ along with a prior $\rho(\boldsymbol\beta, \sigma^2, \gamma)$, where $\gamma$ symbolizes the parameters of the distribution for $\mathbf{X}$. Only under the assumption of (weak) exogeneity can the joint likelihood be factored into $\rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2)\, \rho(\mathbf{X} \mid \gamma)$.[2] The latter part is usually ignored under the assumption of disjoint parameter sets. Moreover, under classic assumptions $\mathbf{X}$ is considered chosen (for example, in a designed experiment) and therefore has a known probability without parameters.[3]
With conjugate priors
Conjugate prior distribution
For an arbitrary prior distribution, there may be no analytical solution for the posterior distribution. In this section, we will consider a so-called conjugate prior for which the posterior distribution can be derived analytically.
A prior $\rho(\boldsymbol\beta, \sigma^2)$ is conjugate to this likelihood function if it has the same functional form with respect to $\boldsymbol\beta$ and $\sigma$. Since the log-likelihood is quadratic in $\boldsymbol\beta$, the log-likelihood is re-written such that the likelihood becomes normal in $(\boldsymbol\beta - \hat{\boldsymbol\beta})$.  Write

$$(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\boldsymbol\beta) = (\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}) + (\boldsymbol\beta - \hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X})(\boldsymbol\beta - \hat{\boldsymbol\beta}).$$

The likelihood is now re-written as

$$\rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2) \propto (\sigma^2)^{-v/2} \exp\left(-\frac{v s^2}{2\sigma^2}\right) (\sigma^2)^{-(n - v)/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X})(\boldsymbol\beta - \hat{\boldsymbol\beta})\right),$$

where $v s^2 = (\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta})$ and $v = n - k$, where $k$ is the number of regression coefficients.
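As a quick numerical check of the decomposition above (a sketch assuming NumPy; the simulated data and variable names follow the earlier snippet and are not part of the model), the two sides agree for any coefficient vector:

import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 3
X = rng.normal(size=(n, k))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=n)
beta_hat = np.linalg.pinv(X) @ y              # least squares estimate

beta = rng.normal(size=k)                     # an arbitrary coefficient vector
lhs = (y - X @ beta) @ (y - X @ beta)         # (y - X b)'(y - X b)
vs2 = (y - X @ beta_hat) @ (y - X @ beta_hat) # v s^2, the residual sum of squares
quad = (beta - beta_hat) @ (X.T @ X) @ (beta - beta_hat)
print(np.isclose(lhs, vs2 + quad))            # True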
This suggests a form for the prior:

$$\rho(\boldsymbol\beta, \sigma^2) = \rho(\sigma^2)\, \rho(\boldsymbol\beta \mid \sigma^2),$$

where $\rho(\sigma^2)$ is an inverse-gamma distribution

$$\rho(\sigma^2) \propto (\sigma^2)^{-\frac{v_0}{2} - 1} \exp\left(-\frac{v_0 s_0^2}{2\sigma^2}\right).$$

In the notation introduced in the inverse-gamma distribution article, this is the density of an $\text{Inv-Gamma}(a_0, b_0)$ distribution with $a_0 = \tfrac{v_0}{2}$ and $b_0 = \tfrac{1}{2} v_0 s_0^2$, with $v_0$ and $s_0^2$ as the prior values of $v$ and $s^2$, respectively. Equivalently, it can also be described as a scaled inverse chi-squared distribution, $\text{Scale-inv-}\chi^2(v_0, s_0^2)$.

Further, the conditional prior density $\rho(\boldsymbol\beta \mid \sigma^2)$ is a normal distribution,

$$\rho(\boldsymbol\beta \mid \sigma^2) \propto (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \boldsymbol\mu_0)^\mathsf{T} \boldsymbol\Lambda_0 (\boldsymbol\beta - \boldsymbol\mu_0)\right).$$

In the notation of the normal distribution, the conditional prior distribution is $\mathcal{N}(\boldsymbol\mu_0, \sigma^2 \boldsymbol\Lambda_0^{-1})$.
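To make this prior concrete, one can sample from it hierarchically: draw $\sigma^2$ from the inverse-gamma distribution and then $\boldsymbol\beta$ given $\sigma^2$ from the conditional normal. A minimal sketch, assuming NumPy and SciPy, with illustrative hyperparameter values (mu_0, Lambda_0, a_0, b_0 are placeholders chosen here, not prescribed by the model):

import numpy as np
from scipy import stats

k = 3
mu_0 = np.zeros(k)           # prior mean of beta
Lambda_0 = np.eye(k)         # prior precision matrix (scaled by 1/sigma^2)
a_0, b_0 = 2.0, 1.0          # a_0 = v_0/2, b_0 = v_0 s_0^2 / 2

# sigma^2 ~ Inv-Gamma(a_0, b_0)
sigma2 = stats.invgamma(a=a_0, scale=b_0).rvs(random_state=0)
# beta | sigma^2 ~ N(mu_0, sigma^2 * Lambda_0^{-1})
beta = stats.multivariate_normal(mu_0, sigma2 * np.linalg.inv(Lambda_0)).rvs(random_state=0)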
Posterior distribution
With the prior now specified, the posterior distribution can be expressed as

$$\begin{aligned}
\rho(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}) \propto{}& \rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2)\, \rho(\boldsymbol\beta \mid \sigma^2)\, \rho(\sigma^2) \\
\propto{}& (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)\right) (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \boldsymbol\mu_0)^\mathsf{T} \boldsymbol\Lambda_0 (\boldsymbol\beta - \boldsymbol\mu_0)\right) (\sigma^2)^{-\left(\frac{v_0}{2} + 1\right)} \exp\left(-\frac{v_0 s_0^2}{2\sigma^2}\right).
\end{aligned}$$

With some re-arrangement,[4] the posterior can be re-written so that the posterior mean $\boldsymbol\mu_n$ of the parameter vector $\boldsymbol\beta$ can be expressed in terms of the least squares estimator $\hat{\boldsymbol\beta}$ and the prior mean $\boldsymbol\mu_0$, with the strength of the prior indicated by the prior precision matrix $\boldsymbol\Lambda_0$:

$$\boldsymbol\mu_n = (\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)^{-1}(\mathbf{X}^\mathsf{T}\mathbf{X}\hat{\boldsymbol\beta} + \boldsymbol\Lambda_0 \boldsymbol\mu_0).$$

To justify that $\boldsymbol\mu_n$ is indeed the posterior mean, the quadratic terms in the exponential can be re-arranged as a quadratic form in $\boldsymbol\beta - \boldsymbol\mu_n$:[5]

$$(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\boldsymbol\beta) + (\boldsymbol\beta - \boldsymbol\mu_0)^\mathsf{T} \boldsymbol\Lambda_0 (\boldsymbol\beta - \boldsymbol\mu_0) = (\boldsymbol\beta - \boldsymbol\mu_n)^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)(\boldsymbol\beta - \boldsymbol\mu_n) + \mathbf{y}^\mathsf{T}\mathbf{y} - \boldsymbol\mu_n^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)\boldsymbol\mu_n + \boldsymbol\mu_0^\mathsf{T}\boldsymbol\Lambda_0\boldsymbol\mu_0.$$
Now the posterior can be expressed as a normal distribution times an inverse-gamma distribution:

$$\rho(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}) \propto (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \boldsymbol\mu_n)^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)(\boldsymbol\beta - \boldsymbol\mu_n)\right) (\sigma^2)^{-\frac{n + v_0}{2} - 1} \exp\left(-\frac{2 b_0 + \mathbf{y}^\mathsf{T}\mathbf{y} - \boldsymbol\mu_n^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)\boldsymbol\mu_n + \boldsymbol\mu_0^\mathsf{T}\boldsymbol\Lambda_0\boldsymbol\mu_0}{2\sigma^2}\right).$$

Therefore, the posterior distribution can be parametrized as follows:

$$\rho(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}) \propto \rho(\boldsymbol\beta \mid \sigma^2, \mathbf{y}, \mathbf{X})\, \rho(\sigma^2 \mid \mathbf{y}, \mathbf{X}),$$

where the two factors correspond to the densities of $\mathcal{N}(\boldsymbol\mu_n, \sigma^2 \boldsymbol\Lambda_n^{-1})$ and $\text{Inv-Gamma}(a_n, b_n)$ distributions, with the parameters of these given by

$$\boldsymbol\Lambda_n = \mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0, \qquad \boldsymbol\mu_n = \boldsymbol\Lambda_n^{-1}(\mathbf{X}^\mathsf{T}\mathbf{X}\hat{\boldsymbol\beta} + \boldsymbol\Lambda_0 \boldsymbol\mu_0),$$

$$a_n = a_0 + \frac{n}{2}, \qquad b_n = b_0 + \frac{1}{2}\left(\mathbf{y}^\mathsf{T}\mathbf{y} + \boldsymbol\mu_0^\mathsf{T}\boldsymbol\Lambda_0\boldsymbol\mu_0 - \boldsymbol\mu_n^\mathsf{T}\boldsymbol\Lambda_n\boldsymbol\mu_n\right),$$
which illustrates Bayesian inference being a compromise between the information contained in the prior and the information contained in the sample.
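The update equations above translate directly into code. A sketch assuming NumPy, reusing the illustrative simulated data and prior hyperparameters from the earlier snippets (all variable names are ours):

import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
X = rng.normal(size=(n, k))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=n)

mu_0 = np.zeros(k)           # prior mean
Lambda_0 = np.eye(k)         # prior precision matrix
a_0, b_0 = 2.0, 1.0          # inverse-gamma hyperparameters

XtX = X.T @ X
beta_hat = np.linalg.solve(XtX, X.T @ y)                              # least squares estimate
Lambda_n = XtX + Lambda_0                                             # posterior precision
mu_n = np.linalg.solve(Lambda_n, XtX @ beta_hat + Lambda_0 @ mu_0)    # posterior mean of beta
a_n = a_0 + n / 2
b_n = b_0 + 0.5 * (y @ y + mu_0 @ Lambda_0 @ mu_0 - mu_n @ Lambda_n @ mu_n)

Consistent with the compromise interpretation, mu_n is a matrix-weighted combination of the prior mean mu_0 and the least squares estimate beta_hat.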
Model evidence
The model evidence $p(\mathbf{y} \mid m)$ is the probability of the data given the model $m$. It is also known as the marginal likelihood, and as the prior predictive density. Here, the model is defined by the likelihood function $p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2)$ and the prior distribution on the parameters, i.e. $p(\boldsymbol\beta, \sigma^2)$. The model evidence captures in a single number how well such a model explains the observations. The model evidence of the Bayesian linear regression model presented in this section can be used to compare competing linear models by Bayes factors. These models may differ in the number and values of the predictor variables as well as in their priors on the model parameters. Model complexity is already taken into account by the model evidence, because it marginalizes out the parameters by integrating $p(\mathbf{y}, \boldsymbol\beta, \sigma^2 \mid m)$ over all possible values of $\boldsymbol\beta$ and $\sigma^2$:

$$p(\mathbf{y} \mid m) = \int p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2)\, p(\boldsymbol\beta, \sigma^2)\, d\boldsymbol\beta\, d\sigma^2.$$

This integral can be computed analytically and the solution is given in the following equation:[6]

$$p(\mathbf{y} \mid m) = \frac{1}{(2\pi)^{n/2}} \sqrt{\frac{\det(\boldsymbol\Lambda_0)}{\det(\boldsymbol\Lambda_n)}} \cdot \frac{b_0^{a_0}}{b_n^{a_n}} \cdot \frac{\Gamma(a_n)}{\Gamma(a_0)}.$$
Here $\Gamma$ denotes the gamma function. Because we have chosen a conjugate prior, the marginal likelihood can also be easily computed by evaluating the following equality for arbitrary values of $\boldsymbol\beta$ and $\sigma^2$:[7]

$$p(\mathbf{y} \mid m) = \frac{p(\boldsymbol\beta, \sigma^2 \mid m)\, p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2, m)}{p(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}, m)}.$$

Note that this equation follows from a re-arrangement of Bayes' theorem. Inserting the formulas for the prior, the likelihood, and the posterior and simplifying the resulting expression leads to the analytic expression given above.
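In practice the evidence is usually evaluated on the log scale to avoid overflow in the gamma functions and determinants. A sketch of the closed-form expression above, assuming NumPy and SciPy and the quantities Lambda_0, Lambda_n, a_0, b_0, a_n, b_n computed as in the previous snippet:

import numpy as np
from scipy.special import gammaln

def log_model_evidence(n, Lambda_0, Lambda_n, a_0, b_0, a_n, b_n):
    # log p(y | m) for the conjugate Bayesian linear regression model
    _, logdet_0 = np.linalg.slogdet(Lambda_0)
    _, logdet_n = np.linalg.slogdet(Lambda_n)
    return (-0.5 * n * np.log(2 * np.pi)
            + 0.5 * (logdet_0 - logdet_n)
            + a_0 * np.log(b_0) - a_n * np.log(b_n)
            + gammaln(a_n) - gammaln(a_0))

Differences of this quantity between two candidate models give log Bayes factors, which is how the evidence is used for model comparison.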