---
title: "An Exploration of Thermodynamic Integration"
author: "Eric C. Anderson"
date: "July 24, 2016"
output: 
  html_notebook:
    toc: true
bibliography: ti.bib
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```

\newcommand{\btheta}{\boldsymbol{\theta}}
\newcommand{\bx}{\mathbf{x}}
\newcommand{\bz}{\mathbf{z}}


## Introduction

Just last week, a paper by Verity and Nichols appeared online early at *Genetics*.  In this paper,
they use a technique called *thermodynamic integration* to compute, apparently with quite good
accuracy, the marginal likelihood for the *structure*  model with different numbers of subpopulations
(i.e., different $K$ values).  The method of thermodynamic integration has apparently been known for some time
in statistical physics as a method for approximating normalizing constants; in the world of statistics it was described
by @gelman1998simulating, and more recently it was presented in an approachable fashion, in terms of its use with the
"power posterior" distribution, by @friel2008marginal.  

Thermodynamic integration looks like it would be a nice addition to anyone's Monte Carlo 
toolbox, so this document gives a short tutorial about it.  The first section provides some
background on Bayesian model choice to remind the reader why we might be interested in estimating
normalizing constants.  The next section provides a (surprisingly simple, albeit rather long,
now that I have written it all down) derivation
that shows why thermodynamic integration can be used to approximate normalizing constants.


The third section introduces a toy problem and explores Bayesian model choice within it.  The
problem is simple enough that the marginal likelihoods can be computed exactly.  Finally, we 
approximate the same marginal likelihoods using thermodynamic integration.  We conclude with some
thoughts on penalization for model complexity.  (*None of the sections described in this paragraph have been completed yet*)



## Quick Review of Bayesian Model Choice

Let $M_i$ denote a model, and suppose we have several different models under consideration.
Each model describes a different
set of circumstances and stochastic processes by which our data $\mathbf{x}$ might have 
come to be observed.  A model $M_i$ tells us what the set of parameters is (hence it dictates
the dimensionality of the parameter space), the probability distributions underlying observed
and latent data (i.e., the likelihood), and the prior distributions on the parameters.

Thus, when we write down the posterior probability, we could explicitly identify the
model. For example:
$$
P(\btheta|\bx, M_i) = \frac{P(\bx|\btheta, M_i)P(\btheta | M_i)}
{\int_\btheta P(\bx|\btheta, M_i)P(\btheta | M_i) d\btheta}
$$

Now, if we want to be Bayesian about choosing among different models, we could
express our uncertainty about models in terms of the posterior probability of 
a model.  Or, if we just wanted to compare pairs of models, we might look at the
Bayes Factor between two models, say, $M_1$ and $M_2$:
$$
\mathrm{Bayes~Factor} = \frac{P(\bx|M_1)}{P(\bx|M_2)}
$$
Either way, we need to be able to compute $P(\bx|M_i)$.  Unfortunately, 
$$
P(\bx|M_i) =  \int_\btheta P(\bx|\btheta, M_i)P(\btheta | M_i) d\btheta
$$
is the normalizing constant for the posterior from the model.  Recall that the reason people 
rejoice about MCMC is that it provides a means of simulating from target 
distributions without having to compute the normalizing constant.  Computing
or even approximating normalizing constants is hard! But if we want to do Bayesian
model choice, we will want to be able to compute (or accurately approximate)
$P(\bx|M_i)$, which is sometimes called the *marginal likelihood* or the 
*model evidence*.

Thermodynamic integration provides one computational approach to approximating
the marginal likelihood.
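
To have something concrete to play with later, let me set up a toy model of my own choosing
(it is not from any of the references): $k$ successes in $n$ binomial trials, with a conjugate
$\mathrm{Beta}(a, b)$ prior on the success probability $p$.  Conjugacy means the marginal
likelihood is available in closed form, so we will be able to check our thermodynamic
integration estimates against the truth.

```{r exact-marginal}
# Hypothetical toy model: k successes in n binomial trials with a
# Beta(a, b) prior on p.  By conjugacy the marginal likelihood is
#   P(x | M) = choose(n, k) * B(a + k, b + n - k) / B(a, b)
k <- 7; n <- 20; a <- 1; b <- 1
exact_log_marginal <- lchoose(n, k) + lbeta(a + k, b + n - k) - lbeta(a, b)
exact_log_marginal
```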

## Some preliminaries

There are a few mathematical/statistical things to familiarize ourselves with
before proceeding to TI.  


### Something that we *can* compute during the course of MCMC

Though we can't directly compute the marginal likelihood by Monte Carlo during
our MCMC simulation, we can compute the log probability of the data given the
current values of the parameters---$\log[P(\bx|\btheta^{(j)}, M_i)]$, where the 
superscript $(j)$ denotes the $j$-th iteration of the MCMC.  If there are additional latent variables in the
model, consider them as included in $\btheta$ (at least for now).


Averaging the values of $\log[P(\bx|\btheta^{(j)}, M_i)]$ during MCMC gives a Monte Carlo estimate
of the posterior mean of the log likelihood function:
$$
\int_\btheta \log[P(\bx | \btheta, M_i)] P(\btheta | \bx, M_i) d\btheta  = 
\int_\btheta \log[P(\bx | \btheta, M_i)] \biggl(\frac{P(\bx | \btheta, M_i) P(\btheta | M_i)}
{\int_\btheta P(\bx | \btheta, M_i) P(\btheta | M_i)d\btheta}\biggr)
d\btheta.
$$
This posterior mean is decidedly *not* the same as the log of the marginal likelihood.
However, your intuition might say that it should be related somehow to the log of 
the marginal likelihood. It turns out that it can be related, in an interesting
fashion, to the log marginal likelihood, and thermodynamic integration exploits this
relationship. 

In order to make this work out, @friel2008marginal considered a family of distributions 
called *power posteriors*.  

### Power posteriors

The power posterior with exponent $\beta$ is simply a posterior distribution 
obtained by replacing the likelihood with the likelihood raised to the power
of $\beta$ (typically with $\beta$ between 0 and 1, inclusive). That is,
$$
P_\beta(\btheta | \bx, M_i) = \frac{[P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)}
{\int_\btheta [P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)d\btheta}
$$
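
For the toy beta-binomial model above, for example, the power posterior works out in
closed form, thanks to conjugacy:
$$
[P(\bx | p, M_i)]^\beta P(p | M_i) \propto
p^{\beta k}(1-p)^{\beta(n-k)}\, p^{a-1}(1-p)^{b-1}
= p^{a + \beta k - 1}(1-p)^{b + \beta(n-k) - 1},
$$
which is the kernel of a $\mathrm{Beta}(a + \beta k,\, b + \beta(n-k))$ distribution.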

Note that if you can draw an MCMC sample from the posterior distribution (i.e., the 
power posterior with $\beta = 1$), then very likely you could easily modify your program 
to sample from the power posterior with any $\beta$.  For example, if you are doing
Metropolis--Hastings updates, you need only raise the likelihood portions of the 
target density (that appear in the Hastings ratio) to the power of $\beta$.
Likewise, if you are doing Gibbs sampling on any variables,
there is a good chance that the full conditional you are sampling from is of
exponential family form, in which case, raising it to the power of $\beta$ leads to a 
distribution in the same family.  
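
To illustrate (this is my own sketch, not code from @friel2008marginal), here is a
random-walk Metropolis sampler for the power posterior of the toy beta-binomial model.
The only place $\beta$ enters is as a multiplier on the log likelihood difference in the
Hastings ratio:

```{r power-posterior-mcmc}
# Minimal random-walk Metropolis targeting the power posterior of the
# toy beta-binomial model.  beta multiplies only the log likelihood;
# the prior enters the Hastings ratio unmodified.
power_posterior_mcmc <- function(beta, k = 7, n = 20, a = 1, b = 1,
                                 n_iter = 20000, sd_prop = 0.2) {
  loglik <- function(p) dbinom(k, n, p, log = TRUE)
  logprior <- function(p) dbeta(p, a, b, log = TRUE)
  p <- 0.5                               # arbitrary starting value
  ll_trace <- numeric(n_iter)
  for (j in seq_len(n_iter)) {
    p_star <- p + rnorm(1, 0, sd_prop)   # symmetric proposal
    if (p_star > 0 && p_star < 1) {      # outside (0,1) the target is 0
      log_ratio <- beta * (loglik(p_star) - loglik(p)) +
        logprior(p_star) - logprior(p)
      if (log(runif(1)) < log_ratio) p <- p_star
    }
    ll_trace[j] <- loglik(p)   # record log likelihood (NOT raised to beta)
  }
  ll_trace
}
```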

If you can simulate via MCMC from the power posterior with exponent $\beta$,
then you could also compute an estimate of the posterior mean of $\log[P(\bx | \btheta, M_i)]$
given $\beta$.  This, as before, is just the average of 
$\log[P(\bx | \btheta^{(j)}, M_i)]$ over the course of the MCMC.  It is a Monte Carlo approximation to
$$
C(\beta; \bx, M_i) = \int_\btheta \log[P(\bx | \btheta, M_i)] P_\beta(\btheta | \bx, M_i) d\btheta  = 
\int_\btheta \log[P(\bx | \btheta, M_i)] \biggl(\frac{[P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)}
{\int_\btheta [P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)d\btheta}\biggr)
d\btheta
$$
Note that $\log[P(\bx | \btheta, M_i)]$, itself, is *not* raised to $\beta$ 
(neither inside nor outside of the $\log$ function).
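
The sketch above already records $\log[P(\bx | \btheta^{(j)}, M_i)]$ at every iteration,
so estimating $C(\beta; \bx, M_i)$ for our toy model is just an average after discarding
some burn-in:

```{r estimate-C}
# Monte Carlo estimate of C(beta) at one (arbitrarily chosen) value of beta
set.seed(5)
ll_trace <- power_posterior_mcmc(beta = 0.3)
C_hat <- mean(ll_trace[-(1:5000)])   # drop the first quarter as burn-in
C_hat
```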

In the next section, we will show how TI makes use of the fact that the log marginal likelihood
can be written as the integral of $C(\beta; \bx, M_i)$ over $\beta$ from 0 to 1.  

## The log marginal likelihood by integration over $\beta$

What we would like to know is $P(\bx | M_i)$---the marginal likelihood.  It is just as good to know the log of that
quantity, $\log[P(\bx | M_i)]$, and so that is where we will start.  The derivation here, as with a number of derivations
in statistics, is going to involve writing down the quantity 1, dressed up in a tricky fashion, and then exploiting that 
expression with a fun trick.  (This derivation proceeds in the opposite direction from most presentations in the 
literature and is a little longer, but I find it a little more illuminating.)

So, we start with what we would like to know (or, at least accurately approximate):
$$
\log[P(\bx| M_i)]
$$
Obviously this is the same as 
$$
\log[P(\bx | M_i)] - 0, 
$$
which is the same as
$$
\log[P(\bx | M_i)] - \log(1).
$$
Now, we are going to write the $1$ in $\log(1)$ as the integral of the prior on $\btheta$ 
over all values of $\btheta$.  (Clearly, since the prior on $\btheta$ is a distribution, it
must integrate to 1...at least so long as it is a proper prior...).  This gives us
$$
\log[P(\bx | M_i)] - \log\biggl[  
\int_\btheta P(\btheta | M_i)d\btheta
\biggr].
$$
Now, we are just going to write $\log[P(\bx | M_i)]$ out in its longer incarnation:
$$
\log\biggl[ 
\int_\btheta P(\bx | \btheta, M_i) P(\btheta | M_i)d\btheta
\biggr] 
- 
\log\biggl[  
\int_\btheta P(\btheta | M_i)d\btheta
\biggr]
$$
Now, we are going to include 1 in there again in a tricky fashion---this time
as the likelihood, $P(\bx | \btheta, M_i)$, raised to 0 in the right hand part. And we will also explicitly 
raise the likelihood where it appears in the left hand part to the power of 1.  Obviously we still have the
same thing as above:
$$
\log\biggl[ 
\int_\btheta [P(\bx | \btheta, M_i)]^1 P(\btheta | M_i)d\btheta
\biggr] 
- 
\log\biggl[  
\int_\btheta 
[P(\bx | \btheta, M_i)]^0
P(\btheta | M_i)d\btheta
\biggr]
$$
Notice that the only thing that is changing there is the exponent on the likelihood terms
(i.e., in the left hand side it is 1 and in the right hand side it is 0).  So, we could
rewrite that as:
$$
\biggl.
\log\biggl[ 
\int_\btheta [P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)d\btheta
\biggr] 
\biggr|_{\beta = 0}^{\beta = 1}
$$
The little vertical bar on the right is saying "evaluate the expression when $\beta = 1$ and then subtract 
from that the expression evaluated with $\beta = 0$."  If this notation is stirring deep memories within
you of your high school calculus class, then congratulations! You are on the right track.

Recall from that calculus class that if you were evaluating the definite integral 
$\int_a^b f(x)dx$, you would evaluate $\bigl. g(x) \bigr|_a^b$ where $g(x)$ was the indefinite
integral or "anti-derivative" of $f(x)$, which is to say that $\frac{d}{dx}g(x) = f(x)$.

So, as you might have guessed, our next step is going to be to write the expression above, which
we have been working through, as a definite integral over $\beta$ from 0 to 1.  If we do that,
then from our foregoing excursion back to our calculus class, the integrand must clearly be the
derivative (with respect to $\beta$) of 
$$
\log\biggl[ 
\int_\btheta [P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)d\btheta
\biggr]
$$
Remember, as well, that $\frac{d}{dx}\log u(x) = \frac{1}{u(x)}\frac{d}{dx}u(x)$ so that
$$
\frac{\partial}{\partial \beta} 
\log\biggl[ 
\int_\btheta [P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)d\btheta
\biggr]
=
\frac{1}{\int_\btheta [P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)d\btheta}
\frac{\partial}{\partial \beta}
\int_\btheta [P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)d\btheta.
$$
We can change the order of differentiation and integration in the term on the right
after the partial derivative sign, and thus
simplify that term to:
$$
\begin{aligned}
\frac{\partial}{\partial \beta} \int_\btheta [P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)d\btheta &=
 \int_\btheta \frac{\partial}{\partial \beta}[P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)d\btheta \\
 &=
 \int_\btheta \log[P(\bx | \btheta, M_i)] [P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)d\btheta,
\end{aligned}
$$
making use of the fact that $\frac{d}{dx}a^x = [\log a] a^x$.

Putting this together we can write 
$$
\begin{aligned}
\frac{\partial}{\partial \beta} 
\log\biggl[ 
\int_\btheta [P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)d\btheta
\biggr]
&=
\int_\btheta \log[P(\bx | \btheta, M_i)] \frac{[P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)}
{\int_\btheta [P(\bx | \btheta, M_i)]^\beta P(\btheta | M_i)d\btheta} 
d\btheta \\
&= 
\int_\btheta \log[P(\bx | \btheta, M_i)] P_\beta(\btheta | \bx, M_i) d\btheta \\
&= C(\beta; \bx, M_i)
\end{aligned}
$$

Remember that this derivative is what we were going to integrate from $\beta = 0$ to $\beta = 1$, which means that 
$$
\log[P(\bx | M_i)] = \int_{\beta = 0}^{\beta = 1} C(\beta; \bx, M_i) d\beta
$$
Now, we typically are not going to be able to analytically evaluate that integral (though for the conjugate toy model we can; see the sanity check just after this list), but that is OK because:

1. that is just a simple one-dimensional integral, and
2. in the previous section we saw that we can approximate $C(\beta; \bx, M_i)$ for any $\beta$ by running 
    MCMC and sampling from the power posterior with exponent $\beta$.  
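
First, the promised sanity check: for the toy model the power posterior is
$\mathrm{Beta}(a + \beta k,\, b + \beta(n-k))$, so differentiating the log of its
normalizing constant with respect to $\beta$ gives $C(\beta; \bx, M_i)$ in closed form,
in terms of digamma functions.  A sketch (under the toy-model assumptions above):

```{r exact-C-check}
# For the toy model, log z(beta) = beta * lchoose(n, k) +
#   lbeta(a + beta*k, b + beta*(n - k)) - lbeta(a, b),
# and C(beta) = d/d(beta) log z(beta) yields digamma terms:
C_exact <- function(beta) {
  lchoose(n, k) + k * digamma(a + beta * k) +
    (n - k) * digamma(b + beta * (n - k)) -
    n * digamma(a + b + beta * n)
}
integrate(C_exact, 0, 1)$value  # should agree with exact_log_marginal
```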

Therefore, we can just form MCMC approximations of $C(\beta; \bx, M_i)$ at many values of $\beta$ between
0 and 1, and then approximate the integral by the trapezoidal rule or Simpson's rule, both of
which are straightforward.  Holy Cow! That is pretty cool.
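
Here is that whole pipeline for the toy model (a sketch under my assumptions above, not a
prescription from the references): estimate $C(\beta)$ by MCMC at a grid of $\beta$ values,
then apply the trapezoidal rule.  Cubing an evenly spaced grid, a common choice in this
literature, packs points near $\beta = 0$, where $C(\beta)$ tends to change fastest:

```{r ti-estimate}
# Thermodynamic integration for the toy model: MCMC estimates of C(beta)
# over a grid of beta values, then the trapezoidal rule
set.seed(7)
betas <- seq(0, 1, length.out = 21)^3
C_hats <- sapply(betas, function(b) {
  ll_trace <- power_posterior_mcmc(beta = b)
  mean(ll_trace[-(1:5000)])
})
# trapezoidal rule on the unevenly spaced grid
ti_estimate <- sum(diff(betas) * (head(C_hats, -1) + tail(C_hats, -1)) / 2)
c(ti = ti_estimate, exact = exact_log_marginal)
```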

I currently don't know how workable this scheme is in practice. One big question is "how much Monte Carlo variance
does one find when $\beta$ is small? Enough that it is very hard to estimate the mean log likelihood accurately when 
$\beta$ is small?"  Some further work has been done on reducing the discretization error. See @friel2014.


## References

