Conversion between continuous and discrete partition functions

Asked by Guiste on November 27, 2020

I am a bit confused about the two notations for partition functions, that is the continuous form

$$Z=\frac{1}{h^3}\int\mathrm{d}\boldsymbol{p}\,\mathrm{d}\boldsymbol{r}\;\mathrm{e}^{-\beta H(\boldsymbol{p},\boldsymbol{r})}$$

and the discrete form

$$Z=\sum\limits_i \mathrm{e}^{-\beta H_i}\,.$$

First of all, how does one get from one of these expressions to the other?

Suppose we have a single particle on a discrete lattice (each lattice cell with side length $a$) and no kinetic energy. We can then use the second expression and get the partition sum as

$$Z=\sum\limits_i \mathrm{e}^{-\beta V_i}\,,$$

with $V_i$ the potential energy at site $i$ ($T_i=0$ assumed) and the sum running over all lattice sites. What would the continuous expression look like?

$$Z=\frac{1}{h^3}\left(\int\mathrm{d}\boldsymbol{p}\right)\left(\int\mathrm{d}\boldsymbol{r}\,\mathrm{e}^{-\beta V(\boldsymbol{r})}\right)=\frac{1}{h^3}\left(\int\mathrm{d}\boldsymbol{p}\right)\left(\sum\limits_i\mathrm{e}^{-\beta V_i}\,a^3\right)\ldots$$

and can we convert this expression back into a sum to recover the discrete expression?
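
For concreteness, here is a minimal numerical check of the last step, assuming one dimension and a made-up potential $V(r)=r^2$; it just verifies that the lattice sum times the cell size is a Riemann sum for the configuration integral:

```python
import numpy as np
from scipy.integrate import quad

# Hedged sketch (1D for simplicity): the lattice sum times the cell size a
# is a Riemann sum for the configuration integral, so it converges as a -> 0.
# V here is a made-up example potential, not from the original post.
beta = 1.0
V = lambda r: r**2                       # example potential V(r)
L = 5.0                                  # work on the finite box [-L, L]

integral, _ = quad(lambda r: np.exp(-beta * V(r)), -L, L)

for a in (0.5, 0.1, 0.01):
    sites = np.arange(-L, L, a) + a / 2  # lattice site centres
    lattice_sum = a * np.exp(-beta * V(sites)).sum()
    print(f"a = {a:>5}: a * sum = {lattice_sum:.6f}   integral = {integral:.6f}")
```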

5 Answers

To a physicist, an integral is often nothing other than a "continuous sum". The instruction $\sum_x f(x)$ often means an actual sum when $x$ is drawn from a countable set, but an integral when $x$ is drawn from an uncountable set.

Therefore, there is no "conversion" between your expressions; they are all valid ways of writing down the partition function, for different types of systems:

  • Classical statistical system, finitely many microstates $i\in\{1,\dots,n\}$ with energy $E_i$: $Z = \sum_i \mathrm{e}^{-\beta E_i}$

  • Classical statistical system, infinitely many continuous microstates given by points $(q,p) \in \mathbb{R}^{6N}$ in phase space for $N$ particles: $Z = h^{-3N}\int \mathrm{e}^{-\beta H(q,p)}\,\mathrm{d}^{3N}x\,\mathrm{d}^{3N}p$.

The integral is simply the only way to get something that is close to a "sum over all $(x,p)$" that is mathematically well-defined and behaves in the desired way.
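
A small sketch to make the two bullet points above concrete (the energies and parameters below are made up for illustration): in both cases $Z$ is a "sum over states," implemented once as a literal sum and once as an integral.

```python
import numpy as np
from scipy.integrate import quad

beta = 2.0

# Countable state set: an ordinary sum (example energies, made up here).
E = np.array([0.0, 1.0, 1.0, 2.5])
Z_discrete = np.exp(-beta * E).sum()

# Uncountable state set: the "sum" becomes an integral. 1D free particle
# in a box of length L_box, H = p^2/(2m), with m = h = 1 for illustration.
m, h, L_box = 1.0, 1.0, 1.0
Z_continuous = (L_box / h) * quad(lambda p: np.exp(-beta * p**2 / (2 * m)),
                                  -np.inf, np.inf)[0]

print(Z_discrete, Z_continuous)                 # both are "sums over states"
print(L_box / h * np.sqrt(2 * np.pi * m / beta))  # closed form of the Gaussian integral
```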

Answered by ACuriousMind on November 27, 2020

Good question. The point is that Gibbs' recipe, $$\rho \sim e^{-\beta H},$$ to obtain the probability distribution for a classical system in thermal equilibrium needs to be supplemented with a prescription which tells us how we ought to count states, so that the exact version of the recipe should read: $$\text{d}P(s) = e^{-\beta H(s)}\,\text{d}N(s).$$ In other words, we should always specify a measure $\text{d}N(s)$ on the space of admissible states $S$ of the system, for which the Boltzmann factor $e^{-\beta H(s)}$ acts as a density.

This point is not completely trivial. For example, the canonical measure $\text{d}p\,\text{d}q$ for a Hamiltonian system with coordinates $p$ and $q$ has the virtue of being form-invariant under canonical transformations, and is somewhat intrinsic to the kinematical description of the system (I guess this is the reason for calling it "canonical"). It is therefore a good, perhaps the only, candidate measure for counting states.

However, canonical $(p,q)$-systems are not the only classical systems which exist. Compare, for example, what Feynman does here to calculate the polarization of a gas of molecules with an intrinsic dipole in an external electric field. Before eq. $(1.15)$, he states that

[...] the relative number of molecules with the potential energy $U$ is proportional to $$e^{-\frac{U}{kT}}.$$

Subsequently, he specializes to the problem at hand:

[...] the number of molecules at $\theta$ per unit solid angle is proportional to $e^{-\frac{U}{kT}}$.

Here, the states $s$ are the possible orientations of a unit vector in three-dimensional space, that is, points of the unit sphere $S^2$. The natural choice for the measure is clearly the solid angle $\text{d}\Omega$, which reflects the hypothesis that every orientation in space should be a priori equivalent (or, put in other words, that the electric field making orientations in space inequivalent must be accounted for entirely by the Boltzmann factor).
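
As a sketch of how this measure enters a calculation, here is the orientation average $\langle\cos\theta\rangle$ for $U = -pE\cos\theta$, computed with the solid-angle measure; the dimensionless field strength $x = pE/kT$ is chosen arbitrarily. The closed form, $\coth x - 1/x$, is the Langevin function, of which Feynman's weak-field result is the linearization.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: mean dipole alignment <cos(theta)> with the solid-angle measure
# dOmega = 2*pi*sin(theta)*dtheta, for U = -p*E*cos(theta).
# x = p*E/(k*T) is a dimensionless field strength (value chosen arbitrarily).
x = 1.5

weight = lambda t: np.exp(x * np.cos(t)) * np.sin(t)   # Boltzmann factor * measure
num, _ = quad(lambda t: np.cos(t) * weight(t), 0, np.pi)
den, _ = quad(weight, 0, np.pi)

print(num / den)                  # numerical orientation average
print(1 / np.tanh(x) - 1 / x)     # Langevin function coth(x) - 1/x
```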

Also in counting states of canonical (that is, $(p,q)$) systems there's an ambiguity, already known to Gibbs, which arises from the question of how to count states of identical particles, and gives rise to the $\frac{1}{N!}$ factors in the partition function.

In any case, these problems are all, as far as I know, somewhat academic, since in quantum statistical mechanics the specification of the density operator is completely unambiguous: $$\hat\rho = \frac{1}{Z}e^{-\beta \hat H}.$$ Therefore one has, at least in principle, a solid way to obtain the "classical" measure $\text{d}N$ without going through all these (admittedly handwaving) arguments: one has to choose the measure in such a way as to reproduce the quantum partition function in the limit of high temperatures.
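
A minimal check of that last statement for a single harmonic oscillator, with $\hbar = \omega = 1$ for illustration: the quantum $Z = \sum_n e^{-\beta\hbar\omega(n+1/2)}$ approaches the classical $Z = \frac{1}{h}\int e^{-\beta H}\,\text{d}p\,\text{d}q = (\beta\hbar\omega)^{-1}$ as the temperature grows, which is exactly what singles out $\text{d}p\,\text{d}q/h$ as the measure.

```python
import numpy as np

# Sketch: quantum vs classical partition function of a 1D harmonic oscillator.
# Z_qm = sum_n exp(-beta*hbar*w*(n + 1/2)) approaches Z_cl = 1/(beta*hbar*w)
# at high temperature. Units hbar = w = 1 are chosen for illustration.
hbar = w = 1.0

for beta in (2.0, 0.5, 0.05, 0.005):        # decreasing beta = increasing T
    n = np.arange(10_000)                   # enough terms for the sum to converge
    Z_qm = np.exp(-beta * hbar * w * (n + 0.5)).sum()
    Z_cl = 1.0 / (beta * hbar * w)
    print(f"beta={beta:6}: Z_qm={Z_qm:10.4f}  Z_cl={Z_cl:10.4f}")
```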


In light of this, I think that what you're trying to do in your specific example doesn't make a lot of sense, primarily because your system doesn't have any natural $(p,q)$ coordinates. Maybe you could look at your problem as a low-temperature approximation for a particle on a lattice with a short-range binding potential $V_i$ on each site, such that, because of the low temperature, the only accessible states are the ground states of each potential.

Answered by pppqqq on November 27, 2020

This is going to seem like it comes out of left field, but I promise, it's relevant!

Probability and discrete/continuous random variables.

In probability, it is very common to deal with continuous random variables. The definition of these is that they have a probability density function, rather than discrete probabilities for each given outcome. So the definition is that $f$ is the probability density function for a random variable $X$ if, for a small $\Delta x$, the probability that $x < X < x + \Delta x$ is given by $f(x)~\Delta x.$ That is the definition.

Then we can sum all of the probabilities to get one, $\int_{-\infty}^\infty dx~f(x) = 1,$ and we can also start to define things like its expectation bracket, $\langle q(X)\rangle = \int_{-\infty}^\infty dx~f(x)~q(x).$ For some practice with these ideas before we go further (a numerical sanity check of all three exercises follows the list):

  1. Prove the usual shortcut formula for variance: that if $\bar X = \langle X \rangle$ and we define $\operatorname{Var}[X] = \langle (X - \bar X)^2\rangle,$ then $\operatorname{Var}[X] = \langle X^2\rangle - \langle X\rangle^2.$
  2. Given two random variables $X, Y$, we have a "joint distribution" $f(x,y)$ such that the probability that $x < X < x+\Delta x$ and $y < Y < y+\Delta y$ is given by $f(x,y)~\Delta x~\Delta y$. Now this might happen to factorize as $g(x)\,h(y)$ in the case of "independent random variables". Prove that then $\langle XY\rangle = \langle X\rangle\langle Y\rangle.$
  3. To add two random variables, $Z = X + Y$, you can convolve the joint distribution: $h(z) = \int dx~f(x, z-x).$ Prove that the expectation value of such a $Z$ is given by $\langle X + Y\rangle = \langle X\rangle + \langle Y\rangle$ and that for independent random variables, $\operatorname{Var}[X + Y] = \operatorname{Var}[X] + \operatorname{Var}[Y].$
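
These are proofs, not computations, but here is a quick Monte Carlo sanity check of all three identities, with arbitrarily chosen independent example distributions:

```python
import numpy as np

# Monte Carlo sanity check of the three exercises above (numerical evidence,
# not proofs). X and Y are independent example variables, chosen arbitrarily.
rng = np.random.default_rng(0)
X = rng.exponential(2.0, size=1_000_000)
Y = rng.normal(3.0, 1.5, size=1_000_000)

# 1. Var[X] = <X^2> - <X>^2
print(X.var(), (X**2).mean() - X.mean()**2)

# 2. independence: <XY> = <X><Y>
print((X * Y).mean(), X.mean() * Y.mean())

# 3. Z = X + Y: means add, and variances add for independent variables
Z = X + Y
print(Z.mean(), X.mean() + Y.mean())
print(Z.var(), X.var() + Y.var())
```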

Okay, so now that you know what this definition is: we physicists have a crude tool to use these continuous probabilities to model discrete probabilities. It is a family of smooth functions that do not technically exist, called the "Dirac $\delta$-function." In words, $\delta(x)$ is an infinitely tall, infinitely thin spike centered at zero, such that the integral over it is one. In mathematics, $$\int_{u}^{v} dx~f(x)~\delta(x - x_0) = \left\{\begin{array}{ll} f(x_0) & \text{if } u < x_0 < v,\\ -f(x_0) & \text{if } v < x_0 < u,\\ 0 & \text{otherwise.} \end{array}\right.$$ (Although you get an indeterminate case if $u = x_0$ or $v = x_0.$)

For example, one of these can be a limit of Gaussians with unit area, $$\delta_\sigma(x) = \frac{1}{\sqrt{2\pi\sigma^2}}~e^{-x^2/(2\sigma^2)},$$ and you can imagine that we do all of the mathematical theory with this thing as a smooth function ($\sigma > 0$) but then in the end always take the limit $\sigma \to 0$ to get the discrete-ish answers that we expect. However, there are lots of functions which limit to the Dirac $\delta$-function; it is not just the Gaussians.

Now that you know this trick, all I have to tell you is that, say, your normal six-sided die has the probability density function $$\frac16\big[\delta(x-1) + \delta(x-2) + \delta(x-3) + \delta(x-4) + \delta(x-5) + \delta(x-6)\big].$$ And then if you work out $\langle X\rangle$ you'll get your familiar $1/6 + 2/6 + 3/6 + 4/6 + 5/6 + 6/6 = 3.5$ expression, as with all of the other expressions.
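
Here is a small numerical sketch of the $\sigma \to 0$ trick applied to the die: the mean comes out as $3.5$ (exactly, by symmetry, for any $\sigma$), while probabilities of regions, such as $P(X < 1.5) \to 1/6$, only emerge in the limit.

```python
import numpy as np

# Sketch: model the fair die as a sum of Gaussian nascent delta functions
# and watch the discrete answers emerge as sigma -> 0.
def delta_sigma(x, x0, sigma):
    return np.exp(-(x - x0)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

x = np.linspace(-5, 15, 400_001)
dx = x[1] - x[0]

for sigma in (1.0, 0.3, 0.05):
    f = sum(delta_sigma(x, k, sigma) for k in range(1, 7)) / 6  # die density
    mean = (x * f * dx).sum()             # 3.5 for every sigma, by symmetry
    p_low = (f[x < 1.5] * dx).sum()       # P(X < 1.5) -> 1/6 as sigma -> 0
    print(f"sigma={sigma}: <X>={mean:.4f}, P(X<1.5)={p_low:.4f} (1/6={1/6:.4f})")
```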

Ensembles and secret distributions

Now I claim to you that secretly, we have chosen a probability distribution on phase space, and that means that we have... a probability-density function!

When we start in statistical mechanics from the assumption of entropy maximization, we must start in a very particular circumstance, and these circumstances are called "ensembles". The one we have to start from is the "microcanonical ensemble": it says "there is a very solid box of fixed volume, no particles are going in or out of the box, no energy is going in or out of the box." And in this particular circumstance we say that the probability distribution for a bunch of particles is flat over the underlying phase space, $f(x_1, x_2, \dots, p_1, p_2, \dots) = c.$ The tiny constant $c$ might even have to limit to 0 in the end in some circumstances! But that's fine; what we really mean is that the relative probabilities of seeing A or B are proportional to their phase-space volumes.

So the fact that this is constant is why it is not appearing in your first expression for the partition function, but I claim that it is there, and that the generic form is $$Z = \int d^D p~d^D x~f(x_1,\dots,x_D,p_1,\dots,p_D)~e^{-\beta H(x_1,\dots,p_1,\dots)}.$$ The reason that you are not seeing it is that it is a constant which has been pulled out of the integral and shoved quietly into the other part of the probability calculation (recall that probabilities are always $e^{-\beta H}/Z$...).

If you punch a sum of evenly-weighted Dirac $\delta$-functions in for $f$, corresponding to the discrete probability space of "this thing can be in conformation X or conformation Y, but there is no continuous path between the two," you'll instead get your second expression, again with a multiplicative constant pulled out of the definition for convenience's sake.
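
A sketch of that last move, with two hypothetical conformations at $x = 0$ and $x = 2$, $H(x) = x^2$, and the $\delta$-functions smoothed with a small $\sigma$ as above:

```python
import numpy as np

# Put a sum of (smoothed) delta functions in for f in
# Z = integral dx f(x) exp(-beta*H(x)) and recover the discrete sum.
# The conformations at x = 0 and x = 2 and H(x) = x**2 are made up for the demo.
beta, sigma = 1.0, 1e-3
H = lambda x: x**2
f = lambda x: sum(np.exp(-(x - x0)**2 / (2 * sigma**2)) /
                  np.sqrt(2 * np.pi * sigma**2) for x0 in (0.0, 2.0))

x = np.linspace(-5, 5, 2_000_001)        # grid fine enough to resolve sigma
dx = x[1] - x[0]
Z_integral = (f(x) * np.exp(-beta * H(x)) * dx).sum()
Z_discrete = np.exp(-beta * H(0.0)) + np.exp(-beta * H(2.0))
print(Z_integral, Z_discrete)            # agree up to O(sigma^2) smoothing error
```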

Answered by CR Drost on November 27, 2020

The following assumption is used in some books of statistical mechanics at the very start and is not related to the canonical ensemble:

"The phase can be split into small parts $C_i$ each with volume $h^3$ such that $H$ is constant on each $C_i$ and takes value $H_i$ on it".

With this assumption, the continuous formula can be simplified:

$$\frac{1}{h^3}\int dp\,dr\, e^{-\beta H} = \frac{1}{h^3}\sum_i \int_{C_i} dp\,dr\, e^{-\beta H} = \sum_i \frac{1}{h^3}\int_{C_i} dp\,dr\, e^{-\beta H_i} = \sum_i e^{-\beta H_i}\,\frac{1}{h^3}\int_{C_i} dp\,dr$$

Saying that $C_i$ has volume $h^3$ is equivalent to saying $\int_{C_i} dp\,dr = h^3$.

Finally:

$$\frac{1}{h^3}\int dp\,dr\, e^{-\beta H} = \sum_i e^{-\beta H_i}$$

It's just about approximating a continuous function by a step function.
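
A minimal sketch of this cell decomposition for a 1D harmonic oscillator (so phase space is the $(q,p)$ plane and the cells have area $h$; all parameter values below are arbitrary): the sum $\sum_i e^{-\beta H_i}$ over cell centres reproduces $\frac1h\int dp\,dq\, e^{-\beta H} = \frac{2\pi}{h\beta\omega}$.

```python
import numpy as np

# Tile the (q, p) plane with square cells of area h, take H constant at each
# cell centre, and compare sum_i exp(-beta*H_i) with the exact integral.
# m = w = 1 and the values of beta and h are chosen arbitrarily.
m = w = 1.0
beta, h = 1.0, 0.1
H = lambda q, p: p**2 / (2 * m) + 0.5 * m * w**2 * q**2

side = np.sqrt(h)                                  # square cells of area h
centres = np.arange(-8, 8, side) + side / 2
q, p = np.meshgrid(centres, centres)               # cell centres

Z_cells = np.exp(-beta * H(q, p)).sum()            # sum_i e^{-beta H_i}
Z_exact = 2 * np.pi / (h * beta * w)               # (1/h) * Gaussian integral
print(Z_cells, Z_exact)                            # ~62.83 for both
```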

Answered by Benoit on November 27, 2020

First off, the partition function in quantum mechanics can be expressed as an integral. The partition function is $$Z = \text{Tr}\, e^{-\beta H}.$$

This trace can be evaluated in any basis. In the basis where the Hamiltonian is diagonal we write $Z = \sum_i e^{-\beta E_i}$, where the $E_i$ are the energy eigenvalues and the sum is over all states. However, in another basis, for example the $x$-basis, we will write $$Z = \int dx\, \langle x| e^{-\beta H} |x\rangle.$$ My point is: even in quantum mechanics, if we do the calculation in a continuous basis we end up with an integral form.
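
A quick sketch of that basis independence, using a small random Hermitian matrix as a stand-in Hamiltonian (everything here is made up for illustration):

```python
import numpy as np
from scipy.linalg import expm

# Sketch: Tr exp(-beta*H) is basis independent. Compare the eigenbasis sum
# over Boltzmann factors with the trace of the matrix exponential taken in
# the original (position-like) basis, for a random Hermitian H.
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
Hm = (A + A.T) / 2                  # random real symmetric "Hamiltonian"
beta = 0.7

Z_eigen = np.exp(-beta * np.linalg.eigvalsh(Hm)).sum()  # sum_i e^{-beta E_i}
Z_trace = np.trace(expm(-beta * Hm))                    # Tr e^{-beta H}
print(Z_eigen, Z_trace)                                 # identical up to rounding
```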

So now I hope you'd agree that the question is not really how to get from a discrete form to a continuous form, but rather how to get from the quantum mechanical form to the classical form. I think the easiest way to see this connection is by formulating a path-integral representation of the partition function and taking the classical limit. This is discussed in the classic book by Feynman and Hibbs, in chapter 9 on statistical mechanics.

Answered by A. Jahin on November 27, 2020
