TransWikia.com

Probability density functions for the Bose-Einstein and Fermi-Dirac statistics

Physics Asked on March 9, 2021

In the Bose-Einstein and Fermi-Dirac statistics, the average particle number $\bar n_r$ at the energy level $\varepsilon_r$ is given by

$$\bar n(\varepsilon_r)=\bar n_r=\frac{1}{e^{\beta(\varepsilon_r-\mu)}\pm1}$$

with the $+$ for Fermions and the $-$ for Bosons, and where $\mu$ is the chemical potential.
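As a quick numeric illustration of this formula (with arbitrary example values for $\beta$, $\mu$ and $\varepsilon$, in units where $k_B=1$):

```python
import math

def mean_occupation(eps, beta, mu, statistics):
    """Average occupation of one level: '+' sign for fermions (FD), '-' for bosons (BE)."""
    sign = +1 if statistics == "FD" else -1
    return 1.0 / (math.exp(beta * (eps - mu)) + sign)

beta, mu = 2.0, 0.5   # arbitrary example values
eps = 1.0

n_fd = mean_occupation(eps, beta, mu, "FD")  # always between 0 and 1
n_be = mean_occupation(eps, beta, mu, "BE")  # unbounded; requires eps > mu

print(n_fd, n_be)
```

Note that the Fermi-Dirac average is always between 0 and 1, while the Bose-Einstein average can exceed 1 and diverges as $\varepsilon\to\mu^+$.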

However, in the texts that I have checked, I haven't been able to find an expression for the probability density functions $f(\varepsilon)$ of these distributions. What would the function that gives the probability of a boson or a fermion having a certain energy be?

My attempt

  1. In the case of the FD statistics, since there can be at most one fermion in each state, I guess that the probability density function would equal the average: $f(\varepsilon_r)=\bar n(\varepsilon_r)$. Would this be correct?

  2. On the other hand, in the case of the BE statistics, what would the probability density function $f(\varepsilon_r)$ be? Since the average number of particles at each energy level is given by $\bar n(\varepsilon_r)=\sum_r f(\varepsilon_r)\,n(\varepsilon_r)$, it is not trivial to deduce $f(\varepsilon_r)$ from $\bar n(\varepsilon_r)$.

One Answer

Let us go summarily through the derivation of these probability distributions. References are given at the end.

Microscopic description

First of all, let's take the second-quantization or quantum-field-theoretic point of view: our system is a field, which can have several modes of oscillation or propagation. The system is then the collection of such modes. Each mode has an energy, and it turns out that this energy is quantized in integer multiples of a given amount, or quantum, which is different for each mode. For concreteness, let's say we have two modes a and b, with basic quanta $\epsilon_\text{a}$ and $\epsilon_\text{b}$.

For Bosonic systems there is no bound on the maximum energy of a mode. In this case the energy of the first mode in our example would be $n\epsilon_\text{a}$ with $n\in\mathbf{N}$, and analogously for the second mode. For Fermionic systems there can be at most one quantum. In this case we would have $n\epsilon_\text{a}$ with $n\in\{0,1\}$ for the first mode in our example.

In a semiclassical approach, the microscopic state of the system is given by the amount of energy in each mode, or equivalently by the multiples of the basic quanta. In our case a microscopic state is a pair $(n_\text{a}, n_\text{b})$.
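The microstate space is easy to enumerate in code. A small sketch (with assumed example quanta; the Bosonic occupations are in principle unbounded, so they are truncated at an arbitrary cutoff for illustration):

```python
from itertools import product

eps_a, eps_b = 1.0, 2.0  # example quanta, arbitrary units

# Fermionic two-mode system: each occupation is 0 or 1 -> 4 microstates.
fermi_states = list(product([0, 1], repeat=2))

# Bosonic occupations are unbounded; truncate at n_max for illustration.
n_max = 3
bose_states = list(product(range(n_max + 1), repeat=2))

def total_quanta(state):
    """Total number of quanta N(n_a, n_b) = n_a + n_b."""
    return sum(state)

def total_energy(state, eps=(eps_a, eps_b)):
    """Total energy E(n_a, n_b) = eps_a*n_a + eps_b*n_b."""
    return sum(n * e for n, e in zip(state, eps))

print(fermi_states)
print(total_energy((1, 1)))
```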

Macroscopic and probabilistic descriptions

When we prepare our system using a macroscopic protocol, we never manage to prepare it in exactly the same microscopic state every time. So we can at most give a guess of what its microscopic state is, in the form of a distribution of probabilities over all possible microstates: $p(n_\text{a}, n_\text{b})$. How do we assign such probabilities?

We notice that our macroscopic preparation protocol gives rise to reproducible features. For example we find that the system has on average (over many preparations) a particular total number of quanta $\bar{N}$ and a particular total energy $\bar{E}$. When the microstate is $(n_\text{a}, n_\text{b})$, the total number of quanta is $N(n_\text{a}, n_\text{b}) = n_\text{a} + n_\text{b}$ and the total energy is $E(n_\text{a}, n_\text{b}) = \epsilon_\text{a} n_\text{a} + \epsilon_\text{b} n_\text{b}$. So a basic requirement on our probability distribution is that it reflect such observed averages: we require $$\begin{aligned} \bar{N} &= \sum_{(n_\text{a}, n_\text{b})} (n_\text{a} + n_\text{b})\, p(n_\text{a}, n_\text{b}), \\ \bar{E} &= \sum_{(n_\text{a}, n_\text{b})} (\epsilon_\text{a} n_\text{a} + \epsilon_\text{b} n_\text{b})\, p(n_\text{a}, n_\text{b}). \end{aligned}$$

The two constraints are not enough to determine the distribution. The next idea, discussed at length by Gibbs and later presented even more clearly by Jaynes, is to assign the "broadest" distribution compatible with the observed constraints above, "broadness" being quantified by the Shannon entropy. So we want to maximize $$-\sum_{(n_\text{a}, n_\text{b})} p(n_\text{a}, n_\text{b}) \ln p(n_\text{a}, n_\text{b})$$ under the constraints above.

The result can be found using the method of Lagrange multipliers. The general solution is $$\begin{aligned} p(n_\text{a}, n_\text{b}) &\propto \exp[A\, N(n_\text{a}, n_\text{b}) + B\, E(n_\text{a}, n_\text{b})] \\ &= \exp[A (n_\text{a} + n_\text{b}) + B (\epsilon_\text{a} n_\text{a} + \epsilon_\text{b} n_\text{b})] \\ &= \exp[(A + B \epsilon_\text{a}) n_\text{a}] \cdot \exp[(A + B \epsilon_\text{b}) n_\text{b}], \end{aligned}$$ where the coefficients $A$ and $B$ are obtained from the constraints above. Note how this maximum-entropy distribution turns out to factorize into distributions for the single modes.

First let's find the normalization factor of the distribution. It is $$\sum_{(n_\text{a}, n_\text{b})} \exp[(A + B \epsilon_\text{a}) n_\text{a}] \cdot \exp[(A + B \epsilon_\text{b}) n_\text{b}] = \sum_{n_\text{a}} \exp[(A + B \epsilon_\text{a}) n_\text{a}] \cdot \sum_{n_\text{b}} \exp[(A + B \epsilon_\text{b}) n_\text{b}].$$
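This factorization of the double sum into single-mode sums can be checked numerically. A sketch with assumed example values of $A$, $B$ and the quanta, truncating the (in principle infinite) Bosonic sums at a large cutoff:

```python
import math
from itertools import product

A, B = -0.2, -1.0   # example coefficients; must give A + B*eps < 0 for each mode
eps = [1.0, 2.0]    # the two mode quanta eps_a, eps_b
n_max = 200         # truncation of the infinite Bosonic sums

def weight(state):
    """Unnormalized probability exp[sum_r (A + B*eps_r) n_r]."""
    return math.exp(sum((A + B * e) * n for n, e in zip(state, eps)))

# Double sum over all microstates (n_a, n_b)...
Z_joint = sum(weight(s) for s in product(range(n_max + 1), repeat=2))

# ...equals the product of the single-mode sums.
Z_factored = math.prod(
    sum(math.exp((A + B * e) * n) for n in range(n_max + 1)) for e in eps
)

print(Z_joint, Z_factored)
```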

We can calculate each sum explicitly, and here is where the difference between Bosonic and Fermionic systems appears.

Let's focus on the Bosonic case now. We have $n_\text{a}, n_\text{b} \in \{0,1,2,\dotsc\}$. Each sum is a geometric series: $$\sum_{n\in\mathbf{N}} \exp[(A + B \epsilon) n] = \frac{1}{1-\exp(A + B \epsilon)},$$ which is convergent only if $A + B \epsilon < 0$.
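The geometric-series closed form is easy to verify against a truncated sum, for any assumed $x = A + B\epsilon < 0$:

```python
import math

def geometric_sum_truncated(x, n_max):
    """Partial sum of exp(x*n) for n = 0..n_max; converges as n_max grows only for x < 0."""
    return sum(math.exp(x * n) for n in range(n_max + 1))

x = -1.2   # example value of A + B*eps; must be negative
closed_form = 1.0 / (1.0 - math.exp(x))

print(geometric_sum_truncated(x, 100), closed_form)
```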

Thus our probability distribution is $$p(n_\text{a}, n_\text{b}) = \prod_{r=\text{a},\text{b}} [1-\exp(A + B \epsilon_r)]\, \exp[(A + B \epsilon_r) n_r],$$ which is what you were looking for. The marginal probability distribution for each mode is also immediately visible in the factorized expression above.

We must still find the coefficients $A,B$ in terms of our constraints $\bar{N},\bar{E}$. Since the distribution factorizes and the expectation of a sum is the sum of the expectations, the constraint equations become $$\begin{aligned} \bar{N} &= \sum_{r=\text{a},\text{b}} [1-\exp(A + B \epsilon_r)] \sum_{n_r} n_r \exp[(A + B \epsilon_r) n_r], \\ \bar{E} &= \sum_{r=\text{a},\text{b}} \epsilon_r\, [1-\exp(A + B \epsilon_r)] \sum_{n_r} n_r \exp[(A + B \epsilon_r) n_r]. \end{aligned}$$

The sums over $n_r$ are arithmetic-geometric series, with closed form $$\sum_{n_r\in\mathbf{N}} n_r \exp[(A + B \epsilon_r) n_r] = \frac{\exp(A + B \epsilon_r)}{\bigl[1-\exp(A + B \epsilon_r)\bigr]^2}.$$ The constraint equations thus become, after some simplification, $$\begin{aligned} \bar{N} &= \sum_{r=\text{a},\text{b}} \frac{1}{\exp[-(A + B \epsilon_r)]-1}, \\ \bar{E} &= \sum_{r=\text{a},\text{b}} \frac{\epsilon_r}{\exp[-(A + B \epsilon_r)]-1}. \end{aligned}$$
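Both the arithmetic-geometric closed form and the resulting mean occupation per mode, $1/(\exp[-(A+B\epsilon_r)]-1)$, can be checked against truncated sums for an assumed value of $x = A + B\epsilon_r$:

```python
import math

x = -0.8     # example A + B*eps_r; must be negative
n_max = 400  # truncation of the infinite sums

# Truncated sums
Z = sum(math.exp(x * n) for n in range(n_max + 1))
S = sum(n * math.exp(x * n) for n in range(n_max + 1))

# Closed forms quoted in the text
Z_closed = 1.0 / (1.0 - math.exp(x))
S_closed = math.exp(x) / (1.0 - math.exp(x)) ** 2

# Mean occupation of the mode: S/Z simplifies to 1/(exp(-x) - 1)
n_bar = S / Z
n_bar_closed = 1.0 / (math.exp(-x) - 1.0)

print(n_bar, n_bar_closed)
```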

Usually these are solved numerically, especially for systems with many (or infinitely many) modes. Note that no solution is possible unless $A + B \epsilon_r < 0$ for every mode.
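For the two-mode example the constraints can actually be inverted in closed form (with more modes one would indeed resort to a numerical root-finder). A sketch, with assumed target values $\bar{N}=0.5$ and $\bar{E}=0.6$: first recover the per-mode occupations, then invert $\bar n_r = 1/(\exp[-(A+B\epsilon_r)]-1)$, then solve the two linear equations for $A$ and $B$:

```python
import math

eps_a, eps_b = 1.0, 2.0  # example mode quanta

def solve_two_modes(N_bar, E_bar, eps_a, eps_b):
    # Recover per-mode mean occupations from the two constraints.
    n_b = (E_bar - eps_a * N_bar) / (eps_b - eps_a)
    n_a = N_bar - n_b
    # Invert n = 1/(exp(-x) - 1) to get x_r = A + B*eps_r for each mode.
    x_a = -math.log(1.0 + 1.0 / n_a)
    x_b = -math.log(1.0 + 1.0 / n_b)
    # Solve the linear system x_r = A + B*eps_r.
    B = (x_b - x_a) / (eps_b - eps_a)
    A = x_a - B * eps_a
    return A, B

A, B = solve_two_modes(0.5, 0.6, eps_a, eps_b)

# Check by plugging back into the constraint equations.
def mean_occ(A, B, e):
    return 1.0 / (math.exp(-(A + B * e)) - 1.0)

N_check = mean_occ(A, B, eps_a) + mean_occ(A, B, eps_b)
E_check = eps_a * mean_occ(A, B, eps_a) + eps_b * mean_occ(A, B, eps_b)
print(A, B, N_check, E_check)
```

Note that the recovered coefficients automatically satisfy $A + B\epsilon_r < 0$ whenever the target occupations are positive.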

You have surely noticed that $A,B$ are related to the temperature and chemical potential by $A = \beta\mu$, $B = -\beta$.

References and relation to equilibrium thermodynamics

The principles behind the calculations above can be nicely gathered by comparing these two references:

The paper by Jaynes also explains the relation between this probabilistic approach and the equations of equilibrium thermodynamics.

You can easily generalize what we did here to more than two modes and to Fermionic systems – in that case all sums only have two terms.
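In the Fermionic case the calculation collapses to two-term sums, and redoing it gives the Fermi-Dirac form. A minimal sketch, with assumed example coefficients (no sign restriction on $A + B\epsilon$ is needed here, since a two-term sum always converges):

```python
import math

def fermi_mean_occupation(A, B, e):
    """Mean occupation of one Fermionic mode: the sums run over n in {0, 1} only."""
    x = A + B * e
    Z = 1.0 + math.exp(x)           # two-term normalization sum
    return math.exp(x) / Z          # = 1/(exp(-x) + 1), the Fermi-Dirac form

A, B = 0.5, -2.0   # example coefficients
e = 1.0
n_bar = fermi_mean_occupation(A, B, e)
print(n_bar, 1.0 / (math.exp(-(A + B * e)) + 1.0))
```

With $A=\beta\mu$ and $B=-\beta$ this is exactly $1/(e^{\beta(\epsilon-\mu)}+1)$, and the occupation is always between 0 and 1.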


As said at the beginning, this is a semiclassical approach. In reality the set of microscopic states is much more complex, a state being described by a density matrix. For the full quantum treatment you can check the second paper by Jaynes:


Finally, in the probabilistic description one can include other macroscopic reproducible features besides the average number of quanta and the total energy: other quantities (angular momentum, say), or the precision with which these quantities are reproduced. This way we obtain a plethora of "ensembles" with many more coefficients. Take a look at the Gaussian ensemble, for example:

The general principles behind these are again clearly explained in several papers by Jaynes, which can be found on the webpage linked above.

Correct answer by pglpm on March 9, 2021
