Physics Asked on May 6, 2021
I’m somewhat confused by these two ideas. We start by saying that the trajectory of a particle can be described by giving its phase-space coordinates $(q,p)$. So, given the initial condition, I can tell where the particle will be in the future. That sounds fine. Next, we say that as we increase the number of particles in the system it becomes impractical to track all of these phase-space points, and thus we use the concept of probability.
Wikipedia says,
In physics, statistical mechanics is a mathematical framework that applies statistical methods and probability theory to large assemblies of microscopic entities. It does not assume or postulate any natural laws but explains the macroscopic behavior of nature from the behavior of such ensembles.
So this gives the feeling that probability theory is included because there is a large number of particles in the system.
Earlier I asked this question,
Does the appearance of $\hbar$ imply the role of quantum mechanics?
Further, we use the fact that it is not possible to specify a phase-space point exactly, but only a cell of volume $h^3$, which comes from the uncertainty principle. There are many factors of $h$ that appear in classical statistical mechanics, and so I should conclude that quantum mechanics plays a role here.
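(For concreteness, this is the factor that appears in, e.g., the single-particle partition function of a classical ideal gas,
$$ Z_1=\frac{1}{h^{3}}\int d^{3}q\,d^{3}p\;e^{-\beta p^{2}/2m}=\frac{V}{\lambda^{3}},\qquad \lambda=\frac{h}{\sqrt{2\pi m k_{B}T}}, $$
even though the rest of the calculation is entirely classical.)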
One thing that is still not clear to me: the probability that we have introduced here, is it due to the large number of particles or due to the quantum uncertainty principle?
The uncertainty that we are dealing with here, is it built in or is it due to large numbers?
You’re confusing two issues. Statistical mechanics replaces the momenta of many classical particles, i.e. a long list of, say, $p_x$ values for each particle, each moving along its own trajectory, by a few average quantities, such as the average momentum and the statistical spread of the momentum values about the mean. In this way, instead of dealing with a list of $10^{25}$ numbers, you deal with $2$ ($\langle p_x\rangle$ and $\sigma_{p_x}$) while maintaining your ability to predict significant features of the system.
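A minimal sketch of this data reduction, with the per-particle momenta generated at random purely for illustration (any real microstate would supply its own list):

```python
import numpy as np

# Toy illustration: replace a long list of per-particle momenta
# by two summary statistics, the mean and the spread.
rng = np.random.default_rng(0)

# Simulated x-momenta for N particles (arbitrary units); in a real system
# these would come from the microscopic state, not from a random generator.
N = 1_000_000
p_x = rng.normal(loc=0.0, scale=1.0, size=N)

mean_px = p_x.mean()   # <p_x>: the average momentum
sigma_px = p_x.std()   # sigma_{p_x}: the statistical spread about the mean

print(f"<p_x>      = {mean_px:.4f}")
print(f"sigma_{{p_x}} = {sigma_px:.4f}")
```

The point is only that two numbers stand in for $10^{6}$ (or $10^{25}$) individual values; no quantum mechanics is involved anywhere in this step.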
This has nothing to do with quantum-based uncertainty on some quantity, which remains even if there is a single particle and is an intrinsic feature of the theory. If you prepare a single particle in a spin-up state along $\hat z$, you cannot predict with certainty the outcome of a measurement of spin along $\hat y$. Indeed, if you prepared a second particle in exactly the same way and performed the exact same experiment, the result could be completely different even if you had perfect control of the experimental conditions.
In quantum mechanics we speak of averages in the sense that, by repeating the same experiment under precisely the same ideal conditions (possibly on a single-particle system) numerous times, we can get probabilities and averages of outcomes; this is not a way of paring down the overload of information contained in a system with a large number of constituents.
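A small numpy sketch (not part of the original answer) illustrating this: the 50/50 outcome for a $\hat y$ measurement on a $\hat z$-polarized spin follows from the state of a single particle via the Born rule, and repeating the identical experiment only recovers that probability as a frequency.

```python
import numpy as np

# A single spin prepared in |+z>, measured along y.
# Any one outcome is unpredictable; only the probabilities are fixed.
up_z = np.array([1.0, 0.0], dtype=complex)                 # |+z>
up_y = np.array([1.0, 1.0j], dtype=complex) / np.sqrt(2)   # |+y>

p_up_y = abs(np.vdot(up_y, up_z)) ** 2                     # |<+y|+z>|^2 = 1/2
print(f"P(+y) = {p_up_y:.2f}")

# Repeating the identical preparation and measurement many times
# recovers this probability as a frequency of outcomes.
rng = np.random.default_rng(1)
outcomes = rng.random(10_000) < p_up_y
print(f"observed frequency of +y over 10000 runs: {outcomes.mean():.3f}")
```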
Answered by ZeroTheHero on May 6, 2021