Physics Asked on October 29, 2021
Well, we know that it is impossible to say exactly when a radioactive atom will decay. It is a random process. My question is: why, then, does a collection of them decay in a predictable way (exponential decay)? Does the randomness disappear when they get together? What is the cause of this drastic change in their behaviour?
I think what is confusing you is the way we use words like "random" and "unpredictable". Think about a six-sided die. The die has a very specific structure; it's highly symmetrical. Because of this, we can say with a great deal of certainty that if you roll that die 10,000 times, it will show a 2 about 1/6th of the time.
The radioactive decay process for billions of identical atoms is just like rolling billions of uniformly structured dice. Every atom has the same structure and properties. So we can say with a great deal of certainty what fraction will decay over time. You don't know anything about what a single die roll is going to do, but because of the structure of the die you can say something about what you expect from the results of ten thousand rolls. Atoms are "doing the decay experiment" (or, in some sense, "rolling the die and deciding based on that roll whether to decay or not") billions of times. So you get nice, uniform results because so many identical experiments are being performed.
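To make this concrete, here is a minimal sketch in Python (the seed and the number of rolls are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # fixed seed so the run is reproducible

rolls = rng.integers(1, 7, size=10_000)  # 10,000 rolls of a fair six-sided die
fraction_of_twos = np.mean(rolls == 2)   # how often a 2 came up

# A single roll is unpredictable, but the fraction of 2s lands close to 1/6:
print(f"fraction of 2s: {fraction_of_twos:.4f} (expected {1/6:.4f})")
```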
It may feel like a contradiction to have something so predictable (the overall rate of decay) arise from something where each individual action is "unpredictable". But the "predictability" in the case of the die comes from the fact that the die itself is not a random thing: it's very symmetrically structured. Similarly, the properties of a particular type of atom are always the same. So that's where the predictability comes from. It reflects the uniformity of that particular type of atom's properties, just as the uniformly distributed 1, 2, 3, 4, 5, 6 is a reflection of the die's uniform structure.
In our heads we might map "random" and "unpredictable" to the same place, but that mapping is a bit misleading. For many, many things where the individual experiment has a "random" outcome, there is an underlying structure or property that shows through when you repeat it enough times. Hence the apparent contradiction of getting highly predictable results from a "random" process.
Answered by msouth on October 29, 2021
It is a common principle in physics for a quantity to emerge from the collective properties of matter. For example, consider temperature: in the kinetic theory of gases, temperature is a measure of the average kinetic energy of all the gas molecules. But notice that a single molecule by itself doesn't have a temperature.
Here is another way to think of it. Imagine going to an airport, walking up to random people, and asking "Why are you here?". A reasonable percentage of the people asked would answer "to travel by aeroplane". But there are also people who are just at the airport to see friends and family off. Notice that the observation that most people at an airport are there to fly is a direct result of the kind of place an airport is.
Likewise, here we have no idea whether a particular particle is going to decay in the immediate future. The way we 'ask' is to take experimental readings of how many particles are left at the end, and these readings depend entirely on the kind of particle it is (drawing the analogy to the airport).
I hope this helped you understand the idea :) Please do leave a comment if any part was not clear.
Answered by User688539 on October 29, 2021
The underlying reason for this is how we define the problem.
If I have 100 individual, identifiable radioactive particles, my ability to predict whether any particular one of them decays or not is no better than random chance. However, in the situations you describe, we are not treating them as 100 individual, identifiable radioactive particles: any one decaying is treated the same as any other decaying.
This is where the central limit theorem comes into play. Because we are looking at the sum of all particles that decayed, and any one decay is the same as any other, the behavior starts to become more predictable. We don't know which particles will decay, but we can be more certain how many will decay in any period of time.
Get to a large enough number (say, a few million atoms), and you find that the number of decays in any time frame is extremely predictable. It's not because the radioactivity became more predictable, but rather because you are choosing to measure something that is more predictable.
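A quick numerical check of this (a sketch in Python; the per-step decay probability and the sample sizes are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
p_decay = 0.01  # assumed probability that any given atom decays in one time step

# Repeat the "count the decays" experiment many times for growing sample sizes.
# The relative spread of the count shrinks roughly like 1/sqrt(N):
for n_atoms in (100, 10_000, 1_000_000):
    counts = rng.binomial(n_atoms, p_decay, size=10_000)
    rel_spread = counts.std() / counts.mean()
    print(f"N = {n_atoms:>9,}: relative fluctuation = {rel_spread:.4f}")
```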
Answered by Cort Ammon on October 29, 2021
Because an average value is unique ("deterministic"), unlike a single outcome.
Answered by Vladimir Kalitvianski on October 29, 2021
As an illustration, we can simulate radioactive decay using various starting numbers of atoms. We get something like this:
The two plots show the proportion of remaining atoms as a function of time; the bottom panel uses a logarithmic scale to better see what is happening. Each curve shows a simulation with a given starting population (from 1 to 1000 atoms). As you can see, as you increase the number of atoms, the curves converge rapidly to the limit curve (in blue). Since the number of atoms in most problems is much larger than 1000, it makes sense to use the limit curve to model the atom population.
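A minimal version of such a simulation (a sketch in Python using NumPy and Matplotlib; the decay constant and starting populations are arbitrary choices, with each atom assigned an exponentially distributed lifetime):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=0)
decay_constant = 1.0  # assumed rate; time is then measured in mean lifetimes

t = np.linspace(0, 5, 200)
for n0 in (10, 100, 1000):
    lifetimes = rng.exponential(1 / decay_constant, size=n0)  # one lifetime per atom
    remaining = [(lifetimes > ti).mean() for ti in t]         # proportion undecayed
    plt.plot(t, remaining, label=f"N0 = {n0}")

plt.plot(t, np.exp(-decay_constant * t), "k--", label="limit curve exp(-t)")
plt.xlabel("time (mean lifetimes)")
plt.ylabel("proportion of atoms remaining")
plt.yscale("log")  # log view, as in the bottom panel described above
plt.legend()
plt.show()
```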
Answered by TonioElGringo on October 29, 2021
This is the law of large numbers: if you repeat a trial many times, the average result tends towards the expected value. For example, if you roll a 6-sided die, you could get any of the six results 1, 2, 3, 4, 5, 6. The average of the six results is 3.5, and if you roll the die a million times and take the average of all the rolls, you are extremely likely to get an average of about 3.5.
But 1) you might not get a number close to 3.5 (there is a non-zero chance the average comes out at, say, 2 or 1), and 2) you still can't predict which result you will get when you roll a single die.
In the same way, you might not be able to predict when a single atom will decay (i.e. when you roll a single die), but you can make very good predictions when you have lots of atoms (i.e. equivalent to rolling the die millions of times).
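A quick sketch in Python of that convergence (the roll counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# The mean of many die rolls converges to the expected value 3.5,
# even though each individual roll stays unpredictable:
for n_rolls in (10, 1_000, 1_000_000):
    rolls = rng.integers(1, 7, size=n_rolls)
    print(f"{n_rolls:>9,} rolls: mean = {rolls.mean():.3f}")
```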
Answered by Allure on October 29, 2021
The underlying reason comes down to the probabilistic nature of quantum events. At the quantum level, every event has a particular probability of occurring within a given length of time. Just like rolling a die: you never know when you will roll a six, but you know that one will turn up sooner or later. If you roll hundreds or thousands of times, the mathematics of probability will give you a good idea of what the distribution of sixes will be.
So it is with radioactivity. You never know when a given atom will "roll a six" and decay. But you know what the distribution of decay events in a lump of atoms will be.
You may still want to know: why are quantum events probabilistic? Augh! It is one of life's deepest mysteries. The maths works; that is all we can say for sure.
Answered by Guy Inchbald on October 29, 2021
Loosely speaking, a count of events is Poisson distributed whenever there is a "large" number of possible events, each of which is "rare" and independent of the others. This can be shown mathematically (look up the Poisson process). Since this applies both to the number of spam emails received per hour and to the number of decays of a radioactive isotope, both are distributed as $$ \Pr(X=k) = \frac{\lambda^k e^{-\lambda}}{k!} $$ where $\lambda$ is the (dimensionless) rate constant of the Poisson process, which is equal to the average value, $E[X]=\lambda$, as well as to the variance, $\mathrm{Var}[X]=\lambda$. In physics we usually replace $\lambda \to \tilde\lambda \cdot t$, where $\tilde\lambda$ has dimension $\mathrm{s}^{-1}$.
To simplify the above argument, one could say that the $e^{-\tilde\lambda t}$ law of radioactive isotopes is due to an averaging effect.
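To see the "many rare independent events" picture numerically, here is a sketch in Python (the atom count and per-window decay probability are assumed values chosen so that $\lambda = 3$):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_atoms = 1_000_000  # a "large" number of possible events
p = 3e-6             # each event is "rare" and independent of the others
lam = n_atoms * p    # expected count per counting window

# Decay counts in 100,000 identical counting windows:
counts = rng.binomial(n_atoms, p, size=100_000)

# For a Poisson distribution, mean and variance both equal lambda:
print(f"lambda   = {lam:.3f}")
print(f"mean     = {counts.mean():.3f}")
print(f"variance = {counts.var():.3f}")
```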
Answered by Semoi on October 29, 2021
Radioactive decay is entirely random and it is impossible to predict when a specific atom will decay. However, at any point in time, each radioactive atom in a sample has the same probability of decaying. Therefore, the number of decay events (or reduction in the number of atoms) $-dN$ in a small time interval $dt$ is proportional to the number of atoms $N$.
So $-\frac{dN}{dt} = kN$. The solution to this differential equation is $N(t)=N(0)e^{-kt}$.
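For instance, the familiar half-life follows directly from this solution: setting $N(t_{1/2}) = N(0)/2$ gives $$ e^{-k t_{1/2}} = \frac{1}{2} \quad\Rightarrow\quad t_{1/2} = \frac{\ln 2}{k}. $$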
So when there is a sufficiently large number of atoms in a sample, their number can be treated as continuous, and a differential equation can be used to solve for the amount of sample remaining.
In other words, after one half-life there is not always exactly half of the atoms remaining because of the randomness in the process. But when there are many identical atoms decaying it is a pretty good approximation to say that half of the atoms remain after one half-life (for large enough numbers of atoms large fluctuations are unlikely to occur).
Answered by mihirb on October 29, 2021