Physics Asked by TQM on January 9, 2021
I understand that in an atomic clock a microwave signal is locked to the frequency of an atomic transition. However, if one wants to extract time ticks from the microwave signal:
$v(t) = V_0 [1 + \alpha(t)] \cos[2\pi \nu_0 t + \phi(t)]$,¹
it would seem that we need to assign a tick of our clock either to the moment the signal crosses some threshold or to the moment it reaches a peak ($\frac{dv}{dt} = 0$). However, whichever we choose, the ticks will be susceptible to amplitude or phase fluctuations, respectively. So, what is the standard used in atomic clocks, and why (e.g. perhaps amplitude fluctuations are smaller than phase fluctuations, or vice versa)?
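As a rough first-order sketch of this trade-off (assuming $\alpha(t)$ and $\phi(t)$ vary slowly over one carrier cycle): a tick taken where the signal crosses a threshold $V_{th}$, i.e. at carrier phase $\theta_0$ with $\cos\theta_0 = V_{th}/V_0$, is displaced in time by approximately

$$\delta t \approx \frac{\alpha \cot\theta_0 - \phi}{2\pi \nu_0},$$

so the amplitude term vanishes only at a zero crossing ($V_{th} = 0$, $\theta_0 = \pi/2$), while the phase term remains in either case.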
¹ Enrico's Chart of Phase Noise and Two-Sample Variances, Enrico Rubiola (http://rubiola.org/pdf-static/Enrico's-chart-EFTS.pdf)
In time metrology it is always the zero crossing that is used, as it is independent of amplitude variations. In the past, amplitude noise was typically one to two orders of magnitude worse than phase noise; that is not necessarily true anymore, and a well-designed oscillator will have amplitude noise comparable to its phase noise.
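To illustrate (a minimal numerical sketch with made-up parameters, not anything from an actual clock): ticks can be taken at the positive-going zero crossings, refined by linear interpolation between the two samples that bracket each sign change. Slowly varying amplitude noise then leaves the tick times untouched, while phase noise shifts them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: a 10 MHz carrier sampled at 1 GS/s for 50 cycles.
nu0 = 10e6            # nominal carrier frequency [Hz]
fs = 1e9              # sample rate [Hz]
t = np.arange(0, 50 / nu0, 1 / fs)

alpha = 0.05 * np.sin(2 * np.pi * 50e3 * t)          # slow amplitude noise
phi = np.cumsum(1e-4 * rng.standard_normal(t.size))  # random-walk phase noise

v = (1 + alpha) * np.cos(2 * np.pi * nu0 * t + phi)

# Positive-going zero crossings: sign changes from - to +,
# refined by linear interpolation between the bracketing samples.
i = np.where((v[:-1] < 0) & (v[1:] >= 0))[0]
ticks = t[i] - v[i] / (v[i + 1] - v[i]) * (1 / fs)

# For a clean cosine the k-th positive-going crossing sits at (k + 3/4)/nu0.
# The residuals below come from phi(t) alone: alpha(t) drops out because
# (1 + alpha) * cos(theta) = 0 exactly where cos(theta) = 0.
ideal = (np.arange(ticks.size) + 0.75) / nu0
print("peak tick error [s]:", np.max(np.abs(ticks - ideal)))
```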
And yes, there is a certain uncertainty in using the zero crossings due to phase noise. But this is a fundamental limit: no measurement is free of noise, and thus all measurements carry an uncertainty. The uncertainty of atomic clocks manifests itself as phase noise. Characterizing and evaluating the phase noise properties of atomic clocks is a big part of designing and maintaining them.
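The two-sample (Allan) variance named in the chart cited above is the standard tool for that characterization. As a minimal sketch (illustrative only), the Allan deviation can be computed from time-error samples $x_k$ via second differences of the phase, $\sigma_y^2(\tau) = \langle (x_{k+2} - 2x_{k+1} + x_k)^2 \rangle / (2\tau^2)$:

```python
import numpy as np

def allan_deviation(x, tau0, m):
    """Non-overlapping two-sample (Allan) deviation from phase data.

    x    : time-error samples x_k in seconds, taken every tau0 seconds
    tau0 : sampling interval in seconds
    m    : averaging factor; the result is sigma_y(tau) at tau = m * tau0
    """
    tau = m * tau0
    xk = x[::m]                            # phase at the averaging interval
    d2 = xk[2:] - 2 * xk[1:-1] + xk[:-2]   # second differences of phase
    return np.sqrt(np.mean(d2 ** 2) / (2 * tau ** 2))

# Illustrative self-check: white frequency noise should give
# sigma_y(tau) falling as tau**(-1/2).
rng = np.random.default_rng(1)
tau0 = 1.0
y = 1e-12 * rng.standard_normal(200_000)          # fractional frequency noise
x = np.concatenate(([0.0], np.cumsum(y) * tau0))  # integrate to time error
for m in (1, 10, 100):
    print(f"tau = {m * tau0:5.0f} s  sigma_y = {allan_deviation(x, tau0, m):.2e}")
```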
Answered by Attila Kinali on January 9, 2021