
Collinear factorisation in QCD: why can we multiply probabilities?

Physics Asked by JCW on February 19, 2021

In the collinear factorisation equation for a QCD cross-section, schematically
$$
\sigma_{AB\to X}
=
f^A_a \otimes \hat{\sigma}_{ab\to X} \otimes f^B_b
,
$$

we essentially convolve the probability of producing a final state $X$ from initial-state partons $a$ and $b$ with the probability (more precisely, number) density of finding $a$ in hadron $A$ and $b$ in hadron $B$ with the requisite initial-state momenta, expressed through the parton distribution functions $f^A_a$ and $f^B_b$.
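Written out more explicitly (still schematically, with the sum over parton flavours $a,b$ and the dependence on a factorisation scale $\mu$ made visible), the convolution is

$$
\sigma_{AB\to X}
=
\sum_{a,b} \int_0^1 dx_a \int_0^1 dx_b \,
f^A_a(x_a,\mu^2)\, f^B_b(x_b,\mu^2)\,
\hat{\sigma}_{ab\to X}(x_a P_A, x_b P_B; \mu^2)
,
$$

so the "multiplication of probabilities" is an incoherent integral over parton momentum fractions $x_a, x_b$ rather than a coherent sum of amplitudes.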

Is there an intuitive way of understanding why it’s OK to adopt this almost-classical approach of multiplying probabilities, rather than working with probability amplitudes as is ubiquitous elsewhere in QFT?

2 Answers

Firstly, there is the usual intuitive physical argument, which you are probably aware of and which is also explained in the post by @Ratman. Another version of this same argument is that, because of the asymptotic freedom of QCD and the high energy scale of the process, the partons inside the hadron are approximately free particles moving collinearly to the hadron, since the QCD coupling decreases at higher energy scales. The partons then have approximate momenta $\xi P$, where $\xi \in (0,1)$ and $P$ is the hadron momentum. Hence, when two hadrons collide, it is as if the partons simply scatter off each other, and for the total cross section you just have to take into account the distribution of the partons inside the hadrons, given by the PDFs. (For a toy numerical version of this statement, see the sketch below.)
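Here is a minimal numerical sketch of "convolve the parton distributions with the partonic cross section", with made-up inputs: the PDF shape, the partonic cross section, the collider energy and the threshold below are illustrative placeholders, not physical choices. Real calculations would use fitted PDF sets (e.g. via LHAPDF) and perturbative matrix elements instead.

```python
# Toy illustration of collinear factorisation (all inputs are made up):
#   sigma_AB = sum_{a,b} int dx1 dx2  f_a(x1) f_b(x2)  sigma_hat(x1*x2*s)
from scipy.integrate import dblquad

S = 13000.0 ** 2       # hadronic centre-of-mass energy squared in GeV^2 (illustrative)
SHAT_MIN = 100.0 ** 2  # partonic production threshold in GeV^2 (illustrative)

def toy_pdf(x):
    """Made-up parton density: grows at small x, vanishes as x -> 1."""
    return x ** -0.5 * (1.0 - x) ** 3

def toy_partonic_xsec(shat):
    """Made-up partonic cross section: falls like 1/shat above threshold."""
    return 1.0 / shat if shat > SHAT_MIN else 0.0

def integrand(x2, x1):
    # Incoherent product of number densities and the partonic cross section,
    # evaluated at the partonic invariant mass squared shat = x1*x2*S.
    shat = x1 * x2 * S
    return toy_pdf(x1) * toy_pdf(x2) * toy_partonic_xsec(shat)

# dblquad integrates the inner variable (x2) first, then the outer one (x1).
sigma, err = dblquad(integrand, 0.0, 1.0, lambda x1: 0.0, lambda x1: 1.0)
print(f"toy hadronic cross section = {sigma:.3e} (arbitrary units)")
```

The point of the sketch is only the structure: probabilities (number densities times a partonic cross section) are multiplied and integrated over momentum fractions; no amplitudes are added coherently.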

Now this is a useful but of course very simplified picture. Proving QCD factorization from first principles is quite complicated, but it can be proven for many processes, with the level of rigour customary in physics, and there are a number of different ways to arrive at factorization theorems. The usual way is the operator product expansion; there is also soft-collinear effective theory. But I think the most intuitive way is the perturbative QCD approach, which operates at the level of Feynman graphs, and I will try to summarize it briefly. For a really in-depth treatment, Collins's book "Foundations of Perturbative QCD" is definitely the right source.

Explaining this in a few words is not easy, so you should take what I write as a crude approximation of what is really going on. The argument is, very roughly, that there are certain regions of the loop integrations of the Feynman graphs for the given process which give the leading power in $m/Q$, where $Q$ is the characteristic large scale of the process and $m$ is a small mass scale, of order $\Lambda_{\text{QCD}}$. The crucial point is that these leading regions are precisely those where the lines of the Feynman graph close to the hadron external lines are collinear to the hadron momentum $P$, while the lines close to the interaction vertex carry large virtuality, of order $Q^2$. Then, in these leading integration regions, we can factorise any graph into a convolution of a factor contributing to the hard function, corresponding to the subgraph close to the interaction vertex, and a function, the PDF, corresponding to the subgraph close to the hadron external lines. The convolution integral comes from the loop momentum connecting these two subgraphs. See the following picture for DIS.

[Figure: a DIS graph factorised into a hard subgraph attached to the virtual photon and a collinear (PDF) subgraph attached to the hadron.]
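For DIS this takes the schematic form (up to power corrections, and with normalisation conventions glossed over; $\mu$ is the factorisation scale)

$$
F_2(x,Q^2)
=
\sum_a \int_x^1 \frac{d\xi}{\xi}\,
C_a\!\left(\frac{x}{\xi},\frac{Q^2}{\mu^2},\alpha_s(\mu)\right)
f_{a/H}(\xi,\mu)
+
\mathcal{O}\!\left(\frac{\Lambda_{\text{QCD}}^2}{Q^2}\right)
,
$$

where $C_a$ is the hard coefficient function computed from the hard subgraph, $f_{a/H}$ is the PDF built from the collinear subgraph, and the $\xi$ integral is exactly the convolution left over from the loop momentum connecting the two subgraphs.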

So indeed the Feynman diagrams give an intuitive understanding which goes beyond the level of perturbation theory. The leading contributions correspond to collinear "parton" lines in the Feynman diagrams, reproducing the classical picture. The coefficient function (or hard subgraph) is essentially the short-distance partonic scattering factor, corresponding to the $\hat{\sigma}$ in your equation. The PDFs can be defined as hadronic matrix elements of so-called light-ray operators; as you know, they cannot be calculated in perturbation theory. The number-density interpretation of the PDFs can be justified using light-front quantization, also explained in Collins's book.
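For reference, in one common convention the quark PDF in a hadron $H$ with momentum $P$ is the light-cone correlator (suppressing the Wilson line needed for gauge invariance, and renormalisation)

$$
f_{q/H}(\xi,\mu)
=
\int \frac{dw^-}{2\pi}\, e^{-i\xi P^+ w^-}
\langle P |\, \bar{\psi}(0,w^-,\mathbf{0}_T)\, \tfrac{\gamma^+}{2}\, \psi(0) \,| P \rangle
,
$$

and it is this matrix element that light-front quantization turns into a number density of quarks carrying momentum fraction $\xi$ of $P$.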

Answered by jkb1603 on February 19, 2021

I am not an expert; this is the basic idea I have found so far. Think about the DIS process $l(k)+h(p) \rightarrow l(k')+X(p_X)$, which is conceptually analogous to hadron-hadron collisions. Assume the virtual photon probes the hadron at a scale $Q^2 = -q^2 = -(k-k')^2 \gg \Lambda_{\text{QCD}}^2$, so the interaction between the virtual photon $\gamma^*$ and the parton found inside the hadron is characterized by a timescale $\tau \sim 1/Q$. By contrast, the dynamics inside the hadron, described by the PDF, is characterized by a timescale of order $\tau_{\text{QCD}} \sim 1/\Lambda_{\text{QCD}}$, so $\tau_{\text{QCD}} \gg \tau$. The separation of time scales allows us to say that the two events happen independently: the ordinary QCD dynamics inside the hadron, given its much longer timescale, cannot influence the hard interaction. Sometimes this is even explained by saying that the probe sees a snapshot of the hadron if the energy scale is large enough. So the interaction is treated as happening between free particles. Since the hard interaction is independent, you can multiply the partonic cross section and the PDFs to get the total probability of the process. This is just an intuitive approach; as far as I know there isn't a completely general mathematical proof of this method.
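As a rough numerical illustration (taking $Q = 10~\text{GeV}$ and $\Lambda_{\text{QCD}} \approx 0.2~\text{GeV}$ as example values, with $\hbar c \approx 0.197~\text{GeV·fm}$),

$$
\tau \sim \frac{\hbar}{Q} \approx 0.02~\text{fm}/c
,
\qquad
\tau_{\text{QCD}} \sim \frac{\hbar}{\Lambda_{\text{QCD}}} \approx 1~\text{fm}/c
,
$$

so the hard probe resolves the hadron on a timescale roughly fifty times shorter than that of its internal dynamics.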

Answered by Ratman on February 19, 2021
