Biology Asked on June 20, 2021
I have developed and validated a modified nucleic acid test (NAT) for SARS-CoV-2 detection using real-time RT-PCR (aka rRT-PCR, aka RT-qPCR). My assay is not for diagnostic use, but for donor screening (the two are similar, but different enough to require independent validations). The PCR step is sensitive enough to detect roughly a single gene copy per reaction, but not always with a sub-40 Ct (which may or may not be an issue).
You see, the protocol is built around the primer/probe sets first published by the US CDC, so I initially adopted the criteria in the CDC EUA protocol, which stipulate a Ct value < 40.00 to classify a target as positive. This means that an amplification with a Ct value of 39.99 is classified as positive, while a nearly identical amplification curve with a Ct of 40.01 is classified as negative. Looking at similar NATs for in vitro diagnostics, most use this same cutoff. It seems fairly common for tests for other pathogens too.
For a little background, using a cutoff like this reduces the potential for subjective classification errors between users by providing a qualitative, binary interpretation of an ostensibly quantitative data readout. The trade-off is that an improperly assigned cutoff can result in systematic classification errors that are universal among all users. Anyone who has used one of the protocols based on the CDC probes has noticed that the two primer/probe sets targeting the SARS-CoV-2 viral genome do not perform equally well. It's common for test results to be classified as "inconclusive" when one viral target is positive but the other is "negative" because of a Ct value just above the Ct 40 cutoff. (I've already tweaked my protocol to reduce this kind of error, but could not eliminate it completely.)
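To make the boundary effect concrete, here is a minimal sketch of the kind of two-target interpretation logic described above, assuming two viral targets (e.g., the CDC N1/N2 sets) and a hard Ct < 40.00 cutoff; the function names and the exact decision rules are illustrative, not the wording of any specific EUA protocol.

```python
from typing import Optional

CT_CUTOFF = 40.00  # the conventional cutoff discussed above

def call_target(ct: Optional[float], cutoff: float = CT_CUTOFF) -> bool:
    """A target is called positive only if it amplified below the cutoff."""
    return ct is not None and ct < cutoff

def interpret(n1_ct: Optional[float], n2_ct: Optional[float]) -> str:
    """Qualitative call from two viral targets (control targets omitted)."""
    calls = [call_target(n1_ct), call_target(n2_ct)]
    if all(calls):
        return "positive"
    if any(calls):
        return "inconclusive"  # one target positive, the other late/negative
    return "negative"

# A Ct of 39.99 vs. 40.01 on the second target flips the overall call:
print(interpret(36.5, 39.99))  # positive
print(interpret(36.5, 40.01))  # inconclusive
```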
As I am writing this up for publication, I’m wondering if it would be better (more honest) to recommend a Ct cutoff that reflects the empirical range of Ct values in my validation studies, as opposed to keeping it at some seemingly arbitrary round number (even if it is almost universal among similar tests). Doing so would not change any of the major findings I report (sensitivity, specificity, etc.), but would change some of the numeric values. More importantly, I think it could help prevent misclassification of late-amplifying samples as negative.
Does anyone know of a biological or technical reason why this cutoff should not exceed a Ct of 40? Maybe this is just a convention that is blindly carried forward, or maybe there is some quirk of TaqMan chemistry justifying this value that I'm simply not aware of.
This is done because at some point you start amplifying nonsense and no longer get a meaningful signal. If (and this is most often not the case) you have ideal doubling of your DNA template in each cycle, you can calculate how little DNA you would have to start with for it to become detectable only around cycle 40 - or, the other way round, how much product you will have amplified and see as a "signal" by then.
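As a back-of-the-envelope sketch of that calculation, assuming ideal doubling (100% efficiency) the copy number after n cycles is N = N0 · 2^n; the numbers below are only illustrative and the detection threshold is not a property of any particular instrument or chemistry.

```python
# Back-of-the-envelope amplification math, assuming per-cycle efficiency E:
# N = N0 * (1 + E)**n, with E = 1.0 meaning perfect doubling.

def copies_after(n_cycles: int, starting_copies: float = 1.0,
                 efficiency: float = 1.0) -> float:
    """Copy number after n cycles starting from starting_copies templates."""
    return starting_copies * (1.0 + efficiency) ** n_cycles

# One starting copy at perfect efficiency has made ~1.1e12 copies by cycle 40:
print(f"{copies_after(40):.3e}")

# The same single copy in a less efficient reaction accumulates far less
# product by cycle 40, which is one way a genuine template ends up with a
# very late Ct:
print(f"{copies_after(40, starting_copies=1, efficiency=0.85):.3e}")
```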
If you have templates that really do consistently come up this late, I would use more DNA (and probably more RNA going into the RT-PCR) until the signal appears at least 2-3 cycles earlier, and then do serial dilutions of your template to see whether the signal is real or not. If it is, you should see a relation between Ct and dilution.
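As a quick sketch of what that relation between Ct and dilution looks like: for a log-linear standard curve, a perfectly efficient reaction shifts Ct by log2(10) ≈ 3.32 cycles per 10-fold dilution. The Ct values below are made up purely for illustration.

```python
import numpy as np

# Hypothetical 10-fold dilution series of a real template
# (made-up Ct values for illustration only).
dilution_factors = np.array([1, 10, 100, 1000])
ct_values = np.array([30.1, 33.4, 36.8, 40.1])

# Fit Ct against log10(dilution); a genuine template gives a roughly linear
# fit with a slope near +3.32 cycles per 10-fold dilution (100% efficiency).
slope, intercept = np.polyfit(np.log10(dilution_factors), ct_values, 1)
efficiency = 10 ** (1.0 / slope) - 1.0  # standard qPCR efficiency estimate

print(f"slope = {slope:.2f} cycles per 10-fold dilution")
print(f"estimated efficiency = {efficiency:.0%}")
# Signal from primer-dimers or other nonspecific products typically shows
# no such dilution-dependent trend.
```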
For this you need a negative control that stays truly negative until your sample signal shows up - which is tricky in its own right, because at some point you will see some kind of primer self-amplification. If this occurs in the same cycle range as your sample, your signal cannot be used. If no other primers can be used for technical reasons, a nested PCR sometimes has to be considered.
And last but not least, all your reagents (including the polymerase) degrade as they are cycled 40 times through the PCR program, which affects the efficiency of the reaction.
Answered by Chris on June 20, 2021