A recent paper entitled “Diagnostic Value and Outcomes of Systematic SARS-CoV-2 Screening in Asymptomatic Patients” highlights the need for innovative, contextual actions in response to positive PCR results during exceptional times of novel pathogen spread. Beyond that, I had a few issues with the paper’s key message, which I’m also worried could easily feed into the anti-science, anti-PCR, trust-breaking, pro-harm cult. That message? PCR testing resulted in an enormous 36.5% rate of “false positives”. True in a maths sense – much like putting two ones side by side = 11 – but it was a choice of framing that stripped out some very important context. Also, a paper whose conclusions rest entirely on how a particular lab tool was applied needs, in my opinion, a lot of detail on that tool. That detail wasn’t there, so the conclusions are shaky. I’ll get into the weeds below.
Background
PCR methods were used globally to classify COVID-19 cases. They were sensitive and fast, and when performed by a professional laboratory in a high-quality setting (with controls and quality assurance, previously covered here), they produced reliable results. No, a positive doesn’t prove infectiousness, but it does prove presence. Worldwide, the urgency and scale of the pandemic meant that, at times, clinical analyses of potential cases were limited, and laboratory results were a trigger for action. In this study, ‘action’ meant that if an asymptomatic person was PCR-positive, they were placed in a ward with other cases, possibly received treatment, or had planned medical procedures postponed or cancelled. The extra sensitivity of PCR was essential at a time when the aim was to limit transmission from infected people, and, in this paper’s case, to catch and prevent spread within a hospital environment.
“36.5% false positives” is just fodder for the cult
Let’s look at the maths they chose to use: 36.5% of the positives either did not repeat or were not subjected to a confirmatory test within 72 hours (the latter, in my opinion, is not a reason to relegate a patient to the negative bin; that is a process failure). I think that figure is easily misinterpreted by a wider audience – those skimming summaries (see examples below) or Googling with poor search phrasing.
A clearer reality
Only 278 of the 42,666 asymptomatic patients tested produced an unrepeatable positive. Yes, 278 of 761 initially positive patients (36.5%) didn’t repeat or weren’t retested within 72 hours, but many more asymptomatic people had to be tested to find those 761 patients.
The proportion of all patients tested who could not be confirmed, or who weren’t tested with a confirmatory assay within 72 hours (🤨), was only 0.7% of the asymptomatic population screened (278 of 42,666).
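To make the denominator game concrete, here’s a quick back-of-the-envelope check using the paper’s headline numbers:

```python
# Back-of-the-envelope check of the two framings, using the paper's numbers.
screened = 42_666        # asymptomatic patients screened
first_positives = 761    # initial PCR positives among them
unconfirmed = 278        # not repeated, or negative on the 72-hour repeat

print(f"Share of positives unconfirmed: {unconfirmed / first_positives:.1%}")  # ~36.5%
print(f"Share of all screened patients: {unconfirmed / screened:.1%}")         # ~0.7%
```

Same 278 patients; the only thing that changes between 36.5% and 0.7% is the denominator you choose.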

Those 278 tested positive on the first screening test and were counted as false positives if they were either negative on the repeat test 3 days later (CT >29, viral load <10,000 copies/mL, or a negative result on a Biofire test of some variety) or were not repeat-tested at all. Can a person be a false positive if they were never retested, when retesting seems to be the workflow? Were clinical decisions being made in the background? It seems that “clinically adjudicated true-positive results” were determined, but how many, and using what criteria, was not presented.
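As I read it, their “false positive” label boils down to something like this sketch (the thresholds are from the paper’s description; the function name and structure are mine):

```python
def labelled_false_positive(repeat_done, repeat_ct=None, viral_load=None,
                            biofire_negative=False):
    """My reading of the paper's rule for an initially positive screen."""
    if not repeat_done:
        return True  # no confirmatory test within 72 h -> binned as "false" (!)
    return ((repeat_ct is not None and repeat_ct > 29)
            or (viral_load is not None and viral_load < 10_000)
            or biofire_negative)
```

That first branch is the one that bothers me: an absent test becomes evidence of falseness.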
The figure you use here to hammer your message – 0.7% or 36.5% – is a choice.
I think scientific paper writers – those who are still human and not AIs, of course – need to get a lot better at imagining how their conclusions can be used by the cult. Findings framed like this feed directly into the “SEE!!!! We were right all along!!” recruitment drive. Yes, anything can be twisted, but why make it so easy for misinformation and disinformation to take hold and spread?


You may have seen some swearing that PCR created a sea of false positives and is responsible for lockdowns. Where are those false positives now, by the way (spoiler: those claims were crap)? PCR – they will yell at you – caused the pandemic. The PLANdemic! is what I believe some of the strange kids call it!!! Search X and Facebook for “false positive PCR” to be served utter BS like that below – if you dare.

The orange text and the red cross are my additions, in case anyone uses this to promote their message instead of mine!
This particular cult also believes PCR was misdiagnosing flu cases as COVID-19 cases (Rubbish. SARS-CoV-2 tests are highly specific for SARS-CoV-2, not flu). It believes there is no evidence that SARS-CoV-2 exists (there is plenty, of course). It believes most detections were false positives (they weren’t, of course). It sometimes claims that laboratory technicians suddenly changed the PCR endpoints. And we all know that PCR can pick up anything given enough cycles (that’s untrue, of course)! It goes on and on and is easily refuted. Unless you believe that there is no truth or things called facts. And some do believe that.
It’s not a false positive if you didn’t do a follow-up test
And yet, this was how their workflow…flowed. It’s bad enough that the discussion makes no allowance for the possibility that some or even many of these late positives were real positives at the tail end of a mild (and now clinically resolved) or asymptomatic infection, in which the natural decline in viral load slips below what even a PCR can detect (CT>40). But to bin a who-knows-how-many (because the paper did not spell this out 🤨) chunk of the group as “false” without providing any supporting evidence really broils my biscuit!
Did they mean a clinical false-positive?
I thought the authors might have been saying that despite PCR positivity, the asymptomatic patients never developed symptoms. But there were no clinical data presented, nor any details of an analysis of the signs and symptoms among the “false positives”.
So in this study, “false positive” hinges on lab testing results – or lack thereof. It’s not the absence of developing signs and symptoms despite testing positive.
Looking at the PCR detail not provided
In the laboratory, we use the term “false positive” to mean a specific thing: a test that was initially positive but, after further testing, was found to be negative, such that the initial result (if it was reported before the further testing was completed) is considered false. Below is the workflow they used to trigger further testing. Clear positives (CT<30) and clear negatives (CT>40) didn’t attract repeat testing.
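In code terms, my reading of that triage looks something like this minimal sketch (the CT bands are from the paper; everything else is mine):

```python
def triage(screening_ct):
    """Sketch of the screening triage as described in the paper."""
    if screening_ct < 30:
        return "clear positive - no repeat testing"
    if screening_ct > 40:
        return "negative - no repeat testing"
    return "equivocal - trigger confirmatory testing"
```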

Did they check repeatability?
Because this paper relies entirely on repeat PCR testing of a subgroup of samples for its key conclusions, I think it would also have been important to show evidence that their tests were reliable when repeat-testing both positives and negatives. Retesting a subgroup alone generates bias. This approach, also called targeted retesting or discordant analysis, doesn’t tell the entire story. What if a bunch of negatives also retest as positive, or some positives repeat as negative? If so, in what proportion? Having, showing, or referring to data on repeatability and reproducibility is essential for understanding test reliability. There were no references to validation data for any of the in-house PCR tests used, no supplementary data to easily address this, and no specific identification of the Biofire or Cobas test instruments and kits – of which there have been multiple versions over time. (🤨)
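To see why retesting only one side of the ledger is a problem, here’s a toy simulation – all the performance numbers below are invented for illustration, not taken from the paper:

```python
import random

random.seed(1)

# Toy simulation of a low-prevalence screened population and an imperfect test.
N, prevalence = 42_666, 0.01
sensitivity, specificity = 0.95, 0.995

def test(truly_infected):
    p = sensitivity if truly_infected else 1 - specificity
    return random.random() < p

people = [random.random() < prevalence for _ in range(N)]
first = [test(t) for t in people]

# Retest ONLY the first-round positives, as in a discordant-analysis workflow:
flipped_pos = sum(1 for t, f in zip(people, first) if f and not test(t))
print("first-round positives that flip on retest:", flipped_pos)

# What this workflow never measures: negatives that would flip to positive.
flipped_neg = sum(1 for t, f in zip(people, first) if not f and test(t))
print("first-round negatives that would flip:", flipped_neg)
```

In this invented example, a comparable number of negatives would also flip on retest – but a positives-only workflow is blind to them, so it can’t tell you how repeatable the assay really is.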
Saliva as a sample type
While saliva was a hot sample type for a while – easier and cheaper to collect, and preferred by patients – others have shown that non-saliva samples produce better (lower) CTs, which, for this paper, might have moved some of these “false positives” into true-positive territory.
Furthermore, saliva does not appear to be a validated sample type for the only test I could see specifically identified: the Xpert Xpress-CoV-2/Flu/RSV-Plus. A quality laboratory will have performed its own in-house validation of the sample type before reporting any human results, but the manufacturer won’t strictly support its test for a sample type it hasn’t shown to work reliably. The other issue with this kit is that it’s intended for symptomatic people; it offers no assurance of reliable performance in asymptomatic people. Again, a good lab will have done the work beforehand to show that the test performs within expectations. Right? (🤨) That data should be provided to reassure readers of the quality of the results used to reach the conclusions they’ve published.

Apples with apples: a screening PCR plus 3 other confirmatory PCRs = mandarins
The SARS-CoV-2 Cobas Test (Roche) was listed for general screening. A modern example of the test and the instrument on which it can be run can be found here, but the paper doesn’t specify which kit version or instrument model was used. (🤨)
The confirmatory tests for patient samples with equivocal first-test results (not clearly positive or negative) included:
- SARS-CoV-2 Biofire
- GeneXpert Xpert Xpress-CoV-2/Flu/RSV-Plus system (3 SARS-CoV-2 gene targets)
- An in-house (not a commercial kit) PCR test
Unfortunately, the authors provided no data to explain:
- The workflow for the use of the different tests – was the choice random, or was there a hierarchy (for example, always Biofire first, then GeneXpert), or just whatever was available wherever?
- Whether the three confirmatory PCRs produced like-for-like threshold cycle (CT) results for the same samples – did a CT 32 on the Cobas = a CT 30 on the Biofire = a CT 32 on the GeneXpert = a CT 32 on the in-house test? And if not, what was the workflow for resolving confusing discrepancies? Those data could be very relevant here!
- Were all the equivocal samples screened and quantified? Did those results agree?
- What was used for quantification, and if it was the in-house test, how was it created for this purpose, and how was it used here?
It was nice to see a cursory acknowledgement of some of the issues around comparing CT values between different PCR tests and platforms. Generally, without extensive prior validation in that lab, such comparison is unreliable because different platforms use different primers, mixes, cycling conditions, and extraction methods. At worst, that variation renders CT results around the cut-off problematic.
These platforms don’t necessarily share the same nucleic acid extraction method, use the same sample aliquot or extract, or use the same volumes of sample or extract. Perhaps you can see why there’s plenty of potential for variability in this “workflow,” which can affect the final PCR result for each test. It’s less of an issue – but still an issue – if you use the same sample aliquot and extraction method, and know the assay through your test validation data. But none of that was explained or even acknowledged. Which, again, matters because PCR performance is the crux of this paper. (🤨)
How do these test method quirks impact the final PCR result? Say we have an initial result at CT 32, which this study classes as equivocal. One confirmatory test may produce a CT of 29 on the same original sample (rendering the equivocal result positive), while another could produce a CT of 34 (rendering it negative). Different test choices for the confirmation could produce different outcomes. These data simply can’t be relied upon, because there’s no way for the reader (me) to trust them without understanding each test and how/why it was applied.
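A minimal sketch of how that plays out, assuming hypothetical systematic CT offsets between platforms (the offsets are invented; the CT<30 “positive” band is from the paper’s workflow):

```python
# Illustration only: how a modest between-platform CT offset can flip an
# equivocal call. The offsets are invented values, not measurements.
initial_ct = 32                        # equivocal on the screening test
platform_offset = {"A": -3, "B": +2}   # hypothetical systematic biases

for platform, offset in platform_offset.items():
    confirm_ct = initial_ct + offset
    call = "positive" if confirm_ct < 30 else "not confirmed"
    print(f"platform {platform}: CT {confirm_ct} -> {call}")
```

Same sample, same patient; the confirmatory verdict depends on which box it landed in.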
Later positives can still have significant meaning
Later positives, let’s say CT >30 but <40, are notorious for flip-flopping around the limit of a test’s sensitivity. That limit – highlighted by the previous example – can vary from test to test and platform to platform. Loading more extract into the real-time RT-PCR, or extracting from a larger sample volume, adds more template to the reverse transcription (RT) reaction. (The RT step precedes PCR when you run RT-PCR to detect RNA viruses such as SARS-CoV-2, RSV, and influenza viruses.) More template generally gives RT-PCR a better start and a lower (better) CT result.
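The underlying arithmetic, assuming near-perfect doubling each cycle: every 2-fold increase in template shifts the CT down by about one cycle, and a 10-fold increase by about 3.3 cycles. A quick sketch:

```python
from math import log2

def ct_shift(fold_increase_in_template, efficiency=1.0):
    """Expected change in CT for a given fold change in input template.

    Assumes exponential amplification with per-cycle efficiency E
    (E = 1.0 means perfect doubling). Negative = earlier (better) CT.
    """
    return -log2(fold_increase_in_template) / log2(1 + efficiency)

print(ct_shift(2))    # ~ -1.0 cycle for doubling the input
print(ct_shift(10))   # ~ -3.32 cycles for 10x more template
```

Around a CT 30–40 cut-off, a few cycles either way is exactly the margin that decides whether a late positive “repeats” or not.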
Just because you can’t repeat a result near or at the limit of a test’s real-world detection capability doesn’t mean it was falsely positive initially. What a commercial kit’s insert says its product can detect can easily go out the window when faced with human samples and routine labs, aka the ‘real world’. An unrepeatable late positive may simply mean the patient was a true positive, but at the end of their infection. It can also mean they are just becoming positive, but that is usually ruled out by clinical surveillance (some seems to have been conducted, but again, no details) and a confirmatory test 48–72 hours later. Usually. If it was done. And if good specimens were collected and handled well each time.
These were exceptional times. This testing occurred when community case numbers for a newly emerged pathogen were climbing in Switzerland, before cases really took off. The risks and harms from the virus were still not altogether clear. It was a time when stopping further transmission, especially in hospitals, was a priority while populations were being vaccinated.
Implications of an unrepeatable late positive
Among the 278 “false positive” patients, 139 (0.3% of the screened asymptomatic patients) were isolated. Versions of hospital-in-the-home (HITH) were one way to manage some of the huge burden of asymptomatic or mild COVID-19 cases during the pandemic; that innovation might also be useful for reducing exposure to true cases. Of those 139:
- 46 (0.1%) were placed in cohort wards and exposed to true cases. No follow-up was provided on whether these people were infected or became ill. Could any of these have been candidates for HITH?
- Elective surgeries or other procedures were cancelled or postponed for 9 (0.02%). Nosocomial infection has a long history, and other respiratory viruses also cause bad outcomes among older patients – and these PCR-positive people were older (median age 66 years).
- Drugs were administered to 70 (0.2%) based on the test results and on symptoms initially thought to match COVID-19; upon reassessment, those symptoms could be ascribed to the patients’ underlying conditions.
None of these things was a good outcome, to be clear. But given the context of the pandemic, I’m unconvinced that these amount to “considerable unintended outcomes associated with false-positive results, which can strain health care systems and adversely affect patient outcomes”. There is room for innovation to reduce these harms, and I’d have liked to read some discussion around that.
Final thoughts
Communication. Authors, you really need to think carefully about how you choose to present your findings. It’s a different world, and results can be, and are being, weaponised to attack science in all sorts of fields. Here, the “false positives” were emphasised. The real message: among 75,667 PCR tests from 42,666 patients, only 278 patients (0.7%) had a result classified as not positive on repeat testing. Given the context, that’s pretty good, I reckon! A less sensitive testing method – like antigen testing – could have flipped this into a completely different paper. An insensitive test would have missed cases, leading to widespread nosocomial transmission in this hospital. The harms of that would have been much greater, I think. Yes, I’m an advocate for PCR, so take that bias on board.
Methodology. I can’t even trust the 0.7% figure. The method details are so sparse that I’m not convinced that the number of false positives is meaningful.
The authors conclude: “These results emphasize the importance of context-driven implementation, in which screening efforts are aligned with epidemiological trends and resource availability”. Agreed. Truly false positives can cause a range of harms. I’d add the development of innovative isolation methods for consideration, to ameliorate those harms.

Most of the testing (60 of 89 weeks) was conducted during periods of high incidence, and that high incidence was also supported by wastewater positivity. To me, that lends weight to the need to ensure your assays are doing what you think they’re doing.
In the adjacent graph, incidence was lower than it would later become, once controls and effort were scaled back.
If the aim is to use testing to limit transmission, the horse has bolted if you start too late. Testing to control the spread from infectious people needs to be in place and working before transmission takes off.
Yes, PCR screening of asymptomatic people detects late-stage, recovering cases. It also detects those with higher viral loads who are well on their way to symptomatic disease. But what you do with that data is key to limiting potentially harmful outcomes. We learned things during the pandemic. One is that it’s possible to further reduce harms and inconvenience while maintaining PCR’s usefulness in detecting all stages of infection.
References
- Higher SARS-CoV-2 detection of oropharyngeal compared with nasopharyngeal or saliva specimen for molecular testing: a multicentre randomised comparative accuracy study. https://thorax.bmj.com/content/78/10/1028
- Comparison of Nasal Swabs, Nasopharyngeal Swabs, and Saliva Samples for the Detection of SARS-CoV-2 and other Respiratory Virus Infections. https://www.annlabmed.org/journal/view.html?doi=10.3343/alm.2023.43.5.434
- Superior effectiveness and acceptability of saliva samples for the detection of SARS-CoV-2 in China. https://www.sciencedirect.com/science/article/pii/S2590053624000296#m0005
- The mechanics of the polymerase chain reaction (PCR)…a primer. https://virologydownunder.com/the-mechanics-of-the-polymerase-chain-reaction-pcr-a-primer/
- Reverse transcription-polymerase chain reaction (RT-PCR)…a primer for virus detection. https://virologydownunder.com/reverse-transcription-polymerase-chain-reaction-rt-pcr-a-primer-for-virus-detection/
- Putting PCR into real-time. https://virologydownunder.com/putting-pcr-into-real-time/
- Worldometer COVID-19 tracking for Switzerland. https://www.worldometers.info/coronavirus/country/switzerland/
