This study asked the question: "Are the belief in having had COVID-19 infection and actually having had the infectio… https://t.co/7lLO3dqX8G
Serology is not a sensitive way to assess the presence or absence of past infection. There's plenty of evidence tha… https://t.co/1mWVoPE839
See this study: https://t.co/L4QZbORjAR. This means it's not sound to use antibodies to classify the exposure (covi… https://t.co/KzDk0vTUTp
I'd also like to critique the damaging framing here since we're talking about 'belief'. Why is a low accuracy lab t… https://t.co/yE1GBnqtx9
What the responsibility -if any- do medical journals take in wording and framing illnesses that we still lack subst… https://t.co/VNt0XRqguk
Asking again without the typo because this is an important question: What responsibility do medical journals take i… https://t.co/5ip1ntyMbI
Also very strangely, although the authors state that participants were asked if they had a positive PCR/antigen tes… https://t.co/NTAXIW8yww
Perry Wilson commented on this paper and pointed out some limitations:
I think it's time we acknowledge the "Long COVID" problem. New study in @JAMAInternalMed finds Long COVID symptom… https://t.co/B7HSSXIW3g
Preface: Yes, there is a Long COVID syndrome. There are individuals with prolonged symptoms *out of proportion* to… https://t.co/3tHvqgIRuo
We don't know because the case definition of Long COVID is hopelessly vague. @WHO says 2 months of virtually any sy… https://t.co/gpUaH1TYve
This is a problem - these are subjective symptoms that lots of people have due to lots of causes - this impairs our… https://t.co/riucFrJp3M
The @JAMAInternalMed study was a population-based survey of ~26,000 individuals in France during the first wave of t… https://t.co/A0zkZWMEkM
The authors measured serology and "belief" you had COVID separately, creating 4 groups like this. Most people never… https://t.co/Lsen7DVxaN
The kicker of the study is that, more or less across the board, the BELIEF you had COVID was a better predictor of… https://t.co/7GciOT7IRR
The authors don't draw the conclusion that many in the public will - maybe long COVID is all in your head? But th… https://t.co/TgmPbVFlVn
The sensitivity of serologic testing reported in the paper is 87%, the specificity is 97.5% - assuming a true preva… https://t.co/Y2NYv2Hj0x
In other words, more than half of the serologically positive group are likely false positives. That is going to sub… https://t.co/DrjAWJZr6A
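The false-positive arithmetic behind this point can be checked with Bayes' rule. A minimal sketch (the sensitivity and specificity are the figures quoted from the paper; the 2% prevalence is an illustrative assumption, since the exact value used in the thread is truncated):

```python
# Positive predictive value (PPV) of a serology test via Bayes' rule.
# sens/spec are the paper's reported figures; the 2% prevalence is an
# assumption for illustration only.
def ppv(sens, spec, prev):
    true_pos = sens * prev               # P(test positive AND infected)
    false_pos = (1 - spec) * (1 - prev)  # P(test positive AND not infected)
    return true_pos / (true_pos + false_pos)

p = ppv(sens=0.87, spec=0.975, prev=0.02)
print(f"PPV = {p:.1%}; {1 - p:.1%} of positive tests are false positives")
```

With a true prevalence around 2%, the PPV comes out near 42%, i.e. more than half of the serology-positive group would be false positives, as the thread argues.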
In other words, this study could be telling us that, when disease prevalence is low, belief you had COVID really is… https://t.co/eMipIRfOl3
It could also tell us that "long covid" as defined, is too broad - capturing symptoms that occur after a variety of… https://t.co/CcZ17eu8zw
Basically, I worry this study will diminish the urgency to figure out what is happening with long covid. But SO is… https://t.co/ux4tZTvwYl
Labeling someone "Long Covid" due to vague symptoms may be a disservice to everyone. We fail to understand what is… https://t.co/jzDtFg4XGt
This is all discussed in more detail here, feel free to read / comment. (14/n) https://t.co/cbYW889tcV
I think we can do better with Long COVID. Again, it definitely exists. But it's likely that not everyone who is bei… https://t.co/ckbL69Yidt
https://www.youtube.com/watch?v=1OAgD__BWjU
This sensitivity bias among patients seems problematic for data interpretation to me; could the authors address and discuss this issue?
Another methodological issue concerns the questionnaire responses: it is unclear when each answer was given, and whether the serological test result was clearly understood by patients. Did they receive multiple questionnaires? If so, did answers evolve before / after the test results?
How much time passed between the serological test result (between May and December) and the questionnaire (January)? If most serological tests were done in May, most infections occurring after serological testing might be missed and counted as "serology negative"!
Dr David Strain, Prof Kevin McConway, and Dr Jeremy Rossman commented on this study and pointed out other limitations:
https://www.sciencemediacentre.org/expert-reaction-to-study-looking-at-the-association-of-self-reported-covid-19-infection-and-sars-cov-2-serology-test-results-with-persistent-physical-symptoms/
D. Strain, Senior Clinical Lecturer, University of Exeter: “It misses a very simple explanation. Whether the participants had COVID or not, there is no doubt that they were experiencing some illness that they attributed to COVID. There are multiple viral illnesses other than COVID that cause ‘long symptoms’.”
K. McConway, Emeritus Professor of Applied Statistics, The Open University: “This looks as if it’s an interesting study – and indeed it is interesting, but it’s potentially very misleading to take its results at face value.”
"This could be taken to mean that the presence of these symptoms could be more affected by what people believe happened to them than by what previously happened to them as recorded by a test. That does remain a possibility in the light of these results, but it’s very far from being the only possibility."
“One basic issue is that this is an observational study, and it’s never possible to be certain about what causes what in observational studies. While that’s always true in any observational study, there are some characteristics of this one that make it a particularly important issue here.” "What’s more, in this study, the participants already knew the result of their serology (antibody) test when they were asked the questions about whether they thought they had had Covid previously and about their symptoms."
"So they are unlikely to be typical of people in the general French population who would, mostly, not have had an antibody test at all. This study therefore makes it difficult to draw general conclusions about how these things might work in the French population generally (let alone any other population), or to draw any clear conclusions about cause and effect."
"Another issue about the general applicability of the findings is that these results are based on findings from a subset of a large population cohort study, that looks at volunteers aged 18-69 from the French population. I do not know all the details of this cohort study, the CONSTANCES study, but in general people who volunteer for a study like this, which is explicitly about health, is that they tend to be more interested in matters of health than is the population generally. That’s inevitable – but it could be a particular issue when, as here, one is studying something that could possibly depend on people’s attitudes about their own health. Further, the participants in this new piece of research seem not to be particularly typical of the whole CONSTANCES cohort in some ways." "There are also issues about the serology test that was used. "
"they rightly acknowledge that the antibody test might, in some cases, not actually correspond to whether people had really been infected. That’s always the case for any diagnostic test of this sort, because no such test is perfect. The researchers provide arguments to support their belief that this is not in fact a big problem in this study – but errors in diagnostic test results are often counter-intuitive, and I don’t entirely agree with the researchers’ conclusion."
"What they don’t do is look at false positives – with the assumptions they make, there would be rather a lot of false positives, in fact about 644 of them, which would be about four in every ten people who test positive for antibodies."
""I think there could be some issues with the assumptions behind these calculations anyway." "Then, the researchers do not say where their figure of 4% prevalence of previous infection comes from."
""In the people actually included in their statistical analysis, a little under 2% had a positive antibody test result. That’s a much lower figure than you would get with 4% prevalence and the stated performance rates." "this does indicate that the assumptions behind the performance of the test and about possible misclassification of who was previously infected are unlikely to be correct, and that can feed through into the interpretation of the results."
"I’m not trying to trash this research. These things are difficult to study and this new study does provide potentially useful information."
"But I don’t feel that this research can give a clear enough indication of how likely this is." "The conclusions to the research do suggest that the diagnosis and treatment of people who have this kind of symptom should at least consider other possibilities rather than always assuming everything was directly caused by the virus."
J. Rossman, Honorary Senior Lecturer in Virology, University of Kent: "There are several concerns with this analysis."
"First, a serological test for the presence of COVID-19 antibodies is an unreliable marker for previous infection, and some research in hospitalised patients suggests Long COVID patients can tend to have weaker antibody responses".
"In addition, antibody levels diminish over time and the study did not report on the duration between reported infection and the serological test."
"Furthermore, the authors report that only anosmia (or loss of smell) was associated with a positive serological test, but anosmia tends to be one of the shorter duration symptoms of Long COVID and thus, patients still experiencing anosmia may have been more recently infected and thus more likely to have detectable antibody levels."
"“Second, the authors state that having a confirmatory diagnostic test or diagnosis of COVID was only associated with anosmia and not any other Long COVID symptoms. However, in their analysis, the authors compare the likelihood of different Long COVID symptoms in people that believe they had been infected but have not had a diagnostic test with those that believe they were infected and had diagnostic confirmation. Thus, the 15 symptom categories the authors found to be associated with belief in having previously been infected are present in both patients with a positive diagnostic test and those without. If belief, specifically, was driving Long COVID symptoms, then having that belief confirmed would likely increase the odds of having those persistent symptoms. In this study, a confirmatory diagnostic test had no impact on the likelihood of having most Long COVID symptoms, suggesting that belief in having been infected is as accurate as having had a diagnostic test, a result that has been seen in other Long COVID studies."
"[The] conclusion [of the study] reiterates a damaging narrative, implying that Long COVID is a psychological disease and that by taking steps to avoid symptom exacerbation patients are effectively making themselves sick. This is a narrative that Long COVID patients, advocates, physicians and scientist have been struggling to address in many countries around the world. There are multiple studies that demonstrate the presence of persistent physiological symptoms following SARS-CoV-2 infection that are not present in uninfected controls."
This article investigates the association between self-reported COVID-19 infection and results of serology tests with persistent physical symptoms associated with "long Covid". It concludes that "persistent physical symptoms after COVID-19 infection may be associated more with the belief in having been infected with SARS-CoV-2 than with having laboratory-confirmed COVID-19 infection."
The study design suffers from multiple shortcomings, precluding any conclusion.
1. Participants had received their serology test results before answering the questionnaire
("At the time they answered this question, the participants were aware of their serology test results")
The authors do not test the consistency of the replies with this information, and do not have ways to assess whether the serology results have been properly received and understood by the participants. Answers will differ depending on whether the result was understood or not. The questionnaire should have included a question to test the participants' understanding of their serology results. Understanding of the result should have been a covariate in the model.
In a proper design, the participants would have answered the questionnaire before learning their serology results.
2. The authors did not test the consistency of serology results and the results of participant-reported lab results
Participants who believed they had been infected, and only them, were asked whether they had received a lab or physician result. The association between this reported result and the serology is not given in the article, although it could help assess the serological test's sensitivity.
3. The authors did not ask "non-believers" about lab confirmation
Information about the existence of another lab test or physician diagnosis is not available for belief(-) participants.
4. The association of Physician confirmation or lab result with symptoms was only tested for belief(+) participants
This is due to the questionnaire design (point 3), but as a result, the tests lack power. The test is conducted as if all belief(-) were NAs for test results, while some of them may have had lab confirmation that they had not been infected and could have been included in the model.
5. Serology is treated as a 100% true result
Since they have not tested the association with reported results, the authors are unable to assess the sensitivity of their serological test, which is treated in the study as if it were perfect.
In reply to #1 (Alexander Samuel), regarding their last comment: please note that
"Participants who reported having an initial COVID-19 infection only after completing the serology test were excluded."
Based on statistical analysis of a cohort, the authors claim that physical symptoms typically associated with long Covid may result not from infection but from psychosomatic effects or misattribution of unrelated symptoms to Covid infection. Although they use the word “suggest”, their interpretation leaves no doubt: “Two main mechanisms may account for our findings. First, having persistent physical symptoms may have led to the belief in having had COVID-19. […] Second, the belief in having had COVID-19 infection may have increased the likelihood of symptoms.”
This conclusion relies on an erroneous understanding of statistics. In statistics, “non-significant” does not mean that there is no effect: it means that the data are compatible with an absence of effect. But in this case, the confidence intervals make it clear that the data are also compatible with a substantial effect. To give an example, in Table 3, the CI of the odds ratio for dizziness is 0.70-2.88 in model 3 (serology). This means that an absence of relation between serology and dizziness (OR = 1) is possible, but so is an odds ratio of 2, which is very large.
A non-significant result cannot establish an absence of effect, but the analysis may rule out effects of a given size. For example, assuming the methodology is appropriate (see below), Table 3 says that positive serology is unlikely to increase the odds ratio for persistent concentration problems above 1.52. In the data, general prevalence is about 2.5%, so an OR of 1.52 means that prevalence is increased by about 1.3 percentage points. Is this small? In April 2021, an estimated 23% of the French population had been infected, or about 15 million people. Thus, an OR of 1.52, compatible with this paper’s data, would correspond to about 200,000 people with persistent concentration problems due to Covid infection.
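The back-of-envelope calculation above can be reproduced directly; a sketch using the comment's own figures (~2.5% baseline symptom prevalence, OR upper bound 1.52, ~15 million previously infected people in France):

```python
# Convert an odds ratio into an absolute prevalence increase, then
# scale it up to the number of previously infected people.
def prevalence_after_or(base_prev, odds_ratio):
    base_odds = base_prev / (1 - base_prev)
    new_odds = base_odds * odds_ratio
    return new_odds / (1 + new_odds)

base = 0.025                                       # ~2.5% baseline prevalence
excess = prevalence_after_or(base, 1.52) - base    # absolute risk increase
people = excess * 15_000_000                       # ~15 million infected
print(f"excess prevalence {excess:.2%}, ~{people:,.0f} affected people")
```

This yields roughly 190,000 people, consistent with the ~200,000 order of magnitude given in the comment.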
The remarks above address only the interpretation of the results, assuming the methodology is appropriate. But there are important issues with the methodology, which are not correctly taken into account in the discussion. The sensitivity of the serological test is reported to be 87%, but it is treated as if it were perfect. (As a side note, this figure corresponds to the probability that the test is positive given that the antibody level exceeds a certain threshold at the time of testing, and the figure was not obtained for self-administered tests, so the probability of a positive test given infection will be lower than 87%. But this is a minor problem compared to the issues discussed below.)
Importantly, it must be noted that the respondents knew their serological results when responding to the questionnaire. Therefore, the two categories “belief” and “serology” are not independent, which makes it virtually impossible to interpret the results. For example, suppose that the respondent is an ideal Bayesian observer who uses both serology and additional information (e.g. symptoms, PCR) to decide whether there was a previous infection. Since that decision already incorporates the knowledge of serology, a statistical analysis as done in this paper (model 3) would yield no association between serology and symptoms, since all information is already incorporated in the “belief”. This holds even if there is barely any additional information. This lack of association does not reflect psychosomatic effects; on the contrary, it reflects the simple fact that a rational estimate based on serology plus additional information is more reliable than one based on serology alone. Consequently, the results shown in Table 3 are compatible with people simply being rational. In that case, the relevant odds ratios would be those in the column “Belief” of model 3, which are very high. The conclusion would then be the exact opposite of the conclusion of this paper.
What information did respondents base their belief on? We learn that about half of them actually had laboratory-confirmed infection (PCR or serology), which contradicts multiple statements in the paper that assign laboratory-confirmed infection only to positive serology obtained within the study. Another piece of information is obviously the symptoms they experienced, which in another 15% of cases, were confirmed by clinical examination.
Since respondents had more detailed information in addition to the serological result, why couldn’t it be that their assessment is more reliable than the serological result alone? In the discussion, the authors discard this possibility by a quantitative argument based on the 87 % sensitivity of the serological test. This reveals another misunderstanding of probability. Sensitivity is the probability of a positive result given previous infection (more accurately, of a threshold antibody level). But obviously, probabilities change when conditioned to additional information (PCR, symptoms). To take an example, given the high specificity of PCR tests, it is obvious that the probability of infection given both a positive PCR test and a negative serology is much higher than the probability of infection given negative serology alone (which is about 0.5%). The same holds, although presumably to a smaller extent, for symptoms, unless it is considered that clinical examination is totally irrelevant to any medical diagnosis.
Therefore, it cannot be ruled out that those who believed they had been infected despite knowing their negative serology were right to do so. In fact, it seems highly plausible at least for the large proportion who had a positive PCR.
Given these serious shortcomings, it seems difficult to draw any conclusion from the data presented in this paper, even “suggestive”.
The paper relies on correlations between a belief that one was infected by SARS-CoV-2 before, reported COVID-19-like symptoms, and infection confirmed by the Euroimmun serological test. But this test is prone to seroreversion. Examples of this seroreversion:
"At the individual level, 26% (91/354) of the EI-positive cohort was seronegative with the EI test at follow-up (i.e. seroreverted, Table 2)." https://www.clinicalmicrobiologyandinfection.com/article/S1198-743X(21)00371-2/fulltext
"However, while 86/88 (98%) of participants with initially detectable antibodies in the Diasorin assay continued to have detectable antibodies in this assay, numbers were significantly lower for the Euroimmun (73/90 (81%) p 0.0004) [...]." https://www.nature.com/articles/s41598-021-94453-5
"Eight months after their infections, we detected [...] anti-S1 IgG in 40 (69.0%) (p<0.01) (Table 2)." https://wwwnc.cdc.gov/eid/article/27/3/20-4543_article
In other studies, this seroreversion manifests as people with a previous SARS-CoV-2 infection diagnosis (whether by PCR, medical diagnosis, and/or antigen test) testing negative on the Euroimmun assay more than 10 days later. The proportion that later tests seropositive decreases as time extends further from large waves of infection, because people have more time to serorevert. For instance:
"Furthermore, study-specific sensitivity of 0.616 (95% CI 0.475 - 0.740) that takes into account antibody decay over time was estimated based on 133 participants with a self-reported positive SARS-CoV-2 test at least 11 days prior to DBS sampling." https://www.medrxiv.org/content/10.1101/2021.11.22.21266711v1.full
"For stage 1 sixty-two percent (68/110) of those reporting ever having tested positive with a SARS-CoV-2 PCR had a positive antibody test in our sample. [...] For stage 2 80% (90/112) of those reporting ever having tested positive in a SARS-CoV-2 PCR had a positive antibody test." https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3834300
Moreover, if rates of seroreversion vary based on symptom history and with demographic factors like age, then the proportion of the sample that seroreverts can change due to changing non-response bias among particular demographic groups and greater prior testing of asymptomatic individuals:
"The maintained seropositivity of 62% after a median time of 70 days (interquartile range 33-193 days) in our study is lower which may be due to the population sample with more asymptomatic cases." https://www.medrxiv.org/content/10.1101/2021.11.22.21266711v1.full
non-response / selection bias: https://academic.oup.com/jid/article/224/2/188/6292145
seroreversion varying by hospitalization status: https://www.science.org/doi/10.1126/sciadv.abh3409 (figure 5B)
So seroreversion and changes in sampling likely biased the results of this study. This is seen even more clearly when comparing earlier data from this study to later data. A prior analysis of this sample included people from May 4 - June 23, 2020, with 983 out of 14,628 testing positive on Euroimmun. So 6.7% seropositive:
https://academic.oup.com/ije/article/50/5/1458/6323645
Yet the current analysis from May - November 2020 has 1091 seropositive out of 26823, i.e. 4.1% seropositive. The inclusion of samples after June 2020 allowed more time for seroreversion since France's prior COVID-19 peak in April 2020 ( https://covid19.who.int/region/euro/country/fr ), decreasing the overall seropositive rate.
So the seropositive rate decreased as people were included later in the study, which points to seroreversion and/or sampling bias, consistent with prior published work on sampling bias and seroreversion with the Euroimmun assay used in the study. So some of the study's belief+ seronegative people were probably infected, such that prior SARS-CoV-2 infection was a bigger predictor of reported long COVID-19 than the study concluded.
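The implied seropositivity among the later-added participants makes this point concrete. A rough sketch, assuming the earlier analysis sample is a subset of the current one (these are the comment's quoted figures, not a number reported by the paper):

```python
# Seropositivity among participants added after the earlier analysis,
# derived by subtraction. Assumes the early sample (983/14,628) is a
# subset of the full sample (1,091/26,823) -- an illustration only.
early_pos, early_n = 983, 14_628
total_pos, total_n = 1_091, 26_823
late_pos, late_n = total_pos - early_pos, total_n - early_n
print(f"early rate: {early_pos / early_n:.1%}")   # ~6.7%
print(f"late rate:  {late_pos / late_n:.1%}")     # ~0.9%
```

A drop from ~6.7% to under 1% in the later subsample would be consistent with seroreversion and/or changed sampling, as argued above.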
Atomsk's Sanakan
1/L The ideologically-motivated discussion around this paper is *infuriating*, so I thought it deserved a thread.… https://t.co/BHbmC1j5SK
2/L The paper discusses "long COVID", which involves long-term symptoms from COVID-19. Some other helpful discuss… https://t.co/pojD3XhCk9
3/L This thread isn't focused on the biology on long COVID, but instead on the immunology underlying the paper. I… https://t.co/icelJVYMBc
4/L When a virus infects you, your body increases production of proteins known as antibodies, which are usually sp… https://t.co/RiLwiuIYQB
5/L Seroreversion occurs when antibody levels decrease back to below the level the antibody test can detect. At t… https://t.co/v1iQpWMhJ5
6/L The paper used the Euroimmun antibody test to determine who was infected. The sensitivity they give below ass… https://t.co/gmAmzCxjdv
7/L Their paper hinges on correlations between a belief that one was infected by SARS-CoV-2 before, reported COVID… https://t.co/eWSbiDDBCl
8/L Even if every study participant who believed they were infected was actually infected, seroreversion would fal… https://t.co/MQiQdYY360
9/L The Euroimmun test is prone to seroreversion. "antibody levels detected in the Diasorin assay were stable, wh… https://t.co/JHthXK8rMN
10/L It's been well-known for close to a year that the Euroimmun assay has a seroreversion problem. Anyone who w… https://t.co/HjGVa45p8a
11/L The study's prior Euroimmun analysis from May 4 - June 23 had 983/14628 test positive [6.7%]. Their current… https://t.co/UkHfxrD8wa
12/L So people claiming expertise should have thought of seroreversion when the research in question either: 1) u… https://t.co/VFe5AxkqMR
13/L Seroreversion is also basic immunology. I learned it by the time I was a freshman in college. It's covered ag… https://t.co/4VWEuU4vFB
14/L So if you're looking for an excuse to downplay long COVID-19 to suit your ideological agenda, or to make it l… https://t.co/dn3c8ntgtq
15/L To be clear: It's not the study's authors who are infuriating, though it would have helped if they discussed… https://t.co/YXTzOVpZeh
16/L Antibody tests differ in how at risk they are to seroreversion (see parts 5/L, 9/L, + 10/L). So one can't jus… https://t.co/YAzMGfQnuJ
17/L The results from part 11/L could also be due, in part, to selection bias, as follows: As the pandemic progr… https://t.co/s5Irm2bilJ
18/L A national seroprevalence study in Germany further illustrates this point. They too used the Euroimmun assa… https://t.co/NBsbhI27JD
19/L So in combination with the decrease in proportion of reported seropositives (see part 11/L), this means sensi… https://t.co/6W1oNpd411
20/L People are still citing the article as a credible source without noting obvious problems, such as seroreversi… https://t.co/2E3tOfJRxB
"Post COVID-19 in children, adolescents, and adults: results of a matched cohort study including more than 150,000 individuals with COVID-19"
https://www.medrxiv.org/content/10.1101/2021.10.21.21265133v1
There appear to be some numerical inconsistencies in this article and the supplement.
First, in Table 2, over half of the percentages are too high by between 0.1 and 0.3. Furthermore, most of the p values above 0.001 for the chi-square tests are off by one or two units in the last digit. This might seem minor, but it could make one wonder whether some of these results were calculated more than once, perhaps with different overall sample sizes due to some change in exclusion criteria.
Second, the sum of the Belief+ totals in Table 2 (461 + 453), together with the second half of the first paragraph of the Results section (starting two lines before the end of page E3), suggest that the number of people reporting a belief that they had been infected was 914. But in eTable 5, which reports the results of the follow-up question ("Has this [infection that you just mentioned] been confirmed?"), the numbers of participants sum to 1001, and all of the percentages for each response appear to be consistent with a denominator of 1001. Again, the curious reader might wonder about the origin of this discrepancy.
Third, in eTable 7, two models for each reported symptom are presented on each line, with one sample size ("N") for each symptom. It is unclear what this sample size represents. For example, for "Sleep problems" the N is reported as 54, whereas in Table 2 we see that there were 2580 people S-B-, 49 S-B+, 55 S+B-, and 45 S+B+. It is not clear what a single N can represent here, since there are two models and each implies a (presumably different) set of +/- numbers.
Code to reproduce my analysis of Table 2 is available at https://gist.github.com/sTeamTraen/617360320606b3b2c484f9f9a38298c5
It is not clear how the authors arrived at their estimation (in the “Strengths and Limitations” section) of 4% “[o]n the basis of the present results” (emphasis added; that is, this figure is apparently not based on, for example, an estimation by the French public health authorities of the number of people infected during the relevant period) for the prevalence of SARS-CoV-2 infection. The most obvious possibility is that this represents the ratio of 1,091 seropositive tests to 26,823 total participants.
However, as comments #1 and #3 note, the specificity of 97.5% means that over half of those 1,091 positive tests would be false positives, and the true prevalence rate, including false negatives, would be around 2.2%. Between the reported sensitivity and specificity of the test, the observed numbers of each type of test result, and the estimated prevalence rate, something has to give. If the authors’ claim of a prevalence rate of 4% is indeed based on a simple ratio of 1,091 positive tests to 26,823 participants, omitting the false positives, that would appear to be a rather unfortunate oversight.
We thank Dr Lemogne for his comments on the study webpage.
We agree that serology may be a good enough tool to clarify the association between persistent symptoms and previous infection, provided that the cohort is large enough. Indeed, as table 2 in the publication shows, the authors did identify associations between positive serology and 10 categories of persistent symptoms. However, in the other models in support of the conclusion, associations with serology are analysed after individuals are (self-)classified as previously infected. In these models, almost 70% of the "belief-positive" participants had implicit knowledge of a past SARS-CoV-2 infection because they had received a physician diagnosis or a positive lab result (eTable5). In contrast, only 40% of those in this group with a positive serology result were true positives (i.e. participants correctly identified by the serological test, see calculation below). Therefore, even if all of the 30% without an objective reason to assume a past infection were actually not infected, “belief” would still be a better classifier than serology. As a result, in the separate analysis of the participants by “belief”, the rate of previous infection is high within the "belief-positive" and low within the "belief-negative" group. Obviously, in these populations, serology dramatically lacks accuracy to identify subgroups with significantly different infection rates.
In his reply to my comment, Dr Lemogne bases his calculations on an assumed prevalence of previous SARS-CoV-2 infection of 4% – corresponding to the 4% of positive serology results in their cohort. However, with 1,091 seropositives out of 26,823 participants identified by a test with a sensitivity of 87% and a specificity of 97.5%, the correct estimate of the prevalence of previous infection is in fact only 1.86%. Thus, in the study of Matta et al., only 40% of serology tests actually represent truly infected participants (corresponding to 658 false positive test results). With respect to the analysis results, it has to be assumed that the high number of false positives is mainly to be found within the “belief-negative” group (similar to the actual negative serologies). Conversely, the nonconverters (up to one-third of infected individuals, as suggested in my previous comment) are expected to fall mainly into the “positive belief” group (together with the true positives). All this implies that the belief would indeed be a better marker of actual infection history than the serology results.
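The figures in this comment follow from the standard Rogan–Gladen correction for imperfect tests; a sketch reproducing them from the study's reported numbers:

```python
# Rogan-Gladen corrected prevalence from an imperfect test, then the
# implied counts of true and false positives. Inputs are the study's
# reported figures (sens 87%, spec 97.5%, 1,091 positives of 26,823).
sens, spec = 0.87, 0.975
n_pos, n_total = 1_091, 26_823
apparent = n_pos / n_total
prev = (apparent - (1 - spec)) / (sens + spec - 1)   # ~1.86% true prevalence
true_pos = sens * prev * n_total                     # ~433 true positives
false_pos = (1 - spec) * (1 - prev) * n_total        # ~658 false positives
ppv = true_pos / (true_pos + false_pos)              # ~40% of positives are real
print(f"prevalence {prev:.2%}, false positives ~{false_pos:.0f}, PPV {ppv:.0%}")
```

These match the comment's figures: a true prevalence near 1.86%, about 658 false positives, and only ~40% of serology-positive participants truly infected.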
It is therefore expected that persistent symptoms associated with serology are no longer statistically significant in models mutually adjusted with “belief”. Here, only anosmia stands out with a limited odds ratio of 2.72. With its very high unadjusted odds ratio of 15.69, this most specific symptom of Covid-19 basically "survives" the admixture of misclassified cases. For the same reason, anosmia is the only symptom associated with positive serology when restricting the analysis to the "positive belief" group.
We conclude that reporting a past Covid-19 infection would be more reliable than serology for correctly ascertaining cases; indeed, reporting a past infection shows a significant association not only for anosmia but for most of the other self-reported symptoms.
The study uses serology results to determine if any prolonged symptoms are associated with infection. Despite the known limitations of serology tests, they can be useful when comparing positive and negative serologies within a sufficiently large population. And in fact, model 2 shows “a positive serology test result was associated with 10 categories of persistent symptoms”.
But the validity of this identification becomes questionable when the analysis of the participants is separated by “belief”. It seems that misclassification bias was not ruled out for the other serology models (model 3 and following), which only demonstrate an association with anosmia. False positives are in fact uninfected and may be assumed to have the same “belief” distribution as negative serologies, meaning most of them would fall into the “Belief−” group. We might also expect that false negatives (individuals who were in fact previously infected) would fall into the “Belief+” group, especially since 65% of them have confirmation in the form of a clinical diagnosis or a test. Whatever the serological result, the rate of actual infection should be very high among individuals who report having had an infection, and very low among those who do not.
These distributions dramatically reduce the usefulness of serology for identifying an association with actual Covid-19 infection. Are prolonged symptoms really expected to remain statistically significant in the mutually adjusted models? We assume that anosmia is the only symptom to stand out because it is the symptom most specific to Covid-19, but a close review of the data shows that its odds ratio collapses from 15.7 (model 2) to 2.73 (model 3), which is further evidence of misclassification. This in itself represents sufficient bias to question the conclusions.
Evidence that belief is a more reliable indicator of previous infection than serology
We believe that the study data, and in particular the distribution into subgroups, lead to an erroneous conclusion. Indeed, the data may even point to the opposite conclusion.
First, inclusion of those testing positive via serology but not reporting infection is of questionable legitimacy. Prolonged symptoms appear primarily after symptomatic viral episodes but seldom after asymptomatic infections (1).
Second, the number of false negative serologies is estimated using a prevalence of 4%, corresponding to the rate of positive serology results. Instead, prevalence should be back-calculated from the test’s 87% sensitivity and 97.5% specificity so that it is consistent with the observed number of positive tests (1,091). This gives a plausible prevalence rate of 1.86%, 658 false positive serologies and 433 true positives. Thus, only 39.7% of those with positive serology tests would have actually been infected.
However, sensitivity may actually be much lower than stated: levels of SARS-CoV-2 antibodies often undergo rapid decline after 6 to 9 months, which might lower the sensitivity to 61.2% (2,3), and up to one-third of people with SARS-CoV-2 infection do not seroconvert at all (4). This means that sensitivity could be as low as 39%, with as many as 698 false negatives. The false negatives imply a higher prevalence, although the numbers of true and false positives remain almost unchanged, at 449 and 642 respectively.
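The lowered-sensitivity scenario works out as follows. This is the same Rogan–Gladen-style correction, rerun with the 39% sensitivity the comment derives; it is an illustrative check on the quoted figures, not the authors' own calculation.

```python
# Prevalence correction rerun with the lowered sensitivity discussed in the
# comment. All inputs are figures quoted there.
n_total = 26823
n_pos = 1091
sens_low = 0.39    # sensitivity after antibody decline and non-seroconversion
spec = 0.975

apparent = n_pos / n_total
prevalence = (apparent + spec - 1) / (sens_low + spec - 1)

n_infected = prevalence * n_total
true_pos = n_infected * sens_low        # infected who test positive
false_neg = n_infected - true_pos       # infected who test negative
false_pos = (n_total - n_infected) * (1 - spec)

print(round(true_pos))    # ~449
print(round(false_pos))   # ~642
print(round(false_neg))   # ~700, in line with the ~698 quoted
```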
Turning to the distribution within the cohort “believing” that they had Covid-19 (two-thirds of whom, as noted elsewhere, had either a positive test or a medical opinion to support this “belief”): false positives are by definition in fact seronegative, so the expectation is that they will have a distribution similar to negative serologies, of which over 98% are “Belief−”. All 638 seropositive participants who did not report any infection (“Belief−”) would potentially be false positives, meaning they had never been infected. Conversely, true positives would all fall into the “Belief+” category, together with the false negatives that should be grouped with them. Thus, it is possible that the 461 individuals who report an infection despite negative serology were all actually infected.
In summary, the possibility that almost 100% of the "Belief+" group were in fact infected cannot be ruled out. Similarly, up to 100% of the "Belief−" may not have been infected. Given the uncertainty, serology tests do not add to our knowledge over and beyond what the “belief” in having had Covid-19 provides (which, again, was backed up with objective, direct evidence in at least two-thirds of cases). Indeed the "self-reported infection" parameter would thus appear (at least in these cohorts) as potentially a better marker than serology for indicating previous SARS-CoV-2 infection.
Considering anosmia as very specific to Covid-19 leads to a different conclusion
The authors appear to consider anosmia to be the only symptom specific to Covid-19 and claim this to be consistent with anosmia being the only symptom associated with positive serology.
A closer look at the data raises questions. The association of anosmia with the “belief” that one had Covid-19 looks stronger (odds ratio of 28.66) than the association with serology (odds ratio of 15.69), and this gap widens after mutual adjustment, when the odds ratio of anosmia is only 2.72 for serology, compared to 16.37 for “belief”. Thus, anosmia, “a hallmark of Covid-19 infection”, is much more strongly correlated with belief than with serology – throwing the study’s conclusions into question.
This specificity of anosmia implies that the number of affected individuals should be directly correlated to the number of infections. Within the seropositives, the rate of anosmia is 8.8 times higher among those who believe they have been infected ("Belief+" 9.7%) than among those who believe they have not ("Belief–" 1.1%), and the same ratio should be assumed for infection rates. This unbalanced distribution provides a convincing explanation for why prolonged symptoms continue to be associated with “belief” but not with serology after mutual adjustment, and further confirms the misclassification bias raised by Dr Esther Rodriguez Rodriguez using different data from the same study.
Moreover, we can deduce from the rates of anosmia in the “Belief+” population (7.0%) and in the “Serology+” population (4.7%) that the first group contains 1.5 times more infected individuals than the second. In other words, “belief” seems more accurate than serology in identifying Covid-19 infection (as would be expected when two-thirds of the "belief-positive" group had received a physician diagnosis or a positive lab result), and the multiple symptoms other than anosmia associated with “belief” are likely to be consequences of the disease.
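The two rate ratios in this argument follow directly from the percentages quoted from the study tables; an illustrative check:

```python
# Quick check of the two anosmia rate ratios cited above, using only the
# percentages quoted from the study.
rate_belief_pos_seropos = 0.097   # anosmia, "Belief+" within seropositives
rate_belief_neg_seropos = 0.011   # anosmia, "Belief-" within seropositives
ratio_within_seropos = rate_belief_pos_seropos / rate_belief_neg_seropos
print(round(ratio_within_seropos, 1))   # 8.8

rate_belief_pos = 0.070     # anosmia across the whole "Belief+" group
rate_serology_pos = 0.047   # anosmia across the whole "Serology+" group
ratio_groups = rate_belief_pos / rate_serology_pos
print(round(ratio_groups, 1))   # 1.5
```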
It is perhaps relevant that when infection is confirmed by PCR/LFT test or clinical diagnosis (model 7), the point estimates of the odds ratios are greater than 1 for most of the prolonged symptoms. Although the sample size is too limited for these individual estimates to reach conventional levels of statistical significance (i.e., their 95% confidence intervals include 1.0), this cumulative evidence across symptoms is nevertheless suggestive of an association of these prolonged symptoms with Covid-19.
There are several comments on the journal site, but it doesn't seem to be possible to link to them directly.
Editorialised by Nick Brown:
Another comment strikes a blow to the French "belief drives #LongCovid" study in JAMA Internal Medicine. The author… https://t.co/oKSwYg9CJv
The authors—and, even more so, the media outlets that covered the story—used the fact that people who "believed" th… https://t.co/AW1SOw3VLy
But as my earlier comment (RT'd in tweet 1 of this thread) showed, 2/3 of people who "believed" that they had had C… https://t.co/NPPe6Quy6t
My comment also noted that, in contrast, probably over half of the people with a positive serology test never had C… https://t.co/IyF3oPAVqu
The authors of the Matta et al. article discussed false negatives, but not false positives. Up to now, that could h… https://t.co/VDF3e8zk1A
But we now know that the limitations of these "ELISA-S" serology tests—which were done in the context of a general… https://t.co/06eLhPVnKx
That is, the participants were told "You tested positive, but it might very well be a false positive. Do you think… https://t.co/cs3nygk9XV
This information was published in https://t.co/7u0bFADOoU, four months before Matta et al. The two articles have se… https://t.co/gCvxUKwxmv
Indeed, it seems entirely likely that many of the people with an initial positive serology test knew about their su… https://t.co/8QUkNajSZJ
In summary, I think that the authors have some serious questions to answer about how they chose to report their met… https://t.co/VjsdXYoAP2
“Belief” may have been swayed by prejudicial presentation of an additional serological test result
The article by Matta et al classifies participants according to the results of their Covid-19 serological tests against spike protein (ELISA-S). The article, however, does not mention that all positive serologies were supplemented with an in-house microneutralization assay (SN) that detects the presence of neutralizing antibodies against SARS-CoV-2 (1).
Participants were given both their ELISA-S and SN results by December 2020 (2,3), before they were asked whether they thought they had been infected with SARS-CoV-2 or not. Crucially, they were informed, by a laboratory led by one of the Matta et al. authors, that SN would be far more reliable than ELISA-S serologies: a positive SN test would be almost 100% associated with a prior Covid-19 infection, but a negative SN could reveal a false positive ELISA-S, caused by antibodies against other common cold coronaviruses (4). This information is likely to have substantially influenced the opinion of those with a positive ELISA-S serology, such that those with positive SN would be likely to report a Covid-19 infection while those with negative SN would not.
Seven co-authors of the Matta et al. article contributed to a research article (5) based on 16,000 participants of the SAPRIS-SERO study, which showed that only 38% of ELISA-S positives were also SN positives. This proportion is very close to the surprisingly low rate (42%) of positive ELISA-S that report a Covid-19 infection in the Matta et al. study. In order to assess this potential major source of bias, the distribution of SN results for each subcategory “Belief+” and “Belief−” should be provided to readers.
(1) https://www.constances.fr/actualites/2020/Etude-serologie-COVID.php
(2) https://cephb.fr/ceph-SAPRIS-SERO.php
(3) https://www.constances.fr/actualites/2020/Web-conference-COVID19.php
(4) https://www.constances.fr/actualites/2020/Conclusion-serologie-COVID.php
Reassessment of persistent symptoms, self-reported COVID-19 infection and SARS-CoV-2 serology in the SAPRIS-SERO cohort: identifying possible sub-syndromes of Long Covid.
https://www.medrxiv.org/content/10.1101/2022.02.25.22271499v1.full.pdf
Conclusions: There may be three common sub-syndromes of Long Covid, one with persistent anosmia, another with other respiratory tract symptoms and a third, currently under-researched, with symptoms relatable to chronic fatigue. Antibody tests are insufficient for case detection while Long Covid remains poorly understood.
Considering false positives invalidates the conclusions
The corresponding author acknowledged in previous comments that, taking into account the low prevalence, the sensitivity/specificity values, the reduction in antibodies over time, and non-seroconversion, the serology tests used to ascertain past infections would result in high rates of false positives and false negatives. Consequently, the occurrence of symptoms among people with positive serology should be appropriately adjusted before applying logistic regressions. Indeed, false positives are expected to have the same rate of prolonged symptoms as the control group (negative serology and no infection reported).
Based on the specificity of the tests reported in the study, 2.5% of non-infected individuals are expected to test false positive. To estimate the overall proportion of non-infected individuals, we can reasonably take it to be about the same as the proportion of participants whose serology test was negative (i.e., 96% of the total). Multiplying these values gives a global false positive rate of 2.4% which, compared to the 4% rate of positive tests, corresponds to a 60% share (i.e. 655 individuals).
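The false-positive share above can be reproduced in a few lines, using the comment's own rounded figures (4% positive tests, 96% non-infected); this is an illustrative sketch, not the authors' calculation.

```python
# Sketch of the false-positive share computed in this paragraph, using the
# comment's rounded figures.
n_pos = 1091      # positive serology tests in the cohort
spec = 0.975      # reported specificity

frac_noninfected = 0.96   # approximated by the seronegative share
pos_rate = 0.04           # rounded rate of positive tests

fp_rate_global = (1 - spec) * frac_noninfected   # 2.4% of all participants
fp_share = fp_rate_global / pos_rate             # share of positive tests
n_false_pos = fp_share * n_pos

print(f"{fp_share:.0%}")    # 60%
print(round(n_false_pos))   # 655
```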
In model 3, using the conservative assumption that false positives identified above are homogeneously distributed within the “Belief+” (B+) and “Belief–” (B–) groups, we see that along with anosmia, nine other symptoms (fatigue, dizziness, headache, breathing difficulties, palpitations, chest pain, cough, poor attention or concentration and “other symptoms”) are statistically associated with a positive serology test after mutual adjustment (1).
We should also consider the fact that the false positives fall mainly into the "Belief–" group. The in-house microneutralization assay (SN) described in my previous comment makes it possible to estimate the maximum proportion of false positives in that group. These tests were carried out for all positive serologies and have a specificity close to 100% (2). In his answer, the corresponding author stated that, of the 2.5% of participants making up the S+/B– group, 2.3% had a negative SN result. False positives would therefore represent at most 92% of that group. Even at this extreme rate, fatigue and poor attention or concentration remain statistically associated with a positive serology test (1).
An additional correction should be applied to take into account false negatives, expected to fall mainly within the “B+” group, which would result in an even higher number of symptoms associated with serology. These results question the conclusion of the study that only anosmia was associated with a positive serology test after mutual adjustment.
(1) https://osf.io/w7tn9/
(2) https://www.constances.fr/actualites/2020/Conclusion-serologie-COVID.php