As an internist with formal research training and a career focus on data analysis, I respect the knowledge and insights we can glean from math as much as any physician. As a clinician in Seattle, I’m proud of local groups sharing information generated from such calculations (e.g., of new and recovered cases, basic reproduction number, doubling time) to raise public awareness about COVID-19. As a reader of the Annals, I am glad to see an uptick in virus-related articles, such as a recent study that used publicly reported data to estimate the virus’s 5-day median incubation period (1). Even with everything that remains unknown about the coronavirus, this is the type of information that helps my colleagues and me triage and counsel patients.
However, one thing is clear amid the ongoing pandemic: Even if math itself is universal, the way we use it to talk about solutions to difficult situations doesn’t appear to be.
For instance, a chorus of voices emerged in the early period after the U.S. outbreak, suggesting that concerns were overblown and that there was little reason to panic. A key piece of mathematical evidence used to make the case: that the total number of deaths attributable to COVID-19 paled in comparison to the number attributable to seasonal flu, and most communities still had few to no cases of COVID. As the thinking went, the math didn’t add up to cause major concern.
Although that isn’t factually wrong–there have been fewer COVID-related deaths than influenza-related deaths thus far, and many areas have had few to no diagnosed cases–another group of voices was quick to use those same data to raise counterpoints. Their explanation for the low death count was that the worst was yet to come, particularly given the potential for exponential growth in transmission, and that we should therefore be more, not less, concerned.
Same math, different conclusions.
In this instance, it seems dangerous to use the math to downplay the situation. Although the number of cases has been low in many areas to date (at the time of this writing in mid-March), it’s important to remember the principle that we can’t find what we don’t look for. Because of extreme limitations in testing capacity in the United States, we have a classic “tip of the iceberg” problem in which we see only the cases we have tested for, not the ones we haven’t. Using the number of cases to gauge the severity of COVID-19 is problematic at best and dangerous at worst. Think of the iceberg below the water.
The other issue is the time lag in what we’re looking at. Analyzing COVID-19 data is a bit like studying a star in that the light seen now was actually emitted a while ago. Look no further than the North Star, Polaris, for a salient example. Light from the star takes roughly 430 years to reach Earth, meaning that the light we see on any given night was emitted before we were alive, and that none of us will be alive to see the light actually emitted during our lifetimes.
While much less extreme, a similar lag exists for COVID-19: cases identified now reflect transmission and infection that occurred days or weeks ago.
There are also unintended, off-target effects to consider. The creation of multiple narratives around the same math–ranging from “there is no reason to be concerned” to “we should be very concerned”–can generate informational noise that prevents both the medical community and the public from absorbing key knowledge and guidance, such as that contained in the Annals article and official guidelines. Importantly, the proverbial knife can cut both ways. As noted above, it is problematic to use low case estimates to brush away concerns, and overall, we should exercise more rather than less caution. But the doomsday “back-of-the-napkin” calculations promoted by others also haven’t necessarily helped maintain public trust or prevent mass hysteria. Both approaches can add noise to the situation.
Grappling with narratives around a given statistic can siphon attention away from other important math. As an example, consider how much attention has been placed on the number of test kits, compared with the attention placed on understanding the characteristics of those tests. At a disease prevalence of 0.1%, diagnostic tests with even superb sensitivity and specificity (99%) yield a positive predictive value (PPV) of only 9%. To be clear, the fact that 91% of positive test results would be false positives in this scenario doesn’t undercut the need to expand access to testing kits and create thoughtful testing strategies. But it does spotlight what we might be missing if those strategies are designed without consideration of the math behind test characteristics.
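For readers who want to see the arithmetic, here is a minimal sketch of that calculation using Bayes’ rule with the illustrative numbers above (0.1% prevalence, 99% sensitivity and specificity). The function name and figures are for illustration only; they are not the published characteristics of any particular COVID-19 test.

```python
# Minimal sketch: positive predictive value (PPV) via Bayes' rule,
# using the illustrative figures from the text (not any real test's specs).

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(prevalence=0.001, sensitivity=0.99, specificity=0.99)
print(f"PPV: {ppv:.0%}")  # ~9%; roughly 91% of positive results would be false positives
```

The point of the exercise is simply that when a disease is rare, even an excellent test generates mostly false positives, which is why testing strategy matters as much as testing volume.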
Knowing all of this may not make the challenge of COVID-19 any easier. But recognizing the very human tendency to infuse narrative into math may nonetheless improve our response to the challenge, helping us think and communicate more accurately about the pandemic and improve our approaches to fighting it.
Reference
- Lauer SA, Grantz KH, Bi Q, et al. The incubation period of coronavirus disease 2019 (COVID-19) from publicly reported confirmed cases: estimation and application. Ann Intern Med. 10 March 2020. [Epub ahead of print]. doi:10.7326/M20-0504