TY - JOUR
T1 - Audio-based AI classifiers show no evidence of improved COVID-19 screening over simple symptoms checkers
AU - Coppock, Harry
AU - Nicholson, George
AU - Kiskin, Ivan
AU - Koutra, Vasiliki
AU - Baker, Kieran
AU - Budd, Jobie
AU - Payne, Richard
AU - Karoune, Emma
AU - Hurley, David
AU - Titcomb, Alexander
AU - Egglestone, Sabrina
AU - Tendero Cañadas, Ana
AU - Butler, Lorraine
AU - Jersakova, Radka
AU - Mellor, Jonathon
AU - Patel, Selina
AU - Thornley, Tracey
AU - Diggle, Peter
AU - Richardson, Sylvia
AU - Packham, Josef
AU - Schuller, Björn W.
AU - Pigoli, Davide
AU - Gilmour, Steven
AU - Roberts, Stephen
AU - Holmes, Chris
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024/2
Y1 - 2024/2
N2 - Recent work has reported that respiratory audio-trained AI classifiers can accurately predict SARS-CoV-2 infection status. However, it has not yet been determined whether such model performance is driven by latent audio biomarkers with true causal links to SARS-CoV-2 infection or by confounding effects, such as recruitment bias, present in observational studies. Here we undertake a large-scale study of audio-based AI classifiers as part of the UK government’s pandemic response. We collect a dataset of audio recordings from 67,842 individuals, with linked metadata, of whom 23,514 had positive polymerase chain reaction tests for SARS-CoV-2. In an unadjusted analysis, similar to that in previous works, AI classifiers predict SARS-CoV-2 infection status with high accuracy (ROC–AUC = 0.846 [0.838–0.854]). However, after matching on measured confounders, such as self-reported symptoms, performance is much weaker (ROC–AUC = 0.619 [0.594–0.644]). Upon quantifying the utility of audio-based classifiers in practical settings, we find them to be outperformed by predictions on the basis of user-reported symptoms. We make best-practice recommendations for handling recruitment bias, and for assessing audio-based classifiers by their utility in relevant practical settings. Our work provides insights into the value of AI audio analysis and the importance of study design and treatment of confounders in AI-enabled diagnostics.
UR - http://www.scopus.com/inward/record.url?scp=85184421962&partnerID=8YFLogxK
U2 - 10.1038/s42256-023-00773-8
DO - 10.1038/s42256-023-00773-8
M3 - Article
AN - SCOPUS:85184421962
SN - 2522-5839
VL - 6
SP - 229
EP - 242
JO - Nature Machine Intelligence
JF - Nature Machine Intelligence
IS - 2
ER -