In today’s Duke AI Health Friday Roundup: the ‘uncanny valley’ of AI applications; EHR data powers early autism screening; deer may be serving as reservoir for COVID; study delineates acute effects of diesel exhaust on human brains; FDA announces reorganization around food oversight; AI assistance makes its way to the patient bedside; wearable trackers to provide digital biomarkers for progression of Duchenne muscular dystrophy; much more:
AI, STATISTICS & DATA SCIENCE
- “In this diagnostic study of an autism screening test, EHR-based autism detection achieved clinically meaningful accuracy by age 30 days, improving by age 1 year. This automated approach could be integrated with caregiver surveys to improve the accuracy of early autism screening.” In an article published this week in JAMA Network Open, Matthew Engelhard and a group of researchers from Duke and Johns Hopkins describe the development of a predictive model to identify autism in children younger than 1 year of age based on data available in electronic health records.
- “…we pursue here the conception of ethomic biomarkers for human clinical use. We hypothesized that AI could fundamentally change disease characterization, develop objectively quantifiable readouts and thereby reduce necessary cohort sample size and time-to-endpoint.” A research article by Ricotti and colleagues, published in Nature Medicine, describes the use of a full-body system of wearable trackers, combined with AI analysis, to develop digital biomarkers that could help predict the progression of Duchenne muscular dystrophy.
- “Here we describe ProGen, a language model that can generate protein sequences with a predictable function across large protein families, akin to generating grammatically and semantically correct natural language sentences on diverse topics.” In a research article published in Nature Biotechnology, Madani and colleagues report on the use of a type of AI known as a large language model to predict protein sequences.
- “…there remains a dire need to reduce bias and promote privacy and security in the application of medical AI. Privacy AI computing is starting to take off with the use of federated and swarm learning, as well as with the increasing application of edge computing, which uses algorithms fully operating on the smartphone. In 2023, these strategies will be explored more fully, in a much-needed effort to not only fully investigate the potential for AI in health and medicine but also to address its potential flaws and pitfalls.” In an article for Wired Magazine, cardiologist and clinical researcher Eric Topol explores some of the possible applications for AI systems in routine clinical care – along with some important caveats.
- “Our study, with its state-of-the-art model performance and interpretable results, suggests that a deep learning-based model can have a relevant role in endometrial cancer diagnostics. The identification of morpho-molecular correlates and their impact on prognosis advances the evidence towards building an improved risk stratification system in endometrial cancer and unifying the molecular-driven and morphology-driven classification systems.” A research article published in Lancet Digital Health by Fremond and colleagues describes an interpretable deep learning model designed to predict classification of endometrial cancer based on interpretation of slide images.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “Previous studies of deer have suggested humans have repeatedly introduced the coronavirus into white-tailed deer populations in the United States and Canada and that deer can spread the virus to one another. Scientists are not sure how people are passing the virus to deer, but they have speculated that it might happen when people feed deer or deer encounter human trash or waste.” Oh, deer: The New York Times’ Emily Anthes reports on concerns that the SARS-CoV-2 virus may have found a natural reservoir in white-tailed deer – one from which new variants could emerge to (re)infect humans.
- “During the peak of the last ice age, known as the Last Glacial Maximum, the low sea levels exposed a vast land area that extended between Siberia and Alaska known as Beringia, which included the Bering Land Bridge. In its place today is a passage of water known as the Bering Strait, which connects the Pacific and Arctic Oceans…Based on records of estimated global temperature and sea level, scientists thought the Bering Land Bridge emerged around 70,000 years ago, long before the Last Glacial Maximum. But the new data show that sea levels became low enough for the land bridge to appear only 35,700 years ago.” A news post on the National Science Foundation website summarizes recent research suggesting that the Bering Land Bridge between the Asian and North American continents emerged from the water later than previously believed, putting a tighter set of brackets around the likely arrival of humans in the Americas.
- “Our estimates of future power sector generation material requirements across a wide range of climate-energy scenarios highlight the need for greatly expanded production of certain commodities. However, we find that geological reserves should suffice to meet anticipated needs, and we also project climate impacts associated with the extraction and processing of these commodities to be marginal.” A study by Wang and colleagues published in Joule offers projections for future demand of materials, including rare earth metals, to accommodate evolving needs for power generation under different sets of assumptions.
- “Our study provides the first evidence in humans, from a controlled experiment, of altered brain network connectivity acutely induced by air pollution. The use of this model is important because it is not subject to potential confounding by variables correlated to exposure, a vexing concern common to observational studies.” A study published in BMC Environmental Health by Gawryluk and colleagues finds that even brief exposure to diesel exhaust can have marked effects on aspects of brain function in humans.
COMMUNICATION, HEALTH EQUITY & POLICY
- “Automation is a double-edged sword, and there is a kind of uncanny valley. We know perfectly well not to trust lousy systems; and wouldn’t need to pay attention to truly reliable systems, but the closer they get to perfect, the easier it is for mere mortals to space out. The CNET editors probably took a quick look at the large-language model generated prose, and thought it looked good enough; ‘complacency and over-trust’ to their own detriment.” In an article available at his Substack, Gary Marcus considers the ramifications of CNET’s decision to use the large language model AI known as ChatGPT to write articles for the website and offers some thoughts on why having human judgment in the loop was still insufficient to prevent a spate of erroneous articles from being auto-published.
- “The changes aim to straighten out a convoluted leadership structure. The FDA oversees human and veterinary drugs and medical devices, along with much of the U.S. food supply. The Agriculture Department also oversees some food products.” PBS Newshour reports on recent announcements that the US Food and Drug Administration would be making substantial organizational changes to the agency’s division in charge of overseeing food safety and nutrition – changes that include the creation of a new deputy commissioner post overseeing the division.
- “The ranking of universities and colleges at the national and global level is a well-known dubious practice. Flawed methodologies generate distorted and inaccurate profiles of these institutions. Yet, rankings have remained a popular and trusted measure of “the best” by the public.” Science editor in chief Holden Thorp weighs in on recent refusals by a number of prominent schools to provide the data that feed US News and World Report’s annual ranking of US colleges.
- “This is the first time the agency has sought to permanently ban a company from sharing customers’ health data with third parties for advertising purposes. In previous enforcement actions, the FTC has instead taken a more reserved approach, requiring companies to reform their practices and meet the clear and informed consent standard.” The Markup’s Todd Feathers reports on the Federal Trade Commission’s recent request for a court order to stop online prescription drug company GoodRx from sharing customer data – a move that marks a ratcheting up of consumer data protections.