AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
February 17, 2023
In today’s Duke AI Health Friday Roundup: The blurriness of large language models; troubling errors crop up in genetic studies; chatbots take center stage; a call for regulating AI now; patching injured rat brains with organoids; racial disparity and ambulance transportation; using AI to help parse animal communication; paper deluge heightens severity of peer review crisis; revisiting chocolate’s health benefits; azithromycin prophylaxis in childbirth; much more:
AI, STATISTICS & DATA SCIENCE
- “…even if a large language model includes only the information we want, there’s still the matter of blurriness. There’s a type of blurriness that is acceptable, which is the re-stating of information in different words. Then there’s the blurriness of outright fabrication, which we consider unacceptable when we’re looking for facts. It’s not clear that it’s technically possible to retain the acceptable kind of blurriness while eliminating the unacceptable kind, but I expect that we’ll find out in the near future.” At the New Yorker, celebrated science fiction author Ted Chiang offers a thought-provoking way of thinking about AI chatbots such as ChatGPT and why they work (or don’t) the way they do.
- “That instrumentation creates a data deluge, and that is where artificial intelligence comes in—because the same natural language processing algorithms that we are using to such great effect in tools such as Google Translate can also be used to detect patterns in nonhuman communication.” An interview with researcher Karen Bakker at Scientific American explores the recent application of AI technology to decoding animal communication.
- “And it’s not just that AI has a fairly spotty track record for taking demos into reliable products; and it’s not just that hallucinatory web search could be dangerous if left to run amok in domains like medicine, it’s that the promises themselves are problematic, when examined from a scientific perspective.” At his Substack, Gary Marcus offers his perspective on the recent public debuts of AI chatbot tools by tech giants like Google and Microsoft – as well as the differences in public reaction to them.
- “We examine the causes and prevalence of levelling down across fairML, and explore possible justifications and criticisms based on philosophical and legal theories of equality and distributive justice, as well as equality law jurisprudence. We find that fairML does not currently engage in the type of measurement, reporting, or analysis necessary to justify levelling down in practice.” A preprint article by Mittelstadt, Wachter, and Russell available at SSRN explores the potential shortcomings of “leveling down” approaches to fairness in AI and proposes some alternatives.
- “The broadest problem on this list, though, is not within the AI products themselves but, rather, concerns the effect they could have on the wider web. In the simplest terms: AI search engines scrape answers from websites. If they don’t push traffic back to these sites, they’ll lose ad revenue. If they lose ad revenue, these sites wither and die. And if they die, there’s no new information to feed the AI. Is that the end of the web? Do we all just pack up and go home?” An article by The Verge’s James Vincent reports on the flurry of demos around chatbot AI tools, including Microsoft’s incorporation of the technology into its search engine and Google’s preview of Bard – and delves into some of the problems attending the real-world use of such technologies.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “The organoids grew slightly larger during this time, gaining new cells and extending wires to link to the rats’ brain cells. The researchers mapped out these new connections using a fluorescent tracer, which revealed that the organoids had successfully connected to the retina through this network of wires. What’s more, the researchers showed the rats visual stimuli — including flashing lights and black and white bars on a screen — and found that their organoids activated in response, as an intact visual cortex would be expected to.” An article by Nicoletta Lanese at LiveScience reports on an experiment that used transplanted brain-tissue “organoids” to repair brain injuries in rats.
- “Byrne and her team say that the nature of many of the errors makes them seem suspicious. They found that some of the reagents that purported to target human genes or genomic sequences had no identifiable targets in the human genome, and that some targeted sequences in other species, such as rodents, plants and fungi.” Nature’s Diana Kwon reports on a new study, currently available as a non-peer-reviewed preprint at bioRxiv, that identifies a number of genetic sequence errors in highly cited papers.
- Once more around this particular block: (a certain kind of) chocolate is good for you! Maybe. NPR’s Allison Aubrey reports on a recent FDA decision, and the evidence behind it, regarding possible health benefits of some cocoa-based products: “In early February, the agency gave a green light to use certain, limited health claims on products made with high-flavanol cocoa powder. But, the agency says there’s not enough evidence to support claims on regular chocolate, the kind most of us consume. Perhaps that’s because some of the more convincing research comes from studies of cocoa flavanol supplements, not candy.”
- “In this multicountry, randomized trial involving pregnant women in labor who were planning a vaginal delivery, azithromycin prophylaxis led to a significantly lower frequency of maternal sepsis or death than placebo but had little effect on stillbirth or neonatal sepsis or death. Maternal deaths were infrequent in both groups; findings were driven by the effects of azithromycin on maternal sepsis.” A research paper by Tita and colleagues, published last week in the New England Journal of Medicine, describes findings from a randomized trial that evaluated the effectiveness of a single dose of a widely used antibiotic during childbirth in preventing sepsis or death for mothers and neonates.
COMMUNICATION, HEALTH EQUITY & POLICY
- “We found meaningful differences in the destination hospitals for White and non-White patients transported by ambulance from locations in the same ZIP code. In half of the studied ZIP codes, at least 8 percent of White patients would have had to be transported to different hospitals to achieve evenness in the transport destinations of White and non-White patients.” A research article published in Health Affairs by Pack and colleagues examines relationships between race and ethnicity and choice of destination for patients being transported by ambulance.
- “Many scientists are increasingly frustrated with journals — Nature among them — that benefit from the unpaid work of reviewing while charging high fees to publish in them or read their content… Yates and others suggest that cash payments would solve the problem, but others say such a system would be unethical and unsustainable. A better solution might be to share the load more widely — with early-career scientists, for instance, or those in less well resourced countries, who are not yet heavily involved — or even to let computer algorithms bear some of the burden.” At Nature, Amber Dance reports on the increasing severity of the shortage of peer reviewers amid a deluge of scientific papers.
- “If the federal government doesn’t start regulating AI companies, it will get a lot worse. Billions of dollars are pouring into AI technology that generates realistic images and text, with essentially no good controls on who generates what…. As we know, it’s much easier to spread outrageous falsehoods than it is to spread the truth. Is this really like the beginning of the internet? Or is this like launching a nuclear bomb on the truth?” In an op-ed for The Hill, Duke professor Cynthia Rudin makes an urgent case for regulation in the domain of AI products, where technology – particularly generative AI – is rapidly outpacing the laws that ostensibly govern it.
- “In this cross-sectional study, we identified an attrition rate among all surgical specialties of 6.9% with an unintended attrition rate of 2.3%. This attrition rate decreased from 5.9% in 2001 to 1.7% in 2018, which may be due to changes in work hour restrictions. We also found a disproportionate rate of attrition and unintended attrition among female and [underrepresented in medicine] residents, particularly Black/African American residents.” A research article published in JAMA Surgery by Haruno and colleagues examines differences by race and sex among surgical specialty residents who leave their training programs, whether by choice or not.