AI Health Friday Roundup - 2023

The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, clinical research, health policy, and more.

A cardinal captured in midflight as it launches from the branch of an ice-covered tree. Image credit: Kevin Cress/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: a couple of big weeks for AI regulation; GPT-4 reveals serious biases in clinical tasks; brain organoids bridged to computer inputs; proposing a network of “assurance labs” for health AI; automated ECG-based tools for risk assessment; assessment framework for eHealth tools; Stanford AI experts look forward to 2024; diagnostic accuracy of large language models; genome of vanished “woolly dogs” decoded; surveys examine state of deep learning; more:


Industrial robot comprising a single large manipulator arm in a brightly lit white space, poised as if preparing to retrieve one of multiple items on numbered shelves arrayed around it. Image credit: Zhenyu Luo/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: applying large language models to robotics; new insights into severe morning sickness; New England Journal reflects on historical medical injustices; clarifying the economics of generative AI; ozone pollution responsible for elevated risk of low birth weight in many LMICs; “productivity paradox” may temper expected benefits of AI in healthcare; prompt injection risks for customized GPTs; surge of article retractions in 2023; much more:


Closeup photo of the polished disk of an optical computer drive with other internal components. Image credit: Patrick Lindberg/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: exploring LLMs’ capacity for inductive reasoning; Google debuts new Gemini LLM; structural racism and lung cancer risk; “passive fatigue” behind virtual meeting burnout; fruit flies suggest approach for generative AI learning; simple attack prompt can make LLMs disgorge sensitive training data; early warning for ovarian cancer; rating LLM trustworthiness; the global warming contributions of a digital pathology deep learning system; much more:


Photograph showing a surveyor’s tripod standing in a grassy field amid rolling terrain. Image credit: Valerie V/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: AI needs to get the lay of the healthcare land; drones beat ambulances for AED delivery; AI stress detection tool stumbles on validation; lessons from COVID misinformation; more worries for screen time and kids; when not just the content but the author is AI-generated; LLMs can’t fix healthcare by themselves; using GPT-4 ADA to cook up a bogus research dataset; adapting quality assurance methods for AI; much more:


Colorful autumn leaves lying in a pile on the ground. Image credit: Jeremy Thomas/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: testing GPT-4’s diagnostic chops; yeast with 50% synthetic genome survives, replicates; roles for AI in clinical trials; role of pets in zoonotic spillover; vaccine status, bias, and perceptions of risk; potential for bias in radiological deep learning models; what rats remember; developing standards for health-related chatbots; how publishing professionals perceive recent changes in social media; much more:


A human figure stands on a hilltop next to a bare tree, silhouetted against a starry sky with the Milky Way galaxy bisecting the view diagonally. Image credit: Vincent Chin/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: more earthly biota than stars in the sky; AI needed to subtract AI-created content; researchers apply cognitive tests to GPT; study highlights bad citations as serious problem for science; mental health resources for LGBTQ+ youth; new therapies needed to counter dengue’s march; bioRxiv uses LLMs to create tailored content summaries from papers; risks of generative AI not evenly distributed; much more:


Extreme closeup photograph of a human eye, with iris and pupil filling most of the field. Image credit: v2osk/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: dissecting the AI executive order; deep learning predicts macular degeneration; history of medical debt; unease over the surveillance campus; social vulnerability, diabetes, and heart health; open access and consolidation in scholarly publishing; AI may require new legal frameworks; diverse datasets needed for training AI; “watermarking” may not work for distinguishing AI-generated content; much more:


Closeup photograph showing a crystal ball perched on a rocky wall or ledge. The blurry landscape below – a river gorge surrounded by green slopes – is refracted, upside down and in focus, in the crystal ball. Image credit: Marc Schulte/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: transparency index for foundation models; upending assumptions about 1918 flu; disparity dashboards considered; fixing drift in image classifiers; COVID trial shows no benefit for vitamin C; Excel data gremlin vanquished; LLMs reveal medical racism, bias when queried; external validation not enough for clinical AI; “data poison” fends off generative AI; NIH changes grant evaluation criteria; much more:


Digital image showing a black screen with white digits (ones and zeros) in long rows. Superimposed in the center is a transparent red heart shape. Image credit: Alexander Sinn/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: digital determinants of health; determining when a pandemic is “over”; despite law, academia and institutions slow to return Native American art and remains; cancer cells siphon mitochondria from T cells; AI deciphers scorched scrolls from Roman ruins; addressing “ecosystem level” bias in AI; who’s legally on the hook when LLMs do harm?; writing grants with ChatGPT; much more:


A woman in a green patterned blouse stands against a whitewashed brick wall. She holds a large red analog wall clock in front of her face with both hands. Image credit: Rodolfo Barreto/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: the hidden influence of chronobiology; AI predicts immune escape; comparing COVID surveillance systems; yet another way to cheat at citations; updating models for ICU algorithm degrades performance; new “cooling” chemicals in cigarettes dodge menthol ban; AI image generator can’t be coaxed away from biased images; stroke deaths poised to rise in coming years; shedding light on AI’s dark corners; much more:


A pair of colorful paper origami cranes, sitting on a tabletop. Image credit: Carolina Garcia Tavizon/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: deep learning predicts variation in proteins; how AI affects clinical productivity; mRNA insights garner Nobel prize; US continues to lose ground in health, life expectancy; sitting is still bad for you; surveying algorithmic bias mitigation; antiracist approaches to clinical documentation; the surveillance and human labor interwoven into AI systems; the LLM hype cycle: peaks and troughs; much more:


Low angle photograph of an illuminated masonry lighthouse against the background of a starry night sky. Image credit: Nathan Jennings/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: lighting an (s)Beacon for genomic data; randomized trials for clinical AI; bees exhibit signs of sentience; scrutiny of AI chip design paper grows; the complexities of statistics vs. AI in medicine; deep brain stimulation for severe depression; worries about AI that sounds too human; tackling clinical conversations with GPT-4; YouTube disinformation videos being served to kids as STEM educational material; much more:


A bright lightning bolt captured against a dark, purplish sky. Image credit: Frankie Lopez/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: harnessing physical processes to power AI; xenograft study in mice sheds light on neuronal destruction in Alzheimer’s; speaking plainly in science; multimodal AI comes to the clinic; aligning AI fairness with medical practice; small-town healthcare imperiled by lack of doctors; GPT enhances consultant productivity and levels skills – with caveats; ableism in computer programming; much more:


A photographic rendering of a smiling face emoji seen through a refractive glass grid, overlaid with a diagram of a neural network. Image credit: Alan Warburton / © BBC / Better Images of AI / CC-BY 4.0

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: navigating multimodal learning in AI; fine particulate pollution and breast cancer; surreptitious ChatGPT use pops up in scientific literature; the challenges of safeguarding generative AI against prompt injection; FDA panel gives thumbs-down to ubiquitous decongestant phenylephrine; study surveys standards for employing AI in journalism; twin study of WWII veterans sheds light on consequences of traumatic brain injuries; much more:


Picture, taken from below, showing a skylight window opening onto blue sky with wispy white clouds. Image credit: Dylan Ferreira/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: biased data offers window onto health equity issues; cancer therapeutics eye AI for drug discovery; testing machines with human exams; unveiling the “hidden curriculum” in medical education; once-vaunted telehealth startup collapses; eye movements combine with other data for early autism diagnosis; government seeks public input on AI and copyright; overemphasis on technology during COVID shutdown may have worsened education inequities; much more:


Nine small images with schematic representations of differently shaped neural networks; behind each network is a human hand making a different gesture. Image credit: Alexa Steinbrück / Better Images of AI / CC-BY 4.0

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: reinforcement learning to align LLMs with human preferences; modeling T cell exhaustion; examining clearance lineages of AI medical devices; writing as medicine for docs; healthcare needs more than current foundation models; watermarking images to spot AI influence; semaglutide tested in heart failure; NCSU researchers automate dragnet for fraudulent robocalls; much more:


A person is illustrated in a warm, cartoon-like style in green, looking up thoughtfully from the bottom left at a large hazard symbol in the middle of the image. The hazard symbol is a bright orange square tilted 45 degrees, with a black-and-white exclamation mark at its center made up of tiny 1s and 0s, like binary code. On the right-hand side of the image, a small character made of lines and circles (like the nodes and edges of a graph) stands with its ‘arms’ and ‘legs’ stretched out and two antennae sticking up, facing off to the right. Image credit: Yasmin Dwiputri & Data Hazards Project / Better Images of AI / CC-BY 4.0

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: how bias emerges in healthcare algorithms; COVID vaccination and reduced maternal-fetal risk; research institutions need to beware predatory publishers; AI enables speech and expression by avatar for paralyzed woman; the protein “unknome” gets a closer look; figuring out what open AI really means; a testing schema for AI consciousness; sharing code helpful, encourages citations – but most authors still don’t share; much more:


Picture of a window with a view of a distant city skyline, taken from some distance back in a darkened room. Image credit: Ed Vázquez/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: transparency for AI-generated content; a critical appraisal of large language models; reconsidering radiation therapy; the future of governance for health AI; sport supplements whiff on truth in labeling; electronic payment charges siphon money from healthcare; focusing on AI’s real dangers; investigation reveals trouble with ethical oversight at French institute; much more:


Llamas grazing in an open field. Image credit: Hugo Kruip/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: Meta debuts Llama 2 large language model; calculating the toll of misdiagnosis; teaching writing in the age of GPTs; geographical concentration in AI industry; transgender youth, social media & mental health; responding to systemic racism in science; ML for extracting data from unstructured EHR records; regulatory implications for medical chatbots; building resiliency for a hotter world; much more:


A young girl wearing a school backpack places her hand in the extended hand of a humanoid robot decked in flowers. Image credit: Andy Kelly/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: a primer on foundation models; genetics and asymptomatic COVID; overblown claims for AI content detection; don’t trust GPT with the baby just yet; AI thirst for data drives interest in synthetic sources; the merits of working (out) for the weekend; physics offers window on sudden heart arrhythmias; tracing developments in press coverage of scientific preprints; expanding vaccination coverage for uninsured adults; much more:


A plastic figure resembling a human sits on a table in front of a laptop in a dark room; long shadows lend the scene a gloomy mood. Image credit: Max Gruber / Better Images of AI / Clickworker 3d-printed / CC-BY 4.0

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: the unseen human costs underpinning popular AI chatbots; oceanic plastic pollution comes in all sizes; neighborhood redlining casts long shadow on health; project eyes AI-assisted texts for health behavior nudges; drowning remains a persistent threat to young children in US; catching up with a flurry of recent AI applications in medicine; big hospital data breach exposes patient names, emails; much more:


Row of large parabolic radio telescope dishes, all pointing in the same direction, against a background of twilit sky. Image credit: Gontran Isnard/Unsplash

AI Health Friday Roundup

In this week’s Duke AI Health Friday Roundup: need for a global AI observatory; humans like GPT-3’s medical information better, regardless of whether it’s true or false; Surgeon General tackles epidemic of loneliness; problems with recency bias in NLP literature; ticks surf static charge to land on hosts; will scholarly publishing be able to cope with AI-generated content?; EHR data, bias, and pragmatic clinical trials; much more:
