AI Health Friday Roundup - 2023
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, clinical research, health policy, and more.
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: a couple of big weeks for AI regulation; GPT-4 reveals serious biases in clinical tasks; brain organoids bridged to computer inputs; proposing a network of “assurance labs” for health AI; automated ECG-based tools for risk assessment; assessment framework for eHealth tools; Stanford AI experts look forward to 2024; diagnostic accuracy of large language models; genome of vanished “woolly dogs” decoded; surveys examine state of deep learning; much more:
In this week’s Duke AI Health Friday Roundup: applying large language models to robotics; new insights into severe morning sickness; New England Journal reflects on historical medical injustices; clarifying the economics of generative AI; ozone pollution responsible for elevated risk of low birth weight in many LMICs; “productivity paradox” may temper expected benefits of AI in healthcare; prompt injection risks for customized GPTs; surge of article retractions in 2023; much more:
In this week’s Duke AI Health Friday Roundup: exploring LLMs’ capacity for inductive reasoning; Google debuts new Gemini LLM; structural racism and lung cancer risk; “passive fatigue” behind virtual meeting burnout; fruit flies suggest approach for generative AI learning; simple attack prompt can make LLMs disgorge sensitive training data; early warning for ovarian cancer; rating LLM trustworthiness; the global warming contributions of a digital pathology deep learning system; much more:
In this week’s Duke AI Health Friday Roundup: AI needs to get the lay of the healthcare land; drones beat ambulances for AED delivery; AI stress-detection tool stumbles on validation; lessons from COVID misinformation; more worries for screen time and kids; when not just the content but also the author is AI-generated; LLMs can’t fix healthcare by themselves; using GPT-4-ADA to cook up a bogus research dataset; adapting quality assurance methods for AI; much more:
In this week’s Duke AI Health Friday Roundup: testing GPT-4’s diagnostic chops; yeast with 50% synthetic genome survives, replicates; roles for AI in clinical trials; role of pets in zoonotic spillover; vaccine status, bias, and perceptions of risk; potential for bias in radiological deep learning models; what rats remember; developing standards for health-related chatbots; how publishing professionals perceive recent changes in social media; much more:
In this week’s Duke AI Health Friday Roundup: more earthly biota than stars in the sky; AI needed to subtract AI-created content; researchers apply cognitive tests to GPT; study highlights bad citations as serious problem for science; mental health resources for LGBTQ+ youth; new therapies needed to counter dengue’s march; bioRxiv uses LLMs to create tailored content summaries from papers; risks of generative AI not evenly distributed; much more:
In this week’s Duke AI Health Friday Roundup: dissecting the AI executive order; deep learning predicts macular degeneration; history of medical debt; unease over the surveillance campus; social vulnerability, diabetes, and heart health; open access and consolidation in scholarly publishing; AI may require new legal frameworks; diverse datasets needed for training AI; “watermarking” may not work for distinguishing AI-generated content; much more:
In this week’s Duke AI Health Friday Roundup: transparency index for foundation models; upending assumptions about 1918 flu; disparity dashboards considered; fixing drift in image classifiers; COVID trial shows no benefit for vitamin C; Excel data gremlin vanquished; LLMs reveal medical racism, bias when queried; external validation not enough for clinical AI; “data poison” fends off generative AI; NIH changes grant evaluation criteria; much more:
In this week’s Duke AI Health Friday Roundup: digital determinants of health; determining when a pandemic is “over”; despite law, academia and institutions slow to return Native American art and remains; cancer cells siphon mitochondria from T cells; AI deciphers scorched scrolls from Roman ruins; addressing “ecosystem level” bias in AI; who’s legally on the hook when LLMs do harm?; writing grants with ChatGPT; much more:
In this week’s Duke AI Health Friday Roundup: the hidden influence of chronobiology; AI predicts immune escape; comparing COVID surveillance systems; yet another way to cheat at citations; updating models for ICU algorithm degrades performance; new “cooling” chemicals in cigarettes dodge menthol ban; AI image generator can’t be coaxed away from biased images; stroke deaths poised to rise in coming years; shedding light on AI’s dark corners; much more:
In this week’s Duke AI Health Friday Roundup: deep learning predicts variation in proteins; how AI affects clinical productivity; mRNA insights garner Nobel prize; US continues to lose ground in health, life expectancy; sitting is still bad for you; surveying algorithmic bias mitigation; antiracist approaches to clinical documentation; the surveillance and human labor interwoven into AI systems; the LLM hype cycle: peaks and troughs; much more:
In this week’s Duke AI Health Friday Roundup: lighting an (s)Beacon for genomic data; randomized trials for clinical AI; bees exhibit signs of sentience; scrutiny of AI chip design paper grows; the complexities of statistics vs. AI in medicine; deep brain stimulation for severe depression; worries about AI that sounds too human; tackling clinical conversations with GPT-4; YouTube disinformation videos being served to kids as STEM educational material; much more:
In this week’s Duke AI Health Friday Roundup: harnessing physical processes to power AI; xenograft study in mice sheds light on neuronal destruction in Alzheimer’s; speaking plainly in science; multimodal AI comes to the clinic; aligning AI fairness with medical practice; small-town healthcare imperiled by lack of doctors; GPT enhances consultant productivity and levels skills – with caveats; ableism in computer programming; much more:
In this week’s Duke AI Health Friday Roundup: navigating multimodal learning in AI; fine particulate pollution and breast cancer; surreptitious ChatGPT use pops up in scientific literature; the challenges of safeguarding generative AI against prompt injection; FDA panel gives thumbs-down to ubiquitous decongestant phenylephrine; study surveys standards for employing AI in journalism; twin study of WWII veterans sheds light on consequences of traumatic brain injuries; much more:
In this week’s Duke AI Health Friday Roundup: biased data offers window onto health equity issues; cancer therapeutics eye AI for drug discovery; testing machines with human exams; unveiling the “hidden curriculum” in medical education; once-vaunted telehealth startup collapses; eye movements combine with other data for early autism diagnosis; government seeks public input on AI and copyright; overemphasis on technology during COVID shutdown may have worsened education inequities; much more:
In this week’s Duke AI Health Friday Roundup: reinforcement learning to align LLMs with human preferences; modeling T cell exhaustion; examining clearance lineages of AI medical devices; writing as medicine for docs; healthcare needs more than current foundation models; watermarking images to spot AI influence; semaglutide tested in heart failure; NCSU researchers automate dragnet for fraudulent robocalls; much more:
In this week’s Duke AI Health Friday Roundup: how bias emerges in healthcare algorithms; COVID vaccination and reduced maternal-fetal risk; research institutions need to beware predatory publishers; AI enables speech and expression by avatar for paralyzed woman; the protein “unknome” gets a closer look; figuring out what open AI really means; a testing schema for AI consciousness; sharing code helpful, encourages citations – but most authors still don’t share; much more:
In this week’s Duke AI Health Friday Roundup: transparency for AI-generated content; a critical appraisal of large language models; reconsidering radiation therapy; the future of governance for health AI; sport supplements whiff on truth in labeling; electronic payment charges siphon money from healthcare; focusing on AI’s real dangers; investigation reveals trouble with ethical oversight at French institute; much more:
In this week’s Duke AI Health Friday Roundup: Meta debuts Llama 2 large language model; calculating the toll of misdiagnosis; teaching writing in the age of GPTs; geographical concentration in AI industry; transgender youth, social media & mental health; responding to systemic racism in science; ML for extracting data from unstructured EHR records; regulatory implications for medical chatbots; building resiliency for a hotter world; much more:
In this week’s Duke AI Health Friday Roundup: a primer on foundation models; genetics and asymptomatic COVID; overblown claims for AI content detection; don’t trust GPT with the baby just yet; AI thirst for data drives interest in synthetic sources; the merits of working (out) for the weekend; physics offers window on sudden heart arrhythmias; tracing developments in press coverage of scientific preprints; expanding vaccination coverage for uninsured adults; much more:
In this week’s Duke AI Health Friday Roundup: the unseen human costs underpinning popular AI chatbots; oceanic plastic pollution comes in all sizes; neighborhood redlining casts long shadow on health; project eyes AI-assisted texts for health behavior nudges; drowning remains a persistent threat to young children in US; catching up with a flurry of recent AI applications in medicine; big hospital data breach exposes patient names, emails; much more:
In this week’s Duke AI Health Friday Roundup: need for a global AI observatory; humans like GPT-3’s medical information better, regardless of whether it’s true or false; Surgeon General tackles epidemic of loneliness; problems with recency bias in NLP literature; ticks surf static charge to land on hosts; will scholarly publishing be able to cope with AI-generated content?; EHR data, bias, and pragmatic clinical trials; much more: