AI Health Friday Roundup 2024
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, clinical research, health policy, and more.
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: teaching AI ethics in grade school; checking on where all that training data is coming from; House task force releases AI report; NASEM analysis adds to debate on alcohol; examining the gap between lifespan and ‘healthspan’; COMET trial shines light on treatment for DCIS; proposing ethical guidelines for posthumous authorship; flood of low-quality commentaries engulfing some scholarly journals; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: developing benchmarks for AI performance; tapping the potential of mental health apps; how multidisciplinary is that journal?; updating AI prediction models in healthcare; weighing the benefits of sepsis prediction models; risks of drug overdose death among Medicaid recipients; creating institutional guidelines for the use of health AI; dataset encompasses retracted journal articles; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: DeepMind improves on forecasting accuracy; NASEM unveils report on AI and the future of work; Women’s Health Study looks at 30-year risk; cognitive biases in AI models; bogus papers threaten knowledge synthesis; dark chocolate and type 2 diabetes; the evaporation of knowledge in the digital age; what ‘open’ really means for AI models; the distance to achieving artificial general intelligence; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: are LLMs facing a cliff in performance?; role of digital health in combating antimicrobial resistance; AI’s effects on job markets already being felt; epigenetic changes and obesity; how stress warps memory, feeds anxiety; developing medical AI curricula via Delphi; tech bans and teen mental health; growing strains on scientific publishing; operator guidance can help bystanders in performing CPR; more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: perils of predictive algorithms; soil fungus grooves to white noise; a new angle on Alzheimer disease; dishonest data rampant in online surveys; dispensing with tattooing for radiation therapy; using NLP to extract adverse events in postmarket scenarios; teaching convolutional neural nets to recognize shapes; first-in-human stem cell trials for repairing corneas; going beyond paywalls in ensuring accessibility; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: the promise of AI agents in discovery science; new mechanism for MRSA resistance found; LLMs for matching patients with clinical trials; the science behind cats’ love of tuna; memorization vs reasoning in LLMs; deep phenotyping illuminates sex-based differences in aging; data sleuths dig into dodgy science; weighing different definitions for Alzheimer disease; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: concerns about LLMs clouding patients’ medical records; methylation allows more efficient DNA computing; surveying the impact of generative AI in scholarly publishing; access to chatbots doesn’t improve clinical reasoning; AI transcription tool hallucinates, especially with lots of pauses; cardiovascular risks of yo-yo dieting; cataloguing skills taught in US higher education; effects of screen time on teen mental health; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: AI-boosted cameras to prevent medication errors; systems education for future scientists; LLMs extract quality measures from patient data; fine-tuning persuasion for foundation models; staffing patterns and quality measures; tools target suspicious patterns at scholarly journals; an ethics framework for evaluating AI; the benefits of publishing failures as well as successes; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: framework for aligning AI with the needs of clinicians, patients; even small amounts of synthetic data tied to model collapse; FDA perspective on AI regulations; evaluating satisfaction with AI responses to patient questions; exposure to COVID during pregnancy not associated with later effects on infants; probing the limits of LLM reasoning; revisiting the early days of peer review; tracking scholarly content licensing for AI training; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: evaluation framework for LLMs in healthcare; chemistry, physics Nobels go to AI researchers; large proportions of scientists leaving at 5, 10-year marks; CAR T therapy piloted for autoimmune diseases; AI deployed for sentiment analysis of discussion about GLP-1 agonists; pragmatic clinical trial garners much more data from EHRs and claims than from PROs; worries about buggy code being produced with AI-assisted processes; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: FTC takes aim at misleading AI claims; study suggests greater LLM size and instructability correlate with less reliability; evidence of misconduct surfaces in Alzheimer, Parkinson research; worries that the antimicrobial “bubble” may be about to pop; harnessing AI for better disaster preparedness; increasing numbers of female docs entering high-compensation specialties; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: cardiovascular medicine looks at AI; wildfire smoke exposure exacts significant health tolls; bigger not necessarily better for AI models; preserving knowledge against a rising tide of digital decay; NIH announces funding for genomics learning health systems; big academic publishers face lawsuit; new benchmark for auto-replicating analyses with AI; minority physicians provide care for a disproportionate number of Medicaid recipients; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: the physiology behind “choking” under pressure; LLM classifiers for social determinants of health; wearables log differences in long COVID patients; ORI finalizes updates to research integrity rules; conversations with LLMs reduce adherence to conspiracy theories; Kolmogorov-Arnold networks for explainable AI; LLMs for protein tinkering; the evolution of outboard memory for physicians; pros and cons of AI-powered genomic prediction; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: real-time AI decision support for surgeons; how patient self-advocacy affects medical billing; chatbots for cancer genetic counseling; common dye renders living mice transparent; longitudinal modeling of blood flow to enable digital twins; impact of vaping on exercise tolerance; the importance of clinical validation for AI tools; reexamining race-based clinical algorithms; frontier model LLMs to aid human forecasting; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: AI sorts through chemical libraries to find drug candidates; fibrin-spike protein interaction implicated in COVID inflammation; AI’s artistic prospects; short-circuiting damage from retracted papers; improving clinical trials infrastructure; more insight needed on AI implementation experiences; sifting climate policy with machine learning; phages hitch a ride on tiny worms; testing ChatGPT’s performance as source of information for patients; large research teams put some scientists at career disadvantage; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: confidence, nonsense and medical questions; framework assesses health AI cost-effectiveness; computational approach sheds light on music theory; ‘AI scientist’ cranks out papers by itself; heat-related deaths in US trend upward; fierce competition for AI expertise; evaluating clinical trial papers for trustworthiness; using health AI in conflict zones; black market for paper citations; JAMA publishes draft guidance on inclusive language; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: engineering vs cognitive science in AI; TACT2 reports results on chelation study in CVD, diabetes; a registry for health AI; trying to avoid past missteps with health tech evaluation; AI decision support and human cognition; database captures spectrum of risks from AI; safeguarding children’s digital determinants of health; scrutinizing cancer screening and health disparities; screening human microbiome for potential antimicrobials; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: parsing the implications of implementing PREVENT cardiovascular risk equations; AI-generated training set causes model to melt down into nonsense; tracking the “expert gaze” to evaluate AI decision support; making time for researchers to think; framework addresses algorithmic bias for nurses; AI image detectors stumble at flagging faked Western blot images; spotting spin in systematic reviews; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: cat moves to top of pet H-index leaderboard; AI predicts rogue waves at sea; comparing preventive strategies for HIV; AI-assisted MRI scans for breast cancer; adding context for clinical trial demographics; pursuing equity in AI; checking back in on AI self-regulation, one year later; learning to spot predatory research conferences; caution needed when using point-and-click sample size generators; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: language completeness and LLM capabilities; open foundation models, risk, and the value chain; CAR-T for autoimmune diseases; editing the gut microbiome in vivo; international medical graduates face roadblocks in US; what should publishers do when a paper is retracted?; nurses “collaborate” with LLMs; the role of academic medical centers in health AI adoption and oversight; Human Genome Project under the microscope; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: what we mean when we talk about AI; links between race and air pollution mortality; concerns over lack of access to cardiology, obstetric care; ChatGPT for literature reviews; an LLM tailored for mobile use; FTC turns critical eye on PBMs; coping with the mental health crisis in academic science; AI supremacy for certain tasks may be more vulnerable than thought; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: the value of clinical humility in the face of uncertainty; implications of generative AI for robotics; refining risk prediction for binary outcomes; managing retractions and the implications for information literacy; the limits of what scaling can achieve in AI; regulating AI with existing frameworks; haystack summarization and performance for large language models; lessons from China on managing hypertension; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: AI for decoding dog barks; Surgeon General designates firearm violence as public health crisis; where next for AlphaFold?; coming to grips with AI’s water use; randomized trial evaluates extended-release ketamine for depression; parsing the implications of vaccine exemption rates; forking paths in statistics; Coalition for Health AI releases assurance standards guide; LLMs for analyzing radiology reports; how AI will impact workforces; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: scientific literature being flooded with bogus publications; LLMs can help screen for trial participants; lingering questions about H5N1 transmission; trial tests walking for lower back pain; interpretable deep learning model helps docs scan EEGs; why you shouldn’t cite chatbots; LLM translates neglected languages; AI challenges for global governance; critiquing (and defending) Medicare Advantage; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: countering hype around AI for discovery science; CO2 levels associated with risk from airborne disease; disproportionate effects of Medicaid disenrollment; a path forward for AI in nursing; references to nonexistent cell lines reveal tracks of paper mill publications; medicine stares down challenges of heart disease in coming years; deciding whether a “frictionless” experience in instruction is actually desirable; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: sizing up GPT-4 with retrieval-augmented generation; progress in stretchable RF antennas; sociotechnical frameworks for AI; creating “assembloids” of organoids to explore complex biological systems; evaluating clinical text datasets for LLM training; legal and ethical challenges for using LLMs in medicine; steps toward tackling replicability problems in scientific research; the imperative for informing trial participants about results; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: an ML-enabled intervention provides nudges for hard medical conversations; informational “inoculation” for misinformation; understanding what LLMs can and can’t do; power dynamics affect medical care; assessing the impacts of race-based adjustments for lung function; quantum internet marks another milestone; links between race, environmental pollution, and Alzheimer disease; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: the persistence of bias in large language models; genomic study sheds light on mammalian adaptations; failure to publish code with recent AlphaFold paper irks scientists; questioning LLMs’ value proposition; application flags papers discussed on PubPeer; the hidden human expenses of cost-sharing in healthcare; questioning whether generative AIs are ready for primetime in patient care; a late foray into alchemy; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: lessons for health AI from self-driving vehicles; monoclonal antibody for malaria prevention; intense pressures on foreign residency applicants; large language models offer second opinions; poor quality dogs some patient-facing materials; PCAST releases report on AI for science and research; mapping patterns of research misconduct in the literature; new improvements to AlphaFold debut; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: foundation models for reading echocardiograms; FDA weighs in on lab-developed tests; telling human from AI in conference abstracts; NIST addresses generative AI; USPSTF revises age recommendations for mammogram screenings; Dana-Farber describes institutional rollout of GPT for staff; how some drugs “hijack” brain’s reward circuits; ensuring publication integrity in the age of AI; good advice for responding to peer review; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: WHO debuts healthcare chatbot; eyeing US preparations to counter avian flu; video games for the phylogenetic win; the importance of evidence-based approaches to smoking cessation; AI-assisted email associated with some benefits for docs, but saving time is not one of them; AI sets its sights on modern battlefields (and haunts some old ones); surprising results from studies of medical debt relief; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: TRIPOD-AI statement covers best practices for reporting on AI prediction models; taking stock of real-world effectiveness of RSV vaccination; totting up the balance sheet for generative AI; the importance of inclusivity in design decisions; imputing missing covariate data; AI hunts down source of metastatic cells; multiple nations gear up to battle smoking and vaping; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: new pretraining approach allows LLMs to cite sources; Light Collective releases draft guidance on AI rights for patients; NYC government chatbot delivers dubious advice; study evaluates precision medicine approach in pediatric cancers; weighing up AI X-risk; new analyses cast doubt on DIANA fMRI technique; counting the full data costs of zero-shot learning for multimodal generative AIs; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: benchmarking LLMs for extracting oncology data from charts; greenery and mental well-being; LLMs get around information asymmetry; tiny artificial liver shows promise for treating liver failure without transplantation; network analysis reveals fraudulent “paper mills”; turning a skeptical eye on LLM performance on bar exams; the serious health impacts of loneliness; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: White House issues policies for AI use by federal agencies; NEJM AI requires registration of interventional AI studies; 3D specimen imaging project reaches finish line; insights into the human immune system, courtesy of COVID; using “digital twins” in biomedical research; hallucinated software packages get called by real computer code; who should be responsible for policing integrity in scientific publication?; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: a framework for human labor in AI; the global health risks of air pollution; dermatology database seeks to overcome skin color bias in previous datasets; using generative AI for science communication; LLMs being used to generate peer reviews; the effects of digital redlining; AI-generated images used in engagement farming and scams; predicting underlying text from ground-truth embeddings; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: implementing generative AI in healthcare; landmark study looks at health consequences of microplastics; using AI to distill summaries from patient discharge notes; scientific misconduct haunts Alzheimer research; foundation models on the cutting edge of biological discovery; lean budget times may be ahead for research agencies; study flags bias regarding use of GPT in hiring decisions; plagiarism in peer review; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: toward generalist medical AI; limited benefit for rapid respiratory virus testing in ED; why successful health AI is more than algorithms; discussion paper examines AI impacts for Black community; epithelial organoids cultivated from stem cells in amniotic fluid; Coalition for Health AI debuts as nonprofit, announces leadership; Alzheimer disease biomarkers present long before clinical diagnosis; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: configuring health AI for human benefit; criticism erupts over figure in All of Us paper; risk assessment for open foundation models; deprecated authorship practices still common in life sciences; flagging cross-task inconsistency in unified models; promising findings for treating food allergies; gene duplication implicated in antimicrobial resistance; adding up generative AI’s environmental tab; using ChatGPT to evaluate research articles; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: few-shot learning powers drug interaction model; how LLMs pick up new skills (and why it matters); gene-swapped bananas are bulwark against fungal foe; new papers build on trove of NIH All of Us genetic data; parsing recently dismissed lawsuit over EHR data; pressure builds for definitive path on AI regulation; training language models to build proteins; a really big PDF; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: the toll of digital disconnection; teaching LLMs to mimic doctors’ cognitive approaches; prosthetic allows user to sense temperature; a benchmark for LLMs designed to diagnose rare diseases; bibliometric analysis shows lack of clarity regarding genAI use in scientific publishing; LLMs can autonomously hack websites; regulatory frameworks for thinking about AI; the lasting epigenetic effects of smoking; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: Department of Commerce announces debut of US AI Safety Institute Consortium; AI literature may be facing its own replication crisis; where to next for public health?; FDA eyes bias in pulse oximetry; California legislators propose new AI regulations; AI benchmarks easily perturbed; PLOS looks back on four years of open peer review; Google makes its Gemini AI available for some products and customers; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: how AI is reshaping society; NLP evaluation experiments reveal flaws; new study illuminates the reason why insects circle streetlights; AI automation of jobs may proceed gradually; cardiologists call for better collection of SOGIE data; survey examines AI governance; sizeable proportion of dementia cases may be due to liver dysfunction; sifting EHR data for diseases transmitted via transfusion; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: the importance of sharing imaging data; NSF debuts National AI Research Resource; microbe genomes give up food preferences; groundswell gathers against ‘paper mills’; AMIE boasts high performance as conversational medical AI; how AI may change liability; new model for how error correction works in brains; dodging dataset shifts; NASEM recommends training on social media impacts for healthcare providers; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: new ML approach boosts geometry problem-solving; GPU architecture allows LLM eavesdropping; “anthrobots” suggest future therapeutic possibilities; new kind of AI bias identified; biological retinas inspire improvements in computer color vision; paper mills branching out into bribery; UK Post Office software disaster offers AI lessons; how AI tools could reshape organizations; many docs unfamiliar with how FDA evaluates devices; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: quantum computing plus LLMs; the case for zero-shot translation in scientific LLM applications; FTC warns model-as-a-service companies to toe the line on privacy; semaglutide use associated with reduced suicidal ideation; series examines developing, validating clinical prediction models; using LLMs to surface social determinants of health; FDA warns over declining vaccination rates; predatory publishing in medical education; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: chatbots and Borgesian Babel; dogs are good for your health; chatbot errs in diagnosing pediatric conditions; assurance labs for health AI; digital apps for contact tracing; “Coscientist” AI shows research chops; health impacts motivate people to address racial disparities; new class of antibiotics debuts against resistant A. baumannii; wearables for depressive disorders; meeting a new paradigm for data sharing; much more: