AI Health Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, clinical research, health policy, and more.
In this week’s Duke AI Health Friday Roundup: exploring LLMs’ capacity for inductive reasoning; Google debuts new Gemini LLM; structural racism and lung cancer risk; “passive fatigue” behind virtual meeting burnout; fruit flies suggest approach for generative AI learning; simple attack prompt can make LLMs disgorge sensitive training data; early warning for ovarian cancer; rating LLM trustworthiness; the global warming contributions of a digital pathology deep learning system; much more:
In this week’s Duke AI Health Friday Roundup: AI needs to get the lay of the healthcare land; drones beat ambulances for AED delivery; AI stress validation tool stumbles on validation; lessons from COVID misinformation; more worries for screen time and kids; when not just the content but the author is AI-generated; LLMs can’t fix healthcare by themselves; using GPT-4-ADA to cook up bogus research dataset; adapting quality assurance methods for AI; much more:
In this week’s Duke AI Health Friday Roundup: testing GPT-4’s diagnostic chops; yeast with 50% synthetic genome survives, replicates; roles for AI in clinical trials; role of pets in zoonotic spillover; vaccine status, bias, and perceptions of risk; potential for bias in radiological deep learning models; what rats remember; developing standards for health-related chatbots; how publishing professionals perceive recent changes in social media; much more:
In this week’s Duke AI Health Friday Roundup: more earthly biota than stars in the sky; AI needed to subtract AI-created content; researchers apply cognitive tests to GPT; study highlights bad citations as serious problem for science; mental health resources for LGBTQ+ youth; new therapies needed to counter dengue’s march; bioRxiv uses LLMs to create tailored content summaries from papers; risks of generative AI not evenly distributed; much more:
In this week’s Duke AI Health Friday Roundup: dissecting the AI executive order; deep learning predicts macular degeneration; history of medical debt; unease over the surveillance campus; social vulnerability, diabetes, and heart health; open access and consolidation in scholarly publishing; AI may require new legal frameworks; diverse datasets needed for training AI; “watermarking” may not work for distinguishing AI-generated content; much more:
In this week’s Duke AI Health Friday Roundup: transparency index for foundation models; upending assumptions about 1918 flu; disparity dashboards considered; fixing drift in image classifiers; COVID trial shows no benefit for vitamin C; Excel data gremlin vanquished; LLMs reveal medical racism, bias when queried; external validation not enough for clinical AI; “data poison” fends off generative AI; NIH changes grant evaluation criteria; much more:
In this week’s Duke AI Health Friday Roundup: digital determinants of health; determining when a pandemic is “over”; despite law, academia and institutions slow to return Native American art and remains; cancer cells siphon mitochondria from T cells; AI deciphers scorched scrolls from Roman ruins; addressing “ecosystem level” bias in AI; who’s legally on the hook when LLMs do harm?; writing grants with ChatGPT; much more:
In this week’s Duke AI Health Friday Roundup: the hidden influence of chronobiology; AI predicts immune escape; comparing COVID surveillance systems; yet another way to cheat at citations; updating models for ICU algorithm degrades performance; new “cooling” chemicals in cigarettes dodge menthol ban; AI image generator can’t be coaxed away from biased images; stroke deaths poised to rise in coming years; shedding light on AI’s dark corners; much more:
In this week’s Duke AI Health Friday Roundup: deep learning predicts variation in proteins; how AI affects clinical productivity; mRNA insights garner Nobel prize; US continues to lose ground in health, life expectancy; sitting is still bad for you; surveying algorithmic bias mitigation; antiracist approaches to clinical documentation; the surveillance and human labor interwoven into AI systems; the LLM hype cycle: peaks and troughs; much more:
In this week’s Duke AI Health Friday Roundup: lighting an (s)Beacon for genomic data; randomized trials for clinical AI; bees exhibit signs of sentience; scrutiny of AI chip design paper grows; the complexities of statistics vs. AI in medicine; deep brain stimulation for severe depression; worries about AI that sounds too human; tackling clinical conversations with GPT-4; YouTube disinformation videos being served to kids as STEM educational material; much more:
In this week’s Duke AI Health Friday Roundup: harnessing physical processes to power AI; xenograft study in mice sheds light on neuronal destruction in Alzheimer’s; speaking plainly in science; multimodal AI comes to the clinic; aligning AI fairness with medical practice; small-town healthcare imperiled by lack of doctors; GPT enhances consultant productivity and levels skills – with caveats; ableism in computer programming; much more:
In this week’s Duke AI Health Friday Roundup: navigating multimodal learning in AI; fine particulate pollution and breast cancer; surreptitious ChatGPT use pops up in scientific literature; the challenges of safeguarding generative AI against prompt injection; FDA panel gives thumbs-down to ubiquitous decongestant phenylephrine; study surveys standards for employing AI in journalism; twin study of WWII veterans sheds light on consequences of traumatic brain injuries; much more:
In this week’s Duke AI Health Friday Roundup: biased data offers window onto health equity issues; cancer therapeutics eye AI for drug discovery; testing machines with human exams; unveiling the “hidden curriculum” in medical education; once-vaunted telehealth startup collapses; eye movements combine with other data for early autism diagnosis; government seeks public input on AI and copyright; overemphasis on technology during COVID shutdown may have worsened education inequities; much more:
In this week’s Duke AI Health Friday Roundup: reinforcement learning to align LLMs with human preferences; modeling T cell exhaustion; examining clearance lineages of AI medical devices; writing as medicine for docs; healthcare needs more than current foundation models; watermarking images to spot AI influence; semaglutide tested in heart failure; NCSU researchers automate dragnet for fraudulent robocalls; much more:
In this week’s Duke AI Health Friday Roundup: how bias emerges in healthcare algorithms; COVID vaccination and reduced maternal-fetal risk; research institutions need to beware predatory publishers; AI enables speech and expression by avatar for paralyzed woman; the protein “unknome” gets a closer look; figuring out what open AI really means; a testing schema for AI consciousness; sharing code helpful, encourages citations – but most authors still don’t share; much more:
In this week’s Duke AI Health Friday Roundup: transparency for AI-generated content; a critical appraisal of large language models; reconsidering radiation therapy; the future of governance for health AI; sport supplements whiff on truth in labeling; electronic payment charges siphon money from healthcare; focusing on AI’s real dangers; investigation reveals trouble with ethical oversight at French institute; much more:
In this week’s Duke AI Health Friday Roundup: Meta debuts Llama 2 large language model; calculating the toll of misdiagnosis; teaching writing in the age of GPTs; geographical concentration in AI industry; transgender youth, social media & mental health; responding to systemic racism in science; ML for extracting data from unstructured EHR records; regulatory implications for medical chatbots; building resiliency for a hotter world; much more:
In this week’s Duke AI Health Friday Roundup: a primer on foundation models; genetics and asymptomatic COVID; overblown claims for AI content detection; don’t trust GPT with the baby just yet; AI thirst for data drives interest in synthetic sources; the merits of working (out) for the weekend; physics offers window on sudden heart arrhythmias; tracing developments in press coverage of scientific preprints; expanding vaccination coverage for uninsured adults; much more:
In this week’s Duke AI Health Friday Roundup: the unseen human costs underpinning popular AI chatbots; oceanic plastic pollution comes in all sizes; neighborhood redlining casts long shadow on health; project eyes AI-assisted texts for health behavior nudges; drowning remains a persistent threat to young children in US; catching up with a flurry of recent AI applications in medicine; big hospital data breach exposes patient names, emails; much more:
In this week’s Duke AI Health Friday Roundup: need for a global AI observatory; humans like GPT-3’s medical information better, regardless of whether it’s true or false; Surgeon General tackles epidemic of loneliness; problems with recency bias in NLP literature; ticks surf static charge to land on hosts; will scholarly publishing be able to cope with AI-generated content?; EHR data, bias, and pragmatic clinical trials; much more:
In this week’s Duke AI Health Friday Roundup: reflecting on the future of AI; successful creation of model “human embryoid” raises challenging questions; transparency reporting for generative AI; health imperiled by excessive heat; internet already feeling the strain of AI-generated junk content; chemo shortage becoming acute; a health AI code of conduct; disappointing report card for internet privacy protections for kids; leveraging social networks for contact tracing; much more:
In this week’s Duke AI Health Friday Roundup: nurses’ judgment vs the algorithm; mapping mutations in primate and human genomes; GPT-4 tackles differential diagnosis; the maternal death crisis threatening Black women; AI for drug design; ventilation as public-health priority; cut-and-paste errors proliferate in EHRs; aspirin and anemia in older adults; applying Ubuntu to AI ethics; informed consent in psychedelic research; cognitive impairments bring financial peril for elderly; much more:
In this week’s Duke AI Health Friday Roundup: helping kids navigate the AI landscape; transformer models can tackle a myriad of predictive clinical applications; the surprising health burdens of noise; doctors, scientists receiving elevated levels of online harassment; large quantities of LLM-generated text swamp Amazon’s mTurk platform; digital media survey shows shifts toward image- and video-centric social media; how AI image generators can supercharge existing societal bias; much more:
In today’s Duke AI Health Friday Roundup: probing the limits of transformer models; the merits of visual explanations in healthcare; potential for racial and ethnic bias in algorithmic healthcare tools; gratitude ceremonies for donated bodies; AI targets antibiotic candidate for resistant pathogen; postdoc pipeline for life sciences in danger of drying up; climate change may narrow range of livable area on Earth for millions; much more:
In this week’s Duke AI Health Roundup: does explainable AI help with decision-making?; “skeletal editing” opens new doors in chemistry; watching out for persuasive language in science; Charles Babbage and the use of data for control; hospital CT scanner illuminates hidden manuscripts; ChatGPT exceeds brief, writes fiction in court filing; many older patients use health portals; conference eyes growing threats from misinformation; much more:
In this week’s Duke AI Health Friday Roundup: large language models and evidence-based medicine; “digital bridge” restores mobility after paralysis; impact of legislation on access to gender-affirming care; switching endpoints common in clinical trials, reporting less so; prospects for generative AI in medicine; publishing credits join college entrance arms race; Surgeon General addresses concerns about social media and youth mental health; much more:
In this week’s Duke AI Health Friday Roundup: neural nets help robots find their place in the world; two studies underscore the toll of inequity on health, wealth; Google presents results from Med-PaLM 2; study puts price tag on manuscript formatting; GPT-4 task: explain GPT-2; CRISPR screening helps identify potential antidote for deadly mushrooms; how small can a language model go (and still make sense)?; ChatGPT goes to college; much more:
In this week’s Duke AI Health Friday Roundup: what the medical world still doesn’t understand about AI; meeting the needs of small vulnerable newborns; embracing failure in science; Google goes big on generative AI; machine learning for hunting shipwrecks in Thunder Bay; regression vs ML for breast cancer prognostication; data availability statements often disappoint; EU committees signal move toward stronger data privacy; much more:
In this week’s Duke AI Health Friday Roundup: Duke’s Cynthia Rudin on transparent AI; Black Americans face “cardiology deserts”; marked jump in proportion of youth emergency visits for mental health reasons; considering AI for infectious disease surveillance; problems emerge with nation’s organ donation systems; share of US oncology clinics owned by private equity grows; language may affect musical perceptions; much more:
In today’s Duke AI Health Friday Roundup: study reveals bumpy path to integrating AI into clinical care; revisiting questions about credit for DNA discovery; asking ChatGPT to help with your research; graphene “tattoo” pacemaker tested in rats; model uses EHR data to predict hospitalization risks for kids with complicated health issues; transplanting pancreatic islet cells to control diabetes; testing a video explainer for patients who can benefit from ICDs; much more:
In today’s Duke AI Health Friday Roundup: does AI need a body for real understanding?; GPT-4 and Epic join hands; Duke scientists achieve imaging breakthrough; tech companies behind health data “gold rush”; pros and cons of owning an emergency defibrillator; the case for independent, open-source AI; imaging journal editors resign over publication charges; associations between COVID and development of diabetes; much more:
In today’s Duke AI Health Friday Roundup: reports from AI research groups stress governance, ethical issues; studies examine genetic lineages of lung cancer; cross-site tracking of hospital patients nearly ubiquitous; newly discovered form of archaea found in ocean mud puzzles, delights scientists; chatbots and the coming of AI-generated “grey goo”; FDA revises safety warning for opioids; plastic waste: the new geology; warning labels for data collection risks; much more:
In today’s Duke AI Health Friday Roundup: Coalition for Health AI releases blueprint for trustworthy AI; stark differences in US-UK life expectancy; US youth mortality rises, and COVID is not the only driver; socioeconomic stigma among PhD students; open letter calls for “pause” on AI development; brain research adds to understanding of Alzheimer’s among Black patients; transgender persons may face increased healthcare costs, reduced access; taking stock of brain-computer interfaces; much more:
In today’s Duke AI Health Friday Roundup: guiding patients through the digital care maze; human-centered design for AI; dissecting the Internet Archive ruling; demanding safeguards for biometric data; new organ donation rules create winners and losers; chatbots and a theory of mind; global equity in research collaborations; FDA issues guidances on AI, cybersecurity; US stillbirth rate remains high; trying to understand AI risks without being able to see under the hood; much more:
In today’s Duke AI Health Friday Roundup: simple strategies for countering bias in large language models; debate swirls about new childhood obesity guidelines; pumping the brakes on AI; genetics of dogs living near Chernobyl’s ruins; researchers intrigued by GPT-4 but want more info; military aviators, groundcrew at heightened risk for some cancers; White House releases equitable data report; how to work with a data commons; finding collaborators across academic medical centers; much more:
In today’s Duke AI Health Friday Roundup: GPT-4 AI debut wows users, raises questions; US maternal deaths continue to climb; early lower-respiratory tract infections have long-term consequences; transformer model predicts hundreds of millions of protein structures; revisiting the evidence for masking; portable MRIs possible; transparency practices in AI; benefits and burdens of open-access publishing; building healthy campuses; much more:
In today’s Duke AI Health Friday Roundup: securing AI against hidden backdoors; reining in the size of generative AI models; implications of new obesity drugs; NC governor proposes more funding for mental health; US “deaths of despair” during COVID pandemic; coerced citation in medical literature; how UC Davis achieves med school diversity; protecting rainforests can also shield against future pandemics; artificial sweetener associated with cardiovascular risks; much more:
In today’s Duke AI Health Friday Roundup: clinical vs general large language models for medical NLP; Lilly announces cost caps for insulin products; dangers of ‘algorithmic paternalism’; parental social support and mental health of LGBTQ kids; scoring system for housing help may be adding to inequity; rural hospitals see loss of obstetric/maternity services; working to improve health literacy and fighting misinformation; much more:
In today’s Duke AI Health Friday Roundup: Bing AI chatbot’s churlishness surprises, alarms users; RCT of high-dose ivermectin for COVID shows no benefit for symptom length, hospitalization; “style cloaks” for art confound generative AIs; vascular surgery practices at Kansas VA draw scrutiny; data brokers are trafficking in sensitive health data; a skeptical perspective on chatbots’ prospects in education; 8 days a week needed for complying with clinical practice guidelines; much more:
In today’s Duke AI Health Friday Roundup: The blurriness of large language models; troubling errors crop up in genetic studies; chatbots take center stage; a call for regulating AI now; patching injured rat brains with organoids; racial disparity and ambulance transportation; using AI to help parse animal communication; paper deluge heightens severity of peer review crisis; revisiting chocolate’s health benefits; azithromycin prophylaxis in childbirth; much more:
In today’s Duke AI Health Friday Roundup: examining future prospects for large language model chatbots; Scandinavian study evaluates myocarditis outcomes; Black and Hispanic dialysis patients at greater risk for infections; FDA issues guidance for external controls; “jailbreak” prompting technique overrides chatbot’s ethical brakes; global agricultural use of antibiotics much higher than previously thought; closing the gap on building a culture of open research; much more:
In today’s Duke AI Health Friday Roundup: the uncanny valley of AI applications; EHR data powers early autism screening; deer may be serving as reservoir for COVID; study delineates acute effects of diesel exhaust on human brains; FDA announces reorganization around food oversight; AI assistance makes its way to the patient bedside; wearable trackers to provide digital biomarkers for progression of Duchenne muscular dystrophy; much more:
In today’s Duke AI Health Friday Roundup: heart failure outcomes worst for rural Black men; looking forward to the future of clinical trials; stroke risk algorithms perform worse for Black patients than for white patients; avian flu spreads at mink farm; drug manufacturing lapses harm young leukemia patients; trial will assess AI for lung cancer risk prediction; the effect of Twitter tumult on scholarly publishing; much more:
In today’s Duke AI Health Friday Roundup: looking ahead to clinical trials for 2023; algorithm solves shortest-path problem for graphs with negative edge weights; study finds widespread PFAS contamination in freshwater fish; relatively few hospitals compliant with federal price transparency mandates; ChatGPT creates artificial abstracts that pass scientific review; injection drug use fuels rise in endocarditis; Getty Images sues Stability AI over image scraping; much more:
In today’s Duke AI Health Friday Roundup: developers’ roles in building ethical AI; median US prices for new drugs top $200K in 2022; whole-parasite malaria vaccine tested; the “political economy” of misinformation; are AIs breaking copyright laws?; AI, clinical decision-making, and risks to LGBTQ patients; inpatient adverse events still common, many preventable; much more:
In today’s Duke AI Health Friday Roundup: using machine learning to create “synthetic” x-rays and answer medical questions; the continuing evolution of gender-affirming care; how glass frogs manage their disappearing act; validating survival models; going beyond Tuskegee when examining medical racism; how the scholarly community deals with paywalled papers; much more:
In today’s Duke AI Health Roundup: tools for disaggregating large datasets for bias evaluation; global deaths from COVID may be substantially undercounted; NIH proposes streamlining peer review for grants; engaging with AI issues is important – for everyone; enormous potential for mRNA platforms; “smart bandages” for healing, monitoring wounds; some telehealth companies funnel data to tech and social media companies; much more:
In today’s Duke AI Health Friday Roundup: transferring skills between robots; NHLBI report scrutinizes social determinants of health in atrial fibrillation; kicking the tires on ChatGPT; making pulse oximeters work for everyone; considering race & ethnicity in medical school admissions; national database will track nonfatal opioid overdoses; testing the generalizability of a kidney injury model; researchers buckling under administrative burdens; much more:
In today’s Duke AI Health Friday Roundup: large language model gets pushback from scientists; competition among viruses may blunt effects of feared winter “tripledemic”; oversight for machine learning software in healthcare; fruit fly connectome resembles machine learning architectures; “evidence map” for maternal health risk factors; the importance of trust between patients and physicians; much more:
In today’s Duke AI Health Friday Roundup: new report envisions transformed digital ecosystem; refinement in neuromorphic chip design may open new frontiers in AI; March of Dimes report card shows worsening rates of preterm birth in US; potential complications for Twitter via EU GDPR regulations; in-utero enzyme replacement therapy for Pompe disease; totting up 8 years of predatory publishing onslaught; applying regulatory science for better medical AI; much more:
In today’s Duke AI Health Friday Roundup: Meta language model predicts proteins; Paxlovid and long COVID risks; innate immune system offers possibilities for Alzheimer’s approaches; lawsuit sends ripples through the world of generative AI; harnessing value-based payment for health equity; teasing out the implications of OSTP publications access policy; pitfalls of using demographic data to promote algorithmic fairness; much more:
In today’s Duke AI Health Friday Roundup: graph neural networks to describe galaxy evolution; bias in risk prediction models; CDC issues new opioid guidelines; app for exploring bias in AI-generated images; realizing biology’s potential for this century; how Wikipedia citations can affect impact of journal articles; giving an eel an MRI; the current state of medical malpractice law; much more:
In today’s Duke AI Health Friday Roundup: time series classification for sensor data; most US maternal deaths are preventable; confronting stigmatizing language about substance use; digital repository houses wealth of 3-D specimen scans; user evaluations for explainable AI systems; retooling research funding mechanisms; the immunological reverberations of the Black Death pandemic; much more:
In today’s Duke AI Health Friday Roundup: the invisible work underpinning AI; meta-research study reveals unexplained variance; dish of neurons learns to play Pong; toolkits for ameliorating AI bias; results from Moderna vaccine trial in kids; inequities in internet access; confronting racism in the culture of science; factors behind steep US life expectancy declines; AI translators for spoken language; much more:
In today’s Duke AI Health Friday Roundup: modeling the perfect cup of joe; new regulations grant patients more control over health data; how COVID slips past cellular defenses; rates of physician burnout climb; what not to do in designing biomarker studies; misinformation’s threat to health cybersecurity; a human rights framework for AI; atrial fibrillation and use of DOACs in disadvantaged neighborhoods; interventions to reduce partisan animosity; much more:
In today’s Duke AI Health Friday Roundup: White House OSTP releases “AI Bill of Rights”; Pääbo wins Physiology/Medicine Nobel for paleogenomics; COVID lessons and coming pandemics; messaging as public health tool; postdoc pipeline slows to a trickle; systematic review finds paucity of randomized trials of machine learning interventions; the limits of mental health chatbots; common pitfalls of AI journalism; much more:
In today’s Duke AI Health Friday Roundup: transformer neural networks mimic the human hippocampus; NIH undertakes to ID function for every human gene; FDA releases new guidance for health AI; “nanorattles” shine a light on cancer detection; the impact of elite universities on hiring for US faculty; light pollution gets worse across much of Europe; association between type 1 diabetes and COVID infections in kids; much more:
In today’s Duke AI Health Friday Roundup: probing reading comprehension for machines; CAR-T for lupus; reflections on loss of public trust in science (and how to fix it); reverberations of racism in digital image collections; FDA eyes pulse oximeter performance with darker skin; USPSTF recommends widespread anxiety screening; wearable sensors for measuring tumor regression; much more:
In today’s Duke AI Health Friday Roundup: peering through COVID-induced “brain fog”; critiquing academic culture at computer science conferences; cardiovascular polypill trial results show benefit for secondary prevention; medical racism, radiation, and x-rays; the case for better data on race, ethnicity & language; cases of acute flaccid myelitis increase; AI decodes speech from thought without invasive probes; much more:
In today’s Roundup: health and the genetics of circadian rhythms; trust and human-robot interactions; how bias gets built into GANs; Stone Age surgery; probing the limits of scientific education and civic engagement; judge rules against PrEP coverage; prosthetics for memory; arguing for and against including AI in medical training; much more.
In today’s Duke AI Health Friday Roundup: digital biomarkers for disease surveillance; weak electrical current for countering memory loss; racial disparities in prostate cancer diagnosis persist in affluent neighborhoods; proposing a new approach for managing journal retractions; cybersecurity primer for healthcare; a boom in rare kidney disease research; segregation, redlining, and firearm violence in Baltimore; “touchless” sensing for detecting Parkinson disease; much more.
In today’s Duke AI Health Friday Roundup: the limits of language in AI; a worsening child mental health crisis in North Carolina; big open-access policy change from OSTP; using machine learning to predict carcinogenic compounds; choosing whether to attempt to eradicate diseases; new method may offer cheap path to unraveling “forever” PFAS chemicals; reporting of NIH-funded clinical trials still lags; lip service vs meaningful action in publication integrity; equity, justice, and disability data; much more:
In today’s Roundup: assessing the health merits of fitness trackers; Florence Nightingale’s contributions to data visualization; racial inequity in uterine cancer; emergency departments under strain; how “social capital” shapes our world; educational tech and cyber risk; why evidence-based medicine needs implementation science; the hidden chaos of living systems; much more.
In today’s Duke AI Health Friday Roundup: the evolution of lactose tolerance; philosophical NLP AI hard to tell from the real thing; possible data fraud rocks Alzheimer’s research; free library of AlphaFold protein structures released; humans may be less resilient to extreme heat than thought; nursing homes pursue aggressive legal tactics over unpaid bills; study homes in on COVID outbreak epicenter; White House pivots toward harm reduction in drug policy; why data breaches keep happening; much more.
In today’s Duke AI Health Friday Roundup: data leakage as challenge for machine learning replication; effectiveness and uptake of machine-learning application for sepsis detection; associations between community violence and cardiovascular risk; differences in aging have implications for dementia risk; assessing the state of telehealth in NC; bolstering diversity in clinical trials; tiny windup motor runs on DNA; much more.
In today’s Duke AI Health Friday Roundup: BLOOM debuts as open-source, open-access large language model; bias in foundation models translates to real world via robots; calculating the “missing Americans” of higher US mortality rates; what to do when physicians spread medical misinformation; saving lives by brushing teeth; bringing back the single-panel figure; advances in wastewater analysis allow scientists to track individual COVID variants; much more.
In today’s Duke AI Health Friday Roundup: AI considered as “late-stage teenager”; racial equity in clinical trials; ethical filtering for large language models; precision medicine for rheumatology; updating approaches to disease surveillance; Facebook inundates cancer patients with dubious ads; making pulse oximeters work for everyone; adjuvant boosting for COVID vaccines; legal framework for biometric tech; adversarial training for NLP models; much more.
In today’s Duke AI Health Friday Roundup: embodied AI reaches toward a new kind of problem-solving; considering the role of pragmatic and virtual clinical trials; Lancet surges to the top of journal impact factor ratings; mistrust of tech and the collapse of contact tracing efforts; FDA orders Juul to cease marketing vaping products; managing polypharmacy in heart failure; the ethics of large language models; the threat to privacy posed by inferential analytics; making inroads on childhood food insecurity; much more.
In today’s Duke AI Health Friday Roundup: the mathematics of randomized trials; medical debt affects huge proportion of Americans; k-safety properties help keep machine learning models on track; COVID worsens peer-review crisis; new analysis reveals US health disparities; human activity overwhelms animal senses; one girl’s illness yields new insights into lupus; FDA advisory panels give thumbs-up for COVID vaccines in small children; US still lags in clinical trial diversity; learning to measure what matters to patients; much more.
In today’s Duke AI Health Friday Roundup: new lightning-fast algorithm solves maximum flow; discrimination puts strain on hearts; skeptical views on artificial general intelligence; head-turning cancer trial results from ASCO; using machine learning to reduce cognitive load on healthcare professionals; digital innovations in mental health may not reach everyone; tracking what may be multiple monkeypox outbreaks; much more.
In today’s Duke AI Health Friday Roundup: supercomputer breaks exascale barrier; pulse-oximetry meters yielded underestimates of COVID effects in people of color; machine perfusion keeps liver viable for transplant; ancient victims of Vesuvius have genomes sequenced; gender bias in math prizes; lobbying against data privacy legislation intensifies; how to spot a “hijacked” scientific journal; machine learning algorithms ID potentially dangerous asteroids in old astrophotos; much more.
In today’s Duke AI Health Friday Roundup: avoiding the Turing Trap in AI; monkeypox emerges in US, Europe; roadmap for better western blot data; patent law on collision course with AI; individual variability may still confound mouse models; firearms the leading cause of death for children in 2020; EMA puts hold on generics due to dubious bioequivalence studies; retracing the path that let COVID jump from minks to humans; move toward Medicare Advantage plans has implications for availability of data; much more.
In today’s Duke AI Health Friday Roundup: machine learning deduces physical law; marking a somber COVID milestone; rebuilding trust in public institutions; lack of diversity still a problem for clinical research; frameworks for evaluating clinical AI; digital ID can leave most vulnerable behind; study compares vaping vs. nicotine patches for quitting smoking; European digital privacy protections poised to go beyond GDPR; Great Pacific Garbage Patch turns out to be surprisingly rich in marine life; much more.
In today’s Duke AI Health Friday Roundup: automatic bias detection; integrating AI into clinical workflows; the pitfalls of ancestry data; going beyond fairness in AI ethics; figuring out the “why” of some cancers; why preprints are good for patients, too; transparency and reform for medical debt; US public still esteems scientists; urging social media to open its book for researchers; imaging the invisible at cosmic scales; much more.
In this week’s Duke AI Health Friday Roundup: toolkit for applying NLP to EHR free-text; AI powers wildlife conservation efforts; addressing racism in medical education; what’s next for AlphaFold; questioning the review process for NSF fellowships; new hydrogel is crushing it, literally; mobile health for reducing health inequities; a new framework for managing medical technologies; AI and a new era of colonialism; much more.
In today’s Duke AI Health Friday Roundup: Global review of bias in clinical AI studies; reconsidering hypertension in pregnancy; how humans build and share algorithms; science journals’ responsibilities to mend old harms; European regulators clear AI x-ray reader for use; starlings and Shakespeare; brain imaging reference spans entire lifespan; much more.
In today’s Duke AI Health Friday Roundup: siloed storage risks big data fading into obscurity; renewed focus on viral factors in MS; the racial legacy of the Flexner Report; audits for medical algorithms; patients link up with researchers to help drive studies of long COVID; Sharpless to step down as NCI chief; the rewards of “diving into a new field” later in life; study examines different COVID vaccines in head-to-head comparisons; “sonification” portrays exoplanet data as music; much more.
In today’s Duke AI Health Friday Roundup: estimating the health risks of longer-term space missions; report takes pulse of AI in 2022; no COVID benefit for early ivermectin in Brazilian RCT; dashboard condenses firehose of AI research into manageable views; light pollution’s impact on human health; postpartum Medicaid extension goes into effect in NC; lack of mental health resources to counter effects of racism on campuses; trove of 1950 Census data released; much more.
In today’s AI Health Friday Roundup: drone delivery for blood products; geometry, human cognition & AI; the FTC & “algorithmic disgorgement”; magpies: even smarter than we realized; revisiting data dashboards after 2 years of COVID; rethinking disability and the workplace; credit reporting companies’ new approach to medical debt; NIST publishes report on AI bias standards; brain implant allows “locked-in” person to communicate; much more.
In today’s Roundup: Snakebitten? Data science can help; large (harmless) spiders on the march; adversarial attack with lasers foxes self-driving LIDAR; impact of state policy on COVID mortality; Cow Clicker as a window onto online culture; creating guardrails for health AI; growing impatience with data blocking; ARPA-H gets funded but organizational questions remain; disparities impact healthcare workers, too; possible unintended consequences of open access publishing; challenges in getting data and code from study authors; much more.
In today’s Duke AI Health Friday Roundup: why “AI” and “machine learning” can be loaded terms; lead exposure may have docked IQ points; DeepMind’s Ithaca parses, dates ancient Greek writing; pharma marketing explores the rest of the color wheel; effects of school masking policies; preprints need clarity on policies; Shackleton’s Endurance located on ocean floor; Surgeon General issues call for misinformation data and perspectives; NC to use Medicaid to tackle social determinants of health; IoT, medical devices at risk from security vulnerabilities; much more:
In today’s Duke AI Health Friday Roundup: war in Ukraine spills over into cyber realm; data shifts spell trouble for clinical AI; Berkeley loses CRISPR patent battle; differences in neighborhood mobility can affect disease risk; racism (not race) as a risk factor; piping digital notebooks directly into manuscripts; grim news from latest climate report; perfect cryptographic secrecy possible; new analyses point to Wuhan market as point of origin for COVID pandemic; who’s keeping track of your location data?; new lemur makes debut at Duke Lemur Center; much more:
In today’s Roundup: when docs spread misinformation; deep learning holds reins in fusion reactor; parsing NC regulations on syringes; sudden collapse of pain clinics leaves patients stranded; trade secrets, patents and bioscience; remembering Paul Farmer; the “wicked problem” posed by retracted scientific papers; fighting ageism in AI; groundswell gathers for federal privacy protections; much more:
In today’s Roundup: why bigger is better for neural nets; REDUCE-IT eyes cost-effectiveness for statin alternative; COVID’s burdens for immunocompromised; reinforcement learning yields AI that can beat humans at driving simulator; policy journal devotes issue to racial equity in healthcare; fighting smartphone addiction to boost scientific productivity; scientists not always equipped for social media furor; Califf returns to FDA leadership post; transgenic zebrafish on the loose in Brazil; much more:
In today’s Roundup: assessing algorithmic impact for healthcare; nerve stimulation to treat paralysis; teaching robots to generalize; extending sleep linked to reduced caloric intake; countering quantum hackers; making space for compassionate care; reconsidering the toll from the Black Death in medieval Europe; more weirdness from the Burgess shale; when caregivers are machines; spotlight on ad targeting and data sharing practices; much more:
In today’s Roundup: Crisis helpline passes data to for-profit spinoff; the ethics of visual representations of AI; COVID’s toll on kids in sub-Saharan Africa; Algorithmic Accountability Act introduced; untangling tau protein; “fingerprinting” for journal PDFs; new nonprofit clinical trials org launches; the long half-life of problematic datasets; cybercriminals benefit from lax attitudes toward data protection; countering buggy scientific programming; much more:
In today’s Roundup: tracking years of work in medical AI and machine learning; considering data ethics for mathematicians; neurological consequences of COVID; new antivirals will be needed for COVID in future; OpenAlex debuts research database; digital medicine and targeted ads; practice-based research networks struggle in COVID’s wake; inclusivity and bias in human-machine interactions; AI forays into breakfast cereal; much more:
In today’s AI Health Roundup: Black patients more likely to have stigmatizing descriptions in EHR notes; Office of the National Coordinator debuts Trusted Exchange Framework; global toll of COVID likely undercounts deaths; impact of “nocebo effect” on reported adverse events in COVID trials; world’s children still face dire health impacts from lead; trash piles up as Omicron spreads among sanitation workers; links between eviction and Medicaid disenrollment; much more:
In today’s Roundup: digital phenotyping with patient-generated data; Epstein-Barr virus role in multiple sclerosis; COVID vaccination effort stalls in younger kids; “growing pains” for arXiv preprint server; pig heart transplantation raises ethical issues; unpacking Medicare coverage decision for Alzheimer’s medication; digital literacy not the only factor in sharing of misinformation; much more.
In today’s Roundup: looking ahead to the next pandemic; best data visualizations of 2021; health AI for the Global South; meta-analysis sharpens focus on ‘long COVID’; diet, gut microflora, and immunotherapy; sorting through Web3 hype; despite progress, chatbots still go off the rails; flattery for dictator still enshrined in scientific literature; the state of scientific peer review in 2021; much more.
Well, it’s 2022, and we’re already running a bit behind. Nevertheless, here is an entirely subjective selection of Roundup items from 2021 that caught our eye, raised our eyebrows, or made us stop and think awhile. We hope you’ll enjoy them as well.
Thanks for reading, and here’s hoping for a better 2022.
In today’s Roundup: Spread of omicron variant may make for a gloomy winter; the ethics of exporting AI models; large study examines cardiovascular side effects from COVID, vaccines; abandoning traditional publishing for preprints; deciding authorship position with videogame duels; transparent jellyfish open window on neurobiology; ditching systematic reviews for something faster; Senate committee meets to consider Califf FDA nomination; more people skimping on medical care due to cost; much more.
Forge AI Health Friday Roundup
In today’s Roundup: introducing simulation intelligence; replication project for cancer studies has hard time getting data; DeepMind makes splash with compact language transformer; survey bias overestimated vaccine uptake; pandemic takes toll on nation’s blood pressure; synthetic embryos raise thorny questions; study finds no benefit from Medicare Advantage bonus program; what ethnography can tell us about the reproducibility crisis; much more.
In today’s Roundup: federated learning on the Internet of Things; recognizing cells via barcodes; favoritism in scientific publishing; calling for better BIPOC representation in health data; building an evidence base to fight health misinformation; ethical complications for large population genetics datasets; survey indicates growing burnout among scientists; closing the global gap in COVID vaccination; much more.
In today’s Roundup: healthcare professionals buckling under the strain of a second pandemic year; unintended consequences from health apps; GPT-3 livens up software error messages; worrying rise in COVID cases ahead of holidays; reimagining diagnostic excellence; school nurses exhausted; second patient found to have naturally cleared HIV; just how much we owe peer reviewers; Califf tapped to head FDA for second time; much more.
In today’s Roundup: specially engineered bacteria solve mazes; checking up on the Delphi project’s “machine ethics”; white-tailed deer may be a reservoir for COVID; the cardiovascular toll of pollution; Surgeon General releases primer on countering health misinformation; COVID upends scientific career paths; how surveillance erodes community; rethinking risk and our response to it; getting a handle on sensor-generated health data; much more.
In today’s Roundup: confronting AI applications that discriminate by appearance; strong showing for experimental oral COVID therapy; health burdens of air pollution may be worse than thought; HPV vaccine quashes cervical cancer in England; machine learning meets microscopy; alarming attrition among lab staff; a theory of justice for AI; much more.
In today’s Roundup: COVID vaccination for kids draws closer; AI unleashed on hypothesis creation; Facebook faces criticism, moves into “metaverse”; modeling study sheds light on early COVID transmission; bracing for the next variant; scientists wade into the public discourse; evaluating the effects of “open” peer review; the imperative for pharmacoequity; trial finds SSRI antidepressant is effective in helping to avoid COVID hospitalization; much more.
In today’s Roundup: the pitfalls of oracular AI; climate change and its impact on almost every aspect of human health; big data, small data, and future directions for machine learning; the limits of tech whistleblowing; the effects of redacting identifying information on NIH grants; “universal animals” illuminate links between embodiment and intelligence; dataviz considered as superpowers; returning narrative to scientific publishing; much more.
In this Roundup: COVID’s impact on “fly-in” medical missions; alarm and debate over FHIR hacking report; real-world AI study finds “negligible” tradeoff between fairness, accuracy; breast cancer poses greater risks for Black women; seeking clarity on ivermectin; convolutional neural networks gaining ground in facial recognition; mixing COVID vaccines and boosters; FDA seeks lower sodium levels; developing trustworthy AI; NISO seeks to make paper retraction more visible; much more.