AI Health Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, clinical research, health policy, and more.
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: AI for decoding dog barks; Surgeon General designates firearm violence as public health crisis; where next for AlphaFold?; coming to grips with AI’s water use; randomized trial evaluates extended-release ketamine for depression; parsing the implications of vaccine exemption rates; forking paths in statistics; Coalition for Health AI releases assurance standards guide; LLMs for analyzing radiology reports; how AI will impact workforces; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: scientific literature being flooded with bogus publications; LLMs can help screen for trial participants; lingering questions about H5N1 transmission; trial tests walking for lower back pain; interpretable deep learning model helps docs scan EEGs; why you shouldn’t cite chatbots; LLM translates neglected languages; AI challenges for global governance; critiquing (and defending) Medicare Advantage; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: countering hype around AI for discovery science; CO2 levels associated with risk from airborne disease; disproportionate effects of Medicaid disenrollment; a path forward for AI in nursing; references to nonexistent cell lines reveal tracks of paper mill publications; medicine stares down challenges of heart disease in coming years; deciding whether a “frictionless” experience in instruction is actually desirable; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: sizing up GPT-4 with retrieval-augmented generation; progress in stretchable RF antennas; sociotechnical frameworks for AI; creating “assembloids” of organoids to explore complex biological systems; evaluating clinical text datasets for LLM training; legal and ethical challenges for using LLMs in medicine; steps toward tackling replicability problems in scientific research; the imperative for informing trial participants about results; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: an ML-enabled intervention provides nudges for hard medical conversations; informational “inoculation” for misinformation; understanding what LLMs can and can’t do; power dynamics affect medical care; assessing the impacts of race-based adjustments for lung function; quantum internet marks another milestone; links between race, environmental pollution, and Alzheimer disease; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: the persistence of bias in large language models; genomic study sheds light on mammalian adaptations; failure to publish code with recent AlphaFold paper irks scientists; questioning LLMs’ value proposition; application flags papers discussed on PubPeer; the hidden human expenses of cost-sharing in healthcare; questioning whether generative AIs are ready for primetime in patient care; a late foray into alchemy; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: lessons for health AI from self-driving vehicles; monoclonal antibody for malaria prevention; intense pressures on foreign residency applicants; large language models offer second opinions; poor quality dogs some patient-facing materials; PCAST releases report on AI for science and research; mapping patterns of research misconduct in the literature; new improvements to AlphaFold debut; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: foundation models for reading echocardiograms; FDA weighs in on lab-developed tests; telling human from AI in conference abstracts; NIST addresses Generative AI; USPSTF revises age recommendations for mammogram screenings; Dana-Farber describes institutional rollout of GPT for staff; how some drugs “hijack” brain’s reward circuits; ensuring publication integrity in the age of AI; good advice for responding to peer review; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: WHO debuts healthcare chatbot; eyeing US preparations to counter avian flu; video games for the phylogenetic win; the importance of evidence-based approaches to smoking cessation; AI-assisted email associated with some benefits for docs, but saving time is not one of them; AI sets its sights on modern battlefields (and haunts some old ones); surprising results from studies of medical debt relief; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: TRIPOD-AI statement covers best practices for reporting on AI prediction models; taking stock of real-world effectiveness of RSV vaccination; totting up the balance sheet for generative AI; the importance of inclusivity in design decisions; imputing missing covariate data; AI hunts down source of metastatic cells; multiple nations gear up to battle smoking and vaping; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: new pretraining approach allows LLMs to cite sources; Light Collective releases draft guidance on AI rights for patients; NYC government chatbot delivers dubious advice; study evaluates precision medicine approach in pediatric cancers; weighing up AI X-risk; new analyses cast doubt on DIANA fMRI technique; counting the full data costs of zero-shot learning for multimodal generative AIs; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: benchmarking LLMs for extracting oncology data from charts; greenery and mental well-being; LLMs get around information asymmetry; tiny artificial liver shows promise for treating liver failure without transplantation; network analysis reveals fraudulent “paper mills”; turning a skeptical eye on LLM performance on bar exams; the serious health impacts of loneliness; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: White House issues policies for AI use by federal agencies; NEJM AI requires registration of interventional AI studies; 3D specimen imaging project reaches finish line; insights into the human immune system, courtesy of COVID; using “digital twins” in biomedical research; hallucinated software gets called by real computer code; who should be responsible for policing integrity in scientific publication?; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: a framework for human labor in AI; the global health risks of air pollution; dermatology database seeks to overcome skin color bias in previous datasets; using generative AI for science communication; LLMs being used to generate peer reviews; the effects of digital redlining; AI-generated images used in engagement farming and scams; predicting underlying text from ground-truth embeddings; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: implementing generative AI in healthcare; landmark study looks at health consequences of microplastics; using AI to distill summaries from patient discharge notes; scientific misconduct haunts Alzheimer research; foundation models on the cutting edge of biological discovery; lean budget times may be ahead for research agencies; study flags bias regarding use of GPT in hiring decisions; plagiarism in peer review; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: toward generalist medical AI; limited benefit for rapid respiratory virus testing in ED; why successful health AI is more than algorithms; discussion paper examines AI impacts for Black community; epithelial organoids cultivated from stem cells in amniotic fluid; Coalition for Health AI debuts as nonprofit, announces leadership; Alzheimer disease biomarkers present long before clinical diagnosis; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: configuring health AI for human benefit; criticism erupts over figure in All of Us paper; risk assessment for open foundation models; deprecated authorship practices still common in life sciences; flagging cross-task inconsistency in unified models; promising findings for treating food allergies; gene duplication implicated in antimicrobial resistance; adding up generative AI’s environmental tab; using ChatGPT to evaluate research articles; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: few-shot learning powers drug interaction model; how LLMs pick up new skills (and why it matters); gene-swapped bananas are bulwark against fungal foe; new papers build on trove of NIH All of Us genetic data; parsing recently dismissed lawsuit over EHR data; pressure builds for definitive path on AI regulation; training language models to build proteins; a really big PDF; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: the toll of digital disconnection; teaching LLMs to mimic doctors’ cognitive approaches; prosthetic allows user to sense temperature; a benchmark for LLMs designed to diagnose rare diseases; bibliometric analysis shows lack of clarity regarding genAI use in scientific publishing; LLMs can autonomously hack websites; regulatory frameworks for thinking about AI; the lasting epigenetic effects of smoking; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: Department of Commerce announces debut of US AI Safety Institute Consortium; AI literature may be facing its own replication crisis; where to next for public health?; FDA eyes bias in pulse oximetry; California legislators propose new AI regulations; AI benchmarks easily perturbed; PLOS looks back on four years of open peer review; Google makes its Gemini AI available for some products and customers; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: how AI is reshaping society; NLP evaluation experiments reveal flaws; new study illuminates why insects circle streetlights; AI automation of jobs may proceed gradually; cardiologists call for better collection of SOGIE data; survey examines AI governance; sizeable proportion of dementia cases may be due to liver dysfunction; sifting EHR data for diseases transmitted via transfusion; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: the importance of sharing imaging data; NSF debuts National AI Research Resource; microbe genomes give up food preferences; groundswell gathers against ‘paper mills’; AMIE boasts high performance as conversational medical AI; how AI may change liability; new model for how error correction works in brains; dodging dataset shifts; NASEM recommends training on social media impacts for healthcare providers; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: new ML approach boosts geometry problem-solving; GPU architecture allows LLM eavesdropping; “anthrobots” suggest future therapeutic possibilities; new kind of AI bias identified; biological retinas inspire improvements in computer color vision; paper mills branching out into bribery; UK Post Office software disaster offers AI lessons; how AI tools could reshape organizations; many docs unfamiliar with how FDA evaluates devices; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: quantum computing plus LLMs; the case for zero-shot translation in scientific LLM applications; FTC warns model-as-a-service companies to toe the line on privacy; semaglutide use associated with reduced suicidal ideation; series examines developing, validating clinical prediction models; using LLMs to surface social determinants of health; FDA warns over declining vaccination rates; predatory publishing in medical education; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: chatbots and Borgesian Babel; dogs are good for your health; chatbot errs in diagnosing pediatric conditions; assurance labs for health AI; digital apps for contact tracing; “Coscientist” AI shows research chops; health impacts motivate people to address racial disparities; new class of antibiotics debuts against resistant A. baumannii; wearables for depressive disorders; meeting a new paradigm for data sharing; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Roundup: a couple of big weeks for AI regulation; GPT-4 reveals serious biases in clinical tasks; brain organoids bridged to computer inputs; proposing a network of “assurance labs” for health AI; automated ECG-based tools for risk assessment; assessment framework for eHealth tools; Stanford AI experts look forward to 2024; diagnostic accuracy of large language models; genome of vanished “wooly dogs” decoded; surveys examine state of deep learning; more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: applying large language models to robotics; new insights into severe morning sickness; New England Journal reflects on historical medical injustices; clarifying the economics of generative AI; ozone pollution responsible for elevated risk of low birth weight in many LMICs; “productivity paradox” may temper expected benefits of AI in healthcare; prompt injection risks for customized GPT; surge of article retractions in 2023; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: exploring LLMs’ capacity for inductive reasoning; Google debuts new Gemini LLM; structural racism and lung cancer risk; “passive fatigue” behind virtual meeting burnout; fruit flies suggest approach for generative AI learning; simple attack prompt can make LLMs disgorge sensitive training data; early warning for ovarian cancer; rating LLM trustworthiness; the global warming contributions of a digital pathology deep learning system; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: AI needs to get the lay of the healthcare land; drones beat ambulances for AED delivery; AI stress validation tool stumbles on validation; lessons from COVID misinformation; more worries for screen time and kids; when not just the content but the author is AI-generated; LLMs can’t fix healthcare by themselves; using GPT-4-ADA to cook up bogus research dataset; adapting quality assurance methods for AI; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: testing GPT-4’s diagnostic chops; yeast with 50% synthetic genome survives, replicates; roles for AI in clinical trials; role of pets in zoonotic spillover; vaccine status, bias, and perceptions of risk; potential for bias in radiological deep learning models; what rats remember; developing standards for health-related chatbots; how publishing professionals perceive recent changes in social media; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: more earthly biota than stars in the sky; AI needed to subtract AI-created content; researchers apply cognitive tests to GPT; study highlights bad citations as serious problem for science; mental health resources for LGBTQ+ youth; new therapies needed to counter dengue’s march; bioRxiv uses LLMs to create tailored content summaries from papers; risks of generative AI not evenly distributed; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: dissecting the AI executive order; deep learning predicts macular degeneration; history of medical debt; unease over the surveillance campus; social vulnerability, diabetes, and heart health; open access and consolidation in scholarly publishing; AI may require new legal frameworks; diverse datasets needed for training AI; “watermarking” may not work for distinguishing AI-generated content; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: transparency index for foundation models; upending assumptions about 1918 flu; disparity dashboards considered; fixing drift in image classifiers; COVID trial shows no benefit for vitamin C; Excel data gremlin vanquished; LLMs reveal medical racism, bias when queried; external validation not enough for clinical AI; “data poison” fends off generative AI; NIH changes grant evaluation criteria; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: digital determinants of health; determining when a pandemic is “over”; despite law, academia and institutions slow to return Native American art and remains; cancer cells siphon mitochondria from T cells; AI deciphers scorched scrolls from Roman ruins; addressing “ecosystem level” bias in AI; who’s legally on the hook when LLMs do harm?; writing grants with ChatGPT; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: the hidden influence of chronobiology; AI predicts immune escape; comparing COVID surveillance systems; yet another way to cheat at citations; updating models for ICU algorithm degrades performance; new “cooling” chemicals in cigarettes dodge menthol ban; AI image generator can’t be coaxed away from biased images; stroke deaths poised to rise in coming years; shedding light on AI’s dark corners; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: deep learning predicts variation in proteins; how AI affects clinical productivity; mRNA insights garner Nobel prize; US continues to lose ground in health, life expectancy; sitting is still bad for you; surveying algorithmic bias mitigation; antiracist approaches to clinical documentation; the surveillance and human labor interwoven into AI systems; the LLM hype cycle: peaks and troughs; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: lighting an (s)Beacon for genomic data; randomized trials for clinical AI; bees exhibit signs of sentience; scrutiny of AI chip design paper grows; the complexities of statistics vs. AI in medicine; deep brain stimulation for severe depression; worries about AI that sounds too human; tackling clinical conversations with GPT-4; YouTube disinformation videos being served to kids as STEM educational material; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: harnessing physical processes to power AI; xenograft study in mice sheds light on neuronal destruction in Alzheimer’s; speaking plainly in science; multimodal AI comes to the clinic; aligning AI fairness with medical practice; small-town healthcare imperiled by lack of doctors; GPT enhances consultant productivity and levels skills – with caveats; ableism in computer programming; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: navigating multimodal learning in AI; fine particulate pollution and breast cancer; surreptitious ChatGPT use pops up in scientific literature; the challenges of safeguarding generative AI against prompt injection; FDA panel gives thumbs-down to ubiquitous decongestant phenylephrine; study surveys standards for employing AI in journalism; twin study of WWII veterans sheds light on consequences of traumatic brain injuries; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: biased data offers window onto health equity issues; cancer therapeutics eye AI for drug discovery; testing machines with human exams; unveiling the “hidden curriculum” in medical education; once-vaunted telehealth startup collapses; eye movements combine with other data for early autism diagnosis; government seeks public input on AI and copyright; overemphasis on technology during COVID shutdown may have worsened education inequities; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: reinforcement learning to align LLMs with human preferences; modeling T cell exhaustion; examining clearance lineages of AI medical devices; writing as medicine for docs; healthcare needs more than current foundation models; watermarking images to spot AI influence; semaglutide tested in heart failure; NCSU researchers automate dragnet for fraudulent robocalls; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: how bias emerges in healthcare algorithms; COVID vaccination and reduced maternal-fetal risk; research institutions need to beware predatory publishers; AI enables speech and expression by avatar for paralyzed woman; the protein “unknome” gets a closer look; figuring out what open AI really means; a testing schema for AI consciousness; sharing code helpful, encourages citations – but most authors still don’t share; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: transparency for AI-generated content; a critical appraisal of large language models; reconsidering radiation therapy; the future of governance for health AI; sport supplements whiff on truth in labeling; electronic payment charges siphon money from healthcare; focusing on AI’s real dangers; investigation reveals trouble with ethical oversight at French institute; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: Meta debuts Llama 2 large language model; calculating the toll of misdiagnosis; teaching writing in the age of GPTs; geographical concentration in AI industry; transgender youth, social media & mental health; responding to systemic racism in science; ML for extracting data from unstructured EHR records; regulatory implications for medical chatbots; building resiliency for a hotter world; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: a primer on foundation models; genetics and asymptomatic COVID; overblown claims for AI content detection; don’t trust GPT with the baby just yet; AI thirst for data drives interest in synthetic sources; the merits of working (out) for the weekend; physics offers window on sudden heart arrhythmias; tracing developments in press coverage of scientific preprints; expanding vaccination coverage for uninsured adults; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: the unseen human costs underpinning popular AI chatbots; oceanic plastic pollution comes in all sizes; neighborhood redlining casts long shadow on health; project eyes AI-assisted texts for health behavior nudges; drowning remains a persistent threat to young children in US; catching up with a flurry of recent AI applications in medicine; big hospital data breach exposes patient names, emails; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: need for a global AI observatory; humans like GPT-3’s medical information better, regardless of whether it’s true or false; Surgeon General tackles epidemic of loneliness; problems with recency bias in NLP literature; ticks surf static charge to land on hosts; will scholarly publishing be able to cope with AI-generated content?; EHR data, bias, and pragmatic clinical trials; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: reflecting on the future of AI; successful creation of model “human embryoid” raises challenging questions; transparency reporting for generative AI; health imperiled by excessive heat; internet already feeling the strain of AI-generated junk content; chemo shortage becoming acute; a health AI code of conduct; disappointing report card for internet privacy protections for kids; leveraging social networks for contact tracing; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: nurses’ judgment vs the algorithm; mapping mutations in primate and human genomes; GPT-4 tackles differential diagnosis; the maternal death crisis threatening Black women; AI for drug design; ventilation as public-health priority; cut-and-paste errors proliferate in EHRs; aspirin and anemia in older adults; applying Ubuntu to AI ethics; informed consent in psychedelic research; cognitive impairments bring financial peril for elderly; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: helping kids navigate the AI landscape; transformer models can tackle a myriad of predictive clinical applications; the surprising health burdens of noise; doctors, scientists receiving elevated levels of online harassment; large quantities of LLM-generated text swamp Amazon’s mTurk platform; digital media survey shows shifts toward image- and video-centric social media; how AI image generators can supercharge existing societal bias; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: probing the limits of transformer models; the merits of visual explanations in healthcare; potential for racial and ethnic bias in algorithmic healthcare tools; gratitude ceremonies for donated bodies; AI targets antibiotic candidate for resistant pathogen; postdoc pipeline for life sciences in danger of drying up; climate change may narrow range of livable area on Earth for millions; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Roundup: does explainable AI help with decision-making?; “skeletal editing” opens new doors in chemistry; watching out for persuasive language in science; Charles Babbage and the use of data for control; hospital CT scanner illuminates hidden manuscripts; ChatGPT exceeds brief, writes fiction in court filing; many older patients use health portals; conference eyes growing threats from misinformation; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: large language models and evidence-based medicine; “digital bridge” restores mobility after paralysis; impact of legislation on access to gender-affirming care; switching endpoints common in clinical trials, reporting less so; prospects for generative AI in medicine; publishing credits join college entrance arms race; Surgeon General addresses concerns about social media and youth mental health; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: neural nets help robots find their place in the world; two studies underscore the toll of inequity on health, wealth; Google presents results from Med-PaLM 2; study puts price tag on manuscript formatting; GPT-4 task: explain GPT-2; CRISPR screening helps identify potential antidote for deadly mushrooms; how small can a language model go (and still make sense?); ChatGPT goes to college; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: what the medical world still doesn’t understand about AI; meeting the needs of small vulnerable newborns; embracing failure in science; Google goes big on generative AI; machine learning for hunting shipwrecks in Thunder Bay; regression vs ML for breast cancer prognostication; data availability statements often disappoint; EU committees signal move toward stronger data privacy; much more:
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: Duke’s Cynthia Rudin on transparent AI; Black Americans face “cardiology deserts”; marked jump in proportion of youth emergency visits for mental health reasons; considering AI for infectious disease surveillance; problems emerge with nation’s organ donation systems; share of US oncology clinics owned by private equity grows; language may affect musical perceptions; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: study reveals bumpy path to integrating AI into clinical care; revisiting questions about credit for DNA discovery; asking ChatGPT to help with your research; graphene “tattoo” pacemaker tested in rats; model uses EHR data to predict hospitalization risks for kids with complicated health issues; transplanting pancreatic islet cells to control diabetes; testing a video explainer for patients who can benefit from ICDs; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: does AI need a body for real understanding?; GPT-4 and Epic join hands; Duke scientists achieve imaging breakthrough; tech companies behind health data “gold rush”; pros and cons of owning an emergency defibrillator; the case for independent, open-source AI; imaging journal editors resign over publication charges; associations between COVID and development of diabetes; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: reports from AI research groups stress governance, ethical issues; studies examine genetic lineages of lung cancer; cross-site tracking of hospital patients nearly ubiquitous; newly discovered form of archaea found in ocean mud puzzles, delights scientists; chatbots and the coming of AI-generated “grey goo”; FDA revises safety warning for opioids; plastic waste: the new geology; warning labels for data collection risks; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: Coalition for Health AI releases blueprint for trustworthy AI; stark differences in US-UK life expectancy; US youth mortality rises, and COVID is not the only driver; socioeconomic stigma among PhD students; open letter calls for “pause” on AI development; brain research adds to understanding of Alzheimer’s among Black patients; transgender persons may face increased healthcare costs, reduced access; taking stock of brain-computer interfaces; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: guiding patients through the digital care maze; human-centered design for AI; dissecting the Internet Archive ruling; demanding safeguards for biometric data; new organ donation rules create winners and losers; chatbots and a theory of mind; global equity in research collaborations; FDA issues guidances on AI, cybersecurity; US stillbirth rate remains high; trying to understand AI risks without being able to see under the hood; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: simple strategies for countering bias in large language models; debate swirls about new childhood obesity guidelines; pumping the brakes on AI; genetics of dogs living near Chernobyl’s ruins; researchers intrigued by GPT-4 but want more info; military aviators, groundcrew at heightened risk for some cancers; White House releases equitable data report; how to work with a data commons; finding collaborators across academic medical centers; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: GPT-4 AI debut wows users, raises questions; US maternal deaths continue to climb; early lower-respiratory tract infections have long-term consequences; transformer model predicts hundreds of millions of protein structures; revisiting the evidence for masking; portable MRIs possible; transparency practices in AI; benefits and burdens of open-access publishing; building healthy campuses; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: securing AI against hidden backdoors; reining in the size of generative AI models; implications of new obesity drugs; NC governor proposes more funding for mental health; US “deaths of despair” during COVID pandemic; coerced citation in medical literature; how UC Davis achieves med school diversity; protecting rainforests can also shield against future pandemics; artificial sweetener associated with cardiovascular risks; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: clinical vs general large language models for medical NLP; Lilly announces cost caps for insulin products; dangers of ‘algorithmic paternalism’; parental social support and mental health of LGBTQ kids; scoring system for housing help may be adding to inequity; rural hospitals see loss of obstetric/maternity services; working to improve health literacy and fighting misinformation; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: Bing AI chatbot’s churlishness surprises, alarms users; RCT of high-dose ivermectin for COVID shows no benefit for symptom length, hospitalization; “style cloaks” for art confound generative AIs; vascular surgery practices at Kansas VA draw scrutiny; data brokers are trafficking in sensitive health data; a skeptical perspective on chatbots’ prospects in education; 8 days a week needed for complying with clinical practice guidelines; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: The blurriness of large language models; troubling errors crop up in genetic studies; chatbots take center stage; a call for regulating AI now; patching injured rat brains with organoids; racial disparity and ambulance transportation; using AI to help parse animal communication; paper deluge heightens severity of peer review crisis; revisiting chocolate’s health benefits; azithromycin prophylaxis in childbirth; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: examining future prospects for large language model chatbots; Scandinavian study evaluates myocarditis outcomes; Black and Hispanic dialysis patients at greater risk for infections; FDA issues guidance for external controls; “jailbreak” prompting technique overrides chatbot’s ethical brakes; global agricultural use of antibiotics much higher than previously thought; closing the gap on building a culture of open research; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: the uncanny valley of AI applications; EHR data powers early autism screening; deer may be serving as reservoir for COVID; study delineates acute effects of diesel exhaust on human brains; FDA announces reorganization around food oversight; AI assistance makes its way to the patient bedside; wearable trackers to provide digital biomarkers for progression of Duchenne muscular dystrophy; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: heart failure outcomes worst for rural Black men; looking forward to the future of clinical trials; stroke risk algorithms perform worse for Black patients than for white patients; avian flu spreads at mink farm; drug manufacturing lapses harm young leukemia patients; trial will assess AI for lung cancer risk prediction; the effect of Twitter tumult on scholarly publishing; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: looking ahead to clinical trials for 2023; algorithm solves shortest-path problem for negative graphs; study finds widespread PFAS contamination in freshwater fish; relatively few hospitals compliant with federal price transparency mandates; ChatGPT creates artificial abstracts that pass scientific review; injection drug use fuels rise in endocarditis; Getty Images sues Stability AI over image scraping; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: developers’ roles in building ethical AI; median US prices for new drugs top $200K in 2022; whole-parasite malaria vaccine tested; the “political economy” of misinformation; are AIs breaking copyright laws?; AI, clinical decision-making, and risks to LGBTQ patients; inpatient adverse events still common, many preventable; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: using machine learning to create “synthetic” x-rays and answer medical questions; the continuing evolution of gender-affirming care; how glass frogs manage their disappearing act; validating survival models; going beyond Tuskegee when examining medical racism; how the scholarly community deals with paywalled papers; much more:
AI Health Friday Roundup
In today’s Duke AI Health Roundup: tools for disaggregating large datasets for bias evaluation; global deaths from COVID may be substantially undercounted; NIH proposes streamlining peer review for grants; engaging with AI issues is important – for everyone; enormous potential for mRNA platforms; “smart bandages” for healing, monitoring wounds; some telehealth companies funnel data to tech and social media companies; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: transferring skills between robots; NHLBI report scrutinizes social determinants of health in atrial fibrillation; kicking the tires on ChatGPT; making pulse oximeters work for everyone; considering race & ethnicity in medical school admissions; national database will track nonfatal opioid overdoses; testing the generalizability of a kidney injury model; researchers buckling under administrative burdens; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: large language model gets pushback from scientists; competition among viruses may blunt effects of feared winter “tripledemic”; oversight for machine learning software in healthcare; fruit fly connectome resembles machine learning architectures; “evidence map” for maternal health risk factors; the importance of trust between patients and physicians; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: new report envisions transformed digital ecosystem; refinement in neuromorphic chip design may open new frontiers in AI; March of Dimes report card shows worsening rates of preterm birth in US; potential complications for Twitter via EU GDPR regulations; in-utero enzyme replacement therapy for Pompe disease; totting up 8 years of predatory publishing onslaught; applying regulatory science for better medical AI; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: Meta language model predicts proteins; Paxlovid and long COVID risks; innate immune system offers possibilities for Alzheimer’s approaches; lawsuit sends ripples through the world of generative AI; harnessing value-based payment for health equity; teasing out the implications of OSTP publications access policy; pitfalls of using demographic data to promote algorithmic fairness; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: graph neural networks to describe galaxy evolution; bias in risk prediction models; CDC issues new opioid guidelines; app for exploring bias in AI-generated images; realizing biology’s potential for this century; how Wikipedia citations can affect impact of journal articles; giving an eel an MRI; the current state of medical malpractice law; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: time series classification for sensor data; most US maternal deaths are preventable; confronting stigmatizing language about substance use; digital repository houses wealth of 3-D specimen scans; user evaluations for explainable AI systems; retooling research funding mechanisms; the immunological reverberations of the Black Death pandemic; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: the invisible work underpinning AI; meta-research study reveals unexplained variance; dish of neurons learns to play Pong; toolkits for ameliorating AI bias; results from Moderna vaccine trial in kids; inequities in internet access; confronting racism in the culture of science; factors behind steep US life expectancy declines; AI translators for spoken language; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: modeling the perfect cup of joe; new regulations grant patients more control over health data; how COVID slips past cellular defenses; rates of physician burnout climb; what not to do in designing biomarker studies; misinformation’s threat to health cybersecurity; a human rights framework for AI; atrial fibrillation and use of DOACs in disadvantaged neighborhoods; interventions to reduce partisan animosity; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: White House OSTP releases “AI Bill of Rights”; Pääbo wins Physiology/Medicine Nobel for paleogenomics; COVID lessons and coming pandemics; messaging as public health tool; postdoc pipeline slows to a trickle; systematic review finds paucity of randomized trials of machine learning interventions; the limits of mental health chatbots; common pitfalls of AI journalism; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: transformer neural networks mimic the human hippocampus; NIH undertakes to ID function for every human gene; FDA releases new guidance for health AI; “nanorattles” shine a light on cancer detection; the impact of elite universities on hiring for US faculty; light pollution gets worse across much of Europe; association between type 1 diabetes and COVID infections in kids; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: probing reading comprehension for machines; CAR-T for lupus; reflections on loss of public trust in science (and how to fix it); reverberations of racism in digital image collections; FDA eyes pulse oximeter performance with darker skin; USPSTF recommends widespread anxiety screening; wearable sensors for measuring tumor regression; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: peering through COVID-induced “brain fog”; critiquing academic culture at computer science conferences; cardiovascular polypill trial results show benefit for secondary prevention; medical racism, radiation, and x-rays; the case for better data on race, ethnicity & language; cases of acute flaccid myelitis increase; AI decodes speech from thought without invasive probes; much more:
AI Health Friday Roundup
In today’s Roundup: health and the genetics of circadian rhythms; trust and human-robot interactions; how bias gets built into GANs; Stone Age surgery; probing the limits of scientific education and civic engagement; judge rules against PrEP coverage; prosthetics for memory; arguing for and against including AI in medical training; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: digital biomarkers for disease surveillance; weak electrical current for countering memory loss; racial disparities in prostate cancer diagnosis persist in affluent neighborhoods; proposing a new approach for managing journal retractions; cybersecurity primer for healthcare; a boom in rare kidney disease research; segregation, redlining, and firearm violence in Baltimore; “touchless” sensing for detecting Parkinson disease; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: the limits of language in AI; a worsening child mental health crisis in North Carolina; big open-access policy change from OSTP; using machine learning to predict carcinogenic compounds; choosing whether to attempt to eradicate diseases; new method may offer cheap path to unravelling “forever” PFAS chemicals; reporting of NIH-funded clinical trials still lags; lip service vs meaningful action in publication integrity; equity, justice, and disability data; much more:
AI Health Friday Roundup
In today’s Roundup: assessing the health merits of fitness trackers; Florence Nightingale’s contributions to data visualization; racial inequity in uterine cancer; emergency departments under strain; how “social capital” shapes our world; educational tech and cyber risk; why evidence-based medicine needs implementation science; the hidden chaos of living systems; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: the evolution of lactose tolerance; philosophical NLP AI hard to tell from the real thing; possible data fraud rocks Alzheimer research; free library of AlphaFold protein structures released; humans may be less resilient to extreme heat than thought; nursing homes pursue aggressive legal tactics over unpaid bills; study homes in on COVID outbreak epicenter; White House pivots toward harm reduction in drug policy; why data breaches keep happening; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: data leakage as challenge for machine learning replication; effectiveness and uptake of machine-learning application for sepsis detection; associations between community violence and cardiovascular risk; differences in aging have implications for dementia risk; assessing the state of telehealth in NC; bolstering diversity in clinical trials; tiny windup motor runs on DNA; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: BLOOM debuts as open-source, open-access large language model; bias in foundation models translates to real world via robots; calculating the “missing Americans” of higher US mortality rates; what to do when physicians spread medical misinformation; saving lives by brushing teeth; bringing back the single-panel figure; advances in wastewater analysis allow scientists to track individual COVID variants; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: AI considered as “late-stage teenager”; racial equity in clinical trials; ethical filtering for large language models; precision medicine for rheumatology; updating approaches to disease surveillance; Facebook inundates cancer patients with dubious ads; making pulse oximeters work for everyone; adjuvant boosting for COVID vaccines; legal framework for biometric tech; adversarial training for NLP models; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: embodied AI reaches toward a new kind of problem-solving; considering the role of pragmatic and virtual clinical trials; Lancet surges to the top of journal impact factor ratings; mistrust of tech and the collapse of contact tracing efforts; FDA orders Juul to cease marketing vaping products; managing polypharmacy in heart failure; the ethics of large language models; the threat to privacy posed by inferential analytics; making inroads on childhood food insecurity; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: the mathematics of randomized trials; medical debt affects huge proportion of Americans; k-safety properties help keep machine learning models on track; COVID worsens peer-review crisis; new analysis reveals US health disparities; human activity overwhelms animal senses; one girl’s illness yields new insights into lupus; FDA advisory panels give thumbs-up for COVID vaccines in small children; US still lags in clinical trial diversity; learning to measure what matters to patients; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: new lightning-fast algorithm solves maximum flow; discrimination puts strain on hearts; skeptical views on artificial general intelligence; head-turning cancer trial results from ASCO; using machine learning to reduce cognitive load on healthcare professionals; digital innovations in mental health may not reach everyone; tracking what may be multiple monkeypox outbreaks; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: supercomputer breaks exascale barrier; pulse-oximetry meters yielded underestimates of COVID effects in people of color; machine perfusion keeps liver viable for transplant; ancient victims of Vesuvius have genomes sequenced; gender bias in math prizes; lobbying against data privacy legislation intensifies; how to spot a “hijacked” scientific journal; machine learning algorithms ID potentially dangerous asteroids in old astrophotos; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: avoiding the Turing Trap in AI; monkeypox emerges in US, Europe; roadmap for better western blot data; patent law on collision course with AI; individual variability may still confound mouse models; firearms the leading cause of death for children in 2020; EMA puts hold on generics due to dubious bioequivalence studies; retracing the path that let COVID jump from minks to humans; move toward Medicare Advantage plans has implications for availability of data; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: machine learning deduces physical law; marking a somber COVID milestone; rebuilding trust in public institutions; lack of diversity still a problem for clinical research; frameworks for evaluating clinical AI; digital ID can leave most vulnerable behind; study compares vaping vs. nicotine patches for quitting smoking; European digital privacy protections poised to go beyond GDPR; Great Pacific Garbage Patch turns out to be surprisingly rich in marine life; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: automatic bias detection; integrating AI into clinical workflows; the pitfalls of ancestry data; going beyond fairness in AI ethics; figuring out the “why” of some cancers; why preprints are good for patients, too; transparency and reform for medical debt; US public still esteems scientists; urging social media to open its book for researchers; imaging the invisible at cosmic scales; much more.
AI Health Friday Roundup
In this week’s Duke AI Health Friday Roundup: toolkit for applying NLP to EHR free-text; AI powers wildlife conservation efforts; addressing racism in medical education; what’s next for AlphaFold; questioning the review process for NSF fellowships; new hydrogel is crushing it, literally; mobile health for reducing health inequities; a new framework for managing medical technologies; AI and a new era of colonialism; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: Global review of bias in clinical AI studies; reconsidering hypertension in pregnancy; how humans build and share algorithms; science journals’ responsibilities to mend old harms; European regulators clear AI x-ray reader for use; starlings and Shakespeare; brain imaging reference spans entire lifespan; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: siloed storage risks big data fading into obscurity; renewed focus on viral factors in MS; the racial legacy of the Flexner Report; audits for medical algorithms; patients link up with researchers to help drive studies of long COVID; Sharpless to step down as NCI chief; the rewards of “diving into a new field” later in life; study examines different COVID vaccines in head-to-head comparisons; “sonification” portrays exoplanet data as music; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: estimating the health risks of longer-term space missions; report takes pulse of AI in 2022; no COVID benefit for early ivermectin in Brazilian RCT; dashboard condenses firehose of AI research into manageable views; light pollution’s impact on human health; postpartum Medicaid extension goes into effect in NC; lack of mental health resources to counter effects of racism on campuses; trove of 1950 Census data released; much more.
AI Health Friday Roundup
In today’s AI Health Friday Roundup: drone delivery for blood products; geometry, human cognition & AI; the FTC & “algorithmic disgorgement”; magpies: even smarter than we realized; revisiting data dashboards after 2 years of COVID; rethinking disability and the workplace; credit reporting companies’ new approach to medical debt; NIST publishes report on AI bias standards; brain implant allows “locked-in” person to communicate; much more.
AI Health Friday Roundup
In today’s Roundup: Snakebitten? Data science can help; large (harmless) spiders on the march; adversarial attack with lasers foxes self-driving LIDAR; impact of state policy on COVID mortality; Cow Clicker as a window onto online culture; creating guardrails for health AI; growing impatience with data blocking; ARPA-H gets funded but organizational questions remain; disparities impact healthcare workers, too; possible unintended consequences of open access publishing; challenges in getting data and code from study authors; much more.
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: why “AI” and “machine learning” can be loaded terms; lead exposure may have docked IQ points; DeepMind’s Ithaca parses, dates ancient Greek writing; pharma marketing explores the rest of the color wheel; effects of school masking policies; preprints need clarity on policies; Shackleton’s Endurance located on ocean floor; Surgeon General issues call for misinformation data and perspectives; NC to use Medicaid to tackle social determinants of health; IoT, medical devices at risk from security vulnerabilities; much more:
AI Health Friday Roundup
In today’s Duke AI Health Friday Roundup: war in Ukraine spills over into cyber realm; data shifts spell trouble for clinical AI; Berkeley loses CRISPR patent battle; differences in neighborhood mobility can affect disease risk; racism (not race) as a risk factor; piping digital notebooks directly into manuscripts; grim news from latest climate report; perfect cryptographic secrecy possible; new analyses point to Wuhan market as point of origin for COVID pandemic; who’s keeping track of your location data?; new lemur makes debut at Duke Lemur Center; much more:
AI Health Friday Roundup
In today’s Roundup: when docs spread misinformation; deep learning holds reins in fusion reactor; parsing NC regulations on syringes; sudden collapse of pain clinics leaves patients stranded; trade secrets, patents and bioscience; remembering Paul Farmer; the “wicked problem” posed by retracted scientific papers; fighting ageism in AI; groundswell gathers for federal privacy protections; much more:
AI Health Friday Roundup
In today’s Roundup: why bigger is better for neural nets; REDUCE-IT eyes cost effectiveness for statin alternative; COVID’s burdens for immunocompromised; reinforcement learning yields AI that can beat humans at driving simulator; policy journal devotes issue to racial equity in healthcare; fighting smartphone addiction to boost scientific productivity; scientists not always equipped for social media furor; Califf returns to FDA leadership post; transgenic zebrafish on the loose in Brazil; much more:
AI Health Friday Roundup
In today’s Roundup: assessing algorithmic impact for healthcare; nerve stimulation to treat paralysis; teaching robots to generalize; extending sleep linked to reduced caloric intake; countering quantum hackers; making space for compassionate care; reconsidering the toll from the Black Death in medieval Europe; more weirdness from the Burgess Shale; when caregivers are machines; spotlight on ad targeting and data sharing practices; much more:
AI Health Friday Roundup
In today’s Roundup: Crisis helpline passes data to for-profit spinoff; the ethics of visual representations of AI; COVID’s toll on kids in sub-Saharan Africa; Algorithmic Accountability Act introduced; untangling tau protein; “fingerprinting” for journal PDFs; new nonprofit clinical trials org launches; the long half-life of problematic datasets; cybercriminals benefit from lax attitudes toward data protection; countering buggy scientific programming; much more:
AI Health Friday Roundup
In today’s Roundup: tracking years of work in medical AI and machine learning; considering data ethics for mathematicians; neurological consequences of COVID; new antivirals will be needed for COVID in future; OpenAlex debuts research database; digital medicine and targeted ads; practice-based research networks struggle in COVID’s wake; inclusivity and bias in human-machine interactions; AI forays into breakfast cereal; much more:
AI Health Friday Roundup
In today’s AI Health Roundup: Black patients more likely to have stigmatizing descriptions in EHR notes; Office of the National Coordinator debuts Trusted Exchange Framework; global toll of COVID likely undercounts deaths; impact of “nocebo effect” on reported adverse events in COVID trials; world’s children still face dire health impacts from lead; trash piles up as Omicron spreads among sanitation workers; links between eviction and Medicaid disenrollment; much more:
AI Health Friday Roundup
In today’s Roundup: digital phenotyping with patient-generated data; Epstein-Barr virus role in multiple sclerosis; COVID vaccination effort stalls in younger kids; “growing pains” for arXiv preprint server; pig heart transplantation raises ethical issues; unpacking Medicare coverage decision for Alzheimer’s medication; digital literacy not the only factor in sharing of misinformation; much more.
AI Health Friday Roundup
In today’s Roundup: looking ahead to the next pandemic; best data visualizations of 2021; health AI for the Global South; meta-analysis sharpens focus on ‘long COVID’; diet, gut microflora, and immunotherapy; sorting through Web3 hype; despite progress, chatbots still go off the rails; flattery for dictator still enshrined in scientific literature; the state of scientific peer review in 2021; much more.
AI Health Roundup – Looking Back on 2021
Well, it’s 2022, and we’re already running a bit behind. Nevertheless, here is an entirely subjective selection of Roundup items from 2021 that caught our eye, raised our eyebrows, or made us stop and think awhile. We hope you’ll enjoy them as well.
Thanks for reading, and here’s hoping for a better 2022.
AI Health Friday Roundup
In today’s Roundup: Spread of omicron variant may make for a gloomy winter; the ethics of exporting AI models; large study examines cardiovascular side effects from COVID, vaccines; abandoning traditional publishing for preprints; deciding authorship position with videogame duels; transparent jellyfish open window on neurobiology; ditching systematic reviews for something faster; Senate committee meets to consider Califf FDA nomination; more people skimping on medical care due to cost; much more.
Forge AI Health Friday Roundup
In today’s Roundup: introducing simulation intelligence; replication project for cancer studies has hard time getting data; DeepMind makes splash with compact language transformer; survey bias overestimated vaccine uptake; pandemic takes toll on nation’s blood pressure; synthetic embryos raise thorny questions; study finds no benefit from Medicare Advantage bonus program; what ethnography can tell us about the reproducibility crisis; much more.
AI Health Friday Roundup
In today’s Roundup: federated learning on the Internet of Things; recognizing cells via barcodes; favoritism in scientific publishing; calling for better BIPOC representation in health data; building an evidence base to fight health misinformation; ethical complications for large population genetics datasets; survey indicates growing burnout among scientists; closing the global gap in COVID vaccination; much more.
AI Health Friday Roundup
In today’s Roundup: healthcare professionals buckling under the strain of a second pandemic year; unintended consequences from health apps; GPT-3 livens up software error messages; worrying rise in COVID cases ahead of holidays; reimagining diagnostic excellence; school nurses exhausted; second patient found to have naturally cleared HIV; just how much we owe peer reviewers; Califf tapped to head FDA for second time; much more.
Forge AI Health Friday Roundup
In today’s Roundup: specially engineered bacteria solve mazes; checking up on the Delphi project’s “machine ethics”; white-tailed deer may be a reservoir for COVID; the cardiovascular toll of pollution; Surgeon General releases primer on countering health misinformation; COVID upends scientific career paths; how surveillance erodes community; rethinking risk and our response to it; getting a handle on sensor-generated health data; much more.
Forge AI Health Friday Roundup
In today’s Roundup: confronting AI applications that discriminate by appearance; strong showing for experimental oral COVID therapy; health burdens of air pollution may be worse than thought; HPV vaccine quashes cervical cancer in England; machine learning meets microscopy; alarming attrition among lab staff; a theory of justice for AI; much more.
Forge AI Health Friday Roundup
In today’s Roundup: COVID vaccination for kids draws closer; AI unleashed on hypothesis creation; Facebook faces criticism, moves into “metaverse”; modeling study sheds light on early COVID transmission; bracing for the next variant; scientists wade into the public discourse; evaluating the effects of “open” peer review; the imperative for pharmacoequity; trial finds SSRI antidepressant is effective in helping to avoid COVID hospitalization; much more.
Forge AI Health Friday Roundup
In today’s Roundup: the pitfalls of oracular AI; climate change and its impact on almost every aspect of human health; big data, small data, and future directions for machine learning; the limits of tech whistleblowing; the effects of redacting identifying information on NIH grants; “universal animals” illuminate links between embodiment and intelligence; dataviz considered as superpowers; returning narrative to scientific publishing; much more.
Forge AI Health Friday Roundup
In this Roundup: COVID’s impact on “fly-in” medical missions; alarm and debate over FHIR hacking report; real-world AI study finds “negligible” tradeoff between fairness, accuracy; breast cancer poses greater risks for Black women; seeking clarity on ivermectin; convolutional neural networks gaining ground in facial recognition; mixing COVID vaccines and boosters; FDA seeks lower sodium levels; developing trustworthy AI; NISO seeks to make paper retraction more visible; much more.