AI Health Roundup

Looking Back on 2021

Well, it’s 2022, and we’re already running a bit behind. Nevertheless, here is an entirely subjective selection of Roundup items from 2021 that caught our eye, raised our eyebrows, or made us stop and think awhile. We hope you’ll enjoy them as well.

Thanks for reading, and here’s hoping for a better 2022.

January 4, 2022

We rang in 2021 with an examination of algorithmic uncertainty, the virality of health misinformation, Americans’ turn toward crowdfunding to bear medical expenses, the use of gene drives in conservation, the appeal of conspiracy theories, the health burden of air pollution, and the context shaping Black patients’ levels of trust in the healthcare system.

January

Photograph of a person in silhouette against a brightly lit abstract light display.
Image credit: Fernand De Canne/Unsplash
  • “Effective quantification and communication of uncertainty could help to engender trust with healthcare workers, while providing safeguards against known failure modes of current machine learning approaches. As machine learning becomes further integrated into healthcare environments, the ability to say “I’m not sure” or “I don’t know” when uncertain is a necessary capability to enable safe clinical deployment.” A perspective article by Kompa, Snoek, and Beam published this week in NPJ Digital Medicine tackles one of the more vexing (and potentially dangerous) shortcomings of algorithmic outputs in the healthcare domain – the lack of any indication of the degree of uncertainty that accompanies the algorithmically generated values. (For a toy illustration of what “abstaining when unsure” can look like in code, see the sketch at the end of this list.)
  • “[Fighting health misinformation] isn’t the job of any one clinician to do alone. It also is an opportunity for our health care organizations to think about different roles that they could develop and different opportunities that they might have. One of those is to think about ways that they might be able to use tools to better monitor the information environments that their patients encounter.” In conversation with JAMA Medical News’ Jennifer Abbasi, communication science expert Brian Southwell talks about the viral spread of health misinformation amid the COVID-19 pandemic and shares lessons from his recently launched training workshop at the Duke University School of Medicine, where he is providing tools to help physicians deal effectively and compassionately with misinformation.
  • “In a world of synthetic gene drives, the border between the human and the natural, between the laboratory and the wild, already deeply blurred, all but dissolves. In such a world, not only do people determine the conditions under which evolution is taking place, people can—again, in principle—determine the outcome.” In a fascinating piece for The New Yorker, Elizabeth Kolbert narrates how she learned to use CRISPR gene-editing technology and underlines the potential and dilemmas of using CRISPR gene drives in conservation biology.
  • “From May 2010 through December 2018, more than $10 billion was sought through online medical fundraisers in the US, with more than $3 billion raised. Cancer represented the most common medical condition for which funding was sought, followed by trauma/injury.” A research article published in JAMA Network Open by Angraal and colleagues provides a sharper picture of just how much Americans are relying on crowdfunding to cover medical expenses.
  • “One reason that conspiracy theories find fertile ground in the human mind has to do with epistemology — the philosophy of how we know what we know (or think we do). Because any individual can know only a tiny sliver of the world firsthand, we have no choice but to accept a great deal of information we can’t verify for ourselves….The assumptions and cognitive shortcuts we use to decide what’s true make sense most of the time, but they also leave the door open for bad information, including conspiracy theories.” Conspiracy theories – many of which were on full view over the past few weeks – often seem completely ludicrous to those outside the bubble. So why do they persist and grow? At Knowable, Greg Miller explores the possible reasons that the appeal of conspiracy theories remains so stubbornly intractable (H/T @charlesweijer).
  • “A considerable proportion of premature deaths in European cities could be avoided annually by lowering air pollution concentrations, particularly below WHO guidelines. The mortality burden varied considerably between European cities, indicating where policy actions are more urgently needed to reduce air pollution and achieve sustainable, liveable, and healthy communities.” A study published in Lancet Planetary Health by Khomenko and colleagues provides an estimate of the health burden inflicted by air pollution in major European cities (H/T @califf001).
  • In a trenchant essay at the New England Journal of Medicine, Simar Singh Bajaj and Fatima Cody Stanford explain why explanations for African American mistrust of the US medical and research systems that default to invocations of Tuskegee are missing some important – and more immediate – context: “…attributing distrust primarily to these instances ignores the everyday racism that Black communities face. Every day, Black Americans have their pain denied, their conditions misdiagnosed, and necessary treatment withheld by physicians. In these moments, those patients are probably not historicizing their frustration by recalling Tuskegee, but rather contemplating how an institution sworn to do no harm has failed them.”
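
For the curious, here is a minimal sketch of the “abstain when unsure” idea raised in the Kompa, Snoek, and Beam piece above. It is only an illustration, not the authors’ method: the synthetic data, logistic model, and 0.8 confidence cutoff are all assumptions made for this example.

```python
# A minimal sketch (not Kompa et al.'s method) of "selective prediction":
# the model abstains ("I don't know") whenever its top predicted probability
# falls below a confidence threshold. The data, model, and 0.8 cutoff are
# arbitrary assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                        # synthetic "patient features"
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X[:400], y[:400])   # train on the first 400 rows
probs = model.predict_proba(X[400:])                 # class probabilities for the rest

THRESHOLD = 0.8                                      # assumed confidence cutoff
for p in probs[:5]:
    if p.max() < THRESHOLD:
        print(f"confidence {p.max():.2f} -> abstain: I don't know")
    else:
        print(f"confidence {p.max():.2f} -> predict class {p.argmax()}")
```

Richer approaches estimate uncertainty more directly (with ensembles or Bayesian methods, for example), but even a crude abstention rule shows what an algorithmic “I don’t know” can look like in practice.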

February saw the FDA trying to manage an avalanche of AI-based health apps, increasing scrutiny of digital health tools, an initiative aimed at gathering evidence to guide school safety and policy during the COVID pandemic, an interview with the scientist who helped create the Moderna vaccine, a heart-wrenching look at the toll of Alzheimer’s disease, and an article warning of viral variants to come.

February

Colorful heart shape drawn with crushed chalk.
Image credit: Sharon McCutcheon/Pexels
  • “What emerged was an uneven patchwork of sample sizes and methods that more resembles the building of a frontier town, with hastily bolted-on porches and uneven roof lines, than a standardized approach to assuring safety, efficacy, and fairness. The upshot, physicians and health data experts said, is that companies fresh with FDA approvals are pitching their devices to patients and doctors who know very little about whether they will work or how they might affect the cost and quality of care.” A lengthy exploration of the state of play in health AI regulation by STAT News’ Casey Ross highlights how the US Food and Drug Administration is struggling to keep pace with the flood of new AI-based health applications and a relative paucity of data to support how (or if) they’ll work in practice.
  • “Even his increasingly rare moments of clarity and awareness reveal the depths of his debility. At one point, as Susan and I stood chatting, he looked up suddenly from the book in his lap and, flashing that familiar smile, asked me in his soft, sueded whisper, ‘How’s the weather outside?’ Had I not known that he and Susan had just returned from walking their dog in the park, I might not have suspected that anything was amiss.” You may want to have some Kleenex handy for John Colapinto’s beautiful, heart-wrenching AARP profile of legendary singer Tony Bennett, whose Alzheimer disease diagnosis has been revealed – or perhaps “shared” might be a better word – by his closest family members.
  • “…understanding the digital medicine community, the sensors and file types commonly used, and perspectives on interoperability will enable the development and implementation of solutions to the critical interoperability needs in digital medicine.” In a survey article published in JMIR mHealth and uHealth, Bent and colleagues perform a wide-angle scan of the digital health community that seeks to characterize the landscape of digital health professionals and the tools they use.
  • “When it comes to vaccine distribution, the elderly parent who is slow in getting to the right website, the immigrant who is confused by the dashboard because he doesn’t read English well, the low-wage worker who can’t afford a computer or smartphone and the farmworker who has no high-speed internet connection are all shut out of opportunities to access a vaccine that can literally save their lives.” In an opinion article for The Hill, Ranit Mishori sheds light on how the seemingly basic issues of internet connectivity and access are threatening to slow vaccine distribution in many communities across the United States. 
  • “A national public health initiative funded by the NIH, the collaborative has three aims: to provide science-based educational resources to schools, to help interpret the guidelines for on-the-ground implementation, and to support schools in tracking symptoms, exposure, and testing via a customized ABC Science Collaborative app. Additionally, collaborative members are working to create the first-ever national clinical research registry that will be used to study the impact of COVID-19 on children, focusing on quality of life research.” A profile by Kaitlin Jansen at the Duke School of Medicine’s Magnify blog examines a project – the ABC Science Collaborative – kicked off by two Duke pediatrician-researchers, Kanecia Zimmerman and Danny Benjamin, that was designed to provide health departments, educators, and other policymakers across North Carolina with actionable information about COVID that would allow them to make evidence-based decisions about school reopenings.
  • “For a long time, we left the general public on the outside of vaccine development, until it was time to give them their shot. And that’s just unacceptable. I can’t even blame anyone for being sceptical about this, because they don’t have any idea what went into it. So, our goal is to inform people. It’s very helpful for people to feel like they’re part of something.” Nature’s Nidhi Subbaraman interviews Kizzmekia Corbett, the NIH immunologist who was instrumental in the creation of the Moderna COVID vaccine.
  • “Scientists anticipate that coronaviruses will converge on more mutations that give them an advantage — against not only other viruses but also our own immune system. But Vaughn Cooper, an evolutionary biologist at the University of Pittsburgh and a co-author of the new study, said lab experiments alone wouldn’t be able to reveal the extent of the threat.” An article by Carl Zimmer in the Sunday New York Times describes the current state of play concerning the 7 known new coronavirus variants spreading across the US that appear to have arisen in North America (distinct from variants from the United Kingdom and South Africa that have also been causing concern).
  • “This cross-sectional study found that among US-based vaccine clinical trials, members of racial/ethnic minority groups and older adults were underrepresented, whereas female adults were overrepresented. These findings suggest that diversity enrollment targets should be included for all vaccine trials targeting epidemiologically important infections.” A new research article by Flores and colleagues published in JAMA Network Open underscores the urgency of ensuring that clinical trials are enrolling diverse groups of participants – something that many trials are not reporting. An article at STAT News by Nicholas St. Fleur has additional perspective and analysis (H/T @eperakslis).
  • “DPLA contains a rich trove of historical artifacts that tell the stories of Black women’s leadership in the Suffrage Movement. Yet, like many libraries and archives in the United States, the majority of materials in DPLA are by and about White people and men. Surfacing materials about Black women is no small challenge.” A wonderful series of four blog posts by Audrey Altman at the Digital Public Library of America describes the thoughtful approach that the DPLA employed to make sure that an algorithm designed to locate and retrieve collection artifacts pertaining to the history of Black women’s suffrage was not compromised by the same kinds of biases that all too often caused those materials to be scanted in library collections in the first place.

In March, we saw emerging signs of burnout in academia, pondered a new data modernization plan for the FDA, examined the effectiveness of mask mandates, grew some artificial blastocysts, were introduced to “spatial transcriptomics,” encountered a mini-course in AI ethics, and took a look at the state of COVID data reporting nationwide.

March

Low-angle photograph of a branch of a blossoming dogwood tree against blue sky and white clouds.
Image credit: Joe Dudeck/Unsplash
  • “Legions of professors are hitting the wall in their own ways. For some, the problem has been a crushing workload combined with child-care challenges. For others, it’s a feeling that their institution expects them to be counselors and ed-tech experts on top of their regular responsibilities, even if it means working seven days a week…Their responsibilities as teachers are causing many of them to feel pressed to meet the needs of the moment.” At The Chronicle of Higher Education, Beth McMurtrie shares the plight of professors and instructors on the brink of burnout due to the stresses and constraints of the COVID-19 pandemic.
  • “…modernizing the FDA’s approach to data isn’t just about preparing for those futuristic applications. ‘We need to think in both directions,’ said Abernethy, including ‘getting the stuff we have already ready to use.’ That stuff, specifically, is PDFs — piles and piles of PDFs.” STAT News’ Katie Palmer reports on the FDA’s newly released Data Modernization Action Plan, which is designed to overhaul antiquated systems at the agency that have sequestered potentially useful data in relatively inaccessible formats.
  • “Mask mandates were associated with statistically significant decreases in county-level daily COVID-19 case and death growth rates within 20 days of implementation. Allowing on-premises restaurant dining was associated with increases in county-level case and death growth rates within 41–80 days after reopening. State mask mandates and prohibiting on-premises dining at restaurants help limit potential exposure to SARS-CoV-2, reducing community transmission of COVID-19.” A newly released paper by Guy and colleagues in the CDC’s Morbidity and Mortality Weekly Report examines associations between masking mandates, on-site restaurant dining, and rates of COVID cases and deaths.
  • “Data might seem like an overly technical obsession, an oddly nerdy scapegoat on which to hang the deaths of half a million Americans. But data are how our leaders apprehend reality. In a sense, data are the federal government’s reality. As a gap opened between the data that leaders imagined should exist and the data that actually did exist, it swallowed the country’s pandemic planning and response.” The Atlantic’s Robinson Meyer and Alexis C. Madrigal – who created the magazine’s “Covid Tracking Project” to fill an information vacuum in the public sphere – dissect the shortcomings of US data tracking efforts in the COVID pandemic, and explain that while things have improved somewhat, the national data infrastructure is still not equal to another pandemic-sized challenge without further reform.
  • “With great power comes great responsibility, and thus reflections on the ethics of Artificial Intelligence are just as important as the development of actual technology and implementations. This Deep Dive gives you an overview on the state of AI Ethics.” HIIG Berlin researcher and AI expert Anna Jobin has assembled a mini-course that provides a primer on different aspects of applied ethics in artificial intelligence.
  • “Because blastocysts grown in the laboratory from human stem cells differ from human embryos, they might avoid some of the ethical limits on human embryo research and could increase access to this type of research, scientists say. They do not expect the new blastocyst-like structures to have the ability to develop into a complete embryo.” Nature’s Nidhi Subbaraman covers a slew of papers (two published in Nature this week; two others available as preprints from bioRxiv) that describe different methods that arrived at the same result: an artificially created cellular model of the early stage of human embryonic development known as a blastocyst.
  • “Until recently, scientists wanting to know all the genes at work in a tissue could analyze single cells without knowing their position, or they could measure average activity levels of genes across thousands of cells. Now, an emerging technology called spatial transcriptomics combines precision and breadth, mapping the work of thousands of genes in individual cells at pinpoint locations in tissue.” At Science, Elizabeth Pennisi reports on the “total game changer” of spatial transcriptomics, which allows a new window for researchers to understand the actual mechanisms of gene activity within living systems.

In April, we witnessed the worries of retail workers confronting health risks as mask mandates expired, considered the pitfalls of unregulated facial recognition technologies, confronted the specter of grief among children who had lost a close family member to COVID, weighed the growing risks of cyberattacks, considered the possibilities of fine-tuning clinical trial eligibility criteria, and bid a strangely fond farewell to one of the nation’s most infamous airline gates.

April

Closeup photo of a mylar balloon printed with a "grimacing" emoji.
Image credit: Bernard Hermant via Unsplash
  • “For many people who work in retail, especially grocery stores and big-box chains, the mask repeals are another example of how little protection and appreciation they have received during the pandemic. …Grocery employees were not initially given priority for vaccinations in most states, even as health experts cautioned the public to limit time in grocery stores because of the risk posed by new coronavirus variants.” The New York Times’ Sapna Maheshwari documents the apprehension felt by retail workers – already shouldering disproportionate loads during the pandemic – as some states drop mask mandates for customers.
  • “Gate 35X was just a bus station. In an airport….Except, somehow, it was more than that. It was a funnel, a choke point, a cattle call. One gate, as many as 6,000 travelers per day. The ceilings were lower. The seats were all taken, as were the electrical outlets. There was no bathroom down there, no vending machine, no water fountain. Dante’s circles were over-invoked. The complaining was olympic.” Another cause for rejoicing: as the Washington Post’s Dan Zak reports, the nation’s worst airport gate, the notorious 35X at Washington National, is being replaced this month.
  • “Countries around the world have regulations to enforce scientific rigour in developing medicines that treat the body. Tools that make claims about our minds should be afforded at least the same protection. For years, scholars have called for federal entities to regulate robotics and facial recognition; that should extend to emotion recognition, too. It is time for national regulatory agencies to guard against unproven applications, especially those targeting children and other vulnerable populations.” In a viewpoint article published in Nature, AI expert Kate Crawford argues that it’s time to get a handle on the current free-for-all in machine learning applications that purport to identify and interpret human emotions via facial recognition technologies.
  • “The number of children experiencing a parent dying of COVID-19 is staggering, with an estimated 37 300 to 43 000 already affected….Sweeping national reforms are needed to address the health, educational, and economic fallout affecting children. Parentally bereaved children will also need targeted support to help with grief, particularly during this period of heightened social isolation.” A research letter published this week by Kidman and colleagues in JAMA Pediatrics confronts a particularly grim reality of the COVID pandemic – that numerous children have lost parents to the disease.
  • “Our analyses reveal that many common criteria, including exclusions based on several laboratory values, had a minimal effect on the trial hazard ratios. When we used a data-driven approach to broaden restrictive criteria, the pool of eligible patients more than doubled on average and the hazard ratio of the overall survival decreased by an average of 0.05. This suggests that many patients who were not eligible under the original trial criteria could potentially benefit from the treatments.” A research article published in Nature by Liu and colleagues describes Trial Pathfinder, a data analysis tool designed to allow researchers to fine-tune participant eligibility criteria in cancer clinical trials using data from Flatiron’s EHR-derived database to conduct simulations. STAT News’ Katie Palmer also reports on the study. (A toy illustration of eligibility-criteria broadening appears after this list.)
  • “The lead author of the new study—a cave researcher himself—used to gravitate toward jargon as well. At the beginning of his career, Alejandro Martínez peppered his papers with “fancy words,” he says, because that’s what others in his field did and he thought it would impress his colleagues. But, he continues, his work wasn’t getting cited.” An article by Science’s Katie Langin describes findings from a new paper by Martinez and Mammola published in Proceedings of the Royal Society B that show an inverse relationship between the frequency of a scientific paper’s citation and the amount of specialized technical jargon it contains.
  • “The campuses are part of an escalating number of extortion and ransomware attacks the FBI has been tracking since March 2020, when the Covid-19 pandemic took hold in the U.S. Cybercriminals have taken advantage of the unique circumstances of the pandemic to double down on their demands.” At the Chronicle of Higher Education, Katherine Mangan describes an uptick in concern over cyberdefense at US universities as hacking attempts increase in volume and sophistication. (For a quick primer in cyberdefense, check out DCRI Chief Information Officer and former FDA infosec chief Eric Perakslis’ blog on the topic.)
  • “Hundreds of medical staff and public-health scientists across the world have reported a barrage of abuse, attacks and serious threats during the pandemic. The reports are the tip of an iceberg that observers say is poorly understood. Their stories reflect a tense relationship between science, media, politics and the public; when that relationship turns toxic, the cost is not only personal.” An article by Anita Makri published this week in Nature Medicine describes the burdens – including the threat of physical violence – that public health researchers have borne during the pandemic.
  • “Because the government gave plasma to so many patients outside of a controlled clinical trial, it took a long time to measure its effectiveness. Eventually, studies did emerge to suggest that under the right conditions, plasma might help. But enough evidence has now accumulated to show that the country’s broad, costly plasma campaign had little effect, especially in people whose disease was advanced enough to land them in the hospital.” The New York Times’ Katie Thomas and Noah Weiland report on the aftermath of investments made by the federal government to make so-called convalescent plasma available to patients during the earlier days of the COVID pandemic.
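
As promised in the Trial Pathfinder item above, here is a toy illustration of the core maneuver: re-run eligibility filters over patient records and watch how the eligible pool changes. Everything here – column names, thresholds, synthetic data – is an invented assumption; the real analyses used Flatiron’s EHR-derived data and evaluated survival outcomes, not just counts.

```python
# A toy illustration of the idea behind in-silico eligibility analysis,
# not Trial Pathfinder itself: apply strict vs. relaxed eligibility filters
# to synthetic patient records and compare the size of the eligible pool.
# Column names, thresholds, and data are all invented for this sketch.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
patients = pd.DataFrame({
    "creatinine_mg_dl": rng.lognormal(mean=0.1, sigma=0.4, size=1000),
    "ecog_status": rng.integers(0, 4, size=1000),    # performance status 0-3
})

strict = patients[(patients.creatinine_mg_dl <= 1.2) & (patients.ecog_status <= 1)]
relaxed = patients[(patients.creatinine_mg_dl <= 1.5) & (patients.ecog_status <= 2)]

print(f"eligible under strict criteria:  {len(strict)}")
print(f"eligible under relaxed criteria: {len(relaxed)}")
```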

In May, we took a deeper look at the datasets undergirding regulatory decisions about medical AI applications, read about efforts to more efficiently evaluate research claims, considered how the healthcare system could be transformed, confronted the threat to knowledge presented by ubiquitous linkrot, and saw long-awaited results from a trial of aspirin dosing for the prevention of myocardial infarction.

May

Closeup photograph of red poppies in bloom.
Image credit: Monica Galentino/Unsplash
  • “The path to safe and robust clinical AI requires that important regulatory questions be addressed. Are medical devices able to demonstrate performance that can be generalized to the entire intended population? Are commonly faced shortcomings of AI (overfitting to training data, vulnerability to data shifts, and bias against underrepresented patient subgroups) adequately quantified and addressed?” An article published last month in Nature Medicine by Wu and colleagues presents the key findings from an exhaustive analysis of how medical AI applications are being reviewed and approved by the Food and Drug Administration, in which they identify some potential weaknesses, including a reliance on retrospective study data and a lack, in some cases, of salient details. An article by Elise Reuter at MedCityNews provides additional context.
  • “The SCORE program is creating and validating algorithms to provide confidence scores for research claims at scale. To investigate the viability of scalable tools, teams are creating: a database of claims from papers in the social and behavioral sciences; expert and machine generated estimates of credibility; and, evidence of reproducibility, robustness, and replicability to validate the estimates.” The SCORE Collaboration has published a preprint paper at SocArXiv describing their effort to create an efficient, reliable means for assessing the strength of research claims in ways that would improve upon some of the intensely laborious and time-consuming processes that are currently employed (H/T @BrianNosek).
  • A special, multi-author series published at Health Affairs Blog takes a look at the transformation of the current clinical trials enterprise from multiple angles. Included in the series are essays on the need for strategic approaches to inclusivity, the potential and challenges presented by the incorporation of new, patient-centered technologies, and the importance of communication and transparency in fostering a more inclusive clinical research system.
  • “With the pandemic now deep into its second year, it’s clear the crisis has exposed major weaknesses in the production and use of research-based evidence — failures that have inevitably cost lives. Researchers have registered more than 2,900 clinical trials related to COVID-19, but the majority are too small or poorly designed to be of much use (see ‘Small samples’). Organizations worldwide have scrambled to synthesize the available evidence on drugs, masks and other key issues, but can’t keep up with the outpouring of new research, and often repeat others’ work.” A Nature feature article by Helen Pearson takes a long, hard look at the performance of the global system for generating medical evidence during the COVID pandemic, and finds much to praise – and much to lament (H/T @Arrianna_Planey).
  • “In this pragmatic trial involving patients with established cardiovascular disease, there was substantial dose switching to 81 mg of daily aspirin and no significant differences in cardiovascular events or major bleeding between patients assigned to 81 mg and those assigned to 325 mg of aspirin daily.” The landmark ADAPTABLE clinical trial – the first large pragmatic controlled trial undertaken by the Patient-Centered Outcomes Research Institute – has announced the results of its investigation into the optimal dosing of aspirin for secondary prevention of myocardial infarction at the American College of Cardiology conference and in a report by Jones and colleagues published in the New England Journal of Medicine.
  • “We define, identify, and present empirical evidence on Data Cascades—compounding events causing negative, downstream effects from data issues—triggered by conventional AI/ML practices that undervalue data quality. Data cascades are pervasive (92% prevalence), invisible, delayed, but often avoidable. We discuss HCI opportunities in designing and incentivizing data excellence as a first-class citizen of AI, resulting in safer and more robust systems for all.” A fantastic preprint paper by Sambasivan and colleagues, presented at this May’s ACM CHI conference, turns a sociological lens on a major problem besetting the progress of machine learning – insufficient investment in ensuring the quality of the upstream data (H/T @Marshallk).
  • “Of these deep links, 25 percent of all links were completely inaccessible. Linkrot became more common over time: 6 percent of links from 2018 had rotted, as compared to 43 percent of links from 2008 and 72 percent of links from 1998. Fifty-three percent of all articles that contained deep links had at least one rotted link.”  An analysis presented by John Bowers, Clare Stanton, and Jonathan Zittrain at Columbia Journalism Review shines a light on the problem of “linkrot,” whereby online resources and references become harder to find as the URL links to them stop working. A long-standing vexation in the digital era, linkrot is becoming both more pervasive and more serious in its consequences as more and more media is accessed primarily through the internet.
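
Readers can probe linkrot for themselves with a few lines of code. The sketch below issues a HEAD request for each URL in a list and flags the ones that fail; the URLs are placeholders, and a serious audit like the one described above would also log redirects, retry transient failures, and consult web archives for rotted links.

```python
# A bare-bones link-rot checker in the spirit of the CJR analysis: issue a
# HEAD request for each URL and flag failures. The URLs are placeholders;
# a real audit would add redirect logging, retries, and archive lookups.
import urllib.error
import urllib.request

urls = [
    "https://example.com/",              # placeholder links to audit
    "https://example.com/missing-page",
]

for url in urls:
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(f"OK   {resp.status}  {url}")
    except (urllib.error.HTTPError, urllib.error.URLError) as err:
        print(f"ROT  {err}  {url}")
```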

The month of June saw a “credibility crisis” for machine learning, a gathering wave of pandemic grief, a widening gap in mortality between urban and rural populations, a randomized trial that used mosquitoes infected with Wolbachia bacteria to fight dengue fever, and a side trip down the history of data visualizations.

June

Closeup photo of a rack of drives in a computer server.
Image credit: Panumas/Pexels
  • “Machine learning, a subset of AI driving billions of dollars of investment in the field of medicine, is facing a credibility crisis. An ever-growing list of papers rely on limited or low-quality data, fail to specify their training approach and statistical methods, and don’t test whether they will work for people of different races, genders, ages, and geographies….These shortcomings arise from an array of systematic challenges in machine learning research.” STAT News’ Casey Ross dives deep into the emerging “credibility crisis” for machine learning applications in healthcare, even as AI-powered tools are seeing increasingly widespread use in clinical settings.
  • “The scale and complexity of pandemic-related grief have created a public health burden that could deplete Americans’ physical and mental health for years, leading to more depression, substance misuse, suicidal thinking, sleep disturbances, heart disease, cancer, high blood pressure and impaired immune function.” As the numbers of vaccinated persons rise and the number of daily cases continues to fall in the United States, it may seem that the country has turned the corner on the pandemic. But as Liz Szabo’s article for Kaiser Health News makes clear, a wave of grief for the people lost to the disease is still gathering.
  • “Rural residents experienced greater mortality and the disparity between rural and large metropolitan areas tripled from 1999 to 2019. Even though there were reductions in AAMRs [age-adjusted mortality rates] for all ages, there was a 12.1% increase in the AAMR for rural residents aged 25 to 64 years, which was driven by an increasing AAMR among non-Hispanic White people. However, non-Hispanic Black people had greater AAMRs across all 3 US Census–categorized areas than all other racial/ethnic groups.” A research letter published this week in JAMA by Cross, Califf, and Warraich points out an ominous widening of the gap in life expectancy between urban and rural populations in the United States.
  • “A strategy for fighting dengue fever with bacteria-armed mosquitoes has passed its most rigorous test yet: a large, randomized, controlled trial. Researchers reported today dramatic reductions in rates of dengue infection and hospitalization in areas of an Indonesian city where the disease-fighting mosquitoes were released. The team expects the World Health Organization (WHO) to formally recommend the approach for broader use.” For some time now, scientists have been testing the idea that infecting mosquito populations with a particular bacterium (Wolbachia pipientis) could impede the spread of some mosquito-borne illnesses, including dengue fever. Science’s Kelly Servick reports on the encouraging findings from a large randomized trial of the intervention, which may pave the way for introducing Wolbachia-infected mosquitoes as a public health measure (H/T @ArthurCaplan).
  • “These are scatter plots that no one ever needs to see. They exist in vast number arrays on the hard drives of powerful computers, turned and manipulated as though the distances between the imagined dots were real. Data visualization has progressed from a means of making things tractable and comprehensible on the page to an automated hunt for clusters and connections, with trained machines that do the searching. Patterns still emerge and drive our understanding of the world forward, even if they are no longer visible to the human eye.” At the New Yorker, Hannah Fry takes readers back to a time when now-ubiquitous ways of displaying data, such as plotting a time series or pie chart, were utterly novel, and looks at the literally world-changing impact of data visualization.

In July, we featured articles on the rising mental health toll among public health workers, the difficulty of teaching machines to detect sarcasm, the cognitive biases that can affect work in AI (and also the interpretation of that work), the thinning out of emergency services in rural areas, dissension over the future path for web governance, and the discovery of “Borg” DNA strands in a species of archaebacteria that eagerly assimilate genes from elsewhere.

July

Closeup of two hands holding lit sparklers against a dark background, with sunset visible in the distance.
Image credit: Ian Schneider/Unsplash
  • “During the COVID-19 pandemic, public health workers have experienced symptoms of depression, anxiety, PTSD, and suicidal ideation. Addressing work practices that contribute to stress and trauma is critical to managing workers’ adverse mental health status during emergency responses.” A new paper published by Bryant-Genevier and colleagues in the CDC’s Morbidity and Mortality Weekly Report examines mental health issues reported by the nation’s public health workers during March-April of this year.
  • “In rural America, it’s increasingly difficult for ambulance services to respond to emergencies like Greyn’s. One factor is that emergency medical services are struggling to find young volunteers to replace retiring EMTs. Another is a growing financial crisis among rural volunteer EMS agencies: A third of them are at risk because they can’t cover their operating costs.” A story at Kaiser Health News by Aaron Bolton illuminates a growing problem for rural communities, many of which have seen local hospitals close amid waves of consolidation: the ambulance services that are becoming ever more critical for transporting residents for urgent medical care are becoming more expensive to maintain, and the pool of volunteers that staff those services are aging – and not enough new volunteers are replacing them.
  • “Sarcasm detection is the task of identifying irony containing utterances in sentiment-bearing text. However, the figurative and creative nature of sarcasm poses a great challenge for affective computing systems performing sentiment analysis. This article compiles and reviews the salient work in the literature of automatic sarcasm detection.” “Automatic Sarcasm Detection” sounds like something you’d find affixed to a prop in the Adam West version of Batman, but not only is it a real endeavor in artificial intelligence, it’s a genuinely challenging task for AI systems. A preprint review article by Yaghoobian and colleagues available from arXiv surveys current approaches to the problem.
  • “By watching a video of a robot hand-solving Rubik’s Cube at OpenAI, an AI research lab, we think that the AI can perform all other simpler tasks because it can perform such a complex one. We overlook the fact that this AI’s neural network was only trained for a limited type of task; solving the Rubik’s Cube in that configuration. If the situation changes—for example, holding the cube upside down while manipulating it—the algorithm does not work as well as might be expected.” An article at IEEE Spectrum by MIT roboticist Sangbae Kim explores the cognitive biases that permeate the fields of AI and robotics research – and that greatly influence the public narratives around these technologies.
  • “These extra-long DNA strands, which the scientists named in honour of the aliens, join a diverse collection of genetic structures — circular plasmids, for example — known as extrachromosomal elements (ECEs). Most microbes have one or two chromosomes that encode their primary genetic blueprint. But they can host, and often share between them, many distinct ECEs. These carry non-essential but useful genes, such as those for antibiotic resistance.” Nature’s Amber Dance reports on a recent study by Al-Shayeb and colleagues (available as a preprint from bioRxiv) that describes a potentially new (and remarkably large) extrachromosomal element, informally named “the Borg” due to its assimilationist habits – a strand of DNA, recently identified residing in a species of archaea known as Methanoperedens, that hoovers up genes from surrounding microorganisms.
  • “…lately, that spirit of collaboration has been under intense strain as the W3C has become a key battleground in the war over web privacy. Over the last year, far from the notice of the average consumer or lawmaker, the people who actually make the web run have converged on this niche community of engineers to wrangle over what privacy really means, how the web can be more private in practice and how much power tech giants should have to unilaterally enact this change.” A fascinating essay by Issie Lapowsky at Protocol draws back the curtain on the deliberations of an obscure yet influential group of engineers and computer scientists – the World Wide Web Consortium (W3C) – over contentious issues related to the tracking of user activity across websites.
  • “STAT’s investigation, based on interviews with data scientists, ethics experts, and many of Epic’s largest and most influential clients, underscores the need for extreme caution in using artificial intelligence algorithms to guide the care of patients. Errant alarms may lead to unnecessary care or divert clinicians from treating sicker patients in emergency departments or intensive care units where time and attention are finite resources.” STAT News’ Casey Ross returns to a story that emerged earlier this summer: that widely used predictive algorithms created by the ubiquitous Epic electronic health record vendor may be fundamentally flawed. Duke AI Health’s Michael Pencina is quoted in the article.

August brought articles revealing that the animal world may be more mathematically savvy than previously thought, that spreadsheets can mangle your data, that references to real science can serve as a wedge for pseudoscience, that graph theory has some limits when it comes to describing data, and that CRISPR gene editing may show promise in eliminating mosquito-borne illness.

August

Composed photograph showing books stacked on a desk with an apple on top, next to colored pencils and a stack of colorful blocks displaying the letters a, b, and c.
Image credit: Element5 Digital/Unsplash
  • “Practically every animal that scientists have studied—insects and cephalopods, amphibians and reptiles, birds and mammals—can distinguish between different numbers of objects in a set or sounds in a sequence. They don’t just have a sense of “greater than” or “less than,” but an approximate sense of quantity: that two is distinct from three, that 15 is distinct from 20….Now, researchers are uncovering increasingly more complex numerical abilities in their animal subjects.” At Wired, Jordana Cepelewicz rounds up recent research that suggests the animal world’s capacity for number sense extends well beyond the genus Homo.
  • The spreadsheet strikes again! (and again, and again…): “Embarrassing autocorrect mistakes are common fodder for Internet listicles and Twitter threads. But they are also the bane of geneticists using spreadsheet programs such as Microsoft Excel. Five years after a study showed that autocorrect problems were widespread, the academic literature is still littered with error-riddled spreadsheets, according to an analysis of published gene lists. And the problem may be even worse than previously realized.” Nature’s Dyani Lewis has the story. (A toy scan for date-mangled gene symbols appears after this list.)
  • “We identify two critical determinants of vulnerability to pseudoscience. First, participants who trust science are more likely to believe and disseminate false claims that contain scientific references than false claims that do not. Second, reminding participants of the value of critical evaluation reduces belief in false claims, whereas reminders of the value of trusting science do not.” An interesting social-science paper published by O’Brien and colleagues in the Journal of Experimental Social Psychology finds that embedding references to or citations of legitimate scientific research in pseudoscience can make people more receptive to the latter, even when the recipient espouses trust in science.
  • “Grochow is among a growing chorus of researchers who point out that when it comes to finding connections in big data, graph theory has its limits. A graph represents every relationship as a dyad, or pairwise interaction. However, many complex systems can’t be represented by binary connections alone. Recent progress in the field shows how to move forward.” At Quanta, Stephen Ornes looks at new developments in graph theory that may help sift the enormous datasets of “big data” for meaningful relationships.
  • Blinding me with science, indeed: “For the first time, scientists have used the gene-editing tool Crispr-Cas9 to render humans effectively invisible in the eyes of Aedes aegypti mosquitoes, which use dark visual cues to hunt, according to a paper recently published in the journal Current Biology. By eliminating two of that mosquito’s light-sensing receptors, the researchers knocked out its ability to visually target hosts.” The New York Times’ Sabrina Imbler reports on what may prove to be a significant milestone in the long trek to defeat mosquito-borne diseases.
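
Here is the toy scan promised in the spreadsheet item above. Because Excel silently converts gene symbols such as SEPT2 into “2-Sep” and MARCH1 into “1-Mar” (and sometimes into raw date serial numbers), mangled entries can be caught by looking for date-shaped strings in a gene list. The sample data and regular expression are illustrative assumptions, not the exact method of the published analyses.

```python
# A toy scan for Excel-mangled gene symbols: the autocorrect failure turns
# SEPT2 into "2-Sep", MARCH1 into "1-Mar", and some entries into raw date
# serial numbers. Sample list and regex are illustrative assumptions only.
import re

gene_list = ["TP53", "2-Sep", "BRCA1", "1-Mar", "44256"]   # "44256": a date as a serial number

DATE_LIKE = re.compile(r"^\d{1,2}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$")

for entry in gene_list:
    if DATE_LIKE.match(entry) or entry.isdigit():
        print(f"suspect autocorrect casualty: {entry!r}")

# Prevention: import gene columns as Text in Excel, or keep gene lists in
# plain CSV/TSV files read with explicit string types (e.g., pandas dtype=str).
```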

In September, we sampled articles on the limits of the machine learning technique known as gradient descent, the link between alcohol consumption and the heart arrhythmia known as atrial fibrillation, the emergence of “Zoom dysmorphia,” the surprising secret history of the codex book, the difficulties of replicating scientific studies, the “mortality penalty” suffered by Americans, and the connections between big data, forestry practices, and society.

September

Selective focus photograph showing brown and orange leaves on a tree branch.
Image credit: Timothy Eberly/Unsplash
  • “…despite this widespread usefulness, researchers have never fully understood which situations the algorithm struggles with most. Now, a research paper explains it, establishing that gradient descent, at heart, tackles a fundamentally difficult computational problem. The work’s result places limits on the type of performance researchers can expect from the technique in particular applications.” At Nautilus, Nick Thieme covers recent work that establishes limitations on the applicability of a widely used machine-learning technique, an algorithm known as gradient descent. (A bare-bones worked example of the algorithm follows this list.)
  • “Doctors have long suspected a link between alcohol and atrial fibrillation, but until now, they did not have definitive evidence that alcohol could cause arrhythmias. The new study is among the most rigorous to date: The researchers recruited 100 people with a history of atrial fibrillation and tracked them intensely for four weeks, monitoring their alcohol intake and their cardiac rhythms in real time.” The New York Times’ Anahad O’Connor reports on findings from a study by Marcus and colleagues, recently published in the Annals of Internal Medicine, that used a variety of methods (including electronic devices) to establish a clear link between alcohol consumption and incidents of a kind of heart arrhythmia called atrial fibrillation.
  • “During the pandemic, the fun house mirror of Zoom twisted the images being reflected back to us, and at the same time, although trapped inside, we were still bombarded with edited images on social media and on television. These factors combined had a damaging impact on self-perception, anxiety and mental health – and it’s not going away.” A malaise for our era, certainly: in an article for Wired UK, Amit Katwala introduces readers to “Zoom dysmorphia,” in which unflattering camera representations during videoconference (often due to the distorting effects of camera lens geometry) are having real-world impacts on people’s psychological well-being.
  • A codex is what many of us think of when we think of a “book” – sequential written or printed pages bound between hard covers. But for a very long time, the continuous scroll was the dominant format for written material across much of the world. What changed to make the codex design the eventual winner across Europe? As this Twitter thread by @incunabula explains, the answer involves religion, ham & cheese, linen underwear, and marine snails – among a host of other esoteric factors.
  • “More than 60% of respondents to a 2016 Nature survey said they had tried to repeat other scientists’ experiments and been unable to do so. A poll of members of the American Society for Cell Biologists similarly found that more than 70% had been unable to replicate a published experimental result, with incomplete detail in the original protocol given as the most common explanation.” A feature article at Nature by Monya Baker elucidates some key approaches and tools for ensuring that experimental protocols are sufficiently clear and unambiguous that they can allow others to replicate experimental results.
  • “In general, the world would be far better off if there had been fewer but better Covid clinical trials. I’m all for trying out new ideas – that’s essential, in fact. But try them out for real. Don’t throw something together just because you can, or because you might get lucky, or because you might get a paper out of it one way or another. If you’re going to do research on human beings, you owe it to the subjects of your trial and to the rest of the medical community – and to the rest of the world, in this case – to do it right.” At In the Pipeline, Derek Lowe surveys the proliferation of COVID-related therapeutic clinical trials and asks whether too much is enough, considering the dubious quality of some of the investigations.
  • “We’re a long way from a complete understanding of the American mortality penalty. But these three facts—the superior outcomes of European countries with lower poverty and universal insurance, the equality of European life spans between rich and poor areas, and the decline of the Black-white longevity gap in America coinciding with greater insurance protection and anti-poverty spending—all point to the same conclusion: Our lives and our life spans are more interconnected than you might think.” The Atlantic’s Derek Thompson unpacks the implications of a recently published National Bureau of Economic Research working paper by Schwandt and colleagues that shows that people in the United States, regardless of age cohort, tend to die earlier than people living in European countries – a trend that long predates current downturns in longevity due to COVID, drug overdose, and “diseases of despair.”
  • “Wouldn’t that be grand? An algorithm that could calculate how many trees would atone for the historical and contemporary inequities of urban planning and environmental injustice, that could undo processes of deforestation wrought through centuries of colonial violence, that could heal a landscape destroyed by clear cutting? A dashboard that grants us datafied dominion over all of creation?… Or maybe not. As trees become data points, they are all too readily cast as easy fixes for profound problems.” A truly remarkable essay by Shannon Mattern published in Places Journal explores the forking, branching, and intertwining of forestry, ecology, data science, and human societies (H/T @CatBrinkley).
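
For readers new to the technique discussed in the Nautilus piece, here is gradient descent at its most bare-bones: repeatedly nudge a parameter downhill along the negative gradient of a loss function. The quadratic loss, starting point, and learning rate are arbitrary choices for illustration.

```python
# Gradient descent at its most bare-bones: repeatedly step a parameter
# downhill along the negative gradient of a loss. The quadratic loss,
# starting point, and learning rate are arbitrary illustrative choices.
def loss(w):
    return (w - 3.0) ** 2          # minimized at w = 3

def grad(w):
    return 2.0 * (w - 3.0)         # analytic derivative of the loss

w, lr = 0.0, 0.1                   # initial weight, learning rate
for _ in range(25):
    w -= lr * grad(w)              # the gradient-descent update rule

print(f"w after 25 steps: {w:.4f}   loss: {loss(w):.6f}")
```

Real machine-learning systems apply this same update rule to millions of parameters over far less well-behaved losses – which is where the hardness results described in the article come into play.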

In October, we found a great explainer on how an AI system is trained, a clever clinical trial that brought some clarity regarding statin side effects, the problematic aspects of the growing interest in health equity studies, and the ways that the COVID pandemic has challenged science writers. We also learned that long-serving NIH Director Francis Collins was planning to step down, confronted possible security flaws in FHIR data exchange, and considered the use of AI in clinical decision support – and as an ethical oracle.

October

  • “Artificial intelligence may seem like some amorphous, all-knowing entity that could outperform humans at even the most complex of tasks. But behind the scenes, humans must spend countless hours cleaning data and teaching these algorithms to ‘think.’” STAT News’ Hyacinth Empinado brings us a really nifty video explainer created by researchers at Emory University, who use a combination of narration and animation to describe the laborious process of training machine learning systems to recognize clinically relevant changes in images.
  • “Despite having permanently abandoned statin tablets because of intolerable side effects, most participants could nevertheless complete a 12-month multiple-crossover protocol intended to verify these side effects and identify their origins. These side effects predominantly arose from taking a tablet, rather than from the statin within it.” An innovative study, the SAMSON trial, reported by Howard and colleagues in the Journal of the American College of Cardiology, sheds new light on the phenomenon of patients discontinuing statin therapy due to side effects such as muscle aches and weakness.
  • “Health equity researchers say they welcome new interest — and white allies — in their area, which focuses on finding solutions for poorer health outcomes in people from different races, ethnicities, genders, sexual identities, or income levels. But many are troubled by ‘health equity tourists’ — some seen as well-meaning and motivated by their new awareness of racism, others as opportunistic scientific carpetbaggers — parachuting in to ‘discover’ a field that dates back more than a century.” A trenchant piece in STAT News by Usha Lee McFarling examines who has recently been stampeding into the long-neglected field of health equity – and who is being elbowed aside during the inrush.
  • “To the extent that the pandemic has been a science story, it’s also been a story about the limitations of what science has become. Perverse academic incentives that reward researchers primarily for publishing papers in high-impact journals have long pushed entire fields toward sloppy, irreproducible work; during the pandemic, scientists have flooded the literature with similarly half-baked and misleading research. Pundits have urged people to “listen to the science,” as if “the science” is a tome of facts and not an amorphous, dynamic entity, born from the collective minds of thousands of individual people who argue and disagree about data that can be interpreted in a range of ways.” A personal essay by The Atlantic’s Ed Yong looks at how the extraordinary events of the COVID-19 pandemic affected the field of science writing.
  • “It’s remarkable that the reputation of the National Institutes of Health has remained mostly intact through the covid-19 pandemic, even as other federal science agencies, including the Food and Drug Administration and Centers for Disease Control and Prevention, have come under partisan fire….That is in no small part due to NIH’s soft-spoken but politically astute director, Dr. Francis Collins.” The news that Francis Collins, the much-lauded genetics researcher who has helmed the National Institutes of Health through 12 years and across 3 presidential administrations, is planning to step down from his post has rocked the biomedical world and triggered an outpouring of appreciation for his work. Kaiser Health News’ Julie Rovner has the story, and at Science, Jocelyn Kaiser has some additional perspective.
  • “While the report found that the EHR platforms examined in the study had good security in place, third-party clinical data aggregators and mobile apps were a completely different story: with “widely systemic” vulnerabilities that allowed access to EHR data….The report makes it clear that the vulnerabilities aren’t inherent to FHIR, rather, it’s how the blueprint is implemented as it’s up to the developer.” An article by Jessica Davis at SC Media reports findings from a study by cybersecurity expert Alissa Knight that identifies what Knight characterizes as serious vulnerabilities in health apps designed to meet FHIR (Fast Healthcare Interoperability Resources) standards. The report is generating a substantial degree of controversy and concern among the health tech, cybersecurity, and patient advocacy communities and could have significant implications for the burgeoning FHIR API ecosystem (H/T @BraveBosom).
  • “Natural language processing (NLP) systems have become a central technology in communication, education, medicine, artificial intelligence, and many other domains of research and development. While the performance of NLP methods has grown enormously over the last decade, this progress has been restricted to a minuscule subset of the world’s 6,500 languages. We introduce a framework for estimating the global utility of language technologies as revealed in a comprehensive snapshot of recent publications in NLP.” A preprint research paper by Blasi and colleagues, available at arXiv, surveys global inequities in natural language processing research and applications, particularly among the large body of languages that fall outside the small handful in which NLP studies most frequently take place.
  • “We already have a tendency to frame AI systems in mystical terms — as unknowable entities that tap into higher forms of knowledge — and the presentation of Ask Delphi as a literal oracle encourages such an interpretation. From a more mechanical perspective, the system also offers all the addictive certainty of a Magic 8-Ball.” At The Verge, James Vincent explores a remarkable (and, at intervals, alarming) experiment in machine learning ethics: Delphi, a machine learning model trained on a database of human ethical judgments on a wide variety of situations and designed to provide – like its oracular namesake – a definitive response. In doing so, this real-world experiment is revealing significant shortcomings and biases, many of which can be manipulated by minor alterations in how questions are presented.
  • “These topics cover the importance of methodology, the need for evidence of performance, expected heterogeneity in performance, model and data availability, and the difficulty of implementation in clinical practice.” A video seminar by KU Leuven statistician Ben Van Calster covers the use of AI in clinical decision support.

November brought us a tour of quack gizmos and bogus medical devices, a Rawlsian theory of justice for AI, a map of health-threatening pollution concentrations, measures for incorporating sensor data into healthcare, a primer on counteracting medical misinformation, a view of the unfolding crisis of attrition among US healthcare workers, and a hypothetical bill for peer-review efforts.

November

  • “The Industrial Revolution of the early to mid-19th century and applications of scientific method to medicine in the early 1900s, combined with the absence of regulations, led to the proliferation of contraptions and so-called miracle cure production, advertising, and use. Despite court-ordered injunctions against their use, many quack devices had devoted adherents.” The fantastic, the fraudulent, the downright radioactive: at the AMA Journal of Ethics, Jorie Braunold guides readers through a yikes-worthy gallery of quack medical devices and pseudoscientific gizmos.
  • “This paper explores the relationship between artificial intelligence and principles of distributive justice….it holds that the basic structure of society should be understood as a composite of socio-technical systems, and that the operation of these systems is increasingly shaped and influenced by AI. As a consequence, egalitarian norms of justice apply to the technology when it is deployed in these contexts. These norms entail that the relevant AI systems must meet a certain standard of public justification, support citizens rights, and promote substantively fair outcomes…” A somewhat unusual preprint (for arXiv, anyway) by Iason Gabriel develops a “theory of justice” for artificial intelligence, relying in part on a Rawlsian lens to focus the argument (H/T @wsisaac).
  • “At the map’s intimate scale, it’s possible to see up close how a massive chemical plant near a high school in Port Neches, Texas, laces the air with benzene, an aromatic gas that can cause leukemia. Or how a manufacturing facility in New Castle, Delaware, for years blanketed a day care playground with ethylene oxide, a highly toxic chemical that can lead to lymphoma and breast cancer. Our analysis found that ethylene oxide is the biggest contributor to excess industrial cancer risk from air pollutants nationwide.” A groundbreaking, data-driven investigative report by ProPublica’s Lylla Younes, Ava Kofman, Al Shaw, Lisa Song, Maya Miller, and Kathleen Flynn reveals that air pollution may be threatening the health of substantially more communities than previously understood and exposes potential shortcomings in the protections afforded by the Clean Air Act. (The basic risk arithmetic behind maps like this is sketched after this list.)
  • “In health care, we are experiencing a revolution in the use of sensors to collect data on patient behaviors and experiences. Yet, the potential of this data to transform health outcomes is being held back. Deficits in standards, lexicons, data rights, permissioning, and security have been well documented, less so the cultural adoption of sensor data integration as a priority for large-scale deployment and impact on patient lives.” A paper published this week in the Journal of Medical Internet Research by Clay and colleagues surveys the challenges of integrating medical data gathered from sensors (including those in wearable and handheld devices) and using it to inform medical decision-making. (A toy example of the schema normalization this implies appears after this list.)
  • “Health misinformation is causing harm to individuals and to communities, but talking to one another about its impact can help slow the spread by prompting us to think twice about the information we’re reading and sharing. This toolkit will help you get started.” A toolkit developed under the leadership of US Surgeon General Vivek Murthy and released by the US Department of Health and Human Services provides practical advice for countering medical misinformation.
  • A marvelous YouTube video by the Vaccine Makers Project provides a vivid animated explanation of how the SARS-CoV-2 virus infects host cells, and how mRNA vaccines work with the body’s immune machinery to protect against that infection (H/T @SamIAm2021MD).
  • “Since COVID-19 first pummeled the U.S., Americans have been told to flatten the curve lest hospitals be overwhelmed. But hospitals have been overwhelmed. The nation has avoided the most apocalyptic scenarios, such as ventilators running out by the thousands, but it’s still sleepwalked into repeated surges that have overrun the capacity of many hospitals, killed more than 762,000 people, and traumatized countless health-care workers.” The Atlantic’s Ed Yong puts a spotlight on a quiet crisis unfolding in the US healthcare sector: the ominous attrition of healthcare workers across the spectrum of fields and specialties due to the enormous pressures exerted by the COVID pandemic.
  • “We found that the total time reviewers globally worked on peer reviews was over 100 million hours in 2020, equivalent to over 15 thousand years. The estimated monetary value of the time US-based reviewers spent on reviews was over 1.5 billion USD in 2020.” In an article published in the journal Research Integrity and Peer Review, Aczel and colleagues attempt to tot up the hypothetical tab for peer review if researchers were actually charging for their services. (The back-of-envelope arithmetic is sketched after this list.)
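On the ProPublica item: risk assessors commonly estimate excess lifetime cancer risk by multiplying a modeled long-term air concentration by a chemical-specific “unit risk” factor, an approach of the general kind underlying the ProPublica map. A minimal sketch follows; the unit-risk values and exposures are placeholders for illustration, not EPA’s regulatory figures.

```python
# Minimal sketch of excess-lifetime-cancer-risk arithmetic of the kind
# underlying industrial air pollution maps. The unit-risk values below
# are placeholders for illustration, not EPA's regulatory figures.

# Unit risk: estimated excess cancer risk per (ug/m^3) of lifetime exposure.
UNIT_RISK = {
    "ethylene oxide": 3.0e-3,  # placeholder value
    "benzene":        7.8e-6,  # placeholder value
}

def excess_lifetime_risk(chemical: str, concentration_ug_m3: float) -> float:
    """Excess lifetime cancer risk = concentration x unit risk."""
    return concentration_ug_m3 * UNIT_RISK[chemical]

# A hypothetical census block with modeled long-term concentrations.
exposures = {"ethylene oxide": 0.1, "benzene": 2.0}  # ug/m^3, invented
total = sum(excess_lifetime_risk(chem, c) for chem, c in exposures.items())
print(f"Estimated excess lifetime cancer risk: {total:.1e}")
print(f"i.e. about {total * 1e6:.0f} extra cancers per million people exposed")
```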
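On the Clay and colleagues item: one concrete face of the “standards and lexicons” deficit is that two devices often report the same physiological quantity in incompatible shapes. The sketch below normalizes readings from two hypothetical device formats (both invented, not any real vendor API) into one shared record.

```python
# Toy sketch of the sensor-data harmonization problem discussed by Clay
# and colleagues: two hypothetical devices report heart rate in different
# shapes, and each must be mapped into one shared record format.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Observation:
    subject_id: str
    quantity: str        # shared lexicon term, e.g. "heart_rate"
    value: float
    unit: str            # shared unit, e.g. "beats/min"
    timestamp: datetime

def from_device_a(raw: dict) -> Observation:
    # Hypothetical device A: {"pid": ..., "hr_bpm": ..., "ts": unix seconds}
    return Observation(raw["pid"], "heart_rate", float(raw["hr_bpm"]),
                       "beats/min",
                       datetime.fromtimestamp(raw["ts"], tz=timezone.utc))

def from_device_b(raw: dict) -> Observation:
    # Hypothetical device B: {"user": ..., "pulse": ..., "time": ISO string}
    return Observation(raw["user"], "heart_rate", float(raw["pulse"]),
                       "beats/min", datetime.fromisoformat(raw["time"]))

readings = [
    from_device_a({"pid": "p01", "hr_bpm": 72, "ts": 1638316800}),
    from_device_b({"user": "p01", "pulse": 75.0,
                   "time": "2021-12-01T00:05:00+00:00"}),
]
for obs in readings:
    print(obs)
```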
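And on the Aczel and colleagues item: the headline figures fall out of multiplying review counts, hours per review, and an hourly valuation of reviewer time. A back-of-envelope sketch with invented inputs (the paper derives its own estimates of volume, effort, and salaries):

```python
# Back-of-envelope sketch of the peer-review costing exercise in Aczel
# et al. All inputs below are invented for illustration; the paper derives
# its own estimates of review volume, time, and researcher salaries.

reviews_per_year = 25_000_000   # hypothetical number of reviews worldwide
hours_per_review = 6            # hypothetical average effort per review
hourly_rate_usd = 50            # hypothetical value of a reviewer-hour

total_hours = reviews_per_year * hours_per_review
print(f"Total reviewer time: {total_hours / 1e6:.0f} million hours")
print(f"  = roughly {total_hours / (24 * 365) / 1000:.1f} thousand years")
print(f"Monetary value: {total_hours * hourly_rate_usd / 1e9:.1f} billion USD")
```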

December saw us taking a look at shortcomings in data representation, growing concern over online acrimony, a roadmap for “simulation intelligence” in scientific computing, worries about a looming mental health crisis, the challenges of replicating studies in oncology, the perverse incentives at work in scientific research, the role that some physicians may be playing in spreading misinformation, the need for faster systematic reviews, and an omicron variant forecast.

December

Posed photograph showing a small toy robot standing on the keys of a laptop computer.
Image credit: Jem Sahagun/Unsplash
  • “Most data being used to address Covid-19, and public health in general, are missing information from BIPOC communities. Despite federal requirements from the Office of Management and Budget, race and ethnicity data are often incompletely collected or misclassified. In 2020, more than half of U.S. health departments did not report data about all racial and ethnic groups. This lack of representation in data is leading to systemic erasure of already vulnerable populations, as well as making it difficult to assess and mitigate the impact of Covid-19 in these communities.” An opinion article at STAT News by Duke’s Warren Kibbe and UNC’s Giselle Corbie-Smith presents a call to action for engaging with communities to improve representation of BIPOC persons in datasets used to inform public health research.
  • “Worries over the rise in the acrid tone and harmful and manipulative interactions in some online spaces, and concerns over the role of technology firms in all of this, have spawned efforts by tech activists to try to redesign online spaces in ways that facilitate debate, enhance civility and provide personal security.” A report, the result of a joint effort between the Pew Research Center and Elon University, polls a series of experts to provide a glimpse of what the future may hold for an increasingly problematic public media sphere.
  • “We present the ‘Nine Motifs of Simulation Intelligence’, a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence. We call this merger simulation intelligence (SI), for short. We argue the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system.” A preprint by Lavin and colleagues available at arXiv presents a scientific manifesto for a synthesis of algorithms to support “simulation intelligence.” (One of those motifs, surrogate modeling, is sketched after this list.)
  • “…many original papers failed to report key descriptive and inferential statistics: the data needed to compute effect sizes and conduct power analyses was publicly accessible for just 4 of 193 experiments. Moreover, despite contacting the authors of the original papers, we were unable to obtain these data for 68% of the experiments. Second, none of the 193 experiments were described in sufficient detail in the original paper to enable us to design protocols to repeat the experiments, so we had to seek clarifications from the original authors.” A paper recently published in eLife by a team seeking to replicate key findings in the oncology literature reports a high degree of difficulty in obtaining the basic materials needed to replicate findings in the first place. (The effect-size and power calculation at issue is sketched after this list.)
  • “The report cited significant increases in self-reports of depression and anxiety along with more emergency room visits for mental health issues. In the United States, emergency room visits for suicide attempts rose 51 percent for adolescent girls in early 2021 as compared to the same period in 2019. The figure rose 4 percent for boys.” The New York Times’ Matt Richtel reports on US Surgeon General Vivek Murthy’s recent warning that the country is facing an unprecedented mental health crisis among children and adolescents.
  • “Ask most advocates of rigorous science why this is, and they will answer with two words: perverse incentives. Scientists are rewarded for getting things published, not for getting things right, and so they tend to favour speed and ease over robustness. But as an ethnographer, this explanation has never sat well with me. I’ve spent more than 15 years studying biomedical research cultures, and scientists’ behaviours are rarely so transactional.” A perspective article at Nature by Nicole C. Nelson makes the case for a rigorous ethnographic approach to examining the reasons why scientific findings are so often irreproducible.
  • “At a time when many Americans are willing to endorse conspiracies around the virus and top health officials are blaming misinformation for fueling vaccine refusal, there’s a growing demand for medical boards to make sure the doctors they license aren’t contributing to coronavirus misinformation. While only a small minority of doctors are actively spreading disinformation, experts argue that irresponsible doctors can have an outsize impact.” The Washington Post’s Alexandra Ellerbeck reports on efforts to rein in or sanction credentialed physicians who have been implicated in spreading medical misinformation, particularly in the context of the COVID pandemic.
  • “Even if Omicron causes milder disease, as some scientists hope, the astronomical case projections mean the outlook is grim, warns Emma Hodcroft, a virologist at the University of Bern. ‘A lot of scientists thought Delta was already going to make this a really, really tough winter,’ she says. ‘I’m not sure the message has gotten across to the people who make decisions, how much tougher Omicron is going to make this.’” Science’s Kai Kupferschmidt speaks with public health and epidemiology experts about the somber prospects for a winter dominated by spread of the omicron variant of COVID.
  • “Out-of-date systematic reviews are common any time there is a flood of new research. In the absence of up-to-date summaries of accumulating knowledge, decision makers’ attention often jumps from study to study. This muddles policymaking, fuels controversy and erodes trust in science. A better system would keep summaries of research evidence up to date.” Few facets of modern life have not been significantly reshaped by the demands imposed by the COVID pandemic, and scientific publishing is no exception. Now, in a commentary article at Nature, Elliot and colleagues propose a new approach to systematic reviews in medicine that will distill definitive data at a pace that can match the needs created by a global public health emergency. (The incremental re-pooling at the heart of such “living” reviews is sketched after this list.)
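A word on the simulation-intelligence roadmap above: one of its nine motifs is surrogate modeling and emulation, i.e., replacing an expensive simulator with a cheap learned approximation that can be queried densely. The toy sketch below fits a polynomial surrogate to a stand-in “simulator”; it illustrates the idea only, not the paper’s methods.

```python
# Minimal sketch of the surrogate-modeling motif from the simulation-
# intelligence roadmap: fit a cheap approximation to an expensive
# simulator, then query the surrogate instead. Toy example only.
import numpy as np

def expensive_simulator(x: np.ndarray) -> np.ndarray:
    """Stand-in for a costly physics simulation."""
    return np.sin(3 * x) * np.exp(-0.5 * x)

# Run the expensive simulator at a handful of design points...
x_train = np.linspace(0.0, 3.0, 12)
y_train = expensive_simulator(x_train)

# ...and fit a cheap polynomial surrogate to the results.
surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=6)

# The surrogate can now be evaluated densely at negligible cost.
x_query = np.linspace(0.0, 3.0, 200)
max_err = np.max(np.abs(surrogate(x_query) - expensive_simulator(x_query)))
print(f"Max surrogate error on [0, 3]: {max_err:.4f}")
```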
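On the eLife replication item, the complaint is concrete: without reported means, standard deviations, and sample sizes, a replicator cannot compute an effect size, and without an effect size cannot run the power analysis that sizes the replication. A minimal sketch of that chain, with invented numbers (assumes the statsmodels package is installed):

```python
# Minimal sketch of why replicators need the original descriptive stats:
# an effect size (Cohen's d) computed from reported means/SDs feeds
# directly into the power analysis that sizes the replication study.
import math
from statsmodels.stats.power import TTestIndPower

# Hypothetical statistics reported (or, too often, not) in an original paper.
mean_treat, sd_treat, n_treat = 12.4, 3.1, 10
mean_ctrl, sd_ctrl, n_ctrl = 9.8, 2.9, 10

# Pooled standard deviation and Cohen's d.
pooled_sd = math.sqrt(((n_treat - 1) * sd_treat**2 +
                       (n_ctrl - 1) * sd_ctrl**2) / (n_treat + n_ctrl - 2))
d = (mean_treat - mean_ctrl) / pooled_sd

# Sample size per group for 80% power at alpha = 0.05, two-sided.
n_needed = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.8)
print(f"Cohen's d = {d:.2f}; need ~{math.ceil(n_needed)} subjects per group")
```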
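And finally, on living systematic reviews: the statistical core of keeping a review “alive” is re-pooling the evidence whenever a new study lands. Below is a minimal sketch of a fixed-effect, inverse-variance meta-analysis recomputed study by study; all effect estimates and standard errors are invented.

```python
# Minimal sketch of the re-pooling step behind a "living" systematic
# review: a fixed-effect, inverse-variance meta-analysis recomputed as
# each new study arrives. All study inputs below are invented.
import math

studies = [  # (label, effect estimate, standard error)
    ("Trial A (2020)", 0.42, 0.20),
    ("Trial B (2020)", 0.31, 0.15),
    ("Trial C (2021)", 0.18, 0.10),  # arrives later; the review updates
]

weights_sum = 0.0
weighted_effects = 0.0
for label, effect, se in studies:
    w = 1.0 / se**2                  # inverse-variance weight
    weights_sum += w
    weighted_effects += w * effect
    pooled = weighted_effects / weights_sum
    pooled_se = math.sqrt(1.0 / weights_sum)
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"after {label}: pooled = {pooled:.2f} "
          f"(95% CI {lo:.2f} to {hi:.2f})")
```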