AI Health
Friday Roundup
The AI Health Friday Roundup highlights the week’s news and publications related to artificial intelligence, data science, public health, and clinical research.
November 10, 2023
In this week’s Duke AI Health Friday Roundup: more earthly biota than stars in the sky; AI needed to subtract AI-created content; researchers apply cognitive tests to GPT; study highlights bad citations as serious problem for science; mental health resources for LGBTQ+ youth; new therapies needed to counter dengue’s march; bioRxiv uses LLMs to create tailored content summaries from papers; risks of generative AI not evenly distributed; much more:
AI, STATISTICS & DATA SCIENCE
- “Our results indicate that GPT 3.5 is unlikely to have developed sentience, although its ability to respond to personality inventories is interesting. It did display large variability in both cognitive and personality measures over repeated observations, which is not expected if it had a human-like personality. Variability notwithstanding, GPT3.5 displays what in a human would be considered poor mental health, including low self-esteem and marked dissociation from reality despite upbeat and helpful responses.” A paper by Ann Speed, available as a preprint from arXiv, presents results from a study that applied standard cognitive and personality evaluations to ChatGPT 3.5.
- “Drawing inspiration from social science findings, we design evaluation methods to manifest biases through 2 dimensions: (1) biases in language style and (2) biases in lexical content. We further investigate the extent of bias propagation by analyzing the hallucination bias of models, a term that we define to be bias exacerbation in model-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs, ChatGPT and Alpaca, we reveal significant gender biases in LLM-generated recommendation letters.” A research paper by Wan and colleagues, accepted for presentation at EMNLP and available as a preprint from arXiv, examines how bias may surface in the output of large language models tasked with drafting reference letters.
- “Previous work using self-report data has suggested that important increases in heavy gaming may occur during pandemics because of containment and closure (“lockdown”) procedures. This study contrasts with the previous evidence base and finds no evidence of such a relationship. It suggests that significant further work is needed before increases in disordered or heavy gaming are considered when planning public health policies for pandemic preparedness.” A telemetry-based study by Zendle and colleagues, published in the Journal of Medical Internet Research, contradicts previous self-reported data that fueled claims of increases in “disordered” video gaming associated with COVID lockdowns.
- “Into this environment, generative AI systems will only exacerbate that problem. In the same way that robotics have made manufacturing processes more exact, more efficient, faster, and cheaper, AI tools will help everyone generate ever more content. As large language models and generative text creation AI systems make the authorship of content easier, ultimately this will only generate more and more content.” An essay by Todd Carpenter at Scholarly Kitchen points out that given the capacity of AIs to pump out counterfactual, irrelevant, or just plain redundant “content” at industrial scale, we will need to adapt AIs for the task of cleaning up the mess they create in the first place.
- “While there are numerous benefits that may emerge from the usage of Gen AI systems, these benefits are, to date, concentrated in countries with high-resourced languages (e.g., English), and underperform in countries with low-resourced languages. In contrast to these benefits, as highlighted in the recently held AI Safety Summit, there are several concerns around the potential risks associated with the advancement of cutting-edge Gen AI systems, such as the spread of mis/disinformation.” An essay by the Oxford Internet Institute’s Barani Maung Maung and Keegan McBride points out that the risks presented by the use of now-ubiquitous large language models are unequally distributed across nations.
BASIC SCIENCE, CLINICAL RESEARCH & PUBLIC HEALTH
- “From bacteria to blue whales, the number of cells in living things exceeds the estimated number of sand grains on Earth by a factor of a trillion. It’s 1 million times larger than all the stars in the universe. And the number of cells that have ever lived is 10 orders of magnitude larger still, according to new estimates researchers reported last week in Current Biology.” An article by Science’s Elizabeth Pennisi unpacks recent research suggesting that, in a big-numbers contest with astrophysics, biology may come out on top.
- “Even without a name, it is a devil we all know: an article cites a source that does not support the statement in question, or, more commonly, the initial reference sends the reader down a rabbit hole of references, the bottom of which is difficult to find and interpret. This causes two problems. Firstly, it may propagate data that are false, misinterpreted, or both, spurring “academic urban legends” that become circulated as truth….Second, it undermines respect for the process of literature review…” An article published in the BMJ by Peoples and colleagues tackles an unfortunately perennial issue in biomedical science – inaccurate citation practices – and proposes some measures to improve the state of play.
- “In the face of positive media attention or a sudden (though often superficial) shift in public opinion, there is a risk that harm reduction advocates may prematurely claim victory. The problem with fads, however, is that they are short-lived, and the widespread support is fleeting. The substance use challenges we face are anything but fleeting and require sustainable, long-term investments in harm reduction, treatment and recovery support services in tandem.” In an article for STAT News, Alexandra Plante worries that the potential benefits of harm-reduction approaches to addiction issues may be attenuated if they are entangled in superficial fads and correspondingly abandoned when interest wanes.
- “We have found evidence of an association between actionable genotypes in the Icelandic population and a shortened life span. The identification and disclosure of actionable genotypes to participants holds considerable potential to mitigate the disease burden on individual persons as well as on society in general.” A genomic study by researchers in Iceland, published this week in the New England Journal of Medicine, links genomic profiles with lifespan data, and finds several “actionable” genetic variants that are significantly associated with reduced life expectancy (H/T @EricTopol).
- “There is no specific treatment for dengue, which is also known as breakbone fever and can cause fever, bone pain and even death. The available vaccines have important limitations, and controlling the mosquitoes that transmit the disease is challenging….But scientists are not sitting idle. At the annual meeting of the American Society of Tropical Medicine and Hygiene, held in Chicago, Illinois, last month, researchers shared the latest results of their efforts to develop vaccines, antiviral medications and mosquito-control methods to curb the disease. Every available tool is needed, they say.” As climate change opens up new opportunities for dengue to spread outside of its tropical environs, Nature’s Mariana Lenharo looks at whether countermeasures are capable of keeping pace.
COMMUNICATION, HEALTH EQUITY & POLICY
- “An unpublished analysis shared with Nature suggests that over the past two decades, more than 400,000 research articles have been published that show strong textual similarities to known studies produced by paper mills. Around 70,000 of these were published last year alone (see ‘The paper-mill problem’). The analysis estimates that 1.5–2% of all scientific papers published in 2022 closely resemble paper-mill works. Among biology and medicine papers, the rate rises to 3%.” Nature’s Richard Van Noorden spotlights a recent (not yet peer-reviewed) analysis that suggests the problem of fake scientific publications is worse than we thought – and we already thought it was bad.
- “This series of guides can help professionals, families, and communities support the mental well-being of Lesbian, Gay, Bisexual, Transgender, Queer, Intersex, Asexual, and Two-Spirit (LGBTQIA2S+) youth. The series includes a resource guide and four companion focus guides designed for specific populations.” A new set of resources for promoting mental health and preventing suicide among LGBTQ+ youth is available from the Suicide Prevention Resource Center.
- “A new state law…requires hospitals with emergency departments to have a law enforcement officer on site at all times, unless they get local authorities to sign off on an exemption. The requirement takes effect in 2025….The law also calls for hospitals to report violent incidents to the state, to provide employees with violence-prevention training and to conduct a security risk assessment and create a detailed security plan.” North Carolina Health News’ Charlotte Ledger reports on the North Carolina legislature’s response to what appears to be a rash of violence directed at healthcare workers in recent years.
- “Today we are launching a new pilot project aimed at using LLMs to increase accessibility of content on bioRxiv. Every bioRxiv preprint will now be posted with three AI-generated summaries, each created for a different kind of reader: someone with little or no scientific training; a scientist with expertise in a different field; and someone whose expertise equals the author’s. The summaries are created from the full text of the preprint, not just the abstract.” Preprint server bioRxiv launches a new feature that will use LLMs to create tailored summaries of posted papers.