Study Finds No Evidence Of Dangerous Emergent Abilities In Large Language Models

Reading time: 2 min

  • Written by: Kiara Fabbri, Multimedia Journalist

  • Fact-Checked by: Justyn Newman, Head Content Manager

A study announced yesterday by the University of Bath finds no evidence that large language models (LLMs) pose an existential threat to humanity. The researchers report that these models cannot learn or acquire new skills independently, which keeps them controllable and safe.

The research team, led by Professor Iryna Gurevych, conducted over 1,000 experiments to test LLMs’ capacity for emergent abilities, meaning tasks and knowledge the models were never explicitly trained to handle. Their findings show that what are perceived as emergent abilities actually result from LLMs’ use of in-context learning, rather than from any form of independent learning or reasoning.

The study indicates that while LLMs are proficient at processing language and following instructions, they cannot master new skills without explicit guidance. This fundamental limitation means the models remain controllable, predictable, and inherently safe. Despite the models’ growing sophistication, the researchers argue, they are unlikely to develop complex reasoning abilities or take unexpected actions.

Dr. Harish Tayyar Madabushi, a co-author of the study, stated in the University of Bath announcement, “The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus.”

Dr. Tayyar Madabushi recommends focusing on actual risks, such as the potential misuse of LLMs for generating fake news or committing fraud. He cautions against enacting regulations based on speculative threats and urges users to clearly specify tasks for LLMs and provide detailed examples to ensure effective outcomes.
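The advice about specifying tasks and supplying examples describes in-context (few-shot) prompting, the mechanism the study says accounts for apparently emergent abilities. Below is a minimal Python sketch of the idea; the `query_llm` function and `build_few_shot_prompt` helper are hypothetical placeholders rather than any specific provider’s API, and the study itself does not prescribe this code. The point it illustrates is that the apparent “new skill” comes from the demonstrations packed into the prompt, not from the model independently acquiring abilities.

```python
# Minimal sketch of in-context (few-shot) prompting.
# query_llm() is a placeholder for whatever LLM client or API you actually use.

def query_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text completion."""
    raise NotImplementedError("Wire this up to your own LLM client.")

def build_few_shot_prompt(task_description: str,
                          examples: list[tuple[str, str]],
                          new_input: str) -> str:
    """Assemble a prompt that states the task and demonstrates it with examples."""
    lines = [task_description, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")
    lines.append(f"Input: {new_input}")
    lines.append("Output:")
    return "\n".join(lines)

# Example: sentiment labelling in a format the model was never explicitly
# trained on; the demonstrations in the prompt do the work.
prompt = build_few_shot_prompt(
    task_description="Label each movie review as Positive or Negative.",
    examples=[
        ("A delightful, moving film.", "Positive"),
        ("Two hours I will never get back.", "Negative"),
    ],
    new_input="The plot dragged, but the acting was superb.",
)
# print(query_llm(prompt))
```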

Professor Gurevych noted in the announcement, “Our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news.”

The researchers acknowledge several limitations in their study. They tested various model families, including T5, GPT, Falcon, and LLaMA, but could not match parameter counts exactly across families because the models were released at different sizes. They also considered the risk of data leakage, where material from the test tasks may have appeared in the training data and skewed the results. While they assume any leakage does not exceed what has been reported for the specific models, it could still affect measured performance.
