99% Accurate Fake News Detector Announced By Keele University Researchers


Researchers at Keele University have developed a tool designed to detect fake news with 99% accuracy, offering a potential resource to address the growing issue of online misinformation. The development was announced yesterday by the university.

In a Rush? Here are the Quick Facts!

  • Keele University researchers developed a tool detecting fake news with 99% accuracy.
  • The tool uses an “ensemble voting” system combining multiple machine learning models.
  • The tool assesses news content to determine source reliability.

The team, comprising Dr. Uchenna Ani, Dr. Sangeeta Sangeeta, and Dr. Patricia Asowo-Ayobode from the School of Computer Science and Mathematics, employed various machine learning techniques to create a model capable of analyzing news content to assess its reliability.

The tool uses an “ensemble voting” approach, which combines predictions from multiple machine learning models to produce an overall judgment on whether a news source is trustworthy. Initial testing showed the method exceeded expectations, identifying fake news in 99% of cases.
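To illustrate the general idea behind "ensemble voting", the sketch below combines several standard machine learning classifiers over text features and lets them vote on whether an article is reliable. This is only a minimal illustration of the technique; the choice of models, features, labels, and example data are assumptions, not details of the Keele team's actual system.

```python
# Minimal sketch of an ensemble-voting classifier for news reliability.
# Illustrative only: the estimators, labels, and sample texts are assumptions,
# not the Keele researchers' actual model or data.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder training data: 0 = reliable, 1 = fake (assumed labelling scheme).
texts = [
    "Council approves budget for new public library after vote.",
    "Miracle pill cures every disease overnight, doctors stunned.",
]
labels = [0, 1]

# Each base model produces its own prediction; the ensemble averages their
# predicted probabilities ("soft" voting) to reach an overall judgment.
ensemble = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    VotingClassifier(
        estimators=[
            ("logreg", LogisticRegression(max_iter=1000)),
            ("naive_bayes", MultinomialNB()),
            ("random_forest", RandomForestClassifier(n_estimators=200)),
        ],
        voting="soft",
    ),
)

ensemble.fit(texts, labels)
print(ensemble.predict(["Scientists announce results of peer-reviewed study."]))
```

In practice such a pipeline would be trained on a large labelled corpus and evaluated on held-out data; the appeal of voting ensembles is that disagreements between individual models tend to cancel out, which is the effect the researchers credit for the tool's high accuracy.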

Dr. Ani, a Lecturer in Cyber Security at Keele, highlighted the challenges posed by misinformation. He noted that the widespread dissemination of false information undermines public discourse and can influence attitudes and behaviors, posing risks to both local and national security.

The researchers hope to refine the model further as AI and machine learning technologies advance, aiming for even greater precision in identifying unreliable content. Dr. Ani emphasized the urgency of developing solutions to safeguard the credibility of online platforms, particularly social media, where misinformation is most prevalent.

Earlier research by Democracy Reporting International (DRI), a Berlin-based organization promoting democracy, warned that AI systems, particularly open-source Large Language Models (LLMs), pose significant risks for spreading misinformation.

DRI says that these risks arise because these models, such as Dolly, Zephyr, and Falcon, are often released without robust safeguards, leaving them vulnerable to misuse.

Using these models requires minimal technical skill, enabling malicious actors to manipulate them to create false narratives or hate speech. This low barrier to entry exacerbates the risk of disinformation proliferation.

Additionally, DRI says that open-source LLMs like Zephyr demonstrate alarming capabilities, such as generating structured, persuasive malicious content in response to direct or suggestive prompts.

Such outputs are often coherent and contextually appropriate, making them particularly dangerous in shaping false narratives. Moreover, biases embedded in these models, often reflecting societal prejudices, further compound the risk of spreading harmful stereotypes.

While still in development, the tool developed at Keele University represents a step toward addressing the broader challenge of misinformation in digital communication.
