Researchers Discover Security Flaws In Open-Source AI And ML Models
In a Rush? Here are the Quick Facts!
- Over 30 security flaws found in open-source AI and ML tools.
- Severe vulnerabilities impact tools like Lunary, ChuanhuChatGPT, and LocalAI.
- LocalAI flaw allows attackers to infer API keys through timing analysis.
A recent investigation has uncovered over 30 security flaws in open-source AI and machine learning (ML) models, raising concerns about potential data theft and unauthorized code execution, as reported by The Hacker News (THN).
These vulnerabilities were found in widely used tools, including ChuanhuChatGPT, Lunary, and LocalAI, and were reported via Protect AI’s Huntr bug bounty platform, which incentivizes developers to identify and disclose security issues.
Among the most severe vulnerabilities identified, two major flaws impact Lunary, a toolkit designed to manage large language models (LLMs) in production environments.
The first flaw, CVE-2024-7474, is categorized as an Insecure Direct Object Reference (IDOR) vulnerability: it allows an authenticated user to view or delete other users’ data without authorization, potentially leading to data breaches and data loss.
The second critical issue, CVE-2024-7475, is an improper access control vulnerability that lets an attacker update the system’s SAML (Security Assertion Markup Language) configuration.
By exploiting this flaw, attackers can bypass login security to gain unauthorized access to personal data, raising significant risks for any organization relying on Lunary for managing LLMs.
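The missing-authorization pattern behind a flaw of this kind can be sketched in a few lines. This is a hypothetical illustration, not Lunary’s actual code: a settings handler that updates the SAML configuration without verifying that the caller is an administrator.

```python
# Hypothetical sketch of an improper access control flaw: any logged-in
# user can rewrite the single sign-on configuration, e.g. pointing it
# at an attacker-controlled identity provider.

SAML_CONFIG = {"idp_url": "https://idp.example.com"}

def update_saml_vulnerable(user, new_config):
    # BUG: no role check before changing a security-critical setting.
    SAML_CONFIG.update(new_config)

def update_saml_fixed(user, new_config):
    # Only administrators may alter the SAML configuration.
    if user.get("role") != "admin":
        raise PermissionError("admin privileges required")
    SAML_CONFIG.update(new_config)
```

With the check in place, a non-admin request fails with `PermissionError` instead of silently redirecting the organization’s login flow.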
Another security weakness identified in Lunary, CVE-2024-7473, also involves an IDOR vulnerability that allows attackers to update prompts submitted by other users. This is achieved by manipulating a user-controlled parameter, making it possible to interfere with others’ interactions in the system.
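The IDOR pattern described above can be sketched in miniature. The code below is illustrative only (not Lunary’s implementation): the vulnerable handler trusts a client-supplied prompt ID without checking who owns the record.

```python
# Hypothetical prompt store keyed by ID.
PROMPTS = {
    101: {"owner": "alice", "text": "summarize my notes"},
    102: {"owner": "bob", "text": "draft an email"},
}

def update_prompt_vulnerable(user, prompt_id, new_text):
    # BUG: `prompt_id` comes straight from the request; any logged-in
    # user can overwrite any other user's prompt -- classic IDOR.
    PROMPTS[prompt_id]["text"] = new_text

def update_prompt_fixed(user, prompt_id, new_text):
    # Verify ownership before mutating the record.
    prompt = PROMPTS.get(prompt_id)
    if prompt is None or prompt["owner"] != user:
        raise PermissionError("prompt does not belong to this user")
    prompt["text"] = new_text
```

The fix is an ownership check on the server side; hiding IDs from the client is not sufficient, since IDs are user-controlled input.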
In ChuanhuChatGPT, a critical vulnerability (CVE-2024-5982) allows an attacker to exploit a path traversal flaw in the user upload feature, as noted by THN.
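The general shape of a path traversal bug in an upload handler looks like the sketch below. It is illustrative only; `UPLOAD_DIR` and the function names are hypothetical, not ChuanhuChatGPT’s code.

```python
import os

# Hypothetical upload directory for illustration.
UPLOAD_DIR = "/srv/app/uploads"

def resolve_upload_path_vulnerable(filename):
    # BUG: a filename such as "../../../etc/passwd" escapes the upload
    # directory entirely -- the path traversal pattern.
    return os.path.join(UPLOAD_DIR, filename)

def resolve_upload_path_fixed(filename):
    # Resolve the path, then verify it still lies inside UPLOAD_DIR.
    root = os.path.realpath(UPLOAD_DIR)
    path = os.path.realpath(os.path.join(UPLOAD_DIR, filename))
    if os.path.commonpath([root, path]) != root:
        raise ValueError("path traversal attempt blocked")
    return path
```

The caller would then write the uploaded bytes to the returned path; validating the resolved path, rather than the raw filename, also catches tricks involving symlinks and redundant separators.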
This flaw can lead to arbitrary code execution, directory creation, and exposure of sensitive data, presenting a high risk for systems relying on this tool.

LocalAI, another open-source platform that enables users to run self-hosted LLMs, has two major flaws that pose similar security risks, said THN.
The first flaw, CVE-2024-6983, enables malicious code execution by allowing attackers to upload a harmful configuration file. The second, CVE-2024-7010, lets hackers infer API keys by measuring server response times, using a timing attack method to deduce each character of the key gradually, noted THN.
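The class of bug behind such a timing attack can be shown in a short sketch (illustrative only, not LocalAI’s code): a key check that exits at the first mismatched character takes measurably longer the more leading characters the attacker has guessed correctly.

```python
import hmac

def check_key_vulnerable(submitted, secret):
    # BUG: early-exit comparison. The loop runs longer the more leading
    # characters match, so response time leaks the key one character at
    # a time -- the timing side channel described above.
    if len(submitted) != len(secret):
        return False
    for a, b in zip(submitted, secret):
        if a != b:
            return False
    return True

def check_key_fixed(submitted, secret):
    # hmac.compare_digest runs in time independent of where the inputs
    # differ, closing the side channel.
    return hmac.compare_digest(submitted.encode(), secret.encode())
```

Both functions return the same results; the difference is only in how long the vulnerable one takes on partially correct guesses, which is exactly what a timing attack measures.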
In response to these findings, Protect AI introduced a new tool called Vulnhuntr, an open-source Python static code analyzer that uses large language models to detect vulnerabilities in Python codebases, said THN.
Vulnhuntr breaks down code into smaller chunks to identify security flaws within the constraints of a language model’s context window. It scans project files to detect and trace potential weaknesses from user input to server output, enhancing security for developers working with AI code.
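As an illustration of the chunking idea (this is a generic sketch, not Vulnhuntr’s actual implementation), a Python source file can be split at top-level function and class boundaries so that each unit fits within a model’s context window:

```python
import ast

def chunk_by_function(source, max_chars=4000):
    # Split a Python file into top-level function/class chunks so each
    # piece fits a model's context budget; oversized units fall back to
    # fixed-size slices. `max_chars` is an illustrative stand-in for a
    # real token budget.
    tree = ast.parse(source)
    lines = source.splitlines()
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            chunk = "\n".join(lines[node.lineno - 1:node.end_lineno])
            for i in range(0, len(chunk), max_chars):
                yield chunk[i:i + max_chars]
```

Each chunk can then be sent to the model separately, with findings stitched back together to trace a flaw from user input to server output.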
These discoveries highlight the critical importance of ongoing vulnerability assessment and security updates in AI and ML systems to protect against emerging threats in the evolving landscape of AI technology.