Microsoft Introduces New Azure AI Tools to Ensure Security and Trust in Generative AI Applications

Microsoft Introduces New Azure AI Tools to Ensure Security and Trust in Generative AI Applications

Reading time: 3 min

Microsoft has unveiled new tools in Azure AI to enhance the security and reliability of generative AI applications. The tools will help generative AI app developers using Azure AI Studio to prevent prompt injection attacks and hallucinations in model outputs.

The new tools are now available, while more will be coming soon to Azure AI Studio for generative AI app developers, says Sarah Bird, Chief Product Officer of Responsible AI at Microsoft. Bird acknowledged that “Prompt injection attacks have emerged as a significant challenge, where malicious actors try to manipulate an AI system into doing something outside its intended purpose, such as producing harmful content or exfiltrating confidential data.”

The announcement, which was made on March 28, added that these tools will be useful for business leaders who are “trying to strike the right balance between innovation and risk management” and “want to ensure that their AI systems are not generating errors or adding information” that can erode user trust.

First, it will address the issue of prompt injection. Prompt injection happens when malicious actors try to exploit vulnerabilities in AI systems and manipulate them to produce undesirable outcomes. Prompt injection attacks have emerged as a significant threat to the safety and security of AI applications, especially chatbots.

Bird explains, “Prompt injection attacks, both direct attacks, known as jailbreaks, and indirect attacks, are emerging as significant threats to foundation model safety and security. Successful attacks that bypass an AI system’s safety mitigations can have severe consequences, such as personally identifiable information (PII) and intellectual property (IP) leakage.”

Microsoft has introduced Prompt Shields, a feature designed to detect and block suspicious inputs in real-time, thereby safeguarding the integrity of large language model (LLM) systems. Safety Evaluations is another essential feature now available on the Azure platform. It’ll help developers assess an application’s vulnerability to jailbreak attacks and generate content risks. However, this feature is currently available in preview mode.

The tech giant will also introduce Groundedness detection, aimed at identifying and mitigating instances of ‘hallucinations’ in model outputs. These hallucinations occur when AI models generate outputs that lack factual grounding or common sense, posing a risk to the reliability of AI-generated content. Safety System Messages is another feature Bird says will be available soon. It will guide the behavior of AI models toward safe and responsible outputs, along with

Finally, the Risk and Safety Monitoring tool will enable organizations to gain insights into model inputs, outputs, and end-user interactions, helping them make informed decision-making and risk mitigation strategies.

Microsoft remains at the forefront of developments in the generative AI landscape. The company has also reaffirmed its commitment to advancing safe, secure, and trustworthy AI, as well as empowering developers to build safe and reliable AI applications.

Did you like this article? Rate it!
I hated it I don't really like it It was ok Pretty good! Loved it!

We're thrilled you enjoyed our work!

As a valued reader, would you mind giving us a shoutout on Trustpilot? It's quick and means the world to us. Thank you for being amazing!

Rate us on Trustpilot
0 Voted by 0 users
Title
Comment
Thanks for your feedback
Loader
Please wait 5 minutes before posting another comment.
Comment sent for approval.

Leave a Comment

Loader
Loader Show more...