OpenAI Will Give U.S. Safety Institute Early Access To New Models

Sam Altman, OpenAI’s CEO, announced today on X that the company will give the U.S. Artificial Intelligence Safety Institute early access to its upcoming foundation model.

“As we said last July, we’re committed to allocating at least 20% of the computing resources to safety efforts across the entire company,” Altman wrote. “Our team has been working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations.”

Altman said he was excited about the decision and added that the company wants current and former employees to feel free to raise concerns about the AI technologies it is developing, something he described as important to the company and its safety plan.

According to TechCrunch, the U.S. Artificial Intelligence Safety Institute, part of the National Institute of Standards and Technology (NIST), assesses and addresses risks in AI platforms. OpenAI struck a similar agreement with the United Kingdom just a few weeks ago.

Altman’s announcement comes after individuals and organizations criticized the company’s safety practices. In June, several current and former OpenAI employees signed an open letter warning that AI could spread misinformation and become a risk to society. Several workers have also quit the company over safety concerns, including key OpenAI researcher Jan Leike, following the dissolution of the Superalignment team, which was responsible for addressing AI risks.

Despite the safety concerns, OpenAI keeps moving forward. The company just launched the voice feature for ChatGPT Plus, and its new AI-powered search engine, SearchGPT, is available to select users.

The new safety efforts suggest that Altman and his team have not forgotten about AI risks and concerns.
