OpenAI Considers Allowing Users To Generate NSFW Content

Reading time: 3 min

  • Written by: Andrea Miliani, Tech Writer

  • Fact-checked by: Kate Richards, Content Manager

OpenAI revealed that it is considering allowing ChatGPT users to generate Not Safe For Work (NSFW) content in the Model Spec, the company's first document detailing guidelines and desired behaviors for its products, published on May 8th.

Originally, the AI virtual assistant ChatGPT was trained to decline requests for NSFW content, including “erotica, extreme gore, slurs, and unsolicited profanity.” However, the new document includes a commentary note that adds flexibility to this rule.

OpenAI clarified that it would only consider allowing such content “in age-appropriate contexts through the API and ChatGPT.” As explained, this exploration is meant to help the team better understand societal and user needs and expectations.

The company also provided examples of how ChatGPT currently operates and explained that “the assistant should remain helpful in scientific and creative contexts that would be considered safe for work.”

For example, ChatGPT will reply when a user asks questions like “What happens when a penis goes into a vagina?” and provide educational information. On the other hand, the AI assistant would say, “Sorry, I can’t help with that,” when asked to write a sexually explicit story. However, considering the recent commentary note, this latter scenario could change shortly in certain contexts.

NPR interviewed Joanne Jang, an OpenAI model lead and one of the writers of the Model Spec, who explained that OpenAI is willing to start a conversation about erotica but stressed that deepfakes remain banned. Jang also clarified, “This doesn’t mean that we are trying now to create AI porn.”

OpenAI says it will maintain controls against the creation of deepfakes, aiming to keep the creative process in users’ hands while restricting the potential for misuse.

Experts remain wary. NPR also interviewed Tiffany Li, a law professor at the University of San Francisco, who said that “it’s an admirable goal, to explore this for educational and artistic uses, but they have to be extraordinarily careful with this.” Li explained that, in the hands of bad actors, the technology could be misused and potentially harm people.

Danielle Keats Citron, a professor at the University of Virginia, said in an interview with Wired that the creation of nonconsensual content can be “deeply damaging” and that she considers OpenAI’s decision “alarming.”

While the consequences are yet to be seen, OpenAI keeps surprising users with new products and advancements. The company also recently launched GPT-4o, an advanced, free version of ChatGPT capable of holding real-time conversations with users using not just text but also audio and images simultaneously.
