Opinion: Cofounders And Talents Are Leaving OpenAI, Are We Missing Something?


OpenAI is, without a doubt, one of the most disruptive companies in the tech industry right now. Its fascinating ChatGPT has dominated the chatbot assistant market, and its influence on individuals and society is immeasurable.

The company is currently worth over $80 billion, up from the $1 billion it was founded with in 2015. However, for the past few months, its organizational structure has been changing rapidly, almost as fast as its developments, new features, products, and models. But why?

Out of the 11 cofounders who initiated the project, only three-ish remain. John Schulman became the eighth cofounder to leave when he publicly announced his departure this week.

There has been internal drama, a CEO temporarily fired, open letters, former employees worried about safety, and whistleblowers saying that the company's non-disclosure agreements forbid them from raising concerns with regulators.

What should people know about the creators of the most advanced technology of our times? What are the facts and what are the conspiracy theories?

Let’s start from the beginning.

How It All Started: The Founders

OpenAI was launched in 2015 with a post titled "Introducing OpenAI." Its creators described OpenAI as a "non-profit artificial intelligence research company" with a clear goal: "to benefit humanity as a whole."

In the document, Ilya Sutskever was introduced as research director, Greg Brockman as chief technology officer, and Sam Altman and Elon Musk as co-chairs, alongside the rest of the founders: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba.

"Since our research is free from financial obligations, we can better focus on a positive human impact," wrote Sutskever and Brockman, along with the rest of the team, in that first post. They also clarified that a group of investors—including Altman, Musk, Amazon Web Services (AWS), YC Research, and others—had committed $1 billion, which they expected to last for years.

The dream team was ready and the dream organization could begin its work.

Cofounders Leave, CEO Sam Altman Gets Fired

Not long after OpenAI was launched, in 2016, Vagata quietly left the company to join Stripe. According to the Financial Times, the engineer does not mention OpenAI in her LinkedIn profile. The next year, in 2017, Cheung, Blackwell, and Karpathy quit too.

The fifth founder to leave was Musk, in February 2018. He quit the board after disputes with Altman in one of the most publicized departures, a quarrel that has persisted for years and remains active. Musk has filed multiple lawsuits against the company—the most recent one just a few days ago, over a deal with Microsoft. OpenAI finally shared a public explanation a few months ago, in March, stating that Musk had wanted to merge OpenAI with Tesla and fully control the company.

After Musk left, researcher and cofounder Kingma followed the trend and joined Google's AI project. The remaining cofounders seemed to have functional organizational machinery—despite big for-profit changes as OpenAI turned into a hybrid for-profit and non-profit organization—until drama with Sam Altman erupted and the CEO temporarily left the company in 2023. Something was definitely off again.

In 2023, Karpathy rejoined OpenAI, but he quit again in February 2024 to create his own AI education project, Eureka Labs, launched in July. Then Sutskever left his chief scientist role in May to create his own company, Safe Superintelligence. And just a few days ago, two more cofounders took a step back: Schulman announced his decision to join rival company Anthropic, and two hours later Brockman announced a sabbatical.

At the moment, only Altman, computer scientist Zaremba, and—technically—Brockman remain. So two and a half out of 11?

The Altman Drama

At the end of 2023, the world learned the shocking news: OpenAI's board had fired Altman on November 17. Over the following days, speculation about ChatGPT's safety swirled. In its public explanation of the leadership transition, the board claimed that Altman was not "consistently candid in his communications with the board."

“When ChatGPT came out, November 2022, the board was not informed in advance,” said Helen Toner, former OpenAI board member, during an interview for The TED AI Show podcast in May this year. “We learned about ChatGPT on Twitter,” she declared.

OpenAI's non-profit board was concerned about Altman's arbitrary decisions and big commercial agreements, like Microsoft's $13 billion in investments—what started as a strange business relationship and has just officially turned into an awkward competition.

American journalist Ezra Klein's analysis for the New York Times concluded that the board was simply trying to rein in the for-profit organization by sticking to the company's original philanthropic mission. It didn't work.

Altman is highly appreciated within the company and the tech sphere: about 90% of employees threatened to quit if he wasn't reinstated—including Sutskever, who had initially sided with the board. Several board members resigned, and Altman was back at the company within five days.

Theories And Facts

Making sense of the OpenAI drama is not an easy task. Many people believe that the development of new AI technologies has gotten out of hand at OpenAI and that former cofounders and employees are leaving the company over safety concerns. Some Reddit groups believe that OpenAI has already secretly developed artificial general intelligence models.

This is not just speculation. In June, former and current OpenAI employees—along with employees from other AI companies—signed a public warning about the technology's risks, from misinformation to human extinction.

Jan Leike—not a cofounder, but a key researcher at OpenAI—quit in May after three years with the company. "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," wrote Leike on X. "Over the past years, safety culture and processes have taken a backseat to shiny products."

The Superalignment project, created in July 2023 to "control AI systems much smarter than us," was dissolved after Sutskever left to build what appears to be the same project, but with autonomy. Anthropic, where a few ex-cofounders now work, was also created by former OpenAI workers to develop "the good and ethical models"—if that is even possible.

Now we are caught up, so what are we missing? Probably the acceptance that OpenAI is no longer an organization with multiple leaders but has become synonymous with one very powerful and influential man: Altman.
