
Does OpenAI’s US$6.6 billion raise prove Elon Musk was right?

Elon Musk has been criticising OpenAI and its alleged shift to a for-profit company for a while now. And maybe he was right.
Tegan Jones
Elon Musk. Source: BRITTA PEDERSEN/POOL/AFP VIA GETTY IMAGES

I hate to say it. And you probably hate to hear it. But maybe Elon Musk was right. Specifically, while the tech industry was singing the praises of OpenAI, Musk — a co-founder turned critic — was publicly sceptical of the business he helped found. And with increasingly good reason.

This week, OpenAI closed a gargantuan US$6.6 billion ($9.6 billion) funding round — the largest in venture capital history. This pushed the company’s valuation to a staggering US$157 billion.

The AI world had been anticipating this moment with bated breath for months. The backers included familiar names like Thrive Capital, Microsoft, and Nvidia.

However, eyebrows have been raised by notable absentees, including Apple, which quietly bowed out after early talks despite collaborations between the companies for iOS 18.

While the eye-watering amount of funding cements OpenAI’s status as the central player in the generative AI landscape, it raises fresh concerns about the company’s direction and future.

OpenAI’s shift from mission to profit

When OpenAI first launched in 2015, it had a far more idealistic mission: that AI should benefit humanity. It started as a non-profit initiative, designed to act as a counterbalance to companies like Google, which Musk saw as being too focused on profit at the expense of safety.

However, during the past year – marked by board squabbles, founders leaving, the firing and rehiring of CEO Sam Altman, and growing concerns around safety – OpenAI has noticeably drifted from those origins.

“…now it has become a closed-source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all,” Musk tweeted in February 2023.

Musk, who cut ties with the company in 2018 due to conflicts of interest with Tesla’s AI research, has repeatedly criticised the evolution of OpenAI.

He was also one of 1,800 signatories who called for a six-month pause on ‘giant’ AI experiments in early 2023.

Since then, Musk launched his own AI company, xAI, with the somewhat vague mission to “understand the true nature of the universe”. xAI also launched Grok, a ChatGPT competitor inspired by The Hitchhiker’s Guide to the Galaxy.

With OpenAI’s massive new raise and its shift toward a for-profit structure, Musk’s concerns seem increasingly relevant.

Although OpenAI began as a non-profit, it launched a for-profit subsidiary in 2019 to raise capital while still being directed by the non-profit.

This structure allowed the company to issue equity while legally binding the for-profit arm to pursue the non-profit’s mission of developing AGI that “benefits all of humanity”.

The idea was to balance commercial success with ethical AI development, ensuring that profit wouldn’t dominate its priorities.

OpenAI’s governance model includes caps on financial returns to both investors and employees, designed to ensure that safety and broad societal benefit remain central.

The non-profit board, which retains full control over the for-profit subsidiary, has a fiduciary duty to keep the company aligned with its original mission of safe AGI development.

However, this balancing act has become increasingly strained, as the pressure to deliver returns mounts.

This will likely escalate with such a historic investment entering OpenAI’s coffers. While the raise is impressive, it also points to the immense financial pressure the company is under.

Despite the enthusiasm surrounding its generative AI tools like ChatGPT, OpenAI is losing money with every interaction. As Edward Zitron recently pointed out, the infrastructure costs of maintaining tools like ChatGPT are spiralling out of control, and the company is bleeding cash.

Even with over 250 million weekly active users, OpenAI is struggling to monetise at a level that justifies its massive valuation. Each use of the free version of ChatGPT costs the company money, and its path to profitability looks increasingly tumultuous.

In 2024, OpenAI is projected to lose up to US$5 billion on US$3.7 billion in revenue. For a company that started with a lofty mission of advancing AGI for the benefit of humanity, its current state is driven by an unsustainable need to scale rapidly at any cost.

The question now is whether safety and the pressure to generate profits can truly coexist — or whether OpenAI will continue to drift further from its altruistic roots.

Growing pains: Product vs research

OpenAI’s shift toward profitability has also resulted in internal friction, particularly between its product and research teams.

Last week, OpenAI’s CTO Mira Murati and two of the company’s top researchers resigned, citing safety concerns and the company’s fast-paced push for product development.

This rush to commercialise — evident in the push to roll out paid tiers and voice functionality — could be seen as the company creating a culture where speed outweighs research.

While the company denies deprioritising safety, the departure of key staff is enough to raise eyebrows.

We also saw tensions culminate last year when the board fired CEO Sam Altman, shocking employees, investors and partners such as Microsoft. More than 500 employees threatened to resign at the time. Altman was reinstated within days, but the entire ordeal pointed towards leadership struggles and issues within the company that may not have entirely dissipated.

High-profile departures, such as co-founders Durk Kingma and John Schulman joining rival Anthropic, further highlight OpenAI’s internal challenges.

The ongoing clash between its safety mission and aggressive commercialisation has left many questioning whether the company can balance these priorities, especially after the latest investment.

As OpenAI moves further away from its non-profit roots and becomes more intertwined in the world of VC, one has to wonder what this means for the future of AI.

Was Elon Musk — perhaps the most controversial tech bro in a veritable sea of them — actually right on this one?

How can a company with such immense financial pressure still prioritise safety? Will the allegedly divided product and research factions at OpenAI ever reconcile their differences? And will it eventually abandon its mission charter entirely?
