OpenAI CEO Sam Altman sees “a lot of value” in AI hallucinations

While hallucinations are one of the major concerns around generative AI, OpenAI CEO Sam Altman has a different perspective.
Tegan Jones
Sam Altman (left) with Marc Benioff of Salesforce at Dreamforce 2023. Image: Salesforce

AI hallucinations have become one of the strongest negative talking points when it comes to the rollout of the new technology. But according to OpenAI’s CEO, Sam Altman, it’s actually a benefit.

AI hallucinations occur when large language models (LLMs) deliver false information. It’s a big problem that has persisted since generative AI rose in popularity with the release of ChatGPT last year.

Chatbots have been considered unreliable — certainly not foolproof — due to their likelihood of delivering incorrect information. It’s something businesses need to be particularly wary of as they introduce generative AI into their workplaces. Or, in some cases, when employees introduce it without their bosses knowing.

But the CEO of OpenAI — the creator of ChatGPT — offered a different view at Salesforce’s Dreamforce event this week.

“One of the non-obvious things is that a lot of value from these systems is related to the fact that they do hallucinate,” Altman said in an interview with Salesforce CEO Marc Benioff.

“If you want to look something up in a database, we already have good stuff for that.

“The fact these AI systems can come up with new ideas and be creative — that’s one of the powers.”

Altman tied AI hallucinations back to creative fields being among the first to be disrupted by generative AI. He admitted to being initially surprised by this, having assumed a decade ago that physical tasks, then cognitive tasks, would be impacted before creative ones.

According to Altman, an LLM presenting information in a different way — what most people would call “incorrect” — could be considered creativity.

It’s certainly an interesting perspective, and there is something to it. LLMs can identify subtle correlations and details in datasets, and when one tries to fill gaps in its data, its hallucinations can be detailed, complex and potentially something to learn from.

Still, considering how hard workplace AI adoption is being pushed, organisations are likely to value things like accuracy and productivity a little higher than an AI’s ability to be “creative” with the information it spits out. At least this early in the game.

During the same interview, Altman commented that AI tools will make better art than we’ve ever seen before.

Artists who have seen their work utilised by LLMs without their permission may have a different opinion on that.

The author travelled to San Francisco as a guest of Salesforce.