
AI Could Soon Think in Its Own Language, Says AI Godfather Geoffrey Hinton — And That’s Terrifying

In a stark and thought-provoking message, Geoffrey Hinton, the legendary computer scientist often referred to as the “Godfather of AI,” has raised fresh concerns about the rapidly evolving capabilities of artificial intelligence. According to Hinton, AI systems may soon invent their own internal languages, making it increasingly difficult for humans to track or interpret their intentions. And that, he warns, is where things get truly unsettling.

The Terrifying Possibility of AI Creating Its Own Language

Speaking on the One Decision podcast, Hinton cautioned that while current AI systems “think” in English or other human languages, giving developers some insight into their decision-making processes, these systems may soon evolve to communicate and reason in their own opaque, machine-native languages.

“Now it gets more scary if they develop their own internal languages for talking to each other,” Hinton explained. “I wouldn’t be surprised if they developed their own language for thinking, and we have no idea what they’re thinking.”

Such a development could render humans incapable of understanding or predicting AI behavior, posing an enormous risk, especially as these systems surpass human intelligence in certain areas.

A Pioneer’s Shift in Perspective

Geoffrey Hinton is no ordinary tech observer. A pioneer of deep learning, he laid much of the groundwork for today’s AI breakthroughs, including foundational work on training neural networks. Until recently, he worked at Google, helping to guide its AI development. However, in a move that sent shockwaves through the tech world, Hinton resigned from his position so he could speak more openly about the potential dangers of the technology he helped create.

He likened the impending AI revolution to the Industrial Revolution but with far more profound consequences.

“It will be comparable to the Industrial Revolution,” he said. “But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us.”

According to Hinton, the risk is not just theoretical. If AI reaches a point where it is more intelligent than humans and becomes uncontrollable, it could pose existential threats, especially in scenarios where its objectives diverge from our own.

The Urgent Need for Regulation and Research

One of Hinton’s core messages is the need for urgent, thoughtful regulation. With AI development accelerating at a breakneck pace, he argues, governments and global institutions must establish safeguards before it’s too late.

His warning is echoed by recent findings from AI labs. In April, OpenAI revealed that its latest reasoning models, o3 and o4-mini, were “hallucinating,” or generating false information, at a higher rate than their predecessors. In a published report, OpenAI admitted it still doesn’t fully understand why these hallucinations are increasing.

“More research is needed,” the company stated, acknowledging that scaling up intelligence in AI models seems to come with unpredictable side effects.

A Wake-Up Call for the AI Industry

Hinton’s warning comes at a time when AI is being rapidly integrated into everyday life, from healthcare to finance to education. The technology offers incredible promise, but his message is a critical reminder that with great power comes great responsibility.

If machines start communicating in ways that we cannot decipher or control, the consequences could be far-reaching. AI developers, policymakers, and society as a whole must confront this possibility now, before artificial intelligence becomes a force we no longer understand.
