
Is OpenAI Scaring The World To Slow Down Its Competitors?

OpenAI warns of “catastrophic” risks from superintelligent AI and calls for slower development. But is the tech giant playing it safe or playing to win?

OpenAI has issued another warning about the future of artificial intelligence. The ChatGPT maker said that while superintelligent systems could bring huge benefits, they could also cause “potentially catastrophic” harm if not handled properly.

The company published a blog post on November 6, saying the AI industry “should slow development to more carefully study these systems.” OpenAI believes more research is needed to ensure advanced AI models stay safe and under control.

The company also hinted that the world is close to creating “systems capable of recursive self-improvement.” In simple terms, this means AI that can upgrade itself, moving closer to artificial general intelligence (AGI) – the point where machines could outperform humans in most tasks.

OpenAI said, “Obviously, no one should deploy superintelligent systems without being able to robustly align and control them, and this requires more technical work.”

Is This A Smart Warning Or A Strategic Move By OpenAI?

Many will see OpenAI’s statement as a sensible call for caution. But others suspect the company may be using safety concerns to slow down its competitors. OpenAI already leads the AI race, and if rival labs pause development, it could gain more time to strengthen its own systems.

The timing also raises questions. The warning comes soon after Prince Harry and Meghan Markle joined other public figures, including Steve Bannon and Glenn Beck, in calling for limits on AI “superintelligence.” They argue that AI surpassing human intelligence could pose serious risks to humanity.

Still, some researchers say such fears are premature. AI scientist Andrej Karpathy shared a different view, saying AGI is still years away. On a recent podcast, he said, “They don’t have continual learning. You can’t just tell them something and they’ll remember it. They’re cognitively lacking.”

What Has OpenAI Planned For The Future?

In the same post, OpenAI said current regulations won’t be enough to manage future AI risks. It suggested working with government agencies around the world to prevent the misuse of AI, especially in sensitive areas like bioterrorism.

The company also proposed building an “AI resilience framework,” much like the cybersecurity infrastructure that protects internet users. It would include safety standards, emergency response teams, and strong monitoring systems.

Despite its concerns, OpenAI sounded cautiously hopeful about the future. It expects AI systems to make small scientific discoveries by 2026 and larger ones by 2028. However, the company warned that the economic shift caused by AI could be tough, even suggesting that “the fundamental socioeconomic contract” might need to change.
