The Impending Challenge of AGI: Insights from Miles Brundage's Departure

The Warning on Artificial General Intelligence (AGI)

Miles Brundage, a senior adviser at OpenAI, raised alarms about preparedness for Artificial General Intelligence (AGI) as he announced his departure from the organization. According to Brundage, neither the world nor OpenAI itself is ready for the arrival of human-level AI. "Neither OpenAI nor any other frontier lab is ready [for AGI], and the world is also not ready," he stated, a sentiment he believes is shared among OpenAI's leadership.

The Context of Brundage's Departure

This announcement comes on the heels of several high-profile exits from OpenAI's safety teams. Notably, Jan Leike, a prominent researcher, left his post saying that safety culture had taken a backseat to product development. His departure reflects a troubling trend as OpenAI shifts increasingly toward commercialization.

Disbanding of Safety Teams

The recent dissolution of Brundage's "AGI Readiness" team, following the earlier disbandment of the "Superalignment" team, exemplifies the growing tension between OpenAI's commercial ambitions and its original mission of AI safety. Reports indicate that the company is under pressure to convert from a nonprofit entity to a for-profit public benefit corporation within two years, or else risk having to return funds from its recent $6.6 billion investment round.

Concerns Over AI Safety and Research Freedom

Brundage's concerns about the balance between safety protocols and commercial interests date back to 2019, when OpenAI first established its for-profit division. He has stressed the importance of independent voices in discussions of AI policy, free from the conflicts of interest that industry affiliation can create.

Cultural Divide Within OpenAI

Brundage's departure highlights deeper cultural rifts within OpenAI: many researchers joined to advance AI research and now find themselves in a heavily product-driven environment. Resource allocation became a significant point of friction, with reports that Leike's team was denied the computing power it needed for safety research before it was disbanded.

Support for Future Endeavors

Despite these tensions, Brundage said that OpenAI has offered to support his future work with funding, API credits, and early model access, all without conditions. The gesture signals OpenAI's acknowledgment of his contributions and an intent to stay collaborative after his departure, even as it underscores the complexities within the organization.

In conclusion, Brundage's post-departure remarks serve as a cautionary reminder of the pressing need for a balanced and ethical approach to AGI development, and they set a precedent for future conversations on AI governance.