Sam Altman calls for 'New Deal'-scale response to superintelligence

Sam Altman urges a government response akin to the Great Depression's New Deal in anticipation of superintelligence's disruptive impact.
OpenAI CEO Sam Altman has made an attention-grabbing appeal for government intervention on a scale comparable to the New Deal to manage the profound disruptions superintelligence could bring. Speaking to Axios, Altman emphasized that advanced AI, or superintelligence, is not a distant hypothetical but a near-term reality. He likened the societal adjustments required to those sparked by the Great Depression, arguably one of the most transformative periods in American history. Instead of framing this as mere rhetoric, Altman underscored the need for comprehensive policies to address the economic and societal shifts superintelligence will likely provoke.
The timing of Altman's comments is noteworthy. On the same day, OpenAI released a 13-page document outlining specific policy recommendations, offering a detailed framework for this proposed societal adjustment. The report situates superintelligence alongside transformative inventions such as electricity and the combustion engine, asserting that the arrival of true artificial intelligence represents a "civilizational-level transition." While the document aims to be informative, Altman's call for action is explicit: governments and institutions must start planning now for a world reshaped by superintelligence.
A timeline for disruption
One of the striking elements of Altman's remarks is the urgency of his timeline. While the arrival of generalized superintelligence, AI capable of outperforming humans across practically all domains, has often been imagined in science fiction or vague future scenarios, Altman flagged that early versions of superintelligence could emerge within just a few years. Some in the AI community believe a tipping point could occur as soon as 2028. Such estimates push the conversation from speculative debate into the practical realm of policy-making, economic planning, and societal preparation.
"When the person building the thing says we need Depression-era government intervention," observed the Axios commentary, "that's not hype. That's a liability hedge and a lobbying document at the same time." This duality reflects the tightrope Altman and OpenAI must walk: accelerating development of cutting-edge technologies while simultaneously advocating for regulatory frameworks to prevent catastrophic mismanagement or societal harm. The explicit comparison to the Great Depression is not merely metaphorical; it highlights the level of governmental response that Altman believes is essential to navigating the unprecedented challenges posed by superintelligence.
What does "a new social contract" mean?
Altman's call for a "new social contract" draws directly from historical parallels, particularly the Progressive Era and the policies enacted during Franklin D. Roosevelt's presidency in the 1930s. In the wake of the Great Depression, the New Deal delivered sweeping legislative reforms aimed at stabilizing the economy, expanding access to jobs, and updating social safety nets to address the inequities and vulnerabilities of the time. Altman suggests that superintelligence warrants a similarly systemic approach to mitigate potential economic upheaval, labor market disruptions, and shifts in societal norms.
The OpenAI document offers glimpses into what this might look like. While details were not included in the Axios report, earlier statements and proposals from Altman suggest that interventions could include measures like universal basic income (UBI), workforce retraining programs, and frameworks for equitable AI development. These ideas are not new to the tech world; UBI, for example, has been championed in various forms by figures as diverse as Elon Musk and Andrew Yang in response to automation. However, by framing superintelligence as a "civilizational-level" shift, Altman signals that the scale of the response will need to go far beyond traditional policy proposals.
Risks of inaction
The risks of ignoring Altman's warning are not abstract. During his Axios interview, he highlighted grave threats stemming from inaction, though he did not enumerate specifics. Judging from previous comments by OpenAI and similar organizations, the risks likely include unchecked AI development leading to massive economic inequality, societal fragmentation, or unintentional weaponization of AI systems. With even early iterations of generative AI causing industry-specific disruptions, from content creation to customer service, the prospect of more generalized intelligence only intensifies these concerns.
One particularly relevant risk is the potential erosion of trust in democratic institutions. Altman has previously noted the dangers of bad actors utilizing AI to spread misinformation on an unprecedented scale. Without proactive measures, the destabilizing effects of AI on the political process, economic systems, and personal freedoms could grow unchecked. By advocating for a guided governmental approach now, Altman is also seeking to build safeguards against these future scenarios.
Criticism of the OpenAI stance
Altman's proposals inevitably raise skepticism. Critics have historically accused OpenAI of delivering mixed messages: seeking to advance hyper-competitive AI development while simultaneously positioning itself as a cautious, regulation-friendly entity. The release of the policy document, coupled with his dramatic comparisons, leaves Altman and OpenAI open to accusations of self-interest, particularly as the company partners with governments and invests in projects that could benefit from new regulations they help shape.
A more fundamental criticism lies in whether centralized government intervention is the right mechanism for addressing the risks posed by superintelligence. Opponents might argue that innovation-driven disruption is inevitable and best managed through market mechanisms, with minimal regulatory overreach. Others warn about the risks of regulatory capture, where powerful entities like OpenAI could end up shaping rules in their favor, stifling competitors or smaller players in the AI field.
The policy window is now
Despite potential controversies, the argument for rapid policy-making is gaining traction. Altman's rhetoric, from his Congressional testimony earlier this year advocating for AI legislation to his current invocation of New Deal-scale transformations, underscores that the "policy window" for meaningful action is open right now. Policymakers, industry leaders, and society as a whole will be forced to grapple with the potential of AI to profoundly reshape everything from healthcare to defense to daily life.
Superintelligence, should it materialize within timelines as short as Altman suggests, cannot be shoehorned into existing legal frameworks. It demands a forward-thinking approach, informed by lessons of the past but not constrained by them. Whether the world listens to Altman remains to be seen. What is certain is that the debate he is provoking is no longer theoretical or limited to academic circles; it is a question of how humanity will prepare for a fundamental shift in the balance of power, labor, and life itself.
Staff Writer
Chris covers artificial intelligence, machine learning, and software development trends.