Pentagon signs deals with 7 tech firms to bring AI onto classified systems

The US military has struck agreements with seven major technology companies, including OpenAI, Google, and SpaceX, to deploy their artificial intelligence tools within classified environments. The deals mark a significant shift in military access to commercial AI.
The United States military has formalized agreements with seven technology companies to bring their artificial intelligence capabilities into classified computing environments. Officials announced the deals this week, confirming that OpenAI, Google, and SpaceX are among the participants. The remaining four companies were not named in the brief announcement, which also did not disclose financial terms or specific use cases.
The agreements allow the Department of Defense to tap into the advanced AI models and infrastructure developed by some of the most prominent names in commercial technology. Until now, many of these companies — particularly OpenAI — maintained policies that restricted military applications of their tools, citing safety and ethical concerns. The new deals effectively carve out a pathway for classified defense work.
What the deals cover
The exact scope of each agreement remains classified. But the broad language from officials suggests that the Pentagon will be able to deploy the companies' AI models on secure, air-gapped systems that handle sensitive national security data. This could include using large language models for intelligence analysis, logistics planning, or autonomous decision support — the same kind of capabilities that defense contractors have been building in-house for years.
SpaceX's inclusion is notable because the company is primarily a launch and satellite communications provider, not an AI software house. Its Starlink constellation generates enormous amounts of data, and the military has already experimented with using that data for battlefield awareness. The deal may give the Pentagon access to SpaceX's proprietary machine learning algorithms for satellite imagery analysis or network optimization.
Google and OpenAI are the group's AI heavyweights. Google has long worked with the military on cloud services through its Google Cloud division, but the company has been cautious about applying its most advanced generative AI models to weapons or surveillance. OpenAI, founded as a nonprofit with a mission to ensure AI benefits all of humanity, revised its usage policies in early 2024 to allow "national security" applications, opening the door for contracts like this one.
The shift from ethical guardrails to national security
OpenAI's policy change was a watershed moment. The company had previously banned its technology from being used in "weapons development" or "military and warfare." The revision replaced those blanket prohibitions with a framework that permits work deemed critical to national security while still forbidding uses that violate international law or cause harm. The company said it wanted to contribute to the defense of democratic values.
Critics were quick to pounce. Open letters from AI researchers and advocacy groups argued that the change would accelerate an AI arms race and create new pathways for autonomous weapons. The Pentagon, however, has made clear it views AI as a decisive advantage: the deals come after the department described AI as a "critical enabler" of future military capabilities, though officials did not elaborate on that designation in the announcement.
This is not the first time the US military has turned to Silicon Valley for cutting-edge technology. During the Cold War, companies like IBM and Lockheed Martin built specialized hardware for the Pentagon. In the last decade, the Defense Department's Joint Artificial Intelligence Center (JAIC) sought partnerships with major cloud providers. Project Maven, a 2017 effort to use machine learning for drone video analysis, sparked employee protests at Google and prompted the company to issue its own set of AI principles that limited military use. Those principles have now been effectively sidelined.
Seven companies, but which seven?
The Pentagon's announcement named three firms: OpenAI, Google, and SpaceX. Speculation about the remaining four is unavoidable but should be disciplined. Likely candidates include Microsoft, which held the Pentagon's $10 billion JEDI cloud contract before it was canceled and replaced by the multi-vendor JWCC program; Amazon Web Services, which operates air-gapped Secret and Top Secret cloud regions for US intelligence and defense customers; and possibly Palantir, the data analytics company with deep ties to defense and intelligence agencies. Meta and Apple are less probable, given their limited history with classified government work. Without official confirmation, however, the full list remains unknown.
Why this matters for the industry
The deals represent a departure from the model in which the military relies primarily on traditional defense contractors such as Raytheon, Northrop Grumman, and Boeing for AI capabilities. Those companies have strong integration skills but rarely match the cutting-edge R&D velocity of a company like OpenAI. By contracting directly with commercial AI leaders, the Pentagon hopes to stay ahead of adversaries — especially China, which has prioritized AI for military modernization.
For the tech companies themselves, the agreements offer a lucrative revenue stream and a seal of approval that can help in competing for other government contracts. They also come with reputational risk. Employees at Google and OpenAI have publicly protested such collaborations in the past, and internal tensions may rise again. The companies are banking on the idea that defending democratic nations against authoritarian rivals is a morally defensible line, but the line between defense and offense in AI is notoriously blurry.
Limitations and concerns
Classified settings impose unique constraints on AI. Models cannot be connected to the public internet, which means they must run entirely on local, air-gapped hardware — a challenge for the massive neural networks that power today's chatbots, which are normally served from sprawling cloud data centers. Companies like OpenAI and Google will need to either shrink their models or supply specialized on-premises hardware that can handle the computational load in a closed environment.
There is also the problem of reliability. Generative AI models are known to hallucinate facts, and in a military context, even a small error could have catastrophic consequences. The Pentagon will need to implement rigorous validation and human oversight mechanisms before these tools are used in any operational capacity.
Finally, the secrecy surrounding the deals troubles transparency advocates. Publicly funded development should come with at least some disclosure, they argue, but the classification system makes that nearly impossible. The American people may never know exactly what AI capabilities the military has acquired or how they are being used.
What comes next
The Pentagon is expected to roll out these AI capabilities incrementally. Initially, the models will likely be used for analytical tasks such as sifting through intelligence reports or translating foreign-language documents. More consequential applications — such as targeting recommendations or autonomous vehicle control — will require additional testing and likely new policy frameworks.
Congress has shown bipartisan interest in accelerating military AI adoption. The recently passed National Defense Authorization Act includes provisions for AI pilot programs and increased funding for the Chief Digital and Artificial Intelligence Office. The seven deals announced this week are a concrete outcome of that political momentum.
For the tech industry, the message is clear: defense is now open for business. Companies that once drew firm ethical lines between commercial and military uses are erasing them. Whether this alignment ultimately strengthens national security or ignites a new era of AI-powered conflict will depend on the rules of engagement — and those rules are still being written.
Staff Writer
Chris covers artificial intelligence, machine learning, and software development trends.