OpenAI is facing mounting backlash after striking a controversial agreement with the Pentagon to deploy its artificial intelligence (AI) systems across US national security operations.
The most immediate internal consequence was the departure of Caitlin Kalinowski, a senior leader who previously headed OpenAI’s robotics division.
Kalinowski announced her resignation on social media over the weekend (March 7), saying the company moved too quickly on an agreement that raises serious questions about surveillance and autonomous weapons.
“This wasn’t an easy call,” Kalinowski wrote in a post on X. “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people.”
Kalinowski also said she believed the agreement with the Pentagon had been rushed without clear governance safeguards.
“It’s a governance concern first and foremost,” she said. “These are too important for deals or announcements to be rushed.”
OpenAI confirmed her departure but defended the agreement, saying the contract contains safeguards governing how its technology may be used.
“We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world,” the company added.
Deal deepens AI industry divide
Prior to the deal, the Pentagon had already been negotiating with Anthropic, the developer of the Claude chatbot, over a similar partnership.
However, Anthropic CEO Dario Amodei refused to allow the company’s technology to be used for domestic mass surveillance or military attacks conducted without human input.
The standoff ultimately collapsed, prompting President Donald Trump to order federal agencies to stop using Anthropic technology. The Defense Department later designated the company a “supply chain risk,” a label typically applied to entities tied to foreign adversaries.
OpenAI then stepped in to sign its own agreement with the Pentagon.
The contract allows OpenAI models to be deployed in classified environments while maintaining several restrictions. The company said its systems cannot be used for mass domestic surveillance, autonomous weapons systems or high-stakes automated decision-making without human oversight.
The agreement also includes a “cloud-only” deployment architecture, which OpenAI says prevents its models from being used directly in autonomous weapons operating on physical devices. Engineers with security clearances will remain involved in the deployment process, and the company said it retains control over its safety systems.
Still, critics argue that the agreement opens the door to a much deeper integration of artificial intelligence into US military operations.
The controversy intensified further when the US and Israel launched military strikes against Iran shortly after the agreement was finalized.
Consumer backlash
Shortly after the agreement was announced, downloads of Anthropic’s Claude chatbot surged while uninstallations of OpenAI’s ChatGPT spiked sharply.
Data from Sensor Tower showed ChatGPT uninstallations rising more than 295 percent on February 28, the day after the deal was made public.
Within days, Claude had climbed to the top spot among free apps on Apple’s (NASDAQ:AAPL) US App Store, overtaking ChatGPT. It also became the most downloaded productivity app on the platform.
The backlash has extended beyond digital platforms. Activists gathered outside OpenAI’s headquarters in San Francisco, launching what they described as a “QuitGPT” protest campaign against the company.
Lawmakers move to intervene
California Democratic Representative Sam Liccardo introduced an amendment to the Defense Production Act aimed at preventing the Pentagon from retaliating against companies that impose safety restrictions on high-risk technologies.
Liccardo argued that firms developing powerful AI systems should have a say in how their technology is deployed.
“Full disclosure: I am a Claude subscriber, though I can’t claim to have used it to create any homicidal bots,” Liccardo said during a House Financial Services Committee meeting.
“Regardless, when the company that designs and builds the jet fighter tells us when to use the brakes, we should listen. Instead, the Pentagon’s bureaucrats and lawyers believe they know better. They think they can fly the plane without brakes.”
The amendment ultimately failed on a 16-25 vote.
Altman acknowledges missteps
Facing mounting criticism, OpenAI CEO Sam Altman has sought to repair the company’s reputation by publicly acknowledging that the process of announcing the Pentagon agreement had been flawed.
“The process was definitely rushed, and the optics don’t look good,” Altman wrote on X shortly after the backlash began.
In a follow-up internal memo shared publicly, Altman also said OpenAI had revised the contract language to include clearer prohibitions against domestic surveillance using “commercially acquired” personal data.
He also reiterated that OpenAI’s models cannot be used to direct autonomous weapons systems and said the company’s engineers and safety researchers would remain directly involved in overseeing how the technology is deployed.
Securities Disclosure: I, Giann Liguid, hold no direct investment interest in any company mentioned in this article.