I unsubscribed from ChatGPT and subscribed to Claude
I had enough information to unsubscribe from ChatGPT and switch to Claude. So I did.
Let me start by saying that acting on incomplete information about extremely complex topics is risky. I might be wrong. The possibility that I might be wrong does not mean, however, that I should wait until I have complete information or understand all the relevant factors. That level of understanding may be impossible to attain. And in this case, my personal risk is low. Maybe Claude doesn’t work as well as ChatGPT, but that’s about it. The possible benefits of mass movement away from ChatGPT (OpenAI’s product) are significant. If enough consumers act, companies may receive a message that standing by democratic principles is good for business.
The Department of Defense retaliates against Anthropic
Here’s my understanding. In July 2025, Anthropic (the maker of Claude) and the Department of Defense entered into a $200 million contract. Anthropic and DoD then had a dispute about whether the military’s proposed use of AI would violate Anthropic’s principles. According to Anthropic, DoD asked it to remove programmatic safeguards that would preclude DoD from using the company’s products for mass surveillance of U.S. citizens or fully autonomous weapons systems.
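To make the dispute concrete: a “programmatic safeguard” is a restriction enforced in code, before a request ever reaches the model, rather than a promise written into a contract. Here is a minimal, purely hypothetical sketch of the idea in Python; the category names, the keyword classifier, and the call_model function are all invented for illustration and do not reflect Anthropic’s actual implementation, which is not public.

```python
# Hypothetical sketch of a "programmatic safeguard": a policy check that runs
# before any model call and refuses entire categories of use outright.
# Everything here (categories, classifier, call_model) is invented for
# illustration; Anthropic's real enforcement machinery is not public.

PROHIBITED_CATEGORIES = {
    "mass_surveillance",   # e.g., bulk monitoring of citizens
    "autonomous_weapons",  # e.g., weapons that select and engage targets on their own
}

def classify_request(prompt: str) -> set:
    """Toy classifier. A real system would use a trained model and request
    metadata, not keyword matching. Returns the policy categories triggered."""
    triggered = set()
    lowered = prompt.lower()
    if "monitor all citizens" in lowered or "bulk surveillance" in lowered:
        triggered.add("mass_surveillance")
    if "autonomous strike" in lowered or "engage targets without" in lowered:
        triggered.add("autonomous_weapons")
    return triggered

def call_model(prompt: str) -> str:
    """Stand-in for the actual model API call."""
    return "<model response>"

def guarded_completion(prompt: str) -> str:
    """Refuse prohibited requests before the model ever sees them."""
    violations = classify_request(prompt) & PROHIBITED_CATEGORIES
    if violations:
        raise PermissionError(f"Refused under usage policy: {sorted(violations)}")
    return call_model(prompt)
```

The only point of the sketch is that such refusals are enforced mechanically, and can be removed mechanically. That is what made DoD’s demand concrete: it was asking the vendor to delete the check, not to reinterpret a clause.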
Secretary of Defense Pete Hegseth absurdly threatened that the government would (1) compel Anthropic to allow use of its products by the military and/or (2) declare Anthropic to be a national security risk and bar use of its products by the military. As many have observed, these threats are self-contradictory and reflect the government’s bad faith. Anthropic’s product is either essential to national security or a threat to national security, but not both, and if the government doesn’t know which, then it doesn’t know anything and should not be making threats.
To its credit, Anthropic refused to remove programmatic safeguards, as DoD had demanded. In retaliation, the Trump administration ordered federal agencies to stop working with Anthropic and declared Anthropic to be a “supply chain risk to national security,” which, the administration says, means that other companies can’t work with Anthropic either. If the administration’s statement were taken at face value, this would be a huge admission of error—as it turns out, the military has been working with a national security threat this whole time! But nobody takes the statement at face value; rather, the Trump administration is engaged in unlawful retaliation, again.
OpenAI takes advantage of DoD retaliation
In the wake of this blowup, OpenAI swooped in and made a deal with DoD to provide its AI systems for “any lawful purpose.” This is where things get very murky, very fast. When the Trump administration promises to use a system for “any lawful purpose,” that promise is meaningless. The President has already said that law imposes no restraints on him in international affairs. OpenAI reportedly will install guardrails, but this is where the details matter most. The content and operational effect of those guardrails are everything. For its part, OpenAI claims that its agreement with DoD has “more guardrails” than previous deployments, including Anthropic’s. If that is true, then what the hell is going on? Why would DoD fire Anthropic for insisting on guardrails, then hire OpenAI and agree to even more guardrails?
A former OpenAI researcher, Sarah Shoker, thoughtfully points out that these claims are nearly impossible to evaluate at a technical or objective level. DoD’s autonomous weapons policy is dense, subject to interpretation, and classified in its implementation. OpenAI’s policy is also subject to interpretation and also highly confidential in its implementation. And the inner workings of AI itself are often inscrutable, even for AI experts, which most of us are not.
Assessing the available information
Although the information I have is incomplete, I don’t think we need to know exactly how complex legal contracts interact with far more complex technical systems to conclude that the Trump administration and OpenAI are, probably, up to no good. To be clear, I am not arguing that we should not try to learn how the contracts or systems work, or that these legal and technical points are irrelevant. Far from it. Instead, I am arguing that we should use the information we have to predict the answers to our pressing questions. Specifically, the fact that OpenAI stepped into the breach after DoD threatened and then retaliated against a rival company strongly suggests that OpenAI will be “flexible” in the future, purported guardrails or no.
There is another important reason to doubt the sincerity of OpenAI’s commitment to guardrails. As others have observed, the reason why OpenAI was able to win the military contract immediately after DoD pushed out Anthropic may have been that OpenAI’s president, Greg Brockman, gave $25 million to Trump. Or excuse me, gave $25 million to Trump’s SuperPAC. The fix was always in; OpenAI paid to play and Anthropic did not. This enabled DoD to play a game of heads-I-win-tails-you-lose: it could issue an ultimatum to Anthropic and be indifferent about the result. If Anthropic capitulated and agreed to whatever DoD wanted, DoD would win a hostage AI company. If Anthropic resisted, DoD could substitute a demonstrably transactional and compliant AI company in its place. Win-win.
I must admit: I did not know that OpenAI’s Brockman had donated $25 million to Trump before I started researching and writing this post today. The zone is flooded. I consume more political news than is probably healthy and still missed it. This contribution alone changes my opinion of the company. There is no good-faith reason to give $25 million to Trump, of all people. If it is a bribe for procuring the OpenAI contract (which would be logical in the usual greedy and self-interested way, but illegal), Brockman cannot be trusted. If it is not a bribe, then Brockman has such bad judgment that he cannot be trusted. Either way, he cannot be trusted. And OpenAI cannot be trusted when it says that it has installed stringent guardrails.
Conclusion: Resist and unsubscribe
The guardrails matter. I would not take a single step toward allowing the Trump administration to conduct mass surveillance of Americans or develop fully autonomous weapons systems. Based on the information I have, the government’s substitution of OpenAI for Anthropic seems like a step in the wrong direction. So, I’m unsubscribing from ChatGPT and subscribing to Claude.

