AI company sues Trump administration

The suit comes after the administration attempted to use AI for nefarious purposes.

The artificial intelligence company Anthropic is suing the Trump administration after the administration labeled the company a “supply chain risk” and blacklisted it from any future government contracts.

AI is a grotesque example of capitalism and “progress” being put above human life. It is harmful to the environment and to human development. It must be kept out of creative fields, and those who substitute it for their own critical thinking should be condemned. This criticism obviously extends to AI companies as well. Until recently, it was hard to imagine agreeing with an AI company about anything, and yet this is what it has come to.

The Trump administration has proven AI is not the worst wrong in the most frustrating way possible. Some backstory: The US makes a deal with Anthropic to use “Claude,” Anthropic’s AI product. The government then says, “Hey, we are gonna use this for war, and no humans will keep an eye on the AI’s war plans!” (i.e., mass domestic surveillance and fully autonomous lethal weapons). Anthropic is understandably concerned and says, “No, our AI will not be doing that.” The government then cuts off the contract and slanders the company, prompting Anthropic to sue in retaliation. Unfortunately, Anthropic has a point on this one.

What’s difficult to understand is why so many companies insist on incorporating AI in places where it is either unwanted or useless. Why does artificial intelligence have to be used for everything that we do? Using it to make spreadsheets is one thing, but art, music, and war plans should be 100% human. The US government is effectively handing a robot a gun and letting it do whatever it wants. An AI that can generate a reasonably convincing fake video is still very far removed from one that should be trusted with mass surveillance. Let’s just hope Claude wasn’t trained on George Orwell’s novels.

If a computer is drawing up our war plans, then what exactly is our “Department of War” doing? The administration had planned to vest full control in Claude, letting it run without any human oversight. Why do we even have this department if its work can be so easily automated? The reality is surely a bit more complicated than that, but let’s be honest with ourselves here.

AI critics agree with Anthropic and appreciate the company putting its foot down on at least one issue. However, it’s hard to believe for one second that this will stop the Trump administration from continuing with its moronic plan for automated warfare. There are other AI companies out there with less of a backbone than Anthropic, and they’re surely salivating at the idea of turning their AI models loose on civilian surveillance data.

As far as AI critics are concerned, more people need to be worried about this. Regimes like our current administration capitalize on confusion. They cause immense chaos so that their real plans slip through the cracks before anyone notices. My advice: let the AI bubble pop. OpenAI is hemorrhaging money on ChatGPT, so quit using it entirely. In this country, dollars talk far more than votes do; our only chance at getting the government to back off is to stop funding its corporate backers.
