The integration of America’s artificial intelligence industry into the military-industrial complex proceeds apace. The latest evidence comes in the form of two things that happened the week of the presidential election and so got little attention. One involved Meta, and the other involved Anthropic, one of the “big three” makers of proprietary large language models (along with OpenAI and Google).
The development involving Anthropic was particularly important, because its back story (which, like the story itself, didn't get much attention) is a testament to the formidable support in Silicon Valley for a confrontational approach to China.
Two days after the election, Anthropic announced that it would partner with Silicon Valley defense contractor Palantir to help US intelligence and defense agencies make use of Anthropic's Claude family of LLMs. Three days earlier, Meta had said it was making Llama, its large language model, "available to US government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work." Those partners will include Lockheed Martin and, again, Palantir (whose eccentric CEO was profiled by NZN's Connor Echols in September).
In undertaking these military collaborations, both Meta and Anthropic crossed a line they had never crossed before.
In the case of Meta, the move had a fairly obvious explanation. Meta's announcement came on the heels of reports that researchers linked to the Chinese military were using Llama. That news was ammunition for those who had warned that open-source models such as Llama could be exploited by China and had argued that the US should therefore require the most powerful LLMs to be closed source. Such a regulation would undermine Meta CEO Mark Zuckerberg's AI strategy, so it makes sense that Meta would counter this narrative by casting its open-source AI as a US military asset.
Anthropic's move had a less obvious motivation and even seemed, superficially, out of character. Though there's a long tradition of tech companies doing business with the Pentagon (since, after all, profit is their goal), Anthropic had always seemed less profit-driven than the average company. It's a "public benefit" corporation that was founded by people who left OpenAI because they felt that Sam Altman, OpenAI's boundlessly ambitious CEO, wasn't taking AI safety seriously enough. Whereas OpenAI opposed a recent AI safety bill (SB 1047) that was vetoed by California Governor Gavin Newsom, Anthropic didn't. Surely, if any major AI company were going to resist the allure of the military-industrial complex, it would be Anthropic?
Guess again.
Anthropic's embrace of the Pentagon shouldn't have surprised anyone who had been closely reading the tea leaves. Co-founder and CEO Dario Amodei, in a few largely overlooked paragraphs of an AI manifesto he posted in October, had shown himself to be no lefty peacenik. Most discussion of the 15,000-word essay (including Amodei's epic recent conversation with podcaster Lex Fridman) had emphasized its effusive praise of AI's possible benefits and had ignored something else in it: