Scale AI CEO Alexandr Wang testifies before the House Armed Services Subcommittee on Cyber, Information Technologies and Innovation during a hearing on battlefield AI on Capitol Hill in Washington, July 18, 2023.
Jonathan Ernst | Reuters
Scale AI announced on Wednesday a landmark agreement with the Department of Defense that could be a turning point in the controversial use of artificial intelligence in the military.
The AI giant, which provides training data to major AI players including OpenAI, Google, Microsoft and Meta, has been awarded a prototype contract by the Department of Defense for "Thunderforge," the DoD's flagship program for using AI agents in U.S. military planning and operations, according to a release.
It is a multimillion-dollar deal, according to a source familiar with the situation, who requested anonymity due to the confidential nature of the contract.
Led by the Defense Innovation Unit, the program will involve a team of "global technology partners," including Anduril and Microsoft, to develop and deploy AI agents. Use cases will include modeling and simulation, decision-making support, proposed courses of action and even automated workflows. The program's rollout will begin with U.S. Indo-Pacific Command and U.S. European Command, then expand to other commands.
"Thunderforge marks a decisive shift toward data-driven warfare, ensuring that U.S. forces can anticipate and respond to threats with speed and precision," according to a release from DIU, which also said the program will "accelerate decision-making" and enable "AI-powered wargaming."
"Our AI solutions will transform today's military operating process and modernize American defense. ... DIU's enhanced speed will provide our nation's military leaders with the greatest technological advantage," CEO Alexandr Wang said in a statement.
Both Scale and DIU emphasized speed and how the technology will help military units make much faster decisions. DIU mentioned the need for speed (or synonyms for it) eight times in its release.
Doug Beck, DIU's director, emphasized "machine speed" in a statement, while Bryce Goodman, DIU's Thunderforge program lead and contractor, said there is currently a "fundamental mismatch between the speed of modern warfare and our ability to respond."
Although Scale's release notes that the program will operate under human oversight, DIU did not emphasize that point.
AI partnerships
Scale's announcement is part of a broader trend of AI companies not only walking back bans on military use of their products, but also entering partnerships with defense industry giants and the Department of Defense.
In November, Anthropic, the Amazon-backed AI startup founded by former OpenAI research executives, and defense contractor Palantir announced a partnership with Amazon Web Services to "provide U.S. intelligence and defense agencies access to [Anthropic's] Claude 3 and 3.5 family of models" on AWS. This fall, Palantir signed a new five-year contract worth up to $100 million to expand U.S. military access to its Maven AI warfare program.
In December, OpenAI and Anduril announced a partnership allowing the defense technology company to deploy advanced AI systems for "national security missions."
The OpenAI-Anduril partnership focuses on "improving the nation's counter-unmanned aircraft systems and their ability to detect, assess and respond to potentially lethal aerial threats in real time," according to a release at the time, which added that the deal will help reduce the burden on human operators.
Anduril, co-founded by Palmer Luckey, did not answer a question from CNBC at the time about whether reducing the burden on human operators would translate into fewer humans involved in high-stakes warfare decisions.
At the time, OpenAI told CNBC that it stands by the policy in its mission statement prohibiting the use of its AI systems to harm others.
But that is easier said than done, according to some industry professionals.
"The problem is that you don't have control over how the technology is actually used, not in its current uses, and certainly not in the long run once the technology has been shared," Margaret Mitchell, researcher and chief ethics scientist at Hugging Face, told CNBC in an interview. "So I'm somewhat curious about how companies are actually grappling with this: Are there people with security clearances who are literally examining the usage and verifying that it stays within the limits of no direct harm?"
Hugging Face, an AI startup and OpenAI competitor, has turned down military contracts before, including ones that did not involve the potential for direct harm, according to Mitchell. She said the team "understood how it was one step away from direct harm," adding that "even with things that seem innocuous, it's very clear that this is part of a surveillance pipeline."
Alexandr Wang, CEO of Scale AI, speaking on CNBC's "Squawk Box" outside the World Economic Forum in Davos, Switzerland, on Jan. 23, 2025.
CNBC
Mitchell said that even summarizing social media posts could be seen as one step removed from direct harm, since those summaries could potentially be used to identify and target enemy combatants.
"If it's one step away from harm and helping the harm happen, is that really better?" Mitchell said. "I feel like it's a somewhat arbitrary line in the sand, one that works well for company PR and maybe employee morale more than it reflects a genuinely better ethical situation. ... You can tell the Department of Defense, 'We'll give you this technology, please don't use it to harm people in any way,' and they can say, 'We have ethical values as well, and so we will align with those ethical values,' but they can't guarantee that it isn't used for harm, and you as a company don't have visibility into whether it's being used for harm."
Mitchell called it "a word game that provides some sort of veneer of acceptability ... or of non-harm."
Tech's military pivot
In February, Google removed a pledge to abstain from using AI for potentially harmful applications, such as weapons and surveillance, according to the company's updated "AI principles." That was a change from the prior version, in which Google said it would not pursue "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" or "technologies that gather or use information for surveillance violating internationally accepted norms."
In January 2024, Microsoft-backed OpenAI quietly removed a ban on the military use of ChatGPT and its other AI tools, just as it had begun working with the U.S. Department of Defense on AI tools, including open-source cybersecurity tools.
Until then, OpenAI's policy page specified that the company did not allow the use of its models for "activity that has high risk of physical harm," such as weapons development or military and warfare applications. In the updated language, OpenAI removed the specific reference to the military, although its policy still states that users should not "use our service to harm yourself or others," including by developing or using weapons.
News of the military partnerships and mission changes follows years of controversy over tech companies developing technology for military use, highlighted by public concerns from tech workers, especially those working in AI.
Employees at virtually every tech giant involved with military contracts have voiced concerns. Thousands of Google employees protested the company's involvement in the Pentagon's Project Maven, which would have used Google AI to analyze drone surveillance footage.
Palantir would later take over the contract.
Microsoft employees protested a $480 million Army contract that would provide soldiers with augmented-reality headsets, and more than 1,500 Amazon and Google workers signed a letter protesting a multiyear, $1.2 billion contract with the Israeli government and military, under which the tech giants would provide cloud computing services, AI tools and data centers.
"There's always a pendulum swing with these kinds of things," Mitchell said. "We're at a point now where employees have less of a say within tech companies than they did a few years ago, and so it's kind of a buyers' market rather than a sellers' market. ... Companies' interests now carry far more weight than the interests of individual employees."