Rise of the AI Soldiers

2026-03-12 19:15 | time.com

How humanoid robots could transform modern warfare.

SAN FRANCISCO — The Phantom MK-1 looks the part of an AI soldier. Encased in jet black steel with a tinted glass visor, it conjures a visceral dread far beyond what may be evoked by your typical humanoid robot. And on this late February morning, it brandishes assorted high-powered weaponry: a revolver, pistol, shotgun, and replica of an M-16 rifle.

“We think there’s a moral imperative to put these robots into war instead of soldiers,” says Mike LeBlanc, a 14-year Marine Corps veteran with multiple tours of Iraq and Afghanistan, who is a co-founder of Foundation, the company that makes Phantom. He says the aim is for the robot to wield “any kind of weapon that a human can.”

Today, Phantom is being tested in factories and dockyards from Atlanta to Singapore. But its headline claim is to be the world’s first humanoid robot specifically developed for defense applications. Foundation already has research contracts worth a combined $24 million with the U.S. Army, Navy, and Air Force, including what’s known as an SBIR Phase 3, effectively making it an approved military vendor. It’s also due to begin tests with the Marine Corps “methods of entry” course, training Phantoms to put explosives on doors to help troops breach sites more safely. 

In February, two Phantoms were sent to Ukraine—initially for frontline-reconnaissance support. But Foundation is also preparing Phantoms for potential deployment in combat scenarios for the Pentagon, which “continues to explore the development of militarized humanoid prototypes designed to operate alongside war fighters in complex, high-risk environments,” says a spokesman. LeBlanc says the company is also in “very close contact” with the Department of Homeland Security about possible patrol functions for Phantom along the U.S. southern border.

A Phantom robot is hauled back to its feet after toppling over. Mattia Balsamini for TIME

In just a few short years, the rapid proliferation of AI has turned what was once the stuff of dystopian sci-fi into a reality. LeBlanc argues humanoid soldiers are a natural extension of existing autonomous systems like drones. Compared with risking the lives of teenage grunts, with all the attendant political backlash and the risk of stress-induced war crimes and trauma, humanoid soldiers offer a more resilient alternative with greater restraint and precision, he argues. Robots do not suffer from fatigue or fear, and they can operate continuously in extreme conditions, immune to radiation and to chemical or biological agents. Moreover, LeBlanc believes that giant armies of humanoid robots will eventually nullify each side’s tactical advantage in any conflict much like nuclear deterrents—exponentially decreasing escalation risks.

The counterargument is, however, chilling: that humanoid soldiers lower political and ethical barriers to initiating conflict, blur responsibility for any abuses, and further dehumanize warfare. Current Pentagon protocols decree automated systems can engage only with a human green light, and Foundation insists that is also its intention for Phantom. However, AI-powered drones in Ukraine are already assessing targets and autonomously firing as Russian radio jamming renders remote operation ineffective. If an adversary decides to allow the autonomous operation of AI-powered soldiers, what’s to stop the U.S. and its allies from reciprocating in the fog of war?

“It’s a slippery slope,” says Jennifer Kavanagh, director of military analysis for the Washington-based think tank Defense Priorities. “The appeal of automating things and having humans out of the loop is extremely high. The lack of transparency between the two sides of any conflict creates additional concerns.”

Moreover, set against a drastic militarization of American society—with heavily armed ICE officers swarming U.S. cities, the National Guard deployed to six states last year, and local police equipped with armored vehicles left over from the Forever Wars—the specter of AI-powered soldiers with opaque mission directives and chains of command has civil-liberty alarm bells clanging. Then add in the algorithmic biases well documented in AI facial-recognition software. Yet in a sign of stripped-away guardrails for AI’s national-security implementation, on Feb. 28 President Donald Trump ordered federal agencies and military contractors to cease business with Anthropic, known as the most safety-conscious of the big AI firms. Anthropic’s contract decreed its technology couldn’t be used to surveil American citizens or program autonomous weapons to kill without human involvement. While both these restrictions chime with current government protocol, the White House refused to be bound by them.

And the U.S. is far from alone in exploring humanoid soldiers. Authoritarian regimes including Russia and China are developing the dual-use technology, pitting the West in a contest to create ever more powerful and efficient killing machines in human form. A humanoid-soldier arms race is “already happening,” says Sankaet Pathak, Foundation co-founder and CEO.

Mike LeBlanc, co-founder of Foundation and a 14-year Marine Corps veteran. Mattia Balsamini for TIME
Sankaet Pathak, co-founder and CEO of Foundation. Mattia Balsamini for TIME

Modern warfare is already hugely automated, from smart mines and antirocket defense shields to laser-guided missiles. The question is how much autonomy is too much. As companies like Foundation race to equip humanoids with lethal functionality, a parallel legal tussle is raging between AI-focused defense companies and international bodies seeking to codify what level of human control is appropriate in war. Lethal autonomous weapon systems are “politically unacceptable” and “morally repugnant,” U.N. Secretary-General António Guterres said last year, in remarks that seem to put the international order on a collision course with AI-focused defense firms with influential backing. TIME can reveal that Eric Trump is an investor and newly appointed chief strategic adviser at Foundation.

“Autonomy is a spectrum,” says Bonnie Docherty, a lecturer at the International Human Rights Clinic at Harvard Law School. “Technology is moving rapidly towards full autonomy. And there are serious concerns when life-and-death decisions are delegated to a machine.” 

In Ukraine, where Vladimir Putin’s war of choice has just entered its fifth year at a cost of some 350,000 lives and counting, that spectrum of autonomy has been stretched to new limits. For LeBlanc, who undertook over 300 combat missions for the Marines, what he discovered upon taking Phantom to Ukraine was “really shocking,” he says. “It’s a complete robot war, where the robot is the primary fighter and the humans are in support. It is the exact opposite of when I was in Afghanistan: the humans were everything, and we had supplementary tools.”

Ukraine, which now launches up to 9,000 drones every day, has become the world’s premier testing ground for arms manufacturers—including Western startups—seeking to automate parts of the conventional “kill chain,” the step-by-step process used to identify, engage, and destroy an enemy target. These firms include Foundation, which wants to get Phantoms onto the front line of combat to hone the technology via a “feedback loop” of real-life use cases. 

“Just like drones, machine guns, or any technology, you first have to get them into the hands of customers,” says Pathak.

Pathak controls a Phantom using a virtual reality headset. Mattia Balsamini for TIME

Increasingly, every aspect of the Ukraine war is being automated. Most stunning has been the proliferation of autonomous drones, which boast software that can navigate payloads over hundreds of miles and lock onto a target. AI-enhanced Ukrainian quadcopters can attack Russian soldiers without humans in the loop when communications fail and remote control becomes impossible. Computer vision can identify and eliminate specific targets, even flying through windows to assassinate individuals. In late January, three bloodied Russian soldiers emerged from a ruined building to surrender to an armed Ukrainian ground robot, a kind of small, unmanned tank.

LeBlanc says what he saw in Ukraine only bolsters his belief in the value of humanoid soldiers. On the front lines, troops are burrowed down in stronghold positions but acutely vulnerable to drone attacks every time they venture outside. So humanoid soldiers could be invaluable for resupplying and reconnaissance work, especially in places that drones can’t access, like low bunkers. With a heat signature like that of humans, robots like Phantom may also throw off enemy surveillance. Moreover, having humanoid soldiers means existing stocks of weaponry can be deployed in their cold metal grip rather than being rendered obsolete by robots that require purpose-built tools of their own.

“How many .50-[caliber guns] do we have? How many grenade launchers? How many humvees?” asks LeBlanc. “We need something that can interact with all of these. So having a humanoid really unlocks the entire U.S. military.”

Ultimately, wars are won by breaking the enemy’s will, whether that will leaves the battlefield in body bags or simply drains away with morale. But even as strikes aimed at the latter, like the Russian energy-infrastructure attacks that have left Ukrainians without heat, can be considered war crimes, LeBlanc argues that such moves are preferable to firebombing a human population, and that they’ll be all that’s left when humans leave the field of war. “Droid battles, with a bunch of drones overhead and humanoids walking out towards each other, becomes an economic conflict,” he says. “I think that’s all for the better.”

Inside the arm of a Phantom MK-1 humanoid robot in San Francisco on Feb. 25. Mattia Balsamini for TIME

There are downsides. Humanoid robots are heavy and expensive, need regular recharging, and are likely to break down. How will they cope with mud, dust, and driving rain? Movement in a humanoid is driven by some 20 motors, each of which must be powered and can be rendered useless by even a minor glitch. Deploying humanoids alongside regular troops may also bring additional dangers. “If you fall over next to a baby, you know how to land without hurting the baby,” says Prahlad Vadakkepat, an associate professor at the National University of Singapore and founder of the Federation of International Robot-Soccer Association. “Will a humanoid be able to do that?”

Some risks are operational. Already, captured drones are a significant source of sensitive data, acting as flying smartphones that store or transmit detailed intelligence. Drones can also be spoofed by having their radio frequencies intercepted. A hacked humanoid soldier presents a whole host of risks. An enemy could potentially hijack a fleet of robots through software back doors, turning an army against its own creators or using them to commit untraceable atrocities.

Another sizable risk is a humanoid’s ability to properly assess a situation. Even if the intent is to keep humans in the kill chain, infantry battles are more frantic scenarios than drone missions are. If a child runs toward you clutching open scissors, it is self-evident to humans that the threat level is minimal. Would embodied AI feel the same way? Or, for that matter, does it feel anything at all?

“It’s a question of human dignity,” says Peter Asaro, a roboticist, philosopher, and chair of the International Committee for Robot Arms Control. “These machines are not moral or legal agents, and they’ll never understand the ethical implications of their actions.” 

They may not understand the true gravity, but machines are already making life-and-death judgment calls. An hour’s drive south of San Francisco, Scout AI is working to merge AI with existing American weaponry, including UTVs, tanks, and drones. In February, it ran a test event whereby seven AI agents—software that not only gathers information but then takes the initiative on actions—planned and executed a coordinated attack. After the firm’s Fury AI Orchestrator was told a blue enemy vehicle had last been seen at a certain location, it dispatched various ground and air agents controlling their own assets to identify, locate, and neutralize the target without any further human intervention. “There are agents that can replace all of ... the kill chain,” says Colby Adcock, co-founder and CEO of Scout AI, which is currently negotiating $225 million worth of Pentagon contracts. “And they’re way better and faster and smarter.”

A Foundation worker tests Phantom’s stability with an extended boxing glove. Mattia Balsamini for TIME

“We’re the first people to actually do the entire kill chain remotely from the human,” says Collin Otis, Scout AI co-founder and CTO. “What we’re going to see over the next five years is you’re not going to have people flying drones anymore. It just will not make sense. As AI gets integrated everywhere, that will go away.”

In terms of humanoid soldiers, the technology is “probably a couple years out from deploying them into combat,” says Adcock, who also sits on the board of Figure AI, a humanoid-robot firm founded by his brother Brett.

Scout AI and Foundation are far from outliers. A burgeoning AI-for-defense ecosystem is flourishing across the U.S. Three years after billionaire Palmer Luckey sold his Oculus VR company to Facebook, now Meta, he founded the autonomous-weapons firm Anduril in 2017. Anduril produces a range of AI-empowered kit, such as the Roadrunner twin-turbojet-powered drone interceptor, a headset that gives soldiers 360-degree vision, and an electromagnetic-warfare system that can jam enemy signals to debilitate drone swarms.

Luckey also full-throatedly backs autonomous weapons that work with no human intervention. “There’s no moral high ground to making a land mine” rather than a more intelligent weapon, Luckey told 60 Minutes last August. Anduril’s Ghost Shark autonomous submarine is already being employed by the Australian navy. Air Marshal Robert Chipman, vice chief of the Australian Defence Force, tells TIME that this key U.S. ally will “continue to invest in and adopt autonomous and uncrewed systems ... improving the survivability and lethality of our force in increasingly contested environments.” 

Still, critics of automation say the physical separation between operator and target turns human beings into “data points,” diminishing the moral weight of killing through a sterile, video-game-like process, stripping away the last vestige of human empathy from the battlefield, and making it easier to accept casualty rates that would otherwise be intolerable.

At the same time, if the ability to wage war remotely and autonomously leads to minimal human toll, that in itself may increase risk tolerance, meaning more operations that have higher escalation potential. For instance, it would be a gutsy move for a conventional U.S. Navy vessel to attempt to break any Chinese blockade of self-ruling Taiwan. Sending an unmanned submersible, however, feels less confrontational—as would a People’s Liberation Army decision to sink it. Yet those ostensibly lower-risk scenarios may in fact accelerate an escalatory spiral toward full-blown conflict. If a nation can wage war without the political cost of bringing home flag-draped coffins, will it be more likely to engage in unnecessary conflicts? “The human cost of war sometimes keeps us out of war,” says Kavanagh of Defense Priorities.

LeBlanc prepares to hand a shotgun to a Phantom. Mattia Balsamini for TIME

An additional worry is that AI is far from perfect. As anyone who has used ChatGPT or Google Gemini knows, large language models routinely make mistakes known as hallucinations, confidently producing false, misleading, or nonsensical information.

“With these AI large language models, we can’t explain how it’s making its decisions, and you just can’t have lethal autonomous systems that every now and then decide to hallucinate,” says Democratic Representative Ted Lieu, who in 2023 spearheaded the Block Nuclear Launch by Autonomous Artificial Intelligence Act, which limits AI’s role in nuclear command and control and is currently making its way through the House.

AI models also suffer from algorithmic bias or behavioral drift. Over time, as the AI “learns” from the field, its logic may drift away from its original ethical constraints. It’s for these reasons that the Biden Administration, led by the State Department and Pentagon, initiated the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. As of late 2024, nearly 60 countries have signed on to this nonbinding agreement, which outlines a normative framework for the development and deployment of AI in military systems. Yet the Trump Administration has been steadily stripping back AI protections.

“Legal, ethical, and accountability concerns outweigh any potential benefits.”

Bonnie Docherty, Harvard lecturer

On his first day in office, Trump revoked a 2023 Biden Executive Order that sought to reduce the risks AI poses to national security, the economy, public health, and safety by requiring developers to share the results of safety tests with the U.S. government before public release. Despite Trump’s recent blacklisting of Anthropic, several competitors, including Elon Musk’s xAI, maker of the Grok AI model, have inked alternative deals, notwithstanding controversies over Grok’s generation of nonconsensual sexual content, antisemitic commentary, political misinformation, and conspiracy theories. Musk’s Tesla also produces a humanoid robot, Optimus, powered by Grok, though the firm didn’t reply to repeated requests for comment from TIME about whether it’s being readied for military applications.

As the trajectory in the U.S. at least turns away from regulation and oversight, the question of where accountability falls becomes all the more pressing. If a humanoid robot malfunctions and commits a war crime or kills a noncombatant, is the software programmer or commanding officer held responsible? Current international law is not yet equipped to handle “algorithmic accountability,” leaving a legal vacuum in the face of tragedy. “The plethora of legal, ethical, and accountability concerns outweigh any potential benefits,” says Docherty. 

Ultimately, military technology moves in ever-narrowing generations. In 1861, at the start of the Civil War, Abraham Lincoln turned the hand crank of an early rapid-fire Agar gun and immediately purchased all 10 that were available. But it wasn’t until World War I, half a century later, that machine guns became formidable battlefield tools. The first modern drone mission took place in Afghanistan in 2001—only 25 years ago. Humanoid soldiers “will be part of the U.S.’s next conflict,” says LeBlanc. “You can’t take decades for these things to develop anymore, because our adversaries aren’t going to.”

Against this backdrop, the international community is rushing to put protocols in place to govern the deployment of humanoid soldiers—and war’s automation more broadly. U.N. Secretary-General Guterres and the International Committee of the Red Cross have jointly called for a legally binding treaty prohibiting autonomous systems that function without “meaningful human control” by year’s end. While over 120 nations support this measure, major military powers like the U.S., Russia, and Israel are dragging their heels.

Current negotiations in Geneva are focused on a two-tier framework that would apply to autonomous weapons, including drones and humanoid soldiers. First, a total ban on systems that are “inherently unpredictable” or target people using biometric data like facial recognition. Second, strict rules on such factors as the geographic area and duration of autonomous missions, and a “stop-switch” requirement that allows a human to intervene at any time.
A central legal battlefield is what “control” looks like: human-in-the-loop, whereby automated systems can identify targets but a human must click fire; or human-on-the-loop, whereby a robot operates autonomously while a human monitors it and retains the power to override.

A Phantom MK-1 humanoid robot, produced by San Francisco–based Foundation, at the firm’s headquarters on Feb. 25. Mattia Balsamini for TIME

Whether the U.S. and its adversaries abide by any ruling is another matter, given the way the Trump Administration has repeatedly flouted and mocked international convention, not least when there are billions of dollars in government contracts in the offing. 

“World war is bad,” says Pathak, “but I think a cold war is genuinely a good thing, because it forces everybody to innovate at a very fast pace. We want China to have humanoid robots, we want America to have humanoid robots, everybody to have humanoid robots.”

Despite recent advances, there’s much work to be done. During TIME’s visit to Foundation, more than once a Phantom crumpled with an almighty crash, prompting not even a flinch from the firm’s founders. The Phantom MK-2 is due in April with numerous upgrades, including consolidated electronics that reduce the risk of short circuits, waterproofing, larger battery packs, and the ability to carry loads of 175 lb. The bodywork will be cast-molded to speed manufacturing and reduce costs. The aim is to eventually build 30,000 a year.

“Once we get to half a million, each will probably cost less than $20,000,” says Pathak, who eventually envisions thousands-strong swarms of Phantoms conducting complex military operations.

Scout AI’s Otis says the future will be “massive unmanned system on unmanned system warfare, and then there’s a clear winner and you have a surrender of a nation.” That’s because pitting unmanned systems against humans would be “so catastrophic that no nation is going to want to subject their people to that.” At which point, depending on whom you believe, humanity may be basking in a Pax Automata—or staring down AI armageddon.

“Right now, what you’re seeing is the first flatfooted and clumsy attempt at how robots are going to fight our wars,” says LeBlanc. “But they’re really waiting for the start of the show.”



Comments

  • By jMyles 2026-03-12 19:16

    ...maybe I missed a memo here or something, but I thought this is exactly what we (all of us - the humans) _weren't_ going to do?

    > “We think there’s a moral imperative to put these robots into war instead of soldiers,” says Mike LeBlanc, a 14-year Marine Corps veteran with multiple tours of Iraq and Afghanistan, who is a co-founder of Foundation, the company that makes Phantom. He says the aim is for the robot to wield “any kind of weapon that a human can.”

    Yeah, but hear me out here: go fuck yourself, dude. Are you really being the change you want to see here?!

HackerNews