The AI revolution is already here: The U.S. military must grapple with real dilemmas that until recently seemed hypothetical. (PETER W. SINGER, APRIL 14, 2024, Defense One)

In just the last few months, the battlefield has undergone a transformation like never before, with visions from science fiction finally coming true. Robotic systems have been set free, authorized to destroy targets on their own. Artificial intelligence systems are determining which individual humans are to be killed in war, and even how many civilians are to die along with them. And, making all this the more challenging, this frontier has been crossed not by America's adversaries but by its allies.

Ukraine’s front lines have become saturated with thousands of drones, including Kyiv’s new Saker Scout quadcopters that “can find, identify and attack 64 types of Russian ‘military objects’ on their own.” They are designed to operate without human oversight, unleashed to hunt in areas where Russian jamming prevents other drones from working.

Meanwhile, Israel has unleashed another side of algorithmic warfare as it seeks vengeance for the Hamas attacks of October 7. As revealed by IDF members to 972 Magazine, “The Gospel” is an AI system that considers millions of items of data, from drone footage to seismic readings, and marks buildings in Gaza for destruction by air strikes and artillery. Another system, named Lavender, does the same for people, ingesting everything from cellphone use to WhatsApp group membership to set a ranking between 1 and 100 of likely Hamas membership. The top-ranked individuals are tracked by a system called “Where’s Daddy?”, which sends a signal when they return to their homes, where they can be bombed.

Such systems are just the start. The cottage industry of activists and diplomats who tried to preemptively ban “killer robots” failed for the very same reason that the showy open letters calling for a ban on AI research did too: The tech is just too darn useful. Every major military is at work on its equivalents or better, including our own.