The AI revolution is already here: The U.S. military must grapple with real dilemmas that until recently seemed hypothetical. (PETER W. SINGER, APRIL 14, 2024, Defense One)

In just the last few months, the battlefield has undergone a transformation like never before, with visions from science fiction finally coming true. Robotic systems have been set free, authorized to destroy targets on their own. Artificial intelligence systems are determining which individual humans are to be killed in war, and even how many civilians are to die along with them. And, making all this even more challenging, this frontier has been crossed by America’s allies.

Ukraine’s front lines have become saturated with thousands of drones, including Kyiv’s new Saker Scout quadcopters that “can find, identify and attack 64 types of Russian ‘military objects’ on their own.” They are designed to operate without human oversight, unleashed to hunt in areas where Russian jamming prevents other drones from working.

Meanwhile, Israel has unleashed another side of algorithmic warfare as it seeks vengeance for the Hamas attacks of October 7. As revealed by IDF members to +972 Magazine, “The Gospel” is an AI system that considers millions of items of data, from drone footage to seismic readings, and marks buildings in Gaza for destruction by air strikes and artillery. Another system, named Lavender, does the same for people, ingesting everything from cellphone use to WhatsApp group membership to set a ranking between 1 and 100 of likely Hamas membership. The top-ranked individuals are tracked by a system called “Where’s Daddy?”, which sends a signal when they return to their homes, where they can be bombed.

Such systems are just the start. The cottage industry of activists and diplomats who tried to preemptively ban “killer robots” failed for the very same reason that the showy open letters calling for a ban on AI research did too: The tech is just too darn useful. Every major military is at work on its equivalents or better, including ours.


PODCAST: America Needs More Techno-Optimism (Andreessen Horowitz, March 13, 2024, American Dynamism Summit)

In this fireside chat from the American Dynamism Summit, a16z Cofounder and General Partner Marc Andreessen sits down with economist, podcaster, and polymath Tyler Cowen to discuss the state of innovation in America, from recent AI advances to growing support for nuclear power. They’ll explain why the future many people claim to want — a better economy, better quality of life, and a safer world — is only possible if America leads. […]

Tyler: Now, how will AI make our world different five years from now? What’s the most surprising way in which it will be different?

Marc: Yeah, so there’s a great kind of breakdown on adoption of new technology that the science fiction author, Douglas Adams, wrote about years ago. He says any new technology is received differently by three different groups of people. If you’re below the age of 15, it’s just the way things have always been. If you’re between the ages of 15 and 35, it’s really cool and you might be able to get a job doing it. If you’re above the age of 35, it’s unholy and against the order of society and will destroy everything. AI, I think, so far is living up to that framework.

What I would like to tell you is AI is gonna, you know, be completely transformative for education. I believe that it will. Having said that, I did recently roll out ChatGPT to my eight-year-old. And, you know, I was, like, very, very proud of myself because I was like, “Wow, this is just gonna be such a great educational resource for him.” And I felt like, you know, Prometheus bringing fire down from the mountain to my child. And I installed it on his laptop and said, you know, “Son, you know, this is the thing that you can talk to any time, and it will answer any question you have.” And he said, “Yeah.” I said, “No, this is, like, a big deal that answers questions.” He’s like, “Well, what else would you use a computer for?” And I was like, “Oh, God, I’m getting old.”

So, I actually think there’s a pretty good prospect that, like, kids are just gonna, like, pick this up and run with it. I actually think that’s already happening, right? ChatGPT is fully out, you know, and Bard and Bing and all these other things. And so, I think, you know, kids are gonna grow up with basically…you know, you could use various terms, assistant, friend, coach, mentor, you know, tutor, but, you know, kids are gonna grow up in sort of this amazing kind of back-and-forth relationship with AI. And any time a kid is interested in something, if there’s not, you know, a teacher who can help with something or they don’t have a friend who’s interested in the same thing, they’ll be able to explore all kinds of ideas. And so I think it will be great for that.

You know, I think it’s, obviously, gonna be totally transformative in fields like warfare, and you already see that. You know, the concern, quite honestly, I actually wrote an essay a while ago on sort of why AI won’t destroy all the jobs, and the sort of the short version of it is because it’s illegal to do that, because so many jobs in the modern economy require licensing and are regulated. And so, you know, I think the concern would be that there’s just so much, sort of, glue in the system now that prevents change, and it’ll be very easy to sort of not have AI healthcare or, you know, AI education or whatever because, literally, some combination of, like, you know, doctor licensing, teacher unions and so forth will basically outlaw it. And so I think that’s the risk.


Toward a Leisure Ethic: How people spend their time is a fundamental mark of civilization. (Stuart Whatley, Spring 2024, Hedgehog Review)

This preference for leisure over work was hardly unique to Pacific Islanders. Urban and rural artisans in preindustrial England also took it as a given that more free time was better than work, even when more work promised greater monetary returns. When the prices they could command for their goods rose, they saw it as an opportunity not to amass wealth but to work less.2

In this limited respect, they were much like the elites of antiquity and the Middle Ages. In the Athens of Socrates, Plato, and Aristotle, the idea of working beyond what was necessary was abhorrent. Likewise for the Roman elites, though their precise views on leisure differed from those of the Greeks. In both cultures, the word for leisure seems to have come first, with work and business framed as nonleisure—scholé versus aschole in Greek, otium versus negotium in Latin.

Similarly, in later centuries, following the rise of Christendom, religious thinkers generally favored leisure over work (vita contemplativa as opposed to vita activa), because that was how one drew closer to God. Work, after all, was punishment for humankind’s original sin. “The obligations of charity make us undertake righteous business [negotium],” wrote Augustine, but “if no one lays the burden upon us, we should give ourselves up to leisure [otium], to the perception and contemplation of truth.”3

All were expressing a leisure ethic: a worldview in which a preference for free time and intrinsically motivated pursuits is accompanied by an understanding of how time can best be spent.


A New Engine for Human Learning and Growth (SHRUTI RAJAGOPALAN, 3/11/24, Project Syndicate)

AI already shows great promise. India’s education system is in crisis. Over half of fifth graders cannot read at a second-grade level, and merely a quarter can manage simple division. If these students had a personalized curriculum – taught in their native dialect, without caste-based or economic discrimination – they could catch up. While poor incentives for educators, state-level politics, bad curricula, and socioeconomic circumstances have stood in the way of this solution, AI could make such obstacles surmountable.

Imagine an AI tutor interacting with a student from India’s poorest state, Bihar, where learning scores are abysmal, in her native Maithili dialect. It would evaluate homework through images, correct pronunciation, teach other languages, integrate numeracy through games, and offer endless, patient repetition. The same approach also could be used to offer teacher training at scale, with large language models (LLMs), like the one that powers ChatGPT, aiding curriculum development in India’s 100-plus languages and more than 10,000 dialects, all at low cost.

These AI tutors will be affordable, partly because of India’s huge market. One in three Indian students already pays for private tutoring, and well before the recent AI breakthroughs, Indians dominated YouTube, where education playlists help students master various state examinations. All the data these students provide will train models for foundational-learning tutors that can be deployed across the Global South, where students face similar problems.


Let AI remake the whole U.S. government (oh, and save the country) (Josh Tyrangiel, March 6, 2024, Washington Post)

Perna needed up-to-the-minute data from all the relevant state and federal agencies, drug companies, hospitals, pharmacies, manufacturers, truckers, dry ice makers, etc. Oh, and that data needed to be standardized and operationalized for swift decision-making.

It’s hard to comprehend, so let’s reduce the complexity to just a single physical material: plastic. Perna had to have eyes on the national capacity to produce and supply plastic — for syringes, needles, bags, vials. Otherwise, with thousands of people dying each day, he could find himself with hundreds of millions of vaccine doses and nothing to put them in.

To see himself, Perna needed a real-time digital dashboard of an entire civilization.

This being Washington, consultants lined up at his door. Perna gave each an hour, but none could define the problem, let alone offer a credible solution. “Excruciating,” Perna tells the room, and here the Jersey accent helps drive home his disgust. Then he met Julie and Aaron. They told him, “Sir, we’re going to give you all the data you need so that you can assess, determine risk, and make decisions rapidly.” Perna shut down the process immediately. “I said great, you’re hired.”

Julie and Aaron work for Palantir, a company whose name curdles the blood of progressives and some of the military establishment. We’ll get to why. But Perna says Palantir did exactly what it promised. Using artificial intelligence, the company optimized thousands of data streams and piped them into an elegant interface. In a few short weeks, Perna had his God view of the problem. A few months after that, Operation Warp Speed delivered vaccines simultaneously to all 50 states. When governors called panicking that they’d somehow been shorted, Perna could share a screen with the precise number of vials in their possession. “‘Oh, no, general, that’s not true.’ Oh, yes. It is.”


March of the humanoids: Figure shows off autonomous warehouse work (Loz Blain, February 26, 2024, New Atlas)

It seems the Figure 01 won’t just be making coffee when it shows up to work at BMW. New video shows the humanoid getting its shiny metal butt to work, doing exactly the sort of “pick this up and put it over there” tasks it’ll be doing in factories.

Figure teaches its robots new tasks through teleoperation and simulated learning. If its videos are to be believed – which is not always a given in this rapidly evolving space – its humanoids are capable of ‘figuring’ out the success and failure states of a given task, and working out how best to get it done autonomously, complete with the ability to make real-time corrections if things appear to be going off-track.


The U.S. economy is booming. So why are tech companies laying off workers? (Gerrit De Vynck, Danielle Abril and Caroline O’Donovan, February 3, 2024, Washington Post)

[G]oogle, Amazon, Microsoft, Discord, Salesforce and eBay all made significant cuts in January, and the layoffs don’t seem to be abating. On Tuesday, PayPal said in a letter to workers it would cut another 2,500 employees or about 9 percent of its workforce.

The continued cuts come as companies are under pressure from investors to improve their bottom lines. Wall Street’s sell-off of tech stocks in 2022 pushed companies to win back investors by focusing on increasing profits, and firing some of the tens of thousands of workers hired to meet the pandemic boom in consumer tech spending. With many tech companies laying off workers, cutting employees no longer signaled weakness. Now, executives are looking for places where they can squeeze more work out of fewer people.

Profits will be driven ever higher as labor and energy costs trend towards zero.


The white-collar class derided mass layoffs among the blue-collar workers. It’s about to feel their pain (Glenn H. Reynolds, Jan. 16th, 2024, NY Post)

[T]he worm has turned. Google is looking at laying off 30,000 people it expects to replace with artificial intelligence.

The Wall Street Journal reports that large corporations across the board are planning to lay off white-collar workers.

Investor Brian Wang notes ChatGPT is already causing white-collar job loss.

In fact, ChatGPT can even code.

Sometimes its code is quite good. Sometimes it’s not so good.

(Though God knows, the latter is true of much human-generated software code too.)
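To make the point concrete, here is a hypothetical illustration (not actual ChatGPT output) of the kind of routine utility code such chatbots now produce reliably from a one-line prompt like “write a Python function that removes duplicates from a list while preserving order”:

```python
def dedupe(items):
    """Return items with duplicates removed, keeping first occurrences."""
    seen = set()        # values already encountered
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(dedupe([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

Tasks of this size and shape, once the daily bread of junior programmers, are exactly where the models are already quite good; it is the larger, messier systems where their output is still hit-or-miss.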

It can write press releases, ad copy, catalog descriptions, news stories and essays, speeches, encyclopedia entries, customer-inquiry responses and more.

It can generate art on demand that’s suitable for book covers, advertisements and magazine illustrations.

Again, sometimes these items are quite good, and sometimes they’re not, but there’s a lot of less-than-stellar human work in those categories too.

Learning to code is bad advice now.

And the kicker is, AI is getting better all the time.

GPT-4 has demonstrated “human-level performance” on many benchmarks.

It can pass bar exams, diagnose disease and process images and text. The improvement since GPT-3.5 is significant.

People, on the other hand, are staying pretty much the same.

The bad news for the symbolic analysts is they’re playing on AI’s turf.

When you deal with ideas and data and symbols, you’re working with bits, and AI is pretty good at working with bits.

People losing their jobs to AI is just the tip of the iceberg.

In the next decade, lots more people — possibly (gulp) including professors like me — will be facing potential replacement by machines.

It turns out that using your brain and not your hands isn’t as good a move as it may have once seemed.

…is a function of the fact that “we” are not going to have jobs, not just “them.”