March 27, 2023

THE TRIUMPH OF CAIN:

Will AIs Take All Our Jobs and End Human History--or Not? Well, It's Complicated... (Stephen Wolfram, March 15, 2023, Writings)

Inside ChatGPT is something that's actually computationally probably quite similar to a brain--with millions of simple elements ("neurons") forming a "neural net" with billions of connections that have been "tweaked" through a progressive process of training until they successfully reproduce the patterns of human-written text seen on all those webpages, etc. Even without training the neural net would still produce some kind of text. But the key point is that it won't be text that we humans consider meaningful. To get such text we need to build on all that "human context" defined by the webpages and other materials we humans have written. The "raw computational system" will just do "raw computation"; to get something aligned with us humans requires leveraging the detailed human history captured by all those pages on the web, etc.
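The idea of "reproducing the patterns of human-written text" can be illustrated with something far simpler than a neural net. The toy below is not ChatGPT's architecture in any way, just a bigram model that "trains" by counting which word follows which and then generates the most likely continuation; real LLMs tune billions of weights, but the spirit is the same.

```python
from collections import defaultdict, Counter

def train(text):
    """Count word-to-next-word transitions in the training text."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=5):
    """Greedily follow the most common transition from each word."""
    out = [start]
    for _ in range(length):
        if out[-1] not in counts:
            break
        out.append(counts[out[-1]].most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat sat on the rug"
model = train(corpus)
print(generate(model, "the", length=3))  # -> "the cat sat on"
```

Without the "training" counts, the generator has nothing human-meaningful to follow, which mirrors the point above: the raw machinery only produces text we care about because of the human-written material it was fit to.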

But so what do we get in the end? Well, it's text that basically reads like it was written by a human. In the past we might have thought that human language was somehow a uniquely human thing to produce. But now we've got an AI doing it. So what's left for us humans? Well, somewhere things have got to get started: in the case of text, there's got to be a prompt specified that tells the AI "what direction to go in". And this is the kind of thing we'll see over and over again. Given a defined "goal", an AI can automatically work towards achieving it. But it ultimately takes something beyond the raw computational system of the AI to define what we humans would consider a meaningful goal. And that's where we humans come in.

What does this mean at a practical, everyday level? Typically we use ChatGPT by telling it--using text--what we basically want. And then it'll fill in a whole essay's worth of text talking about it. We can think of this interaction as corresponding to a kind of "linguistic user interface" (that we might dub a "LUI"). In a graphical user interface (GUI) there's core content that's being rendered (and input) through some potentially elaborate graphical presentation. In the LUI provided by ChatGPT there's instead core content that's being rendered (and input) through a textual ("linguistic") presentation.

You might jot down a few "bullet points". And in their raw form someone else would probably have a hard time understanding them. But through the LUI provided by ChatGPT those bullet points can be turned into an "essay" that can be generally understood--because it's based on the "shared context" defined by everything from the billions of webpages, etc. on which ChatGPT has been trained.
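A minimal sketch of that workflow: the function below (a hypothetical helper, not anything from the essay) wraps terse bullet points in an instruction a chat model could act on; the actual model call is shown only as a comment, since it needs network access and an API key.

```python
def bullets_to_prompt(bullets):
    """Wrap rough notes in an instruction a language model can act on."""
    notes = "\n".join(f"- {b}" for b in bullets)
    return "Turn these rough notes into a short, readable essay:\n" + notes

prompt = bullets_to_prompt([
    "typesetting used to signal effort",
    "desktop publishing made it free",
    "LLMs do the same for essays",
])
print(prompt)

# The prompt would then be sent to a chat model, e.g. via the OpenAI SDK:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": prompt}])
# essay = reply.choices[0].message.content
```

The "LUI" is exactly this layer: terse private notes in, generally-understandable prose out, with the shared context supplied by the model's training data rather than by the note-taker.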

There's something about this that might seem rather unnerving. In the past, if you saw a custom-written essay you'd reasonably be able to conclude that a certain irreducible human effort was spent in producing it. But with ChatGPT this is no longer true. Turning things into essays is now "free" and automated. "Essayification" is no longer evidence of human effort.


Of course, it's hardly the first time there's been a development like this. Back when I was a kid, for example, seeing that a document had been typeset was basically evidence that someone had gone to the considerable effort of printing it on a printing press. But then came desktop publishing, and it became basically free to make any document be elaborately typeset.

And in a longer view, this kind of thing is basically a constant trend in history: what once took human effort eventually becomes automated and "free to do" through technology. There's a direct analog of this in the realm of ideas: that with time higher and higher levels of abstraction are developed, that subsume what were formerly laborious details and specifics.

Will this end? Will we eventually have automated everything? Discovered everything? Invented everything? At some level, we now know that the answer is a resounding no. Because one of the consequences of the phenomenon of computational irreducibility is that there'll always be more computations to do--that can't in the end be reduced by any finite amount of automation, discovery or invention.
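Wolfram's standard concrete example of computational irreducibility is the Rule 30 cellular automaton. The sketch below evolves it from a single black cell: despite the trivially simple update rule, no known shortcut predicts the center column far into the future, so you seemingly have to run every step, which is exactly what "irreducible" means here.

```python
def rule30_step(cells):
    """Apply Rule 30: new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def center_column(steps, width=101):
    """Run Rule 30 from a single black cell; collect the center cell
    at each step. The width is kept large enough that the pattern
    never reaches the (wrapped) edges during the run."""
    cells = [0] * width
    cells[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        cells = rule30_step(cells)
    return column

print(center_column(10))  # -> [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
```

The center column looks statistically random, and decades of study have produced no formula that jumps ahead of the step-by-step evolution; that ever-present residue of unshortcuttable computation is the reason automation can never literally finish everything.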

Ultimately, though, this will be a more subtle story. Because while there may always be more computations to do, it could still be that we as humans don't care about them. And that somehow everything we care about can successfully be automated--say by AIs--leaving "nothing more for us to do".

It's a deflationary epoch, as labor and energy costs trend toward zero.
