[T]he worm has turned. Google is looking at laying off 30,000 people it expects to replace with artificial intelligence.
The Wall Street Journal reports that large corporations across the board are planning to lay off white-collar workers.
Investor Brian Wang notes that ChatGPT is already causing white-collar job losses.
In fact, ChatGPT can even code.
Sometimes its code is quite good. Sometimes it’s not so good.
(Though God knows, the latter is true of much human-generated software code too.)
It can write press releases, ad copy, catalog descriptions, news stories and essays, speeches, encyclopedia entries, customer-inquiry responses and more.
It can generate art on demand that’s suitable for book covers, advertisements and magazine illustrations.
Again, sometimes these items are quite good, and sometimes they’re not, but there’s a lot of less-than-stellar human work in those categories too.
“Learn to code” is starting to look like bad advice.
And the kicker is, AI is getting better all the time.
GPT-4 has demonstrated “human-level performance” on many benchmarks.
It can pass bar exams, diagnose disease, and process both images and text. The improvement over GPT-3.5 is significant.
People, on the other hand, are staying pretty much the same.
The bad news for the symbolic analysts is they’re playing on AI’s turf.
When you deal with ideas and data and symbols, you’re working with bits, and AI is pretty good at working with bits.
People losing their jobs to AI is just the tip of the iceberg.
In the next decade, lots more people — possibly (gulp) including professors like me — will be facing potential replacement by machines.
It turns out that using your brain and not your hands isn’t as good a move as it may have once seemed.
…is a function of the fact that “we” are not going to have jobs, not just “them.”