Summarizing "Humans Need Not Apply"

The ever-popular CGPGrey recently released a video on the potential future economic challenges of AI and automation, "Humans Need Not Apply". The longest video on his channel yet, it attracted a good amount of attention in its first 24 hours, particularly on reddit, where long discussions broke out in a handful of popular subreddits, including /r/Futurology, /r/Economics, /r/videos, and /r/CGPGrey itself.

It's a departure from his typical style, which is usually a deep dive on a purely factual, but complex, topic. This time he focuses on the modern problem of how the future expansion of AI and automation will affect the human job market. The video covers, at a high level, the current state of automation and why he strongly believes it will soon prove capable of taking over many jobs we typically think of as outside the realm of computer and machinery automation, leaving a lot of people "out of work through no fault of their own".

Having read a lot of replies to the video, I would like to offer my take on Grey's position in a way that will hopefully clarify some points and address some common counter-arguments. I think he made his point clearly, but it may help some readers to see things restated slightly differently. I will try my best to stick to what was put forth in the video and keep my own thoughts out of it. (Naturally, I will assume you have watched the video.)

Grey argues that how automation is adopted in the future will be very different from how it was adopted in the past. Historically, automation has benefited humans. When a job becomes automated, we reap the benefit of not having to do it ourselves, we reap the benefit of having the job performed better than we did it, and the work the machines took over gets replaced by new work in some way. For the most part we still have enough jobs available, particularly for skilled workers. There is a prevalent attitude that future automation won't pose a significant threat to the job economy because "it's always worked out in the past". But Grey argues that the future of automation cannot be extrapolated from the past:

  • Automation will expand at a speed we haven't seen before. In the past, automation has tended to expand relatively slowly. Usually only a couple of industries were revolutionized at a time, and the transition took many years. New generations had time to train for different jobs, workers from one industry could shift to another industry, and so on.

    But in the future, automation will invade many industries very quickly. We won't have the luxury of easing into a massively automated world. There are major business incentives to capitalize on automation, and the engineering effort is actually there to deliver. AI used to be a theoretical field of study, but now it's one of the most popular subjects of academic and self-motivated study in engineering and one of the most in-demand fields of computer science employment. We're seeing practical growth in this area like we've never seen before.

  • More types of jobs will be automated than ever before. In the past, we've largely seen manual tasks automated. Machines were single-purpose and, at some level, largely stupid. Highly specialized skill sets, professional occupations, and the like have typically been safe from automation. Not so anymore. High-paying professions dominated by highly trained, specialized humans will fall prey to automation. There won't be a safe haven of jobs that we can turn to.

    These jobs have always been our fallback plan. We've usually been able to give less desirable jobs to machines and move the workers to equivalent or nicer jobs. But what happens when the machines are doing those jobs too? So far, automation hasn't done the "smart" jobs; it does the jobs we don't want to do (like digging ditches) or the jobs that are beyond our abilities (like calculating an inverse square root millions of times a second), but it doesn't really do "intelligent" or "judgment" work. That's our domain. Except, Grey contends, it isn't. AI has started to automate things we consider "intelligent", and it's doing quite well. General-purpose computing and learning will allow AI to enter many new domains.

    The implication here is that historically we've seen individual jobs become automated. In the future, it may be entire professions.

  • Grey points out that the 32 most common professions in the United States are over 100 years old, meaning we have not yet had to deal with automation taking a huge chunk of our jobs. And many of those most common professions are solid candidates for the next generation of automation.

In short, we have many times more automation waiting for us in the near future than we've experienced in the past, and a good portion of it is going to hit types of jobs we've never seen automated.

Of course, worrying about jobs in a future of increased automation isn't new. People have always worried about machines taking their jobs. The novella Manna, published 11 years ago, takes Grey's exact warnings about AI and explores them to the extreme. Books on the end of capitalism in general significantly predate that. So why does Grey contend that this is the time to look for a falling sky?

Because those futuristic machines are already here. The video quickly mentions many jobs for which automation sounds like a futuristic dream but which in reality already have existing, successful automation. These machines may not be mass-produced, mass-marketed, cheap enough for bulk purchase, or finely tuned enough to fully replace a human, but the near-human proof of concept has already been built and demonstrated successfully. We already have the blueprints and we've built them, so the "hard" part is behind us; the rest is time and business.

So to fix the problem, what's his call to action? Actually, there's a notable lack thereof. Rather, my primary takeaway was that we need to start thinking about these kinds of problems now. When we have economic instability, automation taking more jobs than it creates, and a rising gap between the poor and the middle class, it will be too late to sit down and refactor large portions of society and the economy from scratch. Instead, we need to be constantly aware of the impending problem so that when it starts to manifest itself we can react to it quickly. Being aware of the impending problem allows us to think through the possible solutions ahead of time and be prepared to adopt them when the moment comes.

We will have problems like:

  • How do you treat multiple industries of hard-working people with skilled job sets that have been largely obsoleted?
  • How do you maintain standards of living in a country that doesn't need everyone to work?
  • How do you educate the next generation when most jobs are a stone's throw away from being automated?

And so forth. None of those questions are new to philosophers, but a lot of first-world nations haven't seriously pondered them, let alone come to any decisions, let alone made any moves toward adapting.

People are resilient, and while it's true that societies can adapt to substantial change, they generally need the change to be gradual. Macroeconomic shifts are not exactly agile. One of Grey's main points above is that the changes will probably happen quickly. If we're unlucky, entire industries could disappear from the human job market not in a couple of decades, but in a couple of years. The changes necessary to accommodate that would be substantial.

Because the economic shift will be unprecedented and will possibly usher in completely new ideas about job, career, and education expectations and standards of living, we will need to re-think a lot of how society operates. If re-thinking large portions of society is the only long-term solution, we're going to wish that we'd spent the preceding years thinking about those problems and taking every preemptive step possible.

I like Grey's video (and I also enjoy his musings in general). But once again, a reminder: this article summarizes the video as I understood it; it does not necessarily reflect my personal thoughts on the matter.