I believe it’s indisputable that computers simply “think” differently from our brains. The best way to increase computer intelligence is to develop general computational methods (like deep learning and GPT-3) that scale with more processing power and more data. In the past few years, the best NLP models have ingested roughly ten times more data each year, and each factor of ten has brought qualitative improvements. In January 2021, just seven months after the release of GPT-3, Google announced a language model with 1.6 trillion parameters, roughly nine times larger than GPT-3’s 175 billion. This continued the trend of language model prowess growing by about ten times per year. Such a model has already read more than any one of us could in millions of lifetimes. And this progress will only accelerate.
While GPT-3 makes many basic mistakes, we are seeing glimmers of intelligence, and it is, after all, only version 3. Perhaps in twenty years, GPT-23 will read every word ever written and watch every video ever produced and build its own model of the world. This all-knowing sequence transducer would contain all the accumulated knowledge of human history. All you’ll have to do is ask it the right questions.
So, will deep learning eventually become “artificial general intelligence” (AGI), matching human intelligence in every way? Will we encounter the “singularity” (see chapter 10)? I don’t believe it will happen by 2041. There are many challenges on which we have made little progress, and which we barely understand, such as how to model creativity, strategic thinking, reasoning, counterfactual thinking, emotions, and consciousness. These challenges are likely to require a dozen more breakthroughs like deep learning, but we’ve had only one such breakthrough in more than sixty years, so I believe we are unlikely to see a dozen in the next twenty.
In addition, I would suggest that we stop using AGI as the ultimate test of AI. As I described in chapter 1, AI’s mind is different from the human mind. In twenty years, deep learning and its extensions will beat humans on an ever-increasing number of tasks, but there will still be many existing tasks that humans can handle much better than deep learning. There will even be some new tasks that showcase human superiority, especially if AI’s progress inspires us to improve and evolve.
What’s important is that we develop useful applications suitable for AI and seek to find human-AI symbiosis, rather than obsess about whether or when deep-learning AI will become AGI. I consider the obsession with AGI to be a narcissistic human tendency to view ourselves as the gold standard.
Copyright © 2021 by Kai-Fu Lee. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.