
Rebooting AI

Building Artificial Intelligence We Can Trust



About

Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a truly robust artificial intelligence.

Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we have been led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest milestones in the field, but they argue that a computer beating a human in Jeopardy! does not signal that we are on the doorstep of fully autonomous cars or superintelligent machines. The achievements in the field thus far have occurred in closed systems with fixed sets of rules, and these approaches are too narrow to achieve genuine intelligence.

The real world, in contrast, is wildly complex and open-ended. How can we bridge this gap? What will the consequences be when we do? Taking inspiration from the human mind, Marcus and Davis explain what we need to advance AI to the next level, and suggest that if we are wise along the way, we won't need to worry about a future of machine overlords. If we focus on endowing machines with common sense and deep understanding, rather than relying on statistical analysis and gathering ever larger collections of data, we will be able to create an AI we can trust—in our homes, our cars, and our doctors' offices. Rebooting AI provides a lucid, clear-eyed assessment of the current science and offers an inspiring vision of how a new generation of AI can make our lives better.

Author

Author photo © Athena Vouloumanos
GARY MARCUS is a scientist, best-selling author, and entrepreneur. He is the founder and CEO of Robust.AI and was founder and CEO of Geometric Intelligence, a machine-learning company acquired by Uber in 2016. He is the author of five books, including Kluge, The Birth of the Mind, and the New York Times best seller Guitar Zero.

He coauthored Rebooting AI: Building Artificial Intelligence We Can Trust with Ernest Davis.
Author photo © Joe Iano
ERNEST DAVIS is a professor of computer science at the Courant Institute of Mathematical Sciences, New York University. One of the world's leading scientists on commonsense reasoning for artificial intelligence, he is the author of four books, including Representations of Commonsense Knowledge and Verses for the Information Age.

He coauthored Rebooting AI: Building Artificial Intelligence We Can Trust with Gary Marcus.

Excerpt

from Chapter 1:
 
MIND THE GAP
 
Since its earliest days, artificial intelligence has been long on promise, short on delivery. In the 1950s and 1960s, pioneers like Marvin Minsky, John McCarthy, and Herb Simon genuinely believed that AI could be solved before the end of the twentieth century. “Within a generation,” Marvin Minsky famously wrote, in 1967, “the problem of artificial intelligence will be substantially solved.” Fifty years later, those promises still haven’t been fulfilled, but they have never stopped coming. In 2002, the futurist Ray Kurzweil made a public bet that AI would “surpass native human intelligence” by 2029. In November 2018 Ilya Sutskever, co-founder of OpenAI, a major AI research institute, suggested that “near term AGI [artificial general intelligence] should be taken seriously as a possibility.” Although it is still theoretically possible that Kurzweil and Sutskever might turn out to be right, the odds against this happening are very long. Getting to that level—general-purpose artificial intelligence with the flexibility of human intelligence—isn’t some small step from where we are now; instead it will require an immense amount of foundational progress—not just more of the same sort of thing that’s been accomplished in the last few years, but, as we will show, something entirely different.
 
Even if not everyone is as bullish as Kurzweil and Sutskever, ambitious promises still remain common, for everything from medicine to driverless cars. More often than not, what is promised doesn’t materialize. In 2012, for example, we heard a lot about how we would be seeing “autonomous cars [in] the near future.” In 2016, IBM claimed that Watson, the AI system that won at Jeopardy!, would “revolutionize healthcare,” stating that Watson Health’s “cognitive systems [could] understand, reason, learn, and interact” and that “with [recent advances in] cognitive computing . . . we can achieve more than we ever thought possible.” IBM aimed to address problems ranging from pharmacology to radiology to cancer diagnosis and treatment, using Watson to read the medical literature and make recommendations that human doctors would miss. At the same time, Geoffrey Hinton, one of AI’s most prominent researchers, said that “it is quite obvious we should stop training radiologists.”
 
In 2015 Facebook launched its ambitious and widely covered project known simply as M, a chatbot that was supposed to be able to cater to your every need, from making dinner reservations to planning your next vacation.
 
As yet, none of this has come to pass. Autonomous vehicles may someday be safe and ubiquitous, and chatbots that can cater to every need may someday become commonplace; so too might superintelligent robotic doctors. But for now, all this remains fantasy, not fact.
 
The driverless cars that do exist are still primarily restricted to highway situations with human drivers required as a safety backup, because the software is too unreliable. In 2017, John Krafcik, CEO at Waymo, a Google spinoff that has been working on driverless cars for nearly a decade, boasted that Waymo would shortly have driverless cars with no safety drivers. It didn’t happen. A year later, as Wired put it, the bravado was gone, but the safety drivers weren’t. Nobody really thinks that driverless cars are ready to drive fully on their own in cities or in bad weather, and early optimism has been replaced by widespread recognition that we are at least a decade away from that point—and quite possibly more.
 
IBM Watson’s transition to health care similarly has lost steam. In 2017, MD Anderson Cancer Center shelved its oncology collaboration with IBM. More recently it was reported that some of Watson’s recommendations were “unsafe and incorrect.” A 2016 project to use Watson for the diagnosis of rare diseases at the Marburg, Germany, Center for Rare and Undiagnosed Diseases was shelved less than two years later, because “the performance was unacceptable.” In one case, for instance, when told that a patient was suffering from chest pain, the system missed diagnoses that would have been obvious even to a first year medical student, such as heart attack, angina, and torn aorta. Not long after Watson’s troubles started to become clear, Facebook’s M was quietly canceled, just three years after it was announced.
 
Despite this history of missed milestones, the rhetoric about AI remains almost messianic. Eric Schmidt, the former CEO of Google, has proclaimed that AI would solve climate change, poverty, war, and cancer. XPRIZE founder Peter Diamandis made similar claims in his book Abundance, arguing that strong AI (when it comes) is “definitely going to rocket us up the Abundance pyramid.” In early 2018, Google CEO Sundar Pichai claimed that “AI is one of the most important things humanity is working on . . . more profound than . . . electricity or fire.” (Less than a year later, Google was forced to admit in a note to investors that products and services “that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges.”)
 
Others agonize about the potential dangers of AI, often in ways that show a similar disconnect from current reality. One recent nonfiction bestseller by the Oxford philosopher Nick Bostrom grappled with the prospect of superintelligence taking over the world, as if that were a serious threat in the foreseeable future. In the pages of The Atlantic, Henry Kissinger speculated that the risk of AI might be so profound that “human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them.” Elon Musk has warned that working on AI is “summoning the demon” and a danger “worse than nukes,” and the late Stephen Hawking warned that AI could be “the worst event in the history of our civilization.”
 
But what AI, exactly, are they talking about? Back in the real world, current-day robots struggle to turn doorknobs, and Teslas driven in “Autopilot” mode keep rear-ending parked emergency vehicles (at least four times in 2018 alone). It’s as if people in the fourteenth century were worrying about traffic accidents, when good hygiene might have been a whole lot more helpful.
 
[ . . . ]

Praise

“Artificial intelligence is among the most consequential issues facing humanity, yet much of today’s commentary has been less than intelligent: awe-struck, credulous, apocalyptic, uncomprehending. Gary Marcus and Ernest Davis, experts in human and machine intelligence, lucidly explain what today’s AI can and cannot do, and point the way to systems that are less A and more I.”
—Steven Pinker, Johnstone Professor of Psychology, Harvard University, and the author of How the Mind Works and The Stuff of Thought
 
“Finally, a book that tells us what AI is, what AI is not, and what AI could become if only we are ambitious and creative enough. No matter how smart and useful our intelligent machines are today, they don’t know what really matters. Rebooting AI dares to imagine machine minds that go far beyond the closed systems of games and movie recommendations to become real partners in every aspect of our lives.”
—Garry Kasparov, Former World Chess Champion and author of Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins
 
“Finally, a book that says aloud what so many AI experts are really thinking. Every CEO should read it, and everyone else at the company, too. Then they’ll be able to separate the AI wheat from the chaff, and know where we are, how far we have to go, and how to get there.”
—Pedro Domingos, Professor of computer science at the University of Washington and author of The Master Algorithm
 
“A welcome antidote to the hype that has engulfed AI over the past decade and a realistic look at how far AI and robotics still have to go.”
—Rodney Brooks, former director of the MIT Computer Science and Artificial Intelligence Laboratory
 
“AI is achieving superhuman performance in many narrow applications, but the reality is that we are still very far from artificial general intelligence that truly understands the world. Marcus and Davis explain the pitfalls of current approaches with humor and insight, and provide a compelling path toward the kind of robust AI that can earn our trust.”
—Erik Brynjolfsson, Professor at MIT and co-author of The Second Machine Age and Machine | Platform | Crowd

 
“Rebooting AI is a blast to read. It's erudite, it's witty, and it neatly unpacks why today's AI has such trouble doing truly smart tasks—and what it'll take to reach that goal.”
—Clive Thompson, Wired magazine columnist and author of Coders: The Making of a New Tribe and the Remaking of the World
 
“Will machines overtake humans in the foreseeable future, or is it just hype? Marcus and Davis lay out their answer with elegant prose and a sure quill, drawing the distinction between today’s deep-learning based narrow, brittle artificial “intelligence” and the ever-elusive artificial general intelligence. Common sense and trust, which are intrinsically human, emerge as grand challenges for the field. If you plan to read one book to keep up with AI—this is an outstanding choice!”
—Oren Etzioni, CEO of the Allen Institute for AI and professor of computer science at the University of Washington
 
“Artificial intelligence is here to stay. What are its achievements, its prospects, its pitfalls and misdirected initiatives—and how might these be remedied and overcome? This lucid and deeply informed account, from a critical but sympathetic perspective, is a valuable guide to developments that will surely have a major impact on the social order and intellectual culture.”
—Noam Chomsky

“When I was a child I saw 2001: A Space Odyssey and then read everything I could about AI. All the smart people said it was twenty years away. Twenty years later I was an adult and the smart people said that AI was twenty years away. Twenty years after that we passed 2001 and the smart people said it was about twenty years away. Yup, it’s getting better and better, but it still ain’t HAL. It can tag photos pretty good but on understanding stories my son passed all the AI before he went to his stupid preschool. Now is the time to listen to smarter people: in Rebooting AI, Gary Marcus and Ernest Davis do a great job separating truth from bullshit to understand why we might not have real A.I. in twenty years and what we can do to get way closer.”
—Penn Jillette, Emmy-winning magician and actor and New York Times best-selling author

“A must-read for anyone who cares about the future of artificial intelligence, filled with masterful storytelling and clear and easy-to-digest examples. Simultaneously puncturing hype and plotting a new course toward truly successful AI, Rebooting AI offers the first rational look at what AI can and can’t do, and what it will take to build AI that we can genuinely trust. And it does it in a way that engages the reader and ultimately celebrates both what AI has accomplished and the strengths and power of the human mind.”
—Annie Duke, best-selling author of Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts

