The Smarter Screen

Surprising Ways to Influence and Improve Online Behavior

Contributions by Jonah Lehrer

A leading behavioral economist reveals the tools that will improve our decision making on screens

Office workers spend the majority of their waking hours staring at screens. Unfortunately, few of us are aware of the visual biases and behavioral patterns that influence our thinking when we’re on our laptops, iPads, smartphones, or smartwatches. The sheer volume of information and choices available online, combined with the ease of tapping "buy," often makes for poor decision making on screens.

In The Smarter Screen, behavioral economist Shlomo Benartzi reveals a tool kit of interventions for the digital age. Using engaging reader exercises and provocative case studies, Benartzi shows how digital designs can influence our decision making on screens in all sorts of surprising ways.

For example: 
• You’re more likely to add bacon to your pizza if you order online. 
• If you read this book on a screen, you’re less likely to remember its content. 
• You might buy an item just because it’s located in a screen hot spot, even if better options are available. 
• If you shop using a touch screen, you’ll probably overvalue the product you’re considering. 
• You’re more likely to remember a factoid like this one if it’s displayed in an ugly, difficult-to-read font. 

Drawing on the latest research on digital nudging, Benartzi reveals how we can create an online world that helps us think better, not worse.
Shlomo Benartzi is a professor and a cochair of the Behavioral Decision-Making Group at UCLA's Anderson School of Management. He has advised many government agencies, financial institutions, and advisory boards, and currently serves as the chief behavioral economist for the Allianz Global Investors Center for Behavioral Finance. He co-created (with Richard Thaler) the acclaimed Save More Tomorrow program, which makes it easy for employees to increase their retirement savings rate and has helped millions of people boost their savings.

ACKNOWLEDGMENTS

This book would not exist without the help of many people. I benefited from huge amounts of feedback, both during the development of the ideas in this book and during the writing process. First, I was incredibly fortunate to get access to many brilliant academics, scientists, and colleagues, who were kind enough to read various drafts of the chapters and offer their insights and comments. I’m very grateful to Peter Ayton, Maya Bar-Hillel, Tibor Besedes, Saurabh Bhargava, Barbara Fasolo, Gavan Fitzsimons, Craig Fox, Dan Goldstein, Noah Goldstein, Michael Hallsworth, David Halpern, Hal Hershfield, Eric Johnson, Yaron Levi, George Loewenstein, Katy Milkman, Daniel Oppenheimer, Katharina Reinecke, Elena Reutskaja, and Philip Tetlock.

A few friends deserve special mention, as they devoted many hours to the book and improved it in countless ways. If it weren’t for David Faro, John Payne, and Richard Thaler, The Smarter Screen would be far less smart.

I was also lucky enough to get valuable input from many friends in the industry. A big thank you to David Collyer, Udo Frank, Bill Harris, Thomas MacNeil, Charlie Nelson, Cathy Smith, and Matt Stewart. Danny Kalish read every chapter, often more than once, and offered many important insights that led me in new directions.

It was crucial to me that this book be as accurate as possible. Steve Shu, my very good friend, spearheaded the fact-checking process. In addition, I’m very grateful for the time and diligence of Jolie Martin, Amit Runchal, Namika Sagara, and Hadas Sella. They combed through every citation and double-checked every fact and quote. All remaining mistakes are my own.

This book was guided by a skilled team at Portfolio. I owe a big thank you to Adrian Zackheim for seeing the potential in a book about behavioral economics in the digital age. And my editor, Niki Papadopoulos, helped ensure the book was as readable and engaging as possible.

I want to thank my collaborator, Jonah Lehrer, an amazing friend and brilliant writer. We had a great time working on this book together. It could never have been written without him.

Last, but definitely not least, I want to thank my family. Shalom, my dad, and Leah, my mom, for everything they taught me. My wife, Lesli, and Maya, my little girl, put up with many late nights and countless discussions about the material in this book. They inspire me every day.

INTRODUCTION

On October 1, 2013, the United States government launched a new Web site, www.healthcare.gov, that was designed to help people choose health insurance. In essence, the site was a shopping portal, allowing consumers to compare prices and features on all of the insurance plans available in their local area. Because the government hoped to sign up millions of uninsured Americans, it decided to rely on the scale of the Web.

While most of the media coverage of the Web site centered around its glaring technical glitches, very little attention was paid to a potentially far more important issue: Did the Web site actually help consumers find the best insurance plans? Given the reach of Obamacare, even seemingly minor design details could have a huge impact, influencing a key financial decision in the lives of millions of Americans.

Unfortunately, research suggests that most people probably made poor insurance choices on the Web site. A study conducted by Saurabh Bhargava, George Loewenstein, and me demonstrated that the typical subject using a simulated version of healthcare.gov chose a plan that was $888 more expensive than it needed to be.1 This was equivalent to roughly 3 percent of their income. Meanwhile, an earlier study, led by Eric Johnson at Columbia University, found that giving consumers more health care options on sites like healthcare.gov dramatically decreased their ability to find the best plan. In fact, even offering people a modest degree of choice meant that nearly 80 percent of them picked suboptimally.2

Can this problem be fixed? The online world offers us more alternatives than ever before: the average visitor to healthcare.gov was offered forty-seven different insurance plans,3 while Zappos.com features more than twenty-five thousand women’s shoes. But how should Web sites help us choose better?

On the morning of February 21, 2010, an American Predator drone began tracking a pickup truck and two SUVs traveling on a road near the village of Shahidi Hassas in southern Afghanistan. As the drone followed the vehicles, it beamed a live video feed to a crew of analysts based at Creech Air Force Base outside Las Vegas.4

Such intelligence is now a staple of modern warfare. The CIA used drones to gather intel on Osama bin Laden’s hideout; the Israeli Defense Forces flew dozens of unmanned aircraft over Gaza during the recent conflict; the United States Air Force accumulated more than five hundred hours of aerial video footage every single day in Afghanistan and Iraq.5

This flood of information creates an obvious problem: someone has to process it. Unfortunately, the evidence suggests that drone crews are often overwhelmed by the visual data. One study, led by Ryan McKendrick at George Mason University, showed that people simulating the multitasking environment of drone operators performed worse on an air defense task;6 another experiment, which looked at gunners in armored vehicles, found that the soldiers failed to perform their primary task effectively—noticing the bad guys—when a second task was added to the list.7 In experiment after experiment, the surplus of digital information creates blind spots on the screen.8

That’s what happened to the analysts tracking those vehicles in southern Afghanistan. According to an internal military investigation,9 the cubicle warriors in Nevada couldn’t handle all of the available information as they toggled back and forth among the video feed, radio chatter, and numerous instant messages. As a result, they failed to notice that the truck and SUVs were actually filled with civilians. And so the drone operators gave the order to fire, unleashing a barrage of Hellfire missiles and rockets. Twenty-three innocent people were killed in the attack.

How can we make such tragedies less likely to happen? What should the Air Force and CIA do to minimize the risk of blind spots on screens? And how can other organizations, from financial institutions to hospitals, deal with the same problem of digital information overload?

On December 14, 2013, Jessica Seinfeld used the Uber app to drop her children off across town at a bar mitzvah and a sleepover.10 Unfortunately, the ride took place in the midst of a New York City snowstorm, which meant that Uber had put surge pricing into effect. (When demand for drivers is high—say, during a blizzard, or on New Year’s Eve—Uber systematically raises its rates to entice more drivers to enter the marketplace.) During this storm, demand for drivers was so high that some Manhattan customers were charged 8.25 times the normal fare. Although Uber warned its customers about the surcharge before they ordered a ride, the warning clearly wasn’t effective, as social media soon lit up with complaints of price gouging. Jessica Seinfeld, for instance, posted a picture of her $415 Uber bill on Instagram, while many others lamented their crosstown rides that cost more than $150.11 Uber had provided a valuable service—helping people get home in a bad storm—but had also angered a lot of customers. It’s never a good sign when your company is the reason people are tweeting the hashtag #neveragain.

The surge pricing problem is indicative of a more common digital hazard, which is that people often think very fast on screens. Uber customers, of course, benefit from this quick pace, as the streamlined app makes it easy for people to book rides with a few taps of the thumb. However, when surge pricing is in effect, that same effortless ease can backfire, since consumers book rides on their phone without realizing how much the rides are going to cost.

How should Uber fix its app? Is there any way to help consumers avoid online decisions they’ll soon regret?

These three stories illustrate a few of the many ways in which the digital revolution is changing the way we live, from the analysis of military intelligence to the booking of taxi rides. They reveal an age in which we have more information and choices than ever before, and are able to act on them with breathtaking speed. But these stories are also a reminder of the profound challenges that remain. We have more choices, but we choose the wrong thing. We have more information, but we somehow miss the most relevant details. We can act quickly, but that often means we act without thinking.

It’s a cliché to complain about these trends. It’s easy to lament all the ways the online world leaves us confused and distracted, forgetful and frazzled.

This book is not about those complaints. It is not about how smartphones make us stupid. It is not a requiem for some predigital paradise.

Instead, this book is about how screens can be designed to make us smarter. It’s a book of behavioral solutions and practical tools that can improve our digital lives. It’s about how the same technological trends that lead people to buy the wrong insurance plan and book a $415 taxi ride can be turned into powerful digital opportunities, rooted in the latest research about how we think and choose on smartphones, tablets, and computers.

Here are three examples of potential solutions. If you want to encourage people to select the best health care plan, or choose the right product on your Web site, then you might want to consider a choice tournament modeled on Wimbledon and March Madness. (Instead of giving people all the options at once, you divide the best options into different rounds—work led by Tibor Besedes shows this dramatically improves decision making.)12
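To make the tournament idea concrete, here is a minimal sketch in Python. The plan data, names, and scoring function are invented for illustration; the code shows the round structure of a choice tournament, not the actual procedure used in Besedes’s experiments or on any real Web site.

```python
import random

def choice_tournament(options, evaluate, group_size=4, seed=None):
    """Pick an option through rounds of small comparisons instead of one long list.

    options    : list of candidate items (e.g., insurance plans)
    evaluate   : callable scoring how well an item suits the chooser; it stands in
                 for a person comparing a handful of items at a time
    group_size : how many options the chooser sees per round
    """
    rng = random.Random(seed)
    contenders = list(options)
    rng.shuffle(contenders)
    while len(contenders) > group_size:
        winners = []
        # Each round, the chooser sees only a small group and keeps one item per group.
        for i in range(0, len(contenders), group_size):
            group = contenders[i:i + group_size]
            winners.append(max(group, key=evaluate))
        contenders = winners
    # Final round: choose among the surviving handful.
    return max(contenders, key=evaluate)

# Hypothetical usage: forty-seven plans, scored by (negative) expected yearly cost.
plans = [{"name": f"Plan {i}", "cost": 2000 + (i * 37) % 4000} for i in range(47)]
best = choice_tournament(plans, evaluate=lambda p: -p["cost"], group_size=4, seed=1)
print(best["name"], best["cost"])
```

With a perfect scoring function the tournament picks the same plan as scanning the full list; the benefit reported in the research comes from the fact that real people can reliably compare only a few options at a time, so smaller rounds waste less attention.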

And if you want to help intelligence analysts avoid blind spots, it’s often helpful to zoom out and provide fewer details about the scene. (In a real-world study conducted in Israel, providing less detailed feedback led to big improvements in decision making among investors.13 I bet it would also help drone operators.) This fix is not just about giving people less information—it’s also about using new information compression technologies to help us cope with our limited attention.

Finally, companies like Uber can do a better job of educating their customers—and thus avoiding a mob of angry ones—by carefully deploying ugly fonts on their Web sites and apps.14 (This runs counter to the common belief that information should always be as easy to process as possible.) The same approach can also be used to close the digital reading gap, as many studies suggest that we read significantly worse on screens than we do on paper.15

These are just a few suggestions for how businesses and governments can use the tools and tactics of behavioral science to improve our online behavior. This book is filled with many more examples, as I believe we are on the cusp of a huge opportunity: By taking advantage of this practical research, we can dramatically boost the quality of our digital decisions. We can see better, learn more, and regret less.

So why am I, a behavioral economist, writing this book? I have devoted my career to studying the mistakes people make so that we might learn to avoid them. For example, in my research with Richard Thaler, a behavioral economist and coauthor of the book Nudge,16 we used psychological insights to help four million employees significantly boost their savings rates using the Save More Tomorrow program.17 That’s the good news. The bad news is that it took us fifteen years to reach that many people. What’s worse, there are still tens of millions of Americans whom we failed to help, and who still aren’t saving enough. I have been continually frustrated by the slow pace of this process.

My hope is that we can use the scale of technology to bring more fixes to more people in far less time. After all, if you want to influence citizens and customers in the twenty-first century, you don’t have to knock on their doors, or interrupt them on the street—you can just interact with them online, using the reach of the digital world to quickly contact vast numbers of people with minimal effort. In fact, influencing behavior on screens can be so efficient and effective that I believe we have a chance to help a billion people think smarter and choose wiser. That’s right: billion. With a b.

However, this opportunity comes with an important caveat: in order to take advantage of these digital nudges, I believe we need to tailor them for our new online environment. Although we like to pretend that our brain isn’t altered by technology, new evidence suggests that these splendid inventions are shifting the patterns of our behavior in all sorts of subtle ways. What’s more, these shifts are often predictable, allowing us to anticipate how people will act on a device, and how they will respond to our interventions. (We can even explain some consistent quirks of digital behavior, such as why people value items more when shopping on a tablet,18 or why they will probably get lower scores when taking the SAT on a computer,19 or why they order pizza with more calories when ordering off a Web site.20) The end result is that we need to update our behavioral toolkit for the digital age. This book will give you the tools you need now, at least if you want to nudge people the right way on screens.

Let me be clear: I’m not saying your head has been rewired by your smartphone. (Human nature evolved over millions of years; it’s unlikely to be transformed in a decade or two.) Nevertheless, there are many relevant differences in offline versus online thinking, which should be reflected in the designs of our screens. And since every business is now a digital business, and nearly every consumer is making important decisions on their gadgets, it’s incredibly important that we get these designs right. The medium of information and decision making has changed. So should our interventions and nudges.

Of course, this is all very new research, which means that a few disclaimers are in order. Some of the studies in this book directly compare online and offline behavior, while other studies are more suggestive. When the evidence is more speculative, I will make that clear. In addition, these behavioral tools won’t be able to solve every digital problem. While we can design screens that might make it easier to deal with information overload and choose better insurance, we’re not going to completely eliminate online mistakes or mollify every upset Uber customer.

Technological revolutions provide us with a rare opportunity to fundamentally reimagine how we think and live. Who could have guessed that, one day, many of the most important military decisions would be made on a computer? Or that the layout of a Web site would determine how many millions of Americans will get health care insurance and 401(k) accounts? Or that the smartphone would be the last thing most of us see at night and the first thing we see in the morning?

We are living in a world increasingly made of zeros and ones; more and more of our lives are taking place on screens. This book helps us take advantage of this moment, ensuring that we won’t squander the possibilities of the digital revolution.

Let’s get started.

CHAPTER 1

The Mental Screen

THE FOURTH NIGHT

I’d like to begin with a story. It’s a story that takes place in a time before the Internet, way back in the early 1990s. The story involves a man who wants to book a hotel room in Cleveland. (Great story, huh?) There are a few different ways this booking could happen.

Perhaps the man has a trusted travel agent, and so he calls the agent and tells her what he’d like: a nice three-star hotel, close to the airport. She takes down his preferences, checks her paper files, and then picks up the phone to call the hotel. For her services, she charges the hotel a 10 percent commission.

Alternatively, our intrepid traveler might want to book the room himself. If that’s the case, and if he’s never been to Cleveland before, then he needs a travel guide. A tourist manual. Maybe it’s Fodor’s, or Let’s Go, or the newsprint manuals provided at the local AAA office—there’s no booking without a book.

Fast-forward to the present day. Chances are, our protagonist now relies on the Internet to make his hotel reservations. (The number of travel agents employed by travel agencies has declined by roughly 55 percent in the last fifteen years.)1 He almost certainly begins with Google, searching for a hotel near the Cleveland airport. A few milliseconds later, his screen is filled with results.

If you look closely at the screen, however, you’ll notice something strange, as the top results don’t refer our traveler to actual hotel Web sites. Instead, they send him to a category of Web sites called online travel agents, or OTAs, which have come to dominate the market for hotel reservations. Think here of Booking.com, Kayak, Expedia, or Hotels.com. These sites don’t run hotels or own hotels. They are middlemen, pure and simple, just like human travel agents. All they do is lift photographs and relevant information from hotel Web sites and then organize the listings based on a customer’s preferences. Do you care about location? Price? Star ratings? Pool? A free shuttle to the airport?

Here’s a question: Can you guess how much OTAs charge for commission? Keep in mind that OTAs are primarily aggregators, helping customers search through all the hotels in a given area. While hotel owners have to buy the land, build the hotel, and then pay a large staff to take care of the customers and property, OTAs have none of these expenses. Instead, their costs are dominated by digital advertising, as they seek out ways to grab your attention on a digital device. Of course, once they’ve got your attention—after they secure your gaze and clicks—OTAs can then sell that attention back to the hotels.

So how much do you think your attention is worth? What sort of commission can OTAs get away with charging hotel owners for each booking?

My initial guess was 5 percent, although even that might seem a little high. Human travel agents, after all, have to deal directly with their customers. They need to spend time learning their preferences and finding a suitable hotel. On average, all of this work gets them only a 10 percent commission. OTAs, on the other hand, rely on algorithms to do all the work—no personal touch is required. It seems like a clear example of the online world lowering the cost of business, squeezing out the human middlemen and making the world a more efficient place.

But I was wrong. In fact, I was off by a factor of five. Because here’s the shocking truth: OTAs routinely charge commissions between 20 and 30 percent.2 Think, for a moment, about how remarkable this is—when you book a hotel through Expedia or Priceline or Travelocity, one out of every four nights goes to the Web site. They haven’t changed the sheets, or heated the pool, or restocked the minibar. They don’t pay the mortgage or the staff. And yet, they are taking a fourth of hotel revenues.

How do OTAs get away with this? Why would hotels ever pay such exorbitant commissions to booking Web sites, especially when their own Web sites offer the exact same services?

The answer reveals a very interesting truth about life in the twenty-first century. The extremely lucrative business model of the OTAs is based on taking advantage of the mismatch between all of the information on our physical screens—these digital displays we spend all day staring at—and our mental screen, which is the information we can actually pay attention to. The high commissions of these Web sites might seem ridiculous, but they have identified an opportunity to help people think and choose better online. The point of this book is that such opportunities are everywhere.

But only if you know how to find them.

THE FIRE HOSE

Before the Internet was in your pocket, the challenge of booking a hotel was finding useful information. It wasn’t easy to get the phone number, let alone some relevant pictures of the guest rooms. We were choosing in the dark, which is why we were so reliant on travel agents to make the choice for us.

But now? We are drowning in information. A simple Google search for “Cleveland airport hotel” returns more than five million hits.3 And even if we browse the first few pages of results, there’s still the problem of making a selection. Should we stick with the Holiday Inn? Is the Sheraton worth the extra money? Because there’s no clear answer, we end up perusing the Web sites, comparing pictures, searching for relevant details. We read way too many user reviews. It’s an arduous process, sure to leave us longing for the days of travel agents.

Needless to say, the surfeit of Google results for a Cleveland hotel is only a tiny example of the profound changes unleashed by the information revolution. Here’s a metaphor that has helped me think about the information age. Like all metaphors, it’s an imperfect comparison—it probably underestimates the magnitude of the change we’re living through—but I think it helps us grasp the trade-offs triggered by the digital revolution. Once upon a time, the flow of data was more like the drip of water from a leaky fixture. In fact, the amount of information was so minuscule that most people were thirsty for more; we had excess attentional capacity. But then along came Gutenberg, print culture took off, and, by the middle of the twentieth century, the flow of information was more like the steady stream of water coming out of a kitchen faucet.

Computers changed everything. Starting in the 1980s, the quantity of information began to rise at an exponential pace. If the information in every letter delivered by the U.S. Postal Service in 2010 were added up, the amount would be roughly equal to five petabytes.4 Google processes that much data on a slow afternoon. (According to a recent estimate from IBM data scientists, “90 percent of all the data in the world today has been created in the last two years.”)5 The situation is just as extreme if you look at our personal interactions. According to a study by Martin Hilbert, the quantity of two-way communication done by people daily—that includes phone conversations, e-mail, and text—has gone from the equivalent of two newspaper pages in 1986 to twenty entire newspapers in 2010.6 It’s as if our kitchen faucet were replaced by a high-pressure fire hose, spraying us in the face with a deluge of data.

The point of the fire hose metaphor is that the presence of more water—a fire hose sprays approximately 125 times as many gallons per minute as a kitchen faucet—doesn’t translate into increased drinking capacity. (In fact, it might even lead to less drinking, as we’ll see shortly.) That’s because the human mouth has fixed constraints, and can only open so wide. It doesn’t matter how much water is flowing by our face—we will never be able to gulp more than a few ounces at a time.

The human mind is the same way. When it comes to how much information we can process, the limiting factor is rarely what’s on the screen, for the amount of information on the monitor will almost always exceed the capacity of the mind to take it in. Instead, we are limited by the scarcity of attention, by our inability to focus on more than a few things at the same time.

Herbert Simon, a Nobel Prize–winning psychologist, was one of the first people to understand this. In 1971, back when the information age was just beginning, Simon realized that the growth of information would have massive psychological consequences. “In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes,” he wrote.7 For Simon, what information consumed was “rather obvious: it consumes the attention of its recipients.” Furthermore, because attention is a relatively inelastic property—the mind can notice only so many things at once—“a wealth of information creates a poverty of attention,” forcing people to make difficult choices about what to perceive and think about.

Although Simon’s quote is now a catchphrase—there’s lots of talk about our “attention economy”—Simon wrote those lines before computers were commonplace. He was worried about information overload before there was e-mail, or Google, or smartphones, or the Apple Watch. Back when newspapers didn’t refresh their headlines every five minutes. In other words, what Simon considered “a wealth of information” would now feel like a data desert. This means, of course, that the relationship between information and attention has only gotten more extreme. It is a behavioral principle—too much information leads to a scarcity of attention—that’s amplified on screens.

The ironic takeaway is that, in the age of information, we are less able than ever before to process information, since our attention is all used up. (To return to our fire hose metaphor: we are drenched in water, but thirstier than ever.) If you want some evidence of our collective attention deficit, consider what happens when you give people a short test to make sure they’re paying attention. Such “attention filters” are a standard protocol of social science conducted on screens, a way to make sure that subjects are actually reading the text and following instructions. (In my research, I exclude subjects who fail the filter.) Here, for instance, is a slightly modified version of the attention filter pioneered by Daniel Oppenheimer and colleagues in a 2009 paper.8 Please read the text and follow the instructions:

Most modern theories of decision making recognize the fact that decisions do not take place in a vacuum. Individual preferences and knowledge, along with situational variables, can greatly impact the decision process. In order to facilitate our research on decision making we are interested in knowing certain factors about you, the decision maker. Specifically, we are interested in whether you actually take the time to read the directions; if not, then some of our questions will be unclear. So, in order to demonstrate that you have read the instructions, please do not answer any of the questions on the next page. You will begin answering questions again on the following page.

When I eat out I like to try the most unusual items the restaurant serves, even if I am not sure I would like them.

__ Agree

__ Somewhat Agree

__ Somewhat Disagree

__ Disagree

It’s a trick question, of course: if you answered it, then you clearly didn’t pay enough attention to the instructions.

So how many people fail the filter? The results are consistently depressing. When Oppenheimer and colleagues gave 480 subjects a version of the attention filter with pen and paper, anywhere between 14 and 28.7 percent of them failed the test, depending on their level of motivation. However, when the scientists gave a similar attention filter to 213 people on a computer screen, they got a much higher failure rate of 46 percent. In one instance, the scientists could replicate the results of classic behavioral science experiments on a computer only if they disregarded those who failed the attention filter. We are a scatterbrained species, especially when thinking on screens.

Let’s pause, for a moment, to consider the implications of this finding. How many surveys have you filled out on a screen? How many questions have you been asked? I’m guessing the answer is a lot. The attention filter data, of course, suggests that a near majority of the responses come from people who aren’t paying attention. Their answers aren’t that useful because they didn’t take the time to read the questions.
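Before returning to the hotel story, it may help to see what that exclusion step looks like in practice. The sketch below, in Python with pandas, uses made-up responses and hypothetical column names; it is not code from Oppenheimer’s study or from my own research, just an illustration of dropping subjects who answered the trick item.

```python
import pandas as pd

# Hypothetical survey export: one row per subject. The attention-filter item
# should have been left blank, so any response to it signals that the
# instructions were not read.
responses = pd.DataFrame({
    "subject_id": [1, 2, 3, 4],
    "filter_item": [None, "Agree", None, "Somewhat Disagree"],
    "main_question": [5, 2, 4, 1],
})

passed = responses["filter_item"].isna()          # True = left the trick item blank
print(f"Failure rate: {1 - passed.mean():.0%}")   # share who answered it anyway
clean = responses[passed]                         # keep only attentive subjects
print(clean["main_question"].mean())              # analyze the filtered data
```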

And this brings us back to the exorbitant commissions of OTAs. The Web sites realize that we are probably overwhelmed with information and options. They know we don’t want to scroll through a chaotic list of Google results, or take the time to visit each hotel Web site. And so they make it simple for us: they buy up all the ads at the top of the search results. Try it: Search for any hotel in any city and I guarantee that the top paid result will be an OTA. Booking.com, for instance, is one of the largest spenders on Google, paying $7.68 for every click on a “new york hotel” ad.9 Once we click on an OTA link, the goal of the Web site is to rank our surfeit of travel options based on the one or two variables we think are most important. In short, they are sites for people who feel overwhelmed, who don’t know how to cope with the fire hose of travel information, or who don’t have any idea what to do with five million hits after searching for Cleveland airport hotels.

That certainly sounds nice—I often find OTAs quite useful—but it still doesn’t answer the deeper mystery, which is how OTAs get away with taking every fourth night as a commission. To answer that question, one needs to remember a basic lesson of economics, which is that money chases scarcity. That’s why diamonds are more valuable than gold, and gold is more valuable than quartz. It’s why the price of any resource—and it doesn’t matter if the resource is plutonium or crude oil—goes up when demand exceeds supply. Fortunes are made from scarcities, and the richest people are those who notice the scarcities first.

So what’s scarce in the twenty-first century? It’s not hotel rooms in Cleveland. And it’s not information about those rooms. Instead, this surplus of possibilities has created a serious scarcity of attention, just as Herbert Simon predicted. OTAs are successful because they help us deal with this scarcity.

The end result is that, in the age of information, established travel brands like InterContinental and Hilton are stuck. They know they are losing lots of money to Booking.com and Expedia and every other OTA. They realize they can’t afford to forfeit 25 percent of their revenues indefinitely, which is why they insist that things will change, that consumers will begin going to their sites to reserve a room. (Most hotel chains now offer a “Best Rate Guarantee,” assuring customers that they will get the cheapest price if they book directly.) And yet, people continue to rely on OTAs: in late 2013, the market share of OTA hotel bookings underwent a sharp increase, growing at nearly 2.5 times the rate of hotel Web site bookings.10 If current trends continue, I wouldn’t be surprised if OTA commissions exceeded 50 percent in the near future. This would mean, of course, that the majority of a hotel room’s price would go to the digital middleman, and that getting your attention would be more valuable than providing you with an actual place to sleep.

The lesson is simple: human attention has become the sweet crude oil of the twenty-first century. If you can control the levers of human attention, then you can essentially charge whatever you’d like.

THE BOUNDED BRAIN

A few months ago, I walked out of a meeting with a good idea. An unusually good idea. One of the best ideas I’d had in months.

But this story is not about that idea. Instead, it’s about what happened afterward. I’d just left a meeting in New York City and I was headed back to my hotel. I’d traveled this route countless times before: I knew how to get to the subway station, where to wait for the train, and when to get off.

And yet, on this afternoon, I made an elementary mistake. While my mind was occupied with my great new idea—I was taking some notes on my iPhone—I somehow wandered over to the wrong side of the station. Although I needed the downtown train, I found myself on the uptown platform. Mildly embarrassed, I left the uptown platform, paid for another fare, and resumed thinking about my new idea.

Here’s the punchline: I then repeated the exact same mistake and found myself, once again, waiting for a train headed in the wrong direction. Because I was so distracted by my new idea, I was forced to buy three subway tickets for the same ride. Not my finest moment.

This is an ordinary failure of cognition. It happens to most of us many times over the course of a day, as we commit small errors while thinking about bigger things. Maybe you’re preoccupied with thoughts of lunch, so you misread an e-mail. Or maybe you’re paying so much attention to your phone that you walk into a wall, or run a stop sign, or get on the wrong subway train. Because the mind has limited thinking capacity—it can pay attention to only so many things at once—we often fail to notice important details of the world.

One of the first psychologists to investigate these innate mental constraints was George Miller. In an influential paper presented in 1956 at a meeting of the Institute of Radio Engineers at MIT, Miller proposed that the brain was actually a bounded machine, deeply constrained by its short-term memory. The title of a published version of his paper—“The Magical Number Seven, Plus or Minus Two”11—says it all, for he insisted that people can remember only about seven pieces of information (+/–2) at any given time. This is why all the relevant numbers in our life, from license plates to phone numbers, are about the same length. If they were any longer, we wouldn’t be able to remember them.

Miller’s short paper revealed the information processing bottlenecks of the human mind. As the years passed, scientists got better at measuring these bottlenecks, outlining all the ways in which the personal computer inside our head was a bounded device. I wish I could think about big ideas and take notes on my phone and navigate the New York City subway at the same time, but I can’t. And if I want to avoid wasting time and money on extra train tickets in the future, then I should begin by admitting my limitations, knowing that my attention is a more limited resource than I’d like to believe. As we’ll soon see, this same mental limitation has critical implications for all sorts of activities, from driving in the digital age to managing patient care.

Here’s an exercise known as the reading span task, which was invented in a 1980 paper published by Meredyth Daneman and Patricia Carpenter.12 I’m going to present you with a series of sentences drawn from their task that I’d like you to read aloud. Please try to remember the final word of each sentence. I’ve boldfaced the words to make it a little easier.


   • When at last his eyes opened, there was no gleam of triumph, no shade of anger.
   • The taxi turned up Michigan Avenue where they had a clear view of the lake.
   • Great things are not done by impulse, but by a series of small things brought together.
   • One exercise of the mind is the habit of forming clear and precise ideas.
   • After playing notes and more notes, it is simplicity that emerges as the crowning reward of art.
   • His parents couldn’t understand why he wanted a tattoo on his right shoulder.
   • The director was very popular, until the employees heard about his affair.

Are you done? Now please turn the page.

I’d like you to write down the last word of each sentence in the order in which it was read.

How’d you do? It’s a hard test, right? While Miller argued that the capacity of short-term memory was about seven items, the reading span task suggests that it’s actually much lower. According to most studies, we’re able to remember only three or four of the final words when we read the sentences out loud.13 And it’s not just the reading span task: As psychologists came up with new ways of measuring the amount of information we can attend to—this is known as working memory—they discovered that Miller’s magical number was way too optimistic. In an influential series of papers, the psychologist Nelson Cowan argued that the true magical number is actually four (plus or minus one), with most tests of working memory showing that we start to miss crucial information whenever the number of bits (letters, words, numbers, colors, whatever) exceeds that amount.14
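For readers who want to score themselves, here is a minimal sketch of one way to tally a reading span attempt against the sentence-final words above. Scoring conventions vary across labs; the lenient rule used here (count recalled targets regardless of order) is an assumption for illustration, not the exact procedure from Daneman and Carpenter’s paper.

```python
# Final words of the seven sentences above, in presentation order.
targets = ["anger", "lake", "together", "ideas", "art", "shoulder", "affair"]

def reading_span_score(recalled, targets):
    """Count how many target words were recalled, ignoring order (a lenient rule)."""
    recalled_set = {word.strip().lower() for word in recalled}
    return sum(t in recalled_set for t in targets)

# Hypothetical attempt: most readers manage only three or four of the seven.
attempt = ["anger", "lake", "shoulder", "affair"]
print(reading_span_score(attempt, targets))  # prints 4
```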

"This book is a real eye-opener. It describes how we really function in a screen-based society, and how the digital world is changing our thinking about ourselves and others."
—ROBERT SHILLER, winner of the Nobel Prize in Economics; professor of economics at Yale University; author of Finance and the Good Society

"We know that the way we organize a buffet influences the choices people make. What is much less clear is the way online environments influence our decisions. In this useful book, Benartzi and Lehrer give us precise insights about the relationship between what we see on the screen, what we think, and what we choose."
—DAN ARIELY, professor of psychology and behavioral economics at Duke University; author of Predictably Irrational

"Our lives increasingly depend on the interaction of bounded minds with bound-less information on screens. In this insightful book, Benartzi explores the human experience in the digital world and considers the many ways—social, psychological, ethical, and financial—we might make screens serve us better. A fun, important, and revelatory read!"
—ELDAR SHAFIR, professor of psychology and public affairs at Princeton University; coauthor of Scarcity

"What makes businesses such as Uber and Amazon so disruptive is nothing other than the magical power of the screen. Change the interface and our behavior changes. We like to believe that our preferences exist independently of the medium in which we express them. But as this book so brilliantly explains, this is very far from the truth."
—RORY SUTHERLAND, vice chairman, Ogilvy & Mather UK

"If you think you know how screens influence human behavior, think again. This book is a must-read for investors, business owners, and anyone else with a stake in how people make decisions in the digital age."
—BILL HARRIS, CEO of Personal Capital; former CEO of Intuit and PayPal

About

A leading behavioral economist reveals the tools that will improve our decision making on screens

Office workers spend the majority of their waking hours staring at screens. Unfortunately, few of us are aware of the visual biases and behavioral patterns that influence our thinking when we’re on our laptops, iPads, smartphones, or smartwatches. The sheer volume of information and choices available online, combined with the ease of tapping "buy," often make for poor decision making on screens.

In The Smarter Screen, behavioral economist Shlomo Benartzi reveals a tool kit of interventions for the digital age. Using engaging reader exercises and provocative case studies, Benartzi shows how digital designs can influence our decision making on screens in all sorts of surprising ways.

For example: 
• You’re more likely to add bacon to your pizza if you order online. 
• If you read this book on a screen, you’re less likely to remember its content. 
• You might buy an item just because it’s located in a screen hot spot, even if better options are available. 
• If you shop using a touch screen, you’ll probably overvalue the product you’re considering. 
• You’re more likely to remember a factoid like this one if it’s displayed in an ugly, difficult-to-read font. 

Drawing on the latest research on digital nudging, Benartzi reveals how we can create an online world that helps us think better, not worse.

Author

Shlomo Benartzi is a professor and a cochair of the Behavioral Decision-Making Group at UCLA's Anderson School of Management. He has advised many government agencies, financial institutions, and advisory boards, and currently serves as the chief behavioral economist for the Allianz Global Investors Center for Behavioral Finance. He co-created (with Richard Thaler) the acclaimed Save More Tomorrow program, which makes it easy for employees to increase their retirement savings rate and has helped millions of people boost their savings. View titles by Shlomo Benartzi

Excerpt

ACKNOWLEDGMENTS

This book would not exist without the help of many people. I benefited from huge amounts of feedback, both during the development of the ideas in this book and during the writing process. First, I was incredibly fortunate to get access to many brilliant academics, scientists, and colleagues, who were kind enough to read various drafts of the chapters and offer their insights and comments. I’m very grateful to Peter Ayton, Maya Bar-Hillel, Tibor Besedes, Saurabh Bhargava, Barbara Fasolo, Gavan Fitzsimons, Craig Fox, Dan Goldstein, Noah Goldstein, Michael Hallsworth, David Halpern, Hal Hershfield, Eric Johnson, Yaron Levi, George Loewenstein, Katy Milkman, Daniel Oppenheimer, Katharina Reinecke, Elena Reutskaja, and Philip Tetlock.

A few friends deserve special mention, as they devoted many hours to the book and improved it in countless ways. If it weren’t for David Faro, John Payne, and Richard Thaler, The Smarter Screen would be far less smart.

I was also lucky enough to get valuable input from many friends in the industry. A big thank you to David Collyer, Udo Frank, Bill Harris, Thomas MacNeil, Charlie Nelson, Cathy Smith, and Matt Stewart. Danny Kalish read every chapter, often more than once, and offered many important insights that led me in new directions.

It was crucial to me that this book be as accurate as possible. Steve Shu, my very good friend, spearheaded the fact-checking process. In addition, I’m very grateful for the time and diligence of Jolie Martin, Amit Runchal, Namika Sagara, and Hadas Sella. They combed through every citation and double-checked every fact and quote. All remaining mistakes are my own.

This book was guided by a skilled team at Portfolio. I owe a big thank you to Adrian Zackheim for seeing the potential in a book about behavioral economics in the digital age. And my editor, Niki Papadopoulos, helped ensure the book was as readable and engaging as possible.

I want to thank my collaborator, Jonah Lehrer, an amazing friend and brilliant writer. We had a great time working on this book together. It could never have been written without him.

Last, but definitely not least, I want to thank my family. Shalom, my dad, and Leah, my mom, for everything they taught me. My wife, Lesli, and Maya, my little girl, put up with many late nights and countless discussions about the material in this book. They inspire me every day.

INTRODUCTION

On October 1, 2013, the United States government launched a new Web site, www.healthcare.gov, that was designed to help people choose health insurance. In essence, the site was a shopping portal, allowing consumers to compare prices and features on all of the insurance plans available in their local area. Because the government hoped to sign up millions of uninsured Americans, they decided to rely on the scale of the Web.

While most of the media coverage of the Web site centered around its glaring technical glitches, very little attention was paid to a potentially far more important issue: Did the Web site actually help consumers find the best insurance plans? Given the reach of Obamacare, even seemingly minor design details could have a huge impact, influencing a key financial decision in the lives of millions of Americans.

Unfortunately, research suggests that most people probably made poor insurance choices on the Web site. A study conducted by Saurabh Bhargava, George Loewenstein, and me demonstrated that the typical subject using a simulated version of healthcare.gov chose a plan that was $888 more expensive than it needed to be.1 This was equivalent to roughly 3 percent of their income. Meanwhile, an earlier study, led by Eric Johnson at Columbia University, found that giving consumers more health care options on sites like healthcare .gov dramatically decreased their ability to find the best plan. In fact, even offering people a modest degree of choice meant that nearly 80 percent of them picked suboptimally.2

Can this problem be fixed? The online world offers us more alternatives than ever before: the average visitor to healthcare.gov was offered forty-seven different insurance plans,3 while Zappos.com features more than twenty-five thousand women’s shoes. But how should Web sites help us choose better?

On the morning of February 21, 2010, an American Predator drone began tracking a pickup truck and two SUVs traveling on a road near the village of Shahidi Hassas in southern Afghanistan. As the drone followed the vehicles, it beamed a live video feed to a crew of analysts based at Creech Air Force Base outside Las Vegas.4

Such intelligence is now a staple of modern warfare. The CIA used drones to gather intel on Osama bin Laden’s hideout; the Israeli Defense Forces flew dozens of unmanned aircraft over Gaza during the recent conflict; the United States Air Force accumulated more than five hundred hours of aerial video footage every single day in Afghanistan and Iraq.5

This flood of information creates an obvious problem: someone has to process it. Unfortunately, the evidence suggests that drone crews are often overwhelmed by the visual data. One study, led by Ryan McKendrick at George Mason University, showed that people simulating the multitasking environment of drone operators performed worse on an air defense task;6 another experiment, which looked at gunners in armored vehicles, found that the soldiers failed to perform their primary task effectively—noticing the bad guys—when a second task was added to the list.7 In experiment after experiment, the surplus of digital information creates blind spots on the screen.8

That’s what happened to the analysts tracking those vehicles in southern Afghanistan. According to an internal military investigation,9 the cubicle warriors in Nevada couldn’t handle all of the available information as they toggled back and forth among the video feed, radio chatter, and numerous instant messages. As a result, they failed to notice that the truck and SUVs were actually filled with civilians. And so the drone operators gave the order to fire, unleashing a barrage of Hellfire missiles and rockets. Twenty-three innocent people were killed in the attack.

How can we make such tragedies less likely to happen? What should the Air Force and CIA do to minimize the risk of blind spots on screens? And how can other organizations, from financial institutions to hospitals, deal with the same problem of digital information overload?

On December 14, 2013, Jessica Seinfeld used the Uber app to drop her children off across town at a bar mitzvah and a sleepover.10 Unfortunately, the ride took place in the midst of a New York City snowstorm, which meant that Uber had put surge pricing into effect. (When demand for drivers is high—say, during a blizzard, or on New Year’s Eve—Uber systematically raises its rates to entice more drivers to enter the marketplace.) During this storm, demand for drivers was so high that some Manhattan customers were charged 8.25 times the normal fare. Although Uber warned its customers about the surcharge before they ordered a ride, the warning clearly wasn’t effective, as social media soon lit up with complaints of price gouging. Jessica Seinfeld, for instance, posted a picture of her $415 Uber bill on Instagram, while many others lamented their crosstown rides that cost more than $150.11 Uber had provided a valuable service—helping people get home in a bad storm—but had also angered a lot of customers. It’s never a good sign when your company is the reason people are tweeting the hashtag #neveragain.

The surge pricing problem is indicative of a more common digital hazard, which is that people often think very fast on screens. Uber customers, of course, benefit from this quick pace, as the streamlined app makes it easy for people to book rides with a few taps of the thumb. However, when surge pricing is in effect that same effortless ease can backfire, since consumers book rides on their phone without realizing how much the rides are going to cost.

How should Uber fix its app? Is there any way to help consumers avoid online decisions they’ll soon regret?

These three stories illustrate a few of the many ways in which the digital revolution is changing the way we live, from the analysis of military intelligence to the booking of taxi rides. They reveal an age in which we have more information and choices than ever before, and are able to act on them with breathtaking speed. But these stories are also a reminder of the profound challenges that remain. We have more choices, but we choose the wrong thing. We have more information, but we somehow miss the most relevant details. We can act quickly, but that often means we act without thinking.

It’s a cliché to complain about these trends. It’s easy to lament all the ways the online world leaves us confused and distracted, forgetful and frazzled.

This book is not about those complaints. It is not about how smartphones make us stupid. It is not a requiem for some predigital paradise.

Instead, this book is about how screens can be designed to make us smarter. It’s a book of behavioral solutions and practical tools that can improve our digital lives. It’s about how the same technological trends that lead people to buy the wrong insurance plan and book a $415 taxi ride can be turned into powerful digital opportunities, rooted in the latest research about how we think and choose on smartphones, tablets, and computers.

Here are three examples of potential solutions. If you want to encourage people to select the best health care plan, or choose the right product on your Web site, then you might want to consider a choice tournament modeled on Wimbledon and March Madness. (Instead of giving people all the options at once, you divide the best options into different rounds—work led by Tibor Besedes shows this dramatically improves decision making.)12

And if you want to help intelligence analysts avoid blind spots, it’s often helpful to zoom out and provide fewer details about the scene. (In a real-world study conducted in Israel, providing less detailed feedback led to big improvements in decision making among investors.13 I bet it would also help drone operators.) This fix is not just about giving people less information—it’s also about using new information compression technologies to help us cope with our limited attention.

Finally, companies like Uber can do a better job of educating their customers—and thus avoiding a mob of angry ones—by carefully deploying ugly fonts on their Web sites and apps.14 (This runs counter to the common belief that information should always be as easy to process as possible.) The same approach can also be used to close the digital reading gap, as many studies suggest that we read significantly worse on screens than we do on paper.15

These are just a few suggestions for how businesses and governments can use the tools and tactics of behavioral science to improve our online behavior. This book is filled with many more examples, as I believe we are on the cusp of a huge opportunity: By taking advantage of this practical research, we can dramatically boost the quality of our digital decisions. We can see better, learn more, and regret less.

So why am I, a behavioral economist, writing this book? I have devoted my career to studying the mistakes people make so that we might learn to avoid them. For example, in my research with Richard Thaler, a behavioral economist and coauthor of the book Nudge,16 we used psychological insights to help four million employees significantly boost their savings rates using the Save More Tomorrow program.17 That’s the good news. The bad news is that it took us fifteen years to reach that many people. What’s worse, there are still tens of millions of Americans whom we failed to help, and who still aren’t saving enough. I have been continually frustrated by the slow pace of this process.

My hope is that we can use the scale of technology to bring more fixes to more people in far less time. After all, if you want to influence citizens and customers in the twenty-first century, you don’t have to knock on their doors, or interrupt them on the street—you can just interact with them online, using the reach of the digital world to quickly contact vast numbers of people with minimal effort. In fact, influencing behavior on screens can be so efficient and effective that I believe we have a chance to help a billion people think smarter and choose wiser. That’s right: billion. With a b.

However, this opportunity comes with an important caveat: in order to take advantage of these digital nudges, I believe we need to tailor them for our new online environment. Although we like to pretend that our brain isn’t altered by technology, new evidence suggests that these splendid inventions are shifting the patterns of our behavior in all sorts of subtle ways. What’s more, these shifts are often predictable, allowing us to anticipate how people will act on a device, and how they will respond to our interventions. (We can even explain some consistent quirks of digital behavior, such as why people value items more when shopping on a tablet,18 or why they will probably get lower scores when taking the SAT on a computer,19 or why they order pizza with more calories when ordering off a Web site.20) The end result is that we need to update our behavioral toolkit for the digital age. This book will give you the tools you need now, at least if you want to nudge people the right way on screens.

Let me be clear: I’m not saying your head has been rewired by your smartphone. (Human nature evolved over millions of years; it’s unlikely to be transformed in a decade or two.) Nevertheless, there are many relevant differences in offline versus online thinking, which should be reflected in the designs of our screens. And since every business is now a digital business, and nearly every consumer is making important decisions on their gadgets, it’s incredibly important that we get these designs right. The medium of information and decision making has changed. So should our interventions and nudges.

Of course, this is all very new research, which means that a few disclaimers are in order. Some of the studies in this book directly compare online and offline behavior, while other studies are more suggestive. When the evidence is more speculative, I will make that clear. In addition, these behavioral tools won’t be able to solve every digital problem. While we can design screens that might make it easier to deal with information overload and choose better insurance, we’re not going to completely eliminate online mistakes or mollify every upset Uber customer.

Technological revolutions provide us with a rare opportunity to fundamentally reimagine how we think and live. Who could have guessed that, one day, many of the most important military decisions would be made on a computer? Or that the layout of a Web site would determine how many millions of Americans will get health care insurance and 401(k) accounts? Or that the smartphone would be the last thing most of us see at night and the first thing we see in the morning?

We are living in a world increasingly made of zeros and ones; more and more of our lives are taking place on screens. This book helps us take advantage of this moment, ensuring that we won’t squander the possibilities of the digital revolution.

Let’s get started.

CHAPTER 1

The Mental Screen

THE FOURTH NIGHT

I’d like to begin with a story. It’s a story that takes place in a time before the Internet, way back in the early 1990s. The story involves a man who wants to book a hotel room in Cleveland. (Great story, huh?) There are a few different ways this booking could happen.

Perhaps the man has a trusted travel agent, and so he calls the agent and tells her what he’d like: a nice three-star hotel, close to the airport. She takes down his preferences, checks her paper files, and then picks up the phone to call the hotel. For her services, she charges the hotel a 10 percent commission.

Alternatively, our intrepid traveler might want to book the room himself. If that’s the case, and if he’s never been to Cleveland before, then he needs a travel guide. A tourist manual. Maybe it’s Fodor’s, or Let’s Go, or the newsprint manuals provided at the local AAA office—there’s no booking without a book.

Fast-forward to the present day. Chances are, our protagonist now relies on the Internet to make his hotel reservations. (The number of travel agents employed by travel agencies has declined by roughly 55 percent in the last fifteen years.)1 He almost certainly begins with Google, searching for a hotel near the Cleveland airport. A few milliseconds later, his screen is filled with results.

If you look closely at the screen, however, you’ll notice something strange, as the top results don’t refer our traveler to actual hotel Web sites. Instead, they send him to a category of Web sites called online travel agents, or OTAs, which have come to dominate the market for hotel reservations. Think here of Booking.com, Kayak, Expedia, or Hotels.com. These sites don’t run hotels or own hotels. They are middlemen, pure and simple, just like human travel agents. All they do is lift photographs and relevant information from hotel Web sites and then organize the listings based on a customer’s preferences. Do you care about location? Price? Star ratings? Pool? A free shuttle to the airport?

Here’s a question: Can you guess how much OTAs charge for commission? Keep in mind that OTAs are primarily aggregators, helping customers search through all the hotels in a given area. While hotel owners have to buy the land, build the hotel, and then pay a large staff to take care of the customers and property, OTAs have none of these expenses. Instead, their costs are dominated by digital advertising, as they seek out ways to grab your attention on a digital device. Of course, once they’ve got your attention—after they secure your gaze and clicks—OTAs can then sell that attention back to the hotels.

So how much do you think your attention is worth? What sort of commission can OTAs get away with charging hotel owners for each booking?

My initial guess was 5 percent, although even that might seem a little high. Human travel agents, after all, have to deal directly with their customers. They need to spend time learning their preferences and finding a suitable hotel. On average, all of this work gets them only a 10 percent commission. OTAs, on the other hand, rely on algorithms to do all the work—no personal touch is required. It seems like a clear example of the online world lowering the cost of business, squeezing out the human middlemen and making the world a more efficient place.

But I was wrong. In fact, I was off by a factor of five. Because here’s the shocking truth: OTAs routinely charge commissions between 20 and 30 percent.2 Think, for a moment, about how remarkable this is—when you book a hotel through Expedia or Priceline or Travelocity, one out of every four nights goes to the Web site. They haven’t changed the sheets, or heated the pool, or restocked the minibar. They don’t pay the mortgage or the staff. And yet, they are taking a fourth of hotel revenues.

How do OTAs get away with this? Why would hotels ever pay such exorbitant commissions to booking Web sites, especially when their own Web sites offer the exact same services?

The answer reveals a very interesting truth about life in the twenty-first century. The extremely lucrative business model of the OTAs is based on taking advantage of the mismatch between all of the information on our physical screens—these digital displays we spend all day staring at—and our mental screen, which is the information we can actually pay attention to. The high commissions of these Web sites might seem ridiculous, but they have identified an opportunity to help people think and choose better online. The point of this book is that such opportunities are everywhere.

But only if you know how to find them.

THE FIRE HOSE

Before the Internet was in your pocket, the challenge of booking a hotel was finding useful information. It wasn’t easy to get the phone number, let alone some relevant pictures of the guest rooms. We were choosing in the dark, which is why we were so reliant on travel agents to make the choice for us.

But now? We are drowning in information. A simple Google search for “Cleveland airport hotel” returns more than five million hits.3 And even if we browse the first few pages of results, there’s still the problem of making a selection. Should we stick with the Holiday Inn? Is the Sheraton worth the extra money? Because there’s no clear answer, we end up perusing the Web sites, comparing pictures, searching for relevant details. We read way too many user reviews. It’s an arduous process, sure to leave us longing for the days of travel agents.

Needless to say, the surfeit of Google results for a Cleveland hotel is only a tiny example of the profound changes unleashed by the information revolution. Here’s a metaphor that has helped me think about the information age. Like all metaphors, it’s an imperfect comparison—it probably underestimates the magnitude of the change we’re living through—but I think it helps us grasp the trade-offs triggered by the digital revolution. Once upon a time, the flow of data was more like the drip of water from a leaky fixture. In fact, the amount of information was so minuscule that most people were thirsty for more; we had excess attentional capacity. But then along came Gutenberg, print culture took off, and, by the middle of the twentieth century, the flow of information was more like the steady stream of water coming out of a kitchen faucet.

Computers changed everything. Starting in the 1980s, the quantity of information began to rise at an exponential pace. If the information in every letter delivered by the U.S. Postal Service in 2010 were added up, the amount would be roughly equal to five petabytes.4 Google processes that much data on a slow afternoon. (According to a recent estimate from IBM data scientists, “90 percent of all the data in the world today has been created in the last two years.”)5 The situation is just as extreme if you look at our personal interactions. According to a study by Martin Hilbert, the quantity of two-way communication done by people daily—that includes phone conversations, e-mail, and text—has gone from the equivalent of two newspaper pages in 1986 to twenty entire newspapers in 2010.6 It’s as if our kitchen faucet were replaced by the highest-pressure fire hose, spraying us in the face with a deluge of data.

The point of the fire hose metaphor is that the presence of more water—a fire hose sprays approximately 125 times more gallons per minute than a kitchen faucet—doesn’t translate into increased drinking capacity. (In fact, it might even lead to less drinking, as we’ll see shortly.) That’s because the human mouth has fixed constraints, and can only open so wide. It doesn’t matter how much water is flowing by our face—we will never be able to gulp more than a few ounces at a time.

The human mind is the same way. When it comes to how much information we can process, the limiting factor is rarely what’s on the screen, for the amount of information on the monitor will almost always exceed the capacity of the mind to take it in. Instead, we are limited by the scarcity of attention, by our inability to focus on more than a few things at the same time.

Herbert Simon, a Nobel Prize–winning psychologist, was one of the first people to understand this. In 1971, back when the information age was just beginning, Simon realized that the growth of information would have massive psychological consequences. “In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes,” he wrote.7 For Simon, what information consumed was “rather obvious: it consumes the attention of its recipients.” Furthermore, because attention is a relatively inelastic property—the mind can notice only so many things at once—“a wealth of information creates a poverty of attention,” forcing people to make difficult choices about what to perceive and think about.

Although Simon’s quote is now a catchphrase—there’s lots of talk about our “attention economy”—he wrote those lines before computers were commonplace. He was worried about information overload before there was e-mail, or Google, or smartphones, or the Apple Watch. Back when newspapers didn’t refresh their headlines every five minutes. In other words, what Simon considered “a wealth of information” would now feel like a data desert. This means, of course, that the relationship between information and attention has only gotten more extreme. It is a behavioral principle—too much information leads to a scarcity of attention—that’s amplified on screens.

The ironic takeaway is that, in the age of information, we are less able than ever before to process information, since our attention is all used up. (To return to our fire hose metaphor: we are drenched in water, but thirstier than ever.) If you want some evidence of our collective attention deficit, consider what happens when you give people a short test to make sure they’re paying attention. Such “attention filters” are a standard protocol of social science conducted on screens, a way to make sure that subjects are actually reading the text and following instructions. (In my research, I exclude subjects who fail the filter.) Here, for instance, is a slightly modified version of the attention filter pioneered by Daniel Oppenheimer and colleagues in a 2009 paper.8 Please read the text and follow the instructions:

Most modern theories of decision making recognize the fact that decisions do not take place in a vacuum. Individual preferences and knowledge, along with situational variables, can greatly impact the decision process. In order to facilitate our research on decision making we are interested in knowing certain factors about you, the decision maker. Specifically, we are interested in whether you actually take the time to read the directions; if not, then some of our questions will be unclear. So, in order to demonstrate that you have read the instructions, please do not answer any of the questions on the next page. You will begin answering questions again on the following page.

When I eat out I like to try the most unusual items the restaurant serves, even if I am not sure I would like them.

__ Agree

__ Somewhat Agree

__ Somewhat Disagree

__ Disagree

It’s a trick question, of course: if you answered it, then you clearly didn’t pay enough attention to the instructions.

So how many people fail the filter? The results are consistently depressing. When Oppenheimer and colleagues gave 480 subjects a version of the attention filter with pen and paper, anywhere between 14 and 28.7 percent of them failed the test, depending on their level of motivation. However, when the scientists gave a similar attention filter to 213 people on a computer screen, they got a much higher failure rate of 46 percent. In one instance, the scientists could replicate the results of classic behavioral science experiments on a computer only if they disregarded those who failed the attention filter. We are a scatterbrained species, especially when thinking on screens.

Let’s pause, for a moment, to consider the implications of this finding. How many surveys have you filled out on a screen? How many questions have you been asked? I’m guessing the answer is a lot. The attention filter data, of course, suggests that a near majority of the responses come from people who aren’t paying attention. Their answers aren’t that useful because they didn’t take the time to read the questions.

And this brings us back to the exorbitant commissions of OTAs. The Web sites realize that we are probably overwhelmed with information and options. They know we don’t want to scroll through a chaotic list of Google results, or take the time to visit each hotel Web site. And so they make it simple for us: they buy up all the ads at the top of the search results. Try it: Search for any hotel in any city and I guarantee that the top paid result will be an OTA. Booking.com, for instance, is one of the largest spenders on Google, paying $7.68 for every click on a “new york hotel” ad.9 Once we click on an OTA link, the goal of the Web site is to rank our surfeit of travel options based on the one or two variables we think are most important. In short, they are sites for people who feel overwhelmed, who don’t know how to cope with the fire hose of travel information, or who don’t have any idea what to do with five million hits after searching for Cleveland airport hotels.

That certainly sounds nice—I often find OTAs quite useful—but it still doesn’t answer the deeper mystery, which is how OTAs get away with taking every fourth night as a commission. To answer that question, one needs to remember a basic lesson of economics, which is that money chases scarcity. That’s why diamonds are more valuable than gold, and gold is more valuable than quartz. It’s why the price of any resource—and it doesn’t matter if the resource is plutonium or crude oil—goes up when demand exceeds supply. Fortunes are made from scarcities, and the richest people are those who notice the scarcities first.

So what’s scarce in the twenty-first century? It’s not hotel rooms in Cleveland. And it’s not information about those rooms. Instead, this surplus of possibilities has created a serious scarcity of attention, just as Herbert Simon predicted. OTAs are successful because they help us deal with this scarcity.

The end result is that, in the age of information, established travel brands like InterContinental and Hilton are stuck. They know they are losing lots of money to Booking.com and Expedia and every other OTA. They realize they can’t afford to forfeit 25 percent of their revenues indefinitely, which is why they insist that things will change, that consumers will begin going to their sites to reserve a room. (Most hotel chains now offer a “Best Rate Guarantee,” assuring customers that they will get the cheapest price if they book directly.) And yet, people continue to rely on OTAs: in late 2013, the market share of OTA hotel bookings underwent a sharp increase, growing at nearly 2.5 times the rate of hotel Web site bookings.10 If current trends continue, I wouldn’t be surprised if OTA commissions exceeded 50 percent in the near future. This would mean, of course, that the majority of a hotel room’s price would go to the digital middleman, and that getting your attention would be more valuable than providing you with an actual place to sleep.

The lesson is simple: human attention has become the sweet crude oil of the twenty-first century. If you can control the levers of human attention, then you can essentially charge whatever you’d like.

THE BOUNDED BRAIN

A few months ago, I walked out of a meeting with a good idea. An unusually good idea. One of the best ideas I’d had in months.

But this story is not about that idea. Instead, it’s about what happened afterward. I’d just left a meeting in New York City and I was headed back to my hotel. I’d traveled this route countless times before: I knew how to get to the subway station, where to wait for the train, and when to get off.

And yet, on this afternoon, I made an elementary mistake. While my mind was occupied with my great new idea—I was taking some notes on my iPhone—I somehow wandered over to the wrong side of the station. Although I needed the downtown train, I found myself on the uptown platform. Mildly embarrassed, I left the uptown platform, paid for another fare, and resumed thinking about my new idea.

Here’s the punchline: I then repeated the exact same mistake and found myself, once again, waiting for a train headed in the wrong direction. Because I was so distracted by my new idea, I was forced to buy three subway tickets for the same ride. Not my finest moment.

This is an ordinary failure of cognition. It happens to most of us many times over the course of a day, as we commit small errors while thinking about bigger things. Maybe you’re preoccupied with thoughts of lunch, so you misread an e-mail. Or maybe you’re paying so much attention to your phone that you walk into a wall, or run a stop sign, or get on the wrong subway train. Because the mind has limited thinking capacity—it can pay attention to only so many things at once—we often fail to notice important details of the world.

One of the first psychologists to investigate these innate mental constraints was George Miller. In an influential paper presented in 1956 at a meeting of the Institute of Radio Engineers at MIT, Miller proposed that the brain was actually a bounded machine, deeply constrained by its short-term memory. The title of a published version of his paper—“The Magical Number Seven, Plus or Minus Two”11—says it all, for he insisted that people can remember only about seven pieces of information (+/–2) at any given time. This is why all the relevant numbers in our life, from license plates to phone numbers, are about the same length. If they were any longer, we wouldn’t be able to remember them.

Miller’s short paper revealed the information processing bottlenecks of the human mind. As the years passed, scientists got better at measuring these bottlenecks, outlining all the ways in which the personal computer inside our head was a bounded device. I wish I could think about big ideas and take notes on my phone and navigate the New York City subway at the same time, but I can’t. And if I want to avoid wasting time and money on extra train tickets in the future, then I should begin by admitting my limitations, knowing that my attention is a more limited resource than I’d like to believe. As we’ll soon see, this same mental limitation has critical implications for all sorts of activities, from driving in the digital age to managing patient care.

Here’s an exercise known as the reading span task, which was invented in a 1980 paper published by Meredyth Daneman and Patricia Carpenter.12 I’m going to present you with a series of sentences drawn from their task that I’d like you to read aloud. Please try to remember the final word of each sentence. I’ve boldfaced the words to make it a little easier.


   • When at last his eyes opened, there was no gleam of triumph, no shade of anger.
   • The taxi turned up Michigan Avenue where they had a clear view of the lake.
   • Great things are not done by impulse, but by a series of small things brought together.
   • One exercise of the mind is the habit of forming clear and precise ideas.
   • After playing notes and more notes, it is simplicity that emerges as the crowning reward of art.
   • His parents couldn’t understand why he wanted a tattoo on his right shoulder.
   • The director was very popular, until the employees heard about his affair.

Are you done? Now please turn the page.

I’d like you to write down the last word of each sentence in the order in which it was read.

How’d you do? It’s a hard test, right? While Miller argued that the capacity of short-term memory was about seven items, the reading span task suggests that it’s actually much lower. According to most studies, we’re able to remember only three or four of the final words when we read the sentences out loud.13 And it’s not just the reading span task: As psychologists came up with new ways of measuring the amount of information we can attend to—this is known as working memory—they discovered that Miller’s magical number was way too optimistic. In an influential series of papers, the psychologist Nelson Cowan argued that the true magical number is actually four (plus or minus one), with most tests of working memory showing that we start to miss crucial information whenever the number of items (letters, words, numbers, colors, whatever) exceeds that amount.14

Praise

"This book is a real eye-opener. It describes how we really function in a screen-based society, and how the digital world is changing our thinking about ourselves and others."
—ROBERT SHILLER, winner of the Nobel Prize in Economics; professor of economics at Yale University; author of Finance and the Good Society

"We know that the way we organize a buffet influences the choices people make. What is much less clear is the way online environments influence our decisions. In this useful book, Benartzi and Lehrer give us precise insights about the relationship between what we see on the screen, what we think, and what we choose."
—DAN ARIELY, professor of psychology and behavioral economics at Duke University; author of Predictably Irrational

"Our lives increasingly depend on the interaction of bounded minds with bound-less information on screens. In this insightful book, Benartzi explores the human experience in the digital world and considers the many ways—social, psychological, ethical, and financial—we might make screens serve us better. A fun, important, and revelatory read!"
—ELDAR SHAFIR, professor of psychology and public affairs at Princeton University; coauthor of Scarcity

"What makes businesses such as Uber and Amazon so disruptive is nothing other than the magical power of the screen. Change the interface and our behavior changes. We like to believe that our preferences exist independently of the medium in which we express them. But as this book so brilliantly explains, this is very far from the truth."
—RORY SUTHERLAND, vice chairman, Ogilvy & Mather UK

"If you think you know how screens influence human behavior, think again. This book is a must-read for investors, business owners, and anyone else with a stake in how people make decisions in the digital age."
—BILL HARRIS, CEO of Personal Capital; former CEO of Intuit and PayPal
