brain drain

Economics is converging with everything these days, from the environment to the grocery store to the bedroom. This time, the playing field is none other than the human brain itself, and the results are less surprising than they are empirically fascinating. Contrary to conventional thinking, it turns out that people won’t always act in their own best interests, and that’s as true for investing and gambling as it is for adultery and employment.

MIND GAMES
by JOHN CASSIDY in the New Yorker

What neuroeconomics tells us about money and the brain.

Like many people who have accumulated some savings, I invest in the stock market. Most of my retirement money is invested in mutual funds, but now and again I also buy individual stocks. My holdings include the oil company Royal Dutch Shell, the drug company GlaxoSmithKline, and the phone company British Telecommunications (BT). I like to think that I picked these stocks because I can discern value where others can’t, but my record hardly backs this up. I invested in BT in 2001, shortly after the Nasdaq crashed, when the stock had already fallen substantially, only to watch it slide another fifty per cent. I should have sold out, but I held on, hoping for a rebound. Five years later, the stock is trading well below the price I paid for it, and I still own it.

I sometimes wonder what goes on in my head when I make stupid investment decisions. A few weeks ago, I had a chance to find out, when I took part in an experiment at New York University’s Center for Brain Imaging, in a building off Washington Square Park. In the lobby, I met Peter Sokol-Hessner, a twenty-four-year-old graduate student, who escorted me to a control room full of computers. Sokol-Hessner is completing a doctorate in psychology, but he is currently working on a research project in the emerging field of neuroeconomics, which uses state-of-the-art imaging technology to explore the neural bases of economic decision-making.

Sokol-Hessner is particularly interested in “loss aversion,” which is what I was suffering from when I refused to sell my BT stock. During the past decade or so, economists have devised a series of experiments to demonstrate just how much we dislike losing money. If you present people with an even chance of winning a hundred and fifty dollars or losing a hundred dollars, most refuse the gamble, even though it is to their advantage to accept it: if you multiply the odds of winning—fifty per cent—times a hundred and fifty dollars, minus the odds of losing—also fifty per cent—times a hundred dollars, you end up with a gain of twenty-five dollars. If you accepted this bet ten times in a row, you could expect to gain two hundred and fifty dollars. But, when people are presented with it once, a prospective return of a hundred and fifty dollars isn’t enough to compensate them for a possible loss of a hundred dollars. In fact, most people won’t accept the gamble unless the winning stake is raised to two hundred dollars.
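To make the arithmetic concrete, here is a minimal Python sketch (an editorial illustration, not part of the article) that computes the expected value of the gamble and then shows how an assumed loss-aversion weight of two on losses reproduces the two-hundred-dollar threshold mentioned above.

```python
# Expected value of a 50/50 gamble: win $150 or lose $100.
p_win, win, loss = 0.5, 150.0, 100.0
expected_value = p_win * win - (1 - p_win) * loss
print(f"Expected value per bet: ${expected_value:.2f}")           # $25.00
print(f"Expected gain over 10 bets: ${10 * expected_value:.2f}")  # $250.00

# A loss-averse chooser weights losses more heavily than gains.
# With an (assumed) loss-aversion coefficient of 2, the bet only stops
# looking bad once the winning stake reaches $200.
lam = 2.0  # illustrative coefficient, roughly what the article implies
for prize in (150, 200):
    subjective = p_win * prize - (1 - p_win) * lam * loss
    print(f"Win ${prize}: subjective value = {subjective:+.2f}")
```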

Why are we so averse to losses, even at the expense of gains? At the Center for Brain Imaging, I removed my belt and shoes and entered a room containing a big metal box, which measured about six feet by six feet by six feet, with a slim gurney protruding from one side. It was a magnetic-resonance-imaging machine, identical to those hospitals use to scan bodies for lesions and tumors. “As blood pumps through the brain, the oxygen it contains causes small changes in the magnetic field,” Sokol-Hessner explained. “The scanner can pick up on that and tell us where the blood is flowing. We get a picture of which parts of the brain are being used.”

I put on earplugs and lay back on the gurney. Sokol-Hessner and two lab assistants placed some foam around my ears and lowered a plastic grille over my face. In one of my hands they placed a metal console with two buttons on it. I felt my head and shoulders sliding into a long, cylindrical hole about a foot and a half wide. “Take a few deep breaths,” Sokol-Hessner said. There was a crashing noise—the sound of the magnet warming up. Struggling to fend off claustrophobia, I closed my eyes and counted to a hundred while the scanner took a picture. “How are you doing?” asked Sokol-Hessner, who had retreated to the control room. “Fine,” I lied.

My task was to consider a series of investment options that were presented on a small illuminated screen over my head. In each case, one of the options would be a fifty-fifty bet and the other would be a sure thing. The first scenario appeared on the screen: a possible gain of four dollars and a possible loss of two dollars versus a sure thing of zero, meaning that I wouldn’t win or lose anything. I had three seconds to make my selection. Two dollars didn’t seem like a lot to lose, so I pressed a button on the console to accept the bet. Somewhere in the next room, a random number generator was deciding whether I had won or lost. Then this message flashed on the screen: “You won $4.00.”

Sokol-Hessner’s thesis advisers are Elizabeth Phelps, a professor of psychology and neural science at N.Y.U., and Colin Camerer, an economist at Caltech who helped found neuroeconomics. This spring, I visited Camerer at his office in Pasadena, California. He is a stocky man of forty-six, with a large, bald head and blue eyes. His office was cluttered with textbooks and academic journals, and on one wall there was a whiteboard covered with equations. It looked like every other economist’s office I’ve visited, except that on Camerer’s desk there was a plastic model of the human brain.

While we were speaking, Camerer picked up the model and gave me a quick tour, starting at the front, with the prefrontal cortex, a structure that helps us perform complicated mental tasks, such as logical reasoning and planning. Then he pointed to the parietal cortex and the temporal lobes, regions that are also involved in deliberative decision-making. All these areas are much larger in humans than in other animals; scientists think that they were the last parts of the brain to evolve.

The model was made of layers of interlocking pieces. Camerer removed a piece from the top layer, exposing the so-called limbic areas beneath, including the insular cortex and the striatum. These structures date to the earliest period of human evolution, and neuroscientists believe that they help us process emotions. Camerer was particularly eager to show me the amygdala, a pair of almond-shaped structures that also play a role in the processing of emotions. “They are in here somewhere,” he said, removing more pieces from the model.

Camerer was a child prodigy. He grew up in Baltimore and entered college at Johns Hopkins at the age of fourteen, majoring in mathematics. He spent a lot of time at a local racetrack, betting on horses, a hobby that got him interested in risk-taking and decision-making. In 1981, when he was twenty-one, he obtained a Ph.D. in economics at the University of Chicago Graduate School of Business. Camerer also found inspiration outside his field.

In 1979, two Israeli psychologists, Daniel Kahneman and Amos Tversky, published a paper in the economics journal Econometrica, describing the concept of loss aversion. At the time, economists and psychologists rarely talked to one another. In the nineteenth century, their fields had been considered closely related branches of the “moral sciences.” But psychology evolved into an empirical discipline, grounded in close observation of human behavior, while economics became increasingly theoretical—in some ways it resembled a branch of mathematics. Many economists regarded psychology with suspicion, but their preference for abstract models of human behavior came at a cost.

In order to depict economic decisions mathematically, economists needed to assume that human behavior is both rational and predictable. They imagined a representative human, Homo economicus, endowed with consistent preferences, stable moods, and an enviable ability to make only rational decisions. This sleight of hand yielded some theories that had genuine predictive value, but economists were obliged to exclude from their analyses many phenomena that didn’t fit the rational-actor framework, such as stock-market bubbles, drug addiction, and compulsive shopping. Economists continue to study Homo economicus, but many recognize his limitations. Over the past twenty-five years, using methods and insights borrowed from psychology, they have devised a new approach to studying decision-making: behavioral economics.

One of Camerer’s mentors, Richard Thaler, was among the first economists to cite Kahneman and Tversky’s work; beginning in 1987, he published a series of influential articles describing various types of apparently irrational behavior, including loss aversion.

Acknowledging that people don’t always behave rationally was an important, if obvious, first step. Explaining why they don’t has proved much harder, and recently Camerer and other behavioral economists have turned to neuroscience for help. By the mid-nineteen-nineties, neuroscientists, using MRI machines and other advanced imaging techniques, had developed a basic understanding of the roles played by different parts of the brain in the performance of particular tasks, such as recognizing visual patterns, doing mental computations, and reacting to threats. In the mid-nineties, Antonio Damasio, a neurologist at the University of Iowa, and Joseph LeDoux, a neuroscientist at N.Y.U., each published a book for lay readers describing how the brain processes emotions. “We were reading the neuroscience, and it just seemed obvious that there were applications to economics, both in terms of ideas and methods,” said George Loewenstein, an economist and psychologist at Carnegie Mellon who read Damasio’s and LeDoux’s books. “The idea that you can look inside the brain and see what is happening is just so intensely exciting.”

In 1997, Loewenstein and Camerer hosted a two-day conference in Pittsburgh, at which a group of neuroscientists and psychologists gave presentations to about twenty economists, some of whom were inspired to do imaging studies of their own. In the past few years, dozens of papers on neuroeconomics have been published, and the field has attracted some of the most talented young economists, including David Laibson, a forty-year-old Harvard professor who is an expert in consumer behavior. “Natural science has moved ahead by studying progressively smaller units,” Laibson told me. “Physicists started out studying the stars, then they looked at objects, molecules, atoms, subatomic particles, and so on. My sense is that economics is going to follow the same path. Forty years ago, it was mainly about large-scale phenomena, like inflation and unemployment. More recently, there has been a lot of focus on individual decision-making. I think the time has now come to go beyond the individual and look at the inputs to individual decision-making. That is what we do in neuroeconomics.”

When people make investments, they weigh the possible outcomes of their decisions and select a portfolio of stocks and bonds that offers the highest possible return at an acceptable level of risk. That is what mainstream economics says, anyway. In fact, people often have only a vague idea of the risks they face. Consider my investment in BT. Back in 2002, there was no way that I could have predicted how much profit the company would make in 2006, let alone in 2010 or 2020. I bought the stock, nonetheless, convinced that it could only increase in value.

As imaging technology gets more sophisticated and easier to use, it may become possible to monitor investors’ brains while they trade stocks at their offices. For now, however, economists are restricted to laboratory experiments, in which they pay volunteers to play simple games designed to imitate situations that people experience in daily life. In one study, Camerer and several colleagues performed brain scans on a group of volunteers while they placed bets on whether the next card drawn from a deck would be red or black. In an initial set of trials, the players were told how many red cards and black cards were in the deck, so that they could calculate the probability of the next card’s being a certain color. Then a second set of trials was held, in which the participants were told only the total number of cards in the deck.

The first scenario corresponds to the theoretical ideal: investors facing a set of known risks. The second setup was more like the real world: the players knew something about what might happen, but not very much. As the researchers expected, the players’ brains reacted to the two scenarios differently. With less information to go on, the players exhibited substantially more activity in the amygdala and in the orbitofrontal cortex, which is believed to modulate activity in the amygdala. “The brain doesn’t like ambiguous situations,” Camerer said to me. “When it can’t figure out what is happening, the amygdala transmits fear to the orbitofrontal cortex.”
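A small sketch of the contrast between the two setups, using made-up deck compositions and an illustrative one-dollar bet: with a known deck the probability of red is a single computable number, while with an ambiguous deck it can only be bounded, and a bettor who guards against the worst case will find the gamble unattractive.

```python
# Risk vs. ambiguity in the card-betting task (deck numbers are made up).
# Known composition: the chance of drawing red can be computed exactly.
reds, blacks = 13, 13
p_red = reds / (reds + blacks)
print(f"Known deck: P(red) = {p_red:.2f}")

# Ambiguous composition: only the total is known, so the chance of red can
# only be bounded; a cautious bettor evaluates the bet at the worst case.
total = 26
p_red_low, p_red_high = 0 / total, total / total
stake = 1.0  # bet a dollar on red, win a dollar if right (illustrative payoff)
worst_case_ev = p_red_low * stake - (1 - p_red_low) * stake
print(f"Ambiguous deck: P(red) between {p_red_low:.2f} and {p_red_high:.2f}, "
      f"worst-case expected value = {worst_case_ev:+.2f}")
```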

The results of the experiment suggested that when people are confronted with ambiguity their emotions can overpower their reasoning, leading them to reject risky propositions. This raises the intriguing possibility that people who are less fearful than others might make better investors, which is precisely what George Loewenstein and four other researchers found when they carried out a series of experiments with a group of patients who had suffered brain damage.

Each of the patients had a lesion in one of three regions of the brain that are central to the processing of emotions: the amygdala, the orbitofrontal cortex, or the right insular cortex. The researchers presented the patients with a series of fifty-fifty gambles, in which they stood to win a dollar-fifty or lose a dollar. This is the type of gamble that people often reject, owing to loss aversion, but the patients with lesions accepted the bets more than eighty per cent of the time, and they ended up making significantly more money than a control group made up of people who had no brain damage. “Clearly, having frontal damage undermines the over-all quality of decision-making,” Loewenstein, Camerer, and Drazen Prelec, a psychologist at M.I.T.’s Sloan School of Management, wrote in the March, 2005, issue of the Journal of Economic Literature. “But there are situations in which frontal damage can result in superior decisions.”

Not long ago, I drove to Princeton University to speak to Jonathan Cohen, a fifty-year-old neuroscientist who is the director of Princeton’s Center for the Study of Brain, Mind, and Behavior. Nine years earlier, while he was teaching at Carnegie Mellon, Cohen attended the conference that Camerer and Loewenstein organized. “I had never taken any economics courses; I had no idea what they did,” he recalled. “I thought it was all about setting interest rates.”

Since then, Cohen has collaborated with economists on several imaging studies. “The key idea in neuroeconomics is that there are multiple systems within the brain,” Cohen said. “Most of the time, these systems coöperate in decision-making, but under some circumstances they compete with one another.”

A good way to illustrate Cohen’s point is to imagine that you and a stranger are sitting on a park bench, when an economist approaches and offers both of you ten dollars. He asks the stranger to suggest how the ten dollars should be divided, and he gives you the right to approve or reject the division. If you accept the stranger’s proposal, the money will be divided between you accordingly; if you refuse it, neither of you gets anything.

How would you react to this situation, which economists refer to as an “ultimatum game,” because one player effectively gives the other an ultimatum? Game theorists say that you should accept any positive offer you receive, even one as low as a dollar, or you will end up with nothing. But most people reject offers of less than three dollars, and some turn down anything less than five dollars.
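A minimal sketch of that contrast, reusing the dollar figures quoted above: a purely game-theoretic responder accepts any positive offer, while the typical responder behaves as if applying a rejection threshold of roughly three dollars.

```python
def rational_responder(offer: float) -> bool:
    """Game theory's prediction: any positive amount beats nothing."""
    return offer > 0

def observed_responder(offer: float, threshold: float = 3.0) -> bool:
    """Behavior the article reports: reject offers below a cutoff."""
    return offer >= threshold

pot = 10.0
for offer in (1.0, 2.0, 4.0, 5.0):
    print(f"Offer ${offer:.0f} of ${pot:.0f}: "
          f"rational accepts={rational_responder(offer)}, "
          f"typical player accepts={observed_responder(offer)}")
```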

Cohen and several colleagues organized a series of ultimatum games in which half the players—the respondents—were put in MRI machines. At the beginning of a round, each respondent was shown a photograph of another player, who would make the respondent an offer. The offer then appeared on a screen inside the MRI machine, and the respondent had twelve seconds in which to accept or reject it. The results were the same as in other, similar experiments—low offers were usually vetoed—but the respondents’ brain scans were revealing.

When respondents received stingy offers—two dollars for them, say, and eight dollars for the other player—they exhibited substantially more activity in the dorsolateral prefrontal cortex, an area associated with reasoning, and in the bilateral anterior insula, part of the limbic region that is active when people are angry or in distress. The more activity there was in the limbic structure, the more likely the person was to reject the offer. To the researchers, it looked as though the two regions of the brain might be competing to decide what to do, with the prefrontal cortex wanting to accept the offer and the insula wanting to reject it. “These findings suggest that when participants reject an unfair offer, it is not the result of a deliberative thought process,” Cohen wrote in a recent article. “Rather, it appears to be the product of a strong (seemingly negative) emotional response.”

Several explanations have been proposed for people’s visceral reaction to unfair offers. Maybe human beings have an intrinsic preference for fairness, and we get angry when that preference is violated—so angry that we punish the other player even at a cost to ourselves. Or perhaps people reject low offers because they don’t want to appear weak. “We evolved in small communities, where there was a lot of repeated interaction with the same people,” Cohen said. “In such an environment, it makes sense to build up a reputation for toughness, because people will treat you better next time they see you.”

Unfortunately, some of the emotional responses that we developed millennia ago no longer serve us well. As Cohen put it, “Does it make sense to play tough with a person you meet on a street in L.A.? No. For one thing, you will probably never see that person again. For another, he may pull out a gun and shoot you.” Obviously, we can’t alter our brain structures, but it may be possible to influence decision-making by tinkering with brain chemistry. Last year, a group of economists led by Ernst Fehr, of the University of Zurich, demonstrated how this might be done, in an experiment involving what economists call “the trust game.”

Trust plays a key role in many economic transactions, from buying a secondhand car to choosing a college. In the simplest version of the trust game, one player gives some money to another player, who invests it on his behalf and then decides how much to return to him and how much to keep. The more the first player invests, the more he stands to gain, but the more he has to trust the second player. If the players trust each other, both will do well. If they don’t, neither will end up with much money.
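To make the payoff structure concrete, here is a sketch of one round of the trust game. The rule that the invested amount triples before the trustee divides it is an assumption borrowed from common laboratory versions of the game, not a detail given in the article.

```python
def trust_game(endowment: float, invested: float, returned_share: float,
               multiplier: float = 3.0):
    """One round: the investor sends money, it grows, the trustee sends some back.

    The tripling multiplier is an assumption taken from standard lab versions
    of the game, not a figure reported in the article.
    """
    grown = invested * multiplier
    back = grown * returned_share
    investor_payoff = endowment - invested + back
    trustee_payoff = grown - back
    return investor_payoff, trustee_payoff

# Full trust and a fair return: both players do well.
print(trust_game(endowment=10, invested=10, returned_share=0.5))  # (15.0, 15.0)
# No trust: the investor keeps the endowment and nobody gains.
print(trust_game(endowment=10, invested=0, returned_share=0.5))   # (10.0, 0.0)
```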

Fehr and his collaborators divided a group of student volunteers into two groups. The members of one group were each given six puffs of the nasal spray Syntocinon, which contains oxytocin, a hormone that the brain produces during breast-feeding, sexual intercourse, and other intimate types of social bonding. The members of the other group were given a placebo spray.

Scientists believe that oxytocin is connected to stress reduction, enhanced sociability, and, possibly, falling in love. The researchers hypothesized that oxytocin would make people more trusting, and their results appear to support this claim. Of the twenty-nine students who were given oxytocin, thirteen invested the maximum money allowed, compared with just six out of twenty-nine in the control group. “That’s a pretty remarkable finding,” Camerer told me. “If you asked most economists how they would produce more trust in a game, they would say change the payoffs or get the participants to play the game repeatedly: those are the standard tools. If you said, ‘Try spraying oxytocin in the nostrils,’ they would say, ‘I don’t know what you’re talking about.’ You’re tricking the brain, and it seems to work.”

Economics has always been concerned with social policy. Adam Smith published “The Wealth of Nations,” in 1776, to counter what he viewed as the dangerous spread of mercantilism; John Maynard Keynes wrote “The General Theory of Employment, Interest, and Money” (1936) in part to provide intellectual support for increased government spending during recessions; Milton Friedman’s “Capitalism and Freedom,” which appeared in 1962, was a free-market manifesto. Today, most economists agree that, left alone, people will act in their own best interest, and that the market will coördinate their actions to produce outcomes beneficial to all.

Neuroeconomics potentially challenges both parts of this argument. If emotional responses often trump reason, there can be no presumption that people act in their own best interest. And if markets reflect the decisions that people make when their limbic structures are particularly active, there is little reason to suppose that market outcomes can’t be improved upon.

Consider saving for retirement. Surveys show that up to half of all families end their working lives with almost no financial assets, other than their entitlement to Social Security benefits. Saving money is difficult, because it involves giving up things that we value now—a new car, a vacation, fancy dinners—in order to secure our welfare in the future. All too often, the desire for immediate gratification prevails. “We humans are very committed to our long-term goals, such as eating healthy food and saving for retirement, and yet, in the moment, temptations arise that often trip up our long-term plans,” David Laibson, the Harvard economist, said. “I was planning to give up smoking, but I couldn’t resist another cigarette. I was planning to be faithful to my wife, but I found myself in an adulterous relationship. I was planning to save for retirement, but I spent all my earnings. Understanding this tendency stands at the heart of a lot of big policy debates.”

Laibson has collaborated with Loewenstein, Cohen, and Samuel McClure, another Princeton psychologist, to examine what happens in people’s brains when they are forced to choose between immediate and delayed rewards. For a study the four researchers published in Science, in 2004, they used an MRI machine to scan a group of student volunteers who were asked to choose between receiving a fifteen-dollar Amazon.com gift voucher today and receiving a twenty-dollar Amazon.com gift voucher in two weeks or a month.

The scans showed that both gift options triggered activity in the lateral prefrontal cortex, but that the immediate option also caused disproportionate activity in the limbic areas. Moreover, the greater the activity in the limbic areas the more likely the students were to choose the voucher that was immediately available and less valuable.

The results provide further evidence that reason and emotion often compete inside the brain, and they also help explain a number of puzzling phenomena, such as the popularity of Christmas savings accounts, which people contribute to throughout the year. “Why would anybody put money into a savings account that offers zero interest and imposes a penalty if you withdraw cash early?” Cohen said. “It simply doesn’t make sense in terms of a traditional, rational economic model. The reason is that there is this limbic system that produces a strong drive. When it sees something it likes, it wants it now. So you need some type of pre-commitment device to make people save.”

Laibson and Brigitte Madrian, an economist at the Wharton School, have studied one such “pre-commitment device” for 401(k) plans, which deduct part of an employee’s earnings each month and invest them in stocks and bonds. Because the plans are often optional, many people fail to join them, even when their employers offer to match a portion of their contributions. Laibson and his colleagues have called for people to be automatically included in the plans unless they choose to opt out. At companies that have adopted such a policy, enrollment rates have increased sharply.

Reforming 401(k) plans is an example of “asymmetric paternalism,” a new political philosophy based on the idea of saving people from the vagaries of their limbic regions. Warning labels on tobacco and potentially harmful foods are similarly intended to keep subcortical structures in check. Neuroeconomists have suggested additional policies, including warning buyers of lottery tickets that their chances of winning are practically nonexistent and imposing mandatory “cooling off” periods before people make big-ticket purchases, such as cars and boats. “Asymmetric paternalism helps those whose rationality is bounded from making a costly mistake and harms more rational folks very little,” Camerer, Loewenstein, and three colleagues wrote in a 2003 issue of the University of Pennsylvania Law Review. “Such policies should appeal to everyone across the political spectrum.”

Some neuroeconomic “findings” aren’t exactly discoveries, of course. In the fourth century B.C., Plato described reason as a charioteer attempting to steer the twin horses of passion and spirit. More recently, Freud wrote about the contest between the ego and the id. “What is new,” Jonathan Cohen wrote in the fall, 2005, issue of the Journal of Economic Perspectives, “is that researchers now have the tools to begin to identify and characterize these systems at the level of their physical implementation in the human brain. Neuroscience gives detailed access to the mechanisms that underlie behavior and thus may allow scientists to answer questions that cannot be answered easily, or at all, by observing behavior alone.”

Many traditional economists are unimpressed by this argument. In a recent paper, “The Case for Mindless Economics,” Faruk Gul and Wolfgang Pesendorfer, two Princeton economists, wrote, “Neuroscience evidence cannot refute economic models because the latter make no assumptions and draw no conclusions about the physiology of the brain.” Gul and Pesendorfer have a point: neuroeconomics doesn’t tell us whether the neo-Keynesian or the neoclassical model of inflation is correct. But it can provide indirect evidence to reinforce certain theories and discredit others. About ten years ago, David Laibson published a paper on “hyperbolic discounting,” which suggested that people treat immediate rewards differently from the way they treat delayed rewards, preferring the former in a manner that simple rational-choice models can’t explain. Now the results of the Amazon voucher experiment have provided a possible explanation for the behavior that Laibson identified: immediate and delayed rewards stimulate different parts of the brain. “The practical implications of the experiment come from obtaining a better understanding of the human taste for instant gratification,” Laibson said. “If we can understand that, we will be in a much better position to design policies that mitigate what can be self-defeating behavior.”
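A sketch of the pattern Laibson's work formalizes, using the quasi-hyperbolic ("beta-delta") form associated with it and illustrative parameter values rather than estimates from any study: an exponential discounter treats delay consistently, while a beta-delta discounter penalizes any delay from the present so heavily that the smaller, immediate voucher can win.

```python
# Quasi-hyperbolic ("beta-delta") vs. exponential discounting.
# BETA and DELTA are illustrative values, not estimates from the study.
BETA, DELTA = 0.7, 0.999  # per-day parameters

def exponential(value: float, delay_days: int) -> float:
    return value * DELTA ** delay_days

def quasi_hyperbolic(value: float, delay_days: int) -> float:
    if delay_days == 0:
        return value
    return BETA * value * DELTA ** delay_days

small_now, large_later, delay = 15.0, 20.0, 14  # $15 today vs. $20 in two weeks
for name, disc in [("exponential", exponential), ("quasi-hyperbolic", quasi_hyperbolic)]:
    now_val = disc(small_now, 0)
    later_val = disc(large_later, delay)
    choice = "take $15 now" if now_val > later_val else "wait for $20"
    print(f"{name:>17}: {now_val:.2f} vs {later_val:.2f} -> {choice}")
```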

The biggest challenge facing neuroeconomics comes not from its opponents in the economics profession but from its supposed allies in neuroscience. Many neuroscientists now consider MRI data to be uninformative. Neural activity occurs on a timescale of milliseconds and at a spatial scale of perhaps 0.1 millimetres. A typical MRI machine, which measures neural firing indirectly, by tracking blood flow, takes a picture every couple of seconds and isn’t able to detect anything less than three millimetres long. Because of these limitations, neuroscientists prefer to track the firing of single neurons by inserting tiny electrodes into the brain. Unfortunately, this is an invasive procedure, and its experimental use has generally been restricted to laboratory animals.

There is also a more fundamental objection to neuroeconomics and the Platonic view of decision-making. “There is no evidence that hidden inside the brain are two fully independent systems, one rational and one irrational,” Paul W. Glimcher, a neuroscientist who is the director of N.Y.U.’s Center for Neuroeconomics, and two of his colleagues, Michael C. Dorris and Hannah M. Bayer, wrote in a recent paper. “There is, for example, no evidence that there is an emotional system, per se, and a rational system, per se, for decision making at the neurobiological level.”

In place of the reason-versus-passion model, Glimcher and his colleagues have adopted a view of decision-making that, paradoxically, bears a striking resemblance to orthodox economics. In one experiment, Glimcher and a colleague trained thirsty monkeys to direct their eyes to one of two illuminated targets, which earned them differing chances of getting juice rewards—a fifty-per-cent chance of getting a full cup of juice for looking right, say, versus a seventy-per-cent chance of getting half a cup of juice for looking left. The game was repeated many times, with the probabilities changing periodically.

The monkeys’ task was to consume as much juice as possible, and they proved very adept at it. Before long, they were dividing their time between the illuminated targets in a way that roughly maximized their payoffs. When the odds favored looking right, they looked right; when the odds favored looking left, they looked left. Glimcher also used electrodes to track neural firing in part of the posterior parietal cortex, an area that is thought to organize signals transmitted by the retina. He discovered that the firing rate was closely related to the rewards the monkeys were likely to receive. “Specifically,” he and his colleagues reported, “the firing rate of a neuron associated with a leftward movement was a linear function of the probability that the leftward movement would yield the juice reward.”

Clearly, monkeys can’t do probability sums. (Many humans struggle with them!) But Glimcher’s experiment implies that their brains act as if they were solving a mathematical problem, which is what economists assume when they depict people as rational agents trying to maximize their well-being, or “utility.” “What seems to be emerging from these early studies is a remarkably economic view of the primate brain,” Glimcher and his colleagues wrote. “The final stages of decision-making seem to reflect something very much like a utility calculation.”
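The "utility calculation" can be written out directly with the juice odds quoted above; the sketch below simply computes the expected amount of juice from each target and notes which one a pure maximizer would favor at those odds.

```python
# Expected juice from each target, using the figures quoted in the article.
options = {
    "look right": {"p": 0.50, "reward_cups": 1.0},  # 50% chance of a full cup
    "look left":  {"p": 0.70, "reward_cups": 0.5},  # 70% chance of half a cup
}

for name, o in options.items():
    ev = o["p"] * o["reward_cups"]
    print(f"{name}: expected juice = {ev:.2f} cups")

best = max(options, key=lambda k: options[k]["p"] * options[k]["reward_cups"])
print(f"At these odds, a juice-maximizing chooser should favor: {best}")
```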

If Glimcher’s results could be demonstrated in human brains, they might undermine a lot of neuroeconomics, and many in the field tend to downplay his work. “Well, monkeys are very interesting, but they are not nearly as rich in their behavior as humans,” George Loewenstein said to me. “Humans have this very well-developed prefrontal cortex, which allows us to look ahead a number of stages, rather than just behaving in a reflexive fashion. Still, it’s wonderful that we have these controversies. Most of us are friends, and we debate these issues. I’ve learned a lot from talking to Paul.”

I was inside the MRI machine for nearly two hours, and I answered more than two hundred and fifty questions, which were organized into two blocks. Sokol-Hessner had instructed me to answer the first set as if each investment were the only one I would make. He told me to treat the second set of gambles as a group, as if I were constructing an investment portfolio. Later, he explained that he wanted to compare my answers to the two blocks of questions. Many people become less loss-averse when they are constructing a portfolio of investments, presumably because they believe that losses in one part of their portfolio will be made up for by gains in others. “Our research has shown that people can alter their own choice behavior in a systematic fashion,” Sokol-Hessner said. “They can make themselves less loss-averse. If loss aversion is mediated by the limbic structures, such as the amygdala, we would expect a big decrease in activity in those areas when you become less loss-averse.”
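A rough simulation of why the portfolio framing can blunt loss aversion: a single favorable fifty-fifty gamble loses money half the time, but a bundle of many independent ones almost never does. The win-$150/lose-$100 payoffs reuse the earlier example; the simulation itself is an editorial illustration, not part of Sokol-Hessner's study.

```python
import random

random.seed(0)

def chance_of_net_loss(n_gambles: int, trials: int = 20_000) -> float:
    """Fraction of trials in which a bundle of n favorable gambles loses money."""
    losing = 0
    for _ in range(trials):
        total = sum(150 if random.random() < 0.5 else -100 for _ in range(n_gambles))
        losing += total < 0
    return losing / trials

for n in (1, 10, 50):
    print(f"{n:>2} gamble(s): chance the bundle ends in a net loss = "
          f"{chance_of_net_loss(n):.1%}")
```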

The goal of the imaging experiment was to test this hypothesis. I was only the second person to take part in the experiment, but Sokol-Hessner told me that I was an atypical case. Rather than altering my strategy, I answered all the questions in the same way. Whenever the risk-free option was worth more than about five dollars, I accepted it, thinking that I would have been foolish to turn down a sure thing. Occasionally, when the risk-free option was zero, or close to zero, I gambled on the risky option. I’m not sure why I acted in this way—it wasn’t strictly logical—but it made answering the questions easy, and it seemed to pay off: by the end of the experiment, I had won sixty-eight dollars.

My experience illustrated some of the drawbacks of brain scanning. After about an hour inside the machine, I was more concerned about getting out than I was about making a few dollars. (Sokol-Hessner said that I moved my head around so much that my brain scans were unusable.) “That’s the terrible thing about MRIs,” Sokol-Hessner conceded. “You are in a long tube, and you might well feel tired or claustrophobic. There’s definitely other stuff going on in there besides the experiment. We have to be very careful about how we interpret the evidence.”

Economists who have staked their careers on neuroeconomics are mindful of this advice. “It isn’t a wholesale rejection of the traditional methodology,” David Laibson said of his field. “It is just a recognition that decision-making is not always perfect. People try to do the best they can, but they sometimes make mistakes. The idea that a single mechanism maximizes welfare and always gets things right—that concept is on the rocks. But models that I call ‘cousins’ of the rational-actor model will survive.”

The modified theories to which Laibson referred assume that people have two warring sides: the first deliberative and forward-looking, the second impulsive and myopic. Under certain circumstances, the impulsive side prevails, and people succumb to things like drug addiction, overeating, and taking wild gambles in the stock market. For now, the new models await empirical verification, but neuroeconomists are convinced that they’re onto something. “We are not going to falsify all of traditional economics,” Colin Camerer said. “But we are going to point to a whole range of biological variables that traditionally have not been included in the analysis. In economics, that is a big change.”