Daniel Kahneman
Thinking, Fast and Slow
(2011)
p. 8--"the availability heuristic"
--"students of policy have noted that the availability heuristic helps explain why some issues are highly salient in the public's mind while others are neglected. People tend to assess the relative importance of issues by the ease with which they are retrieved from memory..."
Relates to my Henry Threadgill Problem.
p. 12--"the affect heuristic, where judgments and decisions are guided directly by feelings of liking and disliking, with little deliberation or reasoning."
p. 25--System 1 "cannot be turned off. If you are shown a word on the screen in a language you know, you will read it—unless your attention is totally focused elsewhere."
p. 27--"Now that you have measured the lines, you—your System 2, the conscious being you call "I"—have a new belief: you know that the lines are equally long. If asked about their length, you will say what you know. But you will still see the bottom line as longer. You have chosen to believe the measurement, but you cannot prevent System 1 from doing its thing; you cannot decide to see the lines as equal, although you know they are. To resist the illusion, there is only one thing you can do: you must learn to mistrust your impressions of the length of lines when fins are attached to them. To implement that rule, you must be able to recognize the illusory pattern and recall what you know about it. If you can do this, you will never again be fooled by the Müller-Lyer illusion. But you will still see one line as longer than the other."
p. 28--"The question that is most often asked about cognitive illusions is whether they can be overcome. The message of these examples is not encouraging. Because System 1 operates automatically and cannot be turned off at will, errors of intuitive thought are often difficult to prevent. Biases cannot always be avoided, because System 2 may have no clue to the error. Even when clues to likely errors are available, errors can be prevented only by the enhanced monitoring and effortful activity of System 2. As a way to live your life, however, constant vigilance is not necessarily good, and it is certainly impractical. Constantly questioning our own thinking would be impossibly tedious, and System 2 is much too slow and inefficient to serve as a substitute for System 1 in making routine decisions. The best we can do is a compromise: learn to recognize situations in which mistakes are likely and try harder to avoid mistakes when the stakes are high. The premise of this book is that it is easier to recognize other people's mistakes than our own."
p. 35--"As you become skilled in a task, its demand for energy diminishes. Studies of the brain have shown that the pattern of activity associated with an action changes as skill increases, with fewer brain regions involved. Talent has similar effects. Highly intelligent individuals need less effort to solve the same problems, as indicated by both pupil size and brain activity. A general "law of least effort" applies to cognitive as well as physical exertion. The law asserts that if there are several ways of achieving the same goal, people will eventually gravitate to the least demanding course of action. In the economy of action, effort is a cost, and the acquisition of skill is driven by the balance of benefits and costs. Laziness is built deep into our nature."
p. 37--"The most effortful forms of slow thinking are those that require you to think fast."
p. 41--"Baumeister's group has repeatedly found that an effort of will or self-control is tiring; if you have had to force yourself to do something, you are less willing or less able to exert self-control when the next challenge comes [42] around. The phenomenon has been named ego depletion. In a typical demonstration, participants who are instructed to stifle their emotional reaction to an emotionally charged film will later perform poorly on a test of physical stamina...
[42]"The list of situations and tasks that are now known to deplete self-control is long and varied. All involve conflict and the need to suppress a natural tendency. They include:
avoiding the thought of white bears
inhibiting the emotional response to a stirring film
making a series of choices that involve conflict
trying to impress others
responding kindly to a partner's bad behavior
interacting with a person of a different race (for prejudiced individuals)
The list of indications of depletion is also highly diverse:
deviating from one's diet
overspending on impulsive purchases
reacting aggressively to provocation
persisting less time in a handgrip task
performing poorly in cognitive tasks and logical decision making"
p. 54--"Those who nodded (a yes gesture) tended to accept the message they heard, but those who shook their head tended to reject it. Again, there was no awareness, just a habitual connection between an attitude of rejection or acceptance and its common physical expression. You can see why the common admonition to "act calm and kind regardless of how you feel" is very good advice: you are likely to be rewarded by actually feeling calm and kind."
p. 55--"The general theme of these findings is that the idea of money primes [56] individualism: a reluctance to be involved with others, to depend on others, or to accept demands from others. ...her [Kathleen Vohs'] findings suggest that living in a culture that surrounds us with reminders of money may shape our behavior and our attitudes in ways that we do not know about and of which we may not be proud. Some cultures provide frequent reminders of respect, others constantly remind their members of God, and some societies prime obedience by large images of the Dear Leader. ...
[56]"The evidence of priming studies suggests that reminding people of their mortality increases the appeal of authoritarian ideas, which may become reassuring in the context of the terror of death. Other experiments have confirmed Freudian insights about the role of symbols and metaphors in unconscious associations."
pp. 60-61--Larry Jacoby on 'Becoming Famous Overnight'
--"Jacoby nicely stated the problem: "The experience of familiarity has a simple but powerful quality of 'pastness' that seems to indicate that it is a direct reflection of prior experience." This quality of pastness is an illusion. The truth is, as Jacoby and many followers have shown, that the name [previously encountered] will look familiar when you see it because you will see it more clearly. Words that you have seen before become easier to see again—you can identify them better than other words when they are shown very briefly or masked by noise, and you will be quicker (by a few hundredths of a second) to read them than to read other words. In short, you experience greater cognitive ease in perceiving a word you have seen earlier, and it is this sense of ease that gives you the impression of familiarity." (61)
p. 62--"...predictable illusions inevitably occur if a judgment is based on an impression of cognitive ease or strain. Anything that makes it easier for the associative machine to run smoothly will also bias beliefs. A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth. Authoritarian institutions and marketers have always known this fact. But it was psychologists who discovered that you do not have to repeat the entire statement of a fact or idea to make it appear true."
p. 69--"These findings add to the growing evidence that good mood, intuition, creativity, gullibility, and increased reliance on System 1 form a cluster. At the other pole, sadness, vigilance, suspicion, an analytic approach, and increased effort also go together. A happy mood loosens the control of System 2 over performance: when in a good mood, people become more intuitive and more creative but also less vigilant and more prone to logical errors. Here again, as in the mere exposure effect, the connection makes biological sense. A good mood is a signal that things are generally going well, the environment is safe, and it is alright to let one's guard down. A bad mood indicates that things are not going very well, there may be a threat, and vigilance is required. Cognitive ease is both a cause and a consequence of a pleasant feeling."
p. 72--"The main function of System 1 is to maintain and update a model of your personal world, which represents what is normal in it. The model is constructed by associations that link ideas of circumstances, events, actions, and outcomes that co-occur with some regularity, either at the same time or within a relatively short interval. As these links are formed and strengthened, the pattern of associated ideas comes to represent the structure of events in your life, and it determines your interpretation of the present as well as your expectations of the future."
p. 74--"Studies of brain responses have shown that violations of normality are detected with astonishing speed and subtlety."
p. 81--"there is evidence that people are more likely to be influenced by empty persuasive messages, such as commercials, when they are tired and depleted."
p. 84--"To derive the most useful information from multiple sources of evidence, you should always try to make these sources independent of each other. This rule is part of good police procedure. When there are multiple witnesses to an event, they are not allowed to discuss it before giving their testimony. [unless of course they themselves are cops and their union has negotiated this privilege for them] The goal is not only to prevent collusion by hostile witnesses, it is also to prevent unbiased witnesses from influencing each other. Witnesses who exchange their experiences will tend to make similar errors in [85] their testimony, reducing the total value of the information they provide. Eliminating redundancy from your sources of information is always a good idea.
[85]"The principle of independent judgments (and decorrelated errors) has immediate applications for the conduct of meetings, an activity in which executives in organizations spend a great deal of their working days. A simple rule can help: before an issue is discussed, all members of the committee should be asked to write a very brief summary of their position. This procedure makes good use of the value of diversity of knowledge and opinion in the group. The standard practice of open discussion gives too much weight to the opinions of those who speak early and assertively, causing others to line up behind them."
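A toy simulation to make the "decorrelated errors" point concrete. The noise model and numbers are mine, not DK's: each judge's estimate is the true value plus error, and a single knob slides between fully independent errors (witnesses kept apart) and a fully shared error (witnesses who compared notes before testifying).

```python
import random

random.seed(0)

TRUE_VALUE = 100.0
N_JUDGES = 8
TRIALS = 10_000

def trial(shared_weight):
    """Average of N_JUDGES noisy estimates of TRUE_VALUE.

    shared_weight = 0 -> fully independent errors;
    shared_weight = 1 -> everyone repeats one shared error (full herding).
    """
    shared = random.gauss(0, 10)
    estimates = [
        TRUE_VALUE
        + shared_weight * shared
        + (1 - shared_weight) * random.gauss(0, 10)
        for _ in range(N_JUDGES)
    ]
    return sum(estimates) / N_JUDGES

def rms_error(shared_weight):
    """Root-mean-square error of the group average over many trials."""
    sq = [(trial(shared_weight) - TRUE_VALUE) ** 2 for _ in range(TRIALS)]
    return (sum(sq) / TRIALS) ** 0.5

independent = rms_error(0.0)   # decorrelated errors: averaging helps
herded = rms_error(1.0)        # correlated errors: averaging buys nothing
```

With independent errors the group average's error shrinks roughly as 1/sqrt(N); with a shared error it doesn't shrink at all, which is exactly why the witnesses (and the committee members writing their positions down first) must not confer.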
p. 87--"It is the consistency of the information that matters for a good story, not its completeness. Indeed, you will often find that knowing a little makes it easier to fit everything you know into a coherent pattern."
p. 91--"[Alex] Todorov has found that people judge competence by combining the two dimensions of strength and trustworthiness. The faces that exude competence combine a strong chin with a slight confident-appearing smile. There is no evidence that these facial features actually predict how well politicians will perform in office. But studies of the brain's response to winning and losing candidates show that we are biologically predisposed to reject candidates who lack the attributes we value...
"Political scientists followed up on Todorov's initial research by identifying a category of voters for whom the automatic preferences of System 1 are particularly likely to play a large role. They found what they were looking for among politically uninformed voters who watch a great deal of television. As expected, the effect of facial competence on voting is about three times larger for information-poor and TV-prone voters than for others who are better informed and watch less television."
p. 120--"Any number that you are asked to consider as a possible solution to an estimation problem will produce an anchoring effect."
--"The adjustment [away from the anchor] typically ends prematurely, because people stop when they are no longer certain they should move farther."
p. 121--"Nick Epley and Tom Gilovich found evidence that adjustment is a deliberate attempt to find reasons to move away from the anchor: people who are instructed to shake their head when they hear the anchor, as if they rejected it, move farther from the anchor, and people who nod their head show enhanced anchoring. Epley and Gilovich also confirmed that adjustment is an effortful operation. People adjust less (stay closer to the anchor) when their mental resources are depleted..."
p. 124--study of real estate agents' assessments: "They insisted that the listing price had no effect on their responses, but they were wrong: the anchoring effect was 41%. Indeed, the professionals were almost as susceptible to anchoring effects as business school students with no real-estate experience, whose anchoring index was 48%. The only difference between the two groups was that the students conceded that they were influenced by the anchor, while the professionals denied that influence."
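The "anchoring index" here is, as I understand it, the movement in mean estimates divided by the difference between the anchors, as a percentage. A tiny sketch with invented listing prices, chosen purely to reproduce a 41% index (the dollar figures are not from the book):

```python
def anchoring_index(low_anchor, high_anchor, mean_est_low, mean_est_high):
    """How far estimates move per unit of anchor, as a percentage.

    0%   -> the anchor was ignored entirely;
    100% -> estimates tracked the anchor one-for-one.
    """
    return 100 * (mean_est_high - mean_est_low) / (high_anchor - low_anchor)

# Hypothetical listing prices and mean appraisals (illustrative numbers only):
idx = anchoring_index(119_000, 149_000, 111_000, 123_300)
# (123_300 - 111_000) / (149_000 - 119_000) = 12_300 / 30_000 -> 41%
```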
p. 125--"a key finding of anchoring research is that anchors that are obviously random can be just as effective as potentially informative anchors."
p. 126--re: anchoring effects: "And of course there are quite a few people who are willing and able to exploit our gullibility." e.g. "arbitrary rationing" at grocery stores, negotiation over home prices
p. 130--"The availability heuristic, like other heuristics of judgment, substitutes one question for another: you wish to estimate the size of a category or the frequency of an event, but you report an impression of the ease with which instances come to mind. Substitution of questions inevitably produces systematic errors. You can discover how the heuristic leads to biases by following a simple procedure: list factors other than frequency that make it easy to come up with instances. Each factor in your list will be a potential source of bias. Here are some examples:
•A salient event that attracts your attention will be easily retrieved from memory. [e.g. the salience of celebrity sex scandals makes it seem like celebrities have more sex scandals than average] ...
•A dramatic event temporarily increases the availability of its category. [e.g. plane crash => fear of flying] ...
•Personal experiences, pictures, and vivid examples are more available than incidents that happened to others, or mere words, or statistics. A judicial error that affects you will undermine your faith in the justice system more than a similar incident you read about in a newspaper."
This last, vis-a-vis "pictures" vs. "mere words," is very relevant to the observations of Mumford, McLuhan, Debord, Postman, Schickel...
p. 131--research of Norbert Schwarz et al: "How will people's impressions of the frequency of a category be affected by a requirement to list a specified number of instances?
...
"The request to list twelve [as against six] instances [of times subjects had behaved assertively in life] pits the two determinants against each other. On the one hand, you have just retrieved an impressive number of cases where you were assertive. On the other hand, while the first three or four instances of your own assertiveness probably came easily to you, you almost certainly struggled to come up with the last few to complete a set of twelve; fluency was low. Which will count more—the amount retrieved or the ease and fluency of the retrieval?
"The context yielded a clear-cut winner: people who had just listed twelve instances rated themselves as less assertive than people who had listed only six. ... Self-ratings were dominated by the ease with which examples had come to mind."
p. 133--"The results suggest that the participants make an inference: if I am having so much more trouble than expected coming up with instances of my assertiveness, then I can't be very assertive. Note that this inference rests on a surprise—fluency being worse than expected. The availability heuristic that the subjects apply is better described as an "unexpected unavailability" heuristic.
"Schwarz and his colleagues reasoned that they could disrupt the heuristic by providing the subjects with an explanation for the fluency of [134] retrieval that they experienced. ... [Indeed] As predicted, participants whose experience of fluency was "explained" [by being told that the background music would affect their performance] did not use it as a heuristic..."
p. 135--"The conclusion is that the ease with which instances come to mind is a System 1 heuristic, which is replaced by a focus on content when System 2 is more engaged. Multiple lines of evidence converge on the conclusion that people who let themselves be guided by System 1 are more strongly susceptible to availability biases than others who are in a higher state of vigilance. The following are some conditions in which people "go with the flow" and are affected more strongly by ease of retrieval than by the content they retrieved:
•when they are engaged in another effortful task at the same time
•when they are in a good mood because they just thought of a happy episode in their life
•if they score low on a depression scale
•if they are knowledgeable novices on the topic of the task, in contrast to true experts
•when they score high on a scale of faith in intuition
•if they are or are made to feel powerful"
p. 138--"The world in our heads is not a precise replica of reality; our expectations about the frequency of events are distorted by the prevalence and emotional intensity of the messages to which we are exposed."
p. 140--"Differences between experts and the public are explained in part by biases in lay judgments, but [Paul] Slovic draws attention to situations in which the differences reflect a genuine conflict of values. He points out that experts often measure risks by the number of lives (or life-years) lost, while the public draws finer distinctions, for example between "good deaths" and "bad deaths," or between random accidental fatalities and deaths that occur in the course of voluntary activities such as skiing. These legitimate distinctions are often ignored in statistics that merely count cases. Slovic argues from such observations that the public has a richer conception of risk than the experts do. Consequently, he strongly resists the view that the experts should rule, and that their opinions should be accepted without question when they conflict with the opinions and wishes of other citizens. When experts and the public disagree on their priorities, he says, "Each side must respect the insight and intelligence of the other."
This points the way toward the discussion I've been dying to have since COVID hit. No one right now seems to be acknowledging, or not in terms plain enough to capture what is really important about the question, that a "genuine conflict of values" in fact underlies much of the dissent about lockdown measures. That said, are we quite so sure that The People's "richness" of conception gets us any further than does a conceptually narrower but better-informed expert tyranny?
p. 141--Slovic challenges "the idea that risk is objective," "lists nine ways of defining the mortality risk associated with the release of a toxic material into the air"
--"His point is that the evaluation of the risk depends on the choice of a measure—with the obvious possibility that the choice may have been guided by a preference for one outcome or another. He goes on to conclude that "defining risk is thus an exercise in power."
--contrast to Cass Sunstein, who "defends the role of experts as a bulwark against "populist" excesses" and who "has not been persuaded... [142] that risk and its measurement is subjective. ... Lawmakers and regulators may be overly responsive to the irrational concerns of citizens, both because of political sensitivity and because they are prone to the same cognitive biases as other citizens."
p. 150--"People who are asked to assess probability are not stumped [i.e. in defining what this means], because they do not try to judge probability as statisticians and philosophers use the word. A question about probability or likelihood activates a mental shotgun, evoking answers to easier questions. One of the easy answers is an automatic assessment of representativeness—routine in understanding language. The (false) statement that "Elvis Presley's parents wanted him to be a dentist" is mildly funny because the discrepancy between the images of Presley and a dentist is detected automatically. System 1 generates an impression of similarity without intending to do so. The representativeness heuristic is involved when someone says "She will win the election; you can see she is a winner" or "He won't go far as an academic; too many tattoos." We rely on representativeness when we judge the potential leadership of a candidate for office by the shape of his chin or the forcefulness of his speeches."
p. 153--"You surely understand in principle that worthless information should not be treated differently from a complete lack of information, but WYSIATI makes it very difficult to apply that principle. ... There is one thing you can do when you have doubts about the quality of the evidence: let your judgments of probability stay close to the base rate. Don't expect this exercise of discipline to be easy—it requires a significant effort of self-monitoring and self-control."
p. 154--"There are two ideas to keep in mind about Bayesian reasoning and how we tend to mess it up. The first is that base rates matter, even in the presence of evidence about the case at hand. This is often not intuitively obvious. The second is that intuitive impressions of the diagnosticity of evidence are often exaggerated. The combination of WYSIATI and associative coherence tends to make us believe in the stories we spin for ourselves. The essential keys to disciplined Bayesian reasoning can be simply summarized:
•Anchor your judgment of the probability of an outcome on a plausible base rate.
•Question the diagnosticity of your evidence."
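The two keys above can be run as arithmetic via the odds form of Bayes' rule: posterior odds = prior odds × likelihood ratio. A minimal sketch; the specific figures (3% base rate, 4:1 diagnosticity) are mine, purely illustrative:

```python
def bayes_update(base_rate, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability via the odds form of Bayes' rule:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = base_rate / (1 - base_rate)
    likelihood_ratio = p_evidence_if_true / p_evidence_if_false
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# 1. Anchor on a plausible base rate: say 3% of the population fits.
# 2. Question diagnosticity: the stereotype-fitting evidence is perhaps
#    4x more likely for true members than for non-members -- not 40x.
p = bayes_update(0.03, 0.40, 0.10)
# prior odds ~0.031, likelihood ratio 4 -> posterior ~0.11: still unlikely.
```

The point of the discipline: even four-to-one evidence, applied to a 3% base rate, leaves the hypothesis at around 11%, nowhere near the near-certainty the vivid story suggests.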
p. 158--"conjunction fallacy"="when [people] judge a conjunction of two events...to be more probable than one of the events...in a direct comparison."
p. 160--"a trap for forecasters and their clients: adding detail to scenarios makes them more persuasive, but less likely to come true."
p. 164--"The Linda problem ["bank teller" vs. "feminist bank teller"] attracted a great deal of attention, but it also became a magnet for critics of our approach to judgment. As we had already done, researchers found combinations of instructions and hints that reduced the incidence of the fallacy; some argued that, in the context of the Linda problem, it is reasonable for subjects to understand the word "probability" [165] as if it means "plausibility." These arguments were sometimes extended to suggest that our entire enterprise was misguided: if one salient cognitive illusion could be weakened or explained away, others could be as well. This reasoning neglects the unique feature of the conjunction fallacy as a case of conflict between intuition and logic. The evidence that we had built up for heuristics from between-subjects experiments (including studies of Linda) was not challenged—it was simply not addressed, and its salience was diminished by the exclusive focus on the conjunction fallacy. The net effect of the Linda problem was an increase in the visibility of our work to the general public, and a small dent in the credibility of our approach among scholars in the field. This was not at all what we had expected.
"If you visit a courtroom you will observe that lawyers apply two styles of criticism: to demolish a case they raise doubts about the strongest arguments that favor it; to discredit a witness, they focus on the weakest part of the testimony. The focus on weaknesses is also normal in political debates. I do not believe it is appropriate in scientific controversies, but I have come to accept as a fact of life that the norms of debate in the social sciences do not prohibit the political style of argument, especially when large issues are at stake—and the prevalence of bias in human judgment is a large issue."
p. 168--"Stereotyping is a bad word in our culture, but in my usage it is neutral. One of the basic characteristics of System 1 is that it represents categories as norms and prototypical exemplars. This is how we think of horses, refrigerators, and New York police officers; we hold in memory a representation of one or more "normal" members of each of these categories. When the categories are social, these representations are called stereotypes. Some stereotypes are perniciously wrong, and hostile stereotyping can have dreadful [169] consequences, but the psychological facts cannot be avoided: stereotypes, both correct and false, are how we think of categories.
[169]"You may note the irony. In the context of the cab problem [percentage of accidents caused by blue or green cabs and the likelihood that any given accident was caused by either], the neglect of base-rate information is a cognitive flaw, a failure of Bayesian reasoning, and the reliance on causal base rates is desirable. Stereotyping the Green drivers improves the accuracy of judgment. In other contexts, however, such as hiring and profiling, there is a strong social norm against stereotyping, which is also embedded in the law. This is as it should be. In sensitive social contexts, we do not want to draw possibly erroneous conclusions about the individual from the statistics of the group. We consider it morally desirable for base rates to be treated as statistical facts about the group rather than as presumptive facts about individuals. In other words, we reject causal base rates.
"The social norm against stereotyping, including the opposition to profiling, has been highly beneficial in creating a more civilized and more equal society. It is useful to remember, however, that neglecting valid stereotypes inevitably results in suboptimal judgments. Resistance to stereotyping is a laudable moral position, but the simplistic idea that resistance is costless is wrong. The costs are worth paying to achieve a better society, but denying that the costs exist, while satisfying to the soul and politically correct, is not scientifically defensible. Reliance on the affect heuristic is common in politically charged arguments. The positions we favor here have no cost and those we oppose have no benefits. We should be able to do better."
This is unsatisfying and feels a bit contrived. The notion of "costs" is introduced in order to compartmentalize bird's-eye-view morality from microsocial morality; and then in the same breath the micro and the macro are collided back together in the notion of net gain or loss to society. If by "cost" per se DK means merely "the springing of System 2 into action," then I guess I don't really understand why this passage was necessary. I'm not sure how the imperative to gather enough information to support (or refute) one's diagnosis of another person comes out simply as a "cost." Usually it benefits you too, because you stand a better chance of finding what you're looking for if you actually LOOK for it. Here again, as it seems DK would elsewhere agree, the problem is less one of ability and more one of thinking we know more/better than we actually do in any given singular situation.
p. 182--"the statistician David Freedman used to say that if the topic of regression [to the mean] comes up in a criminal or civil trial, the side that must explain regression to the jury will lose the case."
--"regression to the mean has an explanation but does not have a cause."
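"An explanation but not a cause" can be shown by brute force. A toy simulation (model and numbers are mine): each score is stable skill plus one-off luck, and the top performers' second scores drop on average with no causal story at all, because the shared luck simply isn't repeated.

```python
import random

random.seed(1)

def two_scores():
    """Two test scores: score = stable skill + fresh luck each time."""
    skill = random.gauss(0, 1)
    return skill + random.gauss(0, 1), skill + random.gauss(0, 1)

pairs = [two_scores() for _ in range(50_000)]

# Among the best first-round performers, the second round is worse on
# average -- nothing "happened" to them; selection on an extreme score
# selects for good luck, and luck does not persist.
top = [second for first, second in pairs if first > 2.0]
mean_second = sum(top) / len(top)
```

With skill and luck of equal variance, the expected second score of the selected group is about half their first: still above average, but regressed toward the mean.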
p. 191--"The biases we find in predictions that are expressed on a scale, such as GPA or the revenue of a firm, are similar to the biases observed in judging the probabilities of outcomes.
"The corrective procedures are also similar:
•Both contain a baseline prediction, which you would make if you knew nothing about the case at hand. In the categorical case, it was the base rate. In the numerical case, it is the average outcome in the relevant category.
•Both contain an intuitive prediction, which expresses the number that comes to your mind, whether it is a probability or a GPA.
•In both cases, you aim for a prediction that is intermediate between the baseline and your intuitive response.
•In the default case of no useful evidence, you stay with the baseline.
•At the other extreme, you also stay with your initial prediction. This will happen, of course, only if you remain completely confident in [192] your initial prediction after a critical review of the evidence that supports it.
•In most cases you will find some reason to doubt that the correlation between your intuitive judgment and the truth is perfect, and you will end up somewhere between the two poles.
[192]"This procedure is an approximation of the likely results of an appropriate statistical analysis. ... The two procedures are intended to address the same bias: intuitive predictions tend to be overly confident and overly extreme.
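As I read it, the bullet-point procedure amounts to one line of arithmetic: start at the baseline and move toward the intuitive estimate in proportion to your estimated correlation between evidence and outcome. A sketch with illustrative numbers (the GPA figures and the r = 0.3 are mine, not from the book):

```python
def corrected_prediction(baseline, intuitive, correlation):
    """Regress an intuitive prediction toward the baseline.

    baseline:    what you'd predict knowing nothing about the case
                 (base rate, or the category's average outcome).
    intuitive:   the number that comes to mind from the evidence.
    correlation: your estimate (0..1) of how well that evidence
                 actually predicts the outcome.
    """
    return baseline + correlation * (intuitive - baseline)

# Predicting a GPA: category average 3.1, a dazzling interview suggests
# 3.9, but interviews correlate with GPA at maybe r = 0.3.
gpa = corrected_prediction(3.1, 3.9, 0.3)   # ~3.34

# The two extremes from the bullet list:
no_evidence = corrected_prediction(3.1, 3.9, 0.0)   # stay with the baseline
full_faith = corrected_prediction(3.1, 3.9, 1.0)    # stay with intuition
```

The two extreme bullets fall out as the r = 0 and r = 1 cases, and "somewhere between the two poles" is every r in between.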
"Correcting your intuitive predictions is a task for System 2. Significant effort is required to find the relevant reference category, estimate the baseline prediction, and evaluate the quality of the evidence. The effort is justified only when the stakes are high and when you are particularly keen not to make mistakes. Furthermore, you should know that correcting your intuitions may complicate your life. A characteristic of unbiased predictions is that they permit the prediction of rare or extreme events only when the information is very good. If you expect your predictions to be of modest validity, you will never guess an outcome that is either rare or far from the mean. ...
"The objections to the principle of moderating intuitive predictions must be taken seriously, because absence of bias is not always what matters most. A preference for unbiased predictions is justified if all errors of prediction are treated alike, regardless of their direction. But there are situations when one type of error is much worse than another. ...[e.g.][193]The goal of venture capitalists is to call the extreme cases correctly, even at the cost of overestimating the prospects of many other ventures. For a conservative banker making large loans, the risk of a single borrower going bankrupt may outweigh the risk of turning down several would-be clients who would fulfill their obligations."
p. 193--"A rational person will invest a large sum in an enterprise that is most likely to fail if the rewards of success are large enough, without deluding herself about the chances of success. However, we are not all rational and some of us may need the security of distorted estimates to avoid paralysis. If you choose to delude yourself by accepting extreme predictions, however, you will do well to remain aware of your self-indulgence."
Possibly relevant to the Echo Chamber.
p. 200--"The ultimate test of an explanation is whether it would have made the event predictable in advance. No story of Google's unlikely success will meet that test, because no story can include the myriad of events that would have caused a different outcome."
Possibly relevant to the literary imperative!
p. 201--"Like watching a skilled rafter avoiding one potential calamity after another as he goes down the rapids, the unfolding of the Google story is thrilling because of the constant risk of disaster. However, there is an instructive difference between the two cases. The skilled rafter has gone down the rapids hundreds of times. He has learned to read the roiling water in front of him and to anticipate obstacles. He has learned to make the tiny adjustments of posture that keep him upright. There are fewer opportunities for young men to learn how to create a giant company, and fewer chances to avoid hidden rocks... Of course there was a great deal of skill in the Google story, but luck played a more important role in the actual event than it does in the telling of it. And the more luck was involved, the less there is to be learned."
This encapsulates, in more decorous language, Taleb's case against reading the newspaper.
p. 204--"Because adherence to standard operating procedures is difficult to second-guess, decision makers who expect to have their decisions scrutinized with hindsight are driven to bureaucratic solutions—and to an extreme reluctance to take risks. As malpractice litigation became more common, physicians changed their procedures in multiple ways: ordered more tests, referred more cases to specialists, applied conventional treatments even when they were unlikely to help. These actions protected the physicians more than they benefitted the patients, creating the potential for conflicts of interest. Increased accountability is a mixed blessing."
Definitely relevant to any number of entries on Accountability.
[posted to ArtTechnics2]
p. 213--"Perfect prices leave no scope for cleverness, but they also protect fools from their own folly. We now know, however, that the theory is not quite right. Many individual investors lose consistently by trading, an achievement that a dart-throwing chimp could not match."
p. 217--"Why do investors, both amateur and professional, stubbornly believe that they can do better than the market, contrary to an economic theory that most of them accept, and contrary to what they could learn from a dispassionate evaluation of their personal experience? ...
"The most potent psychological cause of the illusion is certainly that the people who pick stocks are exercising high-level skills. They consult economic data and forecasts, they examine income statements and balance sheets, they evaluate the quality of top management, and they assess the competition. All this is serious work that requires extensive training, and the people who do it have the immediate (and valid) experience of using these skills. Unfortunately, skill in evaluating the business prospects of a firm is not sufficient for successful stock trading, where the key question is whether the information about the firm is already incorporated in the price of its stock. Traders apparently lack the skill to answer this crucial question, but they appear to be ignorant of their ignorance. ...
"...the illusions of validity and skill are supported by a powerful professional culture. We know that people can maintain an unshakable faith in any proposition, however absurd, when they are sustained by a community of like-minded believers. Given the professional culture of the financial community, it is not surprising that large numbers of individuals in that world believe themselves to be among the chosen few who can do what they believe others cannot."
p. 219--Philip Tetlock's work on expert prediction: "Those who know more forecast very slightly better than those who know less. But those with the most knowledge are often less reliable. The reason is that the person who acquires more knowledge develops an enhanced illusion of her skill and becomes unrealistically overconfident. "We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly," Tetlock writes [Expert Political Judgment (2005)]. "In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals...are any better than journalists or attentive readers of The New York Times in 'reading' emerging situations." ... "Experts in demand," he writes, "were more overconfident than their colleagues who eked out existences far from the limelight."
"Tetlock also found that experts resisted admitting that they had been wrong, and when they were compelled to admit error, they had a large collection of excuses: they had been wrong only in their timing, an unforeseeable event had intervened, or they had been wrong but for the right reasons. Experts are just human in the end. They are dazzled by their own brilliance and hate to be wrong. Experts are led astray not by what they believe, but by [220] how they think, says Tetlock."
Relevant to topics of hyperspecialization and to what I called the dangerous middle-ground of knowledge-gathering. But it does need to be kept in mind that the topic above is prediction of the future; it is not the creation of art or the day-to-day business of living.
p. 220--"The main point of this chapter is not that people who attempt to predict the future make many errors; that goes without saying. The first lesson is that errors of prediction are inevitable because the world is unpredictable. The second is that high subjective confidence is not to be trusted as an indicator of accuracy (low confidence could be more informative)."
p. 221--"Speaking of Illusory Skill": "The question is not whether these experts are well-trained. It is whether their world is predictable."
p. 224--Orley Ashenfelter's work on predicting wine prices based on a simple, three-factor evaluation of weather/rainfall: "His formula provides accurate price forecasts years and even decades into the future. Indeed, his formula forecasts future prices much more accurately than the current prices of young wines do. This new example of a "Meehl pattern" challenges the abilities of the experts whose opinions help shape the early price. It also challenges economic theory, according to which prices should reflect all the available information, including the weather. Ashenfelter's formula is extremely accurate—the correlation between his predictions and actual prices is above .90.
"Why are experts inferior to algorithms? One reason, which Meehl suspected, is that experts try to be clever, think outside the box, and consider complex combinations of features in making their predictions. Complexity may work in the odd case, but more often than not it reduces validity. Simple combinations of features are better. Several studies have shown that human decision makers are inferior to a prediction formula even when they are given the score suggested by the formula! They feel that they can overrule the formula because they have additional information about the case, but they are wrong more often than not."
--"Another reason for the inferiority of expert judgment is that humans are incorrigibly inconsistent in making summary judgments of complex information. When asked to evaluate the same information twice, they fre-[225]quently give different answers. The extent of the inconsistency is often a matter of real concern." [e.g. interpretation of X-rays, auditors, "organizational managers," etc.]
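The two failure modes above (over-complex judgment and plain inconsistency) can be shown in a toy sketch. This is only an illustration: the three weather cues echo Ashenfelter's setup, but the coefficients and numbers are hypothetical, not his actual estimates.

```python
import random

# Toy illustration of pp. 224-225: a Meehl-style formula is simple and
# perfectly consistent; a human judge using the same cues is not.
# The weather factors and ALL coefficients here are hypothetical,
# not Ashenfelter's actual regression.
def formula(winter_rain, growing_temp, harvest_rain):
    """Simple linear combination of a few features: same input, same output."""
    return 0.5 * winter_rain + 2.0 * growing_temp - 0.8 * harvest_rain

def human_judge(winter_rain, growing_temp, harvest_rain, rng):
    """Same cues, plus the session-to-session noise Kahneman describes."""
    return formula(winter_rain, growing_temp, harvest_rain) + rng.gauss(0, 3)

rng = random.Random(0)
vintage = (600, 17.5, 120)  # winter rain (mm), avg growing temp (C), harvest rain (mm)
print(formula(*vintage) == formula(*vintage))                    # True: consistent
print(human_judge(*vintage, rng) == human_judge(*vintage, rng))  # almost surely False
```

The point of the sketch is just Kahneman's: the formula, asked twice about the same vintage, gives the same answer; the noisy judge does not.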
p. 225--"The research suggests a surprising conclusion: to maximize predictive accuracy, final decisions should be left to formulas, especially in low-validity environments. In admissions decisions for medical schools, for example, the final determination is often made by the faculty members who interview the candidate. The evidence is fragmentary, but there are solid grounds for a conjecture: conducting an interview is likely to diminish the accuracy of a selection procedure, if the interviewers also make the final admission decisions."
The point seems fair, but this specific example opens quite the consequentialist wormcan: this is pretty much the archetypal situation where the applicants Evolve to exploit the (ostensibly fixed/static) algorithm once it becomes known. Paradoxically, by being inconsistent, human judges limit cynical/malign applicants' ability to talk their way in.
p. 227--"The statistical evidence of clinical inferiority contradicts clinicians' everyday experience of the quality of their judgments. ... Many of these hunches are confirmed [during the treatment period], illustrating the reality of clinical skill.
[228]"The problem is that the correct judgments involve short-term predictions in the context of the therapeutic interview, a skill in which therapists have many years of practice. The tasks at which they fail typically require long-term predictions about the patient's future. These are much more difficult, even the best formulas do only modestly well, and they are also tasks that the clinicians have never had the opportunity to learn properly—they would have to wait years for feedback, instead of receiving the instantaneous feedback of the clinical session. However, the line between what clinicians can do well and what they cannot do at all well is not obvious, and certainly not obvious to them. They know they are skilled, but they don't necessarily know the boundaries of their skill. Not surprisingly, then, the idea that a mechanical combination of a few variables could outperform the subtle complexity of human judgment strikes experienced clinicians as obviously wrong.
"The debate about the virtues of clinical and statistical prediction has always had a moral dimension. The statistical method, Meehl wrote, was criticized by experienced clinicians as "mechanical, atomistic, additive, cut and dried, artificial, unreal, arbitrary, incomplete, dead, pedantic, fractionated, trivial, forced, static, superficial, rigid, sterile, academic, pseudoscientific and blind." The clinical method, on the other hand, was lauded by its proponents as "dynamic, global, meaningful, holistic, subtle, sympathetic, configural, patterned, organized, rich, deep, genuine, sensitive, sophisticated, real, concrete, natural, true to life, and understanding.
"This is an attitude we can all recognize. When a human competes with a machine...our sympathies lie with the fellow human."
p. 229--"The prejudice against algorithms is magnified when the decisions are consequential. ... [The] rational argument [for the algorithms] is compelling, but it runs against a stubborn psychological reality: for most people, the cause of a mistake matters. The story of a child dying because an algorithm made a mistake is more poignant than the story of the same tragedy occurring as a result of human error, and the difference in emotional intensity is readily translated into a moral preference."
p. 230--re: DK's time designing the interview/evaluations for IDF recruits: "I was convinced by [Meehl's] argument that simple, statistical rules are superior to intuitive "clinical" judgments. I concluded that the current interview had failed at least in part because it allowed the interviewers to do what they found most interesting, which was to learn about the dynamics of the interviewee's mental life. Instead, we should use the limited time at our disposal to obtain as much specific information as possible about the interviewee's life in his normal environment. Another lesson I learned from Meehl was that we should abandon the procedure in which the interviewers' global evaluations of the recruit determined the final decision. Meehl's book suggested that such evaluations should not be trusted and that statistical summaries of separately evaluated attributes would achieve greater validity."
This points, more or less, to the wisdom in Fromm and Maccoby's methodology. Put differently: this illustrates why their methodology is not absurd even when it is applied using psychoanalytic theory that is mostly absurd.
p. 232--"If you are serious about hiring the best possible person for the job, this is what you should do. First, select a few traits that are prerequisites for success in this position... Don't overdo it—six dimensions is a good number. The traits you choose should be as independent as possible from each other, and you should feel that you can assess them reliably by asking a few factual questions. Next, make a list of those questions for each trait and think about how you will score it...
"... To avoid halo effects, you must collect the information on one trait at a time, scoring each before you move on to the next one. Do not skip around. To evaluate each candidate, add up the six scores. Because you are in charge of the final decision, you should not do a "close your eyes." [i.e. do not synthesize your subjective impressions into a subjective composite/global score] Firmly resolve that you will hire the candidate whose final score is the highest, even [233] if there is another one whom you like better—try to resist your wish to invent broken legs to change the ranking." [refers to Meehl's remark that an assessment of how likely one is to go to the movies that night can change drastically if it is learned that the person has suffered a broken leg]
Again, the goal is not perfection, which is unrealistic here. The goal is to work around the most pernicious of human foibles. This makes a lot of sense.
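The procedure reduces to a few mechanical steps, which a minimal Python sketch makes concrete. The six trait names and all scores below are hypothetical; only the procedure (score traits one at a time, sum, commit to the highest total) is Kahneman's.

```python
# Sketch of the structured-interview procedure from pp. 232-233.
# Trait names and candidate scores are hypothetical illustrations.
TRAITS = ("technical skill", "reliability", "sociability",
          "diligence", "judgment", "communication")

def total_score(scores):
    """Sum independently scored traits (say, 1-5 each); no global gut rating."""
    assert set(scores) == set(TRAITS), "score every trait, one at a time"
    return sum(scores.values())

alice = dict(zip(TRAITS, (4, 3, 5, 4, 3, 4)))
bob = dict(zip(TRAITS, (5, 2, 3, 3, 4, 3)))

# Commit in advance to hiring the highest total, even if you "like"
# the other candidate better -- no invented broken legs.
winner = max((("alice", total_score(alice)), ("bob", total_score(bob))),
             key=lambda kv: kv[1])
print(winner)  # ('alice', 23)
```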
p. 240--"If subjective confidence is not to be trusted, how can we evaluate the probable validity of an intuitive judgment? When do judgments reflect true expertise? When do they display an illusion of validity? The answer comes from two basic conditions for acquiring a skill:
•an environment that is sufficiently regular to be predictable
•an opportunity to learn these regularities through prolonged practice
When both of these conditions are satisfied, intuitions are likely to be skilled. Chess is an extreme example of a regular environment... Physicians, nurses, athletes, and firefighters also face complex but fundamentally orderly situations. ... In contrast, stock pickers and political scientists who make long-term forecasts operate in a zero-validity environment. Their failures reflect the basic unpredictability of the events that they try to forecast.
"Some environments are worse than irregular. Robin Hogarth described "wicked" environments, in which professionals are likely to learn the wrong lessons from experience."
p. 257--"One of the benefits of an optimistic temperament is that it encourages persistence in the face of obstacles. But persistence can be costly. An impressive series of studies by Thomas Åstebro sheds light on what happens when optimists receive bad news. ...
"...persistence after discouraging advice was relatively common among inventors who had a high score on a personality measure of optimism—on which inventors generally scored higher than the general population. ... The evidence suggests that optimism is widespread, stubborn, and costly."
p. 262--"Organizations that take the word of overconfident experts can expect costly consequences. The study of CFOs showed that those who were most confident and optimistic about the S&P index were also overconfident and optimistic about the prospects of their own firm, which went on to take more risk than others. As Nassim Taleb has argued, inadequate appreciation of the uncertainty of the environment inevitably leads economic agents to take risks they should avoid. However, optimism is highly valued, socially and in the market; people and firms reward the providers of dangerously misleading information more than they reward truth tellers."
In other words, perhaps System 1 is a more adaptive trait in social isolation and a less adaptive trait in complex societies or social settings. In isolation it enables snap assessments which can ensure survival; in social systems, meanwhile, it reacts viscerally against unhappy information personified by other social actors (people) as with the study of professors' "pretentious" vocabulary mentioned on p. 63.
p. 263--"Experts who acknowledge the full extent of their ignorance may expect to be replaced by more confident competition, who are better able to gain the trust of clients. An unbiased appreciation of uncertainty is a cornerstone of rationality—but it is not what people and organizations want."
p. 264--"premortem" (G. Klein): "when the organization has almost come to an important decision but has not formally committed itself, Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: "Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5-10 minutes to write a brief history of that disaster.""
--"The premortem has two main advantages: it overcomes the groupthink that af-[265]fects many teams once a decision appears to have been made, and it unleashes the imagination of knowledgeable individuals in a much-needed direction.
[265]"As a team converges on a decision—and especially when the leader tips his or her hand—public doubts about the wisdom of the planned move are gradually suppressed and eventually come to be treated as evidence of flawed loyalty to the team and its leaders. The suppression of doubt contributes to overconfidence in a group where only supporters of the decision have a voice. The main virtue of the premortem is that it legitimizes doubts. Furthermore, it encourages even supporters of the decision to search for possible threats that they had not considered earlier."
***post to Copland on the Tuba..."Kahneman on the Tuba"?***
p. 270--"Simple gambles (such as "40% chance to win $300") are to students of decision making what the fruit fly is to geneticists. Choices between such gambles provide a simple model that shares important features with the more complex decisions that researchers actually aim to understand. Gambles represent the fact that the consequences of choice are never certain. Even ostensibly sure outcomes are uncertain: when you sign the contract to buy an apartment, you do not know the price at which you may later have to sell it, nor do you know that your neighbor's son will soon take up the tuba. Every significant choice we make in life comes with some uncertainty—which is why students of decision making hope that some of the lessons learned in the model situation will be applicable to more interesting everyday problems. But of course the main reason that decision theorists study simple gambles is that this is what other decision theorists do."
p. 293--Thaler and the "endowment effect", e.g. for concert tickets bought for $200: "You are an avid fan and would have been willing to pay up to $500 for the ticket. Now you have your ticket and you learn on the internet that richer or more desperate fans are offering $3000. Would you sell? If you resemble most of the audience at sold-out events you do not sell. Your lowest selling price is above $3000 and your maximum buying price is $500. This is an example of the endowment effect, and a believer in standard economic theory would be puzzled by it."
p. 294--"The starting point for our investigation was that the endowment effect is not universal. If someone asks you to change a $5 bill for five singles, you hand over the five ones without any sense of loss. Nor is there much loss aversion when you shop for shoes. The merchant who gives up the shoes in exchange for money certainly feels no loss. Indeed, the shoes that he hands over have always been, from his point of view, a cumbersome proxy for money that he was hoping to collect from some consumer. Furthermore, you probably do not experience paying the merchant as a loss, because you were effectively holding money as a proxy for the shoes you intended to buy. ...
"What distinguishes these market transactions from Professor R's reluctance to sell his wine, or the reluctance of Super Bowl ticket holders to sell even at a very high price? The distinctive feature is that both the shoes the merchant sells you and the money you spend from your budget for shoes are held "for exchange." They are intended to be traded for other goods. Other goods, such as wine and Super Bowl tickets, are held "for use," to be consumed or otherwise enjoyed. Your leisure time and the standard of living that your income supports are also not intended for sale or exchange."
p. 295--experimental investigation of the endowment effect: "we added to the Sellers and Buyers a third group—Choosers. Unlike the buyers, who had to spend their own money to acquire the good [a mug], the Choosers could [296] receive either a mug or a sum of money, and they indicated the amount of money that was as desirable as receiving the good. These were the results:
Sellers $7.12
Choosers $3.12
Buyers $2.87
The gap between Sellers and Choosers is remarkable, because they actually face the same choice! [i.e. between taking home the mug or taking home some amount of money] ... The high price that Sellers set reflects the reluctance to give up an object that they already own, a reluctance that can be seen in babies who hold on fiercely to a toy and show great agitation when it is taken away. Loss aversion is built into the automatic evaluations of System 1.
[296]"Buyers and Choosers set similar cash values, although the Buyers have to pay for the mug, which is free for the Choosers. This is what we would expect if Buyers do not experience spending money on the mug as a loss. Evidence from brain imaging confirms the difference. Selling goods that one would normally use activates regions of the brain that are associated with disgust and pain. Buying also activates these areas, but only when the prices are perceived as too high...
"The cash value that the Sellers set on the mug is a bit more than twice as high as the value set by Choosers and Buyers. The ratio is very close to the loss aversion coefficient in risky choice, as we might expect if the same value function for gains and losses of money is applied to both riskless and risky decisions. A ratio of about 2:1 has appeared in studies of diverse economic domains, including the response of households to price changes. As economists would predict, customers tend to increase their purchases of eggs, orange juice, or fish when prices drop and to reduce their purchases when prices rise; however, in contrast to the predictions of economic theory, the effect of price increases (losses relative to the reference price) is about twice as large as the effect of gains."
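The ~2:1 Sellers-to-Buyers ratio can be reproduced with the standard prospect-theory value function. The parameter values below are the Tversky and Kahneman (1992) estimates, which are not stated in these notes; this is a sketch under that assumption.

```python
# Prospect-theory value function with the Tversky & Kahneman (1992)
# parameter estimates: curvature alpha = 0.88, loss aversion lambda = 2.25.
# These parameters come from that paper, not from the passage above.
ALPHA, LAMBDA = 0.88, 2.25

def value(x):
    """Subjective value of gaining/losing x relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

# A loss looms a bit more than twice as large as an equal gain -- the
# same ~2:1 asymmetry as Sellers ($7.12) vs. Buyers/Choosers (~$3).
ratio = -value(-100) / value(100)
print(round(ratio, 2))  # 2.25
```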
***[on cultural differences]***
p. 298--"We all know people for whom spending is painful, although they are objectively quite well-off. There may also be cultural differences in the attitude toward money, and especially toward the spending of money on whims and minor luxuries... Such a differ-[299]ence may explain the large discrepancy between the results of the "mugs study" in the United States and in the UK."
p. 301--"Some experimenters have reported that an angry face "pops out" of a crowd of happy faces, but a single happy face does not stand out in an angry crowd. The brains of humans and other animals contain a mechanism that is designed to give priority to bad news."
p. 302--"The psychologist Paul Rozin, an expert on disgust, observed that a single cockroach will completely wreck the appeal of a bowl of cherries, but a cherry will do nothing at all for a bowl of cockroaches."
p. 305--"Loss aversion is a powerful conservative force that favors minimal changes from the status quo in the lives of both institutions and individuals. This conservatism helps keep us stable in our neighborhood, our marriage, and our job; it is the gravitational force that holds our life together near the reference point."
p. 306--studies of fairness in economic behavior, e.g., a hypothetical hardware store which raises its prices after a blizzard; a large majority of study participants rate this as Unfair or Very Unfair: "They evidently viewed the pre-blizzard price as a reference point and the raised price as a loss that the store imposes on its customers, not because it must but simply because it can. A basic rule of fairness, we found, is that the exploitation of market power to impose losses on others is unacceptable."
p. 307--sim. results re: wages and profits: per responses, "The entitlement is personal: the current worker has a right to retain his wage even if market conditions would allow the employer to impose a wage cut. [And yet] The replacement worker has no entitlement to the previous worker's reference wage..."
--"Different rules governed what the firm could do to improve its profits or to avoid reduced profits. ... Of course, our respondents liked a firm better and described it as more fair if it was generous when its profits increased, but they did not brand as unfair a firm that did not share. They showed indignation only when a firm exploited its power to break informal contracts with workers or customers, and to impose a loss on others in order to increase its profit.
p. 308--"More recent research has supported the observations of reference-dependent fairness and has also shown that fairness concerns are economically significant, a fact we had suspected but did not prove. Employers who violate rules of fairness are punished by reduced productivity, and merchants who follow unfair pricing policies can expect to lose sales.
--"Unfairly imposing losses on people can be risky if the victims are in a position to retaliate. Furthermore, experiments have shown that strangers who observe unfair behavior often join in the punishment. ... It appears that maintaining the social order and the rules of fairness in this fashion is its own reward. Altruistic punishment could well be the glue that holds societies together. However, our brains are not designed to reward generosity as reliably as they punish meanness. Here again, we find a marked asymmetry between losses and gains."
The last bit is VERY RELEVANT to internet discourse.
As for the rest, here is an opportunity for the neocons to zig rather than zag and take the anti-essentialist position which they normally oppose: these results reflect culture more than anything, and as such there's no reason why economic actors as yet unborn could not be initiated into a culture wherein the Standard Economic Model has been elevated to a moral imperative. The question then becomes, IS this actually moral? And who would actually want to live in that world? Certainly not me. These reference-based judgments of fairness make perfect sense to my System 1. Moreover, I think it would be very possible to marshal an impressive parade of thinkers in support, probably including some early/classical economic thinkers (Smith?) whose thought has been perverted by the Chicagoans; also Christ, Buddha, Gandhi, et al. Generally we don't "impose losses on others" for sport. It is not only contemporary bleeding hearts who have felt so.
Btw, is there a more profligate breaker of "informal contracts with workers [and] customers" than Disney? PSC broke the formal law enough to get sued...but Disney's emphasis on tradition and values so often prepares a bed in which managers (and maybe higher ups?) are not actually prepared to lie. It is precisely in the "informal" areas rather than the formal ones where lots of animosity seems to fester.
p. 318--two reasons for risk-seeking in the face of sure losses: "First, there is diminishing sensitivity. The sure loss is very aversive because the reaction to a loss of $900 is more than 90% as intense as the reaction to a loss of $1000. The second factor may be even more powerful: the decision weight that corresponds to a probability of 90% is only about 71, much lower than the probability. The result is that when you consider a choice between a sure loss and a gamble with a high probability of a larger loss, diminishing sensitivity makes the sure loss more aversive, and the certainty effect reduces the aversiveness of the gamble. The same two factors enhance the attractiveness of the sure thing and reduce the attractiveness of the gamble when the outcomes are positive."
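The "decision weight of about 71" for a 90% probability matches the probability-weighting function from cumulative prospect theory. The functional form and the gamma value are assumptions taken from Tversky and Kahneman (1992), not from the passage itself:

```python
# Probability-weighting function from cumulative prospect theory
# (Tversky & Kahneman, 1992), with their estimate gamma = 0.61 for gains.
# Form and parameter are assumptions from that paper, not from the notes.
GAMMA = 0.61

def weight(p):
    """Decision weight attached to an outcome with probability p."""
    num = p ** GAMMA
    return num / (num + (1 - p) ** GAMMA) ** (1 / GAMMA)

# A 90% chance gets a decision weight of only about 0.71 (the "71" in
# the passage above); tiny probabilities are overweighted instead.
print(round(weight(0.90), 2))  # 0.71
print(weight(0.01) > 0.01)     # True: small chances loom larger than they "should"
```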
Incidentally, the generic description which follows is eerily reminiscent of what is currently alleged against the administrators of the AFM pension fund: "Risk taking of this kind often turns manageable failures into disasters." (319)
p. 319--section heading "Gambling In The Shadow Of The Law": "The plaintiff with a strong case is likely to be risk averse." i.e. even with a 95% chance to win, the temptation to settle for a sure payout is strong.
p. 320--Conversely, "A defendant with a weak case is likely to be risk seeking, prepared to gamble rather than accept a very unfavorable settlement. In the face-off between a risk-averse plaintiff and a risk-seeking defendant, the defendant holds the stronger hand. The superior bargaining position of the defendant should be reflected in negotiated settlements, with the plaintiff settling for less than the statistically expected outcome of the trial. This prediction from the fourfold pattern was confirmed by experiments conducted with law school students and practicing judges, and also by analyses of actual negotiations in the shadow of civil trials.
"Now consider "frivolous litigation" ...both know that in a negotiated settlement the plaintiff will get only a small fraction of the amount of the claim. ...the frivolous claim is a lottery ticket for a large prize. Overweighting the small chance of success is natural in this situation... [while] For the defendant, the suit is a nuisance with a small risk of a very bad outcome. ... The shoe is now on the other foot: the plaintiff is willing to gamble and the defendant wants to be safe. Plaintiffs with frivolous claims are likely to obtain a more generous settlement than the statistics of the situation justify."
p. 328--"The story, I believe, is that a rich and vivid representation of the outcome, whether or not it is emotional, reduces the role of probability in the evaluation of an uncertain prospect. This hypothesis suggests a prediction, in which I have reasonably high confidence: adding irrelevant but vivid details to a monetary outcome also disrupts calculation. Compare your cash equivalents for the following outcomes:
21% (or 84%) chance to receive $59 next Monday
21% (or 84%) chance to receive a large blue cardboard envelope containing $59 next Monday
The new hypothesis is that there will be less sensitivity to probability in the second case, because the blue envelope evokes a richer and more fluent representation than the abstract notion of a sum of money. You constructed the event in your mind, and the vivid image of the outcome exists there even if you know that its probability is low. Cognitive ease contributes to the certainty effect as well: when you hold a vivid image of an event, the possibility of its not occurring is also represented vividly, and overweighted."
p. 329--"The idea of denominator neglect helps explain why different ways of communicating risk vary so much in their effects. You read that "a vaccine that protects children from a fatal disease carries a 0.001% risk of permanent disability." The risk appears small. Now consider another description of the same risk: "One of 100,000 vaccinated children will be permanently disabled." The second statement does something to your mind that the first does not: it calls up the image of an individual child who is permanently disabled by a vaccine; the 999,999 safely vaccinated children have faded into the background. As predicted by denominator neglect, low-probability events are much more heavily weighted when described in terms of relative frequencies (how many) than when stated in abstract terms of "chances," "risk," or "probability" (how likely). As we have seen, System 1 is much better at dealing with individuals than categories."
p. 331--"choice from experience" vs. "choice from description": "As expected from prospect theory, choice from description yields a possibility effect—rare outcomes are overweighted relative to their probability. In sharp contrast, overweighting is never observed in choice from experience, and underweighting is common."
p. 333--"When it comes to rare probabilities, our mind is not designed to get things quite right. For the residents of a planet that may be exposed to events no one has yet experienced, this is not good news."
p. 335--"every simple choice formulated in terms of gains and losses can be deconstructed in innumerable ways into a combination of choices, yielding preferences that are likely to be inconsistent."
p. 339--"Closely following daily fluctuations is a losing proposition, because the pain of the frequent small losses exceeds the pleasure of the equally frequent small gains. Once a quarter is enough, and may be more than enough for individual investors."
p. 348--"people expect to have stronger emotional reactions (including regret) to an outcome that is produced by action than to the same outcome when it is produced by inaction."
p. 351--"The taboo tradeoff against accepting any increase in risk [e.g. regarding safety of children] is not an efficient way to use the safety budget. In fact, the resistance may be motivated by a selfish fear of regret more than by a wish to optimize the child's safety."
This is a bit too economistic. A trade-off between a safer, more expensive insecticide and a safer car seat?! But the "taboo tradeoff" does seem like a useful coinage. Seems to me this is usually about suffering judgment by other parents, less so about purely internal guilt.
p. 354--"Poignancy (a close cousin of regret) is a counterfactual feeling, which is evoked because the thought "if only he had shopped at his regular store. . ." comes readily to mind. The familiar System 1 mechanisms of substitution and intensity matching translate the strength of the emotional reaction to the story onto a monetary scale, creating a large difference in dollar awards."
p. 361--"As we have seen, rationality is generally served by broader and more comprehensive frames, and joint evaluation is obviously broader than single evaluation. Of course, you should be wary of joint evaluation when someone who controls what you see has a vested interest in what you choose. Salespeople quickly learn that manipulation of the context in which customers see a good can profoundly influence preferences. Except for such cases of deliberate manipulation, there is a presumption that the comparative judgment, which necessarily involves System 2, is more likely to be stable than single evaluations, which often reflect the intensity of emotional responses of System 1. We would expect that any institution that wishes to elicit thoughtful judgments would seek to provide the judges with a broad context for the assessments of individual cases. I was surprised to learn from Cass Sunstein that jurors who are to assess punitive damages are explicitly prohibited from considering other cases. The legal system, contrary to psychological common sense, favors single evaluation."
p. 380--"The hedonimeter totals are computed by an observer from an individual's report of the experience of moments. We call these judgments duration-weighted, because the computation of the "area under the curve" assigns equal weights to all moments: two minutes of pain at level 9 is twice as bad as one minute at the same level of pain. However, the findings of this experiment and others show that the retrospective assessments are insensitive to duration and weight two singular moments, the peak and the end, much more than the others."
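A toy sketch of mine to make the contrast concrete: the duration-weighted "area under the curve" counts every minute equally, while the peak-end rule scores only the worst moment and the last one. The pain ratings below are invented for illustration, not data from the book.

```python
# Duration-weighted total vs. the peak-end rule, with per-minute
# pain ratings on a 0-10 scale (illustrative numbers, not Kahneman's data).

def duration_weighted_total(ratings):
    """Area under the curve: every moment counts equally."""
    return sum(ratings)

def peak_end_score(ratings):
    """Retrospective memory: average of the worst moment and the final one."""
    return (max(ratings) + ratings[-1]) / 2

short_episode = [8, 9]             # 2 minutes, ends at its peak
long_episode = [8, 9, 5, 3, 2]     # same start, plus 3 milder minutes

# The long episode contains strictly more total pain as experienced...
assert duration_weighted_total(long_episode) > duration_weighted_total(short_episode)

# ...yet the peak-end rule remembers it as *less* bad, because it ends mildly.
assert peak_end_score(long_episode) < peak_end_score(short_episode)
```

This is the duration-neglect pattern the quote describes: adding minutes of milder pain makes the experience objectively worse but the memory of it better.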
p. 381--listener whose CD was scratched and produced "a shocking sound" towards the end of a long symphony: "he reported that the bad ending "ruined the whole experience." But the experience was not actually ruined, only the memory of it. The experiencing self had an experience that was almost entirely good, and the bad end could not undo it, because it had already happened. ...
"Confusing experience with the memory of it is a compelling cognitive illusion—and it is the substitution that makes us believe a past experience can be ruined. The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions. What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self."
***
p. 402--"Measured by life satisfaction 20 years later, the least promising goal that a young person could have was "becoming accomplished in a performing art." Teenagers' goals influence what happens to them, where they end up, and how satisfied they are."
[It seems that this refers to Bowen and Bok, "The Shape of the River"]
***
-"the focusing illusion": "Nothing in life is as important as you think it is when you are thinking about it."
--"The origin of this idea was a family debate about moving from California to Princeton, in which my wife claimed that people are happier in California [403] than on the East Coast."
[403]"As we analyzed the data, it became obvious that I had won the family argument. As expected, the students in the two regions differed greatly in their attitude to their climate: the Californians enjoyed their climate and the Midwesterners despised theirs. But climate was not an important determinant of well-being. Indeed, there was no difference whatsoever between the life satisfaction of students in California and in the Midwest."
Touché. To my credit, though, I never said I was "happier" in California; rather, I wrote this reflection when I realized just how much more *practical* it is for me to live here. Good thing I'm not married, I guess.
p. 404--"When asked about the happiness of Californians, you probably conjure an image of someone attending to a distinctive aspect of the California experience, such as hiking in the summer or admiring the mild winter weather. The focusing illusion arises because Californians actually spend little time attending to these aspects of their life. Moreover, long-term Californians are unlikely to be reminded of the climate when asked for a global evaluation of their life. If you have been there all your life and do not travel much, living in California is like having ten toes; nice, but not something one thinks much about."
If on the other hand you lived your first three decades in one of the Midwestern states that native Californians (and many Midwesterners) so despise, your System 1 does in fact attend almost constantly to your newfound advantages. Or at least mine does.
p. 411--heading "Econs and Humans": "For economists and decision theorists, the adjective ["rational"] has an altogether different meaning [than in everyday speech]. The only test of rationality is not whether a person's beliefs and preferences are reasonable, but whether they are internally consistent. A rational person can believe in ghosts so long as all her other beliefs are consistent with the existence of ghosts. ...
"The definition of rationality as coherence is impossibly restrictive; it demands adherence to rules of logic that a finite mind is not able to implement. Reasonable people cannot be rational by that definition, but they should not be branded as irrational for that reason. Irrational is a strong word which connotes impulsivity, emotionality, and a stubborn resistance to reasonable argument. I often cringe when my work with Amos is credited with demonstrating that human choices are irrational, when in fact our research only showed that Humans are not well described by the rational-agent model.
"Although Humans are not irrational, they often need help to make more accurate judgments and better decisions, and in some cases policies and institutions can provide that help. These claims may seem innocuous, but they are in fact quite controversial. As interpreted by the important Chicago school of economics, faith in human rationality is closely linked to an ideology in which it is unnecessary and even immoral to protect people against their choices."
p. 416--"System 1 is indeed the origin of much that we do wrong, but it is also the origin of most of what we do right—which is most of what we do. Our thoughts and actions are routinely guided by System 1 and generally are on the mark. One of the marvels is the rich and detailed model of our world that is maintained in associative memory: it distinguishes surprising from normal events in a fraction of a second, immediately generates an idea of what was expected instead of a surprise, and automatically searches for some causal interpretation of surprises and of events as they take place."
p. 417--"The way to block errors that originate in System 1 is simple in principle: recognize the signs that you are in a cognitive minefield, slow down, and ask for reinforcement from System 2. ... Unfortunately, this sensible procedure is least likely to be applied when it is needed most. We would all like to have a warning bell which rings loudly whenever we are about to make a serious error, but no such bell is available, and cognitive illusions are generally more difficult to recognize than perceptual illusions. The voice of reason may be much fainter than the loud and clear voice of an erroneous intuition, and questioning your intuitions is unpleasant when you face the stress of a big decision. More doubt is the last thing you want when you are in trouble. The upshot is that it is much easier to identify a minefield when you observe others wandering into it than when you are about to do so. Observers are less cognitively busy and more open to information than actors. That was my reason for writing a book that is oriented to critics and gossipers rather than to decision makers.
"Organizations are better than individuals when it comes to avoiding [418] errors, because they naturally think more slowly and have the power to impose orderly procedures."
p. 418--"There is a direct link from more precise gossip at the watercooler to better decisions. Decision makers are sometimes better able to imagine the voices of present gossipers and future critics than to hear the hesitant voice of their own doubts. They will make better choices when they trust their critics to be sophisticated and fair, and when they expect their decision to be judged by how it was made, not only by how it turned out."