The most curious aspect of Becker's books is that he writes of "dualism" matter-of-factly. This little term detains him not in the least even as it derails many otherwise sympathetic readers.
It is possible to sum up Becker's dualism very concisely: he is saying of human beings that their eyes are bigger than their stomachs. Presumably even card-carrying materialists have observed this phenomenon (or perhaps evinced it themselves). If they would deny even the "idealism" of going back for a second piece of cake when the stomach is already full, then materialism indeed cannot explain much at all. If the truism that all cognition is "embodied" is deployed merely to repress awareness of everything that remains confounding about human behavior, then the so-called Cognitive Revolution is not revolutionary but reactionary.
Really Becker's dualism is nothing controversial, and it is both more parsimonious and more incisive than any laboratory psychology. Its only fault is that it arrives stamped with a contested label which it can quite well do without.
Simply put, the stomach is creaturely: it can be filled. The eyes are symbolic: in the domain of symbols, appetite can be fed limitlessly and is never full.
That's it. That's Becker's dualism of the "creaturely" and the "symbolic." The theory of desire starts here, embodied in reaching for the Alka-Seltzer, but of course it does not end here.
Suppose that a guest at a dinner party must explain to the host that a spouse or a child suddenly is not feeling well after going back for a second and third dessert. The guest's rhetorical options are constrained by whatever norms prevail among those present. Only in the paradisiacal and mindfully rationalistic milieu of professors, scientists, and public intellectuals could the existentialist version perhaps, maybe, find practical outlet in genteel company.
"After a full evening of recreation, entertainment, libations, and nutriment, a human being may have eaten too much and yet still not lived nearly enough. And indeed, dear Carruthers The Younger, though he has been neither seen nor heard for at least ninety minutes, has indeed seized the opportunity to live all too profligately. As a consequence, he must retire, along with those members of his kinship group presently charged with his custody."
Meanwhile, in most every other milieu, the necessity to avoid meeting generosity with ingratitude, or even risking misconstrual to this effect, takes priority over all else; and so a genteel metaphor serves the occasion more gracefully.
"I'm so sorry, but we have to go. The kid's eyes were bigger than his stomach."
This ensures that fault is laid on the guest rather than the host, which is what is most important. The quality of the explanation is much less important. And indeed, this formulaic utterance, even if it were somehow literal rather than metaphorical, merely suggests a proximate cause. Rank and Becker propose an ultimate cause: a living thing can (quite easily) have not enough life, but it cannot have too much life. It does not take a laboratory scientist to see that, but it does take a certain tolerance of wounded vanity for a human being to accept, as they utter such a thing about others, that it applies also to themselves.
It is from the inability to tolerate such unhappy realities, from the various efforts to escape or deny them, that the entire absurdity of the human condition follows. That is most of what Becker is trying to say. Whether human life really can be split in two, in any conventional sense, is far less important.
"Man's tragic destiny," writes Becker, is that "he must desperately justify himself as an object of primary value in the universe." Hence, "what the anthropologists call "cultural relativity" is thus really the relativity of hero-systems the world over."
This is the comparative view. The view from the inside of any one culture is of course very different. As Francisco Gil-White observes, ca. 2001,
These days "good" anthropologists do not essentialize groups, and therefore no self-proclaimed essentialists are found in anthropology journals. But ordinary folk are not good anthropologists or sophisticated constructivist scholars. Quite to the contrary, they are naive essentialists, and I will try to explain why.
In other words, as human knowledge marches forward, human beings stay pretty much the same. Every scholarly advance widens the knowledge-cognition gap, the gap between what humanity collectively knows and what any individual human being can actually hold in mind, even if the advance is modest and its practical applications innocuous. And so "sophisticated constructivist scholars" find the ground shifting beneath their feet. Constructivists easily find themselves on the right side of truth but on the wrong side of justice. Justice, after all, is a "local epistemology." Justice is "constructed" only in a crude metaphorical sense. Really justice is essentialist, the ultimate essentialism perhaps. If justice is the end, relativism cannot be the means. Every relativist conclusion converged upon by advancing knowledge, therefore, is a problem created, whether or not it is also a problem solved.
If it requires dozens of additional pages for an astute scholar to "try to explain why" essentialism persists among the folk, it is easier to explain why they do not become constructivists: because it is very difficult for anyone at all to do so. It has taken many generations and much painstaking litigation of methodology in order for social science to begin to become constructivist without just imagining itself to be that way. The simplicity and elegance of the best advanced thinking too easily conceals a background process that is anything but simple and elegant. It seems unreasonable, then, to expect everyone else (or anyone else) to simply bootstrap themselves out of whatever "naive" beliefs they have already developed and into the "sophisticated constructivist" truth. This is asking too much of the average person, for one thing; and for another thing, advanced thinking per se appears to guarantee no concurrent advance in morality or consciousness, as Hannah Arendt and Christopher Lasch (among others) have observed.
In his address on "Cargo Cult Science," Richard Feynman reflects upon this delicacy of empirical investigation:
there have been many experiments running rats through all kinds of mazes, and so on—with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.
The question was, how did the rats know,...? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and, still the rats could tell.
He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.
Now, from a scientific standpoint, that is an A-Number-1 experiment. That is the experiment that makes rat-running experiments sensible, because it uncovers the clues that the rat is really using—not what you think it's using. ...
I looked into the subsequent history of this research. The subsequent experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn't discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats.
The anecdote itself is evergreen, but there is an ultimate pessimism at its core: "ordinary" human existence cannot proceed this way. Whether it should is moot. It has not and cannot. Still, knowledge advances all the same and seems only to be accelerating its advance. This suggests that whatever possibilities exist for progress lie in working around essential human frailties, not in trying to change them.
One way of addressing inner human faults, of course, is to simply leave them where they are. This does not mean merely to ignore or deny them, but to leave them on the "inside" of the human being, to render unto Caesar the things that are Caesar's, and to then turn attention to everything that is "outside" of human beings. That is, "constructivist" attention can profitably turn to the literal "construction" of the lived environment and of social institutions. Outer constructions are very responsive to advances in human knowledge, whereas the insides of human beings themselves have scarcely changed since the time of the Neanderthals. But this, somehow, is not what anyone means by "constructivism." This is just the old "essentialism," now trailed by a motley of other impish -isms, and it all gives constructivists the heebie-jeebies. Constructivists know how to build buildings but they would rather build humans. Only one of those aims is coherent.
Human beings have far greater ability to crystallize their knowledge in the form of built environment and institutions than they do to persuade or coerce each other. In this way advancing knowledge can be applied and passed on to large populations without demanding that each individual account intellectually for each advance in all of its granular detail. In other words, knowledge can be transfigured into practice. Knowledge per se thereby stands or falls in relation to some practical aim. Knowledge then need not be operationalized as persuasion or coercion. The pantheon of knowledge need not be a caste construction any longer. Untouchables may become as gods if they can get an important person's car started after the car has died; and of course, when important people cannot even get their own cars started, they are not as important as they think.
Materialists of the dialectical persuasion have plenty of their own slogans about practice. For ardent capitalists, meanwhile, there is the more recent example of Nassim Nicholas Taleb:
you need a name for the color blue when you build a narrative, but not in action—the thinker lacking a word for "blue" is handicapped; not the doer. (I've had a hard time conveying to intellectuals the intellectual superiority of practice.)
As always, Becker has the key to why this works:
one of the most vital facts about all objects is that they have both an inside and an outside... But, says [G.H.] Mead, dawning consciousness has no awareness of this dualism; the organism knows its insides by direct experience, but it can know its outside boundaries only in relation to others. ...
...the self cannot come into being without using the other as a lever. As the noted sociologist Franklin Giddings once put it: It is not that two heads are better than one, but that two heads are needed for one.
Consciousness, then, is fundamentally a social experience... A self-reflexive animal, after all, can only get the full meaning of its acts by observing them after they have happened.
One reason for the "superiority of practice," then, is social: social "practice" entails "using the other as a lever." This is how human beings "give outsides to ourselves, and confer insides upon others." Intellectual learning can help, but only through practice are insides and outsides "conferred."
Moreover, the imperative of the social human being to "get the full meaning of its acts by observing them after they have happened" is fulfilled, obviously, only by engaging in some kind of "acts." This is true not just socially but more generally.
This road, nonetheless, is fraught with danger, as Becker elaborates:
We come into contact with people only with our exteriors—physically and externally; yet each of us walks about with a great wealth of interior life, a private and secret self. ... The child learns very quickly to cultivate this private self because it puts a barrier between him and the demands of the world. ...it seems that the outer world has every right to penetrate into his self and that the parents could automatically do so if they wished... But then he discovers that he can lie and not be found out: it is a great and liberating moment, this anxious first lie—it represents the staking out of his claim to an integral inner self, free from the prying eyes of the world.
By the time we grow up we become masters at dissimulation, at cultivating a self that the world cannot probe. But we pay a price. ...we find that we are hopelessly separated from everyone else. ... We touch people on the outsides of their bodies, and they us, but we cannot get at their insides and cannot reveal our insides to them. This is one of the great tragedies of our interiority—it is utterly personal and unrevealable.
The "superiority of practice" implies an inferiority of learning or of cognition, and the above passage from Becker captures one aspect of this. Human "interiority" is "unrevealable" but cognition-about-interiority gives the illusion of revealing something (or perhaps everything). Practice, meanwhile, transfigures cognition into action, ideas into objects, relationships into institutions. Practice does not "reveal" anything either; what it does, rather, is to save human beings the trouble of really needing to reveal idealistically anything that has not already been revealed materially. From there, certain inscrutable social behavior remains in play, but the problem of the knowledge-cognition gap can at least be moderated. Constructivists and essentialists may disagree about almost everything intellectually and culturally, but they can perfectly well work together to clear a fallen tree, organize and deploy a trading party, or paint a mural.
All of these ideas have been put forward many times before. Occasionally they have found tacit acceptance. But as always, the ground shifts underneath intellectuals' feet and causes a hard-won materialist synthesis to appear as mere toxic idealism. So it goes.
In the twenty-first century there is, to start, the problem that myriad basic social "acts" (arguably the vast majority of them) no longer permit the ascertaining of "meaning" at all. Contemporary life, increasingly conducted "remotely" or "parasocially," is something of a scream into a void, with the result that even mindful, persistent and refined efforts by a human being to locate "outside boundaries" can fail completely. Already in 1979, Christopher Lasch identifies the leading edge of this shift and accounts for it brilliantly through the lens of then-current Freudian theory.
The scream-into-a-void problem is quite transparently a result of the "outside," i.e., of environment and institutions; and so this is what must be changed, at minimum, if humanity is to extricate itself from its present predicament. It is not possible to go directly to changing the "inside" of human beings if everything outside of them remains as it is. "Transhumanism" proposes to make this intervention directly, on the inside, using technology which is younger than the persons who are to intervene upon each other; but this merely evinces the unfreedom of a refusal to accept limits.
As if all of this were not enough, there is a peculiar irony in the course that psychology has taken ever since it pried itself from Freud's cold, dead hands. The research paradigm which took the eschewing of "insides" in favor of "outsides" as its guiding principle, namely the paradigm called Behaviorism, is considered in the twenty-first century to be discredited, superseded by a subsequent paradigm, called Cognitivism, which tries very hard to figure out what is happening on the inside of human beings. Cognitivism thus chose a very, very strange time to crash the world stage, right when the expansion of the knowledge-cognition gap was becoming self-sustaining.
Not coincidentally, the knowledge-cognition gap became co-extensive with who knows how many other "gaps," not just rhetorically but often in reality; this not so much because of the cognition part (which does not change much anyway) as because of the knowledge part (which only ever changes into something more and more difficult and expensive and grueling). The most widely-demanded remedy for the gap problem, of course, is to get rid of everything that remotely resembles a gap: all boundaries of practice, of habitation, of knowledge; all borders between countries, between art forms, between businesses, smashed; all real and ideal boundaries, not just some of them, smashed and getting smasheder. The problem with this, though, is that there never was any "boundary" in the "gap," but by this time all the other boundaries have been smashed up trying to close the gap, so no problems are solved this way and many are created, including new gaps which are not boundaries but will surely be smashed anyway, in due time, without being closed. And now things have become a bit too abstract, so it is time to regroup.
The crown on top of the present human predicament is that human beings have lost even what control they might have over their "outside" because there are too many of them and because they are too connected. As Harari puts it,
When people realise how fast we are rushing towards the great unknown, and that they cannot count even on death to shield them from it, their reaction is to hope that somebody will hit the brakes and slow us down. But we cannot hit the brakes,...
...nobody knows where the brakes are. ... Nobody can absorb all the latest scientific discoveries, nobody can predict how the global economy will look in ten years, and nobody has a clue where we are heading in such a rush. Since no one understands the system any more, no one can stop it.
The bottom line, then, of ever-advancing knowledge, exploding population, and global interconnectedness is that even residents of representative democracies actually are no more self-governing than rats in a maze. They may reflexively slam the footwell with their braking foot, but they will not run into any brake pedals. They may dutifully masquerade as Greeks, but they live as Romans, whether they really want to or realize that they do. They may know that not just culture but time and space too have been relativized, but this does not solve their problem; rather, it is their problem.
Contemporary cognitive science, for all its explanatory and predictive power, cannot penetrate very deeply into existential matters, not even when they are amenable (as the Eye-Stomach problem is) to a cognitivist account. This impasse traces back to an "existential" conflict which is cognitivism's own:
Stated tersely: cognitivism plus progressivism equals a reversion to behaviorism. Try to list the ways that cognitivist findings can be operationalized without being built into the very floors, ceilings and walls of culture, or just built into chips and implanted in people's skulls. Merely explaining to people how to think-about-thinking guarantees nothing and can even be tautological. Coercion is effective only temporarily and hence must ramify into full-blown repression if it is to be sustained. Appealing rhetorically or moralistically to the better angels of our nature fails at scale even if it succeeds in awakening those angels in an elite psycho-minority. And so on down the long list of interventions available to progressivism, from the gentle to the severe, until finally it is tacitly admitted that better floor plans and organizational hierarchies can be created but better people cannot be.
And of course, the kicker: every new piece of cognitivist knowledge also is heaped on the pile of general knowledge, which is already so big that even specialists cannot manage their ration. Specialists, even when confined to their speciality, can see straightaway that their stomachs hold only a fraction of what is on their plates, and that there is even more generalism that they ought, ideally, also to digest but cannot even see. And it is the specialists, though they cannot keep up with knowledge either, who see why they cannot keep up: they see the reality of the problem and the terror of it, because they have drawn tight boundaries around themselves.
The broad progressive intervention on behalf of cognitivism, then, ends up as a panoply of imperceptible microinterventions in environment and conditioning, something Skinner could have drawn up, or perhaps precisely what he drew up, depending on how (or if) he is read. This cannot be merely an intervention in knowledge, in morals, or in public opinion. Like dessert and recreation, knowledge and morals are too existentially bound up with life itself to be subject to mere rhetorical persuasion and transactional bargaining. The work of scholars such as William Stephenson and George Lakoff amply shows that even "public opinion" per se also is bound up with nothing less than existence itself. Rank and Becker explain precisely why this is, on which point it is best, by now, to refer the still-skeptical reader to the original texts. So it is also that Rank and Becker have unwittingly explained precisely why progressive psycho-interventions must be behaviorist in application even after they have become cognitivist in principle and in research methodology. They are scholars of the essential as well as of the particular.
Yet another way to put this is that every christening of a new "cognitive bias" is a tacit confession by cognitivists that a certain antisocial behavior is amenable only to behaviorist remedies. Thaler and Sunstein's Nudge is emblematic of this. They are smart enough not to propose that people should try to think differently than people have always thought. They are not the kind of progressives who propose that better people can be created. Rather, they make the humble recommendation that "choice architecture" be expertly designed to favor certain high-level outcomes over others. This is Skinner's brand of progressivism after it has lowered its ambitions and skimmed some cognitivist abstracts.
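A toy sketch can make the mechanism concrete. Consider the default option, perhaps the most celebrated instrument of choice architecture. In the minimal model below (not Thaler and Sunstein's own model; the override probability is an invented, hypothetical parameter), every individual simply keeps whatever the default is unless they go to the trouble of overriding it. Nothing about anyone's preferences differs between the two runs; only the default does, and the aggregate outcome roughly inverts.

```python
import random

def enrollment_rate(default_enrolled: bool,
                    p_override: float = 0.2,
                    n: int = 100_000,
                    seed: int = 42) -> float:
    """Toy model: each person keeps the default unless they go to
    the trouble of overriding it, which happens with probability
    p_override (an invented, hypothetical number)."""
    rng = random.Random(seed)
    enrolled = 0
    for _ in range(n):
        overrides = rng.random() < p_override
        # Overriding flips the default; inertia preserves it.
        enrolled += (default_enrolled != overrides)
    return enrolled / n

if __name__ == "__main__":
    # Same population, same inertia; only the default differs.
    print(f"opt-in default  (start unenrolled): {enrollment_rate(False):.1%}")
    print(f"opt-out default (start enrolled):   {enrollment_rate(True):.1%}")
```

The point, in behaviorist terms, is that the intervention lives entirely in the environment: no one is persuaded, instructed, or coerced, and no one need even notice that a choice has been architected.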
Now, is this a bad thing? Who is to say? Skinner is a brilliant polymath who demands to be heard. Could it actually be a good thing that all of this has unfolded just as it has? It is a thing, that is all.
It may be objected that all of this is bad, as presented, because it is a jaundiced view. Perhaps so, but there is a purpose to it: it is most important that laypeople direct their skepticism toward everything that happens when any science is politically operationalized. Whether it is a jab or a jolt, it is absolutely necessary to start skeptical and proceed from there, not regarding "the science" per se but rather its operationalization. It is useless to bluster against the research findings themselves. It is better that the findings be accepted rather uncritically than that laypeople should arrogate to themselves the counterposing of such anecdotes as are available to them, or join in disputes over methodology which are well beyond lay comprehension. This is not to say, however, that anyone should want anything to do with any science that is younger than they are. That goes for "revolutionary" cognitivism, too.
One can always make what one will of science, as with statistics. The trick, though, is for communities as communities to make the right thing of it. To learn to do this (or, channeling Feynman, to learn what one must learn in order to do this), it is simplest to start with data that is not a moving target. It will move, of course, but there is nothing at all to be done about that.
Much as a work of art is not complete until some audience has received it, and much as "a self-reflexive animal can only get the full meaning of its acts by observing them after they have happened," science is not science until it is operationalized. Science always requires some further elaboration in order to be reconciled with the truly unscientific world of social norms and competing value systems, all the more so for a globalized population of Billions-with-a-B, whose specialists cannot even hope to keep up with the advance of specialist knowledge. This task of reconciling the scientific and the social is something that philosophy, still, can assist with. Philosophy, though it has been "dead" for a while now, may nonetheless be the best science-of-science remaining; or perhaps the only one.
. . . "creaturely ... symbolic ... man's tragic destiny ... cultural relativity" . . .
Ernest Becker, The Denial of Death (pp. 2-5).
If you took a blind and dumb organism and gave it a self-consciousness and a name, if you made it stand out of nature and know consciously that it was unique, then you would have narcissism. In man, physio-chemical identity and the sense of power and activity have become conscious.
In man a working level of narcissism is inseparable from self-esteem, from a basic sense of self-worth. ... But man is not just a blind glob of idling protoplasm, but a creature with a name who lives in a world of symbols and dreams and not merely matter. His sense of self-worth is constituted symbolically... And this means that man's natural yearning for organismic activity, the pleasures of incorporation and expansion, can be fed limitlessly in the domain of symbols and so into immortality. ...
We like to speak casually about "sibling rivalry," as though it were some kind of byproduct of growing up... But it is too all-absorbing and relentless to be an aberration, it expresses the heart of the creature: the desire to stand out, to be the one in creation. ... An animal who gets his feeling of worth symbolically has to minutely compare himself to those around him, to make sure he doesn't come off second-best. ...it is not that children are vicious, selfish, or domineering. It is that they so openly express man's tragic destiny: he must desperately justify himself as an object of primary value in the universe...
When we appreciate how natural it is for man to strive to be a hero, how deeply it goes in his evolutionary and organismic constitution, how openly he shows it as a child, then it is all the more curious how ignorant most of us are, consciously, of what we really want and need. In our culture anyway, especially in modern times, the heroic seems too big for us. ... We disguise our struggle by piling up figures in a bank book to reflect privately our sense of heroic worth. ... But underneath throbs the ache of cosmic specialness, no matter how we mask it in concerns of smaller scope. Occasionally someone admits that he takes his heroism seriously, which gives most of us a chill... We may shudder at the crassness of earthly heroism, of both Caesar and his imitators, but the fault is not theirs, it is in the way society sets up its hero system and in the people it allows to fill its roles. The urge to heroism is natural, and to admit it honest. For everyone to admit it would probably release such pent-up force as to be devastating to societies as they now are.
The fact is that this is what society is and always has been: a symbolic action system, a structure of statuses and roles, customs and rules for behavior, designed to serve as a vehicle for earthly heroism. Each script is somewhat unique, each culture has a different hero system. What the anthropologists call "cultural relativity" is thus really the relativity of hero-systems the world over. But each cultural system is a dramatization of earthly heroics; each system cuts out roles for performances of various degrees of heroism: from the "high" heroism of a Churchill, a Mao, or a Buddha, to the "low" heroism of the coal miner, the peasant, the simple priest; the plain, everyday, earthy heroism wrought by gnarled hands guiding a family through hunger and disease.
. . . "no self-proclaimed essentialists are found in anthropology journals" . . .
Francisco Gil-White, "Are Ethnic Groups Biological "Species" to the Human Brain? Essentialism in Our Cognition of Some Social Categories" (p. 516).
. . . "Cargo Cult Science". . .
Richard P. Feynman, "Cargo Cult Science: Some remarks on science, pseudoscience, and learning how to not fool yourself. Caltech’s 1974 commencement address."
https://calteches.library.caltech.edu/51/2/CargoCult.htm
. . . "the intellectual superiority of practice" . . .
Nassim Nicholas Taleb, Antifragile (pp. 108-109).
. . . "an inside and an outside" . . .
Ernest Becker, The Birth and Death of Meaning: An Interdisciplinary Perspective on the Problem of Man (pp. 23-24).
. . . "We come into contact with people only with our exteriors" . . .
Ibid. (pp. 28-29).
. . . "nobody knows where the brakes are" . . .
Yuval Noah Harari, Homo Deus (Ch. 1).
. . . "too existentially bound up with life itself ... Stephenson ... Lakoff" . . .
William Stephenson, The Play Theory of Mass Communication.
George Lakoff, "What Orwell Didn't Know About the Brain, Mind, and Language." In András Szántó, ed., What Orwell Didn't Know: Propaganda and the New Face of American Politics.
Stephenson, who should be widely read but cannot be, was one of the first true "media theorists," and perhaps one of the last. Much of the above work suffers from the same inscrutability as does Otto Rank's, and it contains the same flashes of insight.
e.g., from among the above work's myriad summative "postulates":
The self is differently involved in conditions of social control and convergent selectivity. I distinguish self from ego. The former is overtly attitudinal, and the latter a matter of mental structure.
Self-attitudes are developed largely in interactions under social control. (The boy who wins a prize at school adds to his self-stature thereby, and almost all that we are in selfhood respects is given to us in relation to social controls.) But the self so put upon us is to a degree false—a façade only. The person has to be what custom or status demands of him.
Convergent selectivity is an opportunity for the individual to exist for himself. Such existence is experienced as enjoyment, contentment, serenity, or the like. Certain free aspects of self are possible outcomes of convergent play.
The mass media, plays, art, and the theater generally offer opportunities for convergent selectivity. The self so involved is enhanced. There is an increase of self-awareness—typical, for example, of the mountain climber. There is no gain in social or material respects but much gain in one's self-existence....
Ordinary life would be impossible without communication, in school, church, business, on the farm, and so on. ... It is important, however, to distinguish between that part of communication supporting social control and that part of it offering opportunities to convergent selectivity.
... Communication in conditions of social control is a "mover" in national and individual development: it informs a nation of its work, its five-year plans; it teaches literacy and technology; it develops industry and extends markets. ...
Mass communication, literature, drama, and the like serve instead for sociability and self-existence. These are vehicles for communication-pleasure—directly in the enjoyment they enjoin, and indirectly in the social conversations they support.... Convergent communication, being communication-pleasure, serves mainly as a "fill" in mass communication. The "important" communication concerns social control matters. The "fill" serves to maintain status quo position, since it serves no "work" purposes. It pleases, entertains, and projects fashions and fads. It is basically aesthetical, and amoral, a-ethical. Its function is not to relieve anxieties but to increase the sum total of self-existing possibilities.
(The "human interest slant" given to popular "news" put the reader in the position of a confidant, reflecting inner-experience, inducing reverie about himself and so on—all pointed toward more existence for oneself.)... Culture develops in play, and play enters into social control and convergent selectivity situations alike. But the play in religious practices, the armed forces, the law courts, in diplomacy, professional practices, is always more or less subject to internalized belief systems; deeply held values, loyalties, needs, and ethical matters are everywhere evident.
The play in convergent selective situations is at best indifferent to such values, needs, and beliefs.
And of course,
The mass media, in much that pertains to social control as well as convergent selectivity, do not communicate truth or reality but only a semblance of it—of a fictional, representational, or charismatic character. Reaching the truth is a matter for science, technology, reason, and work. Charisma, imagery, and fiction are characteristic of convergencies.
But this is not to be despised. On the contrary, reality is so complex that its symbolical representation is essential to give it meanings that ordinary people can appreciate. Politics is conversation about freedom, democracy, liberty...issues which need bear little relation to ongoing real conditions or legislative actions. But all these can be good fun, that is, good communication-pleasure.
(pp. 192-195).
Mostly Stephenson draws boundaries here between the "work" and the "fun," but it is not too hard to see how (and why) those boundaries have been getting blurrier: for one thing, someone has to make or "create" the "content," and that person, per Rank, is undergoing "experience," always, as they do so; and for another, when something is such great fun, it will be sacralized and moralized; and to be sure, few things have so delighted human beings as has The News in both its creation and its reception. Unfortunately, then, this wellspring of delight is also the stalking horse of the darkest hours.
Further on, Stephenson argues that "advertising has been blamed for social effects that belong, instead, to the contrary principles of social control" (p. 203). The reason why
television can sell soap but not, it seems, citizenship...lies in the part played by mediating mechanisms in advertising; in between the advertisement on the one hand and the consumer who reads it there are the facilitating factors of supermarkets, shopping habits, and the ready availability of spending money which make it relatively easy for a consumer to be "sold" a new brand of soap.
(p. 204).
Finally, Stephenson also conducted public opinion research and is not above broaching a certain kind of behaviorist essentialism, or at least skepticism, when it comes to changes of opinion and behavior.
New Yorkers moving to California or Texas want to behave like everyone around them; they do so in terms of the trivia of modern consumer goods—cars, homes, dress, barbecue pits, swimming pools, and the rest—not out of any sense of shame but out of dissonance, followed by self-expansion, self-respect, and self-expression. They change their ways, and their social character follows suit. Whether their deeper value-systems fall in line as well is another matter; our own view is that it would be well to recognize that early internalizations remain untouched.
(p. 83).
This leads to Lakoff, who needs no further introduction.
What are words? Words are neural links between spoken and written expressions and frames, metaphors, and narratives. When we hear the words, not only their immediate frames and metaphors are activated, but also all the high-level worldviews and associated narratives—with their emotions—are activated. Words are not just words—they activate a huge range of brain mechanisms. Moreover, words don't just activate neutral meanings; they are often defined relative to conservative framings. And our most important political words—freedom, equality, fairness, opportunity, security, accountability—name "contested concepts," concepts with a common shared core that is unspecified, which is then extended to most of its cases based on your values. Thus conservative "freedom" is utterly different than progressive "freedom"...
(p. 70).
Perhaps Stephenson also did not "know" this, but he certainly understood it.
. . ."Thaler and Sunstein" . . .
Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness.
To be fair to the authors, their book is well-conceived, well-executed, and chock-full of wisdom, homely and sophisticated alike. It is for the better that as many people as possible read it. But see all of the above for why this cannot be expected to change very much at all.