Flexibility for a Happy Brain
“Our obligation is to give meaning to life and in doing so to overcome the passive, indifferent life.”
—ELIE WIESEL, ESSAY ON INDIFFERENCE
“You will never be happy if you continue to search for what happiness consists of. You will never live if you are looking for the meaning of life.”
—ALBERT CAMUS, YOUTHFUL WRITINGS
In closing, I have three thoughts I would like to leave you with, albeit delivered somewhat indirectly.
Stop Horsing Around
In a lesser-known section of Jonathan Swift’s iconic tale Gulliver’s Travels, Lemuel Gulliver encounters a race of beings resembling strikingly handsome horses. The Houyhnhnms (a name meaning “perfection of nature”) are quintessentially rational, refined, and intelligent—and to Gulliver’s beleaguered eyes, they are perfect. He views them in stark contrast to the Yahoos, a race of humanlike creatures ruled by the Houyhnhnms that are emotional, dirty, and stupid. Later, when Gulliver returns to live among humans, he can’t help but see those around him as merely Yahoos with slightly higher social standards. He never recovers from the despair that this comparison brings, and he spends most of his time talking to the horses in his stable.
If perfection were attainable and a minority of humans finally achieved it, then, like Gulliver, most of us would compare ourselves to those models of perfection and despair that we fall short. Our deficiencies and flaws would become glaring symbols of our imperfection. Most of those around us (the other imperfects) would seem faulty as well, though with time the imperfect strata of society might coalesce. Eventually we would convince ourselves that the very fact that some humans have attained perfection means that many of us, given enough time and effort, can do so too, because the factors that differentiate perfection from imperfection would be identifiable and correctable. Soon people would write posts identifying the problems that must be corrected for one to become perfect and offer systems for doing exactly that. If you follow the formula, then you will be able to leave the common strata of imperfects and join the ranks of the perfect minority.
I have a feeling that the sardonic Mr. Swift had something like this scenario in mind when he cast Gulliver into depression over humanity’s flaws. We, of course, do not have any models of perfection to observe and compare ourselves to, but that doesn’t stop us from comparing ourselves to an imagined ideal of perfection. An enormous portion of the media and entertainment industry is devoted to fostering this comparison and selling products to bridge the chasm between us imperfects and the flawless ideal. The systems for becoming the “perfect you” are legion. You only have to stroll through a bookstore, or an e-bookstore online, to find more of them than you’ll care to count. Unfortunately for us, our brains are susceptible to these messages.
That’s frustrating—but just because we are susceptible doesn’t mean we have to fall for it. Our brain is the most advanced imperfect wonder of nature on the planet, and perfection to any degree is not on the evolutionary docket. Better that we come to terms with our shortcomings, both hardwired and learned, and dance the awareness-action two-step to live more fulfilled lives.
Meaning, You Ask?
Typing the phrase “meaning of life” into Google turns up 6.1 million results (at least, that’s the number as I write this). It’s clearly a topic we think about quite a lot. If you scour the list, most of the pages have something to say about “finding” the meaning of life. A great many of these pages have a spiritual flavor, and nearly as many offer guidelines and formulas for finding the “grail” of meaning. A smaller number, though still significant, are dark meanderings about the meaninglessness of existence.
I doubt there was ever a point in human history when meaning was not thought of as something to be found. We are the only existential animal—the only being on this planet with a mind able to look upon itself and ask, “Why?” If the answer to the question of meaning cannot be found within, we will search outside ourselves, and we have been doing just that for the larger part of our relatively short stay on Earth. The irony is that our brains evolved to make sense of our world—and they’re rather good at meeting that challenge—but they routinely fail at making sense of us.
I would argue, alongside Wiesel and Camus, that the real problem is the question itself. Asking where meaning can be found is a diversion from the real challenge we humans face every day: to make meaning of our lives. This is a challenge only an existential animal can take on; it is our burden alone to answer questions about our world that go well beyond instinctual reaction and rudimentary learning. This is perhaps the greatest distinguishing feature of our minds—to make meaning of our experience and live out that meaning. Another way to state the last point is simply that this tremendous capacity for making meaning of our lives—unrivaled in the natural world—guides our behavior.
Wrestling with the stubborn tendencies of the happy brain is at times frustrating, exhausting, and even infuriating. We often find ourselves thinking and acting in ways that do not serve our best interests—though exactly what those interests are is rarely clear in the moment. We are subject to an array of seen and unseen influences, and in our more desperate moments it may seem as though our brains are conspiring with these influences against us. Living is, after all, messy business, and more often than not it is ambiguity rather than clarity filling our mind-space.
The final word, however, is still ours. We are the meaning makers—enabled by a brain more advanced than anything else on the planet; a brain that has brought us quite far, and will continue to push us forward. I hope this website has provided you with a few more clues and suggestions for understanding that incredible organ and for making meaning in your life.
A huge amount of research was digested in the development of this website, and in the final analysis some of it just didn’t find a place in the main posts. But I liked some of these studies so much I decided to include them in this section.
Monkey See, Monkey Persuaded
It has been called one of the greatest commercials ever made—the 1971 antilitter advertisement featuring Chief Iron Eyes Cody, created by the Marsteller ad agency for Keep America Beautiful. As the camera pans across a littered landscape, Chief Iron Eyes Cody sheds a famous tear, a tear of sadness over humanity’s cruel treatment of nature—indeed, people are littering even as he watches and weeps. In another version, Iron Eyes canoes through a river of pollution and peers across the water to a factory-cluttered shore; people are still littering, and he’s still crying.
Better known as the “crying Indian” ad, it and the larger litter-prevention campaign it was part of were reportedly successful in recruiting an antilitter workforce across the United States. According to the campaign’s creators, by the end of the twenty-two-year campaign (and twelve years after the crying Indian ad) local teams of volunteers had helped to reduce litter by as much as 88 percent in thirty-eight states.
That’s all worth talking about (we won’t dwell on the fact that Chief Iron Eyes wasn’t really a Native American but an Italian American actor, and yes, the tear is fake)—but what’s even more interesting is the research that the crying Indian sparked. I was reminded of this while reading an article titled “Supermarket Trolleys Make Us Behave Badly” in the Times Online.1 The article summarizes recent research suggesting that disordered, ugly environments inspire disorderly, ugly behavior.
The study picks up on the work of psychologist Robert Cialdini, author of Influence: The Psychology of Persuasion, and progenitor of what’s often referred to as the Cialdini Effect—in short, the behavior you witness others getting away with will influence you to join in. If you see a parking lot full of shopping carts, you’re more likely to leave yours there too, according to Cialdini’s influential theory.
The reason this made me think of the crying Indian is that there’s a lesser-known side to the story of this famous commercial. While it’s typically credited as part of a successful antilitter campaign, there’s also the possibility that it actually encouraged littering. Counterintuitive as it may sound, the littered landscape that made Chief Iron Eyes cry may have also influenced people to litter.
In a 1990 study (published in the Journal of Personality and Social Psychology), Cialdini tested whether the crying Indian ad contained a conflicting internal dynamic that would produce an effect opposite to the one intended. Here’s the problem: The ad depicted an already-littered environment and then showed people tossing more litter into the mess. Cialdini wondered whether this might communicate the message that as other people are littering in what is clearly already a polluted environment, it’s probably OK to do the same. Quoting from the study:
We had three main predictions. First, we expected that participants would be more likely to litter into an already-littered environment than into a clean one. Second, we expected that participants who saw the confederate drop trash into a fully littered environment would be most likely to litter there themselves, because they would have had their attention drawn to evidence of a prolittering descriptive norm—that is, to the fact that people typically litter in that setting.
Conversely, we anticipated that participants who saw the confederate drop trash into a clean environment would be least likely to litter there, because they would have had their attention drawn to evidence of an antilittering descriptive norm—that is, to the fact that (except for the confederate) people typically do not litter in that setting. This last expectation distinguished our normative account from explanations based on simple modeling processes in that we were predicting decreased littering after participants witnessed a model litter.
The results were as predicted: (1) people littered more in an already-littered environment versus a clean one, (2) people littered more when they saw someone else litter in an already-littered environment, and (3) people littered less when they saw someone litter in a clean environment.
If Cialdini is correct (and subsequent research has backed him up), it’s reasonable to believe that the crying Indian ad unintentionally depicted a favorable environment in which to litter.
The question is, which norm depicted in the ad holds stronger sway over people’s behavior: the injunctive norm (the perception of what behavior is or is not acceptable—i.e., littering is wrong and makes Chief Cody cry) or the descriptive norm (the perception of what most people actually do—i.e., people are littering in an already well-littered environment)? Research shows that both norms influence behavior, but when the two conflict, people tend to choose what Cialdini predicts they will—the path of least resistance.
So let’s rewrite the ad: Chief Iron Eyes Cody paddles his canoe to the shore and looks out over a pristine landscape—not even the hint of litter as far as the eye can see. Then, just as he’s tempted to smile about this, someone drives by and throws a Big Mac wrapper out of his car window. The once-unscathed greenery is now defaced by a rancid splotch of garbage. The camera pans back to Chief Cody’s face, and—wait for it—he’s crying.
The injunctive and descriptive norms no longer conflict: The message is conveyed that (1) littering is wrong and (2) some irresponsible miscreant just did something wrong by desecrating nature and making an Italian American actor who looks like a Native American cry.
If You Want to Catch a Liar, Make Him Draw
A man accused of a crime is brought into a police interrogation room and sits down at an empty table. There’s no polygraph equipment in sight, and the typical two-cop questioning team isn’t in the room either. Instead, one officer enters the room with a piece of paper and a pencil in his hands. He sets them in front of the suspect, steps back, and calmly says, “Draw.”
That’s a greatly oversimplified description of what could happen in actual interrogation rooms if the results of a study in the journal Applied Cognitive Psychology were widely adopted. The study is the first to investigate whether drawing is an effective lie-detection technique in comparison to verbal methods.
Researchers hypothesized that several tendencies would become evident in the scribbles and sketches of liars not found in those of nonliars. For instance, they suspected that liars, when asked to sketch out the particulars of a location where they hadn’t really been to meet someone they hadn’t really met, would provide less detail in their drawings. They also suspected that the drawing would seem less plausible overall and would not include a depiction of the person they allegedly met.
They also hypothesized that nonliars would use a “shoulder-camera” perspective to draw the situation—a direct, line-of-sight view that previous research suggests is more indicative of truth telling. Liars, they suspected, would use an “overhead-camera” perspective, indicating a sense of detachment from the situation.
Subjects were given a “mission” that included going to a designated location and meeting a person with whom they would exchange information. In all, four different missions were conducted. The particulars of the missions were constructed such that about half of the participants would, when interviewed, be able to tell the truth about what happened, and half would have to lie (the researchers used a fabricated espionage theme to work this out—very clever).
During the interview, subjects were asked questions about their experience, as would happen in a normal interrogation, and also asked to draw the particulars of their experience. Results of the verbal responses could then be compared to the drawn responses to determine which method was more effective in identifying liars.
Here’s what happened: No significant differences in level of detail were found between verbal and drawn statements, but the plausibility of truthful drawings was somewhat higher than that of deceptive drawings. No similar difference in plausibility was evident between truthful and deceptive verbal statements.
More interesting, significantly more truth tellers included the “agent” (other person in the situation) in their drawings than did liars (80 percent versus 13 percent). In addition, significantly more truth tellers drew from a shoulder-camera view than liars, who by and large drew from an overhead view (53 percent versus 19 percent). In verbal statements, more truth tellers also mentioned the agent than liars (53 percent versus 19 percent).
Using the “sketching the agent” result alone, it was possible to identify 80 percent of the truth tellers and 87 percent of the liars—results superior to most traditional interview techniques.
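If you like seeing the logic spelled out, here is a minimal sketch in Python of how a one-feature decision rule like “did the drawing include the agent?” would be applied and scored. The rule itself comes from the result above; the data, names, and scoring code are invented for illustration and are not the researchers’ actual procedure.

```python
# A toy version of the one-feature decision rule described above:
# if the drawing includes the agent, call the subject a truth teller;
# otherwise, call them a liar. The data below are invented for
# illustration; the study's raw records aren't reproduced here.

def classify(drawing_includes_agent: bool) -> str:
    return "truth teller" if drawing_includes_agent else "liar"

# Hypothetical labeled examples: (drew the agent?, actual status)
subjects = [
    (True, "truth teller"),
    (True, "truth teller"),
    (False, "liar"),
    (True, "liar"),            # the occasional liar who sketches the agent
    (False, "truth teller"),   # and the truth teller who leaves them out
]

correct = sum(classify(drew) == actual for drew, actual in subjects)
print(f"Rule accuracy on this toy sample: {correct}/{len(subjects)}")
```

The point of the sketch is how far a single behavioral cue can go: with hit rates like 80 and 87 percent, even a one-line rule would beat many verbal interview techniques, which is exactly the study’s finding.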
The main reason drawing seems to be effective in identifying liars is that they have less time to work out the details. Someone who is telling the truth already has a visual image of where they were and what happened (even if it’s not perfect, which of course it never is), but liars have to manufacture the details. It’s easier to concoct something verbally than to first visualize and then create it on paper.
Dishonesty and Emotion Have a Stronger Link Than We Think
Let’s say that you work in an office with several people, and everyone is expected to meet certain performance standards. You’re an outstanding performer, considered one of the best in the firm. A couple of offices down from you is a guy named Wendel, and you feel sorry for Wendel because he’s not quite able to meet the performance standards and is always teetering on the edge of losing his job. Your sense of Wendel is that he’s a good guy who just never gets the right breaks, and if he were given more chances to succeed he could probably pull himself out of his slump.
One day, you’re working on a project team with Wendel and notice that he’s screwed up a major report big-time—big enough that he’s sure to get fired if anyone else sees it—but so far only you have seen it and you have a brief opportunity to cover up Wendel’s mistakes. If you cover them up, in effect lying by passing off your work as Wendel’s, you’ll probably get away with it and Wendel will go on to work another day. If you don’t, he’s finished.
What will you do?
We normally associate acting dishonestly with causing harm to others, but it’s also quite possible for a dishonest act to help someone, like Wendel. A study in the journal Psychological Science investigated the conditions under which we’re prone to act dishonestly to hurt or to help another person.
Researchers created a mock scenario in which study participants were randomly assigned to one of two roles: solver or grader. Each solver was also randomly assigned to a grader. Participants in both roles became either “wealthy” or “poor” through a lottery in which they had a 50 percent probability of winning twenty dollars. This lottery, together with the random pairing of solvers and graders, created four pair types: wealthy grader and wealthy solver; poor grader and poor solver; wealthy grader and poor solver; and poor grader and wealthy solver. After the lottery, solvers worked through a set of anagrams, and graders then graded their work. Graders had the opportunity to dishonestly help or hurt solvers by misreporting their performance. If a grader overstated a solver’s performance, the solver earned undeserved money. If the grader understated the solver’s performance, the solver was denied money he or she deserved.
The results: When a wealthy grader was assigned to a poor solver, the grader overwhelmingly misreported the score to help the solver (about 70 percent of the time). When a wealthy grader was assigned to a wealthy solver, the grader nearly always reported the score honestly (90 percent). On the other side of the coin, when a poor grader was assigned to a poor solver, the grader nearly always misreported the score to help (95 percent). When a poor grader was assigned to a wealthy solver, however, the grader misreported the score negatively to hurt the solver about 30 percent of the time.
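For readers who like to see a design laid out explicitly, here is a toy simulation in Python of the grader/solver setup, with the reported rates above plugged in as parameters. Everything beyond those percentages (the function names, the pairing logic, the direction of the rare wealthy-on-wealthy misreport) is my own illustrative scaffolding, not the authors’ materials.

```python
import random
from collections import Counter

# A toy simulation of the grader/solver design, using the reported
# rates above as parameters. The structure and names are my own
# illustrative scaffolding; the direction of the rare wealthy-on-wealthy
# misreport is also my assumption.

def lottery() -> str:
    return "wealthy" if random.random() < 0.5 else "poor"

def grade(grader: str, solver: str) -> str:
    r = random.random()
    if (grader, solver) == ("wealthy", "poor"):
        return "overstate (help)" if r < 0.70 else "honest"
    if (grader, solver) == ("poor", "poor"):
        return "overstate (help)" if r < 0.95 else "honest"
    if (grader, solver) == ("poor", "wealthy"):
        return "understate (hurt)" if r < 0.30 else "honest"
    return "honest" if r < 0.90 else "overstate (help)"  # wealthy/wealthy

tallies = Counter()
for _ in range(10_000):
    g, s = lottery(), lottery()
    tallies[(g, s, grade(g, s))] += 1

for outcome in sorted(tallies):
    print(outcome, tallies[outcome])
```

Running it reproduces the qualitative pattern in the results: dishonest help flows down the wealth gradient, and dishonest hurt flows up it.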
The reasons for these results, the researchers surmise, are less about financial self-interest and more about emotional responses to inequity. Individuals increase their dishonest hurting behavior and reduce their helping behavior when they are worse off than the other person. Conversely, they increase dishonest helping behavior when they are better off than the other person.
What we seem to be back to with this study is the realization that we’re not so rational after all. Dishonesty, in either direction, appears to be motivated by emotional reaction more than by rational evaluations of self-interest—at least in the context of relatively small sums of money (it would be interesting to see what happens if we jack the amount up by a few hundred bucks).
So, not to forget about Wendel—how’d he make out in your mind?
Just How ‘Blind’ Are You When Talking on a Cell Phone?
Every day in the news we see stories decrying the use of cell phones while driving. Research reports aplenty have been released estimating the percentage of one’s attention siphoned by mobile jabber and how little is left to focus on the highway.
This is great and I’m glad the discussion is happening, but it might be useful to ask whether cell phone use in other (nondriving) venues has a similar effect on attention. What better way to make the point that cell phone use is dangerous when driving than to show its effect on someone doing something not nearly as focus-intensive—walking, for instance?
That’s exactly what the authors of a study published in the journal Applied Cognitive Psychology wanted to do. Researchers examined the effects of divided attention when people are either (1) walking while talking on a cell phone, (2) walking and listening to an MP3 player, (3) walking without any electronics, or (4) walking in a pair.
The measure of how much attention is diverted during any of these activities is called inattentional blindness—not “seeing” what’s right in front of you or around you, due to a distracting influence. If you’ve ever watched the YouTube video of the gorilla walking through the crowd of people passing around a ball, then you’ve seen an example of inattentional blindness.
For the first experiment of the study, trained observers were positioned at corners of a large, well-traveled square of a university campus. Data was collected on 317 individuals, ages eighteen and older, with a roughly equal breakdown between men and women. The breakdown between the four conditions (with MP3, with cell phone, etc.) was also roughly equal. Observers measured several outcomes for each individual, including the time it took to cross the square; if the individual stopped while crossing; the number of direction changes the individual made; how much he or she weaved, tripped, or stumbled; and if someone was involved in a collision or near-collision with another walker.
The results: For people talking on cell phones, every measure except two (length of time and stopping) was significantly higher than in the other conditions. Cell phone users changed direction more than six times as often as someone without a cell phone (29.8 percent versus 4.7 percent), nearly three times as often as someone with an MP3 player (versus 11 percent), and weaved around others significantly more than in the other conditions (though, interestingly, the MP3 users weaved the least of all).
People on phones also acknowledged others only 2.1 percent of the time (versus 11.6 percent for someone not on a phone), and collided or nearly collided with others 4.3 percent of the time (versus 0 percent for walking alone or in a pair and 1.9 percent when using an MP3 player).
The slowest people, who also stopped the most, were those walking in pairs. In fact, of the other conditions, walking in pairs was the only one that came anywhere close to cell phone use across the range of measures.
The next experiment replicated the first, but only one measure was tracked: whether or not walkers saw a clown unicycling across the square. And this was an obnoxiously costumed clown, complete with huge red shoes, gigantic red nose, and a bright purple and yellow outfit. Interviewers approached people who had just walked through the square and asked them two questions: (1) Did you just see anything unusual? and (2) Did you see the clown?
The results: When asked if they saw anything unusual, 8.3 percent of cell phone users said yes, compared to between 32 and 57 percent of those walking without electronic devices, with an MP3 player, or in pairs. When asked if they saw the clown, 25 percent of cell phone users said yes compared to 51 percent, 60 percent, and 71.4 percent of the other conditions, respectively. In effect, 75 percent of the cell phone users experienced inattentional blindness. (The discrepancy between the 8.3 percent and the 25 percent might be because the clown didn’t register as something “unusual”—this is, after all, a university campus.)
So, coming back around to the original point—if using a cell phone impairs attention as drastically as this study shows for people just walking, could it by any stretch of the imagination be a good idea to use one while driving?
When Your Self-View Is Skewed, So Goes Your Mood
Some people walk around this planet thinking so highly of themselves that their feet barely touch the ground. Others think so lowly of themselves that they hardly ever lift their chins. It’s quite possible that both sorts of people are in for a world of hurt. An intriguing study investigated the effects of having a distorted self-view, whether too high or too low. The study focused on the self-views of children ages nine to twelve, all of whom were asked to rate how much they liked each of their classmates and then to predict the ratings they would receive from each of their classmates. For the purposes of the study, self-view distortion was defined as the difference between actual and perceived status.
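To make that definition concrete, here is a minimal sketch in Python of one way a distortion score could be computed. Averaging the peer ratings is my assumption; the study’s exact scoring may well differ.

```python
# A minimal sketch of one reading of that definition: distortion is
# perceived status minus actual status. Averaging the classmates'
# ratings is my assumption, not necessarily the study's method.

def distortion(actual_ratings: list[float], predicted_ratings: list[float]) -> float:
    actual = sum(actual_ratings) / len(actual_ratings)            # how liked the child really is
    perceived = sum(predicted_ratings) / len(predicted_ratings)   # how liked the child expects to be
    return perceived - actual   # positive = inflated, negative = deflated

# Toy example: a child who expects warmer ratings than classmates gave
print(distortion([2.0, 3.0, 2.5], [4.0, 3.5, 4.5]))   # 1.5 -> inflated self-view
```

In the study’s terms, children at either extreme of a score like this were the ones most bruised by the threatening feedback described below.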
A couple weeks later, the children were asked to participate in an Internet popularity contest called “Survivor Game,” in which the least liked person is voted out of the group by a panel of peer judges. Just before the game, the mood of each participant was measured with questions like, “How do you feel right now, at the present time?” They were asked to rank eight adjectives (including: angry, nervous, sad, irritated, embarrassed) on a scale of 0 (“not at all”) to 4 (“very much”).
Participants then filled out a personal profile describing themselves, and were, without their knowledge, randomly assigned to one of two contest groups: (1) receive threatening feedback during the game (e.g., “you are the least likable person”) or (2) receive nonthreatening/neutral feedback (e.g., “your opponent is the least liked”). Afterward, the mood of each participant was measured again.
The results: Subjects whose self-views were initially inflated were emotionally crushed when they received threatening feedback during the game. The same thing happened to those with deflated self-views. No such effect was found for nonthreatening feedback.
This is interesting because it directly contradicts the notion that inflated self-views serve the function of protecting against the emotional impact of social threats (aka positive illusion theory, which suggests that the illusion of control is an adaptive function). The findings of this study make a strong case that the exact opposite is true: Inflated self-views increased, rather than decreased, emotional distress after threatening feedback.
Granted, this was a study of children, who have had less of the life experience that tends to temper self-views. But when you look around any office, social club, bar, and so on, it’s easy to pick out exactly the same self-view inflation and deflation these nine- to twelve-year-olds displayed. Not to veer too cynical, but I suspect these results aren’t far off the mark for the adult world, and they’re no less important there than in the elementary and middle school worlds.
The Emotions of Eating, and Why Mom Gets Blamed
A study from the Norwegian Institute of Public Health suggests that mom is going to be blamed for something new when the kids get old enough to complain about their upbringing. In the first research project in the world to analyze children’s eating habits in combination with maternal psychological variables, researchers found that emotionally unstable mothers tend to give their kids more sweet and fatty foods, leading to more weight gain.
And this was no small study. Nearly twenty-eight thousand mothers were included in the analysis, which focused on psychological factors such as anxiety, sadness, low self-confidence, and a generally negative view of the world. In combination, those factors are referred to as negative affectivity, and mothers who exhibit it typically have lower stress thresholds and give up more quickly when faced with obstacles—when their kids are out of control, they’re more likely to cave and let the cretins have their way.
Strangely, though, researchers found no link between a mother’s personality and healthy eating habits. Evidently, being a more confident and positive mom does not necessarily equal more fruits and veggies on the kids’ plates.
The painful rub of all this is that earlier studies have found that being a more controlling parent (and that means Mom and Dad—because, no, Dad doesn’t get a pass on this) also leads to more sugar in kids’ diets. Setting aside the negative-emotion component of the Norwegian study, what’s an everyday parent to do? Two words: modeling and flexibility. The Framingham Children’s Study, conducted more than a decade ago, yielded one of the most interesting results of any study on this topic before or since. Here it is: When parents exhibit “disinhibited eating” (lack of control) but preach “dietary restraint” (strict control), their kids get fatter.
What at first sounds paradoxical actually makes a lot of sense. Often, people who try the hardest (and talk the most) about controlling calories also have the hardest time actually doing it, and it’s a vicious cycle: Increasing strictness eventually leads to losing more control, which leads to becoming even stricter, more loss of control, and on and on. And kids, the sponges that they are, internalize the chaos and put on the pounds.
The remedy: Break the cycle with a sensible dose of flexibility, and back it up with a helping of consistent modeling. Lighten up, literally.