Monthly Archives: November 2009

Do coalitionists lose the “big game” in high school gyms?

People believe cheap is bad – see the discussion buried below in the post from the Frontal Cortex as well as the previous one on wine expertise.

Bad quality, bad service, bad process: these are the attributes you and I see when we know or suspect that we're dealing with cheaper goods.

So what do public involvement and other coalition practitioners frequently do as they launch and manage initiatives that require public judgments of the quality and success of the processes used and the outcomes achieved?

Pick the cheapest engagement options.

We reduce communication frequency. Use the cheapest materials. Host meetings in high school gyms and other free or inexpensive locations.

We do it out of habit. Out of a desire to be good financial stewards. To cope with tight budgets. To fit in with our organizations’ cultures.

All good reasons. Likely there are many more.

But is it all a false economy when these trappings prime our stakeholders to expect substandard experiences that yield inferior results?

And are we distorting the results in ways that harm the discussion – and the reputations and effectiveness of our organizations – by defaulting to a basic engagement ethos of simpler and cheaper?

Dopamine and Future Forecasting

Ed Yong has a typically excellent post on a new paper that looks at how manipulating dopamine levels in the brain can change our predictions of future pleasure:

Tali Sharot from University College London found that if volunteers had more dopamine in their brains as they thought about events in their future, they would imagine those events to be more gratifying. It’s the first direct evidence that dopamine influences how happy we expect ourselves to be.

Sharot recruited 61 volunteers and asked them to say how happy they’d feel if they visited one of 80 holiday destinations, from Greece to Thailand. All of the recruits were given a vitamin C supplement as a placebo and 40 minutes later, they had to imagine themselves on holiday at half of the possible locations. After this bout of fanciful daydreaming, they had to take another pill but this time, half of them were given L-DOPA instead of the placebo. Again, they had to imagine themselves in various holiday spots.

The next day, Sharot brought the volunteers back. By this time, they would have broken down all the L-DOPA in their system. She asked them to choose which of two destinations they’d like to go to, from the set that they had thought about the day before. Finally, they rated each destination again.

By the end of the experiments, they perceived their imaginary holidays to be more enjoyable if they had previously thought about the locations under the influence of L-DOPA (while vitamin C, as predicted, had no effect). The implication is clear: think about the future with more dopamine in the noggin and you'll imagine that you'll have a better time.

As I’ve noted before, the popular caricature of dopamine – it’s the hedonistic molecule in the brain, activated by sex, drugs and rock and roll – is slightly misleading. Dopamine neurons, it turns out, don’t care about pleasure per se – they’re much more interested in predicting pleasure, and then comparing our predictions to the actual event. The transactions of dopamine are largely about learning – finding a way to maximize our rewards – and not about mere decadence.

What I find so interesting about this experiment is that it neatly confirmed this theory of computational neuroscience. After all, the subjects didn’t feel happier after popping a pill of L-DOPA – boosting dopamine levels didn’t lead to instant gratification, like Huxley’s soma. Instead, it merely altered their predictions of future happiness.

But here’s the funny thing about those predictions: they tend to correlate pretty accurately with our actual experience. If you think you’re going to have a good time on vacation, then you probably will, just as we tend to enjoy foods and beverages and products that we expect to enjoy. (This is the consumer version of the placebo effect.) Here’s how I described similar phenomena in How We Decide:

Baba Shiv, a neuroeconomist at Stanford, supplied a group of people with Sobe Adrenaline Rush, an ‘energy’ drink that was supposed to make them feel more alert and energetic. (The drink contained a potent brew of sugar and caffeine which, the bottle promised, would impart ‘superior functionality’). Some participants paid full price for the drinks, while others were offered a discount. The participants were then asked to solve a series of word puzzles. Shiv found that people who paid discounted prices consistently solved about thirty percent fewer puzzles than the people who paid full price for the drinks. The subjects were convinced that the stuff on sale was much less potent, even though all the drinks were identical. ‘We ran the study again and again, not sure if what we got had happened by chance or fluke,’ Shiv says. ‘But every time we ran it we got the same results.’

Why did the cheaper energy drink prove less effective? According to Shiv, consumers typically suffer from a version of the placebo effect. Since we expect cheaper goods to be less effective, they generally are less effective, even if they are identical to more expensive products. This is why brand-name aspirin works better than generic aspirin, or why Coke tastes better than cheaper colas, even if most consumers can’t tell the difference in blind taste tests. ‘We have these general beliefs about the world – for example, that cheaper products are of lower quality – and they translate into specific expectations about specific products,’ said Shiv. ‘Then, once these expectations are activated, they start to really impact our behavior.’

So the next time you buy something on sale, pop a pill of L-DOPA. It will increase your pleasure, if only because you expect it to.


(Via The Frontal Cortex.)


Without Comment: The Unreliability of Expertise?

(Via The Frontal Cortex.)

The WSJ discovers the unreliability of wine critics, citing the fascinating statistical work of Robert Hodgson:

In his first study, each year for four years, Mr. Hodgson served actual panels of California State Fair Wine Competition judges – some 70 judges each year – about 100 wines over a two-day period. He employed the same blind tasting process as the actual competition. In Mr. Hodgson’s study, however, every wine was presented to each judge three different times, each time drawn from the same bottle.

The results astonished Mr. Hodgson. The judges’ wine ratings typically varied by ±4 points on a standard ratings scale running from 80 to 100. A wine rated 91 on one tasting would often be rated an 87 or 95 on the next. Some of the judges did much worse, and only about one in 10 regularly rated the same wine within a range of ±2 points.

Mr. Hodgson also found that the judges whose ratings were most consistent in any given year landed in the middle of the pack in other years, suggesting that their consistent performance that year had simply been due to chance.
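To see how much of that spread simple scoring noise can produce, here's a minimal simulation sketch (the noise level is an assumption tuned to roughly match the ±4 figure above; none of this is Hodgson's actual data):

```python
import random

random.seed(42)

N_TRIALS = 10_000
NOISE_SD = 4.7   # per-tasting scoring noise in points; an assumed value chosen
                 # so the typical spread lands near the ±4 reported above

half_spreads = []
for _ in range(N_TRIALS):
    true_quality = random.uniform(84, 96)   # the wine's 'underlying' score
    # Three blind pours of the identical wine, each scored with noise.
    scores = [random.gauss(true_quality, NOISE_SD) for _ in range(3)]
    half_spreads.append((max(scores) - min(scores)) / 2)

typical = sum(half_spreads) / N_TRIALS
tight = sum(h <= 2 for h in half_spreads) / N_TRIALS
print(f"typical rating spread: ±{typical:.1f} points")
print(f"triplicates landing within ±2: {tight:.0%}")
```

Under those assumptions, plain random noise reproduces the ±4 spreads, and a judge who happens to land inside ±2 one year has no special claim to doing it again the next.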

It’s easy to pick on wine critics, as I certainly have in the past. Wine is a complex and intoxicating substance, and the tongue is a crude sensory muscle. While I’ve argued that the consistent inconsistency of oenophiles teaches us something interesting about the mind – expectations warp reality – they are merely part of a larger category of experts vastly overselling their predictive powers.

Look, for instance, at mutual fund managers. They take absurdly huge fees from our retirement savings, but the vast majority of mutual funds in any given year underperform the S&P 500 and other passive benchmarks. (Between 1982 and 2003, there were only three years in which more than 50 percent of mutual funds beat the market.) Even those funds that do manage to outperform the market rarely do so for long. Their models work haphazardly; their success is inconsistent.
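Those fees compound relentlessly. Here's a back-of-the-envelope sketch of the drag; the 7% market return and 1% expense ratio are illustrative assumptions, not figures from the studies above:

```python
# Illustrative fee drag: identical gross returns, but one fund charges an annual fee.
PRINCIPAL = 100_000    # hypothetical retirement savings
GROSS_RETURN = 0.07    # assumed annual market return
FEE = 0.01             # assumed active-management expense ratio
YEARS = 30

passive = PRINCIPAL * (1 + GROSS_RETURN) ** YEARS
active = PRINCIPAL * (1 + GROSS_RETURN - FEE) ** YEARS

print(f"passive index:   ${passive:,.0f}")
print(f"after a 1% fee:  ${active:,.0f}")
print(f"cost of the fee: ${passive - active:,.0f}")
```

Over thirty years, that single percentage point consumes roughly a quarter of the final balance, before the manager has underperformed by a dime.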

Or look at political experts. In the early 1980s, Philip Tetlock at UC Berkeley picked two hundred and eighty-four people who made their living ‘commenting or offering advice on political and economic trends’ and began asking them to make predictions about future events. He had a long list of pertinent questions. Would George Bush be re-elected? Would there be a peaceful end to apartheid in South Africa? Would Quebec secede from Canada? Would the dot-com bubble burst? In each case, the pundits were asked to rate the probability of several possible outcomes. Tetlock then interrogated the pundits about their thought process, so that he could better understand how they made up their minds. By the end of the study, Tetlock had quantified 82,361 different predictions.

After Tetlock tallied up the data, the predictive failures of the pundits became obvious. Although they were paid for their keen insights into world affairs, they tended to perform worse than random chance. Most of Tetlock’s questions had three possible answers; the pundits, on average, selected the right answer less than 33 percent of the time. In other words, a dart-throwing chimp would have beaten the vast majority of professionals. Tetlock also found that the most famous pundits in the study tended to be the least accurate, consistently churning out overblown and overconfident forecasts. Eminence was a handicap.
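The dart-throwing chimp baseline is easy to make concrete: random guessing over three-option questions converges on one in three. A toy illustration (not Tetlock's data):

```python
import random

random.seed(0)

N_PREDICTIONS = 82_361   # the number of predictions Tetlock tallied
N_OPTIONS = 3            # most questions offered three possible outcomes

# The 'dart-throwing chimp': guess one of the three outcomes uniformly at
# random; by symmetry we can score a guess as correct when it equals 0.
hits = sum(random.randrange(N_OPTIONS) == 0 for _ in range(N_PREDICTIONS))
print(f"chimp accuracy: {hits / N_PREDICTIONS:.1%}")   # converges on 33.3%
```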

But here’s the worst part: even terrible expert advice can reliably tamp down activity in brain regions (like the anterior cingulate cortex) that are supposed to monitor mistakes and errors. It’s as if the brain is intimidated by credentials, bullied by bravado. The perverse result is that we fail to skeptically check the very people making mistakes with our money. I think one of the core challenges in fixing our economy is to make sure we design incentive systems to reward real expertise, and not faux-experts with no track record of success. We need to fund scientists, not mutual fund managers.


Without Comment: Average Internet User Now Spends 68 Hours Per Month Online

(Via Mashable!)


The Nielsen Company issued a report on the top U.S. web brands and Internet usage in the U.S. As expected, Google is the #1 web brand based on unique audience.

The statistic that really jumped out for us, however, was that in September 2009, the average U.S. Internet user spent an estimated 68 hours online (both at home and at work).

Although that still trails television usage by a significant margin, it’s clear that the Internet is carving out a greater and greater role in our lives each month.

[Chart: Nielsen U.S. Internet usage, September 2009]

In addition to spending an average of 68 hours online, the average user visits nearly 2700 websites and averages 57 seconds per site.
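A quick back-of-the-envelope check on how those figures relate (the "sites" count and total time online likely come from different Nielsen measures, so they need not reconcile exactly):

```python
# Cross-checking the Nielsen figures quoted above.
sites_per_month = 2700
seconds_per_site = 57
hours_online = 68

implied_hours = sites_per_month * seconds_per_site / 3600
print(f"sites x time-per-site = {implied_hours:.1f} hours")
print(f"reported time online  = {hours_online} hours")
# The gap suggests 'sites' and total time are measured differently
# (e.g. domains vs. sessions) – an inference, not something Nielsen states.
```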

[Chart: Nielsen top U.S. web brands, September 2009]

For the larger web brands, users spend an average of 1 hour 53 minutes a month on Google, 3 hours 8 minutes on Yahoo and 5 hours 24 minutes on Facebook. The usage study complements another Nielsen report, issued yesterday, that reported a 25% increase in online video viewing year-over-year.