the what the hell effect.

This inspiring post comes from frog's design mind blog.

It reviews Genevieve Bell’s presentation at SXSW. Genevieve is a researcher at Intel, currently exploring the intersection of honesty and machine intelligence. The post also compares some of her work with the ideas and experiments of Dan Ariely, to help us identify what we know we don’t know about modeling ‘contextual awareness’ in operating systems and applications for so-called smart devices.

For Genevieve, the trick is in knowing how humans cope with the telling and receiving of little white lies.


Giving Up on Being Honest

Can “smart devices” ever understand our intent in the range of ways we communicate with others? Can they understand when we are trying to be communal, rather than to be an authority? And can they communicate in a manner that feels communal?

Genevieve noted in her talk that as human beings, we tell 2 to 200 lies a day. And while most of them are insignificant, the lies are often what smooth over friction in human relations.

But what kind of lies are these? Dan Ariely, in a somewhat unrehearsed session today with Sarah Szalavitz, walked the audience through his ongoing research into human dishonesty.

What he uncovered is that humans have a “fudge factor,” a level of dishonesty we’re willing to engage in and still consider ourselves honest. The insight isn’t simply that the behaviour exists; we’ve all been caught in white lies (perhaps more often than we’d care to admit). Instead, it’s that the fudge factor is rooted in what’s considered acceptable based on context and consequence.

In one example, he ran an experiment where people were given a test with a ton of questions but only five minutes to solve them; in the time provided, it would be impossible to answer them all. When time was up, the people would grade their own tests, run them through a shredder at the back of the room, then tell the facilitator how many answers they got right.

The shredder, however, was rigged not to actually shred the tests, so the researchers could compare what people reported with how many answers they really got right.

From this experiment, they saw that most people lied just a little: if they only solved four problems, they’d say six. Makes sense, right?

But in a separate experiment, Dan tested whether people would cheat with regard to remembering the Ten Commandments. In that case, no one cheated. One finding that came out of that research was that when we are reminded of our own morality, we become more honest. But the honor code must come before we engage in an activity, not after it. Otherwise, we will be tempted to cheat.

The third experiment he related was the following: You see two empty boxes, and then a couple of dots flash on the screen within those boxes. You are asked, “Are there more dots on the right or the left?” You receive 10 cents if you say right and one dollar if you say left, regardless of which side actually has more dots. This is repeated a hundred times with each research subject.

In the lab, they saw that people cheat a little bit throughout the process. But at some point, 80% of the people lose it and start cheating all the time. Different people switch at different points, depending on the context.

Dan called this the “what the hell” effect. In people’s minds, they’re saying: “I’m a cheat, I might as well enjoy it.”

via How Honest Should Smart Devices Be? | Blog | design mind.


One response to “the what the hell effect.”

  1. Darryl

    The idea that we’re moving towards context- and consequence-aware technologies is intriguing, but also a little frightening on some levels. I immediately think of the recent news surrounding the iPhone’s embedded location-tracking scheme.
    I think as designers begin to embrace things like anticipatory and predictive qualities as necessary device attributes, we will need to ensure these so-called ‘smart’ capabilities and UX enhancements do not breach our personal privacy or undermine the security of our data.
    At what point do the anticipatory and predictive qualities of devices become intrusive? Who, ultimately, determines the moral and ethical boundaries of such algorithmic protocols?
