David Evans, Ph.D., is a Senior Manager of Customer Research at Microsoft. He will give a talk titled “The Causes and Consequences of Fake News: A Consumer Psychology Perspective” at the Insights Leadership Conference, September 26-28 in Palm Beach. His presentation will trace how attentional and memory bottlenecks among consumers gave rise to in-context advertising, in which paid promotions masquerade as news and other content. From there, he will explain how perceptual processes make the two difficult to distinguish, and how attitudes can become extremitized after repeated exposure. Finally, he will explore ways to modify the user experience to take the oxygen out of the room and help turn trolling back into constructive discourse. David’s remarks are based on his book, Bottlenecks: Aligning UX Design with User Psychology (Apress/Springer, 2017), an excerpt of which appears below with permission:

Group Polarization

The uninhibited expression of negativity and antisocial sentiment began early on the internet and is getting worse. As early as 1984, psychologist Sara Kiesler of Carnegie Mellon University and her colleagues recognized that computer-mediated communication lacked the cues that regulate normal interaction. By 2014, the Pew Research Center reported that over 70% of us had witnessed online harassment and 40% had been the target of it. This ranged from name-calling and efforts to purposefully embarrass someone all the way to sustained sexual harassment and stalking.

When these attacks are targeted at those of you trying to create and market your ideas and inventions, the damage goes from personal to professional. Unchecked negative comments drive down star ratings and, in turn, traffic and sales. Investigating this in 2011, economist Michael Luca compared the Yelp star ratings of 3,500 restaurants in Seattle to their revenue as reported to the Washington State Department of Revenue. He found that, on average, a drop of one star (out of five) resulted in a 9% drop in revenue.
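To make that concrete, here is a back-of-the-envelope calculation in Python based on Luca’s 9% figure. The linear model and the example restaurant are our own illustrative assumptions, not data from the study:

    # Back-of-the-envelope: revenue impact of a star-rating change,
    # using Luca's (2011) estimate of roughly 9% of revenue per star.
    PER_STAR_EFFECT = 0.09  # ~9% revenue change per star

    def projected_revenue(current_revenue: float, star_delta: float) -> float:
        """Project revenue after a change in average star rating."""
        return current_revenue * (1 + PER_STAR_EFFECT * star_delta)

    # A hypothetical restaurant grossing $800,000/year slips by one star:
    print(f"${projected_revenue(800_000, -1.0):,.0f}")  # $728,000 -- a $72,000 hit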

In the 1990s, this was known as flaming. In the 2000s, it was cyberbullying. Today, it’s called trolling. But in the terminology of social psychology, dating back to the 1960s, it is known as group polarization.

Let’s look at group polarization as Moscovici and Zavalloni described it in 1969. In the typical research paradigm, there are three steps: first, individuals express their attitudes about an issue in isolation; second, they discuss the issue in a group; third, they express their attitudes a second time. What happens is that attitudes measured after the discussion are more extreme, or polarized, than they were before the discussion.
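The effect is easy to caricature in code. The toy simulation below is our own illustration rather than a model from the literature: each member drifts toward the group mean during discussion, then overshoots a little in the direction of the group’s initial lean, so the second measurement comes out more extreme than the first.

    def discuss(attitudes, rounds=5, conformity=0.15, overshoot=0.05):
        """Toy model of group discussion. Each round, members move part
        of the way toward the group mean (conformity), then slightly
        further toward the group's initial lean (polarization). The
        update rule and parameters are invented for illustration."""
        lean = 1 if sum(attitudes) > 0 else -1
        for _ in range(rounds):
            mean = sum(attitudes) / len(attitudes)
            attitudes = [a + conformity * (mean - a) + overshoot * lean
                         for a in attitudes]
        return [max(-1.0, min(1.0, a)) for a in attitudes]

    # Step 1: private attitudes on a -1 (con) to +1 (pro) scale, leaning pro.
    before = [0.2, 0.4, 0.1, 0.3, -0.1]
    # Step 2: group discussion. Step 3: measure again.
    after = discuss(before)
    print(sum(before) / len(before))  # 0.18  -- mildly pro before discussion
    print(sum(after) / len(after))    # ~0.43 -- noticeably more pro afterward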

Unfortunately, it takes only a few simple social-situational ingredients, combined, to put us on the path of expressing ourselves in a more hyperbolic, extreme, and antisocial way. We need only an issue that is not perfectly balanced between pro and con (and few issues are), and a group discussion. In almost every case, our opinions will drift toward the extreme. The worst part is that there are not just a few trolls out there for you to avoid; any one of us can be turned into a troll simply by joining the conversation.

Decades of research on polarization have confirmed the situational risk factors that make it worse; a checklist sketch based on them follows the list below. We know that our attitudes and comments are more likely to become extremitized when…

  • We lack immediacy—we are physically or psychologically distanced from one another
  • We are predisposed—we are already leaning for or against an issue
  • We discuss a topic at length—the longer an online conversation goes on, the more polarized it gets
  • Societal groups are salient—our belongingness to an ingroup and differences from an outgroup are emphasized
  • Arguments are imbalanced—we hear only one-sided arguments, not two-sided ones
  • Competition is encouraged—we are encouraged to be extreme or prototypical of our group
  • We are deindividuated—we are difficult to personally identify
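One practical way to use these findings is as a design-review checklist. The sketch below is a hypothetical scoring heuristic of our own devising, not a validated instrument; the factor names mirror the list above, and the equal weighting is an assumption made for illustration.

    # Hypothetical polarization-risk checklist for a discussion feature.
    # Factor names mirror the research list above; equal weighting is an
    # illustrative assumption, not an empirical claim.
    RISK_FACTORS = [
        "low_immediacy",         # participants are physically/psychologically distant
        "predisposed_audience",  # readers already lean for or against the issue
        "lengthy_threads",       # conversations run on at length
        "salient_groups",        # ingroup/outgroup identities are emphasized
        "one_sided_arguments",   # counterarguments are rarely raised
        "rewarded_extremity",    # extreme or prototypical posts gain status
        "deindividuation",       # commenters are hard to personally identify
    ]

    def polarization_risk(design: dict) -> float:
        """Fraction of the known risk factors present in a design (0 to 1)."""
        present = sum(design.get(factor, False) for factor in RISK_FACTORS)
        return present / len(RISK_FACTORS)

    # An anonymous, unmoderated, reply-ranked comment section checks every box:
    legacy_comments = {factor: True for factor in RISK_FACTORS}
    print(polarization_risk(legacy_comments))  # 1.0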

As you can see from reading that list, the conditions for polarization could not be more ripe in online environments. First of all, low immediacy, the lack of psychological nearness between you and your trolls, is a given in any computer-mediated conversation.

In regard to predisposition, the success of search engines like Google has also allowed us to self-select topics and slants that match our own views. For example, searching on Donald Trump in 2016 produced a list of headlines we could choose to read on the basis of whether they seemed to support the presidential candidate. By that point in history, most Americans also knew the slant of news outlets like CNN and Fox News (not to mention NPR and much of AM radio) and were accustomed to choosing the channel they found most comfortable and the best match for their predisposed opinions. Two studies of Twitter have demonstrated this kind of self-segregation: we tend to retweet (Conover and colleagues, 2011) and reply to (Yardi and Boyd, 2010) only those tweets that match our party affiliation and our views on extreme political positions.

Early in a series of online comments, you rarely see polarized opinions; they tend to appear later in the conversation. This is one reason The New York Times closes comments 24 hours after an article is published, or “when we feel the discussion has run its course and there is nothing substantial to gain from having more comments on the article.”
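A time-boxed comment window like the Times’ is simple to express in code. The sketch below is a generic illustration of such a policy, not the Times’ actual system; the 24-hour default comes from the policy quoted above, and the run-its-course judgment is passed in as a flag.

    from datetime import datetime, timedelta, timezone
    from typing import Optional

    COMMENT_WINDOW = timedelta(hours=24)  # per the stated 24-hour policy

    def comments_open(published_at: datetime,
                      ran_its_course: bool = False,
                      now: Optional[datetime] = None) -> bool:
        """Close a thread once the window lapses, or earlier if editors
        judge the discussion has run its course."""
        now = now or datetime.now(timezone.utc)
        if ran_its_course:
            return False
        return now - published_at < COMMENT_WINDOW

    # An article published 30 hours ago is already closed to comments:
    published = datetime.now(timezone.utc) - timedelta(hours=30)
    print(comments_open(published))  # False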

Formal societal groups are not required for polarization, but when any well-known “us and them” enters the conversation, the trolls really come out. It can be anything from football teams to religious groups to political parties to race. Invoking group identity in this way is like throwing gasoline on the fire of polarization.

When you take the time to actually read online comments, you’ll notice they are often imbalanced in the sense of being one-sided rather than two-sided. When we make a serious effort to convince people of something, we anticipate opposition and raise counterarguments ourselves so that we can refute them rationally or find common ground. These are two-sided arguments. Online, however, this happens far too seldom, and the preponderance of one-sided arguments pushes everyone’s discourse to become more and more extreme.

The next risk factor on our list, encouraged competition, comes into play when commenters see some advantage or status in being more extreme. One such cue is when the comments that draw the most replies are sorted to the top or otherwise prominently displayed. This poor design decision makes the troll’s choice easy: say something preposterous, stupid, or racist, and it will be like catnip to all the reasonable people out there, whose rejoinders elevate the original snarky statement from obscurity to fame.
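If rewarding raw reply counts is the flaw, the remedy is to rank by a signal trolls cannot farm. The function below is a hypothetical alternative of our own, meant only to illustrate the idea: endorsements drive rank, and a comment that attracts many replies but few endorsements, the reply-bait signature, sinks rather than rises.

    def comment_rank(endorsements: int, replies: int) -> float:
        """Hypothetical ranking score that does not reward reply-bait.
        Endorsements (upvotes, 'recommend' clicks) drive rank; replies
        alone do not, so a pile-on of rebuttals pushes a comment down."""
        return endorsements / (1 + replies / (endorsements + 1))

    # A thoughtful comment vs. an inflammatory one that drew 40 rebuttals:
    print(comment_rank(endorsements=25, replies=5))   # ~21.0, stays visible
    print(comment_rank(endorsements=2, replies=40))   # ~0.14, sinks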

If you understand polarization, we believe that you can curb it. Despite how it might appear, haters are not, in fact, taking over the web. Don’t forget, the same web has produced truth (Wikipedia), trust (Uber, Airbnb), and altruism (the ALS ice-bucket challenge).

No, we believe that the expression of hate is largely the result of interface design flaws. If it is worse online than in real life, you have a chance to bring it under control by changing the UX design, and we believe you must: not just for the survival of your meme through this bottleneck of social hyperbole (a gauntlet, really), but because none of us can yet give up on the dream of a productive, democratic, global conversation that the web once promised.

References

Kiesler, S., Siegel, J., & McGuire, T. W. (1984). Social psychological aspects of computer-mediated communication. American Psychologist, 39(10), 1123.

Duggan, M. (2014, October 22). Online harassment. Pew Research Center: Internet, Science & Technology. Retrieved from http://www.pewinternet.org/2014/10/22/online-harassment/

Luca, M. (2011). Reviews, reputation, and revenue: The case of Yelp.com. Harvard Business School Working Paper No. 12-016. Retrieved from http://www.hbs.edu/research/pdf/12-016.pdf

Moscovici, S., & Zavalloni, M. (1969). The group as a polarizer of attitudes. Journal of Personality and Social Psychology, 12(2), 125.

Yardi, S., & Boyd, D. (2010). Dynamic debates: An analysis of group polarization over time on Twitter. Bulletin of Science, Technology & Society, 30(5), 316–327.

Conover, M., Ratkiewicz, J., Francisco, M. R., Gonçalves, B., Menczer, F., & Flammini, A. (2011). Political polarization on Twitter. ICWSM, 133, 89–96.

Spayd, L. (2012, October 15). Questions and answers on how the Times handles online comments from readers. The New York Times. Retrieved from https://publiceditor.blogs.nytimes.com/2012/10/15/questions-and-answers-on-how-the-times-handles-online-comments-from-readers/