Want to Win a Political Debate? Try Making a Weaker Argument
August 23, 2013 • By Eric Horowitz
Gun control? Abortion? The new social science behind why you're never able to convince friends or foes to even consider things from your side.
If all of American politics could be epitomized by a single emotion, it would be the frustration of watching an ignorant politician maniacally disregard the evidence that your own position is correct. Professional politicians are dogmatic in part so they can remain "pure" for re-election, but even average citizens talking policy with their friends are rarely swayed by each other's arguments.
Lately, there’s been a growing emphasis on psychological explanations for such intransigence. You could fill an entire book with syndicated newspaper columns discussing "motivated reasoning"—the tendency to interpret information in a way that confirms your existing beliefs. But research on human motivation also hints at a simpler and somewhat startling reason for the lack of flip-flopping: Nobody makes the type of arguments that are likely to change minds.
And there's nothing illogical about it.
The arguments people make are those that appear the strongest to themselves and the people who already agree with them. But such arguments tend to be meaningless to people who disagree.
How does this happen?
It starts with the universal desire to protect against threats to your self-image or self-worth. People are driven to view themselves in a positive light, and they will interpret information and take action in ways that preserve that view. The need to maintain self-worth is one reason we attribute our failures to external factors (bad luck), but our successes to internal factors (skill).
Because political beliefs are connected to deeply held values, information about politics can be very threatening to your self-image. Imagine coming across information that contradicts everything you've ever believed about the efficacy of Medicare, for example. If you're wrong about such an important policy, what else might you be wrong about? And if you're wrong about a bunch of things, you're obviously not as smart or as good or as worthwhile a person as you previously believed. These are painful thoughts, and so we evaluate information in ways that will help us to avoid them.
It follows that our openness to information depends on how it affects self-worth, and a number of studies bear this out. One line of research has found that self-affirmation—a mental exercise that increases feelings of self-worth—makes people more willing to accept threatening information. The idea is that by raising or "affirming" your self-worth, you can then encounter things that lower your self-worth without a net decrease. The affirmation and the threat effectively cancel each other out, and a positive image is maintained.
A 2006 study led by Geoff Cohen, for example, found that when pro-choice people had their partisan identities made salient, affirmation made them more likely to compromise and make concessions on abortion restrictions. Similarly, a study by Joshua Correll found that affirmation led people to process threatening political arguments in a less biased way. More recently, research by Brendan Nyhan and Jason Reifler (PDF) found that self-affirmation made people who supported withdrawing from Iraq more likely to agree that the Iraq troop surge of 2007 saved lives, and made strong Republicans more likely to agree that climate change is real. The takeaway from all three studies is that information is more likely to have the desired effect if, on net, it doesn't lower a person's self-worth.
Research by Nyhan and Reifler on what they've termed the "backfire effect" also suggests that the more a piece of information lowers self-worth, the less likely it is to have the desired impact. Specifically, they have found that when people are presented with corrective information that runs counter to their ideology, those who most strongly identify with the ideology will intensify their incorrect beliefs.
When conservatives read that the CBO claimed the Bush tax cuts did not increase government revenue, for example, they became more likely to believe that the tax cuts had indeed increased revenue (PDF).
In another study by Nyhan, Reifler, and Peter Ubel, politically knowledgeable Sarah Palin supporters became more likely to believe that death panels were real when they were presented with information demonstrating that death panels were a myth. The researchers' favored explanation is that the information is so threatening it causes people to create counterarguments, even to the point that they overcompensate and become more convinced of their original view. The overall story is the same as in the self-affirmation research: When information presents a greater threat, it's less likely to have an impact.
How does all of this play out in a real-world policy debate? Imagine you're a dedicated social liberal who is attempting to show a conservative friend the joys of gun control. You put your trump card on the table right away: Gun control saves lives. All evidence from around the world and within the U.S. points to that conclusion. You smirk, knowing that there's no way somebody can deny that argument.
But things appear different to your friend. The upshot of your argument is that he has spent years supporting a set of policies that kill people. And yet he knows there's no way that could be true because he's a good person who wants what's best for the world. So what you're saying has to be false. It's not even worth considering.
This plays out over and over in politics. The arguments that are most threatening to opponents are viewed as the strongest and cited most often. Liberals are baby-killers while conservatives won't let women control their own bodies. Gun control is against the Constitution, but a lack of gun control leads to innocent deaths. Each argument is game-set-match for those already partial to it, but too threatening to those who aren't. We argue like boxers wildly throwing powerful haymakers that have no chance of landing. What if instead we threw carefully planned jabs that were weaker but stood a good chance of connecting?
Imagine that instead of arguing about the quantity of gun deaths, for example, you make the case that universal background checks will allow a mom with two young kids to feel less nervous about the strange, reclusive man who lives down the street. Now your point is much less threatening. People will never believe they help bring about the deaths of innocents, but they can believe they failed to consider the peace of mind of some person they don't know. The argument is objectively weaker, but it's more likely to be below the threat threshold that leads to automatic rejection. It might actually be considered.
None of this is to say that the most powerful argument is never the right choice; when somebody is unknowledgeable or uncommitted, leading with your strongest case may still be best. And for political parties, the priority is often driving activism rather than changing minds, so threatening arguments may serve them better. But if you're trying to convince a friend to change his views, it might be worthwhile to go against your instincts and hit him with all your weakest points.