Rationally, your political values shouldn't affect your factual beliefs. Nevertheless, that often happens. Many factual issues are politically controversial - typically because the true answer makes a certain political course of action more plausible - and on those issues, many partisans tend to disregard politically uncomfortable evidence.

This sort of political bias has been demonstrated in a large number of psychological studies. For instance, Yale professor Dan Kahan and his collaborators showed in a fascinating experiment that on politically controversial questions, people are quite likely to commit mathematical mistakes that help them retain their beliefs, but much less likely to commit mistakes that would force them to give up those beliefs. Examples like this abound in the literature.

Political bias is likely to be a major cause of misguided policies in democracies (even the main cause, according to economist Bryan Caplan). Absent any special reason not to, people without specialist knowledge defer to the scientific consensus on technical issues. Thus they do not interfere with the experts, who normally get things right. On politically controversial issues, however, they often let their political bias win over science and evidence, which means they end up with false beliefs. And in a democracy, voters holding systematically false beliefs more often than not translates into misguided policy.

Can we reduce this kind of political bias? I’m fairly hopeful. One reason for optimism is that debiasing generally seems to be possible, at least to some extent. My optimism was strengthened by participating in a CFAR workshop last year. Political bias seems not to be fundamentally different from other kinds of bias and should thus be reducible too. One could of course argue against this view; I’m happy to discuss the issue further.

Another reason for optimism is that the level of political bias seems lower today than it was historically. People are better at judging politically controversial issues in a detached, scientific way than they were in, say, the 14th century. This shows that progress is possible, and there seems to be no reason it couldn’t continue.

A third reason for optimism is that there seems to be a strong norm against political bias. Few people are consciously and intentionally politically biased. Instead, most people seem to believe themselves to be politically rational, and (or so I believe) hold that as a very important value. They fail to see their own biases because of the bias blind spot.

Thus if you could somehow make it salient to people that they are biased, they would actually want to change. And if others saw how biased they are, the incentives to debias would be even stronger.

There are many ways in which you could make political bias salient. For instance, you could meticulously go through political debaters’ arguments and point out fallacies, as I have done on my blog; I will post more about that later. Here, however, I want to focus on another method: a political bias test which I have constructed with ClearerThinking, run by EA member Spencer Greenberg. Since learning how the test works might make you answer a bit differently, I will not explain it here, but instead refer you either to the explanatory sections of the test or to the Vox.com article by Jess Whittlestone (also an EA member).

Our hope is, of course, that people taking the test will start thinking more both about their own biases and about the problem of political bias in general. We want this important topic to be discussed more. Our test is produced for the American market, but hopefully it could serve as a generic template for bias tests in other countries (akin to the Political Compass or Voting Advice Applications).

Here is a guide for making new bias tests (which also discusses the main criticisms of our test). We also hope that the test could inspire academic psychologists and political scientists to construct full-blown scientific political bias tests.

This does not mean, however, that we think such bias tests will by themselves get rid of political bias. We need to attack the problem from many other angles as well. I will return to it in later posts.

Comments

Interesting test. I scored quite low in terms of political bias, but there's certainly a temptation to correct or over-correct for your biases when you're finding it very hard to choose between the options.

Great resource. The Google Doc guide is a really nice touch - very helpful. Two comments:

1) Re: 'don't know' answers. A confidence/credence slider may help, allowing people to give more fine-grained responses. Scores could be modified to some version of credence*correctness. Then you don't punish those who simply don't know whether it was a 2% or 1% renewables increase (ignorant, not biased), and you increase the punishment for those who are 100% sure that the wrong answer is correct (biased). This would retain the punishment for always leaning the same way. An example here: http://www.2pih.com/caltest/ (A rough sketch of such a scoring rule appears at the end of this comment.)

2) The questions must be tough to phrase, but I found a couple ambiguous.

Q5 on foreign aid: 'minor reason' and 'barely a reason' sounded synonymous to me. I picked 'minor' because any aid has an opportunity cost to domestic spending. I know the source can't really help with this.

Q6 on emissions. It may help to clarify that this is production, not consumption emissions. (Having briefly looked into it, however, this doesn't seem to make much of a difference).

Q17 on emigration destination popularity. I read this as 'US compared to [ALL COMBINED] other countries', as opposed to 'US compared to [INDIVIDUAL] other countries', which changes the answer. Silly on my part, but might be worth clarifying.
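
Regarding point 1, here is a minimal sketch (in Python) of what credence-weighted scoring might look like. The question data, the 0.5–1.0 credence scale, and the scoring rules are purely illustrative assumptions on my part, not how the actual test is scored:

```python
# Hypothetical sketch of credence-weighted scoring for a factual-question bias test.
# Each answer records whether it was correct, the respondent's stated credence
# (0.5 = "no idea", 1.0 = certain), and which political side the chosen option
# would flatter, if any.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Answer:
    correct: bool
    credence: float            # 0.5 .. 1.0
    favours: Optional[str]     # "left", "right", or None

def knowledge_score(answers):
    """Reward confident correct answers, penalise confident wrong ones.

    An honest answer at credence 0.5 scores ~0 (ignorant, not biased);
    being 100% sure of a wrong answer is penalised most heavily.
    """
    score = 0.0
    for a in answers:
        signed = 1.0 if a.correct else -1.0
        score += signed * (a.credence - 0.5) * 2   # in [-1, 1] per question
    return score / len(answers)

def lean_score(answers, side):
    """Share of the credence-weighted error that flatters one political side."""
    lean = sum((a.credence - 0.5) * 2
               for a in answers
               if not a.correct and a.favours == side)
    total_error = sum((a.credence - 0.5) * 2 for a in answers if not a.correct)
    return lean / total_error if total_error else 0.0

# Example: one confident correct answer, one honest "don't know", one answer
# confidently wrong in a direction that flatters "left".
answers = [
    Answer(correct=True,  credence=0.9, favours=None),
    Answer(correct=False, credence=0.5, favours="right"),
    Answer(correct=False, credence=1.0, favours="left"),
]
print(knowledge_score(answers))      # ~ -0.07: mild overall penalty
print(lean_score(answers, "left"))   # 1.0: all weighted error flatters "left"
```

Under a rule like this, an honest "don't know" is neither rewarded nor punished, while confident wrong answers that keep favouring the same side show up in both scores.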

This test, though it has good intentions, has several possible flaws:

  1. I suppose it's looking for a concordance between people's political beliefs and their views on factual issues? But this type of concordance isn't necessarily indicative of political bias: if someone believes certain facts, they're going to base their political beliefs on those facts. For example, it would be bizarre if someone believed that the welfare state increases poverty but also identified as a Democrat. Like, "You do realize what the Democrats support, right?"

  2. The "facts" in this test are not necessarily the product of widespread consensus. For example, did the stimulus package reduce unemployment compared to the counterfactual? I don't think that's a settled issue.

  3. This test is easy to game, just as it's easy to get a diagnosis of ADHD by answering "strongly agree" to statements like "I have trouble paying attention". Many questions are clearly trick questions, which everyone knows how to answer.

  4. There's little opportunity to say "I don't know." Did renewable energy increase by 1 or 2 percent? Hell if I know. Is that considered a big difference? I just had to guess.

1) This question is discussed at length in the sections after the test.

2) It is according to our source. But some of the questions could have been better phrased. We will update them.

3) I wouldn't say it's easy to game. In fact, saying that it is a bias test had little effect in our Mechanical Turk pre-tests, which suggests that most people don't try to game it. That said, it is possible to game. It is very hard to construct a test like this that is impossible to game. Other similar tests are far easier to game (see, e.g., Hans Rosling's test of global progress, which in effect is a bias test).

The test obviously isn't going to be a reliable measure of bias if people try to game it - if they try to be more unbiased than they normally are. Still, taking the test could make these people ask themselves why they don't normally adjust for their biases in this way. Hence the test could to some extent fulfill its ultimate purpose - getting people to think more about their biases and about the problem of political bias - even when it isn't accurate as a measure of that bias.

4) True, but if you consistently guess in a direction that favours your political opinions, that suggests bias. That said, ideally it should be possible to indicate how strong your confidence in your answers is.
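
To make the "consistent direction" point concrete, here is a minimal sketch (in Python) of how one might check whether a respondent's errors lean towards their own side more often than chance would predict. The 50/50 chance baseline and the coding of questions are illustrative assumptions, not the test's actual method:

```python
# Hypothetical sketch: if a respondent's wrong answers keep flattering their own
# side more often than unbiased guessing would predict, that pattern (not any
# single wrong answer) is what suggests bias.

from math import comb

def prob_at_least(k, n, p=0.5):
    """Probability of >= k 'own-side' errors out of n errors under unbiased guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Example: 9 of 10 wrong answers happen to flatter the respondent's own politics.
print(prob_at_least(9, 10))   # ~0.011: unlikely if the guesses were unbiased
```

Under these assumptions, a single wrong guess says little, but a long run of wrong guesses that all flatter one's own politics becomes very improbable if the respondent is merely ignorant rather than biased.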

This is a very interesting approach, which I praise. However, the test has some ceiling effects (Sally Murray noted that the percentiles for a given score have been plummeting). It might help to have more and harder questions, or to use continuous variables to pick up finer distinctions.

Thanks, Carl!

Yes, you're right about the ceiling effects. We started out with questions where science hasn't established an answer, but where we guesstimated that the probability of the conservative/liberal answer being right was 50% on average. That design wouldn't have had these ceiling effects. We ran a pre-test of it on Mechanical Turk, but the results were very skewed (conservatives came out as much less biased than liberals, probably due to poorly constructed questions), so I decided to abandon that strategy for this one.

Here are two other posts I've written on this general strategy of inferring bias from belief structures. If you have any ideas about smarter ways to develop bias tests, I'd be very interested to hear them.

Hi Stefan,

It is interesting to consider the possibility of people making more rational voting decisions if they could learn to be more self-aware of their own personal biases. I hope that is a fair summary. I have a number of questions to throw at this concept:

1 - Does it really matter what the voters think anyway (is it a true democracy)? http://www.commondreams.org/views/2014/04/14/us-oligarchy-not-democracy-says-scientific-study

2 - Is it reasonable to expect that people would welcome having their world views challenged and debunked? (Or is this intended to be enforced?)

3 - What actually causes this problem? Can it be "prevented" instead of "cured"?

Keep the ideas coming. Great job.
