One of the reasons highly useful projects don't get discovered quickly is that they are in underexplored spaces. Certain areas are systematically underexplored due to biases in people's search heuristics. Several examples of such biases are:

1) Schlep blindness: a term coined by Paul Graham; posits that difficult projects are underexplored.

2) Low-status blindness: projects which are not predicted to bring the project lead prestige are underexplored.

3) High-variance blindness: projects which are unlikely to succeed but have a positive expected value anyway are underexplored (see the sketch after this list).

4) Already-invented blindness: projects covering areas that others have already explored are assumed to have been competently explored.

5) Not-obviously-scalable blindness: projects that don't have an obvious route to scaling are underexplored.
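
To make the expected-value point in 3) concrete, here's a minimal sketch in Python. Every number is invented purely for illustration:

```python
# Toy comparison of a "safe" project against a high-variance one.
# All numbers here are made-up placeholders for illustration.

def expected_value(p_success: float, value_if_success: float) -> float:
    """EV of a project that only pays off if it succeeds."""
    return p_success * value_if_success

safe = expected_value(p_success=0.90, value_if_success=100)    # EV = 90
risky = expected_value(p_success=0.05, value_if_success=5000)  # EV = 250

print(f"safe project EV:  {safe:.0f}")
print(f"risky project EV: {risky:.0f}")
```

The risky project fails 95% of the time, yet its expected value is higher; high-variance blindness is the tendency to pass over exactly this kind of project.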

Are there other biases in what EAs pay attention to?

I believe this is useful because a project checking a lot of boxes in this space is *some* evidence that it is worth pursuing: few others will be interested in it, giving you a comparative advantage.

Comments

I think a very good heuristic is to look out for current social taboos. Some examples that come to mind:

  • Psychopharmacology. There is still a huge taboo against handing out drugs that make people feel better because of fears of abuse or the simple 'immorality' of the idea. Many highly effective drug development leads might also not be pursued because of fear of abuse.

  • End-of-life suffering, effective palliative medicine and assisted suicide. A lot of extreme suffering might be concentrated around the last months and years of life, both in developing and in developed nations. Most people prefer not to think about it too hard, and the topic is very loaded with religious concerns.

I'm not sure I have as good a handle on the broader EA ecosystem as others, so consider my thoughts provisional, but I'd suggest adding:

  • A special subset of low-status blindness: there's a bias toward more conventional projects that are easy to understand, since it's easier to get affirmation from others if they understand what you're working on. (Lifted from Jaan Tallinn's Singularity Summit 2011 talk)

  • I suspect EAs may prefer going down the nonprofit route, which seems very noble, but more overall long-term utility may often be produced by starting a for-profit business. E.g., Elon Musk is one of the most effective EAs on the planet because he decided to go the capitalist route.

I'm not sure whether to add basic research stuff or not: the QALY is a pretty creaky foundation, but I grant there's a lot of uncertainty as to how to improve it.

Several of these might be summed up under the heading "high risk." There is a notion that this is exactly what philanthropy (as opposed to governments) ought to be doing.

One area I think hits many of these: global income inequality.

I don't blame governments for not pursuing such things. I've never thought of philanthropy, or how others think of philanthropy, as being about pursuing high-risk altruism. I've always thought of philanthropy as wealthy people with big hearts trying to help people in a way that tugged at their heartstrings, patronizing something they're passionate about, such as research to cure a particular disease or works of fine art they enjoy, or to signal their magnanimity, i.e., giving for the sake of conspicuity.

How common do you think this notion is that philanthropy ought to be pursuing high-risk altruism? Effective charity is more risk-averse than other charity due to its very nature. However, some within effective altruism are more risk-averse than others. Existential risk reduction, such as funding the Machine Intelligence Research Institute (MIRI), is high-risk in that it is so difficult to tell whether MIRI's research will ultimately lead to safety architectures for A.I. which make it through all the bottlenecks of actually being implemented. Why bother funding something which seems like it could have a low likelihood of succeeding, when you don't even know what to assess to improve your estimate of its success? This is how I feel about MIRI.

The only thing that updates my opinion of MIRI's potential success is that other effective altruists who seem to be correct about many things also believe MIRI has a decent chance of success at its mission. That is, outsiders to MIRI who favor other cause areas in the first place being bullish on MIRI indicates to me they're perceiving something I'm not, and I'm humble enough to accept that just because I don't understand how the case for MIRI works doesn't mean it can't work. Of course, this is just evidence via informational social influence. I don't know how to rate that relative to other evidence, which I expect is stronger but don't know how to assess either, so my updates on MIRI's proposed efficacy typically round down to zero. Really, such updates are only sufficient to justify spending further time investigating MIRI and the field of A.I. risk, which is what the Open Philanthropy Project (Open Phil) is doing now.

With other risk reduction, such as climate change, it's also a high-risk bet in the sense that funding one climate change intervention forecloses funding any other intervention with the same money, and it seems impossible to tell which conventional climate change intervention is or will be the most effective. Effective altruism is willing to take high-risk bets when the expected value is sufficiently great and positive. However, there are a couple of ways in which we doubt the credence of expected value calculations as a sole tool, all else equal, for evaluating effectiveness.

The first is to doubt, for any given expected value calculation, whether the factors selected in the calculation are sufficient, and whether the estimated values assigned to each factor are well-calibrated. I myself believe this is a healthy skepticism to take towards any stand-alone expected value calculation. The second way to approach an expected value calculation with skepticism is to doubt that expected value calculations in the first place, even if as meticulously constructed as possible, would alone be sufficient even in theory to bet on a high-risk intervention. This attitude seems more common to GiveWell and Charity Science. Their rationale for this is, I believe, laid out in a blog post called "Why we can't take expected value estimates literally (even when they're unbiased)". I haven't read it.

I believe that everything ultimately would be about idealized expected value calculations. However, we can't have ideal expected value calculations. I believe there are too many factors in any expected value calculation, especially for more specific interventions, for anyone to ever capture them all, and we don't have enough ability to gain information about those factors to assign reliable or sufficiently precise values to them. For example, GiveWell, in assessing the effectiveness of a charity, will take into account the competence and personal fit of the team working at a given organization. I think that's a level of sensitivity that would definitely be a factor in an idealized expected value calculation, but one which I doubt is taken into account in the expected value calculations effective altruists actually use.
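
To illustrate how quickly this compounds, here is a minimal Monte Carlo sketch (my own toy model; the factors and their ranges are entirely hypothetical) of how uncertainty in each factor of an expected value calculation propagates into the final estimate:

```python
import random

# Toy Monte Carlo model of an expected value calculation whose
# factors are each uncertain. All factors and ranges are hypothetical.

def sample_ev() -> float:
    p_success = random.uniform(0.01, 0.10)       # chance the intervention works
    qalys_if_success = random.uniform(1e3, 1e5)  # impact if it works
    team_fit = random.uniform(0.5, 1.0)          # discount for team competence/fit
    return p_success * qalys_if_success * team_fit

samples = sorted(sample_ev() for _ in range(100_000))
print(f"median EV: {samples[50_000]:.0f} QALYs")
print(f"5th-95th percentile: {samples[5_000]:.0f} to {samples[95_000]:.0f} QALYs")
```

With only three uncertain factors, the 90% interval already spans more than an order of magnitude; a real calculation has far more factors, which is the intractability described above.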

In *Doing Good Better*, Will MacAskill writes about how he and his team at 80,000 Hours (80k) coached one mentee on how expected value calculations, once personal fit and all the rest were taken into account, were the dominant factor in her decision to pursue a career in politics. She was in a reference class of PPE graduates from Oxford, who are disproportionately represented in the U.K. Parliament, and seemed otherwise competent enough to be an apt politician. Further, effective altruism as a whole follows the field of economics, taking its high confidence across the whole profession in certain policies being improvements over the status quo as a sufficient indicator that these would make better policies. So, for an 80k member to justify pursuing a career in politics, for which we already have so much good information for estimating the expected value, as long as the candidate in question stands a good chance of becoming an MP and can vote in a way that will increase the likely implementation of very effective but unpopular or unnoticed policy initiatives, her personal characteristics don't matter as much.

This isn't true for the Against Malaria Foundation (AMF) or MIRI. GiveWell uses a cluster-thinking approach, applying as many heuristic and empirical approaches to assessing a charity as they can to minimize the chance they get something wrong. There is no prior track record, or frame of reference, for how to build the next AMF, or an effective existential risk reduction organization. We don't have a table of prior probabilities to estimate the value of a factor based on the characteristics of an item relative to other items in its reference class. So GiveWell is forced to use methods other than expected value, because otherwise they'll always fall short of the standards they aspire to.

If it's a significant factor that Rob Mather is the executive director of AMF in its mission to prevent malaria cases leading to deaths, however many millions, among all possible global health interventions, then it matters even more who the executive director of MIRI is to save the lives of billions of living people and the countless human population of the future. Michael Dickens is an effective altruist who exemplifies this: he values animals highly, and he recently stated he is now substantially more likely to donate to MIRI because their current executive director, Nate Soares, values nonhuman animals and the effect future technologies will have on their welfare, whereas MIRI's previous executive director, Luke Muehlhauser, does not.

Perhaps MIRI should have multiple competitors, each with different staff, pursuing the same ultimate goals in their technical research, but otherwise running their organizations quite differently, to minimize the dependence on one organization to save the world. And yet, these are only a couple of factors in assessing high-risk, high-return, far-off, and empirically sensitive scenarios. It's not worth it to nitpick my example of MIRI, its staff, or A.I. risk, because I just wanted to provide one vivid example of how intractable and insufficient expected value calculations are as a lone tool, or even as a primary tool among many, even if we think we're not wrong about how robust they are.

So, effectively, there is little or no difference between the evaluation approach GiveWell uses and the one I'd ideally endorse, because their way of attacking a problem from so many different angles is a giant algorithm which, while not as simple as we might want, approximates the output of a perfect expected value calculation better than any EV calculation we'll actually use would.

Perhaps MIRI should have multiple competitors, each with different staff, pursuing the same ultimate goals in their technical research, but otherwise running their organizations quite differently, to minimize the dependence on one organization to save the world.

AFAICT, this was target #5 of MIRI's summer fundraiser. As is, MIRI probably lacks the funding to do this.

Another bias seems to be an orientation toward interventions that help a single individual gain significantly more QALYs, under-weighing the systemic benefit of interventions that cause many people to gain slightly more QALYs.

Another bias is an orientation toward easily measurable interventions, and under-weighing the benefits of less easily measurable interventions.

Is there a term for the first one? I generally refer to it as the concentration of benefits and harms problem.

WRT the second: That reminds me that improving the metrics (such as QALYs) could be very high impact, but I think Stanford METRICS is already working on this?

Interesting, I hadn't heard about Stanford METRICS working on this; is there a link you can provide?

Not sure about a term for the first one; it would be nice to come up with one :-) I think your term might be good, but something that signals systemic intervention would also help, maybe something about meta-interventions? Also maybe something related to these LW posts, which I'm sure you're well familiar with: http://lesswrong.com/lw/kn/torture_vs_dust_specks/ and http://lesswrong.com/lw/n3/circular_altruism/

Glancing through their website, it doesn't look like they're working on anything related, so I must be mistaken.

I think the solution here is to be more comfortable making a Fermi estimate of QALYs for hard-to-measure interventions, as opposed to trying to be very exact about QALYs.
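
For instance, a minimal Fermi-style sketch (every number below is a placeholder assumption, stated explicitly so others can dispute it) might look like:

```python
# Back-of-the-envelope Fermi estimate of QALYs for a
# hard-to-measure intervention. All numbers are placeholder guesses.

people_reached = 10_000        # rough guess at reach
fraction_helped = 0.05         # fraction meaningfully affected
qaly_gain_each = 0.5           # rough QALY gain per affected person
total_cost = 250_000           # total cost in dollars

total_qalys = people_reached * fraction_helped * qaly_gain_each
print(f"total QALYs: {total_qalys:.0f}")                    # 250
print(f"dollars per QALY: {total_cost / total_qalys:.0f}")  # 1000
```

The point isn't the precision of the output but that each input is explicit, so the estimate can be challenged factor by factor rather than dismissed as unmeasurable.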

Small clarification: schlep is more about people not doing boring projects than difficult ones.

Good point. Could schlep blindness be 'rational' in that it actually is unwise to pursue many projects that a person will most likely not be able to maintain motivation for over the course of the project?

I think this has more to do with personality styles than anything else, as "foxes" tend to thrive on many different projects, and "hedgehogs" tend to thrive in focusing on one big single project. More on these terms here.

As Paul Graham says, a good heuristic for getting over schlep blindness is to ask "What problem do I wish someone else would solve?" This might also work for low-status and high-variance blindness.

What solutions would I be interested in if they were anonymous? What would I work on if I had a massive safety net?

This post is the sort of thing I would expect Crux - the Crucial Considerations Institute we are forming in a few months - to output on a regular basis.

Can you tell me more about Crux? I'm curious about it. My email is gleb@intentionalinsights.org
