Comment author: Ben_Todd 22 November 2017 06:06:57AM 1 point [-]

Great post, like all the data, and would be keen to see more work like this. I've added a link to it here: https://80000hours.org/articles/cause-selection/

Some more questions I'd be interested in:

1) I'd be interested to see more on how you think it compares to other EA causes all considered, especially the most similar one, global health. I'd start by taking a short-term DALYs and economic perspective, but would also be interested in what a long-term perspective might say about the comparison (I've seen almost nothing about this).

2) I'd be interested to see firmer recommendations for people who have already decided they want to focus on mental health - what are your thoughts on the most promising interventions and career paths?

Comment author: Arepo 06 November 2017 07:36:07PM *  0 points [-]

> To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus.

On reflection I don't think I even believe this. The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on, in which case more people working on a field would indicate that it was more worth working on.

Comment author: Ben_Todd 06 November 2017 10:35:08PM 0 points [-]

Even if people pick interventions at random, the more people who enter a cause, the more the best interventions will get taken (by chance), so you still get diminishing returns even if people aren't strategically selecting.
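(To make the mechanism concrete, here is a rough simulation sketch - not from the original comment, and with arbitrary made-up numbers such as 50 interventions and a lognormal spread of cost-effectiveness - showing that the best intervention still available to a marginal entrant tends to get worse as more people enter a cause, even when everyone before them picked at random.)

```python
# Illustrative sketch only: arbitrary assumptions, not data from the discussion.
# Each cause has 50 interventions with lognormally distributed value. Prior
# entrants claim interventions uniformly at random; we track how good the best
# *unclaimed* intervention is, on average, as the number of prior entrants grows.
import numpy as np

rng = np.random.default_rng(0)
n_interventions = 50
n_trials = 10_000

for n_entrants in [0, 5, 10, 25, 40]:
    best_remaining = []
    for _ in range(n_trials):
        values = rng.lognormal(mean=0.0, sigma=1.5, size=n_interventions)
        taken = rng.choice(n_interventions, size=n_entrants, replace=False)
        remaining = np.delete(values, taken)
        best_remaining.append(remaining.max())
    print(f"{n_entrants:2d} prior entrants -> best remaining intervention "
          f"worth ~{np.mean(best_remaining):.2f} on average")
```

Under these assumptions the expected value of the best remaining intervention falls steadily as prior entrants accumulate, which is the diminishing-returns effect described above.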

Comment author: caspar42 02 November 2017 09:48:43PM 4 points [-]

A few of the points made in this piece are similar to the points I make here: https://casparoesterheld.com/2017/06/25/complications-in-evaluating-neglectedness/

For example, the linked piece also argues that returns may diminish in a variety of different ways. In particular, it also argues that the returns diminish more slowly if the problem is big and that clustered value problems only produce benefits once the whole problem is solved.

Comment author: Ben_Todd 05 November 2017 03:36:27AM 1 point [-]

That's a good piece - we thought about many of these issues when working on the framework, and I agree it's not all clearly explained on the page.

Comment author: Ben_Todd 05 November 2017 03:33:09AM 0 points [-]

These data are helpful, but they somewhat miss what we really care about.

First, we care more about growth in impact on the top causes than about the number of people involved, and so on.

I think the key story here is that Open Phil has ramped up donations from $30m to over $120m (4-fold growth in a year), and is expected to increase that several times more in the next few years, but this would be easy to miss in the presentation above.

I'd also prefer to look at even more fundamental measures of progress in the top causes, though this is hard since they'll be qualitative. For instance, I think taking AI risk mainstream as a field over 2015-2017 has been a huge success.

When it comes to measures of the number of people engaging in EA, we also have to be careful. Many in the community have shifted away from increasing the number of people involved and from media outreach, towards increasing the quality of engagement, as reflected in our recent survey: https://80000hours.org/2017/11/talent-gaps-survey-2017/

See some defence of this idea here: https://www.centreforeffectivealtruism.org/blog/the-fidelity-model-of-spreading-ideas/

This means some of these declines in web hits and so on might actually be intentional. We could easily have driven up visits to the EA Forum or the Wikipedia page if we had tried.

That said, unfortunately, we don't yet have a good measure of the quality of engagement, so it's hard to know if that strategy is working either (though I feel like we're making good progress within 80k at least).

Comment author: Ben_Todd 03 November 2017 09:31:12PM 6 points [-]

Hey Joey,

Just a very quick comment to say that I largely agree and 80,000 Hours, as you note, is already moving heavily in this direction.

For instance, we track 'impact-adjusted' plan changes, to try to proxy the fact that some plan changes are far bigger than others. Recently, we've been focused only on growing the biggest changes in our priority paths ("upgrading"), since we think this is where the biggest bottlenecks lie in the community, and we've tilted our content and coaching in that direction.

We've also started to hire cause specialist coaches, to focus on resolving the most pressing talent bottlenecks in each cause area.

Along the lines of some of your other suggestions, we also set up 80000hours.org/job-board/ and are looking into an EA internship and scholarship scheme.

I can't speak for them, but my impression is that CEA is taking similar steps. Open Phil is also doing more of this kind of work, such as with the AI Fellowship.

Comment author: Nick_Robinson 05 October 2017 10:13:44AM 4 points [-]

Hi Kyle, I think that it's worth us all putting effort into being friendly and polite on this forum, especially when we disagree with one another. I didn't find your first comment informative or polite, and just commented to explain why I down-voted it.

Comment author: Ben_Todd 05 October 2017 10:54:31AM 4 points [-]

Comment author: Ben_Todd 19 August 2017 06:50:53PM 6 points [-]

Another big area you didn't mention is Superforecasting, prediction markets and that kind of thing.

Comment author: Ben_Todd 19 August 2017 06:49:48PM 2 points [-]

> Open Phil doesn't appear to do this (they don't mention that often in their public facing docs.)

They do, though it's usually not the first thing they mention, e.g. here: https://docs.google.com/document/d/1DTl4TYaTPMAtwQTju9PZmxKhZTCh6nmi-Vh8cnSgYak/edit

Comment author: Ben_Todd 17 July 2017 04:47:34AM 18 points [-]

Hey Kaj,

I agree with a lot of these points. I just want to throw some counter-points out there for consideration. I'm not necessarily endorsing them, and don't intend them as a direct response, but thought they might be interesting. It's all very rough and quickly written.

1) Having a high/low distinction is part of what has led people to claim EAs are misleading. One version of it involves getting people interested through global poverty (or whatever causes they're already interested in), and then later trying to upsell them into high-level EA, which presumably has a major focus on GCRs, meta and so on.

It becomes particularly difficult because the leaders, who do the broad outreach, want to focus on high-level EA. It's more transparent and open to pitch high-level EA directly.

There are probably ways you could implement a division without incurring these problems, but it would need some careful thought.

2) It sometimes seems like the most innovative and valuable idea within EA is cause selection. It's what makes us different from merely "competent" do-gooding, and it often seems to be where the biggest gains in impact lie. Low-level EA seems to basically be EA minus cause selection, so by promoting it, you might lose most of the value. You might need a very big increase in scale of influence to offset this.

3) Often the best way to promote general ideas is to live them. With your example of promoting science, people often seem to think the Royal Society was important in building scientific culture in the UK. It was an elite group of scientists who simply went about the business of doing science. Early members included Newton and Boyle. The society brought like-minded people together, and helped them to be more successful, ultimately spreading the scientific mindset.

Another example is Y Combinator, which has helped to spread norms about how to run startups, encourage younger people to do them, reduce the power of VCs, and have other significant effects on the ecosystem. The partners often say they became famous and influential due to Reddit -> Dropbox -> Airbnb, so much of their general impact came from having a couple of concrete successes.

Maybe if EA wants to have more general impact on societal norms, the first thing we should focus on is simply having a huge impact - finding the "Airbnb of EA" or the "Newton of EA".

Comment author: JanBrauner 13 July 2017 06:20:44PM *  5 points [-]

With regards to "following your comparative advantage":

Key statement: While "following your comparative advantage" is beneficial as a community norm, it might be less relevant as individual advice.

Imagine two people, Ann and Ben. Ann has very good career capital to work on cause X: she studied a relevant subject, has relevant skills, maybe some promising work experience and a network. Ben has very good career capital to contribute to cause Y. Both have the aptitude to become good at the other cause as well, but it would take some time, involve some cost, and maybe not be as safe.

Now Ann thinks that cause Y is 1000 times as urgent as cause X, and for Ben it is the other way around. Both consider retraining for the cause they think is more urgent.

From a community perspective, it is reasonable to promote the norm that everyone should follow their comparative advantage. This avoids prisoner's dilemma situations and increases the total impact of the community. After all, the solution that would best satisfy both Ann's and Ben's goals would be for each to continue in their respective area of expertise. (Let's assume they could be motivated to do so.)

However, from a personal perspective, let's look at Ann's situation. In reality, of course, there will rarely be a Ben to mirror Ann, who would also be considering retraining at exactly the same time as Ann. And if there were, they would likely not know each other. So Ann is not in a position to offer anyone the specific trade that she could offer Ben, namely: "I keep contributing to cause X if you continue contributing to cause Y".

So these might be Ann's thoughts: "I really think that cause Y is much more urgent than anything I could contribute to cause X. And yes, I have already considered moral uncertainty. If I went on to work on cause X, this would not directly cause someone else to work on cause Y. I realize that it is beneficial for EA to have a norm that people should follow their comparative advantage, and the creation of such a norm would be very valuable. However, I do not see how my decision could possibly have any effect on the establishment of such a norm"

So for Ann it seems to be a prisoner’s dilemma without iteration, and she ought to defect.
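(A toy payoff calculation, with made-up numbers rather than anything from the post, may make the dilemma structure explicit. Suppose Ann is three times as productive in cause X as in cause Y, Ben the reverse, and each values a unit of progress on the other's favoured cause 1000 times more than on their own area of expertise. Then, whatever Ben does, Ann's own values say she should switch, yet both switching leaves each of them worse off by their own lights than both staying put.)

```python
# Hypothetical numbers for illustration: the productivity figures and subjective
# value weights below are assumptions, not figures from the original comment.
PRODUCTIVITY = {"Ann": {"X": 3.0, "Y": 1.0}, "Ben": {"X": 1.0, "Y": 3.0}}
VALUE_WEIGHTS = {"Ann": {"X": 1.0, "Y": 1000.0}, "Ben": {"X": 1000.0, "Y": 1.0}}

def payoffs(ann_cause, ben_cause):
    # Total progress on each cause, given where Ann and Ben choose to work.
    progress = {"X": 0.0, "Y": 0.0}
    progress[ann_cause] += PRODUCTIVITY["Ann"][ann_cause]
    progress[ben_cause] += PRODUCTIVITY["Ben"][ben_cause]
    # Each person's evaluation of that outcome under their own value weights.
    return {person: sum(VALUE_WEIGHTS[person][c] * progress[c] for c in progress)
            for person in ("Ann", "Ben")}

for ann_cause in ("X", "Y"):        # X = Ann stays, Y = Ann switches
    for ben_cause in ("Y", "X"):    # Y = Ben stays, X = Ben switches
        p = payoffs(ann_cause, ben_cause)
        print(f"Ann->{ann_cause}, Ben->{ben_cause}: "
              f"Ann scores it {p['Ann']:.0f}, Ben scores it {p['Ben']:.0f}")
```

With these numbers, both staying scores 3003 by each person's own values, both switching scores only 1001, and switching unilaterally scores 4000, so switching dominates for each individually even though both staying is better for both - exactly the one-shot prisoner's dilemma described above.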

I see one consideration in favour of Ann continuing to work towards cause X: if Ann believed that EA was going to grow a lot, EA would reach many people with a better comparative advantage for cause Y. And if EA successfully promoted said norm, those people would all work on cause Y, until Y was no longer neglected enough to be much more urgent than cause X. Whether Ann believes this is likely to happen depends strongly on her predictions of the future of EA and on the specific characteristics of causes X and Y. If she believed this would happen (soon), she might think it best for her to continue contributing to X. However, I think this consideration is fairly uncertain and I would not give it much weight in my decision process.

So it seems that it clearly makes sense (for CEA / 80,000 Hours / ...) to promote such a norm, but it makes much less sense for an individual to follow the norm, especially if said individual is not cause-agnostic or does not think that all causes are within the same 1-2 orders of magnitude of urgency.

All in all, the situation seems pretty weird. And there does not seem to be a consensus amongst EAs on how to deal with this. A real world example: I have met several trained physicians who thought that AI safety was the most urgent cause. Some retrained to do AI safety research, others continued working in health-related fields. (Of course, for each individual there were probably many other factors that played a role in their decision apart from impact, e.g. risk aversion, personal fit for AI safety work, fit with the rest of their lives, ...)

Ps: I would be really glad if you could point me to errors in my reasoning or aspects I missed, as I, too, am a physician currently considering retraining for AI safety research :D

Pps: I am new to this forum and need 5 karma to be able to post threads. So feel free to upvote.

Comment author: Ben_Todd 14 July 2017 01:03:01AM 1 point [-]

Hi there,

I think you're basically right: people should care about comparative advantage to the degree that the community is responsive to their choices and they're value-aligned with typical people in the community. If no one is going to change their career in response to your choice, then you default back to whatever looks highest-impact in general.

I have a more detailed post about this, but I conclude that people should consider all of role impact, personal fit and comparative advantage, putting more or less emphasis on comparative advantage relative to personal fit depending on certain conditions.
