Comment author: brianwang712 24 March 2018 10:24:57PM 0 points [-]

I wonder how much the "spend 1 year choosing and 4 years relentlessly pursuing a project" rule of thumb applies to having a high-impact career. Certain career paths might rely on building a lot of career capital before you can have high impact, and career capital may not be easily transferable between domains. For example, if you first decide to relentlessly pursue a career in advancing clean meat technology for four years, and then re-evaluate and decide that influencing policymakers with regard to AI safety is the highest-value thing for you to do, it's probably going to be difficult to pivot. There's a sense in which you might be "locked in" to a career after you spend enough time in it. My sense is that, for career-building in the face of uncertainty, it might be best to prioritize keeping options open (e.g., by building transferable career capital) and/or spending more time on the choosing phase.

Comment author: Joey 25 March 2018 08:35:00PM 4 points [-]

I am more skeptical about transferable career capital. I tend to see doing impressive things, even in unrelated fields, as providing a lot of career capital. E.g., a lot of EAs would hire someone who had done a successful project in another EA cause over someone who had done something less related but more transferable, such as going into consulting.

Also, generally in line with the argument above, I tend to see that doing great focused work leads to better outcomes than building generalized career capital with the idea of eventually using it in a high-impact direction. The most common outcome I see with EAs who take that path is that they spend a bunch of time saving money and building career capital, then leave the EA movement, having caused pretty minimal good in the world. Additionally, doing impressive things in the EA movement is a way to build career capital and do good at the same time.

That being said, what to factor into a re-evaluation is a somewhat different question. You might decide after one year that the best thing to do is X (e.g. get a degree), which sets you up better for your next plan re-evaluation point 4 years later, with minimal re-evaluation until you have gotten your degree.

Comment author: Michael_Wulfsohn 25 March 2018 12:52:05AM 3 points [-]

I have another possible reason why focusing on one project might be better than dividing one's time between many: there may be returns to density of time spent. That is, an hour spent on a project is more productive if you've just spent many hours on that project. For example, when I come back to a task after a few days, the details aren't as fresh in my mind. I have to spend time getting back up to speed, and I miss insights that I wouldn't otherwise have missed.

I haven't seen much evidence about this beyond my own experience. There might also be countervailing effects, like the time required for concepts to "sink in", or synergies such as insights for one project gleaned from involvement in another. It probably varies by task. My impression is that research projects feature very high returns to density of time spent.

Comment author: Joey 25 March 2018 08:32:43PM 0 points [-]

Returns to density of time seem pretty plausible to me, particularly for cognitively intensive projects. Regarding sink-in effects, I suspect many of these benefits can be achieved by working on different aspects of the same overall project, e.g. working on hiring to take a break from cost-effectiveness analysis when founding a charity.

Comment author: RandomEA 25 March 2018 01:50:13AM 2 points [-]

Why is "spreading one's time over a wide range of projects and getting large amounts of benefit from minimal amounts of time" called "the 90/10 approach"?

Comment author: Joey 25 March 2018 08:31:01PM 2 points [-]

I did not coin the term; I have heard quite a few EAs talk about the 90/10 principle and was using it in that context. The idea is that you can get 90% of the benefits of many projects with only 10% of the effort.


When to focus and when to re-evaluate

We live in a complex world with ever-changing variables. When it comes to doing the highest-impact activity, it can seem impossible to be confident in one plan over another. Additionally, new information that comes from further research or even just time passing can make plan A seem better... Read More

Why we should be doing more systematic research

One of the big differences between Effective Altruism and other communities is our use of research to inform decisions. However, not all research is created equal. Systematic research can often lead to very different conclusions than more typical research approaches. Some areas of EA have been systematic in their research... Read More
Comment author: Joey 08 March 2018 06:52:35PM 13 points [-]

Personally I am not really a fan of job postings being put on this forum. Between all the different EA organizations it would be pretty easy for every second post to be a job ad, and I think that would weaken the forum content for most users. The "Effective Altruism Job Postings" group does a pretty good job at consolidating all jobs that are EA relevant in a central space without cluttering up a space like this.

Comment author: DavidMoss 30 January 2018 06:33:33PM 3 points [-]

I didn't read the post as meaning either "scale is bad if it is the only metric that is used" or "Scale, neglectedness, solvability is only one model for prioritisation. It's useful to have multiple different models...."

When looking at scale within a scale, neglectedness, tractability framework, it's true that the other factors can offset the influence of scale: e.g. if something is large in scale but intractable, the intractability counts against the cause and at least somewhat offsets the consideration that the cause is large in scale. But this doesn't touch on the point this post makes, which is that, looking at scale itself as a consideration, the 'total scale' may be of little or no relevance to the evaluation of the cause; rather, 'scale' is only of value up to a given bottleneck and of no value beyond that. I almost never see people talking about scale in this way in the context of a scale, neglectedness, tractability framework: dividing up the total scale into tractable bits, less tractable bits, and totally intractable bits. Rather, I more typically see people assigning some points for scale, then evaluating tractability independently and assigning some points for that, and evaluating neglectedness independently and assigning some points for that.

Comment author: Joey 04 February 2018 09:08:38PM 2 points [-]

Thanks, David. Your interpretation is indeed what I was trying to get across.


How scale is often misused as a metric and how to fix it

One of the big criteria used for cause area selection is scale and importance of the issue. This has been used by 80,000 Hours and OpenPhil amongst others. This is often defined as the size and intensity of the problem. For example, if an issue affects 100,000 people deeply, that... Read More
Comment author: Joey 29 January 2018 08:24:18PM 5 points [-]

Some thoughts on a few of these. I think that EA social safety nets already exist for many people, but not in the formal way you laid out; they're based on specific connections and accomplishments. More or less each individual organization and donor has an implicit reputation system based on complex, organization- or donor-specific criteria. Some people will fit multiple positive reputation systems, which gives them more safety nets than someone who fits a few or none. The system is of course dynamic, so if you fell into a category but your reputation then lowered (say, by not doing anything that impressive over a long period of time), you could lose a safety net. Additional factors that affect how many safety nets you have in EA relate to cost: it's easier to provide a $10k safety net than a $100k safety net. You can imagine why donors/orgs would generally prefer this system to funding any EA who scores high enough on community-agreed criteria: they can directly fund/support/loan someone who does well on their own criteria.

To put this into a more practical perspective, I would expect that an EA with a strong enough reputation would be able to get support for a while doing entrepreneurship without getting insta-VCed. Likewise, the bar would be lower for a low/no-interest loan. A case could be made that the system is ineffective or in-groupy and misses people it should not. However, it's worth acknowledging that informal systems like this definitely exist, so it's more about the marginal cases who would not get supported by an informal system but would by a formal one. For folks who want to do high-risk projects but do not currently have informal connections/safety nets, it seems worth considering just building up your reputation. This can be done with pretty low-risk things like volunteering part time for an organization.

I agree with the career trajectory points and know that some, but definitely not all, EAs take this into consideration when determining salary.

Comment author: MichaelPlant 12 January 2018 12:11:13AM *  8 points [-]

I worry you've missed the most important part of the analysis. If we think about what it means for a "new cause to be accepted by the effective altruism movement", that would probably be either:

  1. It becomes a cause area touted by EA organisations like GiveWell, CEA, or GWWC. In practice, this involves convincing the leadership of those organisations. If you want to get a new cause in via this route, that's the end goal you need to achieve; writing good arguments is a means to that end.

  2. You convince individual EAs to change what they do. To a large extent, this also depends on convincing EA-org leadership, because that's who people look to for confirmation that a new cause has been vetted. It isn't necessarily stupid for individual EAs to defer to expert judgement: they might think "Oh, well if so-and-so aren't convinced about X, there's probably a reason for it".

This seems as good a time as any to re-plug the stuff I've done. I think these mostly meet your criteria, but fail in some key ways.

I first posted about mental health and happiness 18 months ago and explained why poverty is less effective than most think and mental health more effective. I think I was, at the time, lacking a particular charity recommendation (I now think Basic Needs and Strong Minds look like reasonable picks); I agree it's important that new cause suggestions have a 'shovel-ready' project.

I argued that you, whoever you are, probably don't want to donate to the Against Malaria Foundation, and explained why it's probably a mistake for EAs to focus too much on 'saving lives' at the expense of either 'improving lives' or 'saving humanity'.

Back in August I explained why drug policy reform should be taken seriously as a new cause. I agree that it lacks a shovel-ready project too, but, if anything, I think there was too much depth and rigour there. I'm still waiting for anyone to tell me where my EV calcs have gone wrong and why drug policy reform wouldn't be more cost-effective than anything in GiveWell's repertoire.

Comment author: Joey 15 January 2018 01:51:43AM 3 points [-]

So I think we agree on some things and disagree on others. I think that getting large EA organizations to adopt a cause definitely helps but is not necessary. Animal rights as a whole, for example, is not mentioned at all by GiveWell or GWWC and is listed as a 2nd-tier area by 80,000 Hours, but it is still pretty clearly endorsed by EA as a whole. If by EA orgs you mean EA orgs of any size, I do think that most cause areas accepted by the EA movement will get organizations started in them in time. I think that causes like wild animal suffering and positive psychology are decent examples of causes that have gotten some traction without major pre-existing organizations endorsing them. It might also come down to disagreements about definitions of "in EA".

I almost put your blogs into this post as a positive example of what I wish people would do, but I wanted to keep the post short. In general, I think your efforts on mental health have updated more than a few EAs in positive directions towards it, including myself. There has been some related external content and research on this topic in part because of your posts, and I would put a nontrivial chance on some EAs in the next 1-5 years focusing exclusively on this cause area and starting something in it. In general, I would expect adoption of new causes to be fairly slow, starting with small numbers of people and maybe one organization before expanding to the standard go-to EA list.

I think if I were to guess what is holding back mental health / positive psych as a cause area, it would be the lack of a really strong concrete charity to donate to. By strong charity, I mean strong cost-effectiveness, but also a focus on a narrow set of interventions, a decent evidence base/track record, strong M&E, and having been decently investigated by an external EA party (it would not have to be an org; it could be an individual). Something like Strong Minds might be a good fit for this.
