
The ideas of high-risk, high-reward projects, value in the tails, etc. are quite common EA positions now. People are usually reminded that they have a low probability of success and should expect to fail most of the time. However, most people I know or have heard of who started ambitious EA projects are doing quite well. Examples would be SBF, Anthropic, Alvea, and many more.

My question, therefore, is: Is the risk of failure lower than we expected, or do I just not know the failures? Do I just know the selection of people who succeeded? Is it too early to tell if a project truly succeeded? If so, what are concrete examples of EAs or EA orgs not meeting high expectations despite trying really hard? Is it possible that we just underestimate how successful someone with an EA mindset and the right support can be when they try really hard?

Some past examples that come to mind. Kudos to all of the people mentioned for trying ambitious things and writing up the retrospectives:

  1. Not strictly speaking "EA", but an early effort from folks in the rationality community started an evidence-based medicine organization called MetaMed

Zvi Mowshowitz's post-mortem: https://thezvi.wordpress.com/2015/06/30/the-thing-and-the-symbolic-representation-of-the-thing/

Sarah Constantin's post-mortem: https://docs.google.com/document/d/1HzZd3jsG9YMU4DqHc62mMqKWtRer_KqFpiaeN-Q1rlI/edit

  2. Michael Plant has a post-mortem of his mental health app, Hippo

  3. Looking around, I also found this list:

Some other posts are the Good Technology Project's postmortem and a postmortem of a mental health app by Michael Plant. Organisations discuss their learnings in retrospectives, like Fish Welfare Initiative's, or in posts announcing decisions to shut down, like Students for High Impact Charities'. In the rationalist community, there was the Arbital postmortem. You can see more examples under the Forum's postmortems and retrospectives tag, and examples from the LessWrong community in their analogous tag.

I have failed to do any meaningful work on recommender systems alignment. We launched an association; YouTube acknowledged the problem with disinformation when we talked to them privately (about COVID disinformation coming from Russia, for example), but said they would not do anything, with or without us. We worked alone; I was the single developer.

I burned out to the point of being angry and alienating people around me (I understand what Timnit Gebru has gone through, because Russia, my home country, is an aggressor country, and there is a war in Tigray, which is related to Ethiopia, her home country). I sent many angry/confusing emails that made perfect sense to me at the time... I went through homelessness and unemployment after internships at CHAI Berkeley and Google and a degree from a prestigious European university. I felt really bad for not being able to explain the importance of the problem and stop Putin before it was too late... Our colleagues' papers on the topic were silenced by their employers.

Now I'm slowly recovering and feel I want to write about all that: some sort of guide / personal experience on aligning real systems and organizations, and how real change comes really, really hard.

Thank you for sharing, this seems like an incredibly important and valuable effort and story. 

Another issue is unemployment and homelessness. 

This outcome doesn't seem acceptable for people with the motivations, efforts and experiences described in your account.

Thanks for sharing.
I think writing up some of these experiences could be really valuable, both for your own closure and for others to learn from. I can understand, though, that this is a very tough ask in your current position.

I've failed a few times. My social instincts tried to get me not to post this comment, in case it makes it more likely that I fail again, and failing hurts. I suspect there's really strong survivorship bias here.

In 2017 I quit my job and spent a significant amount of time self-studying ML, roughly following a curriculum that Dario Amodei laid out in an 80k podcast. I ran this plan past a few different people, including in an 80k career advising session, but after a year I didn't get a job offer from any of the AI safety orgs I'd applied to (Ought, OpenAI, maybe a couple of others) and was quite burned out and demotivated. I didn't even feel up to interviewing for an ML-focused job. Instead I went back to web development (with a startup that did suggest I'd eventually be able to do some ML work, but that job ultimately wasn't a great fit, and I moved on to my current role... as a senior web dev).

I think there are a bunch of lessons I learned from this exercise, but overall I consider it one of my failures. 

Would you be open to sharing the lessons you learned? I just started doing something similar - quit my job a few weeks ago, planning to spend a year exploring career options more oriented towards doing good, with a lean into data but nothing very specific - so I'd love to know what I can learn from experiences like yours.

Randomized, Controlled
Apply for funding ASAP. Don't burn too much of your savings. Read about Financial Independence (google FIRE / Financial Independence Retire Early); I heard somewhere that you have about 78,657 hours in your career, and if you have a wealth engine that can cover your basic living expenses, you can devote a much larger fraction of that career to risky EA moves. Even if you're good at self-study, you probably need a social cohort for what you're doing, especially if your goal is vague. Set yourself a hard deadline to return to the labor market (I'd suggest less than four to six months, definitely less than 12) if you haven't made some substantive progress that someone is excited about. DO STUFF THAT ISN'T JUST EA OR TECHNICAL: you've just opened up a massive amount of slack for yourself, so take advantage of it to explore some other aspects of life. I started dancing contact improv during the year I was off, and it was BY FAR the most positive thing I've ever done for my mental and physical health and ability to access joy. Take all this with rock salt: these are just my experiences, and I did this in a very different EA scene and a very different economy.
Mo Putera
Thanks for the considered reply, I appreciate it. I've known about FIRE for a while; it's the main reason my savings and investment rate is much higher than most peers' in my income band. The advice to do non-EA/technical stuff is very well taken; it's an area of my life I'm admittedly lacking personal development in. Also, given that I live in a non-EA-hub middle-income country far away from most of the action, it's probably best to assume the default outcome of my career bet is that it won't pan out and I'll have to return to the labor market, so I do appreciate the hard deadline advice too, even if I don't really like contemplating it (my corporate job pays well and is relatively comfortable; it's just that the stuff Gavin Leech talks about here resonated a bit too much).
Randomized, Controlled
Yeah; in particular, maybe we're in a short-timelines world and savings rates are [much?] less important. Personally, I'm stressing about savings rate less at the moment, both because my life has shifted in significant ways that just make things more expensive, and because I'm taking short timelines somewhat more seriously. Maybe one way to summarize the update is: still avoid doing things that I would generally disapprove of in longer-timeline worlds, but also try to enjoy life a little more. So I mostly still don't eat junk food, but I am going on dates and dancing and going to festivals and trying to channel joy, while living in an expensive western city and thinking a lot about <15-year timelines. It's occurred to me that maybe I should start rolling a die at some low probability (say 2%) and let that decide for me when I should actually have dessert.

And presumably, if no one has failed, then people aren't trying things that are ambitious enough. Kat Woods, I think, can speak to some incubated charities from Charity Entrepreneurship that didn't take off.

CE has been incubating around 5 charities per year (with plans to scale in the future); so far the success rate is as follows:

  • 2/5 estimated to reach or exceed the cost-effectiveness of the strongest charities in their fields
  • 2/5 make progress, but remain small-scale or have unclear cost-effectiveness
  • 1/5 shut down in their first 24 months without having had a significant impact

I spoke about it briefly in this post and would love to find the time to elaborate more. 


 


I can say that I failed at what I would consider a high-risk, high-reward project. I was a member of a Charity Entrepreneurship cohort and worked on a nonprofit idea focused on advocacy for a Pigouvian tax, but unfortunately couldn't really get things off the ground for a few reasons. That said, I still highly recommend trying something ambitious. That failure taught me a lot and got me into the policy realm, which helped pave the way for my current work doing policy advisory in Congress, which I think is relatively high-impact.

I think some of the worst failures are mediocre projects that go sort-of okay and therefore continue to eat up talent for a much longer time than needed; cases where ambitious projects fail to "fail fast". It takes a lot of judgment ability and self-honesty to tell that it's a failure relative to what one could have worked on otherwise.

One example is Raising for Effective Giving, a poker fundraising project that I helped found and run. It showed a lot of promise in terms of dollars raised per dollar spent over the years it was operating, and actually raised $25m for EA charities. But it looks a lot less high-impact once you draw comparisons to GWWC and Longview, or once you account for the small market size of the poker industry, the lack of scalability, the expected future funding inflows into EA, and the compensation from top earning-to-give opportunities. $25 million is really not much compared to the billions others raised through billionaire fundraising and entrepreneurship.

I personally failed to admit to myself that the project was showing mediocre rather than amazing results, and it was only my successor (Stefan) who discontinued the project, which in hindsight seems like the correct judgment call.

I think it is fair to call EA Ventures, the Pareto Fellowship, and EA Grants failed projects. I’ve written evaluations of each of them here. I also discuss the EA Librarian project (though I don’t conduct a full evaluation) which also seems to have failed. My post also discusses numerous instances where EA projects that didn't fail still had significant and repeated problems. As such, I think it's relevant to your questions of: "what are concrete examples of EAs or EA orgs not meeting high expectations despite trying really hard? Is it possible that we just underestimate how successful someone with an EA mindset and the right support can be when they try really hard?"


I think it would be great to have some directory of attempted but failed projects. Often I've thought "Oh I think X is a cool idea, but I bet someone more qualified has already tried it, and if it doesn't exist publicly then it must have failed" but I don't think this is often true (also see this shortform about the failure of the efficient market hypothesis for EA projects). Having a list of attempted but shut down (for whatever reason) projects might encourage people to start more projects, as we can really see how little of the idea space has been explored in practice.

 

There are a few helpful write-ups (e.g. shutting down the longtermist incubator), but in addition to detailed post-mortems, I would be keen to see a low-effort directory (Airtable or even Google Sheets?) of attempted projects: who tried, contact details (with permission), why it stopped, etc. If people are interested in this, I can make a preliminary spreadsheet that we can start populating, but other recommendations are of course welcome.

I would love to see this!

Beyond asking about projects in a vague, general sense, it could also be interesting to compare the probabilities of success grantmakers in EA assign to their grantees' projects, to the fraction of them that actually succeed.
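To make that comparison concrete, here is a minimal, purely illustrative sketch of what such a calibration check could look like. Everything here is invented for the example: the `calibration_table` function, the bin width, and the grant data are all hypothetical, not real grantmaker forecasts.

```python
from collections import defaultdict

def calibration_table(grants, bin_width=0.25):
    """grants: list of (assigned success probability, succeeded?) pairs.
    Returns {(bin_lo, bin_hi): (mean forecast, observed success rate, n)}."""
    bins = defaultdict(list)
    for p, succeeded in grants:
        # Bucket each grant by its forecast probability.
        bins[int(p / bin_width)].append((p, succeeded))
    table = {}
    for idx, entries in sorted(bins.items()):
        mean_forecast = sum(p for p, _ in entries) / len(entries)
        observed = sum(1 for _, s in entries if s) / len(entries)
        table[(idx * bin_width, (idx + 1) * bin_width)] = (
            mean_forecast, observed, len(entries))
    return table

# Made-up example data: (assigned probability of success, did it succeed?)
grants = [
    (0.1, False), (0.1, False), (0.15, True),
    (0.3, False), (0.3, True), (0.35, False),
    (0.6, True), (0.6, True), (0.65, False),
]

for (lo, hi), (forecast, observed, n) in calibration_table(grants).items():
    print(f"{lo:.2f}-{hi:.2f}: mean forecast {forecast:.2f}, "
          f"observed {observed:.2f} (n={n})")
```

With enough real grant outcome data, a persistent gap between the forecast and observed columns within a bin would suggest systematic over- or under-confidence among grantmakers.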
