Comment author: Joey 19 May 2018 06:01:54PM 3 points

I have written about this topic before.

Personally, what I would find most useful is an up-to-date spreadsheet, sortable by how much each company matches donations (this seems to be the most significant thing companies do on the charity front), so that when I am talking to a job seeker I can send it to them and they can easily see which companies offer, say, $10k+ of matching. You can see from my 2013 post that quite a few offer that or more.

On the broader note of building an evidence-based fundraising wiki, is the plan for it to be publicly available and widely shared, or is it aimed more at the EA community?

Comment author: Joey 14 May 2018 05:20:20AM 5 points

One way to frame this question is: how do we get the best predictions for the least effort, with different strategies offering different trade-offs between effort and accuracy? A strategy is dominated if another strategy requires less effort and gives better accuracy. I think a pretty good case can be made that “teams of forecasters working together with their results extremized” clearly requires less effort than prediction markets while being possibly more accurate, or at least in the same ballpark. If that is the case, the argument for setting up and using prediction markets is greatly weakened. If someone did systematic research into which prediction methods deliver the most value for the least resource consumption, I suspect prediction markets would not score at the top of many overall rankings, given their high cost. Some evidence for that high resource cost: EAs, although quite excited, driven, and intelligent, cannot get a prediction market going with more than a few bets on a given question.
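The dominance criterion above is just Pareto dominance over (effort, accuracy) pairs. A minimal sketch, with purely illustrative numbers (none of these scores come from any study):

```python
# Hypothetical (effort, accuracy) scores for three forecasting strategies.
# The numbers are made up for illustration only.
strategies = {
    "prediction market":      {"effort": 9, "accuracy": 0.80},
    "extremized team":        {"effort": 4, "accuracy": 0.82},
    "single expert forecast": {"effort": 2, "accuracy": 0.70},
}

def dominated(name):
    """A strategy is dominated if some other strategy is no worse on both
    dimensions (less-or-equal effort, greater-or-equal accuracy) and
    strictly better on at least one."""
    s = strategies[name]
    return any(
        o["effort"] <= s["effort"] and o["accuracy"] >= s["accuracy"]
        and (o["effort"] < s["effort"] or o["accuracy"] > s["accuracy"])
        for other, o in strategies.items() if other != name
    )

# With these illustrative numbers, the prediction market is dominated by
# the extremized team (less effort, higher accuracy); the other two are not.
print({name: dominated(name) for name in strategies})
```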

Comment author: MichaelPlant 13 May 2018 11:05:26PM 13 points

I appreciate the write up and think founding charities could be a really effective thing to do.

I do wonder if this might be an overly rosy picture, for a few reasons.

  1. Are there any stories of EAs failing to start charities? If there aren't, that would be a bit strange and I'd want to know why there were no failures. If there are, what happened and why didn't they work? I'm a bit worried about a survivorship effect making it falsely look like starting charities is easy. (On a somewhat related note, your post may prompt me to finally write up something about my own unsuccessful attempt to start a startup.)

  2. Some of the charities you mention are offshoots/sister charities of each other: GWWC and 80k, Charity Science Health and Fortify Health. This suggests to me it might be easier to found a second charity than a first one. OPP and GiveWell also fit this mold.

  3. Including AMF is, in some sense, a bit odd, because it wasn't (I gather) founded with the intention of being the most effective charity. I say it's odd because, if it hadn't existed, the EA world would have found another charity that it deemed the most effective. Unless AMF thought they would be the most effective, they sort of 'got lucky' in that regard.

Comment author: Joey 14 May 2018 05:19:01AM 3 points

I would be keen to hear your story, as I am working to develop better models of what makes projects succeed (particularly nonprofits, but I think all data can be helpful).

1) I think this is fair. I have another post in the works along these lines. The very short version: a lot of the failures are small or early-stage projects rather than full-scale charities. I think that is a problem/concern in its own right, and I think a pretty good case can be made that established charities should be shut down and considered failures more often.

2) I do think a case can be made that second charities are easier to start than first ones (although I would count Fortify Health as quite distinct from CSH, as my involvement was quite modest in terms of hours). I still think, however, that there are lots of examples of first-time successes.

3) My understanding of AMF, from talking to them, is that when making the decision that eventually led to choosing bednets, Rob M required that the cause 1) be a really big problem, 2) really need help, and 3) might be fixable, along with some other connected criteria, like not having tons of other people working on it. From my understanding, quite a few different interventions were considered (e.g. TB, freshwater, landmines). I do not get the sense it involved GiveWell-style shallow reports, but the concept of doing more good was definitely a big part of the decision making.

Comment author: Denise_Melchin 13 May 2018 08:36:41AM 8 points

I'm arguing against prediction markets being the best alternative in many situations contemplated by EAs, which is something I have heard said or implied by a lot of EAs in conversations I've had with them. Most notably, I think a lot of EAs are unaware of the arguments I make in the post and I wanted to have them written up for future reference.

Comment author: Joey 13 May 2018 09:38:26PM 2 points

I have had a lot of EAs say this to me in person as well.

Comment author: JoshP 07 May 2018 01:49:16PM 0 points

Good article in lots of ways. I'm perhaps slightly put off by the sheer amount of info here: I don't feel like I can take all of this in easily, given my own laziness and the number of goals I already feel like I prioritise. Not sure there's an easy solution to that (maybe some sort of top two or three suggestions?), but it feels like a bit of an information overload. Thanks for writing it though Darius, I enjoyed it :)

Comment author: Joey 07 May 2018 03:18:04PM 2 points

Personally, if I were to simplify this post down to its top two pieces of advice, they would be: 1) focus on doing good now, and 2) surround yourself with people who will keep encouraging you to do good long term.

Comment author: ThomasSittler 06 May 2018 05:20:21PM * 7 points

Thanks for the post. I'm sceptical of lock-in (or, more Homerically, tie-yourself-to-the-mast) strategies. It seems strange to override what your future self wants to do, if you expect your future self to be in an equally good epistemic position. If anything, future you is better informed and wiser...

I know you said your post just aims to provide ideas and tools for how you can avoid value drift if you want to do so. But even so, in the spirit of compromise between your time-slices, solutions that destroy less option value are preferable.

Comment author: Joey 06 May 2018 06:11:43PM 3 points

Say a person could check a box and commit to being vegan for the rest of their life. Do you think that would be an ethical/good thing for someone to do, given what we know about average recidivism in vegans?

Comment author: Denise_Melchin 06 May 2018 09:15:30AM * 16 points

I’m curious what kind of experiences people in the dedicated group actually had that put them off, if you could elaborate on that.

I share the impression that dedication is less encouraged in EA these days than five years ago. I’m also personally very disappointed by that, since high dedication felt like a major asset I could bring to EA. Now I feel more like it doesn’t matter, which is discouraging.

My guess is that this is because high dedication is a trait of youth movements, and the age of the median and, perhaps more importantly, the most influential EAs has gone up in the meantime. EA has lost its youth-movement-y vibe.

I’m also interested in whether the other movements you’re comparing EA to are youth movements.

Comment author: Joey 06 May 2018 06:11:33PM 2 points

"I’m also personally very disappointed by that since high dedication felt like a major asset I could bring to EA. Now I feel more like it doesn’t matter which is discouraging." It’s still very helpful for other dedicated people to know people like you :)

The main movement I am comparing EA to is its younger self, but I think the AR movement also came to mind a lot while writing this post.

I agree that age seems to play a pretty noticeable role, with older movements being wiser but less energetic. There might just be some biological mechanism at play, but I also think that in many movements people do "what they can get away with". If I can work 30 hours and my organization is still successful, it’s less motivating to work 60 than if those 30 extra hours would be the make or break. Wisdom gives me more room to slack on energy.

Comment author: oagr 06 May 2018 06:17:55AM 1 point

Another point: living in the Bay is pretty expensive and is becoming more so. I don't see any solutions to this on the horizon. Having a bunch of people all live and work here seems pretty efficient, at least until internet communication becomes substantially better.

Rent, taxes, health expenses (gym memberships, healthy food), etc., can add up pretty quickly.

Comment author: Joey 06 May 2018 06:10:43PM 1 point

I think living in an EA city is one of the strongest cases for spending more money in terms of increasing impact per $ spent. I think it’s the more marginal stuff I am generally careful about (e.g. eating at restaurants).

Comment author: oagr 06 May 2018 01:52:48AM * 4 points

I think I'm mostly in agreement here. This thinking can lead to cult-ish groups, but my guess is that deliberate decision-making could lead to very productive and safe outcomes.

I also think it's nice to have groups of people to aspire to. This is obviously a fictional example, but I think the fact that the Jedi of Star Wars lived such strict routines made them more admirable, as opposed to being seen as "holier-than-thou" individuals whom others would shame for setting too good an example.

One point I'd push back against is the line: "Salaries at EA orgs have gone up significantly over time as well as more frequent retreats and other staff benefits." My impression is that there is a lot of money out there (for groups that OpenPhil is comfortable funding), so the cost of paying these employees is relatively low. It seems to me that making more money should help your productivity and allow you to be more intense. I would focus a lot more on total output per person than on financial stinginess.

My model of a very intense person is similar to one of the intense entrepreneurs here; hopefully, they have a lot of resources available to them, but they do a lot with those resources.

Comment author: Joey 06 May 2018 05:48:12AM 1 point

So I am not sure the trade-off between total output per person and financial stinginess is so clear. To stick with the Open Phil example, what matters is not just the maximum they are willing to fund, but the counterfactual of their last donated dollar. For example, if one AR charity takes, say, 2x what it could run on (prioritizing output per person over frugality), you have to factor in the counterfactual of 50% of the donation going to the last charity that Lewis ends up funding with Open Phil (or maybe the last in that given year). In either case the counterfactuals are definitely not 0. For example, say I personally would be 25% more effective if I were paid $100k vs. $50k (2x salary). I would then have to assume my project is 4x more cost-effective than the counterfactual project Lewis would otherwise donate to. This could be true for one AR charity vs. another, but I would say it's far from obvious. I would also be quite surprised if the personal gains from increased salaries are that high in most cases, but I would be super keen on more data on this.
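A quick sanity check of the 4x figure, using the numbers from the comment (the per-dollar framing below is my own reconstruction of the argument, not Joey's):

```python
# Salary scenario from the comment: doubling pay from $50k to $100k
# is assumed to make the person 25% more effective.
low_salary, high_salary = 50_000, 100_000
boost = 0.25

# Measure impact in units of V = your project's impact at the low salary.
impact_low, impact_high = 1.0, 1.0 + boost

# The extra $50k would otherwise fund Open Phil's marginal charity at
# r impact per dollar. The raise is worth it iff:
#     impact_high - impact_low > (high_salary - low_salary) * r
# i.e. 0.25 * V > 50_000 * r, i.e. V / 50_000 > 4 * r.
# So your project's cost-effectiveness at the low salary must exceed
# 4x the counterfactual charity's.
required_multiple = (high_salary - low_salary) / ((impact_high - impact_low) * low_salary)
print(required_multiple)  # 4.0
```

With these assumptions the arithmetic recovers the "x4 better" threshold stated in the comment.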

Comment author: RandomEA 23 April 2018 02:19:34PM 6 points

What percent of those who drifted from the 50% category ended up in the 10% category instead of out of the movement entirely?

And would the graph of the number of people remaining in the 50% category over time look roughly linear or was drifting concentrated at the beginning or near the end? What about for the 10% category?

Comment author: Joey 26 April 2018 05:08:25PM 3 points

I did not break the data down that way when I collected it, but a quick look suggests ~75% moved from the 50% category to the 10% category, and drifting was mildly concentrated at the beginning.
