Comment author: Halstead 27 May 2018 09:08:17AM 0 points

I don't think the reasoning here is correct. It is possible, and indeed normal, for the sum of the individual actors' counterfactual impacts to exceed the counterfactual impact of all the actors taken together. For example, if two donors are each necessary for a project that creates $100 of value, each donor's counterfactual impact is the full $100, so their impacts sum to $200 for a $100 project. I will write something up on this.

Comment author: Peter_Hurford 27 May 2018 10:15:09AM 5 points

I'd be curious what your reasoning is. My understanding is that the technical solution here is to calculate Shapley values.
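For concreteness, here's a minimal sketch of what that calculation could look like -- a toy Python implementation of Shapley values, assuming a stylized value function where a $100 project happens only if both funders participate (the funder names and numbers are purely illustrative):

    from itertools import permutations

    def shapley_values(players, value):
        # Average each player's marginal contribution over all join orders.
        totals = {p: 0.0 for p in players}
        orders = list(permutations(players))
        for order in orders:
            coalition = set()
            for p in order:
                before = value(frozenset(coalition))
                coalition.add(p)
                totals[p] += value(frozenset(coalition)) - before
        return {p: t / len(orders) for p, t in totals.items()}

    # Toy case: the project (worth $100) happens only if both funders join.
    # Each funder's naive counterfactual impact is $100, summing to $200,
    # but the Shapley values sum to the project's actual $100 value.
    value = lambda c: 100.0 if c == frozenset({"A", "B"}) else 0.0
    print(shapley_values(["A", "B"], value))  # {'A': 50.0, 'B': 50.0}

Note that the Shapley attributions sum to the total value by construction, which is exactly the property the naive counterfactual sum lacks.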

Comment author: MichaelPlant 25 May 2018 12:26:27PM 0 points

Can you elaborate? I'm not really sure what you have in mind. 'Quick self-published book' sounds like an oxymoron. I'd like to publish a book on happiness at some point, but would hope for it not to be self-published.

Comment author: Peter_Hurford 25 May 2018 02:55:21PM 2 points

Basically, you can sell an e-book for a small amount, like $0-5. The e-book is way more detailed than a typical blog post, but way less detailed than a full book you'd buy for ~$30 at your local bookstore. You earn a small amount of money and can also collect email addresses to build a list for further marketing. This seems to be a pretty common strategy among online entrepreneurs.

Comment author: MichaelPlant 24 May 2018 10:18:05PM 0 points

Ah, I wondered if anyone was going to spot this Easter egg! Yeah, the list isn't public. This might sound outrageously petty, but having spent so long compiling it, I feel strange about giving it away or making it freely available for other people to copy.

I've been trying to work out what to do with it and the rest of the algorithm I designed. If I weren't so unenthused about startups, I'd want to build something that just randomly gives you one of the suggestions (the suggestions are just text), as that seems to be the easiest version to build -- see the sketch below. Maybe that will happen at some point. Honestly, I'm not sure what to do.
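A minimal sketch of that easiest version, assuming the suggestions live in a plain-text file with one suggestion per line (the filename here is hypothetical):

    import random

    # Hypothetical file: one suggestion per line, compiled from the private list.
    with open("suggestions.txt") as f:
        suggestions = [line.strip() for line in f if line.strip()]

    # Serve one suggestion uniformly at random.
    print(random.choice(suggestions))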

Comment author: Peter_Hurford 25 May 2018 01:42:19AM 0 points

Maybe turn it into a quick self-published book or something?

Comment author: Peter_Hurford 23 May 2018 10:27:28PM 8 points

This pattern of slow decay with punctuated bursts of enthusiasm is pretty typical of how every failed project I've been on has gone. Thanks for a very useful case study.

Comment author: JoshP 08 May 2018 09:56:43PM 0 points

Haven't read all of it, but I believe there's an error in the first line, which says this is the "second of three parts"; I think it means third. Sorry my engagement isn't more interesting :P

Comment author: Peter_Hurford 08 May 2018 10:46:33PM 2 points

Fixed. Thanks!


Cost-Effectiveness of Vaccines: Appendices and Endnotes

This essay was jointly written by Peter Hurford and Marcus A. Davis. Note that because of technical length restrictions on the EA Forum, this essay is the third of three parts: Part 1, Part 2, and Part 3. To see all three parts in one part, you can view...

Cost-Effectiveness of Vaccines: Exploring Model Uncertainty and Takeaways

This essay was jointly written by Peter Hurford and Marcus A. Davis. Note that because of technical length restrictions on the EA Forum, this essay is the second of three parts: Part 1, Part 2, and Part 3. To see all three parts in one part, you can ...

What is the cost-effectiveness of researching vaccines?

This essay was jointly written by Peter Hurford and Marcus A. Davis. Note that because of technical length restrictions on the EA Forum, this essay is broken up into three parts: Part 1, Part 2, and Part 3. To see all three parts in one part, you can view...
Comment author: Peter_Hurford 04 May 2018 01:40:32AM 21 points

I find it so interesting that people on the EA Facebook page have been much more critical of the content than people here on the EA Forum -- here it's all just typos and formatting issues.

I'll admit that I was one of the people who saw this here on the EA Forum first and was disappointed, but chose not to say anything out of a desire to not rock the boat. But now that I see others are concerned, I will echo my concerns too and magnify them here -- I don't feel like this handbook represents EA as I understand it.

By page count, AI is 45.7% of the entire causes section. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn't count toward that page count), more than half of the article was dedicated to why we might not choose that cause area, with much of that space also focused on the far future of humanity. I'd find it hard for anyone to read this and not take away that the community consensus is that AI risk is clearly the most important thing to focus on.

I feel like I get it. I recognize that CEA and 80K have a right to have strong opinions about cause prioritization. I also recognize that they've worked hard to become the strong central pillar of EA that they are, and that a lot of the people CEA and 80K are familiar with agree with them. But now I can't help but feel that CEA is using its position of relative strength to essentially dominate the conversation and claim its view as the community consensus.

I agree the definition of "EA" here is itself the area of concern. It's very easy for any of us to define "EA" as we see it and then make claims about the preferences of the community, but that would be very clearly circular. I'd be tempted to defer to the EA Survey: AI was the top cause for only 16% of respondents. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority for 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare. But naturally I'd be biased toward using these results, and I'm definitely sympathetic to the idea that EA should be considered more narrowly, or that we should weight the opinions of people working on it full-time more heavily. So I'm unsure. Even my opinions here are circular, by my own admission.

But I think if we're going to claim in a community space to speak for the community, we should be more thoughtful about whose opinions we're including and excluding. It seems pretty inexpensive to re-weight the handbook so that it emphasizes AI risk just as much without being as clearly jarring about it (e.g., dedicating three chapters to it instead of one, or slanting the "reasons not to prioritize this cause" sections so clearly toward AI risk).

Based on this, and the general sentiment, I'd echo Scott Weathers' comment on the Facebook group that it's pretty disingenuous to represent CEA's views as the views of the entire community writ large, however you want to define that. I agree I would have preferred it called "CEA's Guide to Effective Altruism" or something similar.

Comment author: Gregory_Lewis 02 May 2018 06:10:23PM 4 points

Thanks for the even-handed explication of an interesting idea.

I appreciate the example you gave was meant more as illustration than proposal. I nonetheless wonder whether further examination of the underlying problem might lead to ideas more tightly drawn to the limitations you propose.

You note this set of challenges:

  1. Open Phil targets larger grantees
  2. EA funds/grants have limited evaluation capacity
  3. Peripheral EAs tend to channel funding to more central groups
  4. Core groups may have trouble evaluating people, which is often an important factor in whether to fund projects.

The result is that a good person (but one not known to the right people) with a good small idea is nonetheless left out in the cold.

I'm less sure about #2 - or rather, whether it is the key limitation. Max Dalton wrote on one of the linked FB threads:

In the first round of EA Grants, we were somewhat limited by staff time and funding, but we were also limited by the number of projects we were excited about funding. For instance, time constraints were not the main limiting factor on the percentage of people we interviewed. We are currently hiring for a part-time grants evaluator to help us to run EA Grants this year[...]

FWIW (and non-resiliently), I don't look around and see lots of promising but funding-starved projects. More relevantly, I don't review recent history and find lots of cases of projects rejected by major funders, then supported by more peripheral funders, which have gone on to do really exciting things.

If I'm wrong about that, then the idea here (in essence, crowd-sourcing evaluation to respected people in the community) could help. Yet it doesn't seem to address #3 or #4.

If most of the money (even money from the community) ends up going through the "core" funnel, then a competing approach would be to advocate that these groups change their strategy, rather than to provide a parallel route and hope funders will come.

More importantly, if funders generally want to "find good people", crowd-sourced project evaluation only helps so much. For people more on the periphery of the community, this uncertainty from funders will remain even if the anonymised feedback on the project is very positive.

Per Michael, I'm not sure what this idea has over (say) posting a "pitch" on this forum, doing a Kickstarter, etc.

Comment author: Peter_Hurford 03 May 2018 04:31:46AM 0 points

FWIW (and non-resiliently), I don't look around and see lots of promising but funding starved projects.

I'd be curious to see the reject list for EA Grants.

I think EA Grants is a great idea for essentially crowdsourcing projects, but it would be nice to have more transparency around how the funding decisions are made, as well as maybe the opportunity for people with different approaches to see and fund rejected grants.
