Comment author: EricHerboso  (EA Profile) 13 January 2017 07:30:50PM *  6 points [-]

I agree: it was indeed reasonable for people to read our estimates the way they did. But when I said that we don't want others to "get the wrong idea", I wasn't claiming that the readers were at fault. I was claiming that the ACE communications staff was at fault.

Internally, the ACE research team was fairly clear about what we thought about leafleting in 2014. But the communications staff (and, in particular, I) failed to adequately get across these concerns at the time.

Later, in 2015 and 2016, I feel that whenever an issue like leafleting came up publicly, ACE was good about clearly expressing our reservations. But we neglected to update the older 2014 page with the same kind of language that we now use when talking about these things. We are now doing what we can to remedy this, first by including a disclaimer at the top of the older leafleting pages, and second by planning a full update of the leafleting intervention page in the near future.

Regarding your concern about cost-effectiveness estimates, I do want to say that our research team will be making such calculations public on our Guesstimate page as time permits. But for the time being, we had to take down our internal impact calculator because the way we used it internally did not match the ways others (like Slate Star Codex) were using it. We were trying to err on the side of openness by keeping it public for as long as we did, but in retrospect there just wasn't a good way for others to use the tool the way we used it internally. Thankfully, the Guesstimate platform includes upper and lower bounds directly in the presented data, so we feel it will be much more appropriate for us to share with the public.

You said "I think the error was in the estimate rather than in expectation management" because you felt the estimate itself wasn't good. I hope this makes it clearer that we feel the way we were internally using upper and lower bounds was good; it's just that the way we were talking about these calculations was not.

Internally, when we look at and compare animal charities, we continue to use cost effectiveness estimates as detailed on our evaluation criteria page. We intend to publicly display these kinds of calculations on Guesstimate in the future.

As you've said, the lesson should not be for people to trust things others say less in general. I completely agree with this sentiment. Instead, when it comes to us, the lessons we're taking are: (1) communications staff needs to better explain our current stance on existing pages, (2) communications staff should better understand that readers may draw conclusions solely from older pages, without reading our more current thinking on more recently published pages, and (3) research staff should be more discriminating about which types of internal tools are appropriate for public use. There may also be further lessons to learn from this as ACE staff continues to discuss these issues internally. But, for now, this is our current thinking.

Comment author: Telofy  (EA Profile) 14 January 2017 11:09:53AM 4 points [-]

Fwiw, I’ve been following ACE closely for the past few years, and I always felt like I was the one taking cost-effectiveness estimates too literally, while ACE was continually and tirelessly imploring me not to.

Comment author: Fluttershy 13 January 2017 05:43:05PM 3 points [-]

Thank you! I really admired how compassionate your tone was throughout all of your comments on Sarah's original post, even when I felt that you were under attack. That was really cool. <3

I'm from Berkeley, so the community here is big enough that different people have definitely had different experiences than me. :)

Comment author: Telofy  (EA Profile) 13 January 2017 07:25:55PM 1 point [-]

Oh, thank you! <3 I’m trying my best.

Oh yeah, the Berkeley community must be huge, I imagine. (Just judging by how often I hear about it and from DxE’s interest in the place.) I hope the mourning over Derek Parfit has also reminded people in your circles of the hitchhiker analogy and two-level utilitarianism. (Actually, I’m having a hard time finding out whether Parfit came up with it or whether Eliezer just named it for him on a whim. ^^)

Comment author: Fluttershy 12 January 2017 04:24:29AM 8 points [-]

I should add that I'm grateful for the many EAs who don't engage in dishonest behavior, and that I'm equally grateful for the EAs who used to be more dishonest, and later decided that honesty was more important (either instrumentally, or for its own sake) to their system of ethics than they'd previously thought. My insecurity seems to have sadly dulled my warmth in my above comment, and I want to be better than that.

Comment author: Telofy  (EA Profile) 13 January 2017 09:23:58AM 1 point [-]

Thanks. May I ask what your geographic locus is? This is indeed something that I haven’t encountered here in Berlin or online. (The only more recent example that comes to mind was something like “I considered donating to Sci-Hub but then didn’t,” which seems quite innocent to me.) Back when I was young and naive, I asked about such (illegal or uncooperative) options and was promptly informed of their short-sightedness by other EAs. Endorsing Kantian considerations is also something I can do without incurring a social cost.

Comment author: Telofy  (EA Profile) 08 January 2017 07:23:19AM *  0 points [-]

Btw, this article series of yours convinced me of the importance of AI safety work. Thank you, and good work!

Comment author: Telofy  (EA Profile) 11 August 2015 09:29:07AM *  4 points [-]

Thank you, great tips! In response, I just sent a donation of $20 to MIRI. :‑)

Comment author: Telofy  (EA Profile) 08 January 2017 07:21:50AM 0 points [-]

It worked! I’m excited about MIRI now!

Comment author: Owen_Cotton-Barratt 31 December 2016 05:50:49PM 2 points [-]

I think "in expectation" is meant to mean that they can access a probability of having large donation size and time investment. You might say "stochastically".
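As a rough illustration of the point (the numbers here are hypothetical, not from the post): in a donor lottery, a small donor trades a certain small donation for a small chance of directing a large pot, so the expected donation is unchanged while a large donation size becomes accessible with some probability:

```python
# Hypothetical donor-lottery arithmetic: a $1,000 donor joins a
# $100,000 pot and receives a proportional chance of directing
# the whole pot.
contribution = 1_000
pot = 100_000

p_win = contribution / pot        # 0.01: probability of directing the pot
expected_donation = p_win * pot   # 1000.0: equals the original contribution

print(p_win)              # 0.01
print(expected_donation)  # 1000.0
```

So "in expectation" the donor gives the same amount, but "stochastically" they access the large donation size (and the time investment of researching a large grant is only incurred if they win).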

Comment author: Telofy  (EA Profile) 05 January 2017 11:30:28AM 1 point [-]

Thanks!

Comment author: Richard_Batty 03 January 2017 12:20:00PM 2 points [-]

An EA stackexchange would be good for this. There is one being proposed: http://area51.stackexchange.com/proposals/97583/effective-altruism

But it needs someone to take it on as a project and do everything necessary to make it a success. Oli Habryka has been thinking about how to do that, but he needs someone to take on the project.

Comment author: Telofy  (EA Profile) 05 January 2017 11:29:49AM *  0 points [-]

Oh, awesome! I hadn’t seen the Stack Exchange proposal. *fingers crossed*

Comment author: Telofy  (EA Profile) 03 January 2017 11:16:15AM 2 points [-]

Here is something I recently proposed in a .impact chat: Do we need to make it more clear how people can ask questions? When people are new to EA, they’ll have lots of questions, and Google might not always be able to point them to the best documents answering them.

(1) The international effective altruism group has a high bar for quality, so it’s not a good fit for asking a random question; (2) the EA Forum is being used for longer articles, so people who are sensitive to that will refrain from asking questions here; (3) in open threads, questions that arrive rather late are easily overlooked; and (4) not everyone has a meetup nearby or is curious enough about any particular answer to go to one.

https://www.reddit.com/r/EffectiveAltruism/ is the best fit that I can think of for asking questions, but it took me a while to remember that it exists, and I don’t recognize any (nick) names there, so if it really is the best place to ask questions, then it would need to be promoted more to newcomers and seasoned EAs. People who work in outreach could add the Reddit feed to their Feedly accounts, and the people operating the EA Forum could put up a link to recommend it as a place to ask questions.

What do you think?

Comment author: Telofy  (EA Profile) 31 December 2016 05:42:16PM 2 points [-]

Thank you for all the interesting thoughts! Though the general thesis confirmed my prior on the topic, there were many insightful nuggets in it that I need to remember.

One question though. Either I’m parsing this sentence wrong, the “in expectation” is not meant to be there, or it’s supposed to be something along the lines of “per time investment”:

In light of the availability of donor lotteries the rest of this post will be assuming that large donation sizes and time investments are accessible for small donors in expectation.

Comment author: Telofy  (EA Profile) 26 December 2016 03:24:44PM 6 points [-]

All my answers and many more had already been covered in this comment thread when I encountered the post.
