Comment author: saulius  (EA Profile) 12 June 2018 12:47:54PM 0 points [-]
In response to Open Thread #39
Comment author: byanyothername 12 June 2018 10:57:17AM *  0 points [-]

Plea for authors to write summaries at the top of each post - it makes skimming the forum easier. (And a reminder that putting your "This post was cross-posted to...", "This post was co-authored by..." etc. above the summary takes up valuable space.)

This is also an idea for the forum structure itself: forcing people to write a tl;dr in order to post.

Comment author: byanyothername 12 June 2018 10:49:24AM 2 points [-]

Also the dank memes stuff... at the meta level, treating it like valuable, serious material... This is a separate thing, as it's a case of me thinking, "Surely they're still joking... but it really sounds like they're not" - but it's another reason for me to give up on trying to understand you, because it's too much effort.

Comment author: byanyothername 12 June 2018 10:42:08AM 1 point [-]

Thanks. Data point: the summary at the top of "Crucial Considerations for Terraforming as an S-Risk" seems like a normal level of hard-to-read-ness for EA.

Comment author: byanyothername 12 June 2018 10:39:10AM *  2 points [-]

I don't want to spend too long on this, so to take the most available example (i.e. treat this more as representative than an extreme example): Your summary at the top of this post.

  • General point: I get it now but I had to re-read a few times.
  • I think the old "you're using long words" issue is part of this. That's common in EA, and non-colloquial terms are often worth the price of reduced accessibility, but you seem to do this more than most (e.g. "posit how" could be "suggest that"/"explore how", "heretofore" could be "thus far", "delineate" could be "identify"/"trace", etc.). It's not that I don't recognise these words; they're just less familiar and so make reading more effort.
  • Perhaps long sentences with frequent subordinate clauses - and I note the irony of my using that term - and, indeed, the irony of adding a couple here - add to the density.
  • More paragraphs, subheadings, italics, proofing etc. might help a bit.

I also have the general sense that you use too many words - your comments and posts are usually long but don't seem to be saying enough to justify the length. I am reminded of Orwell:

It is easier — even quicker, once you have the habit — to say "In my opinion it is not an unjustifiable assumption that" than to say "I think".

And yes - mostly on social media. But starting to read this post prompted the comment (I feel like you have useful stuff to say, so I was surprised not to see many upvotes and wondered whether it's because others find you hard to follow too).

Comment author: Evan_Gaensbauer 12 June 2018 12:51:30AM 0 points [-]

I also have some posts about effective altruism on my personal blog that I've taken more time to edit for clarity.

Comment author: richard_ngo 12 June 2018 12:28:58AM 0 points [-]

As a follow-up to byanyothername's questions: could you say a little about what distinguishes your coaching from something like a CFAR workshop?

Comment author: Evan_Gaensbauer 12 June 2018 12:03:11AM *  1 point [-]

Thanks. Are you referring to my posts and comments on social media? Those are more transient, so I make less of an effort there to be legible to everyone. Do you have examples of the posts or comments of mine you mean? I don't get much feedback on this: people tell me I'm often confusing, but the feedback isn't actionable. I can unpack any posts you send me. For example, here is a post of mine that hasn't gotten any negative feedback on its content or writing style. It was something like a cross between a personal essay and a dense cause-prioritization discussion, so it's not the kind of thing I'd usually post to the EA Forum. It's gotten some downvotes, but clearly more upvotes than downvotes, so somebody is finding it useful. And even downvotes are ultimately feedback on what does or doesn't work on the EA Forum - the kind of clearer, more specific feedback I can act on.

Comment author: Halstead 11 June 2018 07:33:36PM *  0 points [-]

In case B, it looks to me like the donor should give to TLYCS under certain conditions, but not under others.

(a) Suppose: Because you gave to TLYCS, GiveWell does the research at a cost of $6, covering that cost by fundraising from an otherwise ineffective donor, and getting $10 to GW charities. In this case, your $6 has raised $10 for effective charities, minus the $6 from the otherwise ineffective donor (which has ~0 counterfactual value), so the net effect is roughly $10 to effective charities. So I don't think causing GW to fundraise further would be bad in this case. Coordinating with GW to get them to fundraise directly for donations to their effective charities would be even better, but donating to TLYCS is still better than doing nothing.

(b) Suppose: Same as before, except GW fundraises the $6 from an effective donor who would otherwise have given it to GW charities. In this case, giving to TLYCS is worse than doing nothing: you have spent $6 getting $10 to GW charities, minus the $6 the effective donor would have given to GW charities had you not acted, so you've spent $6 getting $4 to effective charities. Doing nothing would be better, as then $6 goes to effective charities.

This shows that the counterfactual impact of funged/leveraged donations needs to be considered carefully. GiveWell is starting to do this - e.g. if govt money is leveraged or funged they try to estimate the cost-effectiveness of govt money. Outside that, this is probably something EA donors should take more account of.

Another case that should be considered is causing irrational prioritisation with a given amount of funds. Imagine case (a) above, except that instead of fundraising, GiveWell moves money from another research project with a counterfactual value of $9 to GW charities, because they have not considered these coordination effects (they reason that $10 > $9). In that case, you're spending $6 to get $10 to GW charities minus the $9 that would otherwise have gone to GW charities, i.e. $6 to get a net $1.
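To make the arithmetic explicit, here is a minimal sketch of the net-impact calculation in the three scenarios above. This is my own illustration, not anything from GiveWell or TLYCS: the function and scenario names are invented, and the dollar figures are simply the ones used in cases (a), (b) and the misprioritisation example.

```python
# Net counterfactual impact of the $6 given to TLYCS in each scenario above.
# Figures are taken from cases (a), (b), and the misprioritisation example;
# the function and label names are illustrative only.

def net_impact(raised_for_effective, counterfactual_loss):
    """Dollars newly moved to effective charities, net of what would
    have reached them anyway without your donation."""
    return raised_for_effective - counterfactual_loss

scenarios = {
    # (a) GW fundraises the $6 research cost from an otherwise ineffective
    #     donor (~$0 counterfactual value) and moves $10 to GW charities.
    "a: ineffective donor funged": net_impact(10, 0),    # +10
    # (b) GW fundraises the $6 from an effective donor who would otherwise
    #     have given it straight to GW charities.
    "b: effective donor funged": net_impact(10, 6),      # +4
    # (c) GW reallocates funds from a research project whose counterfactual
    #     value to GW charities was $9.
    "c: research project displaced": net_impact(10, 9),  # +1
}

for label, value in scenarios.items():
    print(f"Case {label}: your $6 moves a net ${value} to effective charities")

# In case (b), the net $4 is less than the $6 that doing nothing delivers
# (the effective donor's money reaches GW charities anyway), so donating
# to TLYCS makes things worse there.
```

The point the sketch encodes is that the counterfactual-loss term depends entirely on where the funged or leveraged money would otherwise have gone, which is exactly the coordination detail the comment argues donors should check.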

Regarding C, this seems right. It would be a mistake for the EA Funds to add up their impact as the sum of the impacts of the individual grants they have made.

Comment author: byanyothername 11 June 2018 07:23:03PM 0 points [-]

there might research being done into ways to increase animal’s well-being set-point

Anyone up for working on CRISPR-based gene drives to this end?

Comment author: byanyothername 11 June 2018 06:44:24PM 0 points [-]

Let me try another example. GiveWell wouldn't just say "AMF saves the life of a child under 5 for ~$x. GiveDirectly doubles consumption for one person for 50 years for >$x. Therefore, AMF is a better marginal choice." Not without justifying or at least acknowledging the underlying trade-off there.

Comment author: byanyothername 11 June 2018 06:32:43PM 0 points [-]

My point was that it's meaningful to have an opinion one way or the other, but it's far from clear that one is better. Like, I'd imagine people in this community would disagree a lot about the value of CBT vs hot meals in my example, so I wouldn't just claim that one is worse than the other because it costs more.

Comment author: byanyothername 11 June 2018 06:26:58PM 3 points [-]

Evan, just a data point: I don't understand a lot of what you're saying in most of your posts/comments, and out of everyone I've come across in the EA community whom I've really wanted to understand, I can only think of one person I find more difficult to follow. (By which I mean "I find the way you speak confusing and I often don't know what you mean", not "Boi, you crazy".)

Comment author: byanyothername 11 June 2018 06:16:31PM 1 point [-]

There is a significant gap in effective altruism for structural change in between “buy bed nets” and “literally build God”.

[Laughing crying face]

[Not because I'm crying with laughter, but because I'm laughing and crying at the same time]

Comment author: byanyothername 11 June 2018 06:00:17PM 2 points [-]

So awesome to see something in this space!

However,

If BERI does not feel that a strong effort was made to complete the project, and the learnings from failure were not particularly valuable, BERI may require that the grant funding be returned.

Is there a way to provide regular (e.g. monthly) check-ins and assurances that sufficient effort/learning is being made? I suspect that a lot of the value of the program will come from increasing individuals' financial security to such an extent that they feel able to take on valuable projects, but a disproportionate amount of that value is lost if individuals think there's a non-negligible chance that in, say, 6 or 12 months they'll have to return the funding.

Comment author: Peter_Hurford  (EA Profile) 11 June 2018 05:56:20PM 0 points [-]

I disagree with your analogy. I do think it's meaningful to say that I would prefer human-focused interventions at that price tradeoff and that it isn't because of speciesist attitudes. So they're at least comparable enough for people to know what I'm talking about.

Comment author: Khorton 11 June 2018 05:17:40PM *  3 points [-]

"In many ways this won’t be a typical hotel (non-profit, longer term stays, self-service breakfast and lunch, simplified dinner menu, weekly linen/towel changes, EA evening events etc), so I’m not sure how much prior hotel experience is relevant. Really anyone who is a reasonably skilled generalist, passionate about the project, and friendly should be able to do it."

I think this is where we disagree. It's taken me years to develop the (rather basic) domestic skills I have. I think it would be quite a challenge for someone like me, who can competently manage a household, to competently manage a hotel with 17 people. For example, when I organized EA London's weekend retreat and oversaw the housing, cooking and cleaning for 25 people, it was really hard and I made some significant mistakes.

This worries me because a large majority of the EAs I meet in London are worse at cooking/cleaning/household management than I am. If I'm not currently capable of the task, and most EAs are less capable than I am, then I wonder who CAN do the job.

There are a couple of things I might be wrong about: maybe people are better at domestic tasks outside of London, or maybe there are one or two exceptional candidates (and that's really all it takes!). But based on my experience, I really don't think "anyone who is a reasonably skilled generalist, passionate about the project, and friendly should be able to do it" - or at least, not to a high standard, not right away.

Comment author: byanyothername 11 June 2018 05:12:30PM 2 points [-]

Do you have some quick thoughts on when you expect your coaching to be more valuable to EAs than your average productivity coaching? E.g. What factors are relevant when deciding between paying for coaching with you, and doing a Google search and picking a productivity coach with similar/better prices/experience/reviews?

(Although re: prices, given that you've received an EA Grant for this work, we should be careful not to double-count.)

Comment author: byanyothername 11 June 2018 04:59:37PM 1 point [-]

Just want to highlight the bit where you describe how you exceeded your goals (at least, that's my takeaway):

As our Gran Canaria "pilot camp" grew in ambition, we implicitly worked towards the outcomes we expected to see for the “main camp”:

  1. Three or more draft papers have been written that are considered to be promising by the research community.
  2. Three or more researchers who participated in the project would obtain funding or a research role in AI safety/strategy in the year following the camp.

It is too soon to say about whether the first goal will be met, although with one paper in preparation and one team having already obtained funding it is looking plausible. The second goal was already met less than a month after the camp.

Congrats!

Comment author: byanyothername 11 June 2018 04:47:38PM *  1 point [-]

evidence in this study points to an estimate of $310 per pig year saved

Christ, I'd give up pork for 4 years for that price. Any takers? 10% discount if it's in the next 24 hours; I'm pretty cash-strapped at the moment.
