In response to Open Thread #39
Comment author: byanyothername 12 June 2018 10:57:17AM *  0 points

Plea for authors to write summaries at the top of each post - it makes skimming the forum easier. (And a reminder that putting your "This post was cross-posted to...", "This post was co-authored by..." etc. above the summary takes up valuable space.)

This is also an idea for the forum structure itself: forcing people to write a tl;dr in order to post.

Comment author: Evan_Gaensbauer 12 June 2018 12:03:11AM *  1 point

Thanks. Are you referring to my posts and comments on social media? That's more transient, so I make less of an effort on social media to be legible to everyone. Do you have examples of the posts or comments of mine you mean? I don't get much feedback on this: people often tell me I'm confusing, but the feedback isn't actionable. I can decode any posts you send me. For example, here is a post of mine where I haven't gotten any negative feedback on the content or writing style. That post was a cross between a personal essay and a dense cause-prioritization discussion, so it's something I wouldn't usually post to the EA Forum. It's gotten some downvotes, but clearly more upvotes than downvotes, so somebody is finding it useful. Again, if I get downvotes, that's ultimately feedback on what does or doesn't work on the EA Forum; that's the kind of clearer, more specific feedback I can act on.

Comment author: byanyothername 12 June 2018 10:49:24AM 2 points

Also, the dank memes stuff... at the meta level of treating it like valuable, serious stuff... This is a separate thing, as it's a case of me thinking, "Surely they're still joking... but it really sounds like they're not," but it's another reason for me to give up on trying to understand you: it's too much effort.

Comment author: Evan_Gaensbauer 12 June 2018 12:51:30AM 0 points

I also have some posts I've taken more time to edit for clarity on my personal blog about effective altruism.

Comment author: byanyothername 12 June 2018 10:42:08AM 1 point

Thanks. Data point: the summary at the top of "Crucial Considerations for Terraforming as an S-Risk" seems like a normal level of hard-to-read-ness for EA.

Comment author: byanyothername 12 June 2018 10:39:10AM *  2 points

I don't want to spend too long on this, so to take the most available example (i.e. treat this more as representative than an extreme example): Your summary at the top of this post.

  • General point: I get it now but I had to re-read a few times.
  • I think the old "you're using long words" is part of this. That's common in EA, and non-colloquial terms are often worth the price of reduced accessibility, but you seem to do this more than most (e.g. "posit how" could be "suggest that"/"explore how", "heretofore" could be "thus far", "delineate" could be "identify"/"trace", etc.). It's not that I don't recognise these words; they're just less familiar, and so make reading more effort.
  • Perhaps long sentences with frequent subordinate clauses - and I note the irony of my using that term - and, indeed, the irony of adding a couple here - add to the density.
  • More paragraphs, subheadings, italics, proofing etc. might help a bit.

I also have the general sense that you use too many words - your comments and posts are usually long but don't seem to be saying enough to justify the length. I am reminded of Orwell:

It is easier — even quicker, once you have the habit — to say "In my opinion it is not an unjustifiable assumption that" than to say "I think".

And yes, mostly on social media. But starting to read this post prompted the comment (I feel like you have useful stuff to say, so I was surprised not to see many upvotes and wondered if it's because others find you hard to follow too).

Comment author: byanyothername 11 June 2018 07:23:03PM 0 points

there might be research being done into ways to increase animals’ well-being set-point

Anyone up for working on CRISPR-based gene drives to this end?

Comment author: byanyothername 11 June 2018 06:32:43PM 0 points

My point was that it's meaningful to have an opinion one way or the other, but that it's far from clear which one is better. Like, I'd imagine people in this community would disagree a lot on the value of CBT vs hot meals in my example, so I wouldn't just claim that one is worse than the other because it costs more.

Comment author: byanyothername 11 June 2018 06:44:24PM 0 points

Let me try another example. GiveWell wouldn't just say "AMF saves the life of a child under 5 for ~$x. GiveDirectly doubles consumption for one person for 50 years for >$x. Therefore, AMF is a better marginal choice." Not without justifying or at least acknowledging the underlying trade-off there.
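
To make the implicit trade-off concrete, here's a minimal Python sketch with entirely made-up costs and moral weights (not GiveWell's actual figures); the ranking flips depending on how many person-years of doubled consumption you judge one averted under-5 death to be worth:

    # Hypothetical illustration only; every number here is an assumption.
    cost_amf = 4000         # assumed $ per under-5 death averted (AMF)
    cost_gd = 10000         # assumed $ to double one person's consumption for 50 years (GiveDirectly)
    consumption_years = 50  # person-years of doubled consumption bought by cost_gd

    # Moral weight: how many person-years of doubled consumption equal one averted death?
    for weight in (10, 100):
        amf_value_per_dollar = weight / cost_amf
        gd_value_per_dollar = consumption_years / cost_gd
        better = "AMF" if amf_value_per_dollar > gd_value_per_dollar else "GiveDirectly"
        print(f"moral weight {weight}: {better} looks better at the margin")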

Comment author: Peter_Hurford  (EA Profile) 11 June 2018 05:56:20PM 0 points

I disagree with your analogy. I do think it's meaningful to say that I would prefer human-focused interventions at that price tradeoff and that it isn't because of speciesist attitudes. So they're at least comparable enough for people to know what I'm talking about.

Comment author: byanyothername 11 June 2018 06:26:58PM 3 points

Evan, just a data point: I don't understand a lot of what you're saying in most of your posts/comments, and of everyone I've come across in the EA community whom I've really wanted to understand, I can only think of one person I find more difficult to understand. (By which I mean "I find the way you speak confusing and I often don't know what you mean", not "Boi, you crazy".)

Comment author: byanyothername 11 June 2018 06:16:31PM 1 point

There is a significant gap in effective altruism for structural change in between “buy bed nets” and “literally build God”.

[Laughing crying face]

[Not because I'm crying with laughter, but because I'm laughing and crying at the same time]

Comment author: byanyothername 11 June 2018 06:00:17PM 2 points

So awesome to see something in this space!

However,

If BERI does not feel that a strong effort was made to complete the project, and the learnings from failure were not particularly valuable, BERI may require that the grant funding be returned.

Is there a way to provide regular (e.g. monthly) check-ins and assurances that sufficient effort is being made and lessons are being learned? I suspect that a lot of the value of the program will come from increasing individuals' financial security to such an extent that they feel able to take on valuable projects, but a disproportionate amount of that value is lost if individuals think there's a non-negligible chance that in, say, 6 or 12 months they'll have to return the funding.
