Comment author: Jan_Kulveit 20 July 2018 08:58:49AM *  2 points

Good observation with the SSC natural experiment!

I actually believe LW2.0 is doing a pretty good job, and is likely better than reddit.

It's just that a lot of dilemmas are implicitly answered in some way, e.g.

  • total utilitarian or average? total

  • decay with time or not? no decay

  • everything adds to one number? yes

  • show it or hide it? show it

  • scaling? logarithmic

This likely has some positive effects, and some negative ones. I will not go into speculation about what they are. But if EAF2.0 is going in this direction, I'd prefer the karma system to be sufficiently different from LW's. E.g. going average-utilitarian and not displaying the karma would be different enough (just as an example!)
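For concreteness, each of those dilemmas can be read as one parameter of a scoring function. A toy sketch of that idea (my own illustration, not the actual LW2.0 implementation; all names and defaults are made up):

```python
import math

def karma_score(votes, ages_days, aggregate="total", half_life_days=None, log_scale=True):
    """Toy comment-score model; each design dilemma is one parameter.

    aggregate       -- "total" vs "average" utilitarian
    half_life_days  -- None means no decay; otherwise votes decay with age
    log_scale       -- logarithmic vs linear scaling of the displayed score
    """
    if half_life_days:
        weighted = [v * 0.5 ** (age / half_life_days) for v, age in zip(votes, ages_days)]
    else:
        weighted = list(votes)
    score = sum(weighted)
    if aggregate == "average" and votes:
        score = score / len(votes)
    if log_scale and score > 0:
        score = math.log2(1 + score)
    return score
```

Under LW-like answers (total, no decay, logarithmic), seven upvotes score log2(8) = 3.0, while the "average utilitarian" variant would give that same comment 1.0 no matter how many people upvoted it -- which is exactly why the two systems would push behaviour differently.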

Also, the academic literature on "social influence bias" (the 2014 paper by Lev Muchnik, Sinan Aral and Sean J. Taylor, and follow-ups) may be worth attention.

Comment author: John_Maxwell_IV 20 July 2018 11:37:29PM 1 point

Yeah, maybe they could just select whatever karma tweaks would require the minimum code changes while still being relatively sane. Or ask the LW2.0 team what their second-choice karma implementation would look like and use that for the EA Forum.

Comment author: Jan_Kulveit 19 July 2018 03:17:07PM *  11 points

Feature request: integrate the content from the EA fora into LessWrong in a similar way as

Risks&dangers: I think there is non-negligible chance the LW karma system is damaging the discussion and the community on LW in some subtle but important way.

Implementing the same system here makes the risks correlated.

I do not believe anyone among the development team or moderators really understands how such things influence people on the S1 level - it seems somewhat similar to likes on Facebook, and it's clear likes on Facebook are able to mess with people's motivation in important ways. So the general impression is that people are playing with something possibly powerful, likely without deep understanding, and possibly with a bad model of what the largest impacts are (focus on ordering of content vs. subtle impacts on motivation).

In situations with such uncertainty, I would prefer the risks to be less correlated.

edit: another feature request: allow adding co-authors to posts. A lot of texts are created by multiple people, and it would be nice if all the normal functionality worked for every author.

Comment author: John_Maxwell_IV 20 July 2018 03:16:31AM *  2 points

Great point. I think it's really interesting to compare the blog comments on slatestarcodex.com to the reddit comments on /r/slatestarcodex. It's a relatively good controlled experiment because both communities are attracted by Scott's writing, and slatestarcodex has a decent amount of overlap with EA. However, the character of the two communities is pretty different IMO. A lot of people avoid the blog comments because "it takes forever to find the good content". And if you read the blog comments, you can tell that they are written by people with a lot of time on their hands--especially in the open threads. The discussion is a lot more leisurely, and people don't seem nearly as motivated to grab the reader's interest. The subreddit is a lot more political, maybe because reddit's voting system facilitates mobbing.

Digital institution design is a very high leverage problem for civilization as a whole, and should probably receive EA attention on those grounds. But maybe it's a bad idea to use the EA forum as a skunk works?

BTW there is more discussion of the subforums thing here.

Comment author: remmelt  (EA Profile) 18 July 2018 09:39:56PM *  0 points

Could you give a few reasons why the EA Forum seems to work better than the Facebook groups, in your view?

The example posts I gave are on the extreme end of the kind of granularity I'd personally like to see more of (I deliberately made them extra specific to make a clear case). I agree those kinds of posts tend to show up more in the Facebook groups (though the writing tends to be short there). Then there seems to be stuff in the middle that might not fit well anywhere.

I feel now that the sub-forum approach should be explored much more carefully than I did when I wrote the comment at the top. In my opinion, we (or rather, Marek :-) should definitely still run contained experiments on this because on our current platform it's too hard to gather around topics narrower than being generally interested in EA work (maybe even test a hybrid model that allows for crossover between the forum and the Facebook groups).

So I've changed my mind from a naive 'we should overhaul the entire system' view to 'we should tinker with it in ways we expect would facilitate better interactions, and then see if they actually do' view.

Thanks for your points!

In response to comment by remmelt  (EA Profile) on Open Thread #40
Comment author: John_Maxwell_IV 20 July 2018 02:48:53AM 1 point

Could you give a few reasons why the EA Forum seems to work better than the Facebook groups, in your view?

Lol, like I said, I'm not completely sure. Posts & comments seem to go into greater depth, and posts sometimes get referenced long after they are written?

I'm not certain subfora are a terrible idea, I just wanted this risk to be on peoples' radar. One possible compromise is to let people tag their posts (perhaps restricted to a set of tags chosen by moderators) and allow users to subscribe to RSS feeds associated with particular tags.
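The tag-plus-RSS compromise is easy to picture: posts stay in one shared stream, and each moderator-approved tag just defines a filtered view of it. A minimal sketch (hypothetical data and tag names, not the forum's actual schema):

```python
# Moderator-chosen tag whitelist; users can't invent fragmenting sub-communities.
APPROVED_TAGS = {"ai-safety", "animal-welfare", "local-groups"}

posts = [
    {"title": "AI safety reading list", "tags": {"ai-safety"}},
    {"title": "Vegan flyering statistics", "tags": {"animal-welfare", "local-groups"}},
]

def feed_for_tag(posts, tag):
    # One feed per approved tag; subscribers see only matching posts,
    # but every post still lives in the single common forum.
    if tag not in APPROVED_TAGS:
        raise ValueError(f"unknown tag: {tag}")
    return [p["title"] for p in posts if tag in p["tags"]]
```

The design choice worth noting: because tags filter rather than partition, a post can appear in several feeds at once, and the main forum's Schelling-point traffic is untouched.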

Comment author: remmelt  (EA Profile) 17 July 2018 10:32:57AM *  0 points

Hmm, would you think Schelling points would still be destroyed if it was just clearer where people could meet to discuss certain specific topics besides a ‘common space’ where people could post on topics that are relevant to many people?

I find the comment you link to really insightful, but I doubt whether it neatly applies here. Personally, I see a problem in that we should have more well-defined Schelling points as the community grows, but currently the EA Forum is a vague place to go 'to read and write posts on EA'. Other places for gathering to talk about more specific topics are widely dispersed over the internet – they're both hard to find and disconnected from each other (i.e. it's hard to zoom in and out of topics, as well as explore parallel topics that one can work on and discuss).

I think you’re right that you don’t want to accidentally kill off a communication platform that actually kind of works. So perhaps a way of dealing with this is to maintain the current EA Forum structure but then also test giving groups of people the ability to start sub-forums where they can coordinate around more specific Schelling points on ethical views, problem areas, interventions, projects, roles, etc. – conversations that would add noise for others if they did it on the main forum instead.

In response to comment by remmelt  (EA Profile) on Open Thread #40
Comment author: John_Maxwell_IV 17 July 2018 10:43:12PM *  4 points

Yeah. I feel like the EA community already has a discussion platform with very granular topic divisions in Facebook, and yet here we are. I'm not exactly sure why the EA Forum seems to me like it's working better than Facebook, but I figure if it's not broken, don't fix it. Also, I think something like the EA Forum is inherently a bit more fragile than Facebook... any Facebook group is going to benefit from Facebook's ubiquity as a communication tool/online distraction.

You made a list of posts that we're missing out on now... those kinda seem like the sort of posts I see on EA Facebook groups, but maybe you disagree?

In response to comment by saulius  (EA Profile) on Open Thread #40
Comment author: Milan_Griffes 15 July 2018 04:33:10PM *  2 points

Yeah, my conclusions here definitely overlap with the cluelessness stuff. Here I'm thinking specifically about cost-effectiveness.

My main takeaway so far: cost-effectiveness estimates should be weighted less & theoretical models of change should be weighted more when deciding which interventions have the most impact.

Comment author: John_Maxwell_IV 16 July 2018 01:02:38AM 0 points

Do you think you're in significant disagreement with this GiveWell blog post?

In response to comment by Peter_Hurford  (EA Profile) on Open Thread #40
Comment author: Milan_Griffes 11 July 2018 04:12:44AM 1 point

A lot of big events in my life have had pretty in-the-moment-trivial-seeming things in the causal chains leading up to them. (And the big events appear contingent on the trivial-seeming parts of the chain.)

I think this is the case for a lot of stuff in my friends' lives as well, and appears to happen a lot in history too.

It's not the far future, but the experience of regularly having trivial-seeming things turn out to be important later on has built my intuition here.

Comment author: John_Maxwell_IV 16 July 2018 12:47:29AM 0 points

It's surely true that trivial-seeming events sometimes end up being pivotal. But it sounds like you are making a much stronger claim: That there's no signal whatsoever and it's all noise. I think this is pretty unlikely. Humans evolved intelligence because the world has predictable aspects to it. Using science, we've managed to document regularities in how the world works. It's true that as you move "up the stack", say from physics to macroeconomics, you see the signal decrease and the noise increase. But the claim that there are no regularities whatsoever seems like a really strong claim that needs a lot more justification.

Anyway, insofar as this is relevant to EA, I tend to agree with Dwight Eisenhower: Plans are useless, but planning is indispensable.

In response to comment by saulius  (EA Profile) on Open Thread #40
Comment author: Milan_Griffes 15 July 2018 04:38:24PM *  1 point

Sure, but I don't think those are the only options.

Possible alternative option: come up with a granular theory of change; use that theory to inform decision-making.

I think this is basically what MIRI does. As far as I know, MIRI didn't use cost-effectiveness analysis to decide on its research agenda (apart from very zoomed-out astronomical waste considerations).

Instead, it used a chain of theoretical reasoning to arrive at the intervention it's focusing on.

Comment author: John_Maxwell_IV 16 July 2018 12:30:48AM *  0 points

I'm not sure I understand the distinction you're making. In what sense is this compatible with your contention that "Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately"? Is this "chain of theoretical reasoning" a "model that includes far-future effects"?

We do have a fair amount of documentation regarding successful forecasters, see e.g. the book Superforecasting. The most successful forecasters tend to rely less on a single theoretical model and more on an ensemble of models (hedgehogs vs foxes, to use Phil Tetlock's terminology). Ensembles of models are also essential for winning machine learning competitions. (A big part of the reason I am studying machine learning, aside from AI safety, is its relevance to forecasting. Several of the top forecasters on Metaculus seem to be stats/ML folks, which makes sense because stats/ML is the closest thing we have to "the math of forecasting".)
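The ensemble point can be made concrete with a toy Brier-score calculation (the numbers below are made up; a simple average of forecasts is only one kind of ensemble, but the pooled-beats-average property here follows from convexity of the squared error):

```python
def brier(p, outcome):
    # Squared-error score for a probability forecast of a binary event; lower is better.
    return (p - outcome) ** 2

# Three hypothetical forecasters' probabilities for one event that did occur.
forecasts = [0.9, 0.6, 0.7]
outcome = 1

ensemble_p = sum(forecasts) / len(forecasts)  # simple average pool
ensemble_score = brier(ensemble_p, outcome)
mean_individual = sum(brier(p, outcome) for p in forecasts) / len(forecasts)
# The pooled forecast scores at least as well as the average individual forecaster,
# and strictly better whenever the forecasts disagree.
```

With these numbers the pooled forecast (p ≈ 0.733) scores about 0.071 against a mean individual score of about 0.087, which is the fox-over-hedgehog effect in miniature.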

In response to Open Thread #40
Comment author: Milan_Griffes 10 July 2018 03:15:37PM *  5 points

Why I'm skeptical of cost-effectiveness analysis

Reposting as comment because mods told me this wasn't thorough enough to be a post.


  • The entire course of the future matters (more)
  • Present-day interventions will bear on the entire course of the future, out to the far future
  • The effects of present-day interventions on far-future outcomes are very hard to predict
  • Any model of an intervention's effectiveness that doesn't include far-future effects isn't taking into account the bulk of the effects of the intervention
  • Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately
Comment author: John_Maxwell_IV 13 July 2018 07:40:21PM 4 points

Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately

"Anything you need to quantify can be measured in some way that is superior to not measuring it at all."

Comment author: Henry_Stanley 13 July 2018 06:00:15PM 1 point

Just a note on the EA Wiki (and on project abandonment in general): lots of projects seem to be really badly run. The EA Wiki was offline for months because of server issues, and until recently you couldn't even register as a new user.

I'm not sure EAs have a shorter attention span than anyone else – I imagine most would maybe try a couple of times to get onto the wiki and then just give up. That's part of the reason I'm not worried about project duplication: so many efforts are half-baked that we shouldn't allow one party to have a monopoly on a particular idea.

Comment author: John_Maxwell_IV 13 July 2018 07:27:33PM *  1 point

Hmmm... One thought is that if projects are half-baked due to a shortage of work hours being thrown at them, consolidating all the work hours into a single project might help address the problem. I also think having more people on the project could help from a motivation perspective, if any given project worker feels responsible for fulfilling their delegated responsibilities and is motivated by a shared vision. But ultimately it's the people who are doing any given project who will figure out how to organize themselves.

In response to Open Thread #40
Comment author: remmelt  (EA Profile) 08 July 2018 08:24:24PM *  17 points

The EA Forum Needs More Sub-Forums

EDIT: please go to the recent announcement post on the new EA Forum to comment

The traditional discussion forum has sub-forums and sub-sub-forums where people in communities can discuss areas that they're particularly interested in. The EA Forum doesn't have these, and this makes it hard to filter for what you're looking for.

On Facebook, on the other hand, there are hundreds of groups based around different cause areas, local groups and organisations, and subpopulations. But there it's also hard to start rigorous discussions around certain topics, because many groups are inactive and poorly moderated.

Then there are lots of other small communication platforms launched by organisations that range in their accessibility, quality standards, and moderation. It all kind of works but it’s messy and hard to sort through.

It’s hard to start productive conversations on specialised niche topics with international people because:

  • 1) Relevant people won’t find you easily within the mass of posts

  • 2) You’ll contribute to that mass and thus distract everyone else.

Perhaps this is a reason why some posts on specific topics get only a few comments even though the quality of the insights and writing seems high.

Examples of posts that we’re missing out on now:

  • Local group organiser Kate tried X career workshop format X times and found that it underperformed other formats

  • Private donor Bob dug into the documents of start-up vaccination charity X and wants to share preliminary findings with other donors in the global poverty space

  • Machine learning student Jenna would like to ask some specific questions on how the deep reinforcement learning algorithm of AlphaGo functions

  • The leader of animal welfare advocacy org X would like to share some local engagement statistics on vegan flyering and 3D headset demos before sending them off in a more polished form to ACE.

Interested in any other examples you have. :-)

What to do about it?

I don’t have any clear solutions in mind for this (perhaps this could be made a key focus in the transition to using the forum architecture of LessWrong 2.0). Just want to plant a flag here that given how much the community has grown vs. 3 years ago, people should start specialising more in the work they do, and that our current platforms are woefully behind for facilitating discussions around that.

It would be impossible for one forum to handle all this adequately and it seems useful for people to experiment with different interfaces, communication processes and guidelines. Nevertheless, our current state seems far from optimal. I think some people should consider tracking down and paying for additional thoughtful, capable web developers to adjust the forum to our changing needs.

UPDATE: After reading @John Maxwell IV's comments below, I've changed my mind from a naive 'we should overhaul the entire system' view to 'we should tinker with it in ways we expect would facilitate better interactions, and then see if they actually do' view.

In response to comment by remmelt  (EA Profile) on Open Thread #40
Comment author: John_Maxwell_IV 13 July 2018 10:02:35AM *  4 points

This sounds like it might be a bad idea to me. I just wrote a long comment about the difficulty the EA community has in establishing Schelling points. This forum strikes me as one of the few successful Schelling points in EA. I worry that if subforums are done in a careless way, dividing a single reasonably high-traffic forum into lots of smaller low-traffic ones, one of the few Schelling points we have will be destroyed.
