Comment author: Jan_Kulveit 09 August 2018 10:30:30PM 5 points [-]

I think this is quite useful, thanks for posting it publicly.

Based on CZEA's experience, this seems similar to what we converged on, basically:

  • public introductory events
  • more in-depth meetups and advanced-content workshops for members
  • a core with advanced knowledge, incubating and leading projects

An interesting difference seems to be in the approach to workshops: we haven't tried much in the small format (~8h, and that was more a series of talks than a workshop), but we did try a Fri-Sun retreat.

Comment author: SamDeere 20 July 2018 11:29:27PM 2 points [-]

Implementing the same system here makes the risks correlated.

The point re correlation of risks is an interesting one — I've been modelling the tight coupling of the codebases as a way of reducing overall project risk (from a technical/maintenance perspective), but of course this does mean that we correlate any risks that are a function of the way the codebase itself works.

I'm not sure we'll do much about that in the immediate term because our first priority should be to keep changes to the parent codebase as minimal as possible while we're migrating everything from the existing server. However, adapting the forum to the specific needs of the EA community is something we're definitely thinking about, and your comment highlights that there are good reasons to think that such feature differences have the important additional property of de-correlating the risks.

Feature request: integrate the content from the EA fora into LessWrong in a similar way as

That's unfortunately not going to be possible in the same way. My understanding is that the Alignment Forum beta is essentially running on the same instance (server stack + database) as the LessWrong site, and some posts are just tagged as 'Alignment Forum' which makes them show up there. This means it's easier to do things like have parallel karma scores, shared comments etc.

We see the EA Forum as a distinct entity from LW, and while we're planning to work very closely with the LW team on this project (especially during the setup phase), we'd prefer to run the EA Forum as a separate, independent project. This also gives us the affordance to do things differently in the future if desired (e.g. have a different karma system, different homepage layout etc).

Comment author: Jan_Kulveit 23 July 2018 12:38:21PM 1 point [-]

Thanks for the info!

I think running it as a separate project from LW is generally good, and prioritizing the move to the new system is right.

Regarding the LW integration: even if it's not technically possible to integrate it the way the Alignment Forum does, maybe there is some in-between way? (Although it's probably more a question to ask on the LW side.)

Comment author: John_Maxwell_IV 20 July 2018 03:16:31AM *  6 points [-]

Great point. I think it's really interesting to compare the blog comments on slatestarcodex.com to the reddit comments on /r/slatestarcodex. It's a relatively good controlled experiment because both communities are attracted by Scott's writing, and slatestarcodex has a decent amount of overlap with EA. However, the character of the two communities is pretty different IMO. A lot of people avoid the blog comments because "it takes forever to find the good content". And if you read the blog comments, you can tell that they are written by people with a lot of time on their hands--especially in the open threads. The discussion is a lot more leisurely and people don't seem nearly as motivated to grab the reader's interest. The subreddit is a lot more political, maybe because reddit's voting system facilitates mobbing.

Digital institution design is a very high leverage problem for civilization as a whole, and should probably receive EA attention on those grounds. But maybe it's a bad idea to use the EA forum as a skunk works?

BTW there is more discussion of the subforums thing here.

Comment author: Jan_Kulveit 20 July 2018 08:58:49AM *  5 points [-]

Good observation with the SSC natural experiment!

I actually believe LW2.0 is doing a pretty good job, and is likely better than reddit.

It's just that a lot of dilemmas are implicitly answered in some way, e.g.

  • total utilitarian or average? total

  • decay with time or not? no decay

  • everything adds to one number? yes

  • show it or hide it? show it

  • scaling? logarithmic

This likely has some positive effects, and some negative ones. I will not go into speculation about what they are. Just, if EAF2.0 is going in this direction, I'd prefer the karma system to be sufficiently different from LW's. E.g. going average-utilitarian and not displaying the karma would be different enough (just as an example!)
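Just to make the dilemmas concrete, they can be read as parameters of a scoring function. The sketch below is purely hypothetical (it is not LessWrong's actual implementation); every name and default in it is an assumption for illustration:

```python
import math

def karma(vote_weights, aggregate="total", decay_per_day=0.0,
          ages_days=None, scale="log"):
    """Toy karma aggregator illustrating the design dilemmas above.

    aggregate: "total" (sum all votes) vs "average" (per-vote mean)
    decay_per_day: 0.0 means no time decay; >0 discounts older votes
    scale: "log" compresses large scores; "linear" leaves them as-is
    This is a hypothetical sketch, not any real forum's algorithm.
    """
    if ages_days is None:
        ages_days = [0.0] * len(vote_weights)
    # Optional time decay: each vote is discounted by its age.
    decayed = [w * math.exp(-decay_per_day * a)
               for w, a in zip(vote_weights, ages_days)]
    score = sum(decayed)  # everything collapses to one number
    if aggregate == "average" and decayed:
        score /= len(decayed)
    if scale == "log" and score > 0:
        score = math.log1p(score)
    return score
```

Each parameter choice (total vs. average, decay vs. none, log vs. linear) yields a structurally different incentive system, which is the sense in which two forums picking identical parameters end up with correlated failure modes.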

Also, the academic literature on "social influence bias" (the paper by Lev Muchnik, Sinan Aral and Sean J. Taylor from 2014, and follow-ups) may be worth attention.

Comment author: gworley3  (EA Profile) 19 July 2018 09:32:02PM 1 point [-]

I agree that it would be nice if the EA forum were implemented similarly to the way the Alignment Forum is being done, although since that is itself still in beta, maybe the timeline doesn't permit it right away. Maybe it's something that could happen later, though?

As to risks with voting and the comparison to likes on Facebook, I guess the question would be: is it any worse than any system of voting/liking content? If it's distorting discussions, it seems unlikely that the change will be any worse than the existing voting system on this forum, since they are structurally similar even if the weighted voting mechanism is new.

Comment author: Jan_Kulveit 19 July 2018 10:26:30PM 2 points [-]

It's a different question.

The worry is this: two systems of voting/liking may be "equally good" in the sense that they e.g. incentivize 90% of good comments and disincentivize 10%, but the overlap of good things they disincentivize may be just 1%. (This seems plausible given the differences in the mechanism, the way it is displayed, and how it directs attention.)

It makes a difference whether you are using two different randomly broken systems, or two copies of one.
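The 10%-vs-1% arithmetic can be checked with a toy simulation (a hypothetical model, not real forum data): each of two systems wrongly suppresses 10% of good comments; if their errors are independent, the expected overlap is roughly 0.1 × 0.1 = 1%, while two copies of one system share the full 10%.

```python
import random

def suppressed_overlap(n=100_000, err=0.10, correlated=False, seed=0):
    """Fraction of good comments suppressed by BOTH of two systems,
    each of which wrongly suppresses a fraction `err` of good comments.
    Toy model of the correlation argument above, not real data."""
    rng = random.Random(seed)
    both = 0
    for _ in range(n):
        a = rng.random() < err          # system A's error on this comment
        b = a if correlated else (rng.random() < err)  # copy vs independent
        if a and b:
            both += 1
    return both / n
```

With independent systems the overlap comes out near 1%; with two copies of one system it is the full 10%, i.e. ten times as many good comments lost by both.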

Comment author: Jan_Kulveit 19 July 2018 03:17:07PM *  14 points [-]

Feature request: integrate the content from the EA fora into LessWrong in a similar way as

Risks&dangers: I think there is non-negligible chance the LW karma system is damaging the discussion and the community on LW in some subtle but important way.

Implementing the same system here makes the risks correlated.

I do not believe anyone among the development team or moderators really understands how such things influence people on the S1 level - it seems somewhat similar to likes on Facebook, and it's clear likes on Facebook are able to mess with people's motivations in important ways. So the general impression is that people are playing with something possibly powerful, likely without deep understanding, and possibly with a bad model of what the largest impacts are (focus on ordering of content, vs. subtle impacts on motivation).

In situations with such uncertainty, I would prefer the risks to be less correlated.

Edit: another feature request: allow adding co-authors to posts. A lot of texts are created by multiple people, and it would be nice if all the normal functionality worked for them.

Comment author: turchin 25 June 2018 11:51:54AM 3 points [-]

It would be great to have some kind of committee for info-hazard assessment: a group of trusted people who a) take responsibility for deciding whether an idea should be published or not, b) read all incoming suggestions in a timely manner, and c) whose contacts (though maybe not all of their identities) are publicly known.

Comment author: Jan_Kulveit 27 June 2018 06:14:07PM 3 points [-]

I believe this is something worth exploring. My model is that while most active people thinking about x-risks have some sort of social network links so they can ask others, there may be a long tail of people thinking in isolation, who may at some point just post something dangerous on LessWrong.

(Also, there is a problem of incentives, which often strongly favor publishing. You don't get much credit for not publishing dangerous ideas if you are not already part of some established group.)

In response to AI Strategy Nexus
Comment author: Jan_Kulveit 07 June 2018 09:03:01PM 0 points [-]

I have to say I'm somewhat skeptical of AI strategizing that isn't grounded in close contact with the more "technical" AI safety research, and with general AI/ML research. There is likely a niche where just having some contact with the technical side and focusing on policy is enough, but IMO it is pretty small.

So I would recommend anyone interested in strategy to join the existing groups on AI safety. I'm sure strategy discussion and research is welcome.

Comment author: Jan_Kulveit 28 May 2018 06:49:45PM 6 points [-]

I. My impression is that there are large differences between "groups" on the "direct work" dimension, and it may be somewhat harmful if everybody tries to follow the same advice (there is also some value in exploration, so certainly not everybody should closely follow the "best practices").

Some important considerations putting different groups at different places on that dimension may be:

  • The "impermanence" of student groups. If the average time a member spends in the group is something like 1.5 years, it is probably unwise to start large, long-term projects, as there is a large risk of failure when the project leaders move on.

  • In contrast, the permanence of national-level chapters with some legal form. These should be long-term stable, in part professional organizations, able to plan and execute medium- and long-term projects. (Still, the best opportunities may be in narrow community building.)

  • Availability of opportunities, and associated costs. If you happen to be a student in e.g. Oxford and you want to do direct work in research, or advocacy, or policy, or..., trying to do this on the platform of a student group makes much less sense than trying to work with CEA, FHI, GPI, etc. In contrast, if you happen to be a young professional in IT in, let's say, Brno, such opportunities are far away from you.

II. I completely agree with Michal Trzesimiech's point that there's value in a culture of actually doing things.

III. Everybody should keep somewhere in the back of their mind that the point from which the scientific revolution actually took off was when people started interacting with reality by doing experiments :) (And I say this as a theorist to the bone.)

Comment author: Jan_Kulveit 27 May 2018 03:42:39PM 1 point [-]

It's a good story, thanks!

Some thoughts, in case other effective altruists want to try something similar.

If you are more interested in changing the world than in becoming a tech startup entrepreneur, it may make sense to partner with a company doing something similar, and just offer them the idea (and your expertise). In this case a reasonable fit could be e.g. the developers behind Daylio, or behind Sleep as Android, Twilight, Mindroid, etc. Their apps seem to be some of the more useful happiness interventions on the Android market, have millions of downloads, and plausibly a big part of their users are the same people who would be interested in your app.

Comment author: Gregory_Lewis 05 May 2018 01:06:42AM *  7 points [-]

It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey. AI was the top cause for only 16% of EA Survey respondents. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority of 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare.

As noted in the fb discussion, it seems unlikely full-time non-profit employment is a good proxy for 'full-time EAs' (i.e. those working full time at an EA organisation - E2Gers would be one of a few groups who should also be considered 'full-time EAs' in the broader sense of the term).

For this group, one could stipulate every group which posts updates to the EA newsletter (I looked at the last half-dozen or so, so any group which didn't have an update is excluded, but likely minor) is an EA group, and toting up a headcount of staff (I didn't correct for FTE, and excluded advisors/founders/volunteers/freelancers/interns - all of these decisions could be challenged) and recording the prevailing focus of the org gives something like this:

  • 80000 hours (7 people) - Far future
  • ACE (17 people) - Animals
  • CEA (15 people) - Far future
  • CSER (11 people) - Far future
  • CFI (10 people) - Far future (I only included their researchers)
  • FHI (17 people) - Far future
  • FRI (5 people) - Far future
  • Givewell (20 people) - Global poverty
  • Open Phil (21 people) - Far future (mostly)
  • SI (3 people) - Animals
  • CFAR (11 people) - Far future
  • Rethink Charity (11 people) - Global poverty
  • WASR (3 people) - Animals
  • REG (4 people) - Far future [Edited after Jonas Vollmer kindly corrected me]
  • FLI (6 people) - Far future
  • MIRI (17 people) - Far future
  • TYLCS (11 people) - Global poverty

Totting this up, I get ~ two thirds of people work at orgs which focus on the far future (66%), 22% global poverty, and 12% animals. Although it is hard to work out the AI | far future proportion, I'm pretty sure it is the majority, so 45% AI wouldn't be wildly off-kilter if we thought the EA handbook should represent the balance of 'full time' attention.
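As a quick check of the arithmetic, the tally above can be reproduced directly from the listed headcounts (numbers taken verbatim from the list; the grouping of orgs by cause is the author's):

```python
# Headcounts grouped by each org's prevailing focus, as listed above.
headcount = {
    # 80k, CEA, CSER, CFI, FHI, FRI, Open Phil, CFAR, REG, FLI, MIRI
    "Far future": 7 + 15 + 11 + 10 + 17 + 5 + 21 + 11 + 4 + 6 + 17,
    # Givewell, Rethink Charity, TYLCS
    "Global poverty": 20 + 11 + 11,
    # ACE, SI, WASR
    "Animals": 17 + 3 + 3,
}
total = sum(headcount.values())
shares = {cause: round(100 * n / total) for cause, n in headcount.items()}
# → {"Far future": 66, "Global poverty": 22, "Animals": 12}
```

This reproduces the stated two-thirds / 22% / 12% split.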

I doubt this should be the relevant metric for how to divvy up space in the EA handbook. It also seems unclear how considerations of representation should play into selecting content, or, if they should, what the key community to proportionately represent is.

Yet I think I'd be surprised if it wasn't the case that among those working 'in' EA, the majority work on the far future, and a plurality work on AI. It also agrees with my impression that the most involved in the EA community strongly skew towards the far future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.

Comment author: Jan_Kulveit 05 May 2018 07:04:30AM 3 points [-]

I think that while this headcount is not a good metric for how to allocate space in the EA handbook, it is quite a valuable overview in itself!

Just as a caveat, the numbers should not be directly compared to numbers from the EA survey, as the latter also included cause-prioritization, rationality, meta, politics & more.

(Using such categories, some organizations would end up classified in different boxes.)
