In response to AI Strategy Nexus
Comment author: Jan_Kulveit 07 June 2018 09:03:01PM 0 points

I have to say I'm somewhat skeptical of AI strategizing that is not grounded in close contact with the more "technical" AI safety research, and with general AI/ML research. There is likely a niche where just having some contact with the technical side and focusing on policy is enough, but IMO it is pretty small.

So I would recommend that anyone interested in strategy join the existing groups working on AI safety. I'm sure strategy discussion and research is welcome there.

Comment author: Jan_Kulveit 28 May 2018 06:49:45PM 5 points

I. My impression is that there are large differences between "groups" on the "direct work" dimension, and it may be somewhat harmful if everybody tries to follow the same advice (there is also some value in exploration, so certainly not everybody should follow the "best practices" closely).

Some important considerations putting different groups at different places on that dimension:

  • The "impermanence" of student groups. If the average time a member spends in the group is something like 1.5 years, it is probably unwise to start large, long-term projects, as there is a large risk of failure when the project leaders move on.

  • In contrast, the permanence of national-level chapters with some formal legal status. These should be long-term stable, partly professional organizations, able to plan and execute medium- and long-term projects. (Still, the best opportunities may be in narrow community building.)

  • Availability of opportunities, and the associated costs. If you happen to be a student in e.g. Oxford and you want to do direct work in research, or advocacy, or policy, or..., trying to do this on the platform of a student group makes much less sense than trying to work with CEA, FHI, GPI, etc. In contrast, if you happen to be a young professional in IT in, let's say, Brno, such opportunities are far away from you.

II. I completely agree with Michal Trzesimiech's point that there's value in a culture of actually doing things.

III. Everybody should keep somewhere in the back of their mind that the point from which the scientific revolution actually took off was when people started interacting with reality by doing experiments :) (And I say this as a theorist to the bone.)

Comment author: Jan_Kulveit 27 May 2018 03:42:39PM 1 point

It's a good story, thanks!

Some thoughts, in case other effective altruists want to try something similar.

If you are more interested in changing the world than in becoming a tech startup entrepreneur, it may make sense to partner up with a company doing something similar and just offer them the idea (and expertise). In this case a reasonable fit could be e.g. the team of developers behind Daylio, or behind Sleep as Android, Twilight, Mindroid, etc. Their apps seem to be some of the more useful happiness interventions on the Android Market, have millions of downloads, and plausibly a big part of their users are the same people who would be interested in your app.

Comment author: Gregory_Lewis 05 May 2018 01:06:42AM *  7 points

It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey. AI was the top cause for only 16% of respondents in the EA Survey. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority of 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare.

As noted in the FB discussion, it seems unlikely that full-time non-profit employment is a good proxy for 'full-time EAs' (i.e. those working full-time at an EA organisation - E2Gers would be one of a few groups who should also be considered 'full-time EAs' in the broader sense of the term).

For this group, one could stipulate that every group which posts updates to the EA newsletter is an EA group (I looked at the last half-dozen or so issues, so any group which didn't have an update is excluded, but this is likely minor). Totting up a headcount of staff (I didn't correct for FTE, and excluded advisors/founders/volunteers/freelancers/interns - all of these decisions could be challenged) and recording the prevailing focus of each org gives something like this:

  • 80000 hours (7 people) - Far future
  • ACE (17 people) - Animals
  • CEA (15 people) - Far future
  • CSER (11 people) - Far future
  • CFI (10 people) - Far future (I only included their researchers)
  • FHI (17 people) - Far future
  • FRI (5 people) - Far future
  • Givewell (20 people) - Global poverty
  • Open Phil (21 people) - Far future (mostly)
  • SI (3 people) - Animals
  • CFAR (11 people) - Far future
  • Rethink Charity (11 people) - Global poverty
  • WASR (3 people) - Animals
  • REG (4 people) - Far future [Edited after Jonas Vollmer kindly corrected me]
  • FLI (6 people) - Far future
  • MIRI (17 people) - Far future
  • TYLCS (11 people) - Global poverty

Totting this up, I get roughly two-thirds of people working at orgs which focus on the far future (66%), 22% on global poverty, and 12% on animals. Although it is hard to work out the AI | far future proportion, I'm pretty sure it is the majority, so 45% AI wouldn't be wildly off-kilter if we thought the EA handbook should represent the balance of 'full-time' attention.
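
For anyone who wants to double-check or extend the tally, a minimal sketch in Python (the figures are copied from the list above; the script itself is purely illustrative, not anything the orgs publish):

```python
# Headcounts and prevailing cause areas, copied from the list above.
headcounts = {
    "80000 Hours": (7, "Far future"),
    "ACE": (17, "Animals"),
    "CEA": (15, "Far future"),
    "CSER": (11, "Far future"),
    "CFI": (10, "Far future"),
    "FHI": (17, "Far future"),
    "FRI": (5, "Far future"),
    "Givewell": (20, "Global poverty"),
    "Open Phil": (21, "Far future"),
    "SI": (3, "Animals"),
    "CFAR": (11, "Far future"),
    "Rethink Charity": (11, "Global poverty"),
    "WASR": (3, "Animals"),
    "REG": (4, "Far future"),
    "FLI": (6, "Far future"),
    "MIRI": (17, "Far future"),
    "TYLCS": (11, "Global poverty"),
}

# Sum the staff per cause area, then report each area's share.
totals = {}
for staff, cause in headcounts.values():
    totals[cause] = totals.get(cause, 0) + staff

grand_total = sum(totals.values())  # 189 staff in total
for cause, n in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {n} ({n / grand_total:.0%})")
# Far future: 124 (66%), Global poverty: 42 (22%), Animals: 23 (12%)
```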

I doubt this should be the relevant metric for how to divvy up space in the EA handbook. It also seems unclear how considerations of representation should play into selecting content, or, if they should, which community is the key one to proportionately represent.

Yet I think I'd be surprised if it wasn't the case that among those working 'in' EA, the majority work on the far future and a plurality work on AI. It also agrees with my impression that those most involved in the EA community skew strongly towards the far future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.

Comment author: Jan_Kulveit 05 May 2018 07:04:30AM 3 points

I think that while this headcount is not a good metric for how to allocate space in the EA handbook, it is quite a valuable overview in itself!

Just as a caveat, the numbers should not be directly compared to numbers from the EA Survey, as the latter also included cause prioritization, rationality, meta, politics & more.

(Using such categories, some organizations would end up classified in different boxes.)

Comment author: Dunja 07 April 2018 03:38:37PM *  5 points

OK, I have to admit these are really good points :) I don't work in any direct way in the EA sector, but I can imagine that just like with any job, communication with others can kick-start new enthusiasm and even new projects.

Comment author: Jan_Kulveit 08 April 2018 12:25:36AM 6 points

Effective altruism is kind of emotionally hard. Compared to some other forms of altruism, it often does not give you the "warm glow" feelings, a smiling child's eyes, etc.

This emotional feedback has to be supplied by the community. If it is missing, it has some unfortunate effects - it makes it harder to keep being an effective altruist, and you get an overrepresentation of people with less intense emotionality, people suppressing their emotions, etc.

Comment author: Alex_Barry 07 April 2018 02:18:35PM 3 points

Thanks for writing this up.

For your impact review, this seems likely to have some impact on the program of future years' EA: Cambridge retreats. (In particular, it seems likely we will include a version of the 'Explaining Concepts' activity, which we would not have done otherwise; it is also an additional point in favour of CFAR stuff, and another call to think carefully about the space/mood we create.)

I am also interested in the breakdown of how you spent the 200h of planning time, since I would estimate the EA: Cam retreat (which had around 45 attendees, and typically had 2 talks on at the same time) took me <100h (probably <2 weeks FTE). Part of this is likely efficiency gains since I worked on it alone, and I expect a large factor to be that I put much, much less effort into the program (<10 hours seems very likely).

Comment author: Jan_Kulveit 07 April 2018 11:53:29PM *  2 points

Thanks for the feedback.

Judging from this and some private feedback, I think it would actually make sense to create some kind of database of activities, containing not only descriptions but also info like how intellectually/emotionally/knowledge-demanding each activity is, what materials you need, what the prerequisites are, the best practices... and ideally also data about past presentations and feedback.
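
To make that concrete, a minimal sketch of what one record in such a database might hold (the field names and the rating scale are all hypothetical - nothing like this exists yet):

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    # Core description of the activity.
    name: str
    description: str
    # How demanding it is, on a hypothetical 1 (light) to 5 (heavy) scale.
    intellectual_demand: int
    emotional_demand: int
    knowledge_demand: int
    # What you need to run it, and what participants should already know.
    materials: list[str] = field(default_factory=list)
    prerequisites: list[str] = field(default_factory=list)
    best_practices: str = ""
    # Notes and feedback collected from past runs.
    past_feedback: list[str] = field(default_factory=list)

# Illustrative entry, using the 'Explaining Concepts' activity mentioned
# in this thread; the ratings are made up.
explaining_concepts = Activity(
    name="Explaining Concepts",
    description="Participants take turns explaining an EA concept to the group.",
    intellectual_demand=3,
    emotional_demand=1,
    knowledge_demand=2,
    materials=["concept cards"],
    prerequisites=["basic familiarity with EA ideas"],
)
```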

My rough estimate of the time costs is 20h of general team meetups, 10h syncing between the team and the CZEA board, 70h of individual time spent on planning and preparation, 50h of activity development, and 50h for survey design, playing with data, writing this, etc. I guess in your case you are not counting the time the speakers spent preparing their talks?

Comment author: Dunja 06 April 2018 11:44:53AM *  2 points

Thanks for this - great info and presentation, and a very well-planned event! That said, I'm in general rather skeptical of the impact such events have on anything but the fun of the participants :) I don't have any empirical data to back this claim (so I might as well be completely wrong), but I have an impression that while such events help like-minded people get to know each other, in terms of an actual, long-term impact on the goals of EA they don't do much. And here is why: those who're enthusiastic about EA and/or willing to contribute in a certain way will do so anyway. For them, online information or a single talk may even be enough. And the other way around: those who aren't much into it will rarely become so via such an event.

I am aware that this may be quite an unpopular view, but I think it would be great to have some empirical evidence to show if it's really wrong.

My guess is that events organized for effective knowledge-building in the given domain (including concrete skills required for very concrete tasks in the given community, some of which were a part of your event) would be those that make more of a difference. Say an EA community realizes they lack the knowledge of gathering empirical data, or of spreading their ideas and attracting new members. In that case, one could invite experts on these issues to provide concrete intensive crash courses, equipping the given community so that it can afterwards put these skills into action. This means a hard-working event, without much extra entertainment activity, but with a high knowledge gain. I think networking and getting to know others is nice, but not as essential as the know-how and the willingness to apply it (which may then spontaneously result in a well-networked community).

(Edit: I once again checked the primary goal of your event and indeed, if you want to provide a space for people to get to know one another, this kind of retreat certainly makes a lot of sense. So maybe my worries were misplaced given this goal, since I rather had in mind the goal of expanding the EA community and attracting new members).

Comment author: Jan_Kulveit 07 April 2018 11:35:09PM 1 point

Actually I would love to have some RCT on the effects of retreats on EA groups :)

We don't have that, so we have to go by models, guesstimates, anecdotal personal experience, and expert opinion.

As the topic seems broad, I'll just state some points where our models possibly differ, and we can try to find out where the crux is:

I.

  • Networks are extremely important, and you may not be assigning them appropriate weight (as a network science student I may be biased in the opposite direction).

  • In contrast to reading online material or attending a single talk by an external expert, social network links provide people with feedback, encouragement, early-stage development of ideas, support, and clarification of misunderstandings.

  • In CZEA we want the more engaged members to actually work together on EA projects, rather than just donating together or learning from external sources. This creates a need for a higher level of cooperation.

  • A practical benefit of strong links for cooperation is that people can model each other better, which makes everything more effective.

  • People are social, emotional animals, mostly motivated not just by ideas, but also by other people.

II. "events organized for an effective knowledge-building in the given domain" - I believe the the format taken is actually quite good for an effective knowledge-building and skills building in the domain of effective altruism - diagree with the implied opposition between "hard-working" and "fun". for many people working just on the edge of their capabilities is actually enjoyable

Comment author: Jan_Kulveit 06 April 2018 08:08:42AM 8 points

I like the content.

But a small terminology note: I'm not sure it is good practice to call these "research pieces" (and 80k hours is one of the norm-setting organizations in EA).

A big portion of the pieces are podcasts - "recorded conversations between smart people". This is useful in many ways, but is it research?

In general, it seems to me that "research" is high-prestige in the EA movement, which creates an incentive to label things as research. So many important things are labelled research.

So it shouldn't surprise anyone that there is e.g. a shortage of operations people.

Comment author: Jan_Kulveit 04 April 2018 10:40:55AM *  6 points

The article has quite a lot of arguments, so to pick just one manageable topic: I think it's worth disputing, on empirical grounds, the claim that cryptocurrencies lead to a redistribution of wealth which is somehow less centralized.

According to https://howmuch.net/articles/bitcoin-wealth-distribution, about 4% of addresses own over 96% of the wealth.

In comparison, in relatively strong European states the top 10% receive just above 37% of income (http://wir2018.wid.world/executive-summary.html).

Comment author: Jan_Kulveit 04 April 2018 10:27:46AM *  24 points

From my observations, the biggest problem in the current EA funding ecosystem is structural bottlenecks.

It seems difficult to get relatively modest funding for a promising project if you are not well connected in the network, and for early-stage projects in general (?).

Why?

While OpenPhil has an abundance of resources, they are at the moment staff-limited, unlikely to grant to grantees they don't know directly, and unlikely to grant to small projects ($10k).

EA Funds seem to be similarly staff-limited, and also not capable of giving small grants.

In theory, EA Grants should fill this gap, but the program also seems staff-limited (I'm familiar with one grant application where, since Nov 2017, the date when the grant program will open has been pushed into the future at a rate of 1 month per month).

Part of the early-stage project grant support problem is that it generally means investing in people. Investing in people needs either trust or a lot of resources to evaluate the people (which is in some aspects more difficult than evaluating projects which are up and running).

Trust in our setting usually comes via links in the social network, which is quite a limiting resource.

So my conclusion is that efficient allocation is structurally limited by 1] a lack of staff in grant-making organizations and 2] the insufficient size of the "trust network" allowing investment in promising projects based on their founders.

Individual EAs have good opportunities to get more impact from their donations than by donating to EA Funds, if their funding overcomes these structural bottlenecks. That may mean:

a] donating to projects which are under the radar of OpenPhil and EA Funds

b] using their personal knowledge of people to support early stage efforts
