Comment author: Dunja 15 June 2018 06:39:13PM * 0 points

Thanks for the reply! It's great if the funding comes from institutions or individuals who are willing to support research on these topics. I think it would be really bad, though, if the funding were taken from standard EA donations without first attempting to get it via existing public grants and institutions (and even in that case, it would still be bad given the comparative impact of such a camp versus alternative ways of providing effective charity). I am all for research, but primarily via non-EA research funds, which are numerous for topics such as this one; i.e. we should strive to fund EA research topics from general research funds as much as possible.

Comment author: remmelt (EA Profile) 17 June 2018 12:10:44PM * 1 point

If it cost the same amount of time or less to get funding via public grants and institutions, I would definitely agree (counting the time spent filling in application forms, the average number of applications that need to be submitted before the budget is covered, and the time lost to distractions and 'meddling' by unaligned funders).

Personally, I don't think this applies to AI Safety Camp at all though (my guess is that public grants would cost significantly more time than getting money from 'EA donors', time which we would be better off spending on improving the camps), except perhaps in isolated cases that I have not come across yet.

I'm also not going to spend the time to write up my thoughts in detail but here's a summary:

  • AI alignment is complicated – there's a big inferential gap in explaining to public grantmakers why this is worth funding (as well as in making the case that funding it will make them look good)
  • The AI Safety Camp is not a project of an academic institution, which gives us little credibility with the academic institutions that would be most capable of understanding the research we are building on
  • Tens of millions of dollars are being earmarked for AI alignment research right now by people in the EA community who are looking to spend that money on promising projects run by reliable people. There seems to be a consensus that we need to work on finding talent to spend the money on (not on finding more outside funders).
Comment author: Dunja 14 June 2018 01:38:03PM * 1 point

Thanks for this info, Anne! Could you just clarify who the sponsors of the camp are? I am asking because the attendance is free, but somehow I haven't found any info on who is paying for the whole event. Just out of curiosity :)

Comment author: remmelt (EA Profile) 15 June 2018 04:58:37PM * 0 points

I’ll answer this point since I happen to know.

  • Left-over funds from the previous camp were passed on
  • Greg Colbourn is willing to make a donation
  • The funding team just submitted an application for EA Grants' second round

The team does have plenty of back-up options for funding, so I personally don't expect financial difficulties (though I think it would be less than ideal if the Czech Association for Effective Altruism had to cover a budget deficit itself).

Comment author: remmelt (EA Profile) 15 June 2018 06:38:18AM * 3 points

Really appreciate you putting out your honest thinking behind the way you market recommended charities to people not involved in EA.

My amateur sense is that ACE is now striking the right balance between factual correctness and appeal/accessibility. My worry in the past was that ACE staff members were allowing image considerations to seep into the actual analysis that they were doing (sidenote: I'd be interested in the extent to which ACE now uses Bayesian reasoning in its estimates, e.g. by adjusting impact estimates for how likely small-sample studies are to be false positives).
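To illustrate the kind of adjustment I mean, here's a minimal sketch in Python with made-up numbers (my own toy example, not ACE's actual method): a noisy effect estimate from a small study gets shrunk towards a sceptical prior far more than the same estimate from a large study.

    # Toy example: Bayesian shrinkage of a study's reported effect towards
    # a sceptical prior. Noisier (small-sample) estimates get shrunk more.

    def posterior_mean(prior_mean, prior_var, estimate, se):
        """Posterior mean for a normal prior combined with a normal likelihood."""
        precision_prior = 1 / prior_var
        precision_data = 1 / se ** 2
        return (precision_prior * prior_mean + precision_data * estimate) / (
            precision_prior + precision_data
        )

    # Sceptical prior: most interventions have little effect.
    PRIOR_MEAN, PRIOR_VAR = 0.0, 1.0

    # Two studies reporting the same raw effect of 3.0, one small and noisy
    # (se = 2.0), one large and precise (se = 0.4):
    print(posterior_mean(PRIOR_MEAN, PRIOR_VAR, estimate=3.0, se=2.0))  # ~0.6
    print(posterior_mean(PRIOR_MEAN, PRIOR_VAR, estimate=3.0, se=0.4))  # ~2.6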

When someone is already committed to EA, it tends to become difficult for them to imagine what originally got them excited about effectiveness in helping others, and what might motivate new people who are not part of the 'early adopter crowd'. There is a reason why EA pitches to newcomers also tend to be simple and snappy, and to focus on one 'identifiable victim' before expanding across populations, probabilities and time (my point being that these principles also apply to ACE's outreach). You cannot expect people to relate to abstract analysis and take action if they have not bridged that gap yet.

However, I hope that ACE's stance on matching donations will cause other organisations in the effective animal advocacy community to follow its lead. Good Food Institute's December 2017 newsletter, for example, also had a misleading header saying 'Twice your impact'. This is an easy thing to slip into when you are focused on raising money.

This was ACE’s marketing material that originally mentioned ‘double your impact’: https://animalcharityevaluators.org/blog/updated-charity-recommendations-december-2017/

I heard this might have been a mistake by less experienced communications staff, as ACE is usually more careful (though it is concerning that outsiders had to mention it to someone working at ACE before internal Slack discussions started). You can find Marianne's and my original conversation about it below, which we passed on to ACE:

Marianne van der Werf: Animal Charity Evaluators has released their new charity recommendations!

[Link preview] Updated Charity Recommendations: December 2017 | Animal Charity Evaluators: "ACE updates our recommendations each year by December 1. This year, we are publishing our recommendations a few days early in order to have our most…" (animalcharityevaluators.org)

Remmelt Ellen: This statement is intellectually dishonest.🙁 "A generous donor will match donations to ACE’s Recommended Charity Fund, starting today. DONATE TO THE RECOMMENDED CHARITY FUND This means that you can double the impact of your donation from now through the end of the year by donating to our Recommended Charity Fund. We will distribute all of the funds raised through the end of the year to our recommended charities in January. You can find more details about the Fund, including how donations will be divided among charities, here."

Remmelt Ellen: http://benjaminrosshoffman.com/matching-donation-fundraisers-can-be-harmfully-dishonest/

Remmelt Ellen: I'm not happy with the way they've stated that. It doesn't make me feel as confident that they've shifted from a marketing orientation to more rigour.

Remmelt Ellen: Mind you, I'd still recommend donating to one of their recommended charities if you want to donate to prevent factory farming.

Marianne van der Werf: In general that's a good point, but in the case of ACE they're aware of the dishonesty of donation drives and make a point of only doing them when the matching money would not have been donated anyway. https://animalcharityevaluators.org/about/background/faq/

Marianne van der Werf: ACE should probably mention it in their posts sometimes, because last year people thought less of ACE because of this as well.

Remmelt Ellen: Hmm, but even in this case 'double your impact' is a disingenuous claim to make. That donor would have made a donation to a charity anyway, and probably one in the factory farming space.

Therefore, counterfactually speaking, you can say that the matching donor probably wouldn't have donated to the Recommended Charity Fund otherwise, not that they have doubled your impact.

Remmelt Ellen: "Your donation is being matched -> you've just doubled your impact" is a bold claim to make that's almost impossible to live up to – especially when made by a charity evaluator that should know better.

Remmelt Ellen: More on coordination matching and influence matching: https://blog.givewell.org/2011/12/15/why-you-shouldnt-let-donation-matching-affect-your-giving/

Marianne van der Werf: Good points Remmelt, you should share this conversation with ACE or ask them about their messaging in their upcoming Reddit AMA. I agree that the doubling your impact claim is overly simplistic. It would have been more accurate to just talk about doubling the donations and have people draw their own conclusions about how it influences their impact, because that also depends on people's personal values.
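To make the counterfactual point from this conversation concrete, here's a minimal sketch with illustrative numbers only (not ACE's actual figures): how much extra money your matched donation moves depends on what the matching donor would have done with the match otherwise.

    # Toy example: counterfactual impact of a matched donation.

    def counterfactual_impact(your_gift, match, p_donor_gives_anyway):
        """Expected extra money moved, relative to a world where you don't give.

        p_donor_gives_anyway: probability that the matching donor would have
        donated the match to a comparable charity regardless of your gift.
        """
        return your_gift + match * (1 - p_donor_gives_anyway)

    # 'Double your impact' assumes the match is fully counterfactual:
    print(counterfactual_impact(100, 100, p_donor_gives_anyway=0.0))  # 200.0

    # But if the donor would almost certainly have given anyway:
    print(counterfactual_impact(100, 100, p_donor_gives_anyway=0.9))  # 110.0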

Comment author: byanyothername 11 June 2018 04:59:37PM 1 point

Just want to highlight the bit where you describe how you exceeded your goals (at least, that's my takeaway):

As our Gran Canaria "pilot camp" grew in ambition, we implicitly worked towards the outcomes we expected to see for the “main camp”:

  1. Three or more draft papers have been written that are considered to be promising by the research community.
  2. Three or more researchers who participated in the project would obtain funding or a research role in AI safety/strategy in the year following the camp.

It is too soon to say whether the first goal will be met, although with one paper in preparation and one team having already obtained funding, it is looking plausible. The second goal was met less than a month after the camp.

Congrats!

Comment author: remmelt (EA Profile) 13 June 2018 02:24:56PM 1 point

Thanks, yeah, perhaps we should have included that in the summary.

Personally, I was impressed with the commitment with which the researchers worked on their problems (and they generally stepped in when there was dishwashing or other chores to be done). My sense is that the camp filled a 'gap in the market': a small group of young people who are serious about AI alignment research wanted to work with others to develop their skills and start producing output.

Comment author: 26david26 28 January 2018 08:28:12AM 0 points

Thanks very much for posting all this detailed information. A quick question about making 80,000 Hours referrals your main metric: my impression from their website is that they are very over-subscribed. So do you have an estimate of the value of getting someone coached (accounting for replacement effects etc.)?

I notice in your impact assessment for Sjir you talk almost exclusively in terms of getting people involved in EA rather than 80kh referrals, so maybe this is not relevant to your analysis.

Another quick comment: the Google doc 'See Remmelt's considerations' seems to need its permissions changed (I can't currently access it).

Comment author: remmelt (EA Profile) 20 March 2018 01:03:56PM * 0 points

Hi David, only just saw your comment (I wonder how I can turn on notifications for posts).

At the moment, 80,000 Hours has even closed applications for coaching. We also haven't been able to get a referral link set up through CEA Groups, which strongly recommended that we use successful 80K referrals as the key metric.

Most of our efforts right now are going into building a committed and active core community. For our monthly community events, we ask people registering to fill in the hours they spend on EA, the percentage of income they donate, and the cause area they currently see themselves working on. Aside from that, we keep track of the people we think belong to our core community based on multiple criteria, track gender and student/non-student diversity, and note down anecdotes of impactful decisions we might have helped others make and of (new) projects we've supported. This system is definitely not perfect, since we miss important data points, but it is one I can consistently incorporate into my work routine.

Perhaps we should have made a more concerted effort to refer people to 80K coaching. Instead, the more natural thing to do in conversation seemed to be either to discuss promising cause areas and career opportunities with a good fit, or to point to the career guide.

You raise a good point on replacement effects that I hadn't given thought to. I haven't made an estimate of the value of 80,000 Hours referrals and would be interested in seeing one made by someone else.

(I've also turned on sharing for 'See Remmelt's considerations'.)

Comment author: SiebeRozendal 02 March 2018 03:53:54PM * 2 points

Could you be a little more specific about the levels/traits you name? I'm interpreting them roughly as follows:

  • Values: "how close are they to the moral truth or our current understanding of it" (replace moral truth with whatever you want values to approximate).
  • Epistemology: how well do people respond to new and relevant information?
  • Causes: how effective are the causes in comparison to other causes?
  • Strategies: how well are strategies chosen within those causes?
  • Systems: how well are the actors embedded in a supportive and complementary system?
  • Actions: how well are the strategies executed?

I think a rough categorisation of these 6 traits would be Prioritisation (Values, Epistemology, Causes) & Execution (Strategies, Systems, Actions), and I suppose you'd expect a stronger correlation within these two branches than between them?

Comment author: remmelt (EA Profile) 02 March 2018 06:18:36PM * 0 points

Yeah, I more or less agree with your interpretations.

The number (as well as the scope) of the decision levels is somewhat arbitrary, because each level can be split further. For example:

  • Values: meta-ethics, normative ethics
  • Epistemology: defining knowledge, approaches to acquiring it (Bayes, Occam's razor...), applications (scientific method, crucial considerations...)
  • Causes: the domains can be made as narrow or wide as seems useful for prioritising
  • Strategies: career path, business plan, theory of change...
  • Systems: organisational structure, workflow, to-do list...
  • Actions: execute intention ("talk with Jane"), actuate ("twitch vocal cords")

(Also, there are weird interdependencies here. E.g. if you change the cause area you work on, the career skills you acquired before might not be as effective there; therefore, the multiplier changes. I'm assuming that they tend to be fungible enough for the model to still be useful.)
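As a minimal sketch of the multiplicative model I have in mind (made-up scores, and assuming the levels are indeed fungible enough to multiply): if impact is the product of how well you do at each decision level, the weakest level is usually the highest-leverage one to improve.

    # Toy example: impact as a product over decision levels, so one weak
    # level drags everything down.
    from math import prod

    levels = {
        "values": 0.9,
        "epistemology": 0.8,
        "causes": 0.3,  # weak prioritisation
        "strategies": 0.7,
        "systems": 0.6,
        "actions": 0.8,
    }

    baseline = prod(levels.values())
    print(f"baseline impact: {baseline:.3f}")

    # Improving the weakest level by 0.1 helps proportionally more than
    # improving an already-strong level by the same amount:
    for name in ("causes", "values"):
        improved = dict(levels, **{name: levels[name] + 0.1})
        print(name, f"{prod(improved.values()) / baseline:.2f}x")  # 1.33x vs 1.11x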

Your two categories of Prioritisation and Execution seem fitting. Perhaps some people lean more towards wanting to see concrete results, and others more towards wanting to know what results they want to get?

Does anyone disagree with the hypothesis that individuals – especially newcomers – in the international EA community tend to lean one way or the other in terms of attention spent and the rigour with which they make decisions?

Comment author: SiebeRozendal 02 March 2018 03:56:43PM * 1 point

I think it would be better to include this in the OP.

Comment author: remmelt (EA Profile) 02 March 2018 05:19:34PM 0 points

Will do!

Comment author: remmelt (EA Profile) 02 March 2018 03:29:17PM * 0 points

To clarify: in implying that, for example, a social entrepreneur should learn about population ethics from an Oxford professor to increase their impact (while the professor can learn more about organisational processes and personal effectiveness), I don't mean to say that they should both become generalists.

Rather, I mean to convey that the EA network enables people here to divide labour at particular decision levels and then pass on tasks and learned information to each other through collaborations, reciprocal favours and payments.

In a similar vein, I think it makes sense for CEA's Community Team to specialise in engaging existing community members on high-level EA concepts at weekend events, and for the Local Effective Altruism Network (LEAN) to help local groups get active and provide them with ICT support.

However, I can think of six past instances where it seems that either CEA or LEAN could have avoided making a mistake by incorporating the thinking of the other party at the decision levels where that party was stronger.

Comment author: MarkusAnderljung 23 February 2018 08:07:29AM * 6 points

The orgs I can remember off the top of my head are: EA Sweden (that's me), EA Geneva, EA London, EA China, EA Netherlands (used to have full-time staff, but doesn't anymore) and EA Australia.

I'm excluding CEA, EAF and Rethink Charity here.

Comment author: remmelt (EA Profile) 23 February 2018 12:02:16PM 4 points

On EA Netherlands: a major reason why we chose to switch to part-time is that we had to look for other income sources (two of us were working full-time and didn't manage to raise enough funding to cover our basic living costs).

Comment author: remmelt (EA Profile) 18 November 2017 09:30:04AM 3 points

Just want to say I value that this topic is now openly discussed and considered. A few 'bad apples' in our community (or, to put it in more nuanced terms, people who are trying to get their sexual desires/needs met without sufficiently considering the needs and feelings of the other person) can kill off the open, supportive and trusting atmosphere I often experience myself.

An intuition I wanted to bring up: if we were to slam down too hard on the topic of rape, this might create a taboo in the other direction, where it becomes hard to discuss a possible incident with the person who instigated it because of the shame and social punishment associated with doing so.

I don't have much experience here, but here's a thought: many milder forms of harassment in the EA community could plausibly arise from men having poor social awareness and encountering difficulty and frustration when trying to date one of the few women they come into contact with (this seems like the most common case to me, but there are others, as you mentioned).

Setting out 'bright line' rules would still help them gauge when they're going too far. However, this is only one tool, and a rather crude one at that (since it reacts to incidents at the extreme end of the spectrum as they happen, rather than preventing those at the lower end).

Personally, I want to work on empowering fellow men to be more emotionally involved and understanding and to seek out and build healthy relationships (such as by hosting circling sessions and practicing non-violent communication together).

Noting that I've scanned through your post and haven't gone through your arguments extensively enough.
