Comment author: Gregory_Lewis 21 February 2018 10:24:11PM 16 points [-]

Thank you for writing this post. An evergreen difficulty that applies to discussing topics of such a broad scope is the large number of matters that are relevant, difficult to judge, and where one's judgement (whatever it may be) can be reasonably challenged. I hope to offer a crisper summary of why I am not persuaded.

I understand from this that the primary motivation of MCE is avoiding AI-based dystopias, with the implied causal chain being along the lines of: “If we ensure the humans creating the AI have a broader circle of moral concern, the resulting post-human civilization is less likely to include dystopic scenarios involving great multitudes of suffering sentient beings.”

There are two considerations that speak against this being a greater priority than AI alignment research: 1) Back-chaining from AI dystopias leaves relatively few occasions where MCE would make a crucial difference. 2) The current portfolio of ‘EA-based’ MCE is poorly addressed to averting AI-based dystopias.

Re. 1): MCE may prove neither necessary nor sufficient for ensuring AI goes well. On one hand, AI designers, even if speciesist themselves, might nonetheless provide the right apparatus for value learning, such that the resulting AI will not propagate the moral mistakes of its creators. On the other, even if the AI designers have the desired broad moral circle, they may have other crucial moral faults (perhaps parochialism in other respects, selfishness, insufficient reflectiveness, mistaken particular moral judgements, or naive approaches to cooperation or population ethics). And even if they do not, there are manifold ways, whether in the wider environment (e.g. arms races) or in the technical implementation, in which disaster might still be incurred.

It seems clear to me that, pro tanto, the less speciesist the AI designer, the better the AI. Yet for this issue to be of such fundamental importance as to be comparable to AI safety research generally, the implication is an implausible doctrine of ‘AI immaculate conception’: only by ensuring we ourselves are free from sin can we conceive an AI which will not err in a morally important way.

Re 2): As Plant notes, MCE does not arise from animal causes alone: global poverty and climate change work also act to extend moral circles, as well as propagating other valuable moral norms. Looking at things the other way, one should expect the animal causes found most valuable from the perspective of avoiding AI-based dystopia to diverge considerably from those picked on face-value animal welfare. Companion animal causes are far inferior from the latter perspective, but of unclear standing on the former, if they prove a good way of fostering concern for animals; and if the crucial thing is for AI creators, rather than the general population, not to be speciesist, targeted interventions like ‘start a petting zoo at DeepMind’ look better than broader ones, like the abolition of factory farming.

The upshot is that, even if there are some particularly high yield interventions in animal welfare from the far future perspective, this should be fairly far removed from typical EAA activity directed towards having the greatest near-term impact on animals. If this post heralds a pivot of Sentience Institute to directions pretty orthogonal to the principal component of effective animal advocacy, this would be welcome indeed.

Notwithstanding the above, the approach outlined here has a role to play in some ideal ‘far future portfolio’, and it may be reasonable for some people to prioritise work in this area, if only for reasons of comparative advantage. Yet I aver it should remain a fairly junior member of that portfolio compared to AI-safety work.

Comment author: Ben_West  (EA Profile) 22 February 2018 06:49:53PM 2 points [-]

AI designers, even if speciesist themselves, might nonetheless provide the right apparatus for value learning such that resulting AI will not propagate the moral mistakes of its creators

This is something I also struggle with in understanding the post. It seems like we need:

  1. AI creators can be convinced to expand their moral circle
  2. Despite (1), they do not wish to be so convinced
  3. The AI follows this second-order desire not to have the moral circle expanded

I imagine this happening with certain religious things; e.g. I could imagine someone saying "I wish to think the Bible is true even if I could be convinced that the Bible is false".

But it seems relatively implausible with regards to MCE?

Particularly given that AI safety talks a lot about things like CEV, it is unclear to me whether there is really a strong trade-off between MCE and AIA.

(Note: Jacy and I discussed this via email and didn't really come to a consensus, so there's a good chance I am just misunderstanding his argument.)

Comment author: DavidMoss 10 January 2018 02:50:53AM 1 point [-]
Comment author: Ben_West  (EA Profile) 10 January 2018 03:10:54PM 0 points [-]

Thanks. I was hoping that there would be aggregate results so I don't have to repeat the analysis. It looks like maybe that information exists elsewhere in that folder though?

Comment author: Ben_West  (EA Profile) 09 January 2018 06:08:49PM 0 points [-]

Is it possible to get the data behind these graphs from somewhere? (i.e. I want the numerical counts instead of trying to eyeball them from the graph.)

Comment author: Ben_West  (EA Profile) 21 December 2017 11:33:40PM 2 points [-]

I still think both LEAN and SHIC have a substantial risk of not being cost-effective, but I’m far more confident that there is sufficient analytical work going on now that failure would be detected and learned from. Given the amount of information they’re generating, I’m confident we’ll all learn something important even if either (or both) projects fail.

Could you say more about this? When I look at their metrics, it's a little unclear to me what failure (or success) would look like. In extremis, every group rating LEAN as ineffective (or very effective) would be an update, but it's unclear to me how we would notice smaller changes in feedback and translate that to counterfactual impact on "hit" group members.

Similarly, for SHIC: if they somehow found a high school student who becomes a top-rated AI safety researcher, or something similar, that would be a huge update on the benefit of that kind of outreach. But the chances of that seem small, so it's somewhat unclear to me what we should expect to learn if they find that students make some moderate changes in their donations but nothing super-high-impact.

Comment author: Ben_West  (EA Profile) 21 December 2017 10:54:17PM 1 point [-]

Thanks for writing this! This is a very interesting idea.

Do you have thoughts on "learning" goals for the next year? E.g. might you discover that a certain valuable food source requires significantly more or less effort than expected? Or could you learn of a non-EA funding source (e.g. government grants) that would make you significantly more impactful? I'm mostly interested in donations at your $10,000 order of magnitude, if that's relevant.

Also: do you think that your research could negatively impact animal welfare in the event that a global catastrophe does not occur? E.g. could you recommend a change to fishing practices that, if implemented prior to a catastrophe, increases the number of farmed fish or changes their quality of life?

Comment author: Ben_West  (EA Profile) 21 December 2017 10:53:05PM 5 points [-]

Thanks Anna! A couple of questions:

  1. If I'm understanding your impact report correctly, you identified 159 IEI alumni, and ~22 very high impact alumni whose path was determined to have been "affected" by CFAR.
     1.1 Can you give me an idea of what that implies for the upcoming year? E.g. does that mean that you expect to have another 22 very high impact alumni affected in the next year?
     1.2 Can you say more about what the threshold was for determining whether or not CFAR "affected" an alumnus? Was it just that they said there was some sort of counterfactual impact, or was there a stricter criterion?
  2. You mention reducing the AI talent bottleneck: is this because you think that the number of people you moved into AI careers is a useful proxy for your ability to teach attendees rationality techniques, or because you think this is/should be the terminal goal of CFAR? (I assume the answer is that you think both are valuable, but I'm trying to get a sense of the relative weighting.)
  3. Do you have "targets" for 2018 impact metrics? Specifically: you mentioned that you think your good done is linear in donations — could you tell us what the formula is?
     3.1 Or more generally: could you give us some insight into the value of information we could expect from a donation? E.g. "WAISS workshops will either fail or succeed spectacularly, so it will be useful to run some and see."
Comment author: Ben_West  (EA Profile) 29 October 2017 08:36:11PM 5 points [-]

Thanks for sharing this! You've given me some ideas for the Madison group, and I look forward to hearing about your progress.

Comment author: Ben_West  (EA Profile) 26 October 2017 03:49:25PM 26 points [-]

I prefer to play the long game with my own investments in community building, and would rather for instance invest in someone reasonably sharp who has a track record of altruism and expresses interest in helping others most effectively than in someone even sharper who reasoned their way into EA and consumed all the jargon but has never really given anything up for other people

I believe that Toby Ord has talked about how, in the early days of EA, he had thought it would be really easy to take people who were already altruistic and encourage them to be more concerned about effectiveness, but hard to take effectiveness-minded people and convince them to do significantly altruistic things. However, once he actually started talking to people, he found the opposite to be the case.

You mention "playing the long game" – are you suggesting that the "E first, A second" people are easier to get on board in the short run, but less dedicated and therefore in the long run "A first, E second" folks are more valuable? Or are you saying that my (possibly misremembered) quote from Toby is wrong entirely?

Comment author: Ben_West  (EA Profile) 26 October 2017 03:39:09PM *  7 points [-]

Thank you for the interesting post Kelly. I was interested in your comment:

people tend to think that women are more intuitively-driven and less analytical than men, which does not seem to be borne out and in fact the opposite may be more likely

And followed the link through to Forbes. I think the part you are citing is this:

But research shows that women are just as data-driven and analytical as men, if not more so. In a sample of 32 studies that looked at how men and women thought about a problem or made a decision, 12 of the studies found that women adopted an analytical approach more often than men, meaning that women systematically turned to the data, while men were more inclined to go with their gut, hunches, or intuitive reactions. The other 20 studies? They found no difference between men and women’s thinking styles.

Unfortunately, the link there is broken. Do you know what the original source is?

Comment author: Milan_Griffes 18 October 2017 10:18:54PM 1 point [-]

Update: I checked with the study author and he confirmed that "relationships" on p. 5 is the same as "social effects" in Table 5.

Comment author: Ben_West  (EA Profile) 19 October 2017 10:39:02PM 3 points [-]

Thanks Milan! Do you know more about how they defined "relationships" ("altruism")? Given that they think "relationships" and "altruism" are synonymous, it seems possible that the definition they use may not correspond to what people on this forum would call "altruism".
