Comment author: WillPearson 12 September 2017 08:24:06AM 1 point

My personal idea of it is a broad church. The systems that govern our lives, such as government and the economy, distribute resources in a certain way. These can have a huge impact on the world. They are neglected because changing them involves fighting an uphill struggle against vested interests.

Someone in a monarchy campaigning for democracy would be an example of someone who is aiming for systemic change. Someone who has an idea to strengthen the UN so that it could help co-ordinate regulation/taxes better between countries (so that companies don't just move to low tax, low worker protection, low environmental regulation areas) is aiming for systemic change.

Comment author: Michelle_Hutchinson 12 September 2017 02:44:28PM 2 points

Will, you might be interested in these conversation notes between GiveWell and the Tax Justice Network: http://files.givewell.org/files/conversations/Alex_Cobham_07-14-17_(public).pdf (you have to c&p the link)

Comment author: Tee 05 September 2017 12:40:16PM 1 point

Hey Michelle, I authored that particular part and I think what you've said is a fair point. As you said, the point was to identify the Bay as an outlier in terms of the amount of support for AI, not to declare AI itself an outlier among cause areas.

The article in general seems to put quite a bit of emphasis on the fact that poverty came out as the most favoured cause.

I don't know that this is necessarily true beyond reporting what is actually there. When poverty is favored by more than double the number of people who favor the next most popular cause area (graph #1), favored by more people than a handful of other causes combined, and disliked the least, those facts need to be put into perspective.

If anything, I'd say we put a fair amount of emphasis on how EAs are coming around on AI, and how resistance toward putting resources toward AI has dropped significantly.

We could speculate about how future-oriented certain cause areas may be, and how to aggregate or disaggregate them in future surveys. We've made a note to consider that for 2018.

Comment author: Michelle_Hutchinson 05 September 2017 03:24:53PM 3 points

Thanks Tee.

I don't know that this is necessarily true beyond reporting what is actually there. When poverty is favored by more than double the number of people who favor the next most popular cause area (graph #1), favored by more people than a handful of other causes combined, and disliked the least, those facts need to be put into perspective.

I agree - my comment was in the context of the false graph; given the true one, the emphasis on poverty seems warranted.

Comment author: Peter_Hurford (EA Profile) 05 September 2017 02:35:42AM 0 points

I'm having trouble interpreting the first graph. It looks like 600 people put poverty as the top cause, which you state is 41% of respondents, and that 500 people put cause prioritisation, which you state is 19% of respondents.

I can understand why you're having trouble interpreting the first graph, because it is wrong. It looks like in my haste to correct the truncated margin problem, I accidentally put a graph for "near top priority" instead of "top priority". I will get this fixed as soon as possible. Sorry. :(

We will have to re-explore the aggregation and disaggregation with an updated graph. Even if the two far-future categories are aggregated, with 237 people saying AI is the top priority and 150 saying non-AI far future is the top priority (237 + 150 = 387), versus 601 saying global poverty is the top priority, global poverty still wins. Sorry again for the confusion.

-

The term 'outlier' seems false according to the stats you cite

The term "outlier" here is meant in the sense of a statistically significant outlier, as in it is statistically significantly more in favor of AI than all other areas. 62% of people in the Bay think AI is the top priority or near the top priorities compared to 44% of people elsewhere (p < 0.00001), so it is a difference of a majority versus non-majority as well. I think this framing makes more sense when the above graph issue is corrected -- sorry.

Looking at it another way, the Bay contains 3.7% of all EAs in this survey, but 9.6% of all EAs in the survey who think AI is the top priority, roughly 2.6 times its overall share.
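
To make the statistics above concrete, here is a minimal sketch of a two-sided two-proportion z-test, the kind of test a "62% vs 44%, p < 0.00001" comparison would typically rest on (not necessarily the exact procedure used on the survey data). The survey's actual group sizes aren't reported in this thread, so the counts below are hypothetical, chosen only to match the quoted rates; reproducing the exact p-value would require the real sample sizes.

```python
# Minimal sketch of a two-sided two-proportion z-test, using hypothetical
# group sizes (the survey's real counts are not given in this thread).
from math import sqrt, erfc

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return (z, two-sided p-value) for a difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))    # standard error under H0
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))                          # two-sided p-value
    return z, p_value

# Hypothetical counts chosen only to match the quoted 62% (Bay) vs 44% (elsewhere).
n_bay, n_elsewhere = 60, 1400                # assumed sample sizes, not the survey's
agree_bay = round(0.62 * n_bay)              # "AI is the top or near-top priority"
agree_elsewhere = round(0.44 * n_elsewhere)

z, p = two_proportion_z_test(agree_bay, n_bay, agree_elsewhere, n_elsewhere)
print(f"z = {z:.2f}, two-sided p = {p:.3g}")

# The over-representation figure quoted above: 9.6% / 3.7% is roughly 2.6x.
print(f"over-representation factor: {9.6 / 3.7:.1f}x")
```

If you'd rather not hand-roll the test, statsmodels.stats.proportion.proportions_ztest computes the same statistic from the raw counts.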

Comment author: Michelle_Hutchinson 05 September 2017 09:00:44AM 3 points

Thanks for clarifying.

The claim you're defending is that the Bay is an outlier in terms of the percentage of people who think AI is the top priority. But what the paragraph I quoted says is 'favoring a cause area outlier' - so 'outlier' is picking out AI amongst the causes people think are important. Saying that the Bay favours AI, which is an outlier amongst causes people favour, is a stronger claim than saying that the Bay is an outlier in how much it favours AI. The data seems to support the latter but not the former.

Comment author: Michelle_Hutchinson 04 September 2017 02:40:19PM 6 points

I'm having trouble interpreting the first graph. It looks like 600 people put poverty as the top cause, which you state is 41% of respondents, and that 500 people put cause prioritisation, which you state is 19% of respondents.

The article in general seems to put quite a bit of emphasis on the fact that poverty came out as the most favoured cause. Yet while 600 people said it was the top cause, according to the graph around 800 people said that long run future was the top cause (AI + non-AI far future). It seems plausible to disaggregate AI and non-AI long run future, but at least as plausible to aggregate them (given the aggregation of health / education / economic interventions in poverty), and conclude that most EAs think the top cause is improving the long-run future. Although you might have been allowing people to pick multiple answers, and found that most people who picked poverty picked only that, and most who picked AI / non-AI FF picked both?

The following statement appears to me rather loaded: "For years, the San Francisco Bay area has been known anecdotally as a hotbed of support for artificial intelligence as a cause area. Interesting to note would be the concentration of EA-aligned organizations in the area, and the potential ramifications of these organizations being located in a locale heavily favoring a cause area outlier." The term 'outlier' seems false according to the stats you cite (over 40% of respondents outside the Bay thinking AI is a top or near top cause), and particularly misleading given the differences made here by choices of aggregation (i.e. you could frame it as 'most EAs in general think that long-run future causes are most important; this effect is a bit stronger in the Bay').

Writing on my own behalf, not my employer's.

Comment author: Michelle_Hutchinson 25 August 2017 09:44:28AM 1 point

If you haven't come across it yet, you might like to look at Back of the Envelope Guide to Philanthropy, which tries to estimate the value of some really uncertain stuff.

Comment author: Michelle_Hutchinson 19 July 2017 01:47:04PM 2 points

I broadly agree with you on the importance of inclusivity, but I’m not convinced by your way of cashing it out or the implications you draw from it.

Inclusivity/exclusivity strikes me as importantly being a spectrum, rather than a binary choice. I doubt when you said EA should be about ‘making things better or worse for humans and animals but being neutral on what makes things better or worse’, you meant the extreme end of the inclusivity scale. One thing I assume we wouldn’t want EA to include, for example, is the view that human wellbeing is increased by coming into contact only with people of the same race as yourself.

More plausibly, the reasons you outline in favour of inclusivity point towards a view such as ‘EA is about making things better or worse for sentient beings but being neutral between reasonable theories of what makes things better or worse’. Of course, that brings up the question of what it takes to count as a reasonable theory. One thing it could mean is that some substantial number of people hold / have held it. Presumably we would want to circumscribe which people are included here: not all moral theories which have at any time in the past been held by a large group of people are reasonable. At the other end of the spectrum, you could include only views currently held by many people who have made it their life’s work to determine the correct moral theory. My guess is that in fact we should take into account which views are and aren’t held by both the general public and by philosophers.

I think given this more plausible cashing out of inclusivity, we might want to be both more and less inclusive than you suggest. Here are a few specific ways it might cash out:

  • We should be thinking about and discussing theories which put constraints on actions you’re allowed to take to increase welfare. Most people think there are some limits on what we’re allowed to do to some people in order to benefit others. Most philosophers believe there are some deontological principles / agent-centred constraints or prerogatives.

  • We should be considering how prioritarian to be. Many people think we should give priority to those who are worst off, even if we can benefit them less than we could others. Many philosophers think that there’s (some degree of) diminishing moral value to welfare.

  • Perhaps we ought to be inclusive of views to the effect that (at least some) non-human sentient beings have little or no moral value. Many people’s actions imply they believe that a large number of animals have little or no moral value, and that robots never could have moral value. Fewer philosophers seem to hold this view.

  • I’m less convinced about being inclusive towards views which place no value on the future. It seems widely accepted that climate change is very bad, despite the fact that most of the harms will accrue to those in the future. It’s controversial what the discount rate should be, but not that the pure time discount rate should be small. Very few philosophers defend purely person-affecting views.

Comment author: Michael_PJ 12 July 2017 09:43:49AM 1 point

Hm, I'm a little sad about this. I always thought that it was nice to have GWWC presenting a more "conservative" face of EA, which is a lot easier for people to get on board with.

But I guess this is less true with the changes to the pledge - GWWC is more about the pledge than about global poverty.

That does make me think that there might be space for an EA org that explicitly focussed on global poverty. Perhaps GiveWell already fills this role adequately.

Comment author: Michelle_Hutchinson 12 July 2017 11:13:17AM 7 points

You might think The Life You Can Save plays this role.

I've generally been surprised over the years by the extent to which the more general 'helping others as much as we can, using evidence and reason' has been easy for people to get on board with. I had initially expected that to be less appealing, due to its abstractness/potentially leading to weird conclusions. But I'm not actually convinced that's the case anymore. And if it's not detrimental, it seems more straightforward to start with the general case, plus examples, than to start with only a more narrow example.

Comment author: Michelle_Hutchinson 31 March 2017 10:27:37AM 11 points

I'm not totally sure I understand what you mean by IJ. It sounds like what you're getting at is telling someone they can't possibly have the fundamental intuition that they claim they have (either that they don't really hold that intuition or that they are wrong to do so). E.g.: 'I simply feel fundamentally that what matters most is positive conscious experiences' 'That seems like a crazy thing to think!'. But then your example is

"But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.".

That seems like a different structure of argument, more akin to: 'I feel that what matters most is having positive conscious experiences (X)' 'But that implies you think people ought to choose to enter the experience machine (Y), which is a crazy thing to think!' The difference is significant: if the person is coming up with a novel Y, or even one that hasn't been made salient to the person in this context, it actually seems really useful. Since that's the case, I assume you meant IJ to refer to arguments more like the former kind.

I'm strongly in favour of people framing their arguments considerately, politely and charitably. But I do think there might be something in the ball-park of IJ which is useful, and should be used more by EAs than it is by philosophers.

Philosophers have strong incentives to have views that no other philosophers hold, because to publish you have to be presenting a novel argument and it's easier to describe and explore a novel theory you feel invested in. It's also more interesting for other philosophers to explore novel theories, so in a sense they don't have an incentive to convince other philosophers to agree with them. All reasoning should be sound, but differing in fundamental intuitions just makes for a greater array of interesting arguments.

Whereas the project of effective altruism is fundamentally different: for those who think there is moral truth to be had, it's absolutely crucial not just that an individual works out what that is, but that everyone converges on it. That means it's important to thoroughly question our own fundamental moral intuitions, and to challenge those of others which we think are wrong. One way to do this is to point out when someone holds an intuition that is shared by hardly anyone else who has thought about this deeply. 'No other serious philosophers hold that view' might be a bonus in academic philosophy, but is a serious worry in EA. So I think when people say 'Your intuition that A is ludicrous', they might be meaning something which is actually useful: they might be highlighting just how unusual your intuition is, and thereby indicating that you should be strongly questioning it.

Comment author: Julia_Wise 07 December 2016 04:22:40PM 10 points

There are lots of cases of correct models failing to take off for lack of good strategy. The doctor who realized that handwashing prevented infection let his students write up the idea instead of doing it himself, with the result that his colleagues didn't understand the idea properly and didn't take it seriously (even in the face of much lower mortality in his hospital ward). He got laid off, took to writing vitriolic letters to people who hadn't believed him, and died in disgrace in an insane asylum.

Comment author: Michelle_Hutchinson 08 December 2016 03:57:23PM 4 points

That's a horrible story!

Comment author: Michelle_Hutchinson 24 August 2016 10:41:36AM 4 points

Recognizing the scale of animal suffering starts with appreciating the sentience of individual animals — something surprisingly difficult to do given society’s bias against them (this bias is sometimes referred to as speciesism). For me, this appreciation has come from getting to know the three animals in my home: Apollo, a six-year-old labrador/border collie mix from an animal shelter in Texas, and Snow and Dualla, two chickens rescued from a battery cage farm in California.

I wonder if we might do ourselves a disservice by making it sound really controversial / surprising that animals are thoroughly sentient? It makes it seem more ok not to believe it, but I think also can come across as patronising / strange to interlocutors. I've in the past had people tell me they're 'pleasantly surprised' that I care about animals, and ask when I began caring about animal suffering. (I have no idea how to answer that - I don't remember a time when I didn't)

This feels to me somewhat similar to telling someone who doesn't donate to developing countries that you're surprised they care about extreme poverty, and asking when they started thinking that it was bad for people to be dying of malaria. On the one hand, it feels like a reasonable inference from their behaviour. On the other hand, for almost everyone we're likely to be talking to it will be the case that they do in fact care about the plight of others, and that their reasons for not donating aren't lack of belief in the suffering, or lack of caring about it. I would guess that would be similar for most of the people we talk to about animal suffering: they already know and care about animal suffering, and would be offended to have it implied otherwise. This makes the case easier to make, because it means we're already approximately on the same page, and we can start talking immediately about the scale and tractability of the problem.
