Comment author: zdgroff 05 October 2017 07:58:22PM 1 point [-]

Thanks for sharing this! Out of curiosity, was there any particular evidence that drove the basic theory of outreach (awareness -> engagement -> behavior change)? This actually seems like a hotly contested empirical area, so I'm curious. Thanks!

Comment author: zdgroff 03 October 2017 05:55:55PM 1 point [-]

What do you think the risk is of "AI accidents" simply picking up the baggage that "AI risk" carries now, via the euphemism treadmill?

Comment author: zdgroff 03 October 2017 05:31:36PM 0 points [-]

This seems like an excellent program. If the program is at all selective, or if your outreach is limited and could be varied randomly, have you thought of doing a randomized evaluation of some sort? I've realized that even when we're working on the far future, there are often intermediate outcomes (e.g. changed attitudes) to focus on, and we should test our impact on those as rigorously as possible.

Comment author: zdgroff 25 September 2017 10:19:26PM 0 points [-]

The first question here got me thinking: "Various respondents also seemed to view EA as a lofty, principle-based lifestyle that they had not yet attained and were therefore hesitant to label themselves 'effective altruists.'"

Surely part of this is helping people feel comfortable labeling themselves as EAs, but how can we get the vastly larger number of people with EA-ish ideas (atheists/skeptics/rationalists, economics and philosophy students, religious people focused on charity) to behave in a way that meets those lofty standards?

In response to S-risk FAQ
Comment author: zdgroff 20 September 2017 06:45:57PM 5 points [-]

This is very helpful, as is Max's write-up linked to at the top.

Comment author: zdgroff 20 September 2017 06:09:32PM 0 points [-]

In general, I think this is correct and a very useful distinction. I hear people make sloppy arguments about this all the time. It's clear that, at least on an obvious or superficial conceptual level, capitalism and socialism do not entail selfishness or selflessness.

On a deeper conceptual level, though, I think there may be more connections. I took a socialist philosophy class in college taught by an "analytic Marxist" (John Roemer) who used the tools of neoclassical economics to model a more reasonable and precise version of Marxist theory. He was best friends with G.A. Cohen, one of the greatest analytic political philosophers of the 20th century (in my opinion the greatest). Both of them held the view that understandings of selfishness and selflessness were key to the conceptual cases for capitalism and socialism: capitalists took selfish motives for granted, both underestimating the extent of selflessness and utterly neglecting the possibility of changing the balance of selfishness and selflessness in human beings. Socialists, in contrast, recognized that it is possible to foster different ethics, and that the ethic of socialism, being more selfless than the ethic of capitalism, would feed on itself.

Interestingly, G.A. Cohen argued that socialists should abide by very similar principles to EAs in his book "If You're an Egalitarian, How Come You're So Rich?"

Comment author: zdgroff 19 September 2017 11:58:27PM 4 points [-]

It seems like there's very little work being done by EAs in this area. Is that your impression as someone who has spent a good amount of time talking to the relevant people? Do you think it would be good for more EAs to go into this area relative to the current numbers working on different causes?

In response to The Turing Test
Comment author: zdgroff 11 September 2017 06:48:54PM 2 points [-]

Nice interview subjects! Very impressive. I'd be curious to hear what made you decide to make an ideological Turing test part of every episode. It's clearly very in line with EA ideals, but why that exercise in particular?

Comment author: zdgroff 06 September 2017 06:56:26PM 1 point [-]

That's a damn good list. Would anything from the poverty world make sense? There's "More Than Good Intentions" and "Poor Economics." The latter probably gets too far into the nitty-gritty and the former may be too repetitive, but something from that area that does not come pre-digested by a philosopher might make sense.

Comment author: zdgroff 01 September 2017 10:08:49PM 3 points [-]

This seems like a more specific case of a general problem with nearly all research on persuasion, marketing, and advocacy: whenever you do research on how to change people's minds, you increase the chances of mind control. And yet many EAs seem to do this. At least in the animal area, a lot of research pertains to how we advocate, and that research can be used by industry as well as by effective animal advocates. The AI case is definitely more extreme, but I think it ultimately depends on a resolution to this same problem.

I resolve the problem in my own head (as someone who plans on doing such research in the future) with two thoughts: the people most likely to use the evidence are the more evidence-based actors (and I think there's some evidence of this in electoral politics), and the evidence will likely pertain more to EA types than to others (a study on how to make people more empathetic will probably be more helpful to animal advocates, who are trying to increase empathy, than to industry, which wants to reduce it). These are fragile explanations, though, and one would think an AI would be completely evidence-based and would a priori have as much evidence available to it as those trying to resist it would have.

Also, this article on nationalizing tech companies to prevent unsafe AI may speak to this issue to some degree: https://www.theguardian.com/commentisfree/2017/aug/30/nationalise-google-facebook-amazon-data-monopoly-platform-public-interest
