In response to S-risk FAQ
Comment author: zdgroff 20 September 2017 06:45:57PM 3 points [-]

This is very helpful, as is Max's write-up linked to at the top.

Comment author: zdgroff 20 September 2017 06:09:32PM 0 points [-]

In general, I think this is correct and a very useful distinction. I hear people make sloppy arguments on this all the time. It's clear that capitalism and socialism do not, at least not obviously or superficially, entail selfishness or selflessness on a conceptual level.

I do think that on a deeper conceptual level, though, there may be more connections. I took a socialist philosophy class in college taught by an "analytic Marxist" (John Roemer) who used the tools of neoclassical economics to model a more reasonable and precise version of Marxist theory. He was best friends with G.A. Cohen, one of the greatest analytic political philosophers of the 20th century (in my opinion the greatest). Both of them held the view that understandings of selfishness and selflessness were pretty key to the conceptual cases for capitalism and socialism: capitalists took selfish motives for granted, both underestimating the extent of selflessness and utterly neglecting the possibility of changing the extent of selflessness and selfishness in human beings. Socialists, in contrast, recognized that it is possible to foster different ethics and that the ethic of socialism, being more selfless than the ethic of capitalism, would feed on itself.

Interestingly, G.A. Cohen argued that socialists should abide by very similar principles to EAs in his book "If You're an Egalitarian, How Come You're So Rich?"

Comment author: zdgroff 19 September 2017 11:58:27PM 1 point [-]

It seems like there's very little work being done by EAs in this area. Is that your impression as someone who has spent a good amount of time talking to the relevant people? Do you think it would be good for more EAs to go into this area relative to the current numbers working on different causes?

In response to The Turing Test
Comment author: zdgroff 11 September 2017 06:48:54PM 2 points [-]

Nice interview subjects! Very impressive. I'd be curious to hear what made you decide to make an ideological Turing test part of every episode. It's clearly very in line with EA ideals, but why that exercise in particular?

Comment author: zdgroff 06 September 2017 06:56:26PM 1 point [-]

That's a damn good list. Would anything from the poverty world make sense? There's More Than Good Intentions and Poor Economics. The latter probably gets too into the nitty-gritty, and the former may be too repetitive, but something from that area that does not come pre-digested by a philosopher might make sense.

Comment author: zdgroff 01 September 2017 10:08:49PM 3 points [-]

This seems like a more specific case of a general problem with nearly all research on persuasion, marketing, and advocacy: whenever you do research on how to change people's minds, you increase the chances of mind control. And yet many EAs seem to do this. At least in the animal area, a lot of research pertains to how we advocate, and that research can be used by industry as well as by effective animal advocates. The AI case is definitely more extreme, but I think addressing it depends on resolving this more general problem.

I resolve the problem in my own head (as someone who plans to do such research in the future) through the view that the people most likely to use the evidence are the more evidence-based people (and I think there's some evidence of this in electoral politics), and that the evidence will likely pertain more to EA types than to others: a study on how to make people more empathetic will probably be more helpful to animal advocates, who are trying to make people empathetic, than to industry, which wants to reduce empathy. These are fragile explanations, though, and one would think an AI would be completely evidence-based and would a priori have as much evidence available to it as those trying to resist it would.

Also, this article on nationalizing tech companies to prevent unsafe AI may speak to this issue to some degree: https://www.theguardian.com/commentisfree/2017/aug/30/nationalise-google-facebook-amazon-data-monopoly-platform-public-interest

Comment author: zdgroff 30 August 2017 05:05:20PM 3 points [-]

This is a really interesting write-up, and it definitely persuaded me quite a bit. One thing I see coming up in the answers a lot, at least with Geoffrey Miller and Lee Sharkey, is that resisting abuses of power does not have to mean matching firepower with firepower; it can instead be done through civil resistance. I've read a good bit of the literature on civil resistance movements (people like Erica Chenoweth and Sidney Tarrow), and my impression is that LAWs could hinder the ability to resist civilly as well. For one, Erica Chenoweth makes a big point of how one of the key mechanisms of success for a civil resistance movement is getting members of the regime you're challenging to defect. If the essential members are robots, that seems like a much more difficult task. Sure, you can try to build in some alignment mechanism, but that seems like a risky bet. More generally, noncompliance with the roles people are expected to play is a large part of what makes civil resistance work. Movements grow to encompass people whom the regime depends upon, and those people gum up the works. Again, couldn't robots take away the possibility of doing this?

Also, I think I'm one of the people who has been talking about this recently: I wrote a blog post on the subject last week, and I posted an update today saying that I've moved somewhat away from my original position, in part because of the thoughtful responses of people like you.

Comment author: Peter_Hurford  (EA Profile) 29 August 2017 11:34:23PM *  4 points [-]

You make some really good points about how tricky it is to sample EA, not least because defining "EA" is so hard. We don't have any good information to reweight the survey population, so it's hard to say whether particular groups are over- or underrepresented, though I would agree with your intuitions.
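To illustrate what reweighting would involve if we did have reference data, here's a rough sketch of post-stratification weights; the reference proportions below are made-up placeholders, not estimates of the actual EA population:

```python
# Rough sketch of post-stratification reweighting, assuming (hypothetically) that
# we knew the true composition of the EA population. The reference proportions
# here are made-up placeholders, not real estimates.
from collections import Counter

def poststratification_weights(sample_groups, reference_proportions):
    """Return one weight per respondent so that weighted group shares match the reference."""
    n = len(sample_groups)
    sample_share = {group: count / n for group, count in Counter(sample_groups).items()}
    return [reference_proportions[group] / sample_share[group] for group in sample_groups]

# Example: respondents labeled by where they heard about the survey (hypothetical data).
respondents = ["SSC", "SSC", "SSC", "Facebook", "Newsletter"]
reference = {"SSC": 0.2, "Facebook": 0.4, "Newsletter": 0.4}
weights = poststratification_weights(respondents, reference)  # SSC respondents get down-weighted
```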

> I don't remember the survey link and am on several of those mailing lists.

For one example, we did have an entire EA Newsletter dedicated solely to the EA Survey -- I'm not sure how to make it more apparent without being obnoxious. I am a bit surprised by how few sign-ups we got through the EA Newsletter. I do agree that the inclusion in some of the other newsletters was more subtle.

Comment author: zdgroff 30 August 2017 04:56:25PM 1 point [-]

Yeah, upon reflection I don't think the problem is how the survey is presented in a newsletter but whether people read the newsletter at all. Most newsletters end up in the Promotions folder in Gmail, and I imagine a similar folder elsewhere. What I've found in the past is that you have to find a way to make it look like a personal email (no images or formatting, for example, and using people's names), and that can sometimes get around the filter.
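For instance, something like this rough sketch is what I have in mind (the SMTP host, addresses, and names below are hypothetical placeholders, not anything actually in use):

```python
# Rough sketch: a plain-text, personalized email of the kind described above.
# No HTML templates or tracking images, since those tend to land a message in
# the Promotions tab. All hosts, addresses, and names are placeholders.
import smtplib
from email.message import EmailMessage

def send_plain_survey_email(smtp_host, sender, recipient, recipient_name, survey_url):
    msg = EmailMessage()
    msg["Subject"] = "Quick question about the EA Survey"
    msg["From"] = sender
    msg["To"] = recipient
    # Plain text only, addressed to the person by name.
    msg.set_content(
        f"Hi {recipient_name},\n\n"
        "Would you have a few minutes to fill out this year's EA Survey?\n"
        f"{survey_url}\n\n"
        "Thanks!"
    )
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```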

Comment author: zdgroff 29 August 2017 09:41:28PM 4 points [-]

Thanks for doing this! It's very difficult to figure out the right way to sample EAs, since it's unclear how you define an EA. Is it someone who acts as an EA or someone who identifies as one? How much altruism (donations, volunteering, etc.) is required, and how much effectiveness (e.g. being cause-neutral versus just seeking to be effective within a cause)?

Just from skimming this, it strikes me that SSC is overrepresented, I'm guessing because the survey share was more salient there and the site has a large overall readership base. Facebook seems overrepresented too, although I think that's less of an issue, since the SSC audience has its own culture in a way that EAs on Facebook probably do not. I do wonder if there would be a way to get better participation from EA organizations' audiences, as I think that's probably the best way to get a picture of things, and I imagine the mailing lists are not very salient. (I don't remember the survey link and am on several of those mailing lists.)

Comment author: kbog  (EA Profile) 29 August 2017 02:39:49AM *  2 points [-]

All I can say is that if you are going to have machines which can fulfill all the organizational and tactical responsibilities of humans, to create and lead large formations, then they are probably going to have some kind of general intelligence like humans do. That means we can expect and demand that they have a decent moral compass.

> But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy LAWs too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against gov't LAWs).

We don't have civilian tanks or civilian fighter jets or lots of other things. Revolutions are almost always asymmetric.

Comment author: zdgroff 29 August 2017 09:26:00PM 0 points [-]

> All I can say is that if you are going to have machines which can fulfill all the organizational and tactical responsibilities of humans, to create and lead large formations, then they are probably going to have some kind of general intelligence like humans do.

Couldn't it be the case, though, that you have a number of machines that together fulfill all the organizational and tactical responsibilities of humans without any one of them having general intelligence? Given that humans already function as cogs in a machine (a point you make very well from your experience), this seems very plausible.

In that case, the intelligence could be fairly narrow, and I would think we should not bet too much on the AIs having a moral compass.
