Comment author: zdgroff 06 September 2017 06:56:26PM 1 point

That's a damn good list. Would anything from the poverty world make sense? There's More Than Good Intentions and Poor Economics. The latter probably gets too far into the nitty-gritty and the former may be too repetitive, but something from that area that does not come pre-digested by a philosopher might make sense.

Comment author: zdgroff 01 September 2017 10:08:49PM 3 points

This seems like a more specific case of a general problem with nearly all research on persuasion, marketing, and advocacy: whenever you do research on how to change people's minds, you increase the chances of mind control. And yet many EAs seem to do this. At least in the animal area, a lot of research pertains to how we advocate, and that research can be used by industry just as well as by effective animal advocates. The AI case is definitely more extreme, but I think it hinges on a resolution to this same general problem.

I resolve the problem in my own head (as someone who plans on doing such research in the future) with two views: first, the people most likely to use the evidence are the more evidence-based people (and I think there's some evidence of this in electoral politics); second, the evidence will likely pertain more to EA types than to others (a study on how to make people more empathetic will probably be more helpful to animal advocates, who are trying to increase empathy, than to industry, which wants to reduce it). These are fragile explanations, though, and one would think an AI would be completely evidence-based and would a priori have as much evidence available to it as those trying to resist it would.

Also, this article on nationalizing tech companies to prevent unsafe AI may speak to this issue to some degree: https://www.theguardian.com/commentisfree/2017/aug/30/nationalise-google-facebook-amazon-data-monopoly-platform-public-interest

Comment author: zdgroff 30 August 2017 05:05:20PM 3 points

This is a really interesting write-up, and it definitely persuaded me quite a bit. One thing I see coming up in the answers a lot, at least with Geoffrey Miller and Lee Sharkey, is that resistance to abuses of power does not require matching firepower with firepower but can instead be done through civil resistance. I've read a good bit of the literature on civil resistance movements (people like Erica Chenoweth and Sidney Tarrow), and my impression is that LAWs could hinder the ability to resist civilly as well. For one, Erica Chenoweth makes a big point of how one of the key mechanisms of success for a civil resistance movement is getting members of the regime you're challenging to defect. If the essential members are robots, that seems like a much more difficult task. Sure, you can try to build in some alignment mechanism, but that seems like a risky bet. More generally, noncompliance with the roles people are expected to play is a large part of what makes civil resistance work: movements grow to encompass people the regime depends upon, and those people gum up the works. Again, couldn't robots take away the possibility of doing this?

Also, I think I am one of the people who was talking about this recently, in my blog post last week on the subject, and I posted an update today noting that I've moved somewhat away from my original position, in part because of the thoughtful responses of people like you.

Comment author: Peter_Hurford  (EA Profile) 29 August 2017 11:34:23PM *  4 points

You make some really good points about how tricky it is to sample EAs, not least because defining "EA" is so hard. We don't have any good information with which to reweight the survey population, so it's hard to say whether particular groups are over- or underrepresented, though I would agree with your intuitions.
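
For concreteness, here's a minimal sketch of what post-stratification reweighting could look like if we did have trustworthy population benchmarks; the strata and all numbers below are hypothetical:

```python
# Hypothetical post-stratification: reweight respondents so that strata
# (here, where respondents heard about the survey) match assumed
# population proportions. All numbers are made up for illustration.
sample_counts = {"SSC": 500, "Facebook": 300, "mailing_lists": 200}
assumed_population_share = {"SSC": 0.20, "Facebook": 0.30, "mailing_lists": 0.50}

n_total = sum(sample_counts.values())
weights = {
    stratum: assumed_population_share[stratum] / (count / n_total)
    for stratum, count in sample_counts.items()
}
# A weight above 1 means the stratum is underrepresented in the sample;
# below 1 means it is overrepresented and gets down-weighted.
for stratum, w in weights.items():
    print(f"{stratum}: weight {w:.2f}")
```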

> I don't remember the survey link and am on several of those mailing lists.

For one example, we did have an entire EA Newsletter dedicated solely to the EA Survey -- I'm not sure how to make it more apparent without being obnoxious. I am a bit surprised by how few sign-ups we got through the EA Newsletter. I do agree that the inclusion in some of the other newsletters was more subtle.

Comment author: zdgroff 30 August 2017 04:56:25PM 1 point

Yeah, upon reflection I don't think the problem is how the survey is presented in a newsletter but whether people read the newsletter at all. Most newsletters end up in the Promotions folder on Gmail, and I imagine in a similar folder elsewhere. What I've found in the past is that you have to find a way to make the message appear to be a personal email (no images or formatting, for example, and using people's names), and that can sometimes get around the filtering.
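
As a rough sketch of the kind of thing I mean (the SMTP server, credentials, addresses, and survey URL below are all placeholders):

```python
# Minimal sketch: send plain-text, personalized email so it reads like a
# personal message rather than a formatted newsletter. All addresses,
# credentials, and URLs are placeholders.
import smtplib
from email.message import EmailMessage

recipients = [("Alice", "alice@example.com"), ("Bob", "bob@example.com")]

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()
    server.login("me@example.com", "app-password")  # placeholder credentials
    for name, address in recipients:
        msg = EmailMessage()
        msg["Subject"] = "Quick question"
        msg["From"] = "me@example.com"
        msg["To"] = address
        # No images or HTML formatting; short, personal-sounding text only.
        msg.set_content(
            f"Hi {name},\n\nWould you take 10 minutes for the EA Survey? "
            "It helps a lot: https://example.com/survey\n\nThanks!"
        )
        server.send_message(msg)
```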

Comment author: zdgroff 29 August 2017 09:41:28PM 4 points

Thanks for doing this! It's very difficult to figure out the right way to sample EAs, since it's unclear how to define an EA. Is it someone who acts as an EA or someone who identifies as one? How much altruism (donations, volunteering, etc.) is required, and how much effectiveness (e.g. being cause-neutral versus just seeking to be effective within a cause) is required?

Just from skimming this, it strikes me that SSC is overrepresented, I'm guessing because the survey was shared more saliently there and the site has a large overall readership. Facebook seems overrepresented too, although I think that's less of an issue, since the SSC audience has its own culture in a way that EAs on Facebook probably do not. I do wonder whether there would be a way to get better participation from EA organizations' audiences, as I think that's probably the best way to get an accurate picture of things, and I imagine the mailing lists are not very salient. (I don't remember the survey link, and I am on several of those mailing lists.)

Comment author: kbog  (EA Profile) 29 August 2017 02:39:49AM *  2 points

All I can say is that if you are going to have machines which can fulfill all the organizational and tactical responsibilities of humans, to create and lead large formations, then they are probably going to have some kind of general intelligence like humans do. That means we can expect and demand that they have a decent moral compass.

> But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy LAWs too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against gov't LAWs).

We don't have civilian tanks or civilian fighter jets or lots of other things. Revolutions are almost always asymmetric.

Comment author: zdgroff 29 August 2017 09:26:00PM 0 points

> All I can say is that if you are going to have machines which can fulfill all the organizational and tactical responsibilities of humans, to create and lead large formations, then they are probably going to have some kind of general intelligence like humans do.

Couldn't it be the case, though, that you have a number of machines that together fulfill all the organizational and tactical responsibilities of humans without any one of them having general intelligence? Given that humans already function as cogs in a machine (a point you make very well from your experience), this seems very plausible.

In that case, the intelligence could be fairly narrow, and I would think we should not bet too much on the AIs having a moral compass.

Comment author: Carl_Shulman 28 August 2017 05:54:31PM *  13 points

As with Eric, I'd like to express praise for your altruism and respect for your choice, but also raise some cautions about the idea of the global human mean income as a global norm.

I think it makes sense to think about this in terms of market compensation (including wages and nonpecuniary benefits) and the explicit and implicit donation thereof. Depending on a person's opportunity costs, that salary could represent a large boost in income relative to their outside prospects, or an implicit donation rate of 10%, 50%, or 99%+. I'd also think about the extent to which the change in donations affects your impact (positively and negatively). The degree of sacrifice and the magnitude (and sign) of the impacts will be enormously different across cases.
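
To make the implicit-donation framing concrete, here is a minimal worked example; all the compensation figures are hypothetical:

```python
# Implicit donation rate: the share of market compensation given up,
# treating forgone earnings as a donation. Figures are hypothetical.
def implicit_donation_rate(market_compensation, actual_compensation):
    return (market_compensation - actual_compensation) / market_compensation

print(implicit_donation_rate(22_000, 20_000))     # ~0.09: roughly a 10% rate
print(implicit_donation_rate(40_000, 20_000))     # 0.50: a 50% rate
print(implicit_donation_rate(2_000_000, 20_000))  # 0.99: a 99%+ rate
```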

> This approximate world average has a very strong intuitive appeal to us, because it’s what somebody would get paid if there was complete equality.

Some thoughts on this:

  • If you include nonhuman animals, then mean income is orders of magnitude lower; even adjusting for cost of living, that would imply subsistence wages, i.e. absolute poverty and <$200 for humans (which would clearly be highly counterproductive)
  • Equal allocations of income would leave no extra for especially expensive needs, e.g. health conditions that require hundreds of thousands of dollars per year to survive, or the differing needs of the young vs. the old
  • If equality meant bringing up productivity and conditions for poor people, then total and per capita output could rise severalfold; if it meant everyone allocating their efforts in an altruistically optimal way, then per capita wealth could explode (or collapse alongside skyrocketing total wealth)
  • Conversely, if equality were achieved by taxation at 100% rates and transfers, then incomes would collapse, ceteris paribus
  • Median income is dramatically lower than mean income (as the sketch below illustrates), but it has a plausibly better claim with respect to living high while others die
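
A toy sketch of that last point, with purely illustrative numbers:

```python
# Why mean income can far exceed median income: income distributions are
# right-skewed, so a few very high incomes pull the mean up while the
# median stays put. These toy incomes are purely illustrative.
from statistics import mean, median

incomes = [1_000, 2_000, 2_000, 3_000, 4_000, 500_000]
print(mean(incomes))    # ~85333: pulled up by the single high income
print(median(incomes))  # 2500: unaffected by the outlier
```
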
Comment author: zdgroff 29 August 2017 02:55:54AM 0 points

Good point about nonhuman animals. Going with the norm for humans certainly does have weird implications.

I suppose part of the answer might be that with humans, poverty is the problem we're trying to fix, but with animals, it's a lot more than poverty. (Though at least for wild animals, poverty/starvation is a large part of the problem.)

Comment author: EricHerboso  (EA Profile) 28 August 2017 03:45:41PM 7 points

While I certainly don't want to argue against other EAs taking up this example and choosing to live more frugally in order to achieve more overall good, I nevertheless want to remind the EA community that marketing EA to the public requires that we spend our idiosyncrasy credits wisely.

We only have so many weirdness points to spend. When we spend them on particularly extreme things like intentionally living on such a small amount, it makes it more difficult to get EA newcomers into the other aspects of EA that are more important, like strategic cause selection.

I do not want to dissuade anyone from taking the path of giving away everything above $10k/person, so long as they truly are in a position to do this. But doing so requires a social safety net that, as Evan points out elsewhere in this thread, is generally available only to those who are in good health and fully able-bodied. I will add that this kind of path is also generally available only to those from a certain socio-economic background, and that this kind of messaging may be somewhat antithetical to the goal of inclusion that some of us in the movement are pursuing with diversity initiatives.

If living extremely frugally were extremely effective, then maybe we'd want to pursue it more generally despite the above arguments. But the marginal value of giving everything over $10k/person versus the existing EA norm of giving 10-50% isn't that much when you take into account that the former hinders EA outreach by being too demanding. Instead, we should focus on the effectiveness aspect, not the demandingness aspect.

Nevertheless, I think it is important for the EA movement to have heroes that go the distance like this! If you think you may potentially become one of them, then don't let this post discourage you. Even if I believe this aspect of EA culture should be considered supererogatory (or whatever the consequentialist analog is), I nevertheless am proud to be part of a movement that takes sacrifice at this level so seriously.

Comment author: zdgroff 29 August 2017 02:52:21AM 1 point

> We only have so many weirdness points to spend. When we spend them on particularly extreme things like intentionally living on such a small amount, it makes it more difficult to get EA newcomers into the other aspects of EA that are more important, like strategic cause selection.

Do you think it's any better in this case than in, say, the case of a very particular diet or an odd appearance, given that here Joey is doing something that's uncontroversially good? I wonder whether it would make people want to be more like him and to trust his judgment more.

Comment author: zdgroff 29 August 2017 02:50:51AM 2 points

How did you get $220 rent in Vancouver? I live in the Bay Area, which of course is extremely expensive, but I would not have thought the gap was so vast that one could pay $220 for a place in Vancouver. What's the location, and how did you find the place?

I'm pretty on the fence about the argument that this garners too many weirdness points. If one is weird for doing things that almost everyone would agree make them a good person, then that's probably as likely to be good as bad. Much of this question just boils down to what makes us most effective in our primary activity, and for people with needs or even genuinely more expensive tastes, this might not be optimal. I do favor a heavily thrifty norm for EAs, though.

In response to Open Thread #38
Comment author: zdgroff 22 August 2017 10:27:48PM 2 points

I've started blogging regularly. Today I asked whether the push for AI safety needs more of a public movement behind it in light of the letter on AI weapons from Elon Musk and others. Read it and let me know what your thoughts are, as I may act on your answers!
