Comment author: adamaero (EA Profile) 22 February 2018 02:36:22AM -3 points

@Matthew_Barnett As a senior electrical engineering student, proficient in a variety of programming languages, I do believe that AI is important to think about and discuss. The theoretical threat of a malevolent strong AI would be immense. But that does not give one a valid reason to support CS grad students financially.

A large asteroid collision with Earth would also be devastating, yet it does not follow that we should fund and support aerospace grads. Perhaps what I really mean is this: AI safety is an Earning to Give non sequitur.

Lastly, again, there are no results or evidence. Effective Altruism is about being beneficent rather than merely benevolent (meaning well); in other words, making decisions based on well-researched initiatives (e.g., bed nets). Since strong AI does not exist, it does not make sense to support it through E2G. (I'm not saying it will never exist; that is unknown.) Of course, there are medium-term (systemic change) causes whose results rely more or less on historical empiricism, but that is still some type of evidence. For poverty we have RCTs and development economics. For AI safety, [something?]. For animal suffering we have proof that less miserable conditions can become a reality.

Comment author: Matthew_Barnett 22 February 2018 05:07:43AM 2 points

I don't think anyone here is suggesting that we support random CS grads financially, although they might endorse something like that indirectly by funding AI alignment research, which tends to attract CS grads.

I agree that simply because an asteroid collision would be devastating, it does not follow that we should necessarily focus on that work in particular. However, there are factors that I think you might be overlooking.

The reason people are concerned with AI alignment is not only the scope of the issue but also the urgency and tractability of the problem. The urgency comes from the idea that advanced AI will probably be developed this century. The tractability comes from the idea that there exists a set of goals we could, in theory, put into an AI, goals that are congruent with ours; you might want to read up on the Orthogonality Thesis.

Furthermore, it is dangerous to assume that we should judge the effectiveness of every activity solely on prior evidence or results. Some activities are simply infeasible to judge post hoc, and this issue is one of them. The inherent nature of the problem is that we will probably get only about one chance to develop superintelligence, because if we fail, then we will all probably die or otherwise be permanently unable to alter its goals.

To give you an analogy, few would agree that because climate change is an unprecedented threat, we should wait until after the damage has been done to assess the best ways of mitigating it. Unfortunately, for issues of global scope, it doesn't look like we get a redo if things start going badly.

If you want to learn more about the research, I recommend reading Superintelligence by Nick Bostrom. Despite your statement, the vast majority of AI alignment researchers are not worried about malevolent AI. I mean this in the kindest way possible, but if you really want to be sure that you're on the right side of a debate, it's worth understanding the best arguments against your position, not the worst.

Comment author: Matthew_Barnett 21 February 2018 08:01:40AM 5 points

A very interesting and engaging article indeed.

I agree that people often underestimate the value of strategic value spreading. Proposed moral models for AI agents often have some lingering narrowness to them, even when they attempt to apply the broadest of moral principles. For instance, in Chapter 14 of Superintelligence, Bostrom highlights his common good principle:

Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.

Clearly, even something as broad as that can be controversial. Specifically, it doesn't speak at all about any non-human interests except insofar as humans express widely held beliefs to protect them.

I think one thing to add is that AI alignment researchers who hold more traditional moral beliefs (as opposed to wide moral circles and transhumanist beliefs) are probably less likely to believe that moral value spreading is worth much. The reason for this is obvious: if everyone around you holds more or less the same values that you do, then why change anyone's mind? This may explain why many people dismiss the activity you proposed.

Comment author: Larks 21 February 2018 02:52:20AM 12 points

Thanks for writing this; I thought it was a good article. And thanks to Greg for funding it.

My pushback would be on the cooperation and coordination point. It seems that a lot of other people, with other moral values, could make a very similar argument: that they need to promote their values now, as the stakes are very high with possible upcoming value lock-in. To people with those values, these arguments should seem roughly as important as the above argument is to you.

  • Christians could argue that, if the singularity is approaching, it is vitally important that we ensure the universe won't be filled with sinners who will go to hell.
  • Egalitarians could argue that, if the singularity is approaching, it is vitally important that we ensure the universe won't be filled with wider and wider diversities of wealth.
  • Libertarians could argue that, if the singularity is approaching, it is vitally important that we ensure the universe won't be filled with property rights violations.
  • Naturalists could argue that, if the singularity is approaching, it is vitally important that we ensure the beauty of nature won't be despoiled all over the universe.
  • Nationalists could argue that, if the singularity is approaching, it is vitally important that we ensure the universe will be filled with people who respect the flag.

But it seems that it would be very bad if everyone took this advice literally. We would all end up spending a lot of time and effort on propaganda, which would probably be great for advertising companies but not much else, as so much of it is zero sum. Even though it might make sense, by their values, for expanding-moral-circle people and pro-abortion people to have a big propaganda war over whether foetuses deserve moral consideration, it seems plausible we'd be better off if they both decided to spend the money on anti-malaria bednets.

In contrast, preventing the extinction of humanity seems to occupy a privileged position, not exactly comparable with the above agendas, though I can't quite cash out why it seems this way to me. Perhaps to devout Confucians a preoccupation with preventing extinction seems to be just another distraction from the important task of expressing filial piety, though I doubt this.

(Moral Realists, of course, could argue that the situation is not really symmetric, because promoting the true values is distinctly different from promoting any other values.)

Comment author: Matthew_Barnett 21 February 2018 07:36:21AM 3 points

But it seems that it would be very bad if everyone took this advice literally.

Fortunately, not everyone does take this advice literally :).

This is very similar to the tragedy of the commons: if everyone acts out of their own self-interest, then everyone will be worse off. However, the situation as you described it does not fully reflect reality, because none of the groups you mentioned are actually trying to influence AI researchers at the moment. Therefore, moral circle expansion (MCE) has a decisive advantage. Of course, this is always subject to change.

In contrast, preventing the extinction of humanity seems to occupy a privileged position

I find that people often dismiss any specific moral recommendation for AI except this one. Personally, I don't see a reason to think that there are universal principles of minimal alignment. You may argue that human extinction is something almost everyone agrees is bad, but then the principle of minimal alignment has shifted to "have the AI prevent things that almost everyone agrees are bad," which is another privileged moral judgement that I see no intrinsic reason to hold.

In truth, I see no neutral assumptions to ground AI alignment theory in. I think this is made even more difficult because differences in moral theory that look relatively small in terms of an information-theoretic description of moral values can lead to drastically different outcomes. However, I do find hope in moral compromise.

Comment author: Matthew_Barnett 01 January 2018 08:49:06PM 1 point

There seems to be something of a consensus among effective altruists that the Rare Earth explanation is the most likely resolution to the Fermi Paradox. I tend to agree, but like you, I think that effective altruists generally underestimate the risk from aliens.

However, I would caution against a few assumptions you made in the article. The first is the assumption that aliens would be anything like what the movies show: rogue civilizations restricted to quadrants of the galaxy. As many have pointed out, a civilization with artificial superintelligence would likely be able to colonize the entire galaxy within just a few million years, which means that if aliens with advanced artificial intelligence existed, we probably would have seen evidence of them already. Of course, maybe they're hiding, but now you're running up against Occam's razor.

The second assumption is that we can affect the state of affairs of civilizations at our stage of development. Even granting the generous assumption that we could share useful knowledge with aliens at our stage of development, it is unlikely that we would ever find aliens at exactly our stage. A civilization just decades younger than ours would be impossible to contact, since it would not yet have radio, and a civilization just centuries more advanced would probably have artificial intelligence already.

Comment author: Matthew_Barnett 14 December 2017 04:50:10AM 4 points

Just a thought: if you think that earning-to-give is a good strategy, then this is one of the best things you can do as an effective altruist. To put things in perspective, if you donated $50,000 a year to an effective charity for 20 years, you would be doing just about as much good as merely leaving a good comment in that thread. I hope that helps to internalize just what's at stake here.
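(A rough sketch of the implied comparison, assuming the $50,000 is an annual donation rather than a one-off total:

\[
\$50{,}000 \,/\, \text{year} \times 20 \text{ years} = \$1{,}000{,}000
\]

i.e., one sufficiently good comment in that thread would be comparable to directing roughly a million dollars to effective charities.)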

Just make sure that the Pineapple Fund doesn't generate animosity towards EA. If it takes 100 good reasons to change someone's mind, it takes only one really bad one to turn them away. The person doing the giveaway said that they are interested in the SENS Foundation, which is pretty good evidence that they care about the long-term future. We might be able to do the most good if we focus our efforts on that cause area specifically.