Comment author: Brian_Tomasik 21 February 2018 01:02:23PM 0 points [-]

I'm fairly skeptical of this personally, partly because I don't think there's a fact of the matter when it comes to whether a being is conscious.

I would guess that increasing understanding of cognitive science would generally increase people's moral circles if only because people would think more about these kinds of questions. Of course, understanding cognitive science is no guarantee that you'll conclude that animals matter, as we can see from people like Dennett, Yudkowsky, Peter Carruthers, etc.

Comment author: nobody 21 February 2018 12:21:18PM *  0 points [-]

I think your 80,000 Hours link could use more coherence. One bullet point is:

You think there is great value to preserving the Earth’s ecosystems and biodiversity.

This is not a utilitarian sentiment. Aren't you guys supposed to be utilitarians here?

I'm not particularly concerned with preserving nature for its own sake. In parks, fine, but not on a global scale. I thought this was a commonality with most people here.

If the climate cause is to be justified by its usefulness to humans, then we must first understand its effects on humans. The Sherwood and Huber paper is the strongest point I have seen on that.

Nor is heat stress of the kind they discuss accounted for by existing models. Precise models of this effect are impossible, since we know so little about it: we just don't see this effect today, and there's no data, so how can you be precise? Without flashy models you may not be able to publish your paper in a nice journal, but if we are actually interested in being useful, then a rough but passably accurate model is better than precise garbage!
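As a crude illustration of what I mean by "rough but passably accurate": the sketch below computes wet-bulb temperature from what I recall as Stull's (2011) empirical fit, and flags the ~35°C survivability threshold that Sherwood and Huber emphasize. The formula and example numbers are my own illustration, not their model.

```python
import math

def wet_bulb_stull(temp_c: float, rh_percent: float) -> float:
    """Approximate wet-bulb temperature (C) from air temperature (C) and
    relative humidity (%), using Stull's (2011) empirical fit
    (valid roughly for 5-99% RH and -20 to 50 C)."""
    T, RH = temp_c, rh_percent
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH)
            - math.atan(RH - 1.676331)
            + 0.00391838 * RH ** 1.5 * math.atan(0.023101 * RH)
            - 4.686035)

# Sherwood & Huber's headline claim: sustained wet-bulb temperatures above
# ~35 C exceed the human capacity to shed metabolic heat.
for t, rh in [(35, 50), (40, 60), (45, 70)]:
    tw = wet_bulb_stull(t, rh)
    flag = "exceeds tolerance" if tw >= 35 else "survivable"
    print(f"T={t}C, RH={rh}% -> Tw~{tw:.1f}C ({flag})")
```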

Comment author: Dunja 21 February 2018 11:56:02AM *  1 point [-]

Could we re-open this discussion in view of MIRI's achievements over the course of a year?

A recent trend of awarding relatively large research grants (large relative to some of the most prestigious research grants in the EU, such as ERC Starting Grants of ~1.5 million EUR) to projects on AI risks and safety made me curious, so I looked a bit more into this topic. What struck me as especially curious is the lack of transparency about the criteria used to evaluate the projects and to decide how to allocate the funds. Now, for the sake of this question, I am assuming that AI risks and safety is an important research topic that should be funded (to what extent it actually is, is beside my point here and deserves a discussion of its own; so let's just say it is among the most pursuit-worthy problems in view of both epistemic and non-epistemic criteria).

Particularly surprising was a sudden grant of 3.75 million USD by the Open Philanthropy Project (OPP) to MIRI (https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017). Note that the funding is more than double the amount given to ERC Starting Grant recipients. Previously, OPP awarded MIRI 500,000 USD and provided an extensive explanation of that decision (https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support). So one would expect that for a grant more than 7 times larger, we'd find at least as much. But what we do find is an extremely brief explanation saying that an anonymous expert reviewer evaluated MIRI's work as highly promising in view of their paper "Logical Induction".

Note that in the 2 years since I first saw this paper online, the very same paper has not been published in any peer-reviewed journal. Moreover, if you check MIRI's publications (https://intelligence.org/all-publications/), you find not a single journal article since 2015 (or an article published in prestigious AI conference proceedings, for that matter). Suffice it to say that I was surprised. So I decided to contact both MIRI, asking whether perhaps their publications simply hadn't been updated on their website, and OPP, asking for the evaluative criteria used when awarding this grant.

MIRI never replied (email sent on February 8). OPP took a while to reply, and today I received the following email:

"Hi Dunja,

Thanks for your patience. Our assessment of this grant was based largely on the expert reviewer's reasoning in reviewing MIRI's work. Unfortunately, we don't have permission to share the reviewer's identity or reasoning. I'm sorry not to be more helpful with this, and do wish you the best of luck with your research.

Best,

[name blinded in this public post]"

All this is very surprising given that OPP prides itself on transparency. As stated on their website (https://www.openphilanthropy.org/what-open-means-us):

"Creating detailed reports on the causes we’re investigating. Sharing notes from our information-gathering conversations. Publishing writeups and updates on a number of our grants, including our reasoning and reservations before making a grant, and any setbacks and challenges we encounter."

However, the main problem here is not a mere lack of transparency, but a lack of an effective and efficient funding policy. The question of how to decide which projects to fund in order to achieve effective and efficient knowledge acquisition has been researched within philosophy of science and science policy for decades now. Yet these very basic criteria seem absent from cases such as the one mentioned above. Not only are the criteria used non-transparent, but there has never been an open call for research groups to submit their projects, with the funding agency then deciding (via an expert panel rather than a single reviewer) which project is the most promising. Markers of reliability over the course of research are extremely important if we want to advance effective research, and a panel of experts (rather than a single expert) is extremely important for assuring the procedural objectivity of the assessment.

Altogether, this is not just surprising, but disturbing. Perhaps the biggest danger is that this falls into the hands of the press and ends up being used to argue that organizations close to effective altruism are not effective at all.

Comment author: Vidur_Kapur  (EA Profile) 21 February 2018 09:43:55AM 0 points [-]

Thank you for this piece. I enjoyed reading it and I'm glad that we're seeing more people being explicit about their cause-prioritization decisions and opening up discussion on this crucially important issue.

I know that it's a weak consideration, but before reading this I hadn't considered the argument that the scale of values spreading is larger than the scale of AI alignment (perhaps because, as you pointed out, the numbers involved in both are huge), so thanks for bringing that up.

I'm in agreement with Michael_S that hedonium and dolorium should be the most important considerations when we're estimating the value of the far future, and from my perspective the higher probability of hedonium likely does make the far future robustly positive, despite the valid points you bring up. This doesn't necessarily mean that we should focus on AIA over MCE (I don't), but it does make it more likely that we should.

Another useful contribution, though others may disagree, was the biases section: the biases that could potentially favour AIA did resonate with me, and they are useful to keep in mind.

Comment author: Matthew_Barnett 21 February 2018 08:01:40AM 1 point [-]

A very interesting and engaging article indeed.

I agree that people often underestimate the value of strategic value spreading. Oftentimes, proposed moral models that AI agents will follow have some lingering narrowness to them, even when they attempt to apply the broadest of moral principles. For instance, in Chapter 14 of Superintelligence, Bostrom highlights his common good principle:

Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.

Clearly, even something as broad as that can be controversial. Specifically, it doesn't speak at all about any non-human interests except insofar as humans express widely held beliefs to protect them.

I think one thing to add is that AIA researchers who hold more traditional moral beliefs (as opposed to wide moral circles and transhumanist beliefs) are probably less likely to believe that moral value spreading is worth much. The reason for this is obvious: if everyone around you holds, more or less, the same values that you do, then why change anyone's mind? This may explain why many people dismiss the activity you proposed.

Comment author: Matthew_Barnett 21 February 2018 07:47:37AM *  0 points [-]

Just because an event is theoretical doesn't mean it won't occur. An asteroid hitting the Earth is theoretical too, but I think you would find it quite real when it impacts.

Some say that superintelligence has no precedent, but I think that overlooks a key fact. The rise of Homo sapiens radically altered the world -- and all signs point toward intelligence as the cause. Our current understanding is that intelligence is just a matter of information processing, and therefore that it could some day be done by our own computers, if only we figured out the right algorithms to implement.

If we learn that superintelligence is impossible, that means our current best scientific theories are wrong, and we will have learned something new, because it would indicate that humans are somehow cosmically special, or at least have hit the ceiling for general intelligence. On the flip side, if we create superintelligence, none of our current theories of how the world operates need to be wrong.

That's why it's important to take it seriously: the best evidence we have available tells us that it's possible, not that it's impossible.

Comment author: Matthew_Barnett 21 February 2018 07:36:21AM 1 point [-]

But it seems that it would be very bad if everyone took this advice literally.

Fortunately, not everyone does take this advice literally :).

This is very similar to the tragedy of the commons: if everyone acts purely out of self-interest, then everyone ends up worse off. However, the situation as you describe it does not fully reflect reality, because none of the groups you mentioned are actually trying to influence AI researchers at the moment. Therefore, MCE has a decisive advantage. Of course, this is always subject to change.

In contrast, preventing the extinction of humanity seems to occupy a privileged position

I find that people will often dismiss any specific moral recommendation for AI except this one. Personally, I don't see a reason to think that there are certain universal principles of minimal alignment. You may argue that human extinction is something that almost everyone agrees is bad -- but now the principle of minimal alignment has shifted to "have the AI prevent things that almost everyone agrees are bad", which is another privileged moral judgement that I see no intrinsic reason to hold.

In truth, I see no neutral assumptions in which to ground AI alignment theory. This is made even more difficult because even differences in moral theory that are small in information-theoretic terms (as descriptions of moral values) can lead to drastically different outcomes. However, I do find hope in moral compromise.

Comment author: adamaero  (EA Profile) 21 February 2018 05:43:21AM *  -1 points [-]

"The idea of 'helping the worst off' is appealing." Why wouldn't it be? Copenhagen Consensus anyone. I realize systematic change is also important--and that's just as important in this regard too.

Their reaction when they look at extinction risk or AI safety: it's nonsensical, imaginary, and completely unknown--zero tractability. There's no evidence to go off of, since such technology does not exist. Why give to CS grad students? It's like trying to fund a mission to Mars--not a priority.

Lol. "They are generally an unhappy person." I just had to laugh and compare how one interested in AI safety matched up.


These lists were interesting in how they allude to the different psychology and motivations of EAs in each of the two camps. I hope someday I can have a civil discussion with someone not directly benefiting from AIA (such as by being involved in the research). I have a friend who's crazy about futurism and the 2045 Initiative's propaganda, and in love with everything Musk says on Twitter.


I simply do not see that individual action or donations to AIA research have measurable outcomes. We're talking about Strong AI here--it doesn't even exist! In the future, even the medium-term future, general standards of living could be significantly improved. Synthetic meat at production scale is a much more realistic research area (or even anti-malaria mosquitoes) than making a fuss about imaginary, theoretical events.

Comment author: Jacy_Reese 21 February 2018 04:13:56AM *  1 point [-]

Yeah, I think that's basically right. I think moral circle expansion (MCE) is closer to your list items than extinction risk reduction (ERR) is because MCE mostly competes in the values space, while ERR mostly competes in the technology space.

However, MCE is competing in a narrower space than just values. It's in the MC space, which is just the space of advocacy on what our moral circle should look like. So I think it's fairly distinct from the list items in that sense, though you could still say they're in the same space because all advocacy competes for news coverage, ad buys, recruiting advocacy-oriented people, etc. (Technology projects could also compete for these things, though there are separations, e.g. journalists with a social beat versus journalists with a tech beat.)

I think the comparably narrow space of ERR is ER, which also includes people who don't want extinction risk reduced (or even want it increased), such as some hardcore environmentalists, antinatalists, and negative utilitarians.

I think these are legitimate cooperation/coordination perspectives, and it's not really clear to me how they add up. But in general, I think this matters mostly in situations where you actually can coordinate. For example, when Democrats and Republicans in a US general election agree not to give to their respective campaigns (in exchange for their counterparts also not doing so). Or if there were anti-MCE EAs with whom MCE EAs could coordinate (which I think is basically what you're saying with "we'd be better off if they both decided to spend the money on anti-malaria bednets").

Comment author: Larks 21 February 2018 02:52:20AM 1 point [-]

Thanks for writing this, I thought it was a good article. And thanks to Greg for funding it.

My pushback would be on the cooperation and coordination point. It seems that a lot of other people, with other moral values, could make a very similar argument: that they need to promote their values now, as the stakes are very high with possible upcoming value lock-in. To people with those values, these arguments should seem roughly as important as the above argument is to you.

  • Christians could argue that, if the singularity is approaching, it is vitally important that we ensure the universe won't be filled with sinners who will go to hell.
  • Egalitarians could argue that, if the singularity is approaching, it is vitally important that we ensure the universe won't be filled with wider and wider diversities of wealth.
  • Libertarians could argue that, if the singularity is approaching, it is vitally important that we ensure the universe won't be filled with property rights violations.
  • Naturalists could argue that, if the singularity is approaching, it is vitally important that we ensure the beauty of nature won't be despoiled all over the universe.
  • Nationalists could argue that, if the singularity is approaching, it is vitally important that we ensure the universe will be filled with people who respect the flag.

But it seems that it would be very bad if everyone took this advice literally. We would all end up spending a lot of time and effort on propaganda, which would probably be great for advertising companies but not much else, as so much of it is zero sum. Even though it might make sense, by their values, for expanding-moral-circle people and pro-abortion people to have a big propaganda war over whether foetuses deserve moral consideration, it seems plausible we'd be better off if they both decided to spend the money on anti-malaria bednets.

In contrast, preventing the extinction of humanity seems to occupy a privileged position - not exactly comparable with the above agendas, though I can't quite cash out why it seems this way to me. Perhaps to devout Confucians a preoccupation with preventing extinction seems to be just another distraction from the important task of expressing filial piety – though I doubt this.

(Moral Realists, of course, could argue that the situation is not really symmetric, because promoting the true values is distinctly different from promoting any other values.)

Comment author: Michael_S 21 February 2018 01:35:34AM *  0 points [-]

On this topic, I similarly do still believe there’s a higher likelihood of creating hedonium; I just have more skepticism about it than I think is often assumed by EAs.

This is the main reason I think the far future is high EV. I think we should be focusing on p(Hedonium) and p(Dolorium) more than anything else. I'm skeptical that, from a hedonistic utilitarian perspective, byproducts of civilization could come close to matching the expected value of deliberately tiling the universe (potentially the multiverse) with consciousness optimized for pleasure or pain. If p(H) > p(D), the future of humanity is very likely positive EV.
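Spelled out as a minimal sketch (my framing, assuming hedonium and dolorium dominate the calculation and have roughly equal magnitude per unit of resources):

$$\mathbb{E}[V_{\text{future}}] \approx p(H)\,\bar{V}_H + p(D)\,\bar{V}_D, \qquad |\bar{V}_H| \approx |\bar{V}_D|,$$

so the sign of the expectation is driven almost entirely by whether p(H) > p(D).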

Comment author: Jacy_Reese 21 February 2018 12:55:20AM 2 points [-]

Thanks for the comment! A few of my thoughts on this:

Presumably we want some people working on both of these problems, some people have skills more suited to one than the other, and some people are just going to be more passionate about one than the other.

If one is convinced non-extinction civilization is net positive, this seems true and important. Sorry if I framed the post too much as one or the other for the whole community.

Much of the work related to AIA so far has been about raising awareness about the problem (eg the book Superintelligence), and this is more a social solution than a technical one.

Maybe. My impression from people working on AIA is that they see it as mostly technical, and indeed they think much of the social work has been net negative. Perhaps not Superintelligence, but at least the work that's been done to get media coverage and widespread attention without the technical attention to detail of Bostrom's book.

I think the more important social work (from a pro-AIA perspective) is about convincing AI decision-makers to use the technical results of AIA research, but my impression is that AIA proponents still think getting those technical results is probably the more important project.

There's also social work in coordinating the AIA community.

First, I expect clean meat will lead to the moral circle expanding more to animals. I really don't see any vegan social movement succeeding in ending factory farming anywhere near as much as I expect clean meat to.

Sure, though one big issue with technology is that it seems like we can do far less to steer its direction than we can do with social change. Clean meat tech research probably just helps us get clean meat sooner instead of making the tech progress happen when it wouldn't otherwise. The direction of the far future (e.g. whether clean meat is ever adopted, whether the moral circle expands to artificial sentience) probably matters a lot more than the speed at which it arrives.

Of course, this gets very complicated very quickly, as we consider things like value lock-in. Sentience Institute has a bit of basic sketching on the topic on this page.

Second, I'd imagine that a mature science of consciousness would increase MCE significantly. Many people don't think animals are conscious, and almost no one thinks anything besides animals can be conscious

I disagree that "many people don't think animals are conscious." I almost exclusively hear that view in from the rationalist/LessWrong community. A recent survey suggested that 87.3% of US adults agree with the statement, "Farmed animals have roughly the same ability to feel pain and discomfort as humans," and presumably even more think they have at least some ability.

Advanced neurotechnologies could change that - they could allow us to potentially test hypotheses about consciousness.

I'm fairly skeptical of this personally, partly because I don't think there's a fact of the matter when it comes to whether a being is conscious. I think Brian Tomasik has written eloquently on this. (I know this is an unfortunate view for an animal advocate like me, but it seems to have the best evidence favoring it.)

Comment author: Daniel_Eth 21 February 2018 12:23:26AM *  3 points [-]

I thought this piece was good. I agree that MCE work is likely quite high impact - perhaps around the same level as X-risk work - and that it has been generally ignored by EAs. I also agree that it would be good for there to be more MCE work going forward. Here's my 2 cents:

You seem to be saying that AIA is a technical problem and MCE is a social problem. While I think there is something to this, I think there are very important technical and social sides to both of these. Much of the work related to AIA so far has been about raising awareness about the problem (eg the book Superintelligence), and this is more a social solution than a technical one. Also, avoiding a technological race for AGI seems important for AIA, and this also is more a social problem than a technical one.

For MCE, the 2 best things I can imagine (that I think are plausible) are both technical in nature. First, I expect clean meat will lead to the moral circle expanding more to animals. I really don't see any vegan social movement succeeding in ending factory farming anywhere near as much as I expect clean meat to. Second, I'd imagine that a mature science of consciousness would increase MCE significantly. Many people don't think animals are conscious, and almost no one thinks anything besides animals can be conscious. How would we even know if an AI was conscious, and if so, whether it was experiencing joy or suffering? The only way would be if we develop theories of consciousness that we have high confidence in. But right now we're very limited in studying consciousness, because our tools for interfacing with the brain are crude. Advanced neurotechnologies could change that - they could allow us to test hypotheses about consciousness. Again, developing these technologies would be a technical problem.

Of course, these are just the first ideas that come into my mind, and there very well may be social solutions that could do more than the technical solutions I mentioned, but I don't think we should rule out the potential role of technical solutions, either.

In response to Open Thread #39
Comment author: imu96 20 February 2018 09:52:21PM 0 points [-]

Has anyone read the 80,000 Hours PDF? On page 12, I'm not sure how they arrived at the figure that a single individual needs $40,000 of individual income. They say in footnote 9 that it comes from the fact that in an average household the first single individual accounts for 1 out of 2.5 people, but then they say that, using this approximation, that individual requires about 53% as much as a typical household. Shouldn't it actually be 1/2.5 = 40% as much?
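Spelling out the arithmetic (the back-calculation of the 53% figure is just my guess at where it might come from):

$$\frac{1}{2.5} = 0.40 \qquad\text{vs.}\qquad 0.53 \approx \frac{1}{1.9}$$

So the 53% figure corresponds to dividing by roughly 1.9 rather than 2.5, which looks more like an equivalence-scale adjustment (where the first person in a household counts for more than later members) than a straight per-capita split.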

Comment author: Peter_Hurford  (EA Profile) 20 February 2018 05:05:21PM 1 point [-]

People sometimes discuss whether poverty alleviation interventions are bad for animals because richer people eat more meat. Do you think your findings affect this discussion?

More on that soon!

Comment author: avacyn 20 February 2018 03:54:48PM 0 points [-]

Really interesting and worthwhile project!

People sometimes discuss whether poverty alleviation interventions are bad for animals because richer people eat more meat. Do you think your findings affect this discussion?

Comment author: HaukeHillebrandt 20 February 2018 12:48:10PM 0 points [-]

Sorry, I missed your previous comment. I'm not an expert on climate change, and this is not necessarily the best place to discuss why this is neglected within effective altruism - I would recommend that you post your question to the Effective Altruism Hangout Facebook group and ask for an answer. The reason you get downvoted is that you post on many different threads even when it's not really related to the discussion. I would recommend reading this before posting, though: https://80000hours.org/2016/05/how-can-we-buy-more-insurance-against-extreme-climate-change/

However, here are my two cents:

  • everybody here agrees that climate change is an important problem
  • the 'wet bulb' phenomenon is known, and mortality from heat stroke is included in most assessments of the overall cost of climate change (see https://www.givingwhatwecan.org/cause/climate-change/ https://www.givingwhatwecan.org/report/climate-change-2/ https://www.givingwhatwecan.org/report/modelling-climate-change-cost-effectiveness/)

  • most scientists agree that the most likely outcome is not that the whole planet will be pretty much uninhabitable. However, there is a chance of this, and extreme risks from climate change are a topic that many people in the EA community care about (see https://80000hours.org/problem-profiles/)
  • you don't propose a particular intervention, but rather highlight a particular bad effect of climate change. There's more active discussion on what the best thing we can do about climate change is, rather than on listing its various effects (https://www.givingwhatwecan.org/report/ccl/ https://www.givingwhatwecan.org/report/cool-earth/)
  • in effective altruism, we also look at 'neglectedness'. Many people work on climate change; fewer care about risks from emerging technology (https://80000hours.org/problem-profiles/). This is why climate change is not more of a priority area.

Comment author: HaukeHillebrandt 20 February 2018 12:08:32PM 0 points [-]

Thanks for asking for clarification - I'm sorry, I think I've been unclear about the mechanism. It's not really about shareholder activism; that is just an extra.

I've now added to the introduction a few graphs and a spreadsheet as a toy model of why mission hedging beats a strategy that maximizes financial returns. Can you take a look and see whether it's clearer now? Or maybe I'm missing your question.
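To give a flavour of the mechanism (a minimal sketch with made-up numbers, not the actual spreadsheet): the hedged portfolio pays off most in exactly the worlds where each donated dollar matters most, so it can win on expected impact even while losing on expected return.

```python
# Toy model: two equally likely future states. In the "bad" state the problem
# we care about gets worse, so each extra dollar donated there does more good.
p_bad = 0.5

# Gross returns of two strategies in each state (illustrative numbers only).
returns = {
    "max_return":    {"bad": 1.25, "good": 1.25},  # higher expected return
    "mission_hedge": {"bad": 1.50, "good": 0.90},  # pays off when things go badly
}

# Marginal impact per dollar donated in each state.
impact_per_dollar = {"bad": 3.0, "good": 1.0}

for name, r in returns.items():
    expected_return = p_bad * r["bad"] + (1 - p_bad) * r["good"]
    expected_impact = (p_bad * r["bad"] * impact_per_dollar["bad"]
                       + (1 - p_bad) * r["good"] * impact_per_dollar["good"])
    print(f"{name}: E[return]={expected_return:.2f}, E[impact]={expected_impact:.2f}")

# max_return:    E[return]=1.25, E[impact]=2.50
# mission_hedge: E[return]=1.20, E[impact]=2.70  (more impact despite lower return)
```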

Comment author: SiebeRozendal 20 February 2018 10:32:37AM 0 points [-]

Sure! Here it is.

Comment author: MichaelPlant 20 February 2018 09:41:35AM 1 point [-]

Unsure why this was downvoted. I assume it's because many EAs think X-risk is a better bet than aging research. That would be a reason to disagree with a comment, but not to downvote, which is snarky. I upvoted for balance.
