Comment author: Kerry_Vaughan 07 July 2017 05:45:21AM 16 points [-]

This was the most illuminating piece on MIRI's work and on AI safety in general that I've read in some time. Thank you for publishing it.

Comment author: Benito 07 July 2017 05:57:42AM *  6 points [-]

Agreed! It was nice to see the clear output of someone who had put a lot of time and effort into a good-faith understanding of the situation.

I was really happy with the layout of the four key factors; it will help me bring more clarity to further discussions.

Comment author: kierangreig 28 June 2017 03:55:57PM *  8 points [-]

(1) To what degree did your beliefs about the consciousness of insects (if insects are too broad a category, please just focus on the common fruit fly) change from completing this report, and what were the main reasons for those beliefs changing? I would be particularly interested in an answer that covers the following three points: (i) the rough probability that you previously assigned to them being conscious, (ii) the rough probability that you now assign to them being conscious, and (iii) the main reasons for the change in that probability.

(2) Do you assign a 0% probability to electrons being conscious?

(3) In section 5.1 you write

I’d like to get more feedback on this report from long-time “consciousness experts” of various kinds. (So far, the only long-time “consciousness expert” from which I’ve gotten extensive feedback is David Chalmers.)

David Chalmers seems like an interesting choice for the one long-time “consciousness expert” to receive extensive feedback from. Why was he the only one you got extensive feedback from? And of the other consciousness experts you would like extensive feedback from, do you think most of them would disagree with some part of the report in similar ways? If so, what would those disagreements be?

(4) A while ago Carl Shulman put out this document detailing research advice. Can you please do the same, or, if you already have a document like this, point me to it? I would probably find it useful, and I would guess some others would too.

Comment author: Benito 28 June 2017 06:03:50PM 2 points [-]

(Meta: It might be more helpful to submit individual questions as separate comments, so that people can upvote them separately and people's favourite questions (and associated answers) can rise to the top.)

Comment author: Benito 28 June 2017 04:46:37PM 1 point [-]

I was confused by the issue regarding diet qualia. Does the argument reduce to answering this question: “Could explaining away all the individual properties of conscious experience ever add up to a complete explaining-away of consciousness?” (In my understanding, the weak illusionists say that it couldn't, the strong illusionists say that it could, and the non-illusionists say that this process can't even get started.)

Comment author: Benito 28 June 2017 04:15:22PM 9 points [-]

Has OpenPhil (and in particular Lewis Bollard), to your knowledge, altered any grant recommendations based on your report, and if so, how?

Comment author: Benito 28 June 2017 04:04:29PM *  0 points [-]

It seems to me (based only on looking through your report and having read one or two books in the field) that many of the better theories of consciousness (e.g. multiple drafts) were formed by philosophers through the following process:

  • Introspect and notice a phenomenon occurring in their conscious experience that they don't believe has any known explanation
  • Propose a cognitive mechanism to explain it
  • Call this their explanation of consciousness

Firstly, does this seem like an accurate characterisation of how some of the stronger consciousness theories have been produced?

Secondly, do I correctly understand your hypothetical ‘agenda for producing a theory of consciousness’ (from Appendix B) to be iterating the first two steps of this process, with the idea that in the limit it should account for all the explananda of consciousness (whilst significantly improving the process by (a) writing a program that fits the theory, (b) using said program to make predictions, and (c), instead of largely introspecting yourselves, gathering the mass introspections of many people)?

Comment author: Benito 28 June 2017 04:02:19PM 0 points [-]

What outputs/deliverables do you think you’d get from your hypothetical ‘consciousness’ agenda (from Appendix B), and what resources (time/staff/money) do you think would be required to achieve them? For example, might you (ambitiously) think that this agenda could move the field of consciousness studies into an agreed paradigm (à la your reference to Kuhn)?

Comment author: Benito 28 June 2017 04:00:41PM *  2 points [-]

You mention that a further project might be to attempt to make the case that chimpanzees aren’t conscious and that Gazami crabs are, to confirm your suspicion that you could in fact make a plausible case for each. Could you outline what such cases might look like (knowing that you can’t provide the output of an investigation you haven’t performed)? What evidence would you be looking into that isn’t already in this report (e.g. would it mainly be information on how their cognition in particular is similar to or differs from human cognition)?

Comment author: SteveGreidinger 22 June 2017 05:07:20AM 0 points [-]

This is a good start on some of the issues, but it needs to be bulked up with information directly from neuroscientists.

For instance, some very senior people in the Stanford neuroscience community think that an essential difference between animals and people may be that the astrocytes, "helper cells," are so very different. Among many other things, astrocytes help to create and destroy synapses.

Neuroscientists also routinely do mouse experiments, and a few have very sophisticated answers to ethical questions about what they do.

There are a lot of topics in EA ethics that benefit from a grounding in neuroscience and neuroethics. Both of these fields also contain many EA opportunities themselves. If money is being put down, then it's time to add some expert scientific opinion.

Comment author: Benito 22 June 2017 05:49:51PM *  1 point [-]

I think that certain arguments from neuroscience were definitely considered: see the extended section on 'necessary and sufficient conditions', which looks at the cortex-required view, and the section right before that on 'potentially consciousness-indicating factors', which looks at 'neuroanatomical similarity' and has a whole appendix associated with it. These two would probably cover the types of argument you're making, even if they don't address your specific mechanism, so pointing out what he missed in the relevant sections would probably be helpful.

Comment author: MichaelDickens 03 June 2017 06:46:47AM 4 points [-]

Now that you mention it, I think this would be a much more interesting way to divide up funds. I have basically no idea whether AI safety or anti-factory-farming interventions are more important, but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense). I would much rather donate to a fund run by a person who shares my values and beliefs than to a fund for a specific cause area, because I'm likely to change my mind about which cause area is best, and perhaps the fund manager will too, and that's okay.

Some possible axes:

  1. life-improving vs. life-saving (or, similarly, total view vs. person-affecting view)
  2. safe bets vs. moonshots
  3. suffering-focused vs. "classical"
  4. short-term vs. far future

Having all possible combinations along just these four axes would already require 2^4 = 16 funds, though, so in practice this won't work exactly as I've described.
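Just to make the combinatorics concrete, here's a minimal sketch (the axis labels are only abbreviations of the list above, not proposed fund names):

    from itertools import product

    # The four binary axes listed above (labels abbreviated).
    axes = [
        ("life-improving", "life-saving"),
        ("safe bets", "moonshots"),
        ("suffering-focused", "classical"),
        ("short-term", "far future"),
    ]

    funds = list(product(*axes))
    print(len(funds))  # 2**4 = 16 distinct fund profiles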

Comment author: Benito 04 June 2017 09:26:01PM *  1 point [-]

Yup! I've always seen 'animals v poverty v xrisk' not as three random areas, but as three optimal areas given different philosophies:

  • poverty = only short term
  • animals = all conscious suffering matters + only short term
  • xrisk = long term matters

I'd be happy to see other philosophical positions considered.

Comment author: Benito 23 May 2017 10:44:53PM 3 points [-]

My understanding of why MIRI's expected returns didn't come out on top is that you have a strong prior against any org being able to do that much good, and because MIRI's expected impact was so high variance (i.e. uncertain), it didn't cause your model to update in any particular direction very much.
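(To check I'm picturing the right mechanism, here's a minimal toy version of that kind of update, assuming a normal prior and a noisy normal estimate; the numbers are made up and this needn't be the model you actually used:)

    # Toy normal-normal Bayesian update; all numbers are illustrative.
    prior_mean, prior_var = 1.0, 1.0        # strong prior: a typical org does ~1 unit of good
    estimate, estimate_var = 1000.0, 1e6    # huge but extremely uncertain impact estimate

    # The posterior mean is a precision-weighted average of prior and estimate.
    posterior_mean = (prior_mean / prior_var + estimate / estimate_var) / (
        1 / prior_var + 1 / estimate_var
    )
    print(posterior_mean)  # ~1.001: the enormous estimate barely moves the posterior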

What confuses me is this: it feels like if I hadn't thought of astronomical waste / xrisk and had found a great org like AMF, hearing those arguments should make me update strongly towards thinking I'm looking at the wrong areas. Yet the fact that the high potential cancels out against your prior means I could've been right the whole time, even before I took far future considerations into account.

Which seems implausible. The whole point of astronomical waste is that you should update your probability of being able to have an outsized impact.

I'm not sure which part of your model I'm disagreeing with, but I'd appreciate hearing it if you do.
