Comment author: MikeJohnson 15 December 2016 05:37:50AM 0 points [-]

I generally agree with this-- getting it right eventually is the most important thing; getting it wrong for 100 years could be horrific, but not an x-risk.

I do worry some that "trusted reflection process" is a sufficiently high-level abstraction so as to be difficult to critique.

Interesting piece by Christiano, thanks! I would also point to a remark I made above: doing this sort of ethical clarification now (if indeed it's tractable) will pay dividends in aiding coordination between organizations such as MIRI, DeepMind, etc. Conversely, not clarifying goals, consciousness, moral value, and so on seems likely to increase the risk of a race to be the first to develop AGI, and of secrecy & distrust between organizations.

A lot does depend on tractability.

Comment author: Jessica_Taylor 16 December 2016 03:53:46AM *  1 point [-]

I agree that:

  1. clarifying "what should people who gain a huge amount of power through AI do with Earth, existing social structures, and the universe?" seems like a good question to get agreement on for coordination reasons
  2. we should be looking for tractable ways of answering this question

I think:

a) consciousness research will fail to clarify ethics enough to answer enough of (1) to achieve coordination (since I think human preferences on the relevant timescales are way more complicated than consciousness, conditioned on consciousness being simple).
b) it is tractable to answer (1) without reaching agreement on object-level values, by doing something like designing a temporary global government structure that most people agree is pretty good (in that it will allow society to reflect appropriately and determine the next global government structure), but that this question hasn't been answered well yet and that a better answer would improve coordination. E.g. perhaps society is run as a global federalist democratic-ish structure with centralized control of potentially destructive technology (taking into account "how voters would judge something if they thought longer" rather than "how voters actually judge something"; this might be possible if the AI alignment problem is solved). It seems quite possible to create proposals of this form and critique them.

It seems like we disagree about (a) and this disagreement has been partially hashed out elsewhere, and that it's not clear we have a strong disagreement about (b).

Comment author: MikeJohnson 11 December 2016 03:43:52PM *  1 point [-]

It seems like you're saying here that there won't be clean rules for determining logical counterfactuals? I agree this might be the case but it doesn't seem clear to me. Logical counterfactuals seem pretty confusing and there seems to be a lot of room for better theories about them.

Right, and I would argue that logical counterfactuals (in the way we've mentioned them in this thread) will necessarily be intractably confusing, because they're impossible to do cleanly. I say this because in the "P & C" example above, we need a frame-invariant way to interpret a change in C in terms of P. However, we can only have such a frame-invariant way if there exists a clean mapping (injection, surjection, bijection, etc.) between P and C, which I think we can't have, even theoretically.

(Unless we define both physics and computation through something like constructor theory... at which point we're not really talking about Turing machines as we know them-- we'd be talking about physics by another name.)

This is a big part of the reason why I'm a big advocate of trying to define moral value in physical terms: if we start with physics, then we know our conclusions will 'compile' to physics. If instead we start with the notion that 'some computations have more moral value than others', we're stuck with the problem (an intractable one, I argue) that we don't have a frame-invariant way to precisely identify which computations are happening in any physical system (and likewise, which aren't). I.e., statements about computations will never cleanly compile to physical terms. And whenever we have multiple incompatible interpretations, we necessarily get inconsistencies, and we can prove anything is true (i.e., we can prove any arbitrary physical system is superior to any other).

Does that argument make sense?

... that said, it would seem very valuable to make a survey of possible levels of abstraction at which one could attempt to define moral value, and their positives & negatives.

I think we have a lot more theoretical progress to make on understanding consciousness and ethics. On priors I'd expect the theoretical progress to produce more-satisfying things over time without ever producing a complete answer to ethics. Though of course I could be wrong here; it seems like intuitions vary a lot. It seems more likely to me that we find a simple unifying theory for consciousness than ethics.

Totally agreed!

Comment author: Jessica_Taylor 11 December 2016 10:21:55PM *  1 point [-]

However, we can only have such a frame-invariant way if there exists a clean mapping (injection, surjection, bijection, etc.) between P and C, which I think we can't have, even theoretically.

I'm still not sure why you strongly think there's no principled way; it seems hard to prove a negative. I mentioned that we could make progress on logical counterfactuals; there's also the approach Chalmers talks about here. (I buy that there's reason to suspect there's no principled way if you're not impressed by any proposal so far).

And whenever we have multiple incompatible interpretations, we necessarily get inconsistencies, and we can prove anything is true (i.e., we can prove any arbitrary physical system is superior to any other).

I don't think this follows. The universal prior is not objective; you can "prove" that any bit probably follows from a given sequence, by changing your reference machine. But I don't think this is too problematic. We just accept that some things don't have a super clean objective answer. The reference machines that make odd predictions (e.g. that 000000000 is probably followed by 1) look weird, although it's hard to precisely say what's weird about them without making reference to another reference machine. I don't think this kind of non-objectivity implies any kind of inconsistency.
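That reference-machine dependence can be seen in a deliberately tiny sketch. Everything here is hypothetical: a real universal prior mixes over all programs of a universal machine, whereas this toy mixes just two hand-picked hypotheses, with weights standing in for 2^-(program length) on each machine:

```python
# Two hypotheses about the bit sequence; a 'reference machine' is
# modeled here as just an assignment of program-length weights.
hypotheses = {
    "all_zeros": "0000000000",   # predicts the next bit is 0
    "flip_last": "0000000001",   # predicts the next bit is 1
}

def predict(weights, prefix="000000000"):
    """Probability that the next bit is 1, mixing the hypotheses
    consistent with the observed prefix, weighted by the given
    machine's weights."""
    mass = {h: w for h, w in weights.items()
            if hypotheses[h].startswith(prefix)}
    total = sum(mass.values())
    return sum(w for h, w in mass.items()
               if hypotheses[h][len(prefix)] == "1") / total

# Machine A finds 'all_zeros' short; a 'weird' machine B finds
# 'flip_last' short. Each weight plays the role of 2**-(length).
machine_a = {"all_zeros": 2**-3, "flip_last": 2**-20}
machine_b = {"all_zeros": 2**-20, "flip_last": 2**-3}

assert predict(machine_a) < 0.01   # A: next bit is almost surely 0
assert predict(machine_b) > 0.99   # B: next bit is almost surely 1
```

Both machines are internally consistent; B just looks weird to us, which is the point being made above.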

Similarly, even if objective approaches to computational interpretations fail, we could get a state where computational interpretations are non-objective (e.g. defined relative to a "reference machine") and the reference machines that make very weird predictions (like the popcorn implementing a cat) would look super weird to humans. This doesn't seem like a fatal flaw to me, for the same reason it's not a fatal flaw in the case of the universal prior.

Comment author: MikeJohnson 10 December 2016 08:51:49PM 1 point [-]

Physical system P embeds computation C if and only if P has different behavior counterfactually on C taking on different values

I suspect this still runs into the same problem-- in the case of the computational-physical mapping, even if we assert that C has changed, we can merely choose a different interpretation of P which is consistent with the change, without actually changing P.

If it turns out that no simple rule of this form works, I wouldn't be too troubled, though; I'd be psychologically prepared to accept that there isn't a clean quarks<->computations mapping. Similar to how I already accept that human value is complex, I could accept that human judgments of "does this physical system implement this computation" are complex (and thus can't be captured in a simple rule)

This is an important question: if there exists no clean quarks<->computations mapping, is it (a) a relatively trivial problem, or (b) a really enormous problem? I'd say the answer to this depends on how we talk about computations. I.e., if we say "the ethically-relevant stuff happens at the computational level" -- e.g., we shouldn't compute certain strings-- then I think it grows to be a large problem. This grows particularly large if we're discussing how to optimize the universe! :)

I think these are all useful ways of viewing ethics, and I don't feel the need to pick a single view (although I often find it appealing to look at what some views say about what other views are saying and resolving the contradictions between them). There are all kinds of reasons why it might be psychologically uncomfortable not to have a simple theory of ethics (e.g. it's harder to know whether you're being ethical, it's harder to criticize others for being unethical, it's harder for groups to coordinate around more complex and ambiguous ethical theories, you'll never be able to "solve" ethics once and then never have to think about ethics again, it requires holding multiple contradictory views in your head at once, you won't always have a satisfying verbal justification for why your actions are ethical). But none of this implies that it's good (in any of the senses above!) to assume there's a simple ethical theory.

Let me push back a little here: imagine we live in the early 1800s, and Faraday is attempting to formalize electromagnetism. We had plenty of intuitive rules of thumb for how electromagnetism worked, but no consistent, overarching theory. I'm sure lots of people shook their heads and said things like, "these things are just God's will; there's no pattern to be found." However, it turns out there was something unifying to be found, and tolerance of inconsistencies & nebulosity would have been counter-productive.

Today, we have intuitive rules of thumb for how we think consciousness & ethics work, but similarly no consistent, overarching theory. Are consciousness & moral value like electromagnetism-- things that we can discover knowledge about? Or are they like elan vital-- reifications of clusters of phenomena that don't always cluster cleanly?

I think the jury's still out here, but the key with electromagnetism was that Faraday was able to generate novel, falsifiable predictions with his theory. I'm not claiming to be Faraday, but I think if we can generate novel, falsifiable predictions with work on consciousness & valence (I offer some in Section XI, and observations that could be adapted to make falsifiable predictions in Section XII), this should drive updates toward "there's some undiscovered cache of predictive utility here, similar to what Faraday found with electromagnetism."

Comment author: Jessica_Taylor 10 December 2016 09:01:39PM *  1 point [-]

I suspect this still runs into the same problem-- in the case of the computational-physical mapping, even if we assert that C has changed, we can merely choose a different interpretation of P which is consistent with the change, without actually changing P.

It seems like you're saying here that there won't be clean rules for determining logical counterfactuals? I agree this might be the case but it doesn't seem clear to me. Logical counterfactuals seem pretty confusing and there seems to be a lot of room for better theories about them.

This is an important question: if there exists no clean quarks<->computations mapping, is it (a) a relatively trivial problem, or (b) a really enormous problem? I'd say the answer to this depends on how we talk about computations. I.e., if we say "the ethically-relevant stuff happens at the computational level" -- e.g., we shouldn't compute certain strings-- then I think it grows to be a large problem. This grows particularly large if we're discussing how to optimize the universe! :)

I agree that it would be a large problem. The total amount of effort to "complete" the project of figuring out which computations we care about would be practically infinite, but with a lot of effort we'd get better and better approximations over time, and we would be able to capture a lot of moral value this way.

Let me push back a little here

I mostly agree with your push back; I think when we have different useful views of the same thing that's a good indication that there's more intellectual progress to be made in resolving the contradictions between the different views (e.g. by finding a unifying theory).

I think we have a lot more theoretical progress to make on understanding consciousness and ethics. On priors I'd expect the theoretical progress to produce more-satisfying things over time without ever producing a complete answer to ethics. Though of course I could be wrong here; it seems like intuitions vary a lot. It seems more likely to me that we find a simple unifying theory for consciousness than ethics.

Comment author: MikeJohnson 10 December 2016 09:01:04AM *  2 points [-]

Awesome, I do like your steelman. More thoughts later, but just wanted to share one notion before sleep:

With regard to computationalism, I think you've nailed it. Downward causation seems pretty obviously wrong (and I don't know of any computationalists that personally endorse it).

IMO consciousness has power over physics the same way the Python program has power over physics

Totally agreed, and I like this example.

it's consistent to view the system as a physical system without reference to the Python program

Right, but I would go even further. Namely, given any non-trivial physical system, there exist multiple equally valid interpretations of what's going on at the computational level. The example I give in PQ is: let's say I shake a bag of popcorn. With the right mapping, we could argue that we could treat that physical system as simulating the brain of a sleepy cat. However, given another mapping, we could treat that physical system as simulating the suffering of five holocausts. Very worryingly, we have no principled way to choose between these interpretive mappings. Am I causing suffering by shaking that bag of popcorn?

And I think all computation is like this, if we look closely: there exists no frame-invariant way to map between computations and physical systems in a principled way... just useful mappings and non-useful mappings (and 'useful' is very frame-dependent).
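A minimal sketch of the mapping argument above (everything here is a hypothetical toy; real interpretive mappings would be over full physical microstates, not short labels):

```python
def make_mapping(physical_trace, computational_trace):
    """Build a post-hoc lookup table that 'interprets' each physical
    state as the corresponding computational state. As long as the
    physical trace never repeats a state, such a mapping always
    exists, for ANY target computation of the same length."""
    assert len(physical_trace) == len(computational_trace)
    return dict(zip(physical_trace, computational_trace))

# A 'bag of popcorn': an arbitrary sequence of distinct physical states.
popcorn = ["p0", "p1", "p2", "p3"]

# Two unrelated computations we might claim it implements.
sleepy_cat = ["purr", "doze", "stretch", "doze"]
counter    = [0, 1, 2, 3]

cat_map = make_mapping(popcorn, sleepy_cat)
ctr_map = make_mapping(popcorn, counter)

# The same physical trace 'implements' both computations under
# different mappings; nothing in the physics itself picks one out.
assert [cat_map[s] for s in popcorn] == sleepy_cat
assert [ctr_map[s] for s in popcorn] == counter
```

The lookup table does all the work here, which is exactly the objection: the interpretation lives in the mapping, not in the popcorn.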

This introduces an inconsistency into computationalism, and has some weird implications: I suspect that, given any computational definition of moral value, there would be a way to prove any arbitrary physical system morally superior to any other arbitrary physical system. I.e., you could prove both that A>B, and B>A.

... I may be getting something wrong here. But it seems like the lack of a clean quarks<->bits mapping ultimately turns out to be a big deal, and is a big reason why I advocate not trying to define moral value in terms of Turing machines & bitstrings.

Instead, I tend to think of ethics as "how should we arrange the [quarks|negentropy] in our light-cone?" -- ultimately we live in a world of quarks, so ethics is a question of quarks (or strings, or whatnot).

However! Perhaps this is just a failure of my imagination. What is ethics if not how to arrange our physical world? Or can you help me steelman computationalism against this inconsistency objection?

Thanks again for the comments. They're both great and helpful.

Comment author: Jessica_Taylor 10 December 2016 06:00:24PM *  1 point [-]

Thanks for your comments too, I'm finding them helpful for understanding other possible positions on ethics.

With the right mapping, we could argue that we could treat that physical system as simulating the brain of a sleepy cat. However, given another mapping, we could treat that physical system as simulating the suffering of five holocausts. Very worryingly, we have no principled way to choose between these interpretive mappings.

OK, how about a rule like this:

Physical system P embeds computation C if and only if P has different behavior counterfactually on C taking on different values

(formalizing this rule would require a theory of logical counterfactuals; I'm not sure if I expect a fully general theory to exist but it seems plausible that one does)

I'm not asserting that this rule is correct but it doesn't seem inconsistent. In particular it doesn't seem like you could use it to prove A > B and B > A. And clearly your popcorn embeds neither a cat nor the suffering of five holocausts under this rule.
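A toy sketch of that counterfactual rule, under the simplifying (and hypothetical) assumption that "intervening on C" can be modeled as calling the system with a different value of the candidate variable:

```python
def screen_system(c):
    # A computer screen whose output genuinely depends on variable c.
    return ["pixel_on" if c else "pixel_off" for _ in range(3)]

def popcorn_system(c):
    # Popcorn: its behavior ignores c entirely.
    return ["pop", "pop", "pop"]

def embeds(system):
    """P embeds C iff intervening on C changes P's behavior
    (the counterfactual criterion proposed above)."""
    return system(True) != system(False)

assert embeds(screen_system)       # the screen counterfactually depends on c
assert not embeds(popcorn_system)  # the popcorn does not
```

The hard part, of course, is what "calling the system with a different value" means for a real physical system; that is where a theory of logical counterfactuals would be needed.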

If it turns out that no simple rule of this form works, I wouldn't be too troubled, though; I'd be psychologically prepared to accept that there isn't a clean quarks<->computations mapping. Similar to how I already accept that human value is complex, I could accept that human judgments of "does this physical system implement this computation" are complex (and thus can't be captured in a simple rule). I don't think this would make me inconsistent, I think it would just make me more tolerant of nebulosity in ethics. At the moment it seems like clean mappings might exist and so it makes sense to search for them, though.

Instead, I tend to think of ethics as "how should we arrange the [quarks|negentropy] in our light-cone?" -- ultimately we live in a world of quarks, so ethics is a question of quarks (or strings, or whatnot).

On the object level, it seems like it's possible to think of painting as "how should we arrange the brush strokes on the canvas?". But it seems hard to paint well while only thinking at the level of brush strokes (and not thinking about the higher levels, like objects). I expect ethics to be similar; at the very least if human ethics has an "aesthetic" component then it seems like designing a good light cone is at least as hard as making a good painting. Maybe this is a strawman of your position?

On the meta level, I would caution against this use of "ultimately"; see here and here (the articles are worded somewhat disagreeably but I mostly endorse the content). In some sense ethics is about quarks, but in other senses it's about:

  1. computations
  2. aesthetics
  3. the id, ego, and superego
  4. deciding which side to take in a dispute
  5. a conflict between what we want and what we want to appear to want
  6. nurturing the part of us that cares about others
  7. updateless decision theory
  8. a mathematical fact about what we would want upon reflection

I think these are all useful ways of viewing ethics, and I don't feel the need to pick a single view (although I often find it appealing to look at what some views say about what other views are saying and resolving the contradictions between them). There are all kinds of reasons why it might be psychologically uncomfortable not to have a simple theory of ethics (e.g. it's harder to know whether you're being ethical, it's harder to criticize others for being unethical, it's harder for groups to coordinate around more complex and ambiguous ethical theories, you'll never be able to "solve" ethics once and then never have to think about ethics again, it requires holding multiple contradictory views in your head at once, you won't always have a satisfying verbal justification for why your actions are ethical). But none of this implies that it's good (in any of the senses above!) to assume there's a simple ethical theory.

(For the record I think it's useful to search for simple ethical theories even if they don't exist, since you might discover interesting new ways of viewing ethics, even if these views aren't complete).

Comment author: Jessica_Taylor 10 December 2016 06:08:07AM *  1 point [-]

some more object-level comments on PQ itself:

We can say that a high-level phenomenon is strongly emergent​ with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain.

Suppose we have a Python program running on a computer. Truths about the Python program are, in some sense, reducible to physics; however the reduction itself requires resolving philosophical questions. I don't know if this means the Python program's functioning (e.g. values of different variables at different times) are "strongly emergent"; it doesn't seem like an important question to me.

Downward causation​ means that higher-level phenomena are not only irreducible but also exert a causal efficacy of some sort. … [This implies] low-level laws will be incomplete as a guide to both the low-level and the high-level evolution of processes in the world.

In the case of the Python program this seems clearly false (it's consistent to view the system as a physical system without reference to the Python program). I expect this is also false in the case of consciousness. I think almost all computationalists would strongly reject downwards causation according to this definition. Do you know of any computationalists who actually advocate downwards causation (i.e. that you can't predict future physical states by just looking at past physical states without thinking about the higher levels)?

IMO consciousness has power over physics the same way the Python program has power over physics; we can consider counterfactuals like "what if this variable in the Python program magically had a different value" and ask what would happen to physics if this happened (in this case, maybe the variable controls something displayed on a computer screen, so if the variable were changed then the computer screen would emit different light). Actually formalizing questions like "what would happen if this variable had a different value" requires a theory of logical counterfactuals (which MIRI is researching, see this paper).

Notably, Python programs usually don't "make choices" such that "control" is all that meaningful, but humans do. Here I would say that humans implement a decision theory, while most Python programs do not (although some Python programs do implement a decision theory and can be meaningfully said to "make choices"). "Implementing a decision theory" means something like "evaluating multiple actions based on what their effects are expected to be, and choosing the one that scores best according to some metric"; some AI systems like reinforcement learners implement a decision theory.
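That notion of "implementing a decision theory" can be sketched in a few lines (the thermostat world and all names here are hypothetical, invented for the sketch):

```python
def choose(actions, predict_effect, utility):
    """Evaluate each candidate action's predicted effect and return
    the action scoring highest under the utility metric; this is the
    minimal sense of 'making a choice' described above."""
    return max(actions, key=lambda a: utility(predict_effect(a)))

# Toy world: actions set a thermostat; utility peaks at 21 degrees.
predict_effect = lambda setting: setting          # effect = temperature
utility = lambda temp: -abs(temp - 21)

best = choose([17, 19, 21, 23], predict_effect, utility)
assert best == 21
```

Most Python programs contain no such evaluate-and-select loop, which is why it usually isn't meaningful to say they "control" anything.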

(I'm writing this comment to express more "computationalism has a reasonable steelman that isn't identified as a possible position in PQ" rather than "computationalism is clearly right")

Comment author: Jessica_Taylor 10 December 2016 06:46:36AM 3 points [-]

(more comments)

Thus, we would need to be open to the possibility that certain interventions could cause a change in a system’s physical substrate (which generates its qualia) without causing a change in its computational level (which generates its qualia reports)

It seems like this means that empirical tests (e.g. neuroscience stuff) aren't going to help test aspects of the theory that are about divergence between computational pseudo-qualia (the things people report on) and actual qualia. If I squint a lot I could see "anthropic evidence" being used to distinguish between pseudo-qualia and qualia, but it seems like nothing else would work.

I'm also not sure why we would expect pseudo-qualia to have any correlation with actual qualia? I guess you could make an anthropic argument (we're viewing the world from the perspective of actual qualia, and our sensations seem to match the pseudo-qualia). That would give someone the suspicion that there's some causal story for why they would be synchronized, without directly providing such a causal story.

(For the record I think anthropic reasoning is usually confused and should be replaced with decision-theoretic reasoning (e.g. see this discussion), but this seems like a topic for another day)

Comment author: MikeJohnson 10 December 2016 05:03:39AM *  3 points [-]

Hi Jessica,

Thanks for the thoughtful note. I do want to be very clear that I’m not criticizing MIRI’s work on CEV, which I do like very much! - It seems like the best intuition pump & Schelling Point in its area, and I think it has potential to be more.

My core offering in this space (where I expect most of the value to be) is Principia Qualia- it’s more up-to-date and comprehensive than the blog post you’re referencing. I pose some hypotheticals in the blog post, but it isn’t intended to stand alone as a substantive work (whereas PQ is).

But I had some thoughts in response to your response on valence + AI safety:

->1. First, I agree that leaving our future moral trajectory in the hands of humans is a great thing. I’m definitely not advocating anything else.

->2. But I would push back on whether our current ethical theories are very good- i.e., good enough to see us through any future AGI transition without needlessly risking substantial amounts of value.

To give one example: currently, some people claim that animals such as cows are much more capable of suffering than humans, because they don’t have much intellect to blunt their raw, emotional feeling. Other people claim that cows are much less capable of suffering than humans, because they don’t have the ‘bootstrapping strange loop’ mind architecture that is enabled by language and necessary for consciousness. Worryingly, both of these arguments seem plausible, with no good way to pick between them.

Now, I don’t think cows are in a strange quantum superposition of both suffering and not suffering— I think there’s a fact of the matter, though we clearly don’t know it.

This example may have moral implications, but little relevance to existential risk. However, when we start talking about mind simulations and ‘thought crime’, WBE, selfish replicators, and other sorts of tradeoffs where there might be unknown unknowns with respect to moral value, it seems clear to me that these issues will rapidly become much more pressing. So, I absolutely believe work on these topics is important, and quite possibly a matter of survival. (And I think it's tractable, based on work already done.)

Based on my understanding, I don’t think Act-based agents or Task AI would help resolve these questions by default, although as tools they could probably help.

->3. I also think theories in IIT’s reference class won’t be correct, but I suspect I define the reference class much differently. :) Based on my categorization, I would object to lumping my theory into IIT’s reference class (we could talk more about this if you'd like).

->4. Re: suffering computations- a big, interesting question here is whether moral value should be defined at the physical or computational level. I.e., “is moral value made out of quarks or bits (or something else)?” — this may be the crux of our disagreement, since I’m a physicalist and I gather you’re a computationalist. But PQ’s framework allows for bits to be “where the magic happens”, as long as certain conditions obtain.

One factor that bears mentioning is whether an AGI’s ontology & theory of ethics might be path-dependent upon its creators’ metaphysics in such a way that it would be difficult for it to update if it’s wrong. If this is a plausible concern, this would imply a time-sensitive factor in resolving the philosophical confusion around consciousness, valence, moral value, etc.

->5. I wouldn’t advocate strictly hedonic values (this was ambiguous in the blog post but is clearer in Principia Qualia).

->6. However, I do think that “how much horrific suffering is there in possible world X?” is a hands-down, qualitatively better proxy for whether it’s a desirable future than “what is the Dow Jones closing price in possible world X?”

->7. Re: neuromorphic AIs: I think an interesting angle here is, “how does boredom stop humans from wireheading on pleasurable stimuli?” - I view boredom as a sophisticated anti-wireheading technology. It seems possible (although I can’t vouch for plausible yet) that if we understand the precise mechanism by which boredom is implemented in human brains, it may help us understand and/or control neuromorphic AGIs better. But this is very speculative, and undeveloped.


One factor that bears mentioning is whether an AGI’s ontology & theory of ethics might be path-dependent upon its creators’ metaphysics in such a way that it would be difficult for it to update if it’s wrong. If this is a plausible concern, this would imply a time-sensitive factor in resolving the philosophical confusion around consciousness, valence, moral value, etc.

->5. I wouldn’t advocate strictly hedonic values (this was ambiguous in the blog post but is clearer in Principia Qualia).

->6. However, I do think that “how much horrific suffering is there in possible world X?” is a hands-down, qualitatively better proxy for whether it’s a desirable future than “what is the Dow Jones closing price in possible world X?”

->7. Re: neuromorphic AIs: I think an interesting angle here is, “how does boredom stop humans from wireheading on pleasurable stimuli?” - I view boredom as a sophisticated anti-wireheading technology. It seems possible (although I can’t vouch for plausible yet) that if we understand the precise mechanism by which boredom is implemented in human brains, it may help us understand and/or control neuromorphic AGIs better. But this is very speculative, and undeveloped.

Comment author: Jessica_Taylor 10 December 2016 05:35:25AM *  2 points [-]

Thanks for the response; I've found this discussion useful for clarifying and updating my views.

However, when we start talking about mind simulations and ‘thought crime’, WBE, selfish replicators, and other sorts of tradeoffs where there might be unknown unknowns with respect to moral value, it seems clear to me that these issues will rapidly become much more pressing. So, I absolutely believe work on these topics is important, and quite possibly a matter of survival. (And I think it's tractable, based on work already done.)

Suppose we live under the wrong moral theory for 100 years. Then we figure out the right moral theory, and live according to that one for the rest of time. How much value is lost in that 100 years? It seems very high but not an x-risk. It seems like we only get x-risks if somehow we don't put a trusted reflection process (e.g. human moral philosophers) in control of the far future.

It seems quite sensible for people who don't put overwhelming importance on the far future to care about resolving moral uncertainty earlier. The part of my morality that isn't exclusively concerned with the far future strongly approves of things like consciousness research that resolve moral uncertainty earlier.

Based on my understanding, I don’t think Act-based agents or Task AI would help resolve these questions by default, although as tools they could probably help.

Act-based agents and task AGI kick the problem of global governance to humans. Humans still need to decide questions like how to run governments; they'll be able to use AGI to help them, but governing well is still a hard problem even with AI assistance. The goal would be that moral errors are temporary; with the right global government structure, moral philosophers will be able to make moral progress and have their moral updates reflected in how things play out.

It's possible that you think that governing the world well enough that the future eventually reflects human values is very hard even with AGI assistance, and would be made easier with better moral theories made available early on.

One factor that bears mentioning is whether an AGI’s ontology & theory of ethics might be path-dependent upon its creators’ metaphysics in such a way that it would be difficult for it to update if it’s wrong. If this is a plausible concern, this would imply a time-sensitive factor in resolving the philosophical confusion around consciousness, valence, moral value, etc.

I agree with this but place low probability on the antecedent. It's kind of hard to explain briefly; I'll point to this comment thread for a good discussion (I mostly agree with Paul).

But now that I think about it more, I don't put super low probability on the antecedent. It seems like it would be useful to have some way to compare different universes that we've failed to put in control of trusted reflection processes, to e.g. get ones that have less horrific suffering or more happiness. I place high probability on "distinguishing between such universes is as hard as solving the AI alignment problem in general", but I'm not extremely confident of that and I don't have a super precise argument for it. I guess I wouldn't personally prioritize such research compared to generic AI safety research but it doesn't seem totally implausible that resolving moral uncertainty earlier would reduce x-risk for this reason.

Comment author: kbog  (EA Profile) 10 December 2016 04:21:00AM 0 points [-]
Comment author: Jessica_Taylor 10 December 2016 05:09:03AM *  1 point [-]

I expect:

  1. We would lose a great deal of value by optimizing the universe according to current moral uncertainty, without the opportunity to reflect and become less uncertain over time.

  2. There's a great deal of reflection necessary to figure out what actions moral theory X recommends, e.g. to figure out which minds exist or what implicit promises people have made to each other. I don't see this reflection as distinct from reflection about moral uncertainty; if we're going to defer to a reflection process anyway for making decisions, we might as well let that reflection process decide on issues of moral theory.

Comment author: MikeJohnson 09 December 2016 05:20:05PM *  6 points [-]

Thanks for the comment! I think the time-sensitivity of this research is an important claim, as you say.

My impression of how MIRI currently views CEV is that it's 'a useful intuition pump, but not something we should currently plan to depend on for heavy lifting'. In the last MIRI AMA, Rob noted that

I discussed CEV some in this answer. I think the status is about the same: sounds like a vaguely plausible informal goal to shoot for in the very long run, but also very difficult to implement. As Eliezer notes in https://arbital.com/p/cev/, "CEV is rather complicated and meta and hence not intended as something you'd do with the first AI you ever tried to build." The first AGI systems people develop should probably have much more limited capabilities and much more modest goals, to reduce the probability of catastrophic accidents.

As an intuition pump, rough sketch, or placeholder, I really like CEV. What I'm worried about is that discussion of CEV generally happens in "far mode", and there's very probably work that could and should be done now in order to evaluate how plausible CEV is, and explore alternatives. Four reasons not to depend too much on CEV alone:

  1. CEV is really hard. This seems consistent with what Rob & Eliezer have said.

  2. CEV may not be plausible. A failure mode acknowledged in the original document is that preferences may never cohere, but I would add that CEV may simply be too underdefined & ambiguous to be useful in many cases. E.g., a "preference" is sometimes a rather leaky abstraction to begin with. A lot of possibilities look reasonable from far away but not up close, and CEV might be one of these.

  3. CEV may give bad answers. It seems entirely possible that any specific implementation of CEV would unavoidably include certain undesirable systemic biases. More troublingly, maybe preference utilitarianism is just a bad way to go about ethics (I think this is true, personally).

  4. Research into qualia may help us get CEV right. If we define the landscape of consciousness as the landscape within which morally-significant things happen, then understanding this landscape better should help us see how CEV could (or couldn’t) help us navigate it.

Aside from these CEV-specific concerns, I think research into consciousness & valence could have larger benefits to AI safety- I wrote up some thoughts on this last year at http://opentheory.net/2015/09/fai_and_valence/ .

Rather than time-sensitivity, another way to frame this could be path-dependence based on order of technological development. Do we get better average & median futures if we attempt to build AI without worrying much about qualia, or if we work on both at once?

(Granted, even if this research is all I say it is, there are potential pitfalls of technological development down this path.)

Comment author: Jessica_Taylor 09 December 2016 11:10:33PM *  3 points [-]

Some thoughts:

IMO the most plausible non-CEV proposals are

  1. Act-based agents, which defer to humans to a large extent. The goal is to keep humans in control of the future.
  2. Task AI, which is used to accomplish concrete objectives in the world. The idea would be to use this to accomplish goals people would want accomplished using AI (including reducing existential risk), while leaving the future moral trajectory in the hands of humans.

Both proposals end up deferring to humans to decide the long-run trajectory of humanity. IMO, this isn't a coincidence; I don't think it's likely that we get a good outcome without deferring to humans in the long run.

Some more specific comments:

If pleasure/happiness is an important core part of what humanity values, or should value, having the exact information-theoretic definition of it on-hand could directly and drastically simplify the problems of what to maximize, and how to load this value into an AGI

There's one story where this makes a little bit of sense, where we basically give up on satisfying any human values other than hedonic values, and build an AI that maximizes pleasure without satisfying any other human values. I'm skeptical that this is any easier than solving the full value alignment problem, but even if it were, I think this would be undesirable to the vast majority of humans, and so we would collectively be better off coordinating around a higher target.

If we're shooting for a higher target, then we have some story for why we get more values than just hedonic values. E.g. the AI defers to human moral philosophers on some issues. But this method should also succeed for loading hedonic values. So there isn't a significant benefit to having hedonic values specified ahead of time.

Even if pleasure isn’t a core terminal value for humans, it could still be used as a useful indirect heuristic for detecting value destruction. I.e., if we’re considering having an AGI carry out some intervention, we could ask it what the expected effect is on whatever pattern precisely corresponds to pleasure/happiness.

This seems to be in the same reference class as asking questions like "how many humans exist" or "what's the closing price of the Dow Jones". I.e. you can use it to check if things are going as expected, though the metric can be manipulated. Personally I'm pessimistic about such sanity checks in general, and even if I were optimistic about them, I would think that the marginal value of one additional sanity check is low.

There’s going to be a lot of experimentation involving intelligent systems, and although many of these systems won’t be “sentient” in the way humans are, some system types will approach or even surpass human capacity for suffering.

See Eliezer's thoughts on mindcrime. Also see the discussion in the comments. It does seem like consciousness research could help for defining a nonpersonhood predicate.

I don't have comments on cognitive enhancement since it's not my specialty.

Some of the points (6,7,8) seem most relevant if we expect AGI to be designed to use internal reinforcement substantially similar to humans' internal reinforcement and substantially different from modern reinforcement learning. I don't have precise enough models of such AGI systems that I feel optimistic about doing research related to such AGIs, but if you think questions like "how would we incentivize neuromorphic AI systems to do what we want" are tractable then maybe it makes sense for you to do research on this question. I'm pessimistic about things in the reference class of IIT making any progress on this question, but maybe you have different models here.

I agree that "Valence research could change the social and political landscape AGI research occurs in" and, like you, I think the sign is unclear.

(I am a MIRI research fellow but am currently speaking for myself not my employer).

Comment author: gsastry 12 October 2016 11:47:02PM 2 points [-]

Do you share Open Phil's view that there is a > 10% chance of transformative AI (defined as in Open Phil's post) in the next 20 years? What signposts would alert you that transformative AI is near?

Relatedly, suppose that transformative AI will happen within about 20 years (not necessarily a self improving AGI). Can you explain how MIRI's research will be relevant in such a near-term scenario (e.g. if it happens by scaling up deep learning methods)?

Comment author: Jessica_Taylor 13 October 2016 01:06:30AM *  5 points [-]

I share Open Phil’s view on the probability of transformative AI in the next 20 years. The relevant signposts would be answers to questions like “how are current algorithms doing on tasks requiring various capabilities”, “how much did this performance depend on task-specific tweaking on the part of programmers”, “how much is performance projected to improve due to increasing hardware”, and “do many credible AI researchers think that we are close to transformative AI”.

In designing the new ML-focused agenda, we imagined a concrete hypothetical (which isn’t stated explicitly in the paper): what research would we do if we knew we’d have sufficient technology for AGI in about 20 years, and this technology would be qualitatively similar to modern ML technology such as deep learning? So we definitely intend for this research agenda to be relevant to the scenario you describe, and the agenda document goes into more details. Much of this research deals with task-directed AGI, which can be limited (e.g. not self-improving).
