In response to Introducing Enthea
Comment author: Lukas_Gloor 11 August 2017 12:10:01AM *  2 points

This blog post seems relevant. Admittedly it's labelled 'speculative' by the author, but I find the concerns plausible.

Comment author: Wei_Dai 22 July 2017 10:06:39AM *  6 points

The one view that seems unusually prevalent within FRI, apart from people self-identifying with suffering-focused values, is a particular anti-realist perspective on morality and moral reasoning where valuing open-ended moral reflection is not always regarded as the "prudent" thing to do by default.

Thanks for pointing this out. I've noticed this myself in some of FRI's writings, and I'd say this, along with the high degree of certainty on various object-level philosophical questions that presumably causes the disvaluing of reflection about them, is what most "turns me off" about FRI. I worry a lot about potential failures of goal preservation (i.e., value drift) too, but because I'm highly uncertain about just about every meta-ethical and normative question, I see no choice but to try to design some sort of reflection procedure that I can trust enough to hand off control to. In other words, I have nothing I'd want to "lock in" at this point, and since I'm by default constantly handing off control to my future self with few safeguards against value drift, doing something better than that default is one of my highest priorities. If other people are also uncertain and place high value on (safe/correct) reflection as a result, that helps with my goal, because we can then pool resources to work out what safe/correct reflection is. So it's regrettable to see FRI people sometimes argue for more certainty than I think is warranted, and especially to see them argue against reflection.

Comment author: Lukas_Gloor 22 July 2017 10:49:58AM 1 point

That makes sense. I do think that, as a general policy, valuing reflection is more positive-sum, and if one does not feel like much is "locked in" yet, it becomes very natural, too. I'm not saying that people who value reflection more than I do are doing it wrong; I would even argue that reflection is very important and recommend it to new people, if I felt more comfortable that they'd end up pursuing things that are beneficial from all or most plausible perspectives. What I do find regrettable is that the "default" interventions said to be good from as many perspectives as possible often do not seem great from a suffering-focused perspective.

Comment author: kbog 21 July 2017 04:20:14PM *  6 points

I think the choice of a metaethical view is less important than you think. Anti-realism is frequently a much richer view than just talk about preferences. It says that our moral statements aren't truth-apt, but that alone doesn't mean they're merely about preferences. Anti-realists can give accounts of why a rigorous moral theory is justified and is the right one to follow, much as realists can. Conversely, you could even be a moral realist who believes that moral status boils down to which computations you happen to care about. Anyway, the point is that anti-realists can take pretty much any view in normative ethics, and justify those views in mostly the same ways that realists tend to justify theirs (i.e., with reasons other than personal preference). Just because we're no longer talking about whether a moral principle is true doesn't mean we can no longer use the same basic reasons and arguments for or against that principle; those reasons will just have a different meaning.

Plus, physicalism is a weaker assertion than the view that consciousness is merely a matter of computation or information processing. Consciousness could be reducible to physical phenomena without being reducible to computational steps. (ETA: this is probably what most physicalists think.)

Comment author: Lukas_Gloor 22 July 2017 12:33:58AM 2 points

I agree with this.

Comment author: Wei_Dai 21 July 2017 03:39:04PM 6 points

What would you say are the philosophical or other premises that FRI does accept (or tends to assume in its work), which distinguish it from other people and organizations working in a similar space, such as MIRI, OpenAI, and QRI? Is it just something like "preventing suffering is the most important thing to work on (and the disjunction of assumptions that can lead to this conclusion)"?

It seems to me that a belief in anti-realism about consciousness explains a lot of Brian's (near) certainty about his values and hence his focus on suffering. People who are not so sure about consciousness anti-realism tend to be less certain about their values as a result, and hence don't focus on suffering as much. Does this seem right, and if so, can you explain what premises led you to work for FRI?

Comment author: Lukas_Gloor 21 July 2017 11:16:58PM *  9 points

Is it just something like "preventing suffering is the most important thing to work on (and the disjunction of assumptions that can lead to this conclusion)"?

This sounds right. Before 2016, I would have said that rough value alignment (normatively "suffering-focused") is close to necessary, but we updated away from this condition and have for quite some time now held the view that it is not essential if people are otherwise a good fit. We still expect researchers to think about research-relevant background assumptions in ways that are not completely different from ours on every issue, but single disagreements are practically never a dealbreaker. We've had qualia realists both on the team (part-time) and as interns, and some team members now don't hold strong views on the issue one way or the other. Brian especially is a really strong advocate of epistemic diversity and goes much further with it than I feel most people would go.

People who are not so sure about consciousness anti-realism tend to be less certain about their values as a result, and hence don't focus on suffering as much.

Hm, this does not fit my observations. We had, and still have, people on our team who don't have strong confidence in either view, and there is also a sizeable cluster of people who seem highly confident in both qualia realism and morality being about reducing suffering, the most notable example being David Pearce.

The one view that seems unusually prevalent within FRI, apart from people self-identifying with suffering-focused values, is a particular anti-realist perspective on morality and moral reasoning where valuing open-ended moral reflection is not always regarded as the "prudent" thing to do by default. This is far from a consensus, and many team members value moral reflection a great deal, but many of us expect less "work" to be done by value-reflection procedures than others in the EA movement seem to expect. Perhaps this is due to different ways of thinking about extrapolation procedures, or perhaps it's due to our having made stronger lock-ins to certain aspects of our moral self-image.

Paul Christiano's indirect normativity write-up, for instance, deals with the "Is 'Passing the Buck' Problematic?" objection in a way I find unsatisfying. Working towards a situation where everyone has much more time to think about their values is more promising the more likely it is that there is "much to be gained," normatively. But this somewhat begs the question: if one finds suffering-focused views very appealing, other interventions become more promising. There seems to be high value of information in narrowing down one's moral uncertainty in this domain (much more so, arguably, than with questions of consciousness or which computations to morally care about). One way to attempt to reduce one's moral uncertainty, and capitalize on that value of information, is to think more about the object-level arguments in population ethics. Another is to think more about the value of moral reflection itself: how much it depends on intuitions or self-image-based "lock-ins," versus how much it (either in general or in one's personal case) is based on other things that are more receptive to information gains or intelligence gains.

Personally, I would be totally eager to place the fate of "Which computations count as suffering?" into the hands of some in-advance-specified reflection process, even though I don't understand how moral reflection would work out in the details of this complex algorithm. I'm less confident in my current understanding of consciousness than in my ability to pick a reassuring-seeming way of delegating the decision-making to smarter advisors. However, I get the opposite feeling when it comes to questions of population ethics. There, I feel like I have thought about the issue a lot, and I experience it as easier and more straightforward to think about than consciousness or whether I care about insects or electrons or Jupiter brains. I have some strong intuitions, and aspects of my self-identity, bound up in the matter, and I am unsure in which legitimate ways (as opposed to failures of goal preservation) I could gain evidence that would strongly change my mind. It would feel wrong to me to place the fate of my values into some in-advance-specified, open-ended deliberation algorithm where I won't really understand how it will play out and what initial settings make which kind of difference to the end result (and why). I'd be fine with quite "conservative" reflection procedures where I could be confident the output would not be too far from my current thinking, but I would be gradually more worried about more open-ended ones.

Comment author: Lukas_Gloor 21 July 2017 08:57:37AM *  10 points

Brian's view is maybe best described as eliminativism about consciousness (which may already seem counterintuitive to many) plus an unusual way of drawing boundaries in concept space. Luke Muehlhauser said the following about Brian's way of assigning non-zero moral relevance to any process that remotely resembles aspects of our concept of consciousness:

"Mr. Tomasik’s view [...] amounts to pansychism about consciousness as an uninformative special case of “pan-everythingism about everything."

See this conversation.

So the disagreement there does not appear to be about questions such as "What produces people's impression of there being a hard problem of consciousness?", but rather about whether anything that is "non-infinitely separated in multi-dimensional concept space" still deserves some (tiny) recognition as fitting the definition. As Luke says here, the concept "consciousness" works more like "life" (= fuzzy) and less like "water" (= H2O), so if one shares this view, it becomes non-trivial to come up with an all-encompassing definition.
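To make the graded-membership idea concrete, here is a toy sketch in Python (my own illustration, not Brian's actual procedure; the feature axes and the exponential decay are assumptions chosen for clarity): moral weight falls off with distance from a human reference point in a space of consciousness-related features, but never reaches zero for anything at finite distance.

```python
import math

def moral_weight(features, reference, scale=1.0):
    """Return a weight in (0, 1]: 1 at zero distance, tiny but nonzero far away."""
    distance = math.dist(features, reference)  # Euclidean distance (Python 3.8+)
    return math.exp(-distance / scale)

# Hypothetical feature axes: (self-model, nociception-like signaling, reportability)
human = (1.0, 1.0, 1.0)
insect = (0.1, 0.6, 0.0)
thermostat = (0.0, 0.05, 0.0)

print(moral_weight(insect, human))      # modest weight
print(moral_weight(thermostat, human))  # very small, but never exactly zero
```

On this picture, the substantive disagreement is about the choice of feature axes and decay function, not about whether some sharp cutoff exists, which is what makes hard boundaries in concept space so difficult to motivate.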

While most researchers at FRI (my impression, anyway, as someone who works there) place the highest credence on functionalism and eliminativism, there is more skepticism about Brian's inclination never to draw hard boundaries in concept space.

Comment author: lukeprog 28 June 2017 06:42:28PM 0 points

I probably have thoughts on this, but first: Can you say more about what would count as "two systems in conflict"? E.g. would a mere competition among neural signals count? Or would it have to be something more "sophisticated," in a certain way? Also, is the "secondary layer" you're talking about also meant to be "hidden", or are you talking about a "phenomenally conscious" second layer?

Comment author: Lukas_Gloor 28 June 2017 08:01:30PM 0 points

I was thinking about a secondary layer that is hidden as well.

E.g. would a mere competition among neural signals count? Or would it have to be something more "sophisticated," in a certain way?

Hard to say. On Brian's view of similarities in multi-dimensional concept space, the competition among neural signals may already qualify to an interesting degree. But let's say we are interested in something slightly more sophisticated, though not so sophisticated that we're inclined to regard it as "not hidden." (Maybe it would qualify if the hidden nociceptive signals alter subconscious dispositions in interesting ways, though it depends on what that would look like and how it compares to what is going on introspectively with the suffering we have conscious access to.)

Comment author: Lukas_Gloor 28 June 2017 05:35:09PM *  3 points

One thing I found extremely nice about your report is that it could serve EAs (and people in general) as a basis for shared terminology in discussions! If two people from different backgrounds wanted to have a discussion about philosophy of mind or animal consciousness, which texts would you recommend they both read in order to prepare themselves? (Not so much for familiarity with popular terminology, but rather for useful terminology.) Can you think of anything really good that is shorter than this report?

Comment author: Lukas_Gloor 28 June 2017 05:29:28PM *  0 points

Are you aware of any "hidden" (nociception-related?) cognitive processes that could be described as "two systems in conflict"? I find the hidden-qualia view very plausible, but I also find it plausible that I might settle on a view of moral relevance where what matters about pain is not the "raw feel" (or "intrinsic undesirability," in Drescher's words), but a kind of secondary layer of "judgment," in the sense of "wanting things to change/be different" or "not accepting some mental component/input." I'm wondering whether most of the processes that would constitute hidden qualia are too simple to fit this phenomenological description...

Comment author: Lukas_Gloor 28 June 2017 05:14:25PM 0 points

Did you always find illusionism plausible, or was there a moment where it "clicked," or was it just a gradual progression? Do you think reading more about neuroscience makes people more sympathetic to it?

Do you think the p-zombie thought experiment can be helpful for explaining the difference between illusionism and realism ("classic qualia" mapping onto the position "p-zombies are conceivable"), or do you find that it is unfair or often leads discussions astray?

Comment author: Lukas_Gloor 10 November 2016 11:16:30PM *  14 points

I have definitely become less interested in politics since I started identifying as an EA or utilitarian. But then Switzerland passed some ridiculous xenophobic propositions, and Brexit happened, and now Trump. And every time, I had this worry in the back of my mind that we're doing something wrong.

Carl mentioned "Misallocating a huge mass of idealists' human capital to donation for easily measurable things and away from more effective things elsewhere, sabotages more effective do-gooding for a net worsening of the world" here. This point doesn't just apply to money, but also very much to attention and activism. And the bias may not just be towards things that are easily measurable, but there may also be a bias away from "current" or "urgent" events. These events shape public discourse, which could have important flow through effects. What's the effect if altruistic and driven people disproportionally stop caring about current events and the discussions that surround them?

Perhaps it's negligible, but it's certainly worth thinking about more. And I was glad to see how much attention the recent votes got within EA.
