Comment author: MikeJohnson 25 July 2017 05:36:44PM *  2 points [-]

Functionalism seems internally consistent (although perhaps too radically skeptical). However, in my view it also seems to lead to some flavor of moral nihilism; consciousness anti-realism makes suffering realism difficult/complicated.

If you had a precise enough theory about the functional role and source of suffering, then this would be a functionalist theory that specified objective criteria for the presence of suffering.

I think whether suffering is a 'natural kind' is prior to this analysis: e.g., to precisely/objectively explain the functional role and source of something, it needs to have a precise/crisp/objective existence.

I've always assumed that anyone who has thought seriously about philosophy of mind has acknowledged that functionalism has major deficiencies and is at best our "least wrong" placeholder theory until somebody comes up with something better.

Part of my reason for writing this critique is to argue that functionalism isn't a useful theory of mind, because it doesn't do what we need theories of mind to do (adjudicate disagreements in a principled way, especially in novel contexts).

If it is a placeholder, then I think the question becomes, "what would 'something better' look like, and what would count as evidence that something is better?" I'd love to get your (and FRI's) input here.

Comment author: Kaj_Sotala 25 July 2017 11:17:19PM *  1 point [-]

I think whether suffering is a 'natural kind' is prior to this analysis: e.g., to precisely/objectively explain the functional role and source of something, it needs to have a precise/crisp/objective existence.

I take this to mean that you agree that accepting functionalism is orthogonal to the question of whether suffering is "real" or not?

If it is a placeholder, then I think the question becomes, "what would 'something better' look like, and what would count as evidence that something is better?"

What something better would look like - if I knew that, I'd be busy writing a paper about it. :-) That seems to be a part of the problem - everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like. Off the top of my head, I'd like such a theory to at least be able to offer some insight into what exactly is conscious, and not have the issue where you can hypothesize all kinds of weird computations (like Aaronson did in your quote) and be left confused about which of them are conscious and which are not, and why. (Roughly, my desiderata are similar to Luke Muehlhauser's.)

Comment author: SoerenMind 25 July 2017 11:15:43AM 0 points [-]

I see quite a bunch of relevant cognitive science work these days, e.g. this: http://saxelab.mit.edu/resources/papers/Kleiman-Weiner.etal.2017.pdf

Comment author: Kaj_Sotala 25 July 2017 01:10:42PM 0 points [-]

That's super-neat! Thanks.

Comment author: MikeJohnson 23 July 2017 08:57:34PM 2 points [-]

My sense is that MIRI and FHI are fairly strong believers in functionalism, based on reading various pieces on LessWrong, personal conversation with people who work there, and 'revealed preference' research directions. OpenPhil may be more of a stretch to categorize in this way; I'm going off what I recall of Holden's debate on AI risk, some limited personal interactions with people that work there, and Luke Muehlhauser's report (he was up-front about his assumptions on this).

Of course it's harder to pin down what people at these organizations believe than it is in Brian's case, since Brian writes a great deal about his views.

So to my knowledge, this statement is essentially correct, although there may be definitional & epistemological quibbles.

Comment author: Kaj_Sotala 25 July 2017 11:01:35AM *  3 points [-]

Wait, are you equating "functionalism" with "doesn't believe suffering can be meaningfully defined"? I thought your criticism was mostly about the latter; I don't think it's automatically implied by the former. If you had a precise enough theory about the functional role and source of suffering, then this would be a functionalist theory that specified objective criteria for the presence of suffering.

(You could reasonably argue that it doesn't look likely that functionalism will provide such a theory, but then I've always assumed that anyone who has thought seriously about philosophy of mind has acknowledged that functionalism has major deficiencies and is at best our "least wrong" placeholder theory until somebody comes up with something better.)

Comment author: Kaj_Sotala 22 July 2017 08:22:05AM 1 point [-]

Another discussion and definition of autonomy, by philosopher John Danaher:

Many books and articles have been written on the concept of ‘autonomy’. Generations of philosophers have painstakingly identified necessary and sufficient conditions for its attainment, subjected those conditions to revision and critique, scrapped their original accounts, started again, given up and argued that the concept is devoid of meaning, and so on. I cannot hope to do justice to the richness of the literature on this topic here. Still, it’s important to have at least a rough and ready conception of what autonomy is and the most general (and hopefully least contentious) conditions needed for its attainment.

I have said this before, but I like Joseph Raz’s general account. Like most people, he thinks that an autonomous agent is one who is, in some meaningful sense, the author of their own lives. In order for this to happen, he says that three conditions must be met:

Rationality condition: The agent must have goals/ends and must be able to use their reason to plan the means to achieve those goals/ends.

Optionality condition: The agent must have an adequate range of options from which to choose their goals and their means.

Independence condition: The agent must be free from external coercion and manipulation when choosing and exercising their rationality.

I have mentioned before that you can view these as ‘threshold conditions’, i.e. conditions that simply have to be met in order for an agent to be autonomous, or you can have a slightly more complex view, taking them to define a three dimensional space in which autonomy resides. In other words, you can argue that an agent can have more or less rationality, more or less optionality, and more or less independence. The conditions are satisfied in degrees. This means that agents can be more or less autonomous, and the same overall level of autonomy can be achieved through different combinations of the relevant degrees of satisfaction of the conditions. That’s the view I tend to favour. I think there possibly is a minimum threshold for each condition that must be satisfied in order for an agent to count as autonomous, but I suspect that the cases in which this threshold is not met are pretty stark. The more complicated cases, and the ones that really keep us up at night, arise when someone scores high on one of the conditions but low on another. Are they autonomous or not? There may not be a simple ‘yes’ or ‘no’ answer to that question.

Anyway, using the three conditions we can formulate the following ‘autonomy principle’ or ‘autonomy test’:

Autonomy principle: An agent’s actions are more or less autonomous to the extent that they meet the (i) rationality condition; (ii) optionality condition and (iii) independence condition.

Comment author: Wei_Dai 21 July 2017 03:39:04PM 6 points [-]

What would you say are the philosophical or other premises that FRI does accept (or tends to assume in its work), which distinguish it from other people/organizations working in a similar space such as MIRI, OpenAI, and QRI? Is it just something like "preventing suffering is the most important thing to work on (and the disjunction of assumptions that can lead to this conclusion)"?

It seems to me that a belief in anti-realism about consciousness explains a lot of Brian's (near) certainty about his values and hence his focus on suffering. People who are not so sure about consciousness anti-realism tend to be less certain about their values as a result, and hence don't focus on suffering as much. Does this seem right, and if so, can you explain what premises led you to work for FRI?

Comment author: Kaj_Sotala 21 July 2017 07:54:03PM *  8 points [-]

Rather than put words in the mouths of other people at FRI, I'd rather let them personally answer which philosophical premises they accept and what motivates them, if they wish.

For me personally, I've just had, for a long time, the intuition that preventing extreme suffering is the most important priority. To the best that I can tell, much of this intuition can be traced to having suffered from depression and general feelings of crushing hopelessness for large parts of my life, and wanting to save anyone else from experiencing a similar (or worse!) magnitude of suffering. I seem to recall that I was less suffering-focused before I started getting depressed for the first time.

Since then, that intuition has been reinforced by reading up on other suffering-focused works; something like tranquilism feels like a sensible theory to me, especially given some of my own experiences with meditation which are generally compatible with the kind of theory of mind implied by tranquilism. That's something that has come later, though.

To clarify, none of this means that I would only value suffering prevention: I'd much rather see a universe-wide flourishing civilization full of minds in various states of bliss, than a dead and barren universe. My position is more of a prioritarian one: let's first take care of everyone who's experiencing enormous suffering, and make sure none of our descendants are going to be subject to that fate, before we start thinking about colonizing the rest of the universe and filling it with entirely new minds.

Comment author: Kaj_Sotala 20 July 2017 11:10:53PM *  10 points [-]

This looks sensible to me. I'd just quickly note that I'm not sure if it's quite accurate to describe this as "FRI's metaphysics", exactly - I work for FRI, but haven't been sold on the metaphysics that you're criticizing. In particular, I find myself skeptical of the premise "suffering is impossible to define objectively", which you largely focus on. (Though part of this may be simply because I haven't yet properly read/considered Brian's argument for it, so it's possible that I would change my mind about that.)

But in any case, I've currently got three papers in various stages of review, submission or preparation (that other FRI people have helped me with), and none of those papers presuppose this specific brand of metaphysics. There's a bunch of other work being done, too, which I know of and which I don't think presupposes it. So it doesn't feel quite accurate to me to suggest that the metaphysics would be holding back our progress, though of course there can be some research being carried out that's explicitly committed to this particular metaphysics.

(opinions in this comment purely mine, not an official FRI statement etc.)

Comment author: casebash 17 July 2017 03:18:35AM *  2 points [-]

A few thoughts:

  • If you believe that existential risk is literally the most important issue in the world and that we will be facing possible extinction events imminently, then it follows that we can't wait to develop a mass movement and that we need to find a way to make the small, exceptional group strategy work (although we may also spread low-level EA, but not as our focus)
  • I suspect that most EAs would agree that spreading low-level EA is worthwhile. The first question is whether this should be the focus/a major focus (as noted above). The second question is whether this should occur within EA or be a spin-off/a set of spin-offs. For example, I would really like to see an Effective Environmentalism movement.
  • Some people take issue with the name Effective Altruism because it implies that everything else is Ineffective Altruism. Your suggestion might mitigate this to a certain extent, but we really need better names!

Comment author: Kaj_Sotala 17 July 2017 10:02:17AM 3 points [-]

I agree that if one thinks that x-risk is an immediate concern, then one should focus specifically on that now. This is explicitly a long-term strategy, so assumes that there will be a long term.

Comment author: John_Maxwell_IV 17 July 2017 04:23:26AM *  6 points [-]

Does anyone know which version of your analogy early science actually looked like? I don't know very much about the history of science, but it seems worth noting that science is strongly associated with academia, which is famous for being exclusionary & elitist. ("The scientific community" is almost synonymous with "the academic science community".)

Did science ever call itself a "movement" the way EA calls itself a movement? My impression is that the skeptic movement (the thing that spreads scientific ideas and attitudes through society at large) came well after science proved its worth. If broad scientific attitudes were a prerequisite for science, that predicts that the popular atheism movement should have come several centuries sooner than it did.

If one's goal is to promote scientific progress, it seems like you're better off focusing on a few top people who make important discoveries. There's plausibly something similar going on with EA.

I'm somewhat confused that you list the formation of many groups as a benefit of broad mindset spread, but then say that we should try to achieve the formation of one very large group (that of "low-level EA"). If our goal is many groups, maybe it would be better to just create many groups? If our goal is to spread particular memes, why not the naive approach of trying to achieve positions of influence in order to spread those particular memes?

The current situation WRT growth of the EA movement seems like it could be the worst of both worlds. The EA movement does marketing, but we also have discussions internally about how exclusive to be. So people hear about EA because of the marketing, but they also hear that some people in the EA movement think that maybe the EA movement should be too exclusive to let them in. We'd plausibly be better off if we adopted a compromise position of doing less marketing and also having fewer discussions about how exclusive to be.

Growth is a hard-to-reverse decision. Companies like Google are very selective about who they hire because firing people is bad for morale. The analogy here is that instead of "firing" people from EA, we're better off if we don't do outreach to those people in the first place.

[Highly speculative]: One nice thing about companies and universities is that they have a clear, well-understood inclusion/exclusion mechanism. In the absence of such a mechanism, you can get concentric circles of inclusion/exclusion and associated internal politics. People don't resent Harvard for rejecting them, at least not for more than a month or two. But getting a subtle cold shoulder from people in the EA community will produce a lasting negative impression. Covert exclusiveness feels worse than overt exclusiveness, and having an official party line that "the EA movement must be welcoming to everyone" will just cause people to be exclusive in a more covert way.

Comment author: Kaj_Sotala 17 July 2017 10:00:47AM 2 points [-]

I'm somewhat confused that you list the formation of many groups as a benefit of broad mindset spread, but then say that we should try to achieve the formation of one very large group (that of "low-level EA"). If our goal is many groups, maybe it would be better to just create many groups?

I must have expressed myself badly somehow - I specifically meant that "low-level EA" would be composed of multiple groups. What gave you the opposite impression?

For example, the current situation is that organizations like the Centre for Effective Altruism and Open Philanthropy Project are high-level organizations: they are devoted to finding the best ways of doing good in general. At the same time, organizations like Centre for the Study of Existential Risk, Animal Charity Evaluators, and Center for Applied Rationality are low-level organizations, as they are each devoted to some specific cause area (x-risk, animal welfare, and rationality, respectively). We already have several high- and low-level EA groups, and spreading the ideas would ideally cause even more of both to be formed.

If our goal is to spread particular memes, why not the naive approach of trying to achieve positions of influence in order to spread those particular memes?

This seems completely compatible with what I said? On my own behalf, I'm definitely interested in trying to achieve a position of higher influence to better spread these ideas.

Comment author: Taylor 16 July 2017 05:38:41PM *  3 points [-]

Really appreciate you taking the time to write this up! My initial reaction is that the central point about mindset-shifting seems really right.

My proposal is to explicitly talk about two kinds of EA (these may need catchier names)

It seems (to me) that “low-level” and “high-level” could read as value-laden in a way that might make people practicing “low-level” EA (especially in cause areas not already embraced by lots of other EAs) feel like they’re not viewed as “real” EAs, and so work at cross-purposes with the tent-broadening goal of the proposal. Quick brainstorm of terms that make some kind of descriptive distinction instead:

  1. cause-blind EA vs. cause-specific or cause-limited EA
  2. broad EA vs. narrow EA
  3. inter-cause vs. intra-cause

(Thoughts/views only my own, not my employer’s.)

Comment author: Kaj_Sotala 17 July 2017 09:49:46AM 0 points [-]

"General vs. specific" could also be one

Comment author: Carl_Shulman 17 July 2017 12:51:38AM 10 points [-]

Ian David Moss has a post on this forum arguing for things along the lines of 'EA for the rich country fine arts' and other such restricted scope versions of EA.

My biggest objection to this is that, to stay in line with people's habitual activities, the rationales for the restricted scope have to be very gerrymandered (perhaps too much to be credible if stated explicitly), and optimizing within that restricted objective function may pick out things that are overall bad, e.g. the recent media discussion comparing interventions purely in terms of their carbon emissions without taking anything else into account, suggesting that the existence of a member of a society with GDP per capita of $56,000 is bad if it includes carbon emissions with a social cost of $2,000 per person.

Comment author: Kaj_Sotala 17 July 2017 09:48:53AM *  1 point [-]

Ian David Moss has a post on this forum arguing for things along the lines of 'EA for the rich country fine arts' and other such restricted scope versions of EA.

Thanks for the link! I did a quick search to find if someone had already said something similar, but missed that.

My biggest objection to this is that, to stay in line with people's habitual activities, the rationales for the restricted scope have to be very gerrymandered (perhaps too much to be credible if stated explicitly), and optimizing within that restricted objective function may pick out things that are overall bad,

I'm not sure whether the first one is really an issue - just saying that "these are general tools that you can use to improve whatever it is that you care about, and if you're not sure what you care about, you can also apply the same concepts to find that" seems reasonable enough to me, and not particularly gerrymandered.

I do agree that optimizing too specifically within some narrow domain can be a problem that produces results that are globally undesirable, though.
