Comment author: Kaj_Sotala 16 March 2018 01:02:36PM 1 point

The following is roughly how I think about it:

If I am in a situation where I need help, then for purely selfish reasons, I would prefer people-who-are-capable-of-helping-me to act in whatever way has the highest probability of helping me, because I obviously want my probability of getting help to be as high as possible.

Let's suppose that, as in your original example, I am one of three people who need help, and someone is thinking about whether to act in a way that helps one person, or to act in a way that helps two people. Well, if they act in a way that helps one person, then I have a 1/3 chance of being that person; and if they act in a way that helps two people, then I have a 2/3 chance of being one of those two people. So I would prefer them to act in a way that helps as many people as possible.
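For what it's worth, the arithmetic here checks out under random assignment of who gets helped; here's a quick simulation (my own toy sketch, not part of the original argument — all names are illustrative):

```python
import random

def chance_i_get_helped(num_helped, num_needing_help=3, trials=100_000):
    """Estimate the probability that one particular person ("me",
    index 0) is among the `num_helped` people a helper randomly
    picks out of `num_needing_help` people who need help."""
    hits = 0
    for _ in range(trials):
        chosen = random.sample(range(num_needing_help), num_helped)
        if 0 in chosen:
            hits += 1
    return hits / trials

# Helping one of three people: I get helped about 1/3 of the time;
# helping two of three: about 2/3 of the time.
print(chance_i_get_helped(1))  # ≈ 0.33
print(chance_i_get_helped(2))  # ≈ 0.67
```

So, under the stated assumptions, a policy of helping as many people as possible straightforwardly maximizes any one person's chance of being among the helped.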

I would guess that most people, if they need help and are willing to accept help, would also want potential helpers to act in such a way that maximizes their probability of getting help.

Thus, to me, reason and empathy would say that the best way to respect the desires of people who want help is to maximize the number of people you are helping.

Comment author: Kaj_Sotala 12 January 2018 03:01:14PM *  1 point

Hi Daniel,

You argue in section 3.3 of your paper that nanoprobes are likely to be the only viable route to WBE, because of the difficulty of capturing all of the relevant information in a brain if an approach such as destructive scanning is used.

You don't, however, seem to discuss the alternative path of neuroprosthesis-driven uploading:

we propose to connect to the human brain an exocortex, a prosthetic extension of the biological brain which would integrate with the mind as seamlessly as parts of the biological brain integrate with each other. [...] we make three assumptions which will be further fleshed out in the following sections:

1. There seems to be a relatively unified cortical algorithm which is capable of processing different types of information. Most, if not all, of the information processing in the brain of any given individual is carried out using variations of this basic algorithm. Therefore we do not need to study hundreds of different types of cortical algorithms before we can create the first version of an exocortex.
2. We already have a fairly good understanding of how the cerebral cortex processes information and gives rise to the attentional processes underlying consciousness. We have a good reason to believe that an exocortex would be compatible with the existing cortex and would integrate with the mind.
3. The cortical algorithm has an inbuilt ability to transfer information between cortical areas. Connecting the brain with an exocortex would therefore allow the exocortex to gradually take over or at least become an interface for other exocortices.

In addition to allowing for mind coalescence, the exocortex could also provide a route for uploading human minds. It has been suggested that an upload can be created by copying the brain layer-by-layer [Moravec, 1988] or by cutting a brain into small slices and scanning them [Sandberg & Bostrom, 2008]. However, given our current technological status and understanding of the brain, we suggest that the exocortex might be a likely intermediate step. As an exocortex-equipped brain aged, degenerated and eventually died, an exocortex could take over its functions, until finally the original person existed purely in the exocortex and could be copied or moved to a different substrate.

This seems to avoid the objection that it is too hard to scan the brain in full detail. If we can replicate the high-level functioning of the cortical algorithm, then we can do so in a way which doesn't need to be biologically realistic, but which will still allow us to implement the brain's essential functions in a neural prosthesis (here's some prior work that also replicates some aspect of the brain's functioning and re-implements it in a neuroprosthesis, without needing to capture all of the biological details). And if the cortical algorithm can be replicated in a way that allows the person's brain to gradually transfer over functions and memories as the biological brain accumulates damage, in the same way that function in the biological brain gets reorganized and can remain intact even as it slowly accumulates massive damage, then that should allow the entirety of the person's cortical function to transfer over to the neuroprosthesis. (Of course, the non-cortical parts of the brain would still need to be uploaded as well.)

A large challenge here is getting the required number of neural connections between the exocortex and the biological brain; but we are already getting relatively close, taking into account that the corpus callosum that connects the two hemispheres "only" has on the order of 100 million connections:

Earlier this year, the US Defense Advanced Research Projects Agency (DARPA) launched a project called Neural Engineering System Design. It aims to win approval from the US Food and Drug Administration within 4 years for a wireless human brain device that can monitor brain activity using 1 million electrodes simultaneously and selectively stimulate up to 100,000 neurons. (source)

Comment author: Kaj_Sotala 22 December 2017 03:53:01PM 9 points

Also, one forthcoming paper of mine released as a preprint, and another paper that was originally published informally last year and then published in somewhat revised and peer-reviewed form this year:

Both were done as part of my research for the Foundational Research Institute; maybe include us in your organizational comparison next year? :)

Comment author: itaibn 18 October 2017 11:35:55AM 0 points

I don't see any high-value interventions here. Simply pointing out a problem people have been aware of for millennia will not help anyone.

Comment author: Kaj_Sotala 18 October 2017 03:22:04PM 1 point

There seem to be a lot of leads that could help us figure out the high-value interventions, though:
i) knowledge about what causes it and what has contributed to changes in it over time;
ii) research directions that could help further improve our understanding of what does and doesn't cause it;
iii) various interventions which already seem to work in a small-scale setting, though it's still unclear how they might be scaled up (e.g. something like Crucial Conversations is basically about increasing trust and safety in one-to-one and small-group conversations);
iv) psychology in general is full of interesting ideas for improving mental health and well-being that haven't been rigorously tested, which also suggests that
v) any meta-work that would improve psychology's research practices would be even more valuable than we previously thought.

As for the "pointing out a problem people have been aware of for millenia", well, people have been aware of global poverty for millenia too. Then we got science and randomized controlled trials and all the stuff that EAs like, and got better at fixing the problem. Time to start looking at how we could apply our improved understanding of this old problem, to fixing it.

Comment author: DavidMoss 17 October 2017 08:11:04PM *  3 points

Feelings of safety or threat seem to play a lot into feelings of tribalism: if you perceive (correctly or incorrectly) that a group Y is out to get you and that they are a real threat to you, then you will react much more aggressively to any claims that might be read as supporting Y.

This sounds roughly supported by Karen Stenner's work in The Authoritarian Dynamic which argues that "political intolerance, moral intolerance and punitiveness" are increased by perceived levels of threat.

Your comments about increasing happiness and comfort are particularly striking in light of this opinionated description (from a review) of the different groups (based on interviews):

Authoritarians tended to be closed-minded, unintelligent, lacking in self-confidence, unhappy, unfriendly, unsophisticated, inarticulate, and generally unappealing. Libertarians tended toward the opposite; they seemed happy, gregarious, relaxed, warm, open, thoughtful, eloquent, and humble.

That said, I am sceptical prima facie that any positive psychology interventions would be powerful enough at producing these effects to be warranted on these grounds.

Comment author: Kaj_Sotala 18 October 2017 03:12:41PM 1 point

Thanks for the reference! That sounds valuable.

Comment author: MikeJohnson 25 July 2017 05:36:44PM *  2 points

Functionalism seems internally consistent (although perhaps too radically skeptical). However, in my view it also seems to lead to some flavor of moral nihilism; consciousness anti-realism makes suffering realism difficult/complicated.

If you had a precise enough theory about the functional role and source of suffering, then this would be a functionalist theory that specified objective criteria for the presence of suffering.

I think whether suffering is a 'natural kind' is prior to this analysis: e.g., to precisely/objectively explain the functional role and source of something, it needs to have a precise/crisp/objective existence.

I've always assumed that anyone who has thought seriously about philosophy of mind has acknowledged that functionalism has major deficiencies and is at best our "least wrong" placeholder theory until somebody comes up with something better.

Part of my reason for writing this critique is to argue that functionalism isn't a useful theory of mind, because it doesn't do what we need theories of mind to do (adjudicate disagreements in a principled way, especially in novel contexts).

If it is a placeholder, then I think the question becomes, "what would 'something better' look like, and what would count as evidence that something is better? I'd love to get your (and FRI's) input here.

Comment author: Kaj_Sotala 25 July 2017 11:17:19PM *  1 point

I think whether suffering is a 'natural kind' is prior to this analysis: e.g., to precisely/objectively explain the functional role and source of something, it needs to have a precise/crisp/objective existence.

I take this as meaning that you agree that accepting functionalism is orthogonal to the question of whether suffering is "real" or not?

If it is a placeholder, then I think the question becomes, "what would 'something better' look like, and what would count as evidence that something is better?

What something better would look like - if I knew that, I'd be busy writing a paper about it. :-) That seems to be a part of the problem - everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like. Off the top of my head, I'd like such a theory to at least be able to offer some insight into what exactly is conscious, and not have the issue where you can hypothesize all kinds of weird computations (like Aaronson did in your quote) and be left confused about which of them are conscious and which are not, and why. (roughly, my desiderata are similar to Luke Muehlhauser's)

Comment author: SoerenMind (EA Profile) 25 July 2017 11:15:43AM 0 points

I see quite a bunch of relevant cognitive science work these days, e.g. this:

Comment author: Kaj_Sotala 25 July 2017 01:10:42PM 0 points

That's super-neat! Thanks.

Comment author: MikeJohnson 23 July 2017 08:57:34PM 2 points

My sense is that MIRI and FHI are fairly strong believers in functionalism, based on reading various pieces on LessWrong, personal conversation with people who work there, and 'revealed preference' research directions. OpenPhil may be more of a stretch to categorize in this way; I'm going off what I recall of Holden's debate on AI risk, some limited personal interactions with people that work there, and Luke Muehlhauser's report (he was up-front about his assumptions on this).

Of course it's harder to pin down what people at these organizations believe than it is in Brian's case, since Brian writes a great deal about his views.

So to my knowledge, this statement is essentially correct, although there may be definitional & epistemological quibbles.

Comment author: Kaj_Sotala 25 July 2017 11:01:35AM *  3 points

Wait, are you equating "functionalism" with "doesn't believe suffering can be meaningfully defined"? I thought your criticism was mostly about the latter; I don't think it's automatically implied by the former. If you had a precise enough theory about the functional role and source of suffering, then this would be a functionalist theory that specified objective criteria for the presence of suffering.

(You could reasonably argue that it doesn't look likely that functionalism will provide such a theory, but then I've always assumed that anyone who has thought seriously about philosophy of mind has acknowledged that functionalism has major deficiencies and is at best our "least wrong" placeholder theory until somebody comes up with something better.)

Comment author: Kaj_Sotala 22 July 2017 08:22:05AM 1 point

Another discussion and definition of autonomy, by philosopher John Danaher:

Many books and articles have been written on the concept of ‘autonomy’. Generations of philosophers have painstakingly identified necessary and sufficient conditions for its attainment, subjected those conditions to revision and critique, scrapped their original accounts, started again, given up and argued that the concept is devoid of meaning, and so on. I cannot hope to do justice to the richness of the literature on this topic here. Still, it’s important to have at least a rough and ready conception of what autonomy is and the most general (and hopefully least contentious) conditions needed for its attainment.

I have said this before, but I like Joseph Raz’s general account. Like most people, he thinks that an autonomous agent is one who is, in some meaningful sense, the author of their own lives. In order for this to happen, he says that three conditions must be met:

Rationality condition: The agent must have goals/ends and must be able to use their reason to plan the means to achieve those goals/ends.

Optionality condition: The agent must have an adequate range of options from which to choose their goals and their means.

Independence condition: The agent must be free from external coercion and manipulation when choosing and exercising their rationality.

I have mentioned before that you can view these as ‘threshold conditions’, i.e. conditions that simply have to be met in order for an agent to be autonomous, or you can have a slightly more complex view, taking them to define a three dimensional space in which autonomy resides. In other words, you can argue that an agent can have more or less rationality, more or less optionality, and more or less independence. The conditions are satisfied in degrees. This means that agents can be more or less autonomous, and the same overall level of autonomy can be achieved through different combinations of the relevant degrees of satisfaction of the conditions. That’s the view I tend to favour. I think there possibly is a minimum threshold for each condition that must be satisfied in order for an agent to count as autonomous, but I suspect that the cases in which this threshold is not met are pretty stark. The more complicated cases, and the ones that really keep us up at night, arise when someone scores high on one of the conditions but low on another. Are they autonomous or not? There may not be a simple ‘yes’ or ‘no’ answer to that question.

Anyway, using the three conditions we can formulate the following ‘autonomy principle’ or ‘autonomy test’:

Autonomy principle: An agent’s actions are more or less autonomous to the extent that they meet the (i) rationality condition; (ii) optionality condition and (iii) independence condition.

Comment author: Wei_Dai 21 July 2017 03:39:04PM 7 points

What would you say are the philosophical or other premises that FRI does accept (or tends to assume in its work), which distinguishes it from other people/organizations working in a similar space such as MIRI, OpenAI, and QRI? Is it just something like "preventing suffering is the most important thing to work on (and the disjunction of assumptions that can lead to this conclusion)"?

It seems to me that a belief in anti-realism about consciousness explains a lot of Brian's (near) certainty about his values and hence his focus on suffering. People who are not so sure about consciousness anti-realism tend to be less certain about their values as a result, and hence don't focus on suffering as much. Does this seem right, and if so, can you explain what premises led you to work for FRI?

Comment author: Kaj_Sotala 21 July 2017 07:54:03PM *  9 points

Rather than put words in the mouths of other people at FRI, I'd rather let them personally answer which philosophical premises they accept and what motivates them, if they wish.

For me personally, I've just had, for a long time, the intuition that preventing extreme suffering is the most important priority. To the best that I can tell, much of this intuition can be traced to having suffered from depression and general feelings of crushing hopelessness for large parts of my life, and wanting to save anyone else from experiencing a similar (or worse!) magnitude of suffering. I seem to recall that I was less suffering-focused before I started getting depressed for the first time.

Since then, that intuition has been reinforced by reading up on other suffering-focused works; something like tranquilism feels like a sensible theory to me, especially given some of my own experiences with meditation which are generally compatible with the kind of theory of mind implied by tranquilism. That's something that has come later, though.

To clarify, none of this means that I would only value suffering prevention: I'd much rather see a universe-wide flourishing civilization full of minds in various states of bliss, than a dead and barren universe. My position is more of a prioritarian one: let's first take care of everyone who's experiencing enormous suffering, and make sure none of our descendants are going to be subject to that fate, before we start thinking about colonizing the rest of the universe and filling it with entirely new minds.
