Comment author: Brian_Tomasik 27 July 2017 02:54:58AM *  2 points [-]

I think a default assumption should be that works by individual authors don't necessarily reflect the views of the organization they're part of. :) Indeed, Luke's report says this explicitly:

the rest of this report does not necessarily reflect the intuitions and judgments of the Open Philanthropy Project in general. I explain my views in this report merely so they can serve as one input among many as the Open Philanthropy Project considers how to clarify its values and make its grantmaking choices.

Of course, there is nonzero Bayesian evidence in the sense that an organization is unlikely to publish a viewpoint that it finds completely misguided.

When FRI put my consciousness pieces on its site, we were planning to add a counterpart article (I think defending type-F monism or something) to have more balance, but that latter article never got written.

Comment author: kbog  (EA Profile) 16 August 2017 01:12:49PM *  0 points [-]

MIRI and FHI have never published anything that discusses any view of consciousness. There is a huge difference between inferring a view from things that people happen to write outside of the organization and from the actual research the organization publishes. In the second case, it's relevant to the research, whether it's an official position of the organization or not. In the first case, it's not obvious why it's relevant at all.

Luke affirmed elsewhere that Open Phil really heavily leans towards his view on consciousness and moral status.

Comment author: WillPearson 21 July 2017 06:11:49PM *  0 points [-]

But it's equally plausible that the larger system will be enforcing morally correct standards and a minority of individuals will want to do something wrong (like slavery or pollution).

Both of these would impinge on the vital sets of others though (slavery directly, pollution by disrupting the natural environment people rely on). So it would still be a bad outcome from the autonomy viewpoint if these things happened.

The autonomy viewpoint only argues that lots of actions should be physically possible for people; not all of the actions that are physically possible are necessarily morally allowed.

Which of these three scenarios is best?

  1. No one has guns, so no one gets shot.
  2. Everyone has guns, and people get shot because they have accidents.
  3. Everyone has guns, but no one gets shot because they are well trained and smart.

The autonomy viewpoint argues that the third is the best possible outcome and tries to work towards it. There are legitimate uses for guns.

I don't go into how these things should be regulated, as this is a very complicated subject. I'll just point out that to get the robust free society I want, you would need not to regulate the ability to do these things, but to make sure the incentive structures and education are correct.

I don't see a reason that we ought to intrinsically value autonomy. The reasons you gave only support autonomy instrumentally through other values.

I don't think I can argue for intrinsically valuing anything. I agree with not being able to argue ought from is. So the best I can do is claim it as a value, which I do, and refer to other value systems that people might share. I suppose I could talk about other people who value autonomy in itself. Would you find that convincing?

But by the original criteria, a single AI would (probably) be robust to catastrophe due to being extremely intelligent and having no local competitors.

Unless it is omniscient I don't see how it will see all threats to itself. It may lose a gamble on the logical induction lottery and make an ill-advised change to itself.

Also, what happens if a decisive strategic advantage is not possible and this hypothetical single AI does not come into existence? What is the strategy for that chunk of probability space?

If it is a good friendly AI, then it will treat people as they deserve, not on the basis of thin economic need, and likewise it will always be morally correct. It won't be racist or oppressive.

I'm personally highly skeptical that this will happen.

I bet no one will want to leave its society, but if we think that that right is important then we can design an AI which allows for that right.

How would that be allowed if those people might create a competitor AI?

you reach some inappropriate conclusions because future scenarios are often different from present ones in ways that remove the importance of our present social constructs.

Like I said in the article, if I were convinced of decisive strategic advantage, my views of the future would be very different. However, as I am not, I have to think that the future will remain similar to the present in many ways.

Comment author: kbog  (EA Profile) 16 August 2017 01:06:03PM *  0 points [-]

I don't think I can argue for intrinsically valuing anything. I agree with not being able to argue ought from is.

The is-ought problem doesn't say that you can't intrinsically value anything. It just says that it's hard. There are lots of ways to argue for intrinsically valuing things, and I have a reason to intrinsically value well-being, so why should I divert attention to something else?

Unless it is omniscient I don't see how it will see all threats to itself.

It will see most threats to itself in virtue of being very intelligent and having a lot of data, and will have a much easier time by not being in direct competition. Basically all known x-risks can be eliminated if you have zero coordination and competition problems.

Also, what happens if a decisive strategic advantage is not possible and this hypothetical single AI does not come into existence? What is the strategy for that chunk of probability space?

Democratic oversight, international cooperation, good values in AI, FDT to facilitate coordination, stuff like that.

I'm personally highly skeptical that this will happen.

Okay, but the question was "is a single AI a good thing," not "will a single AI happen".

How would that be allowed if those people might create a competitor AI?

It will be allowed by allowing them to exist without allowing them to create a competitor AI. What specific part of this do you think would be difficult? Do you think that everyone who is allowed to exist must have access to supercomputers free of surveillance?

Comment author: MikeJohnson 23 July 2017 08:57:34PM 2 points [-]

My sense is that MIRI and FHI are fairly strong believers in functionalism, based on reading various pieces on LessWrong, personal conversations with people who work there, and 'revealed preference' research directions. OpenPhil may be more of a stretch to categorize in this way; I'm going off what I recall of Holden's debate on AI risk, some limited personal interactions with people who work there, and Luke Muehlhauser's report (he was up-front about his assumptions on this).

Of course it's harder to pin down what people at these organizations believe than it is in Brian's case, since Brian writes a great deal about his views.

So to my knowledge, this statement is essentially correct, although there may be definitional & epistemological quibbles.

Comment author: kbog  (EA Profile) 26 July 2017 12:05:44AM *  2 points [-]

Well I think there is a big difference between FRI, where the point of view is at the forefront of their work and explicitly stated in research, and MIRI/FHI, where it's secondary to their main work and is only something which is inferred on the basis of what their researchers happen to believe. Plus as Kaj said you can be a functionalist without being all subjectivist about it.

But Open Phil does seem to have this view now to at least the same extent as FRI does (cf. Muehlhauser's consciousness document).

Comment author: kbog  (EA Profile) 23 July 2017 07:07:12PM *  2 points [-]

much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil.

I'm a little confused here. Where does MIRI or FHI say anything about consciousness, much less assume any particular view?

Comment author: SoerenMind  (EA Profile) 21 July 2017 04:03:08PM 6 points [-]

As far as I can see, that's just functionalism / physicalism plus moral anti-realism, which are both well-respected. But as philosophy of mind and moral philosophy are separate fields, you won't see much discussion of the intersection of these views. If you do assume the position is wrong, then I completely agree.

Comment author: kbog  (EA Profile) 21 July 2017 04:20:14PM *  6 points [-]

I think the choice of a metaethical view is less important than you think. Anti-realism is frequently a much richer view than just talking about preferences. It says that our moral statements aren't truth-apt, but just because our statements aren't truth-apt doesn't mean they're merely about preferences. Anti-realists can give accounts of why a rigorous moral theory is justified and is the right one to follow, not much different from how realists can. Conversely, you could even be a moral realist who believes that moral status boils down to which computations you happen to care about. Anyway, the point is that anti-realists can take pretty much any view in normative ethics, and justify those views in mostly the same ways that realists tend to justify their views (i.e. reasons other than personal preference). Just because we're not talking about whether a moral principle is true or not doesn't mean that we can no longer use the same basic reasons and arguments in favor of or against that principle. Those reasons will just have a different meaning.

Plus, physicalism is a weaker assertion than the view that consciousness is merely a matter of computation or information processing. Consciousness could be reducible to physical phenomena but without being reducible to computational steps. (eta: this is probably what most physicalists think.)

Comment author: kbog  (EA Profile) 21 July 2017 04:04:18PM *  4 points [-]

Some of the reasons you gave in favor of autonomy come from a perspective of subjective pragmatic normativity rather than universal moral values, and don't make as much sense when society as a whole is analyzed. E.g.:

You disagree with the larger system for moral reasons, for example if it is using slavery or polluting the seas. You may wish to opt out of the larger system in whole or in part so you are not contributing to the activity you disagree with.

But it's equally plausible that the larger system will be enforcing morally correct standards and a minority of individuals will want to do something wrong (like slavery or pollution).

The larger system is hostile to you. It is an authoritarian or racist government. There are plenty examples of this happening in history, so it will probably happen again.

Individuals could be disruptive or racist, and the government ought to restrain their ability to be hostile towards society.

So when we decide how to alter society as a whole, it's not clear that more autonomy is a good thing. We might be erring on different sides of the line in different contexts.

Moreover, I don't see a reason that we ought to intrinsically value autonomy. The reasons you gave only support autonomy instrumentally through other values. So we should just think about how to reduce catastrophic risks and how to improve the economic welfare of everyone whose jobs were automated. Autonomy may play a role in these contexts, but it will then be context-specific, so our definition of it and analysis of it should be contextual as well.

The autonomy view vastly prefers a certain outcome to the AI risk question. It is not in favour of creating a single AI that looks after us all (especially not by uploading)

But by the original criteria, a single AI would (probably) be robust to catastrophe due to being extremely intelligent and having no local competitors. If it is a good friendly AI, then it will treat people as they deserve, not on the basis of thin economic need, and likewise it will always be morally correct. It won't be racist or oppressive. I bet no one will want to leave its society, but if we think that that right is important then we can design an AI which allows for that right.

I think this is the kind of problem you frequently get when you construct an explicit value out of something which was originally grounded in purely instrumental terms - you reach some inappropriate conclusions because future scenarios are often different from present ones in ways that remove the importance of our present social constructs.

Comment author: SoerenMind  (EA Profile) 21 July 2017 12:11:10PM *  11 points [-]

What's the problem if a group of people explores the implications of a well-respected position in philosophy and is (I think) fully aware of the implications? Exploring a different position should be a task for people who actually place more than a tiny bit of credence in it, it seems to me - especially when it comes to a new and speculative hypothesis like Principia Qualia.

This post mostly reads like a contribution to a long-standing philosophical debate to me and would be more appropriately presented as arguing against a philosophical assumption rather than against a research group working under that assumption.

In the cog-sci / neuroscience institute where I currently work, productive work is being done under assumptions similar to Brian's / FRI's, though less explicit, including relevant work on modelling valence in animals within the reinforcement learning framework.

I know you disagree with these assumptions but a post like this can make it seem to outsiders as if you're criticizing a somewhat crazy position and by extension cast a bad light on FRI.

Comment author: kbog  (EA Profile) 21 July 2017 03:35:07PM *  1 point [-]

What's the problem if a group of people explores the implications of a well-respected position in philosophy and is (I think) fully aware of the implications?

If the position is wrong, then their work is of little use, or possibly harmful. FRI is a nonprofit organization affiliated with EA which uses nontrivial amounts of human and financial capital; of course it's a problem if the work isn't high value.

I wouldn't be so quick to assume that the idea that moral status boils down to asking 'which computations do I care about' is a well-respected position in philosophy. It probably exists but not in substantial measure.

Comment author: kbog  (EA Profile) 20 July 2017 10:17:07PM *  1 point [-]

Re: 2, I don't see how we should expect functionalism to resolve disputes over which agents are conscious. Panpsychism does no such thing, nor does physicalism or dualism or any other theory of mind. Any of these theories can inform inquiry about which agents are conscious, in tandem with empirical work, but the connection is tenuous, and it seems to me that at least 70% of the work is empirical. Theory of mind mostly gives a theoretical basis for empirical work.

The problem lies more with the specific anti-realist account of sentience that some people at FRI have, which basically boils down to "it's morally relevant suffering if I think it's morally relevant suffering." I suspect that a good functionalist framework need not involve this.

"But it seems a stretch to say that the alleged tension is problematic when talking about tables. So why would it be problematic when talking about suffering?"

Actually I think the tension would be problematic if we had philosophical debates about tables and edge cases which may or may not be tables.

Comment author: Austen_Forrester 19 June 2017 10:39:56PM 0 points [-]

Those radicalization factors you mentioned increase the likelihood for terrorism but are not necessary. Saying that people don't commit terror from reading philosophical papers and thus those papers are innocent and shouldn't be criticized is a pretty weak argument. Of course, such papers can influence people. The radicalization process starts with philosophy, so to say that first step doesn't matter because the subsequent steps aren't yet publicly apparent shows that you are knowingly trying to allow this form of radicalization to flourish. However, NUEs do in fact meet the other criteria you mentioned. For instance, I doubt that they have confidence in legitimately influencing policy (i.e. convincing the government to burn down all the forests).

FRI and its parent EA Foundation state that they are not philosophy organizations and exist solely to incite action. I agree that terrorism has not in the past been motivated purely by destruction. That is something that atheist extremists who call themselves effective altruists are founding.

I am not a troll. I am concerned about public safety. My city almost burned to ashes last year due to a forest fire, and I don't want others to have to go through that. Anybody read about all the people in Portugal dying from a forest fire recently? That's the kind of thing that NUEs are promoting and I'm trying to prevent. If you're wondering why I don't elaborate my position on “EAs” promoting terrorism/genocide, it is for two reasons. One, it is self-evident if you read Tomasik and FRI materials (not all of it, but some articles). And two, I can easily cause a negative effect by connecting the dots for those susceptible to the message or giving them destructive ideas they may not have thought of.

Comment author: kbog  (EA Profile) 15 July 2017 12:09:24AM *  0 points [-]

Those radicalization factors you mentioned increase the likelihood for terrorism but are not necessary

Yeah, and you probably think that being a negative utilitarian increases the likelihood for terrorism, but it's not necessary either. In the real world we deal with probabilities and expectations, not speculations and fantasies.

Saying that people don't commit terror from reading philosophical papers and thus those papers are innocent and shouldn't be criticized is a pretty weak argument. Of course, such papers can influence people. The radicalization process starts with philosophy

This is silly handwaving. The radicalization process starts with being born. It doesn't matter where things 'start' in the abstract sense; what matters is what causes the actual phenomenon of terrorism to occur.

to say that first step doesn't matter because the subsequent steps aren't yet publicly apparent shows that you are knowingly trying to allow this form of radicalization to flourish

So your head is too far up your own ass to even accept the possibility that someone who has actually studied international relations and counterinsurgency strategy knows that you are full of shit. Cool.

I am not a troll. I am concerned about public safety.

You are a textbook concern troll.

My city almost burned to ashes last year due to a forest fire, and I don't want others to have to go through that

Welcome to EA, honey. Everyone here is altruistic; you can't get special treatment.

That's the kind of thing that NUEs are promoting

But they're not. You think they're promoting it, or at least you want people to think they're promoting it. But that's your own opinion, so presenting it like this constitutes defamation.

If you're wondering why I don't elaborate my position on “EAs” promoting terrorism/genocide, it is for two reasons. One, it is self-evident if you read Tomasik and FRI materials (not all of it, but some articles).

But I have read those materials. And it's not self-evident. And other people have read those articles and they don't find them self-evident either. Actually, it's self-evident that they don't promote it, if you read some of their materials.

And two, I can easily cause a negative effect by connecting the dots for those susceptible to the message or giving them destructive ideas they may not have thought of.

What bullshit. If you actually worried about this then you wouldn't be saying that it's a direct, self-evident conclusion of their beliefs. So either you don't know what you're doing, or you're arguing in bad faith. Probably both.

Comment author: DonyChristie 11 July 2017 08:12:32PM 1 point [-]

I'm interested to know whether Antigravity Investments is really needed when EAs have the option of using the existing investment advice that's out there.

Trivial inconveniences.

Comment author: kbog  (EA Profile) 14 July 2017 11:18:20PM 1 point [-]

How is that relevant?
