Comment author: Naryan 11 July 2018 06:54:09PM 0 points

I agree that markets are inefficient, but I believe the inefficiency produces opportunities that are both worse than average and better than average. Since I suspect most investors undervalue the social impact, this would result in impact investments that are more attractive than average to someone who values the impact as well as the return.

Generally, when I was looking to invest, I looked for options that I expected to outperform the market average at a set risk level, and I didn't assess social utility in that calculation (assuming, as you suggest, that I could donate the return more effectively). I'm not sure if this logically follows, but if my choice is between an effective charity and an impact investment, the effective charity would generally do more good. But if I'm considering my retirement fund, I believe the right impact investment could be better than a comparable equity investment - I just need to remember to include the social utility in my valuation.

Comment author: kbog (EA Profile) 12 July 2018 03:04:22AM * 0 points

Unless you assign relatively high priority to the cause addressed by the company, I think it's appropriate to suppose that other impact investors are over-valuing the social impact. Also, since other impact investors don't think about counterfactuals, they are likely to greatly overestimate the social impact. They may think that when they invest $1000 in a different company, they are actually making that company $1000 richer on balance... when in reality it is only $100 or $10 or $1 richer in the long run, due to market efficiency. I don't think markets are generally inefficient, just a bit inefficient sometimes; it really depends on how you define efficiency.
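
To make the arithmetic explicit, here is a toy sketch; the "displacement" fraction (how much of your investment is offset by other investors adjusting their positions) is a made-up parameter for illustration, not an estimate.

```python
# Toy illustration of counterfactual impact under (near-)efficient markets.
# "displacement" is a hypothetical parameter: the fraction of your investment
# that is offset by other investors adjusting their positions.

def counterfactual_capital(investment, displacement):
    """Extra capital the company ends up with in the long run."""
    return investment * (1 - displacement)

for displacement in (0.9, 0.99, 0.999):
    extra = counterfactual_capital(1000, displacement)
    print(f"displacement {displacement}: company is ~${extra:.0f} richer")
```

The three displacement values reproduce the $100 / $10 / $1 figures above; the more efficient the market, the closer the displacement is to 1.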

Comment author: kbog (EA Profile) 11 July 2018 12:32:16PM * 4 points

If capital markets are efficient and most people aren't impact investors, then there is no benefit to impact investing, as the coal company can get capital from someone else for the market rate as soon as you back out, and the solar company will lose most of its investors unless it offers a competitive rate of return. At the same time, there is no cost to impact investing.

In reality, I think things are not always like this, but inefficiency implies not only that impact investing has an impact; it also implies that you will get a lower financial return.

For most of us, our cause priorities are not directly addressed by publicly traded companies, so I think impact investing falls below the utility/returns frontier set by donations and investments. You can pick a combination of greedy investments and straight donations that is Pareto superior to an impact investment. If renewable energy for instance is one of your top cause priorities, then perhaps it is a different story.
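
As a minimal sketch of the Pareto point, with entirely made-up numbers for returns and per-dollar impact: an ordinary investment whose excess return is donated can match the impact investment's financial return while producing more social value.

```python
# Toy comparison with made-up numbers: an impact investment that sacrifices
# return for (counterfactually small) direct impact, versus a "greedy"
# market-rate investment whose excess return is donated to an effective charity.

capital = 10_000
market_return = 0.07        # hypothetical return on an ordinary index fund
impact_return = 0.05        # hypothetical lower return on the impact investment

impact_per_dollar_invested = 0.001  # counterfactual-adjusted, per the efficiency point
impact_per_dollar_donated = 0.10    # hypothetical effectiveness of a top charity

# Option A: put everything in the impact investment.
a_financial = capital * impact_return
a_social = capital * impact_per_dollar_invested

# Option B: invest greedily, keep the same net return as A, donate the surplus.
donation = capital * (market_return - impact_return)
b_financial = capital * market_return - donation     # equals a_financial
b_social = donation * impact_per_dollar_donated

print(f"A: keep ${a_financial:.0f}, social value {a_social:.1f}")
print(f"B: keep ${b_financial:.0f}, social value {b_social:.1f}")
```

Under these invented numbers, B matches A financially and does more good, which is the sense in which the greedy-investment-plus-donation bundle is Pareto superior; the conclusion flips only if the investment's counterfactual impact per dollar rivals what a donation achieves.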

Comment author: turchin 08 July 2018 02:09:55PM 0 points

What if AI exploring moral uncertainty finds that there is provably no correct moral theory or right moral facts? In that case, there is no moral uncertainty between moral theories, as they are all false. Could it escape this obstacle just by aggregating humans' opinions about possible situations?

Comment author: kbog (EA Profile) 11 July 2018 12:09:16PM * 1 point

What if AI exploring moral uncertainty finds that there is provably no correct moral theory or right moral facts?

In that case it would be exploring traditional metaethics, not moral uncertainty.

But if moral uncertainty is used as a solution, then we just bake in some high-level criteria for the appropriateness of a moral theory, and the credences will necessarily sum to 1. This is little different from baking in coherent extrapolated volition. In either case the agent is directly motivated to do whatever satisfies our designated criteria, and it will still want to do that regardless of what it thinks about moral realism.

Those criteria might be very vague and philosophical, or they might be very specific and physical (like 'would a simulation of Bertrand Russell say "a-ha, that's a good theory"?'), but either way they will be specified.

Comment author: kbog (EA Profile) 04 July 2018 12:09:54AM * 2 points

I disagree with 5. Under subjective probability theory it is not really coherent to think that one's expectation is inaccurate. You probably mean to say that they are difficult to predict precisely, but that's generally not relevant if we are maximizing expected value.

Comment author: kbog (EA Profile) 03 July 2018 03:38:41AM 1 point

There are so many incredibly fun video games these days.

In response to comment by kbog  (EA Profile) on 1. What Is Moral Realism?
Comment author: Lukas_Gloor 31 May 2018 08:53:53AM * 1 point

Do you think your argument also works against Railton's moral naturalism, or does my One Compelling Axiology (OCA) proposal introduce something that breaks the idea? The way I meant it, OCA is just a more extreme version of Railton's view.

I think I can see what you're pointing to though. I wrote:

Note that this proposal makes no claims about the linguistic level: I’m not saying that ordinary moral discourse lets us define morality as convergence in people’s moral views after philosophical reflection under ideal conditions. (This would be a circular definition.) Instead, I am focusing on the aspect that such convergence would be practically relevant: [...]

So yes, this would be a bad proposal for what moral discourse is about. But it's meant like this: Railton claims that morality is about doing things that are "good for others from an impartial perspective." I like this and wanted to work with that, so I adopt this assumption, and further add that I only want to call a view moral realism if "doing what is good for others from an impartial perspective" is well-specified. Then I give some account of what it would mean for it to be well-specified.

In my proposal, moral facts are not defined as that which people arrive at after reflection. Moral facts are still defined as the same thing Railton means. I'm just adding that maybe there are no moral facts in the way Railton means if we introduce the additional requirement that (strong) underdetermination is not allowed.

Comment author: kbog (EA Profile) 13 June 2018 03:14:07AM * 0 points

Yes I think it applies to pretty much any other kind of naturalism as well. At least, any that I have seen.

Comment author: kbog (EA Profile) 31 May 2018 12:36:19AM * 0 points

For One Compelling Axiology, assuming that "ideal" is defined in a manner that does not beg the question, the theory implies that moral facts allow us to make empirical predictions about the world - for instance, that a given philosopher, group of philosophers, ASI, or I will adopt such-and-such moral attitude with probability p. Moreover, moral facts seem to be defined purely in terms of their empirical ramifications.

This I find to be deeply troubling because it provides no grounds to say that there are any moral facts at all, just empirical ones. Suppose that there is a moral proposition X which states the one compelling axiology, okay. Now on what grounds do you say that X is a moral fact? Merely because it's always compelling? But such a move is a non sequitur.

Of course, you can say that you would be compelled to follow X were you to be an ideal reasoner, and therefore it's reasonable of you to follow it. But again, all we're saying here is that we would follow X were we to have whatever cognitive properties we associate with the word "ideal", and that is an empirical prediction. So it doesn't establish the presence of moral facts; there are just empirical facts about what people aspire to do under various counterfactuals and predicates.

Comment author: kieuk 17 May 2018 05:00:12PM 0 points

I'm not really talking about showing how friendly you are

It looks like we were talking at cross purposes. I was picking up on the admittedly months-old conversation about "signalling collaborativeness" and [anti-]"combaticism", which is a separate conversation to the one on value signals. (Value signals are probably a means of signalling collaborativeness though.)

you should probably signal however friendly you are actually feeling

I think politeness serves a useful function (in moderation, of course). 'Forcing' people to behave in a friendlier way than they feel saves time and energy.

I think EA has a problem with undervaluing social skills such as basic friendliness. If a community such as EA wants to keep people coming back and contributing their insights, the personal benefits of taking part need to outweigh the personal costs.

Comment author: kbog (EA Profile) 17 May 2018 05:44:29PM 0 points

I think EA has a problem with undervaluing social skills such as basic friendliness. If a community such as EA wants to keep people coming back and contributing their insights, the personal benefits of taking part need to outweigh the personal costs.

Not if people aren't attracted to such friendliness. Lots of successful social movements and communities are less friendly than EA.

Comment author: kbog (EA Profile) 16 May 2018 04:02:18PM * 1 point

So I skimmed this and it looks like you are basically just applying MacAskill's method. Did I miss something?

Btw, whether to assign ordinal or cardinal scores to things isn't really a choice you should make in the context of normative uncertainty. It should come from the moral theory itself, and not be altered by considerations of uncertainty. If the moral theory has properties that allow us to model it with a cardinal ranking, then we do that, and if it doesn't, then we use an ordinal ranking. One moral theory may have ordinal rankings and another may have cardinal ones. As far as MEC is concerned, an ordinal moral ranking is just a special case of a cardinal moral ranking where the differences between consecutively ranked options are uniform.
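
As an illustrative sketch of that last point (theory names, credences, and scores are invented, and intertheoretic normalization problems are glossed over), MEC can handle a purely ordinal theory by treating its ranking as cardinal scores with uniform gaps:

```python
# Hedged sketch of maximizing expected choiceworthiness (MEC) over two invented
# theories: one with cardinal scores, one with only an ordinal ranking.

options = ["A", "B", "C"]

# Theory 1 supplies genuinely cardinal choiceworthiness scores.
cardinal_scores = {"A": 10.0, "B": 4.0, "C": 0.0}

# Theory 2 supplies only an ordinal ranking, best first.
ordinal_ranking = ["B", "A", "C"]

def ordinal_to_cardinal(ranking):
    """Treat an ordinal ranking as cardinal scores with uniform gaps
    (the special case described in the comment above)."""
    n = len(ranking)
    return {option: float(n - 1 - i) for i, option in enumerate(ranking)}

theories = [
    (0.6, cardinal_scores),                      # credence 0.6 in theory 1
    (0.4, ordinal_to_cardinal(ordinal_ranking)), # credence 0.4 in theory 2
]

expected_choiceworthiness = {
    option: sum(credence * scores[option] for credence, scores in theories)
    for option in options
}
best = max(expected_choiceworthiness, key=expected_choiceworthiness.get)
print(expected_choiceworthiness, "->", best)
```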

Comment author: kieuk 14 May 2018 04:01:15PM 1 point

But I can control whether I am priming people to get accustomed to over-interpreting.

That sounds potentially important. Could you give an example of a failure mode?

Because my approach is not merely about how to behave as a listener. It's about speaking without throwing in unnecessary disclaimers.

Consider how my question "Could you give an example...?" would read if I hadn't preceded it with the following signal of collaborativeness: "That sounds potentially important." At least to me (YMMV), I would be about 15% less likely to feel defensive when the question is preceded by such a signal than when the speaker leaps straight into it -- which, on a System-1-y day, I would be likely to read as "Oh yeah? Give me ONE example." The same applies to the phrase "At least to me (YMMV)": I'm chucking in a signal that I'm willing to listen to your point of view.

Those are examples of disclaimers. I argue these kinds of signals are helpful for promoting a productive atmosphere; do they fall into the category you're calling "unnecessary disclaimers"? Or is it only something more overt that you'd find counterproductive?

I take the point that different people have different needs with regards to this concern. I hope we can both steer clear of typical-minding everyone else. I think I might be particularly oversensitive to anything resembling conflict, and you are over on the other side of the bell curve in that respect.

Comment author: kbog (EA Profile) 15 May 2018 05:46:37PM * 0 points

That sounds potentially important. Could you give an example of a failure mode?

The failure mode where people over-interpret things that other people say, and then come up with wrong interpretations.

I argue these kinds of signals are helpful for promoting a productive atmosphere; do they fall into the category you're calling "unnecessary disclaimers"?

Well, you should probably signal however friendly you are actually feeling, but I'm not really talking about showing how friendly you are; I'm talking about going out of your way to say "of course I don't mean X" and so on.

https://www.overcomingbias.com/2018/05/skip-value-signals.html
