Diego_Caleiro

206 karma · Joined Sep 2014

Comments (115)

This is a little old, but it's a similar concept with a far higher level of investment: https://www.lesswrong.com/posts/GfRKvER8PWcMj6bbM/sidekick-matchmaking

I haven't read the whole thing, but this seems to be one of the coolest ideas in EA in 2018, if not the coolest. Glad you did it.

Good luck to everyone who goes to live or work there!

It has been about 3 years, and only very specific talent still matters in EA now. Earning to Give to institutions is gone; only giving to individuals still makes sense.

It is possible that there will be full-scale replaceability of non-researchers in EA-related fields by 2020.

But only if, until then, we keep doing things!

Kaj, I tend to promote your stuff a fair amount to end the inferential silence, and it goes without saying that I agree with all else you said.

Don't give up on your ideas or approach. I am dispirited that there are so few people thinking like you do out there.

It's been less than two years and all the gaps have either been closed, or been kept open on purpose, which Ben Hoffman has been staunchly criticising.

But anyway, it has been less than 2 years and Open Phil has way more money than it knows what to do with.

QED.

Amanda Askell has interesting thoughts suggesting that "care" be given a counterfactual meaning: we think of caring about something as what you would have cared about if you were in a context where it was a thing you could potentially change. In a way, the distinction is between people who think about "care" in terms of rank ("oh, that isn't the thing I most care about") and those who think about it in absolute terms ("oh, I think the moral value of this is positive"). This is further complicated by the fact that some people are thinking of the expected value of an action while others are thinking of the absolute value of the object the action affects.

Semantically, if we think it is a good idea to "expand our circle of care", we should probably adopt the counterfactual meaning of "care", as that broadens the scope of things we can truthfully claim to care about.

They need not imply, but I would like a framework where they do under ideal circumstances. In that framework - which I paraphrase from Lewis - if I know a certain moral fact, e.g., that something is one of my fundamental values, then I will value it (this wouldn't obtain if you are a hypocrite, in which case it wouldn't be knowledge).

I should X = A/The moral function connects my potential actions to set X.
I think I should X = The convolution of the moral function and my prudential function takes my potential actions to set X.

I'm unsure I got your notation. Does =/= mean "different"? Yes. What is the meaning of "/" in "A/The…"? The same as in person/persons: it means either one.

In what sense do you mean psychopathy? I can see ways in which I would agree with you, and ways in which not.

I mean failure to exercise moral reasoning. You would be right about what you value, you would desire as you desire to desire, have all the relevant beliefs right, have no conflicting desires or values, but you would not act to serve your desires according to your beliefs. In your case, things would be more complicated, given that it involves knowing a negation. Perhaps we can go about it like this: you would be right that maximizing welfare is not your fundamental value, you would have the motivation to stop solely desiring to desire welfare, you would cease to desire welfare, there would be no other desire inducing a desire for welfare, there would be no other value inducing a desire for welfare, but you would fail to pursue what serves your desires. This fits well with the empirical fact that psychopaths have low IQ and low levels of achievement. Personally, I would bet your problem is more with allowing yourself moral akrasia under the excuse of moral uncertainty.

I don't think you carved reality at the joints here, so let me do the heavy lifting: the distinction between our paradigms seems to be that I am using weightings for values and you are using binaries. Either you deem something a moral value of mine or you don't. I, however, think of it as having 100% of my future actions left to do and having to decide how to allocate my future resources towards what I value. Part of them will be dedicated to moral goods, and other parts won't. So I do think I have moral values for which I'll pay a high opportunity cost; I just don't find them to carry as large a load as the personal values, which happen to include actually implementing some sort of Max(Worldwide Welfare) up to a Brownian distance from what is maximally good. My point, overall, is that moral uncertainty is only part of the problem. The big problem is the amoral uncertainty, which contains the moral uncertainty as a subset.

Why just minds? What determines the moral circle? Why does the core need to be excluded from morality? I claim these are worthwhile questions.

Just minds because most of the value seems to lie in mental states; the core is excluded from morality by the definition of morality. My immediate one-second self, when thinking only about itself having an experience, simply is not a participant in the moral debate. There needs to be some possibility of reflection or debate for there to be morality; it's a minimum complexity requirement (which, by the way, makes my Complexity value seem more reasonable).

If this is true, maximizing welfare cannot be the fundamental value because there is not anything that can and is epistemically accessible.

Approximate maximization under a penalty of distance from the maximally best outcome, and let your other values drift within that constraint/attractor.
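One way to make that precise, as a sketch rather than anything fixed by the comment (the symbols $W$, $d$, $\varepsilon$ and $V_i$ are mine): let $W(a)$ be the welfare of action $a$, $W^{\ast}$ the welfare of the maximally best outcome, and $d$ a distance. The admissible set is

$A_\varepsilon = \{\, a : d(W(a), W^{\ast}) \le \varepsilon \,\},$

and within $A_\varepsilon$ you choose by whatever your other values $V_1, \dots, V_n$ favour; $\varepsilon$ fixes how far from the welfare optimum those other values are allowed to pull you.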

Do you just mean VNM axioms? It seems to me that at least token commensurability certainly obtains. Type commensurability quite likely obtains. The problem is that people want the commensurability ratio to be linear on measure, which I see no justification for.

It is certainly true of VNM; I think it is true of a lot more of what we mean by rationality. Not sure I understood your token/type distinction, but it seems to me that token commensurability can only obtain if there is only one type. It does not matter if it is linear, exponential or whatever: if there is a common measure, it would mean this measure is the fundamental value. It might also be that the function is not continuous, which would mean rationality has a few black spots (or that value monism has, which I claim are the same thing).
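To see the structure of that disagreement in symbols (my own sketch; none of this notation is in the thread): commensurability of values $V_1$ and $V_2$ is the claim that there is a common measure $u$ such that options are ranked by comparing $u(a)$ with $u(b)$; linearity is the further claim that $u = c_1 V_1 + c_2 V_2$ for fixed constants $c_1, c_2$. A minimum, a product, or any other increasing but non-linear aggregation would still be a common measure, which is why the exchange rate need not be linear; and if some $u$ exists at all, then $u$ itself plays the role of the single fundamental value, which is the point made above.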

I was referring to the trivial case where the states of the world are actually better or worse in the way they are (token identity), and where, for another world with the same properties this one has (type identity), the moral rankings would also be the same.

About black spots in value monism, it seems that dealing with infinities leads to paradoxes. I'm unaware of what else would be in this class.

I know a lot of reasonable philosophers who are not utilitarians; most of them are not mainstream utilitarians. I also believe the far future (e.g. Nick Beckstead) or future generations (e.g. Samuel Scheffler) is a more general concern than welfare monism, and that many utilitarians do not share this concern (I'm certain to know a few). I believe that if you are more certain about the value of the future than about welfare being the single value, you ought to expand your horizons beyond utilitarianism. It would be hard to provide another Williams in terms of convincingness, but you will find an abundance of all sorts of reasonable non-utilitarian proposals. I already mentioned Jonathan Dancy (e.g. http://media.philosophy.ox.ac.uk/moral/TT15_JD.mp4), my Nozick's Cube, value pluralism and so on. Obviously, it is not advisable to let these matters depend on being pointed.

My understanding is that by valuing complexity and identity in addition to happiness I am already professing to be a moral pluralist. It also seems that I have boundary condition shadows, where the moral value of extremely small amounts of these things is undefined, in the same way that a color is undefined without tone, saturation and hue.

I find the idea that there are valid reasons to act that are not moral reasons weird; I think some folks call them prudential reasons. It seems that your reason to be an EA is a moral reason if utilitarianism is right, and "just a reason" if it isn't. But if not, what is your reason for doing it?

My understanding of prudential reasons is that they are reasons of the same class as those I have to want to live when someone points a gun at me. They are reasons that relate me to my own preferences and survival, not as a recipient of the utilitarian good, but as the thing that I want. They are more like my desire for a back massage than like my desire for a better world. A function from my actions to my reasons to act would be partially a moral function, partially a prudential function.

If you are not acting like you think you should after having complete information and moral knowledge, perfect motivation and reasoning capacity, then it does not seem like you are acting on prudential reasons, it seems you are being unreasonable.

Appearances deceive here because "that I should X" does not imply "that I think I should X". I agree that if both I should X and I think I should X, then by doing Y=/=X I'm just being unreasonable. But I deny that mere knowledge that I should X implies that I think I should X. I translate:

I should X = A/The moral function connects my potential actions to set X.
I think I should X = The convolution of the moral function and my prudential function takes my potential actions to set X.

In your desert scenario, I think I should (convolution) defend myself, though I know I should (morality) not.
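A minimal formalization of that translation, in my own notation (the thread only uses words, and "$\ast$" below stands for whatever combination "convolution" is gesturing at, not the technical convolution integral): writing $A$ for my set of potential actions, $M$ for the moral function and $P$ for the prudential function,

$\text{I should } X \equiv M(A) \subseteq X, \qquad \text{I think I should } X \equiv (M \ast P)(A) \subseteq X.$

In the desert scenario, defending myself lies in $(M \ast P)(A)$ but not in $M(A)$, which is how "I think I should defend myself, though I know I should not" comes out consistent.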

Hence, even if impersonal reasons are all the moral reasons there are, insofar as there are impersonal reasons for people to have personal reasons, these latter are moral reasons.

We are in disagreement. My understanding is that each of the four quadrants can be empty or full. There can be impersonal reasons for personal reasons, personal reasons for impersonal reasons, impersonal reasons for impersonal reasons, and personal reasons for personal reasons. Of course not all people will share personal reasons, and depending on which moral theory is correct, there may well be distinctions in impersonal reasons as well.

Being an EA while fully knowing maximizing welfare is not the right thing to do seems like an instance of psychopathy (in the odd case EA is only about maximizing welfare). Of course, besides these two pathologies, you might have some form of cognitive dissonance or other accidental failures.

In what sense do you mean psychopathy? I can see ways in which I would agree with you, and ways in which not.

Perhaps you are not really that sure maximizing welfare is not the right thing to do.

Most of my probability mass is on the view that maximizing welfare is not the right thing to do, but that maximizing a combination of identity, complexity and welfare is.
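Spelled out as a formula (the linear form and the weights are my illustration; the comment only commits to some combination):

$V(x) = w_I\, I(x) + w_C\, C(x) + w_W\, W(x), \qquad w_I, w_C, w_W > 0,$

with $I$, $C$, $W$ standing for identity, complexity and welfare. The weighted sum is only the simplest case; given the doubts expressed elsewhere in this thread about the common measure having to be linear, the aggregation could just as well be non-linear.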

I prefer this solution of sophisticating the way moral reasons behave to claiming that there are valid reasons to act that are not moral reasons; the latter looks, even more than the former, like shielding the system of morality from the real world. If there are objective moral truths, they had better have something to do with what people want to want to do upon reflection.

One possibility is that morality is a function from person time slices to a set of person time slices, and how far you expand your moral circle is not determined a priori. This would entail that my reasons to act morally when considering only time slices that share 60%+ personal identity with me would look a lot like prudential reasons, whereas my reasons to act morally when accounting for all time slices of minds in this quantum branch and its descendants would be very distinct. The root theory would be this function.
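A hedged way to write that down (the notation and the threshold parameter $\theta$ are mine): let $S$ be the set of person time slices and

$M_\theta : S \to \mathcal{P}(S)$

the function sending a time slice to the set of time slices inside the moral circle fixed by $\theta$. With $\theta$ set to "shares 60%+ personal identity with me", acting on $M_\theta$ looks like prudence; with $\theta$ set to "all time slices of minds in this quantum branch and its descendants", it looks like impartial morality. The root theory is the function itself, with $\theta$ left open rather than fixed a priori.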

The right thing to do will always be an open question, and all moral reasoning can do is recommend certain actions over others, never require them. If there is more than one fundamental value, or if this one fundamental value is epistemically inaccessible, I see no other way out besides this solution.

Seems plausible to me.

Incommensurable fundamental values are incompatible with pure rationality in its classical form.

Do you just mean VNM axioms? It seems to me that at least token commensurability certainly obtains. Type commensurability quite likely obtains. The problem is that people want the commensurability ratio to be linear on measure, which I see no justification for.

It seems to me Williams made his point; or the point I wished him to make to you. You are saying “if this is morality, I reject it”. Good. Let’s look for one you can accept.

I would look for one I can accept if I were given sufficient (convoluted) reasons to do so. At the moment it seems to me that all reasonable people are either some type of utilitarian in practice or are called Bernard Williams. Until I get pointed thrice to another piece that might overwhelm the sentiment I was left with, I see no reason to enter the exploration stage. For the time being, the EA in me is at peace.
