Comment author: MichaelPlant 08 July 2017 09:02:57PM 0 points [-]

Very much enjoyed this. Good to see the thinking developing.

My only comment is on simple replaceability. I think you're right to say this is too simple in an EA context, where taking a role could set off a cascade of replacements, or where the work wouldn't have got done anyway.

Do you think simple replaceability doesn't apply outside the EA world? For example, person X wants to be a doctor because they think they'll do good. If they take a place at med school, should we expect that to 'free up' the person who doesn't get the place to go and do something else instead? My assumption is that the borderline medical candidate is probably not that committed to doing the most good anyway.

To push the point in a familiar case, assume I'm offered a place at an investment bank and I was going to E2G, but I decide to do something more impactful, like work at an EA org. It's unlikely that the person who gets my job and salary instead would be donating to good causes.

If you think replaceability is sometimes true and other times not, it would be really helpful to specify when. My guess is that motivation and ability to be an EA play the biggest role.


The marketing gap and a plea for moral inclusivity

In this post, I make three points. First, I note there seems to be a gap between what EA markets itself as being about (effective poverty reduction) and what many EAs really believe is important (poverty isn't the top priority), and that this marketing gap is potentially problematic. Second, I propose... Read More
Comment author: Robert_Wiblin 02 July 2017 10:24:22PM 0 points [-]

Imagine a universe that lasts forever, with zero uncertainty, constant equally good opportunities to turn wealth into utility, and constant high investment returns (say, 20% per time period).

In this scenario you could (mathematically) save your wealth for an infinite number of periods and then donate it, generating infinite utility.

It sounds paradoxical, but infinities generally are, and the paradox only arises if you think there's a sufficient chance, relative to the interest rate, that the next period will exist and offer opportunities to turn wealth into utility - that is, if you 'expect' an infinitely long-lasting universe.

A less counterintuitive approach with the same result would be to save everything at that 20% return and also donate some amount less than 20% of the principal each period. That way the principal continues to grow each period, while each period you give away some amount between 0% and 20% (exclusive) of it and generate a finite amount of utility. After an infinite number of time periods you have accumulated an infinite principal and also generated infinite utility - just as high an expected value as the 'save it all for an infinite number of time periods and then donate it' approach suggested above!
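Here is a minimal numerical sketch of that idea (my own illustration, not from the comment), assuming the 20% return above and an arbitrary 10% donation rate taken from the post-growth principal each period:

```python
# Illustrative only: 20% return per period, hypothetical 10% donation rate
# taken from the principal after it has grown each period.

def simulate(periods, principal=1.0, return_rate=0.20, donation_rate=0.10):
    """Return (final principal, total donated) after `periods` periods."""
    total_donated = 0.0
    for _ in range(periods):
        principal *= 1 + return_rate        # investment return
        donation = donation_rate * principal
        principal -= donation               # give away less than the return
        total_donated += donation
    return principal, total_donated

for n in (10, 50, 100):
    p, d = simulate(n)
    print(f"{n} periods: principal ~ {p:.2e}, total donated ~ {d:.2e}")
```

Both numbers grow without bound as the number of periods increases, which is the point: you never stop giving, yet the principal still heads towards infinity.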

Infinities are weird. :)

Comment author: MichaelPlant 03 July 2017 03:14:36PM 2 points [-]

I think you and Ben have picked up on the part of the problem I wasn't focusing on. I'm less concerned about the totalist version: I think you can spin a version of the story where you should donate at the end of time, and that's just the best thing you can do.

My point was that, given you accept the totalist philanthropist's paradox, there's an additional weirdness for person-affecting views. That's the bit I found interesting.

Although, I suppose there's a reframing of this that makes the puzzle more curious. Totalists get a paradox where they recognise they should donate at the end of time, and that feels odd. Person-affecting views might seem to dodge this problem by denying the value of the far future, but they run into another kind of paradox of their own.

Comment author: MichaelPlant 02 July 2017 10:39:32PM 1 point [-]

Could you say what forum volunteering involves and how much time you spend each week doing it?

Comment author: Ben_Todd 01 July 2017 11:47:20PM *  3 points [-]

What’s the solution if we ignore the practical concerns? One option is to note that, at some stage, you (or your executors) will have enough money to solve all the world’s problems. At that point, you should spend it, as there’s no value in growing your investment further. This won’t work if the financial costs of solving the world’s problems keep growing and grow faster than your investment does. However, if one supposes the universe will eventually end – all the stars will burn out at some point – then you will eventually reach a stage where it’s better to spend the money. If you wait any longer there won’t be any people left. This might not be a very satisfactory response, but then it is called a ‘paradox’ for a reason.

I'm not sure I understand why this is a paradox.

Ignoring practical concerns, there are basically two effects:

1) If you wait, then you can compound your investment by x%.

2) If you wait, then the world gets wealthier, so the social return per dollar given to charity goes down by y%. (No pure time discounting required.)

If x% > y%, then it's better to wait; and vice versa.

As a first approximation, both x and y are roughly equal to real economic growth, so waiting and giving now are about equally good.

To a second order, it can go either way depending on the situation.

It only seems counterintuitive if you can argue that x > y for most of history, so you should never give your money, but I don't see why you would think that.
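For concreteness, here is a rough sketch of that comparison (my own illustration, not the model at the link below): a dollar invested grows by x per period, while the social value of a marginal dollar falls by y per period.

```python
# Illustrative only: compare giving now with investing and giving later,
# where x is the per-period investment return and y is the per-period
# decline in the social value of a marginal dollar.

def value_of_giving_after(delay, x, y, dollars=1.0):
    """Utility from investing `dollars` for `delay` periods, then donating."""
    grown = dollars * (1 + x) ** delay                 # compounding returns
    value_per_dollar = 1.0 / (1 + y) ** delay          # diminishing social returns
    return grown * value_per_dollar

for delay in (0, 10, 50):
    print("x > y:", delay, round(value_of_giving_after(delay, x=0.05, y=0.03), 3))
    print("x < y:", delay, round(value_of_giving_after(delay, x=0.03, y=0.05), 3))
```

When x > y the value keeps rising with delay (so wait); when x < y it keeps falling (so give now); when x = y it is flat, matching the first-approximation claim above.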

And in reality, practical concerns would come in long before this. In practice, you couldn't ensure money spent long after your death would do much good, so you should very likely give it within your lifetime, or soon after.

More detail on these models: http://globalprioritiesproject.org/2015/02/give-now-or-later/

Comment author: MichaelPlant 02 July 2017 07:52:11PM 1 point [-]

Hello Mr T.

The paradox only comes about if you think it's generally true that it's better to invest and give later rather than now. That might not be true for various practical reasons (i.e. the ones you gave), but I wanted to ignore those for the sake of argument, so I could present a new problem. If you think later is generally better than now, and you're a totalist, it seems like you should try to grow that money until the end of time. This seems somewhat odd: you were planning to invest and do a bit more good tomorrow; now you're investing for 100,000 years.

If you grant the structure of the paradox for the totalist, person-affecting views have an additional problem.

Comment author: TruePath 26 June 2017 07:33:29AM 4 points [-]

I simply don't believe that anyone is really (when it comes down to it) a presentist or a necessitist.

I don't think anyone is willing to actually endorse making choices which eliminate the headache of an existing person at the cost of bringing an infant into the world who will be tortured extensively for all time (but no one currently existing will see it and be made sad).

More generally, these views have more basic problems than anything considered here. Consider, for instance, the problem of personal identity. For either presentism or necessitism to be true there has to be a PRINCIPLED fact of the matter about when I become a new person if you slowly modify my brain structure until it matches that of some other possible (but not currently actual) person. The right answer to these Theseus's-ship-style worries is to shrug and say there isn't any fact of the matter, but the presentist can't take that line because, for them, there are huge moral implications to where we draw the line.

Moreover, both these views face serious puzzles about when an individual exists. Is it when they actually generate qualia (if not, you risk saying that the fact they will exist in the future means they exist now)? How do we even know when that happens?

Comment author: MichaelPlant 26 June 2017 10:12:16AM 1 point [-]

I'm probably a necessitarian, and many (most?) people implicitly hold person-affecting views. However, that's beside the point. I'm neither defending nor evaluating person-affecting views, or indeed any positions in population axiology. As I mentioned, and as is widely accepted by philosophers, all the views in population ethics have weird outcomes.

FWIW, and this is unrelated to anything said above, nothing about person-affecting views needs to rely on personal identity. The entity of concern can just be something that is able to feel happiness or unhappiness. This is typically the same line total utilitarians take. What person-affectors and totalists disagree about is whether (for one reason or another) creating new entities is good.

In fact, all the problems you've raised for person-affecting views also arise for totalists. To see this, imagine a scenario where a mad scientist is creating a brain inside a body, and the body is being shocked with electricity. Suppose he grows the brain to a certain size, takes bits out, shrinks it, grows it again, and so on. Now the totalist needs to take a stance on how much harm the scientist is doing and draw a line somewhere. The totalist and the person-affector can draw the line in the same place, wherever that is.

Whatever puzzles qualia poses for person-affecting views also apply to totalism (at least, the part of morality concerned with subjective experience).


The Philanthropist’s Paradox

TL;DR. Many effective altruists wonder whether it's better to give now or invest and give later. I’ve realised there is an additional worry for those who (like me) are sceptical of the value of the far future. Roughly, it looks like such people are rationally committed to investing their money... Read More
Comment author: MichaelPlant 14 June 2017 10:45:21AM 0 points [-]

I agree the writing is scattered. Task 1) is to get the writing on a given topic into a single place. That still leaves task 2): getting all those collated writings into a single place.

On 2), it strikes me it would be good if CEA compiled a list of EA-relevant resources. An alternative would be someone creating an edited collection of the best recent EA work on a range of topics. Or, if we have an academic EA Global, treating that like a normal academic conference and publishing the presented papers.

Comment author: MichaelPlant 14 June 2017 10:16:36AM 2 points [-]

This is a purposefully vague warning for reasons that should not need to be said. Unfortunately, this forces this post to discuss these issues at a higher level of generality than might be ideal, and so there is definitely merit to the claim that this post only deals in generalisations. For this reason, this post should be understood more as an outline of an argument than as an actual crystalized argument

I found this post unhelpful and this part of it particularly so. Your overall point - "don't concede too much on important topics" - seems reasonable, but as I don't know what topics you're referring to, or what would count as 'too much' on those, I can't learn anything.

More generally, I find EAs who post things of the flavour "we shouldn't do X, but I can't tell you what I mean by X for secret reasons" annoying, alienating and culty and wish people wouldn't do it.

Comment author: MichaelPlant 09 June 2017 11:52:17AM 4 points [-]

This is all very exciting and I'm glad to see this is happening.

A couple of comments.

  1. The deadline for this is only three weeks, which seems quite tight.

  2. Could you give examples of the types of things you wouldn't fund or are very unlikely to fund? That would avoid you getting lots of applications you don't want, as well as people spending time on applications that will get rejected. For instance, would/could CEA provide seed funding for altruistic for-profit organisations, like start-ups? Asking for a friend...
