Comment author: Carl_Shulman 02 October 2015 11:09:17PM 17 points

Say you have two interventions, A and B, and two outcome metrics, X and Y. You expect A will improve X by 100 units per dollar and B will improve Y by 100 units per dollar. However, each intervention will also have some smaller effect of uncertain sign on the other outcome metric: A will cause +1 or -1 units of Y, and B will cause +1 or -1 units of X.

It would be silly to decide for or against one of these interventions based on its second-order effect on the other outcome metric:

  • If you think either X or Y is much more important than the other metric, then you just pick based on the more important metric and neglect the other
  • If you think X and Y are of similar importance, again you focus on the primary effect of each intervention rather than the secondary one
  • If you are worried about A harming metric Y because you want to ensure you have expected positive impact on both X and Y, you can purchase offsets by putting 1% of your resources into B, or vice versa for B harming X

Cash transfers significantly relieve the poverty of humans who are alive today, and are fairly efficient at doing so. They are far less efficient at helping or harming non-human animals today, or at increasing or reducing existential risk. Even if they have some negative effect here or there (more meat-eating, habitat destruction, or carbon emissions), the cost of producing a comparable offsetting benefit on that dimension will be small compared to the cash transfer. E.g. an allocation of 90% to GiveDirectly and 10% to offset charities (carbon reduction, meat reduction, nuclear arms control, whatever) will wind up positive on multiple metrics.
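To make the offsetting arithmetic concrete, here is a minimal sketch in Python using the toy numbers from the A/B example above, with the uncertain ±1 cross-effects pinned to their worst case of -1:

```python
# Per-dollar effects from the toy example above: A gives +100 X, B gives
# +100 Y, and each has a +/-1 side effect on the other metric, taken here
# at its worst case of -1.
A = {"X": 100.0, "Y": -1.0}
B = {"X": -1.0, "Y": 100.0}

def portfolio_effect(weight_a, weight_b):
    """Per-dollar effect on each metric of splitting resources between A and B."""
    return {m: weight_a * A[m] + weight_b * B[m] for m in ("X", "Y")}

print(portfolio_effect(0.99, 0.01))  # the 1% offset from the bullet list
# -> roughly {'X': 98.99, 'Y': 0.01}: nearly all of A's effect on X is kept,
#    and even the worst-case impact on Y is now positive.
print(portfolio_effect(0.90, 0.10))  # the 90/10 split described above
# -> roughly {'X': 89.9, 'Y': 9.1}: positive on both metrics with room to spare.
```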

If you have good reasons to give to poverty alleviation rather than existential risk reduction in the first place, then minor impacts on existential risk from your poverty charities are unlikely to reverse that conclusion (although you could make some smaller offsetting donations if you wanted to have a positive balance on as many moral theories as possible). It makes sense to ask how good those reasons really are and whether to switch, but not to worry too much about small second-order cross-cause effects.

ETA: As I discuss in a comment below, moral trade gives us good reason to be reciprocally supportive of efforts that very efficiently serve different conceptions of the good at only comparatively small cost according to other conceptions.

Comment author: Vincent_deB 03 October 2015 12:36:52AM 2 points

This is useful but doesn't entirely answer William's question. To put it another way: suppose GiveDirectly reduced extreme poverty in East Africa by 50%. What would your best estimate of the effect of that on x-risk be? I'd expect it to be quite positive, but I haven't thought about how to estimate the magnitude.

Comment author: MichaelDickens 16 September 2015 04:13:02PM 4 points

The great thing about nested comments is that derailments are easy to isolate. :)

Why do you think that these are the only things of value?

I don't understand what it would mean for anything other than positive and negative experiences to have value. I believe that when people say they inherently value art (or something along those lines), the reason is that the thought of art existing makes them happy and the thought of art not existing makes them unhappy, and it's the happy or unhappy feelings that have actual value, not the existence of art itself. If people thought art existed but it actually didn't, that would be just as good as if art existed. Of course, when I say this you might react negatively to the idea of art not existing even though nobody would know it doesn't exist; but that's because, in entertaining the thought experiment, you now know it doesn't exist, so you still experience the negative feelings associated with art not existing. If you didn't experience those feelings, it wouldn't matter.

do you think non-humans like chickens and fish have equally bad experiences in a month in a factory farm as a human would?

I expect there's a high probability (maybe 50%) that factory farms are just as bad for chickens as they are for humans, and a somewhat lower probability (maybe 25%) that they are just as bad for fish. I expect it's more likely that factory farms are worse for humans than that they're worse for chickens/fish, so in expectation, they're worse for humans, but not much worse.
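As a rough illustration of that last expectation, here is a minimal sketch; the 0.1 weight for the "not as bad" branch is purely an assumed placeholder, since nothing above gives a number for it:

```python
# Expected badness of a month in a factory farm, relative to a human, as a
# simple two-branch expectation. LOW is an assumed placeholder, not a figure
# from the comment.
LOW = 0.1

def expected_relative_badness(p_as_bad_as_human):
    return p_as_bad_as_human * 1.0 + (1 - p_as_bad_as_human) * LOW

print(expected_relative_badness(0.50))  # chickens -> 0.55
print(expected_relative_badness(0.25))  # fish     -> roughly 0.325
```

On these made-up numbers, factory farms are worse for humans in expectation, but within a small factor rather than orders of magnitude.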

I don't know how consciousness works, although I believe it's fundamentally an empirical question. My best guess is that certain types of mental structures produce heightened consciousness in a way that gives a being greater moral value, but that most of the additional neurons that humans have do not contribute at all to heightened consciousness. For example, humans have tons of brain space devoted to facial recognition, but I don't expect that we can feel greater levels of pleasure or pain as a result of having this brain space.

Can you give an example of the ideal form of joy?

The best I can do is introspect about what types of pleasure I enjoy most and how I'm willing to trade them off against each other. I expect that the happiest possible being can be much happier than any animal; I also expect that it's possible in principle to make interpersonal utility comparisons, so we could know what a super-happy being looks like. We're still a long way away from being able to do this in practice.

What's the most unintuitive result that you're prepared to accept, and which gives you most pause?

There are a lot of results that used to make me feel uncomfortable, but I didn't consider this good evidence that utilitarianism is false. They don't make me uncomfortable anymore because I've gotten used to them. Whichever result gives me the most pause is one that I haven't heard of before, so I haven't gotten used to it. I predict that the next time I hear a novel thought experiment where utilitarianism leads to some unintuitive conclusion, it will make me feel uncomfortable but I won't change my mind because I don't consider discomfort to be good evidence. Our intuitions are often wrong about how the physical world works, so why should we expect them to always be right about how the moral world works?

At some point we have to use intuition to make moral decisions--I have a strong intuition that nothing matters other than happiness or suffering, and I apply this. But anti-utilitarian thought experiments usually prey on some identifiable cognitive bias. For example, the repugnant conclusion takes advantage of people's scope insensitivity and inability to aggregate value across separate individuals.
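As a small worked illustration of that last point, the aggregation behind the repugnant conclusion is simple arithmetic that scope-insensitive intuition handles badly; the populations and welfare levels below are made up:

```python
# Total welfare = population * average welfare, with made-up numbers.
world_a = 1_000_000 * 100.0           # few people at very high welfare: 1e8
world_z = 100_000_000_000 * 0.01      # vastly more people, lives barely
                                      # worth living: 1e9
print(world_z > world_a)  # True: the total favors Z, even though intuition,
                          # anchored on the low average, rebels at the scale.
```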

Comment author: Vincent_deB 18 September 2015 03:23:11PM 0 points

I expect there's a high probability (maybe 50%) that factory farms are just as bad for chickens as they are for humans, and a somewhat lower probability (maybe 25%) that they are just as bad for fish. I expect it's more likely that factory farms are worse for humans than that they're worse for chickens/fish, so in expectation, they're worse for humans, but not much worse.

Whoa, I didn't realize that anyone thought that; it would change my views greatly if I came to believe it.

Comment author: MichaelDickens 18 September 2015 03:34:28AM 0 points

Can you talk more about what convinced you that they're a good giving opportunity on the margin?

I asked Tobias Pulver about this specifically. He told me about their future plans and how they'd like to use marginal funds. They have things that they would have done if they'd had more money but couldn't do. I don't know if they're okay with me speaking about this publicly but I invite Tobias or anyone else at REG to comment on this.

I know you know I think this, but I think it's better for the health of ACE if their supporters divide their money between ACE and its recommended charities, even if the evidence for its recommended charities isn't currently as strong as I'd like.

If ACE thought this was best, couldn't it direct some of the funds I donate to its top charities? (This is something I probably should have considered and investigated, although it's moot since I'm not planning on donating directly to ACE.)

Would I expect even the couple of actual PhDs MIRI hired recently to do anything really groundbreaking? They might, but I don't see why you'd think it likely.

AI safety is such a new field that I don't expect you need to be a genius to do anything groundbreaking. MIRI researchers are probably about as intelligent as most FLI grantees. But I expect them to be better at AI safety research because MIRI has been working on it for longer and has a stronger grasp of the technical challenges.

Comment author: Vincent_deB 18 September 2015 03:21:42PM 1 point

Can you talk more about what convinced you that they're a good giving opportunity on the margin?

I asked Tobias Pulver about this specifically. He told me about their future plans and how they'd like to use marginal funds. They have things that they would have done if they'd had more money but couldn't do. I don't know if they're okay with me speaking about this publicly but I invite Tobias or anyone else at REG to comment on this.

I heard - albeit second-hand, and last year - of two people involved, Lukas Gloor and Tobias Pulver, saying they thought that the minimal share of GBS/EAF manpower being invested in REG - 1.5 FTEs - was sufficient.

Comment author: Paul_Christiano 15 June 2015 07:33:55PM 0 points

Submission: Ben Kuhn's blog post Does Donation Matching Work?

At the time of submission the post had been read by about 300 people for a total of 20 hours. It has since been posted to hacker news and read by more like 1500 people for a total of 60 hours.

Comment author: Vincent_deB 18 June 2015 05:23:55PM 0 points

Is the number of reads really relevant? How come? I figure people who read content generally don't act on it, and certainly not in high-impact ways.

Comment author: Paul_Christiano 15 June 2015 08:04:55PM 2 points

Our very crude evaluation:

We expect that EA Madison has reached something like 50 person-hours of attention (excluding Gina and Ben), though that could easily be off by a factor of 2 in either direction and we have talked to no one else involved with it.

Our very rough guess for the stimulated donations per hour was around $20. A lot of this comes in the form of general engagement rather than directly in the form of donations. This gives a value of about $1000 of donations stimulated.
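A minimal sketch of that arithmetic, including the stated factor-of-2 uncertainty on the person-hours figure:

```python
person_hours = 50        # attention reached so far; could be off 2x either way
dollars_per_hour = 20    # rough guess for donations stimulated per hour

central = person_hours * dollars_per_hour        # -> 1000
low, high = central / 2, central * 2             # factor-of-2 range on hours
print(central, (low, high))                      # 1000 (500.0, 2000)
```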

We expect it to be possible to make a much less noisy evaluation in the future once we can evaluate some of the impact of the group in retrospect. We think our evaluation will predictably rise in expectation over time, reflecting our general preference for purchasing things after their impacts are easier to assess. Our actual expectation for the project's impact obviously can't predictably rise.

Comment author: Vincent_deB 18 June 2015 05:22:29PM 0 points

What do you think the average (mean) expected value of an EA group is?

In response to May in EA Projects
Comment author: Vincent_deB 12 June 2015 01:34:07AM 1 point
Comment author: HaukeHillebrandt 11 June 2015 09:25:50AM 1 point

I've already posted this on the FB group - and it'll go up on the Giving What We Can blog after the review!

Comment author: Vincent_deB 12 June 2015 01:33:19AM 0 points

I look forward to seeing it there! Will it be a slightly different version taking into account feedback?

Comment author: Jeff_Kaufman 06 June 2015 11:52:20AM 2 points

For an EA, another consideration is that I'd expect a movement of people who give as they earn to be more persuasive and to grow faster than a movement of people who invest-to-give or borrow-to-give. I think if you're public about your giving (and you should be!), this is a very large factor.

Comment author: Vincent_deB 10 June 2015 06:33:07PM 1 point
Comment author: ChrisSmith 04 June 2015 11:52:29PM 3 points

As someone who really admired George Monbiot as a teenager, I'm slightly surprised to hear him described as an effective altruist in spirit.

I admire his transparency and his willingness to change his mind, but he does strike me as someone quite committed to an ideology (generally a progressive one not too far from my own!) around issues like state intervention and the ownership/delivery of public services. I'm also not convinced that rewilding is a promising or cost-effective way of tackling the environmental issues which he (quite possibly rightly) prioritises so much. I'm not saying I don't think he is a good person, but I am saying it seems a stretch to think of him as an effective altruist in spirit. Do you know him personally?

As someone who works in one of the industries he is upset about, I actually think the argument in his piece is pretty good. I can think of several friends, none of whom would consider themselves effective altruists, who have indeed followed the sort of path he outlines. All too often, people do not make a difference from the inside.

I take your point that there are counterexamples where people do good from the inside (I would hope to count myself here, as someone donating 15%+ and triggering donations from colleagues worth around twice that last year), but as a general phenomenon his piece is pretty sound. A rebuttal would be difficult, but a response could go along the lines of "Not all City workers" or similar. Do you think this would still be valuable?

Comment author: Vincent_deB 10 June 2015 06:31:45PM 0 points

Yes, I agree. 'Effective altruist' appears to me to be a label picking out a very particular and narrow movement and group of people, despite the broadness of the words we happen to have adopted as a label.

Comment author: Vincent_deB 10 June 2015 06:29:34PM 1 point

As with the GoogleDoc you posted before, I think this would get the best response on the Giving What We Can blog, or as a quick link in the Effective Altruists Facebook group :-) It could go in a forum open thread too!
