
Ethical offsetting is antithetical to EA

[My views are my own, not my employer's. Thanks to Michael Dickens for reviewing this post prior to publication.]

[More discussion here]


Summary

Spreading ethical offsetting is antithetical to EA values because it encourages people to focus on negating harm they personally cause rather than doing as much good as possible. Also, the most favored reference class for the offsets is rather vague and arbitrary.


There are a few positive aspects of using ethical offsets, and situations in which advocating ethical offsets may be effective.



Definition

Ethical offsetting is the practice of undoing harms caused by one's activities through donations or other acts of altruism. Examples of ethical offsetting include purchasing carbon offsets to make up for one’s carbon emissions and donating to animal charities to offset animal product consumption. More explanation and examples are available in this article.



Against offsetting

I think ethical offsetting is antithetical to EA values, and have three main objections to it.

   1) In practice, people doing ethical offsetting use vague and arbitrary reference classes.

   2) It's not the most effectively altruistic thing to do.

   3) It spreads suboptimal and non-consequentialist memes/norms about doing good.

 


1) The reference class people pick for ethical offsets is arbitrary.

For example, let's say I cause some harm by buying milk that came from a cow that was treated poorly, and I want to negate the harm. I have a bunch of options.

 

I cannot undo the exact harm done by my purchase once it's happened, but I could (try to) seek out that specific cow and try to do something nice for her, negating the harm I caused for that specific cow's utility calculus. I could donate some money to a charity that helps cows, negating my harmful effect on the total utility of cow-kind. I could donate some money to a charity that helps all farmed animals, negating my harmful effect on farmed animal-kind. Or I could donate to whatever charity I thought did the most good per dollar, negating my negative impact on the universe most cost-effectively but less directly.

 

People seem to settle on a sort of broad cause-area-level offsetting preference (e.g. donating to help farmed animals). While this reference class seems intuitive, it's ultimately arbitrary*.

 


2) Ethical offsetting isn't the most effectively altruistic thing.

You should do the things you think are most effectively altruistic, and you should donate to the charities you think are most effective. If you eat dead animals and don't believe animal charities are the most effective charities, I don't think you should donate to them.


Like everything else, ethical offsetting has opportunity costs; you could use that money to donate to the best charity, which is often different from the charity you’re using for ethical offsetting. It causes a harm relative to the world where you donate only to the most effective charity.


Even if you think the charity you donate your offsetting money to is the most effective, I don’t think it’s helpful to do ethical offsetting. Much of the suffering in the world isn’t directly caused by anyone, so an offsetting mindset increases the probability that you’ll miss big sources of suffering down the line. It causes a bias towards addressing anthropogenic harms, rather than harms from nature.


3) Ethical offsetting spreads anti-EA memes and norms

Ethical offsetting reinforces a preoccupation with not doing harmful things (instead of not allowing harmful things to happen, and taking action when they do). But EAs should (and usually do) focus on the sufferers, not themselves.

 

By encouraging others to offset, we set norms oriented around people’s personal behavior. We encourage an inefficient model of charity that involves donating based on one’s activities, not one’s abilities or the needs of charities that help neutralize various harms. We miss the chance to communicate about core EA ideas like cause prioritization and room for more funding by establishing a framework that has little room for them.


There are some other dangers involved in ethical offsetting, although I haven’t seen much evidence they actually occur: Offsetting may also encourage unhealthy scrupulosity about the harms we inevitably contribute to in order to function (although it could also help alleviate anxiety about them). And as Scott Alexander points out, offsetting could lead people to think it’s acceptable to do big harmful things as long as they offset them. This could contribute to careless and destructive norms about personal behavior.

 


Caveats

Offsetting is better than nothing. There may be situations in which ethical offsetting is the biggest plausible ask one can make. In such situations, I think bringing up the idea of ethical offsetting may be appropriate. And it may be an interesting conversation starter about sources of suffering and ways of alleviating them.

 

I've previously discussed my concerns about the obstacles to changing one's mind about cause prioritization, and I can imagine ethical offsetting at the cause area level being used to remind oneself about various causes of suffering in the world and the organizations working to stop them. This could make it easier to change one’s mind about what’s most effective. It seems somewhat plausible that offsetting would help make the community better at updating and better informed.

 

It may be really psychologically beneficial for some people, similar to the way donations for the dubiously-named fuzzies (donations for causes that are especially personally meaningful to the donor rather than maximally effective) sometimes are.

 

I think the argument that we should focus on doing lots of good rather than on fixing harms we cause could drive destructive thoughtlessness about personal behavior, so I'm wary of making it too frequently. This is the concern that worries me most.


*The reference class Schelling point is stronger with carbon offsets, where the harmful thing is adding some carbon dioxide to the atmosphere. Carbon dioxide molecules are pretty interchangeable. If you remove as many as you added, you neutralize the harm from your emissions-causing action very directly, which is intuitively appealing.

 

All suffering may be equally important, but not all forms of harm are the same, or even similar. How similar the harm you offset is to the harm you cause can vary a lot. Few other types of offsetting I’ve heard of allow the opportunity to create a future so similar to the one where the harmful activity had never been done.

Comments (53)

Comment author: Ben_Todd 05 January 2016 08:48:06PM 15 points

This is a bit of a side point, but to what extent do EAs actually promote ethical offsetting? It seems to me like it normally gets raised in the following ways:

  1. A dominance argument to show that ethical consumption isn't the most important thing to focus on. Hypothetical example: If I think AMF is the best donation opportunity, but donating to The Humane League is better than going vegetarian (because it would be very cheap to "offset" my diet), it shows that donations to AMF are very much better than going vegetarian. This shows going vegetarian makes a small contribution to my potential social impact, so I shouldn't do it unless it involves negligible sacrifice.

  2. As an option for non-consequentialist minded people who don't just want to focus on the best activities, because they have special obligations to avoid doing certain types of harm.

It doesn't seem like EAs promote ethical offsetting as a generally good thing to do. Rather, EAs suggest identifying the highest leverage ways for you to make a difference in the world, and focusing your attention on those. (and not worrying about other ways to have more impact that involve more sacrifice)

Comment author: ClaireZabel 06 January 2016 05:12:59AM 3 points

I don't think many EAs spend a lot of time promoting it, but I hear EAs discuss the idea positively (and, I think, uncritically) with one another from time to time. It was more common shortly following the SSC article.

Comment author: MichaelDickens 06 January 2016 03:37:19PM 4 points

If I think AMF is the best donation opportunity, but donating to The Humane League is better than going vegetarian (because it would be very cheap to "offset" my diet), it shows that donations to AMF are very much better than going vegetarian. This shows going vegetarian makes a small contribution to my potential social impact, so I shouldn't do it unless it involves negligible sacrifice.

Does it actually show this? I generally hear the argument go something like this:

  1. You can probably convert a lot of vegetarians by donating to The Humane League, which is better than becoming vegetarian yourself. Therefore donating to THL is better than being vegetarian.
  2. Naive estimates say THL does more good than AMF, but AMF has much more robust evidence than THL, so donating to AMF is better.
  3. Therefore donating to AMF is better than being vegetarian.

Parts 1 and 2 use contradictory claims. Part 1 claims that naive expected value dominates, and part 2 claims that robustness of evidence dominates.

Comment author: Carl_Shulman 06 January 2016 06:00:08PM 5 points

Michael, do you have an example? I've never seen the union of those 3 in one argument before, although I have seen each of the three claims made by different people.

E.g. it doesn't describe this post by Jeff Kaufman or this by Greg Lewis. The usual reasons I hear from such people favoring AMF over THL are greater flow-through effects or lower weight on nonhuman animals.

Separately, I hear people, e.g. Tom Ash and Peter Hurford, saying something like #2, but they are themselves vegetarian, and not making arguments for offsetting that I have seen. Indeed, they have challenged it on the basis that the estimates for ACE charities are not robust, which is consistent and contra the argument you described.

Comment author: Peter_Hurford 06 January 2016 10:24:11PM 3 points

You're correct that Tom and I both assert something along the lines of #2 but have never argued #3.

Comment author: MichaelDickens 06 January 2016 07:46:18PM 2 points

I hear people separately make #1 and #2; I can't recall hearing someone say both #1 and #2 in a single breath. But if you favor AMF over THL because AMF has stronger evidence behind it, that doesn't preclude going vegetarian. "AMF is better than THL" is not a good argument against being vegetarian, and doesn't show that vegetarianism is negligible compared to AMF donations, which is the argument Ben was quoting.

Comment author: Carl_Shulman 06 January 2016 09:37:14PM 3 points

So you don't actually hear people making the argument you mentioned, and the published arguments by Kaufman and Lewis don't suffer from the inconsistency you mention? Kaufman makes an argument that counting human and cow lives equally, modest AMF donations can be a bigger deal than dairy consumption, while Lewis argues that if one takes ACE estimates seriously, then modest donations to ACE-recommended charities can be a bigger deal than general carnivory.

On the question of donations to AMF vs THL, Kaufman weights AMF over ACE charities because he cares less about nonhuman animals than humans. Some others do so because of flow-through effects. Lewis is vegetarian, but I think mainly donates to poverty and existential risk related things, and I don't know his precise reasons but they aren't germane to his essay.

"is the argument Ben was quoting."

Ben's description didn't specify someone thinking AMF was better because they didn't believe in the robustness of THL 'animals spared' estimates. You inserted that, which created the tension in your hypothetical argument. People who favored AMF over THL because of flow-through effects, or because of weighting humans more, wouldn't have that tension (I would argue the flow-through view would create other tensions, but that's a different story).

Comment author: MichaelDickens 08 January 2016 10:12:58PM 2 points

I think you're right actually. A lot of people who prefer AMF to THL are still vegetarian, and that's totally reasonable and self-consistent.

Comment author: ScottA 10 January 2016 08:12:17AM 11 points

I don't think ethical offsetting is antithetical to EA. I think it's orthogonal to EA.

We face questions in our lives of whether we should do things that harm others. Two examples are taking a long plane flight (which may take us somewhere we really want to go, but also release a lot of carbon and cause global warming) or whether we should eat meat (which might taste good but also contribute to animal suffering). EA and the principles of EA don't give us a good guide on whether we should do these things or not. Yes, the EA ethos is to do good, but there's also an understanding that none of us are perfect. A friend of a friend used to take cold showers, because the energy that would have heated her shower would be made by a polluted coal plant. I think that's taking ethical behavior in your personal life too far. But I also think that it's possible to take ethical behavior in your personal life not far enough, and counterproductively shrug it off with "Well, I'm an EA, who cares?" But nobody knows exactly how far is too far vs. not far enough, and EA doesn't help us figure that out.

Ethical offsetting is a way of helping figure this out. It can be either a metaphorical way, eg "I just realized that it would only take 0.01 cents to offset the damage from this shower, so forget about it", or a literal way "I am actually going to pay 0.01 cents to offset the costs of this shower."

As such, I think all of your objections to offsetting fall short:

  1. The reference class doesn't particularly matter. The point is that you worried you were doing vast harm to the world by taking a hot shower, but in fact you're only doing 0.01 cents of harm to the world. You can pay that back to whoever it most soothes your conscience to pay it back to.

  2. Nobody is a perfectly effective altruist who donates 100% of their money to charity. If you choose to donate 10% of your money to charity, that remaining 90% is yours to do whatever you want with. If what you want is to offset your actions, you have just as much right to do that as you have to spend it on booze and hookers.

  3. Ethical offsetting isn't an "anti-EA meme" any more than "be vegetarian" or "tip the waiter" are "anti-EA memes". Both involve having some sort of moral code other than buying bednets, but EA isn't about limiting your morality to buying bednets, it's about that being a bare minimum. Once you've done that, you can consider what other moral interests you might have.

People who become vegetarian believe that, along with their charitable donations, they feel morally pushed to being vegetarian. That's okay. People who want to offset meat-eating believe that, along with their charitable donations, they feel morally pushed to offset not being vegetarian. That's also okay. As long as they're not taking it out of the money they've pledged to effective charity, it's not EA's business whether they want to do that or not, just as it's not EA's business whether they become vegetarian or tip the waiter or behave respectfully to their parents or refuse to take hot showers. Other forms of morality aren't in competition with EA and don't subvert EA. If anything they contribute to the general desire to build a more moral world.

Comment author: ClaireZabel 13 January 2016 05:44:11AM 3 points

[written when very tired]

Other forms of morality aren't in competition with EA and don't subvert EA. If anything they contribute to the general desire to build a more moral world.

They can be in competition with EA, or subvert it. I think most do, if you follow them to their conclusions. Philanthrolocalism is a straightforward example of a philanthropic practice that seems to be in direct conflict with EA. But more broadly, many ethical frameworks like moral absolutism come into conflict with EA ideas pretty fast. You can say most EAs don't only do EA things, and I'd agree with you. And you can say people shouldn't let EA ideas determine all their behaviors, and I'd also agree with you.

And additionally, for most ideologies, most people fall short much of the time. Christians sin, feminists accidentally support the patriarchy, etc. That doesn't mean sinning isn't antithetical to being a good Christian or supporting the patriarchy to being a good feminist. You can expect people to fall short, and accept them, and not blame them, and celebrate their efforts anyway, without pretending those things were good or right.

Ethical offsetting isn't an "anti-EA meme" any more than "be vegetarian" or "tip the waiter" are "anti-EA memes". Both involve having some sort of moral code other than buying bednets, but EA isn't about limiting your morality to buying bednets, it's about that being a bare minimum.

Since when is EA about buying bednets being the bare minimum? That seems like an unusual definition of EA. Many EAs think obligation framings around giving are wrong or not useful. EA is about doing as much good as possible. EAs try to figure out how to do that, and fall short, and that's to be expected, and great that they try! But an activity one knows doesn't do the most good (directly or indirectly) should not be called EA.

From all this, you could continue to press your argument that they're merely orthogonal. I might have agreed, until I started seeing EAs trying to convince other EAs to do ethical offsetting in EA fora and group discussions. At that point, it's being billed (I think) as an EA activity and taking up EA-allocated resources with specifically non-EA principles (in particular, I think practices driving (probably already conscientious!) individuals to focus on the harm they commit rather than on seeking out great sources of suffering have been among the most counterproductive habits of general do-goodery in recent history).

Without EA already existing, ethical offsetting may have been a step in the right direction (I think it's probably 35% likely that spreading the practice was net positive). With EA, and amongst EAs, I think it's a big step back.

That said, I agree with you that:

Ethical offsetting is a way of helping figure this out. It can be either a metaphorical way, eg "I just realized that it would only take 0.01 cents to offset the damage from this shower, so forget about it", or a literal way "I am actually going to pay 0.01 cents to offset the costs of this shower.

Comment author: ScottA 13 January 2016 02:23:02PM 3 points

Since when is EA about buying bednets being the bare minimum? That seems like an unusual definition of EA. Many EAs think obligation framings around giving are wrong or not useful. EA is about doing as much good as possible. EAs try to figure out how to do that, and fall short, and that's to be expected, and great that they try! But an activity one knows doesn't do the most good (directly or indirectly) should not be called EA.

I think "do as much good as possible" is not the best framing, since it means (for example) that an EA who eats at a restaurant is a bad EA, since they could have eaten ramen instead and donated the difference to charity. I think it's counterproductive to define this in terms of "well, I guess they failed at EA, but everyone fails at things, so that's fine"; a philosophy that says every human being is a failure and you should feel like a failure every time you fail to be superhuman doesn't seem very friendly (see also my response to Squark above).

My interpretation of EA is "devote a substantial fraction of your resources to doing good, and try to use them as effectively as possible". This interpretation is agnostic about what you do with the rest of your resources.

Consider the decision to become vegetarian. I don't think anybody would think of this as "anti-EA". However, it's not very efficient - if the calculations I've seen around are correct, then despite being a major life choice that seriously limits your food options, it's worth no more than a $5 - 50 donation to an animal charity. This isn't "the most effective thing" by any stretch of the imagination, so are EAs still allowed to do it? My argument would be yes - it's part of their personal morality that's not necessarily subsumed by EA, and it's not hurting EA, so why not?

I feel the same way about offsetting nonvegetarianism. It may not be the most effective thing any more than vegetarianism itself is, but it's part of some people's personal morality, and it's not hurting EA. Suppose people in fact spend $5 offsetting nonvegetarianism. If that $5 wasn't going to EA charity, it's not hurting EA for the person to give it to offsets instead of, I don't know, a new bike. If you criticize people for giving $5 in offsets, but not for any other non-charitable use of their money, then that's the fallacy in this comic: https://xkcd.com/871/

Let me put this another way. Suppose that somebody who feels bad about animal suffering is currently offsetting their meat intake, using money that they would not otherwise give to charity. What would you recommend to that person?

Recommending "stop offsetting and become vegetarian" results in a very significant decrease in their quality of life for the sake of gaining them an extra $5, which they spend on ice cream. Assuming they value not-being-vegetarian more than they value ice cream, this seems strictly worse.

Recommending "stop offsetting but don't become vegetarian" results in them donating $5 less to animal charities, buying an ice cream instead, and feeling a bit guilty. They feel worse (they prefer not feeling guilty to getting an ice cream), and animals suffer more. Again, this seems strictly worse.

The only thing that doesn't seem strictly worse is "stop offsetting and donate the $5 to a charity more effective than the animal charity you're giving it to now". But why should we be more concerned about making them give the money they're already using semi-efficiently to a more effective charity, as opposed to starting with the money they're spending on clothes or games or something, and having the money they're already spending pretty efficiently be the last thing we worry about redirecting?

Comment author: nino 13 January 2016 06:53:03PM 1 point

Aren't you kind of not disagreeing at all here?

The way I understand it, Scott claims that using your non-EA money for ethical offsetting is orthogonal to EA because you wouldn't have used that money for EA anyway, and Claire claims that EAs suggesting ethical offsetting to people as an EA-thing to do is antithetical to EA because it's not the most effective thing to do (with your EA money).

The two claims don't seem incompatible with each other, unless I'm missing something.

Comment author: Squark 13 January 2016 09:06:02AM 2 points

Your reply seems to be based on the premise that EA is some sort of a deontological duty to donate 10% of your income towards buying bednets. My interpretation of EA is very different. My perspective is that EA is about investing significant effort into optimizing the positive impact of your life on the world at large, roughly in the same sense that a startup founder invests significant effort into optimizing the future worth of their company (at least if they are a founder that stands a chance).

The deviation from imaginary “perfect altruism” is either due to having values other than improving the world or due to practical limitations of humans. In neither case do moral offsets offer much help. In the former case, the deciding factor is the importance of improving the world versus the importance of helping yourself and your close circle, which offsets completely fail to reflect. In the latter case, the deciding factor is what you can actually endure without losing productivity to an extent which is more significant than the gain. Again, moral offsets don’t reflect the relevant considerations.

Comment author: ScottA 13 January 2016 02:24:01PM 2 points

I gave the example of giving 10% to bed nets because that's an especially clear example of a division between charitable and non-charitable money - eg I have pledged to give 10% to charity, but the other 90% of my money goes to expenses and luxuries and there's no cost to EA to giving that to offsets instead. I know many other EAs work this way too.

If you believe this isn't enough, I think the best way to take it up with me is to suggest I raise it above 10%, say 20% or even 90%, rather than to deny that there's such a thing as charitable/non-charitable division at all. That way lies madness and mental breakdowns as you agonize over every purchase taking away money that you "should have" given to charity.

But if you're not working off a model where you have to agonize over everything, I'm not sure why you should agonize over offsets.

Comment author: Squark 15 January 2016 10:07:55AM 1 point

I don't think one should agonize over offsets. I think offsets are not a satisfactory solution to the problem of balancing resource spending on charitable vs. personal ends, since they don't reflect the correct considerations. If you admit X leads to mental breakdowns, then you should admit X is ruled out by purely consequentialist reasoning, without the need to bring in extra rules such as offsetting.

Comment author: kbog 07 April 2016 09:41:34PM 0 points

If you believe this isn't enough, I think the best way to take it up with me is to suggest I raise it above 10%, say 20% or even 90%, rather than to deny that there's such a thing as charitable/non-charitable division at all. That way lies madness and mental breakdowns as you agonize over every purchase taking away money that you "should have" given to charity.

No. Have you tried it? I have. It works fine for me.

Maybe some people are too addicted to modern comforts or maybe they can't handle the stress and pity they feel when thinking about charity. Sucks for them, but it's a pragmatic issue which doesn't directly change the moral issue.

Comment author: tomstocker 11 January 2016 07:48:18PM -2 points

An important point. Failing to take this into account comes across as morally narrow.

Comment author: Jeff_Kaufman 07 January 2016 04:44:27PM 4 points

Often EAs propose offsetting as a counterargument to "if something harms others you must not do it". So you show that offsetting is better than strict harm avoidance, and then you give reasons why you should instead focus on the most important things.

Offsetting isn't antithetical to EA; to my mind it's a step towards EA.

Comment author: JGWeissman 05 January 2016 08:43:29PM 3 points

I am not aware of EA associated people using ethical offsets beyond a small amount they don't consider part of their charity budget. Is there an "Ethical Offsetting is Great for EA" position you are arguing against?

Comment author: ClaireZabel 06 January 2016 05:13:58AM 1 point

It's not very common but I've heard it promoted among EAs several times in different EA circles.

Comment author: Julia_Wise 06 January 2016 04:30:02PM 2 points

Jeff has advocated this.

Comment author: Jeff_Kaufman 07 January 2016 06:05:51PM 8 points

I'm not advocating offsetting, but I don't have a good name for what I am trying to advocate. The idea is that you should prioritize the activities that have the best tradeoff between downside-for-you and upside-for-others. There are ways that this is similar to offsetting (if you can show that the harm caused by X is less than the harm caused by not donating $Y, then you should feel fine donating $Y instead of avoiding X), but in this framework you don't arrive at your donations by tallying up your harms and pricing them; instead you set out to do as much good as you can without making yourself miserable.

Comment author: JGWeissman 06 January 2016 03:42:11PM 1 point

I think that your argument is much more likely to discourage people making reasonable use of ethical offsets than anyone engaged in the problem you describe, mostly based on the proportion of such people that actually exist. As such, I think publishing such an argument, without the opposing view actually being promoted by anyone you care to mention, is irresponsible.

Comment author: ClaireZabel 06 January 2016 08:01:20PM 1 point

I wouldn't make this argument in a context where I don't think the vast majority of people reading it are EAs. It wouldn't make sense in a non-EA-dense context, since the argument is "offsetting isn't EA", not "offsetting is bad and no one should do it". Like I said, I think offsetting is better than nothing. The proportions are obviously very different in the EA community than outside it.

I don't want to mention people because a) they may not want their views made public b) it might embarrass them to name them in a context where I'm being critical of their views, and c) in about 2/3 of the cases I remember the conversation was in person, so I can't easily cite the argument anyway.

Comment author: JGWeissman 06 January 2016 10:23:42PM 1 point

The proportions are obviously very different in the EA community than outside it.

This is not at all obvious. All I hear about ethical offsets is at least EA adjacent.

I don't want to mention people because a) they may not want their views made public b) it might embarrass them to name them in a context where I'm being critical of their views, and c) in about 2/3 of the cases I remember the conversation was in person, so I can't easily cite the argument anyway.

Understanding all of this, I still say that it is net negative to publicly make your argument when there is nothing you can publicly cite as promoting what you argue against. If you notice such views in private communications, it may make sense to address them in those private communications.

Comment author: ClaireZabel 07 January 2016 06:11:48AM 0 points

This is not at all obvious. All I hear about ethical offsets is at least EA adjacent.

If it's all EAs or EA-adjacent people, then why would my post be "much more likely to discourage people making reasonable use of ethical offsets than anyone engaged in the problem [I] describe, mostly based on the proportion of such people that actually exist"? What do you mean by "reasonable use"? If it's mostly EAs doing ethical offsetting (it isn't), that makes it more likely that my post is helpful, since my post is more relevant for people with EA-ish goals.

Given that I notice people discussing offsetting in separate circles consistently, it makes sense to believe other people are having those conversations that I'm not aware of. In some of the cases, the conversation took place in a large group (between 20 and 30 people) where I didn't have the opportunity to express my views fully (nor were they as fully developed at that time).

There are relatively public conversations (Jeff's, as cited by Julia, comments on the SSC post, some others). I could cite sources (I can think of two more that are definitely online and not from private conversation). I am choosing not to because I'm not convinced it's a helpful exercise.

If you don't think people are interested in vegan offsetting, then why would telling them not to do it matter? It would probably not be impactful (harmful or helpful) if no one was interested in ethical offsetting to begin with.

Comment author: JGWeissman 07 January 2016 05:05:24PM 1 point

I consider "reasonable use" to mean spending a small amount of money on offsets to purchase mental health in the form of not feeling guilty of small harms one might cause, where these offsets are not considered an EA activity, and one who considers themselves a part of EA would be spending more money, time, effort, whatever resource on something they chose for efficiency.

All advocacy for ethical offsets I have seen has been compatible with this reasonable use, and I don't think anyone is doing the unreasonable thing of calling ethical offsets an EA activity or focusing their EA efforts on them, or saying anyone should do that.

Jeff's article does not talk about ethical offsets. It says be careful about trading your happiness inefficiently for small gains in general utility, not anything about paying offsets instead.

I could cite sources (I can think of two more that are definitely online and not from private conversation). I am choosing not to because I'm not convinced it's a helpful exercise.

The fact that you don't think citing these sources is a helpful exercise is evidence that publicly arguing against them is also not a helpful exercise.

If you don't think people are interested in vegan offsetting, then why would telling them not to do it matter?

I think people are interested in reasonable offsetting, not offsetting as a primary activity. I think I have been clear about this.

I don't care very much specifically about vegan offsets. I care a lot about the general category of EAs being able to do small sub-optimal things that enable them to focus more on their more optimal efforts, and to sustain that focus long term.

Comment author: Jeff_Kaufman 09 January 2016 10:07:17PM 0 points

be careful about trading your happiness inefficiently for small gains in general utility

Yes!

Comment author: RyanCarey 05 January 2016 06:57:10PM 8 points

I sympathise with the point you make with this post.

However, isn't it antithetical to consequentialism, rather than EA? EAs can have prohibitions against causing harms to groups of people.

How does this speak to people who use rule-based ethics that obliges them to investigate the benefit of their charitable gifts?

Comment author: Linch 05 January 2016 07:16:54PM 7 points

This would make sense, except that pretty much every argument for offsets that I've seen comes from consequentialists or consequentialist-aligned people.

Offsetting doesn't seem very virtuous, and deontologists generally have a poor model for positive rights/obligations.

Comment author: kbog 05 January 2016 07:56:15PM 3 points

I don't think most nonconsequentialist theories provide a basis to accept offsetting either though. But I'd have to see some people make a positive case for it to know where they're coming from.

Comment author: casebash 06 January 2016 12:27:38AM 0 points

It seems to depend on the harm. People accept off-setting for minor harms, but not for major ones.

Comment author: tomstocker 11 January 2016 07:55:23PM 1 point

I think they're consistent with a Kantian perspective. Also with a risk-averse consequentialist, and with someone who likes to take responsibility for the consequences of their actions in a like-for-like manner for ethical-aesthetic reasons.

Comment author: amc 05 January 2016 09:41:07PM 4 points

Notice that the narrowest possible offset is avoiding an action. This perfectly undoes the harm one would have done by taking the action. Every time I stop myself from doing harm I can think of myself as buying an offset of the harm I would have done for the price it cost me to avoid it.

I think your arguments against offsetting apply to all actions. The conclusion would be to never avoid doing harm unless it's the cheapest way to help.

Comment author: ClaireZabel 06 January 2016 05:19:23AM 1 point

Yep. Except I think this would be most of the time, since people tend to dislike it when you harm others in big or unusual ways, and doing so is often illegal. So at the very least, you frequently take hits to your reputation (and, theoretically, the reputation of EA) and to your effectiveness when you cause big, unusual harms.

Comment author: kbog 05 January 2016 07:46:28PM 3 points

Thanks for this post.

It's like we came full circle from people donating minimal amounts of money to charity to relieve their guilt over their perpetuation of global injustice, to people working very hard and doing everything they can to fight global injustice, to people donating minimal amounts of money to relieve their guilt over their perpetuation of global injustice.

Just accept it. Some of your actions will harm others no matter what you do. The only way to make it worthwhile is to go out there and achieve lots of valuable things. Be confident and proud of what you accomplish and you can accept the harm that you will have to commit.

Comment author: Gleb_T 07 January 2016 03:11:31AM 0 points

Just accept it. Some of your actions will harm others no matter what you do. The only way to make it worthwhile is to go out there and achieve lots of valuable things. Be confident and proud of what you accomplish and you can accept the harm that you will have to commit.

+1

Comment author: Telofy 14 January 2016 10:56:15PM 1 point

Thank you for this thought-provoking article! We want to make it the topic of our next meetup, so I’ve tried to clarify what my new position should be.

Your first two points are easily conceded—in my view, everyone should direct their offsetting donations to whatever charity they consider most effective. Your third point is the most interesting.

Nino already married your and Scott’s positions, but I find it more useful to structure my thoughts in a list of pros and cons anyway.

On the pro side I see the following arguments:

  1. Contrary to Claire’s point, I think offsetting also questions the act-omission distinction because instead of forgoing something, one engages in proactive activism. Having done that, it will be harder to later argue that doing good is supererogatory, because it would be inconsistent with one’s past behavior.
  2. Offsetting can be used as a starting point to extend the circle of compassion in that a person could be brought to care enough about the harm inflicted by friends and family members to offset for them too. (But I haven’t seen this implemented.)
  3. Charities that advocate for nonhuman animals are probably the most commonly chosen reference class, and they are highly funding constrained, possibly more than they are talent constrained, so that an additional regular donor may be worth many additional vegans.
  4. Outside EA there are many nonveg*ns that are compassionate and want to reduce suffering but find that for them or in their context, veganism would be hard. Instead of resorting to the defensiveness and denigration discussed at the last meetup, they can join in with highly impactful donations.
  5. Offsetting can counter the cliché that veg*ns are dogmatic Siths that only deal in absolutes.
  6. Bridging the schism between veg*ns and nonveg*ns can help make advocacy for farmed animals a universally accepted movement, which would greatly simplify political advocacy.

On the con side I see the following arguments:

  1. Offsetting also bolsters the act-omission distinction because it fails to provide incentives to scale one’s proactive activism beyond the low level of harm the average person inflicts, so that the offsetter will fall far short of their potential. (Unless they also offset for friends and family members or even larger circles.)
  2. Offsetting may incur moral licensing when the satisfaction a person gains from “having donated” doesn’t scale in proportion with the size of the donation, so that a small donation makes further donations unlikely to the same extent that a large donation would have.
  3. Advantage 3 only holds for our current, anti-inductive state of the system. In a decade or two there will hopefully come a point when the suffering of farmed animals has been reduced enough to make offsetting much more expensive. At that point, an additional veg*n will be more valuable than an additional offsetter, given what the latter can be expected to donate. In short, success in spreading offsetting values diminishes their value. Core EA ideas don’t suffer from that problem.
  4. Offsetting, when described in terms of offsetting, is only compatible with a subclass of consequentialist moralities, so that its impact is limited or the framing should be reconsidered.
  5. Offsetting may signal a readiness to defect (in such situations as the prisoner’s dilemma or the stag hunt), which might interfere with the offsetter’s chances for trade with agents that are not value aligned.
  6. Offsetting when described in terms of offsetting may in turn introduce (or aggravate) the schism between deontological and consequentialist veg*ns.
  7. When offsetting funds are taken from a person’s EA budget, it is at best meaningless because the money would’ve been donated effectively anyway, and likely harmful if the reference class is chosen to exclude the most effective giving opportunities.
  8. When offsetting becomes associated with EA, it may increase the perceived weirdness of EA, making it harder for people to associate with more important ideas of EA.

Some of the disadvantages only limit the scope of offsetting, others could be avoided with different rhetoric. What other pros or cons did I forget?

Comment author: ClaireZabel 15 January 2016 12:05:14AM 1 point

Cool, this mostly seems right.

I think the harmfulness of offsetting's focus on collectively anthropogenic sources of suffering is still being underestimated in these conversations. (I'm using "collectively anthropogenic" because there are potential sources of badness, like UFAI, that are anthropogenic but caused by only a few people, so the idea of offsetting would be useless to spread to most people as a way to address the problem of UFAI. Also, offsetting the harm done by UFAI would be, uh, tricky.) I think offsetting might even reinforce a non-interventionist mindset that could prove extremely harmful for addressing problems like wild animal suffering.

One good aspect of offsetting that I think I initially underestimated is the way it can be used as a psychological tool for beginning to alieve that a cause area matters. For example, I can imagine an individual who is beginning to suspect animal suffering is important, but finds the idea of vegetarianism or veganism daunting, shies away from it, and thus doesn't want to think more about animal suffering. For them, offsetting could be a good bridge step. I don't think this conflicts with anything I said, but I don't want people to feel like it's shameful to use this tool.

I'd want to add on to:

Pro 3: If you're just offsetting, it's worth only as much as one additional vegan (if your numbers are right). I haven't seen evidence that ethical offsetting leads to big regular donors. It may, and if you just meant to bring up the possibility that seems reasonable.

Pro 4: People who eat animal products can donate to animal charities even if it's not offsetting. That's great! But you don't need offsetting to introduce that possibility. I think offsetting harmfully frames the discussion around them "making up" for their behavior, instead of just making large donations that help lots of animals. Many vegetarians enthusiastically make large donations to animal charities, which is wonderful, without worrying about offsetting. I don't know what happened at your last meetup, but I think it's awesome when nonvegans donate to animal charities.

Pro 6: I'm not sure how offsetting helps bridge this schism well. I can imagine some arguments about how it would help, and others about how it would hurt.

Con 5: I'm not sure how offsetting signals a willingness to defect. Could you explain that more?

Comment author: Telofy 15 January 2016 10:51:21AM 0 points

Collectively anthropogenic sources of suffering: True, and that class of suffering is already broad. I wouldn’t expect people to extend their circle of compassion to even just the harm caused by all of humanity just via the idea of offsetting. The friends and family scenario is probably already the limit.

Psychological tool: Indeed. This tool is also one that can be employed without using the term “offsetting,” like “If veganism is too hard for you at this point, just reduce chicken, eggs, and fish. You can also donate to one of ACE’s top charities. That might seem too easy, but at the moment a donation of just $50 allows you to do as much good for the animals as being vegan for a year.” (Well, basically Ben’s point.)

A related problem is figuring out whether the supplements I buy are overpriced compared to an animal product plus top charity donation counterfactual. I wonder if I can just straight compare the prices or whether there are any multipliers I’m overlooking.

About pro 3: Yes, that’s what I meant, the average regular donor compared to the average vegan minus any donations they might make.

About pro 4: The framing we’ve come up with is one for older people who have a harder time changing their habits, namely that they’re donating to create a better society for the next generation. Offsetting isn’t mentioned, but you can still get nonveg*ns donating.

About pro 6: The topic of our last meetup was the threat of unfavorable social moral comparison: that some people trivialize or denigrate people, or the behavior of people, whom they perceive as being more moral. I seem to be well filter-bubbled against such people, but studies have found that a lot of nonveg*ns ascribe various nasty terms to veg*ns.

When animal advocacy has to fight against such strong forces as people trying to protect their identities and self-image against it, it’ll remain an uphill battle and be labeled as “controversial,” whereas, when we can invite a wide range of people into the movement, we may not be producing the best activists, but we’ll be reducing opposition. (The reducetarian movement is working on that too.) How might offsetting hurt this exact cause?

About con 5: Not compared to nonveg*ns but compared to deontological veg*ns. Then again a given nonveg*n could be assumed to be nonveg*n out of ignorance, while the same could not be assumed about an offsetter. When you’re offsetting you could be seen as defecting against some animals to save other animals (except that nonhuman animals are not really “agenty”).

For example, when a profit-oriented employer pays a person to deliver some pointless advertisement to hundreds of households, and the person does that in order to donate a portion to a charity the employer doesn’t care about, then this deal might work just fine. But when the employer sees that a potential employee has a history of defecting in such arrangements to further their moral goal, the employer may imagine that the potential employee will sell the advertisement to a company that buys scrap paper to donate even more and save time that they can use to swindle several advertisement companies in parallel. So it might hurt a person’s–or more likely, a group’s or movement’s–reputation.

Comment author: RayTaylor 08 December 2016 06:51:02PM 0 points

did this happen at the MeetUp? outcomes?

Comment author: Telofy 10 December 2016 11:27:05AM 0 points

Oops, too long ago; I don’t remember. But I don’t think I updated any more that evening. Not entirely sure.

Comment author: Denise_Melchin 05 January 2016 07:28:09PM 1 point

"I've previously discussed my concerns about the obstacles to changing one's mind about cause prioritization, and I can imagine ethical offsetting at the cause area level being used to remind oneself about various causes of suffering in the world and the organizations working to stop them. This could make it easier to change one’s mind about what’s most effective. It seems somewhat plausible that offsetting would help make the community better at updating and better informed."

This has roughly been my reasoning for considering small donations to animal suffering and climate change as cause areas. (Though I haven't done so yet.) I think it helps people keep an open mind, and I'm therefore happy to see them offsetting their 'wrong' behaviour.

I agree with Ryan's and Linch's comments as well.

Comment author: RayTaylor 08 December 2016 06:49:21PM 0 points

Good points, but I would go further, having worked in this field with both meteorologists and politicians.

Individual offsets are easier to do than behaviour change, so they're a handy sop to the guilty consciences of middle-class people who want to keep driving and flying; perfect for self-deception.

More here: www.rationalreflection.net/can-we-offset-immorality

Thus offsets at the individual and local level = advanced greenwash, wrapped up as an environmental project.

In fact, most offsets are deeply flawed, and many, particularly renewable energy projects (which may help with health and education and have many other justifications), lead to INCREASED emissions, as they tend to lead to purchases of electrical goods and so a huge increase in energy use, locally and in the countries manufacturing the goods, even with reducing carbon intensity.

The best destinations for carbon funds probably include wetland protection in semi-arid regions (see my own www.theglobalcoolingproject.com) or climate campaign groups eg EIA for their HCFC work on the Montreal/Kigali protocols or anyone working on aircraft emissions or in India/China or on combating denialism or bridging political divides (eg George Marshall from Climate Outreach).

Comment author: JoshYou 06 January 2016 02:50:11AM 0 points

"And as Scott Alexander points out, offsetting could lead people to think it’s acceptable to do big harmful things as long as they offset them."

I think it would be helpful to distinguish between the claims (1) "given that one has imposed some harm, one is obligated to offset it" and (2) "any imposition of harm is justified if it is offset." This article argues against the first claim, while Scott argues that the second one seems false. It seems pretty easy to imagine someone accepting (1) and rejecting (2), and I'd be pretty skeptical of a causal connection between promoting (1) and more people believing in (2). The reverse seems just as (un)likely: "hey, if I don't have to offset my harms, maybe causing harm doesn't really matter to begin with."

Comment author: ClaireZabel 06 January 2016 05:27:36AM 1 point

I don't think the causal link between (1) and (2) is weak at all, but agree that the reverse is also likely, which is why I mentioned it: "the argument that we should focus on doing lots of good rather than fixing harms we cause could drive destructive thoughtlessness about personal behavior, so I’m wary about making it too frequently."

Scott discusses claim (2) in his section III (below)

"The second troublesome case is a little more gruesome.

Current estimates suggest that $3340 worth of donations to global health causes saves, on average, one life.

Let us be excruciatingly cautious and include a two-order-of-magnitude margin of error. At $334,000, we are super duper sure we are saving at least one life.

So. Say I’m a millionaire with a spare $334,000, and there’s a guy I really don’t like…

Okay, fine. Get the irrelevant objections out of the way first and establish the least convenient possible world. I’m a criminal mastermind, it’ll be the perfect crime, and there’s zero chance I’ll go to jail. I can make it look completely natural, like a heart attack or something, so I’m not going to terrorize the city or waste police time and resources. The guy’s not supporting a family and doesn’t have any friends who will be heartbroken at his death. There’s no political aspect to my grudge, so this isn’t going to silence the enemies of the rich or anything like that. I myself have a terminal disease, and so the damage that I inflict upon my own soul with the act – or however it is Leah always phrases it – will perish with me immediately afterwards. There is no God, or if there is one He respects ethics offsets when you get to the Pearly Gates.

Or you know what? Don’t get the irrelevant objections out of the way. We can offset those too. The police will waste a lot of time investigating the murder? Maybe I’m very rich and I can make a big anonymous donation to the local police force that will more than compensate them for their trouble and allow them to hire extra officers to take up the slack. The local citizens will be scared there’s a killer on the loose? They’ll forget all about it once they learn taxes have been cut to zero percent thanks to an anonymous donation to the city government from a local tycoon.

Even what seems to me the most desperate and problematic objection – that maybe the malarial Africans saved by global health charities have lives that are in some qualitative way just not as valuable as those of happy First World citizens contributing to the global economy – can be fixed. If I’ve got enough money, a few hundred thousand to a million ought to be able to save the life of a local person in no way distinguishable from my victim. Heck, since this is a hypothetical problem and I have infinite money, why not save ten local people?

The best I can do here is to say that I am crossing a Schelling fence which might also be crossed by people who will be less scrupulous in making sure their offsets are in order. But perhaps I could offset that too. Also, we could assume I will never tell anybody. Also, anyone can just go murder someone right now without offsetting, so we’re not exactly talking about a big temptation for the unscrupulous." (http://slatestarcodex.com/2015/01/04/ethics-offsets/)

Comment author: Gregory_Lewis 09 January 2016 11:07:13PM 0 points

I agree with this, and have written similarly here:

There is more that can be said. An objector could turn the screw by offering caveats to the thought experiment (or just sufficient offset) that the harm of killing is genuinely outweighed by the benefits – yet, they insist, doing so would still be wrong...

I think consequentialists should ultimately yield – these are costs to the theory, and the best that can be hoped for is to ameliorate them (appealing to the unreliability of one's intuitions in such a sufficiently caveated case may be one approach). It would be remarkable if a moral theory on reflection accorded with all of our moral intuitions all the time. Instead of defending the difficultly defensible, it is perhaps better to argue that the balance of benefits and costs of consequentialism does better than that of other theories, notwithstanding its poor performance in these particular cases.

Comment author: casebash 06 January 2016 12:34:41AM 0 points

Offsetting can also be viewed as deciding to co-operate in a tragedy-of-the-commons-like situation. If a large enough proportion of the population/businesses decided to offset their emissions, then presumably global warming would cease to be an issue. This would cost everyone a small amount individually, but the collective gain would be large. Perhaps the money could do more good elsewhere, but defecting simply encourages more people to defect as well, possibly causing the whole deal to collapse.

Not that I offset my carbon, just an interesting thought.

Comment author: ClaireZabel 06 January 2016 05:30:31AM 1 point

If everyone "defected" by donating to the most effective charity instead of offsetting, the whole deal wouldn't collapse. The world would be a better place.

So if the problem is that people are copycats so doing a thing encourages other people to do the same, it's better to donate more to an effective charity than to offset, since when people copy you doing that it will make the world even better.

Comment author: casebash 09 January 2016 11:21:46AM 0 points

The worry is that enough people will defect from the current social norms that they break down, but not enough will defect to create a new norm of donating to effective charities instead.

Comment author: Jeff_Kaufman 09 January 2016 10:08:50PM 1 point

Neither an "offset your harm" norm nor a "donate to effective charities" norm is especially well established in the general population, though. Your argument sounds like it's based on the former being widespread?

Comment author: casebash 10 January 2016 02:36:22PM 0 points

Global warming offsets are pretty big.

Comment author: Jeff_Kaufman 11 January 2016 03:23:24PM 2 points

The idea of global warming offsets is pretty widespread, but I don't think a norm of buying them is. Specifically, I don't think either that they're very widely bought or even seen as something you're supposed to buy.

(My impression is that it's catching on as a norm among sustainably minded companies, though.)