Toby_Ord

2514 karma · Joined Aug 2014

Comments: 119

Thanks Danny! 

For those who are unaware, benefit-cost analysis (BCA) is the main form of quantitative cost-effectiveness evaluation in the US, UK, and beyond. Two of its biggest methodological problems are that it counts the value of a dollar equally for all people (which leads to valuing people themselves in proportion to their income) and that it applies a high, constant discount rate. So action on both of these fronts is a big improvement to quantitative priority setting in the US!
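To see why a high, constant discount rate is so consequential, here is a minimal sketch of standard exponential discounting. The `present_value` helper and the 7% rate are illustrative assumptions (7% is one of the rates historically used in US federal BCA guidance), not figures from the comment above:

```python
# Present value of a benefit received t years from now, under a
# constant annual discount rate r (standard exponential discounting).
def present_value(benefit, r, t):
    return benefit / (1 + r) ** t

# At a constant 7% rate, a benefit arriving 100 years from now is
# discounted to roughly a tenth of a percent of its face value --
# so distant-future benefits barely register in the analysis.
pv = present_value(1.0, 0.07, 100)
print(pv)  # ~0.00115
```

Under a constant rate, this devaluation compounds without limit: the same calculation at 500 years discounts a benefit by a factor of more than 10^14, which is why the choice between a constant and a declining long-run rate matters so much for policies with intergenerational effects.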

Yes, I completely agree. When I was exploring questions about wild animal welfare almost 20 years ago, I was very surprised to see how the idea of thinking about individual animals' lives was so foreign to the field.

While I only had time to quickly read this piece, I agree with much of what I read and think it is a great contribution to the literature. 

To clarify my own view, I think animals matter a great deal — now and over the longterm future. The focus on humanity in my work is primarily because we are the only moral agents we know of. In philosophical terms, this means that humanity has immense instrumental value. If we die, then as far as we know, there is nothing at all striving to shape the future of Earth (or our whole galaxy) towards what is good or just. It is with humanity in this role as moral agent, rather than moral patient, that I think the case for avoiding existential risk to humanity is at its most powerful. I don't know whether the lion's share of intrinsic value we create over the longterm future will be in the form of human flourishing, animal flourishing, or something else, and welcome much more discussion on that, with your paper being a good example.

As well as avoiding existential risk, I think that work to avoid locking in bad values or practices could also be very important on longtermist grounds, and that values connected to animals are good candidates.

I focused on what could happen to animal species rather than individual animals in those particular passages, but much of my thinking on animal ethics is in terms of individuals. 

Like most people, I'm not sold on the idea that wild animal suffering makes the biosphere have negative overall value, such that ecosystem destruction would be good instead of bad (and so forth). But nor am I claiming that we should introduce animals to other planets. My point in those passages was to sketch the magnitude of the kinds of things humanity could achieve in terms of the environment and animal life. What to do with that power raises very big and very uncertain questions. My main claims were that we should protect our potential, and then think very long and hard about how best to fulfil it.

Thanks Shakeel! This is an excellent post. There have been so many big wins in the EA community that it can be hard to keep them all in mind. We all strive to see the big picture, but sometimes even for us, the latest drama can quickly drive the big successes from our memory. So summaries like this are very useful.

Indeed, summaries of the whole period of EA achievements, or summaries that include setbacks as well as wins, would also be good.

Very exciting news! Welcome Zach — this is making me feel optimistic about EA in 2024.

Thanks so much for writing this, and even more for all you've done to help those less fortunate than yourself.

I'm glad I did that Daily Politics spot! It was very hard to tell in the early days how impactful media work was (and it still is!) so examples like this are very interesting.

Thanks so much for all your hard work on CEA/EV over the years. You have been such a driving force in developing the ideas, the community, and the institutions we needed to help make it all work well. Much of that work happened through CEA/EV, and before that through Giving What We Can and 80,000 Hours before we'd set up CEA to house them, so this is definitely in some sense the end of an era for you (and for EV). But a lot of your intellectual work and vision has always transcended the particular organisations, and I'm really looking forward to much more of that to come!

Oh, I definitely agree that the guilt narrative has some truth to it too, and that the final position must be some mix of the two, with somewhere between a 10/90 and 90/10 split. But I'd definitely been neglecting the 'we got used' narrative, and had assumed others were too (though aprilsun's comment suggests I might be incorrect about that).

I'd add that for different questions related to the future of EA, the different narratives change their mix. For example, the 'we got used' narrative is at its most relevant if asking about 'all EAs except Sam'. But if asking about whether it is good to grow EA, it is relevant that we may get more Sams. And if asking 'how much good or bad do people who associate with EA do?' the 'guilt' narrative increases in importance. 

This is a very interesting take, and very well expressed. You could well be right that the narrative that 'we got used' is the most correct simple summary for EAs/EA. And I definitely agree that it is an underrated narrative. There could even be psychological reasons for that (EAs being more prone to guilt than to embarrassment?).

I note that even if P(FTX exists | EA exists) were quite a bit higher than P(FTX exists | ~EA exists), that could be compatible with your suggested narrative of EAs being primarily marks/victims. To reuse your example, if you were the only person the perpetrator of the heist could con into lending their car to act as a getaway vehicle, then that would make P(Heist happens | you lend the car) quite a bit higher than P(Heist happens | you refuse), but you would still be primarily a mark or (minor) victim of the crime, rather than primarily one of the parties responsible for it.

Nick is being so characteristically modest in his descriptions of his role here. He was involved in EA right from the start — one of the members of Giving What We Can at launch in 2009 — and he soon started running our first international chapter at Rutgers, before becoming our director of research. He contributed greatly to the early theory of effective altruism and, along with Will and me, was one of the three founding trustees of the Centre for Effective Altruism. I had the great pleasure of working with him in person for a while at Oxford University, before he moved back to the States to join Open Philanthropy. He was always thoughtful, modest, and kind. I'm excited to see what he does next.
