Comment author: SoerenMind  (EA Profile) 21 July 2017 04:03:08PM 6 points [-]

As far as I can see that's just functionalism / physicalism plus moral anti-realism which are both well-respected. But as philosophy of mind and moral philosophy are separate fields you won't see much discussion of the intersection of these views. Completely agreed if you do assume the position is wrong.

Comment author: kokotajlod 22 July 2017 09:47:29PM 0 points [-]

SoerenMind: It's wayyy more than just functionalism/physicalism plus moral anti-realism. There are tons of people who hold both views, and only a tiny fraction of them are negative utilitarians or anything close. In fact I'd bet it's somewhat unusual for any sort of moral anti-realist to be any sort of utilitarian.

Comment author: Brian_Tomasik 21 July 2017 11:08:37PM 7 points [-]

Is it just something like "preventing suffering is the most important thing to work on (and the disjunction of assumptions that can lead to this conclusion)"?

I also don't want to speak for FRI as a whole, but yeah, I think it's safe to say that a main thing that makes FRI unique is its suffering focus.

My high confidence in suffering-focused values results from moral anti-realism generally (or, if moral realism is true, then my unconcern for the moral truth). I don't think consciousness anti-realism plays a big role because I would still be suffering-focused even if qualia were "real". My suffering focus is ultimately driven by the visceral feeling that extreme suffering is so severe that nothing else compares in importance. Theoretical arguments take a back seat to this conviction.

Comment author: kokotajlod 22 July 2017 09:44:44PM 2 points [-]

Interesting. I'm a moral anti-realist who also focuses on suffering, but not to the extent that you do (e.g. not worrying that much about suffering at the level of fundamental physics). I would have predicted that theoretical arguments, not any sort of visceral feeling, were what convinced you to care about fundamental-physics suffering.

Comment author: BenHoffman 21 May 2017 11:26:35PM *  2 points [-]

Regrettably, we were not able to choose shortlisted organisations as planned. My original intention was that we would choose organisations in a systematic, principled way, shortlisting those which had highest expected impact given our evidence by the time of the shortlist deadline. This proved too difficult, however, so we resorted to choosing the shortlist based on a mixture of our hunches about expected impact and the intellectual value of finding out more about an organisation and comparing it to the others.

[...]

Later, we realised that understanding the impact of the Good Food Institute was too difficult, so we replaced it with Animal Charity Evaluators on our shortlist. Animal Charity Evaluators finds advocates for highly effective opportunities to improve the lives of animals.

If quantitative models were used for these decisions I'd be interested in seeing them.

Comment author: kokotajlod 22 May 2017 04:40:54PM 5 points [-]

That second quote in particular seems to be a good example of what some might call measurability bias. Understandable, of course--it's hard to give out a prize on the basis of raw hunches--but nevertheless we should work towards finding ways to avoid it.

Kudos to OPP for being so transparent in their thought process though!

Comment author: kokotajlod 30 April 2017 03:07:14PM 10 points [-]

Thanks for this! Even within EA I think there's a need for more brainstorming of different cause areas, and you've presented a well-researched case for this one. I am tentatively convinced!

What do you think is the best counterargument? That is, what's the best reason to think that maybe this isn't as tractable/neglected/important as you think?

I think the biggest concern (for me) is whether or not the research on the matter is solid. Does physical punishment cause worse outcomes, or does it merely correlate with them? Etc. This is important both for determining how serious the problem is and for determining how tractable it is (because without research to back up our claims, it will be hard to convince anyone to change). I haven't looked into it myself, of course, but I'm glad you have.

Comment author: John_Maxwell_IV 21 March 2017 11:57:05PM 1 point [-]

Can you elaborate on this?

Comment author: kokotajlod 25 March 2017 04:50:54PM 0 points [-]

Sure, sorry for the delay.

The ways that I envision suffering potentially happening in the future are these:

--People deciding that obeying the law and respecting the sovereignty of other nations is more important than preventing the suffering of people inside them
--People deciding that doing scientific research (simulations are an example of this) is well worth the suffering of the people and animals experimented on
--People deciding that the insults and microaggressions that affect some groups are not as bad as the inefficiencies that come from preventing them
--People deciding that it's better to have a few lives without suffering than many, many lives with suffering (even when the many lives are all still, all things considered, good)
--People deciding that AI systems should be designed in ways that make them suffer in their daily jobs, because it's most efficient that way.

Utilitarianism comes down pretty strongly in favor of these decisions, at least in many cases. My guess is that in post-scarcity conditions, ordinary people will be more inclined to resist these decisions than utilitarians will. The big exception is the sovereignty case; there I think utilitarians would cause less suffering than average humans would. But those cases will only happen for a decade or so and will be relatively small-scale.

Comment author: kokotajlod 21 March 2017 08:52:15PM 3 points [-]

"Bob: agree, to make lots of suffering, it needs pretty human-like utility functions that lead to simulations or making many sentient beings."

I'm pretty sure this is false. Superintelligent singletons that don't specifically disvalue suffering will make lots of it (relative to the current amount, i.e. one planetful) in pursuit of other ends. (They'll make ancestor simulations, for example, for a variety of reasons.) The amount of suffering they'll make will be far less than the theoretical maximum, but far more than what e.g. classical utilitarians would do.

If you disagree, I'd love to hear that you do--because I'm thinking about writing a paper on this anyway, it will help to know that people are interested in the topic.

Comment author: kokotajlod 21 March 2017 08:53:51PM 1 point [-]

And I think normal humans, if given command of the future, would make even less suffering than classical utilitarians.

Anyone have thoughts/response to this critique of Effective Animal Altruism?

I recently came across this lengthy and harsh critique of ACE in particular, animal-focused EA more generally, and to some extent EA as a whole: https://medium.com/@harrisonnathan/the-actual-number-is-almost-surely-higher-92c908f36517#.kq58x4oor I don't know what to think about it, since I don't know much about ACE. I'm sure some of the concerns it raises...
Comment author: kokotajlod 14 December 2016 08:12:18PM 4 points [-]

This is great. Please do it again next year.

Comment author: kokotajlod 30 November 2016 01:40:52AM 2 points [-]

I agree with Owen. I don't have anything to add to what's been said, other than a response to the strongest reason against having that norm: It only conflicts with the norm of "do what's most effective" if it truly is more effective to donate to one's own employer. But because of the signaling/weirdness reasons (and, yes, the bias) that doesn't seem to be true. We're sophisticated enough that we can have a hierarchy of norms, with "do what's most effective" at the top and "don't donate to your employer unless there's a special circumstance" as a lower norm--as a helpful heuristic/guideline.

How much money is saved from taxes by foregoing salary? If it's at least 20% of the donation then I might change my mind.
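To make the question above concrete, here is a minimal sketch of the arithmetic, assuming a single hypothetical flat marginal tax rate and that the donation would not itself be tax-deductible (both assumptions are mine, not the commenter's; real tax treatment varies by jurisdiction and deductibility rules):

```python
# Sketch: compare foregoing $X of salary (employer keeps the full $X)
# against taking the salary, paying tax at marginal rate t, and donating
# what remains, i.e. $X * (1 - t). Under these assumptions the fraction
# of the gift "saved from taxes" by foregoing salary is just t.

def donation_after_tax(salary_foregone: float, marginal_rate: float) -> float:
    """Amount left to donate if the salary is taken and taxed first."""
    return salary_foregone * (1 - marginal_rate)

def tax_savings_fraction(salary_foregone: float, marginal_rate: float) -> float:
    """Fraction of the potential gift saved by foregoing salary instead."""
    kept = donation_after_tax(salary_foregone, marginal_rate)
    return (salary_foregone - kept) / salary_foregone

# With a hypothetical 25% marginal rate, 25% of the gift is saved --
# above the 20% threshold mentioned above.
print(tax_savings_fraction(10_000, 0.25))
```

If the donation would have been fully tax-deductible anyway, the two routes come out roughly even under this simple model, which is why the deductibility assumption matters.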
