Comment author: zdgroff 21 June 2018 02:15:55AM 2 points

This is awesome. I suppose this is something anyone could fix, but I'm curious why it deviates from the normal EA division of causes by listing animal welfare as a subcause. Animal welfare already has a number of categories under it, not all of them policy-related, so it seems like it should be its own category.

Comment author: zdgroff 15 June 2018 07:49:43PM 6 points

I think ACE is acting responsibly. These criticisms strike me as off, particularly the one involving cute/fuzzy animals. I think some in EA are put off by the use of emotions as a tactic, and that aversion is going to give us a completely warped picture of human behavior and a correspondingly warped approach to changing it.

Comment author: zdgroff 23 April 2018 11:47:47PM 5 points

This is so necessary and helpful. It's a significant update for me toward a donor-advised fund (and it also reinforces my current practice of donating regularly rather than saving to donate).

To me, this data suggests that the EA community may have made some mistakes in modeling our decisions as more rational than they are. Specifically, whether broad career capital makes sense depends a lot on whether we are rational optimizers or whether we need commitment devices. Maybe we all need more of a behavioral econ update.

Comment author: zdgroff 06 March 2018 12:20:26AM 2 points

I'm excited to see what happens here! Will you be comparing the different areas and applying lessons learned from each to the others? I think lessons from poverty may in some cases translate to animal advocacy and vice versa (and there may be some potential for cross-pollination with growing EA or other causes).

Comment author: zdgroff 01 March 2018 04:04:52PM 1 point

I agree with Benito and others that this post would benefit from a deeper engagement with already-stated OPP policy (see, for instance, this recent interview with Holden Karnofsky: https://soundcloud.com/80000-hours/21-holden-karnofsky-open-philanthropy), but I do think it is good to have this conversation.

There are definitely arguments for OPP's positions on the problems with academia, and I think taking a different approach may be worthwhile. At the same time, I am a bit confused by the lack of written explanations and by the opposition to panels. There are ways to create more individualized incentives within panels. Re: written explanations, while it does make sense to avoid being overly pressured by public opinion, having to offer some defense of a decision probably improves the decision itself. An organization can simply choose to ignore public responses to its written grant justification and listen only to experts' responses. I would think some critical engagement would be beneficial.

Comment author: Jacy_Reese 21 February 2018 01:54:31PM 2 points

That makes sense. If I were convinced hedonium/dolorium dominated to a very large degree, and that hedonium was as good as dolorium is bad, I would probably think the far future was at least moderately +EV.

Comment author: zdgroff 23 February 2018 07:50:20PM 1 point

Isn't hedonium inherently as good as dolorium is bad? If it's not, can't we just normalize and then treat them as the same? I don't understand the point of saying there will be more hedonium than dolorium in the future but that the dolorium will matter more. They're vague, made-up quantities, so can't we just set the units so that "more hedonium than dolorium" implies "more good than bad"?

Comment author: zdgroff 23 February 2018 07:00:34PM 1 point

This is really fascinating. I think this is largely right and an interesting intellectual puzzle on top of it. Two comments:

1) I would think mission hedging is not as well suited to AI safety as it is to climate or animal activism, because AI safety is not directly focused on opposing an industry. As has been noted elsewhere on this forum, AI safety advocates are not trying to slow down AI development, and in many cases they think faster development might even be helpful, in which case mission hedging is counterproductive. I could also imagine a scenario in which AI problems weigh down a company's stock: maybe a big scandal occurs around AI that foreshadows future problems with AGI and embarrasses AI developers.

2) As kbog notes, it isn't clear that growth in an industry one opposes makes the marginal dollar spent against it more effective. Even though an industry's growth increases the scale of the problem, it might lower the problem's tractability or neglectedness by a greater amount.

Comment author: zdgroff 23 February 2018 01:45:48PM 4 points

This matches what I saw in Ghana when I lived there for a few months. Interestingly, I lived with someone in the agriculture corps, an initiative by the U.S. agriculture industry to promote its image by helping developing countries. Sadly, I think it probably has the effect of putting those countries more firmly on a factory-farming path.

The ag corps member would always talk about how terrible animal husbandry in Ghana was, and I was pretty shocked. After all, at least the chickens, goats, and sheep roamed freely. She acknowledged that but said that the animals eat garbage and that their slaughter is frequently botched. To me, eating garbage seemed like a very small harm next to living your life in a tiny stall or an extremely crowded barn. By the end of the discussion, I realized she thought Ghanaian animal husbandry was worse than that in the U.S. because her college ag classes had equated good animal husbandry with standard industrial practices, so sloppy practices, even if they allowed for more freedom, were worse in her view.

Comment author: zdgroff 07 February 2018 08:12:38PM 0 points

This is an excellent and thorough analysis.

Comment author: zdgroff 07 February 2018 07:55:25PM 2 points

Wow, this is a fascinating post, and so short! I think a write-up on the historiography of this question would be really useful. The topic is really important for EAs, particularly those focused on the long term or on systemic change, and it could use a detailed treatment.
