Comment author: throwaway2 03 August 2018 05:52:30PM *  20 points [-]

Thanks for making this post, it was long overdue.

Further facts

  • Connection Theory has been criticized as follows: "It is incomplete and inadequate, has flawed methodology, and conflicts with well-established science." The key paper has been removed from their websites and the web archive but is still available at the bottom of this post.
  • More of Geoff Anders's early work can be seen at https://systematicphilosophy.com/ and https://philosophicalresearch.wordpress.com/. (I hope they don't take down these websites as well.)
  • Former Leverage staff have launched a stablecoin cryptocurrency called Reserve (formerly "Flamingo"), which was backed by Peter Thiel and Coinbase.
  • In 2012-2014, they ran THINK.
  • The main person at LEAN is closely involved with Paradigm Academy and helps them recruit people.

Recruitment transparency

  • I have spoken with four former interns/staff who pointed out that Leverage Research (and its affiliated organizations) resembles a cult according to the criteria listed here.
  • The EA Summit 2018 website lists LEAN, Charity Science, and Paradigm Academy as "participating organizations," implying they're equally involved. However, Charity Science is merely giving a talk there. In private conversation, at least one potential attendee was told that Charity Science was more heavily involved. (Edit: This issue seems to be fixed now.)
  • (low confidence) I've heard through the grapevine that the EA Summit 2018 wasn't coordinated with other EA organizations except for LEAN and Charity Science.

Overall, I am under the impression that a majority of EAs think that Leverage is quite culty and ineffective. Leverage staff usually respond by claiming that their unpublished research is valuable, but the insiders mentioned above seemed to disagree.

If someone has strong counterevidence to this skeptical view of Leverage, I would be very interested and open to changing my mind.

Comment author: Jacy_Reese 04 August 2018 07:22:29AM *  21 points [-]

Just to add a bit of info: I helped with THINK when I was a college student. It wasn't the most effective strategy (largely because it was founded before we knew people would coalesce so strongly into the EA identity, which we didn't predict), but Leverage's involvement with it was professional and thoughtful. I didn't get any vibes of cultishness from my time with THINK, though I did find Connection Theory a bit weird and not very useful when I learned about it.

Comment author: kbog  (EA Profile) 16 April 2018 05:29:31PM 2 points [-]

I haven't seen many debates about this within EA. New people are sometimes confused about the issue, but aside from that, pretty much everyone seems to recognize the expected economic impact of vegetarianism.

Comment author: Jacy_Reese 18 April 2018 01:30:30PM 5 points [-]

I get it pretty frequently from newcomers (maybe in the top 20 questions for animal-focused EA?), but everyone seems convinced by a brief explanation of how there's still a small chance of big purchasing changes even though not every small consumption change leads to a purchasing change.

Comment author: MetricSulfateFive 23 February 2018 08:09:36PM 3 points [-]

He defines hedonium/dolorium as the maximum positive/negative utility you can generate with a certain amount of energy:

"For example, I think a given amount of dolorium/dystopia (say, the amount that can be created with 100 joules of energy) is far larger in absolute moral expected value than hedonium/utopia made with the same resources."

Comment author: Jacy_Reese 27 February 2018 05:36:12PM 1 point [-]

Exactly. Let me know if this doesn't resolve things, zdgroff.

Comment author: saulius  (EA Profile) 27 February 2018 01:01:56AM 1 point [-]

But humanity/AI is likely to expand to other planets. Won't those planets need to have complex ecosystems that could involve a lot of suffering? Or do you think it will all be done with some fancy tech that'll be too different from today's wildlife for it to be relevant? It's true that those ecosystems would (mostly?) be non-naturogenic, but I'm not that sure that people would care about them; it'd still be animals/diseases/hunger, etc. hurting animals. Maybe it'd be easier to engineer an ecosystem without predation and diseases, but that is a non-trivial assumption, and suffering could then arise in other ways.

Also, some humans want to spread life to other planets for its own sake and relatively few people need to want that to cause a lot of suffering if no one works on preventing it.

This could be less relevant if you think that most of the expected value comes from simulations that won't involve ecosystems.

Comment author: Jacy_Reese 27 February 2018 05:35:03PM 1 point [-]

Yes, terraforming is a big way in which close-to-WAS scenarios could arise. I do think it's smaller in expectation than digital environments that develop on their own and thus are close-to-WAS.

I don't think terraforming would be done very differently from today's wildlife, e.g. without predation and diseases.

Ultimately I still think the digital, not-close-to-WAS scenarios seem much larger in expectation.

Comment author: Brian_Tomasik 22 February 2018 09:23:57PM 7 points [-]

I tend to think of moral values as being pretty contingent and pretty arbitrary, such that what values you start with makes a big difference to what values you end up with even on reflection. People may "imprint" on the values they receive from their culture to a greater or lesser degree.

I'm also skeptical that sophisticated philosophical-type reflection will have significant influence over posthuman values compared with more ordinary political/economic forces. I suppose philosophers have sometimes had big influences on human politics (religions, Marxism, the Enlightenment), though not necessarily in a clean "carefully consider lots of philosophical arguments and pick the best ones" kind of way.

Comment author: Jacy_Reese 27 February 2018 05:32:07PM 1 point [-]

I'd qualify this by adding that the philosophical-type reflection seems to lead in expectation to more moral value (positive or negative, e.g. hedonium or dolorium) than other forces, despite overall having less influence than those other forces.

Comment author: Lukas_Gloor 26 February 2018 07:23:25AM 4 points [-]

I think that there's an inevitable tradeoff between wanting a reflection process to have certain properties and worries about this violating goal preservation for at least some people. This blogpost is not about MCE directly, but if you think of "BAAN thought experiment" as "we do moral reflection and the outcome is such a wide circle that most people think it is extremely counterintuitive" then the reasoning in large parts of the blogpost should apply perfectly to the discussion here.

That is not to say that trying to fine-tune reflection processes is pointless: I think it's very important to think about what our desiderata should be for a CEV-like reflection process. I'm just saying that there will be tradeoffs between certain commonly mentioned desiderata that people don't realize are there, because they think there is such a thing as "genuinely free and open-ended deliberation."

Comment author: Jacy_Reese 27 February 2018 05:29:35PM 2 points [-]

Thanks for commenting, Lukas. I think Lukas, Brian Tomasik, and others affiliated with FRI have thought more about this, and I basically defer to their views here, especially because I haven't heard any reasonable people disagree with this particular point. Namely, I agree with Lukas that there seems to be an inevitable tradeoff here.

Comment author: ateabug 22 February 2018 11:03:10PM *  2 points [-]

Random thought: (factory farm) animal welfare issues will likely eventually be solved by cultured (lab-grown) meat once it becomes cheaper than raising actual animals. This may take a few decades, but social change might take even longer. The article even suggests technical issues may be easier to solve, so why not focus more on that (rather than on MCE)?

Comment author: Jacy_Reese 23 February 2018 04:20:42PM *  1 point [-]

I just took it as an assumption in this post that we're focusing on the far future, since I think basically all the theoretical arguments for/against that have been made elsewhere. Here's a good article on it. I personally mostly focus on the far future, though not overwhelmingly so. I'm at something like 80% far future, 20% near-term considerations for my cause prioritization decisions.

This may take a few decades, but social change might take even longer.

To clarify, the post isn't talking about ending factory farming. And I don't think anyone in the EA community thinks we should try to end factory farming without technology as an important component. Though I think there are good reasons for EAs to focus on the social change component, e.g. there is less for-profit interest in that component (most of the tech money is from for-profit companies, so it's less neglected in this sense).

Comment author: Ben_West  (EA Profile) 22 February 2018 06:49:53PM 2 points [-]

AI designers, even if speciesist themselves, might nonetheless provide the right apparatus for value learning such that the resulting AI will not propagate the moral mistakes of its creators

This is something I also struggle with in understanding the post. It seems like we need:

  1. AI creators can be convinced to expand their moral circle
  2. Despite (1), they do not wish to be convinced to expand their moral circle
  3. The AI follows this second desire to not be convinced to expand their moral circle

I imagine this happening with certain religious things; e.g. I could imagine someone saying "I wish to think the Bible is true even if I could be convinced that the Bible is false".

But it seems relatively implausible with regards to MCE?

Particularly given that AI safety talks a lot about things like CEV, it is unclear to me whether there is really a strong trade-off between MCE and AIA.

(Note: Jacy and I discussed this via email and didn't really come to a consensus, so there's a good chance I am just misunderstanding his argument.)

Comment author: Jacy_Reese 22 February 2018 07:02:31PM *  2 points [-]

Hm, yeah, I don't think I fully understand you here either, and this seems somewhat different than what we discussed via email.

My concern is with (2) in your list. "[T]hey do not wish to be convinced to expand their moral circle" is extremely ambiguous to me. Presumably you mean that they -- without MCE advocacy being done -- wouldn't put wide-MC* values, or values that lead to wide-MC, into an aligned AI. But I think it's being conflated with "they actively oppose it" or "they would answer 'no' if asked, 'Do you think your values are wrong when it comes to which moral beings deserve moral consideration?'"

I think they don't actively oppose it, they would mostly answer "no" to that question, and it's very uncertain if they will put the wide-MC-leading values into an aligned AI. I don't think CEV or similar reflection processes reliably lead to wide moral circles. I think they can still be heavily influenced by their initial set-up (e.g. what the values of humanity are when reflection begins).

This leads me to think that you only need (2) to be true in a very weak sense for MCE to matter. I think it's quite plausible that this is the case.

*Wide-MC meaning an extremely wide moral circle, e.g. one that includes insects and small/weird digital minds.

Comment author: Pablo_Stafforini 22 February 2018 12:31:49PM *  8 points [-]

The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society.

Wild animal advocacy is far more neglected than farmed animal advocacy, and it involves even larger numbers of sentient beings ignored by most of society. If the superiority of farmed animal advocacy over global poverty along these two dimensions is a sufficient reason for not working on global poverty, why isn't the superiority of wild animal advocacy over farmed animal advocacy along those same dimensions also a sufficient reason for not working on farmed animal advocacy?

Comment author: Jacy_Reese 22 February 2018 02:53:57PM *  4 points [-]

I personally don't think WAS is as similar to the most plausible far future dystopias, so I've been prioritizing it less, even over just the past couple of years. I don't expect far future dystopias to involve as much naturogenic (nature-caused) suffering, though of course it's possible (e.g. if humans create large numbers of sentient beings in a simulation and then let it run on its own for a while, the simulation could come to be viewed as naturogenic-ish and those attitudes could become more relevant).

I think if one wants something very neglected, digital sentience advocacy is basically across-the-board better than WAS advocacy.

That being said, I'm highly uncertain here and these reasons aren't overwhelming (e.g. WAS advocacy pushes on more than just the "care about naturogenic suffering" lever), so I think WAS advocacy is still, in Gregory's words, an important part of the 'far future portfolio.' And often one can work on it while working on other things, e.g. I think Animal Charity Evaluators' WAS content (e.g. [guest blog post by Oscar Horta](https://animalcharityevaluators.org/blog/why-the-situation-of-animals-in-the-wild-should-concern-us/)) has helped them be more well-rounded as an organization, and didn't directly trade off with their farmed animal content.

Comment author: Gregory_Lewis 21 February 2018 10:24:11PM 18 points [-]

Thank you for writing this post. An evergreen difficulty in discussing topics of such broad scope is the large number of matters that are relevant and difficult to judge, and on which one's judgement (whatever it may be) can reasonably be challenged. I hope to offer a crisper summary of why I am not persuaded.

I understand from this that the primary motivation for MCE is avoiding AI-based dystopias, with the implied causal chain being along the lines of, “If we ensure the humans generating the AI have a broader circle of moral concern, the resulting post-human civilization is less likely to include dystopic scenarios involving great multitudes of suffering sentiences.”

There are two considerations that speak against this being a greater priority than AI alignment research: 1) Back-chaining from AI dystopias leaves relatively few occasions where MCE would make a crucial difference. 2) The current portfolio of ‘EA-based’ MCE is poorly addressed to averting AI-based dystopias.

Re. 1): MCE may prove neither necessary nor sufficient for ensuring AI goes well. On one hand, AI designers, even if speciesist themselves, might nonetheless provide the right apparatus for value learning such that the resulting AI will not propagate the moral mistakes of its creators. On the other, even if the AI-designers have the desired broad moral circle, they may have other crucial moral faults (maybe parochial in other respects, maybe selfish, maybe insufficiently reflective, maybe some mistaken particular moral judgements, maybe naive approaches to cooperation or population ethics, and so on) - and even if they do not, there are manifold ways in the wider environment (e.g. arms races), or in terms of technical implementation, that may lead to disaster.

It seems clear to me that, pro tanto, the less speciesist the AI-designer, the better the AI. Yet for this issue to be of such fundamental importance as to be comparable to AI safety research generally, the implication is of an implausible doctrine of ‘AI immaculate conception’: only by ensuring we ourselves are free from sin can we conceive an AI which will not err in a morally important way.

Re 2): As Plant notes, MCE does not arise from animal causes alone: global poverty and climate change advocacy also act to extend moral circles, as well as propagating other valuable moral norms. Looking at things the other way, one should expect the animal causes found most valuable from the perspective of avoiding AI-based dystopia to diverge considerably from those picked on face-value animal welfare. Companion animal causes are far inferior from the latter perspective, but it is unclear on the former whether this is a good way of fostering concern for animals; if the crucial thing is for AI-creators, rather than the general population, not to be speciesist, targeted interventions like ‘Start a petting zoo at Deepmind’ look better than broader ones, like the abolition of factory farming.

The upshot is that, even if there are some particularly high yield interventions in animal welfare from the far future perspective, this should be fairly far removed from typical EAA activity directed towards having the greatest near-term impact on animals. If this post heralds a pivot of Sentience Institute to directions pretty orthogonal to the principal component of effective animal advocacy, this would be welcome indeed.

Notwithstanding the above, the approach outlined here has a role to play in some ideal 'far future portfolio', and it may be reasonable for some people to prioritise work on this area, if only for reasons of comparative advantage. Yet I aver it should remain a fairly junior member of this portfolio compared to AI-safety work.

Comment author: Jacy_Reese 21 February 2018 11:47:25PM *  5 points [-]

Those considerations make sense. I don't have much more to add for/against than what I said in the post.

On the comparison between different MCE strategies, I'm pretty uncertain which are best. The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society. I'm not relatively very worried about, for example, far future dystopias where dog-and-cat-like-beings (e.g. small, entertaining AIs kept around for companionship) are suffering in vast numbers. And environmentalism is typically advocating for non-sentient beings, which I think is quite different than MCE for sentient beings.

I think the better competitors to farmed animal advocacy are advocating broadly for antispeciesism/fundamental rights (e.g. Nonhuman Rights Project) and advocating specifically for digital sentience (e.g. a larger, more sophisticated version of People for the Ethical Treatment of Reinforcement Learners). There are good arguments against these, however, such as that it would be quite difficult for an eager EA to get much traction with a new digital sentience nonprofit. (We considered founding Sentience Institute with a focus on digital sentience. This was a big reason we didn't.) Whereas given the current excitement in the farmed animal space (e.g. the coming release of "clean meat," real meat grown without animal slaughter), the farmed animal space seems like a fantastic place for gaining traction.

I'm currently not very excited about "Start a petting zoo at Deepmind" (or similar direct outreach strategies) because it seems like it would produce a ton of backlash, coming across as too adversarial and aggressive. There are additional considerations for/against (e.g. I worry that it'd be difficult to push a niche demographic like AI researchers very far away from the rest of society, at least the rest of their social circles; I also have the same traction concern I have with advocating for digital sentience), but this one just seems quite damning.

The upshot is that, even if there are some particularly high yield interventions in animal welfare from the far future perspective, this should be fairly far removed from typical EAA activity directed towards having the greatest near-term impact on animals. If this post heralds a pivot of Sentience Institute to directions pretty orthogonal to the principal component of effective animal advocacy, this would be welcome indeed.

I agree this is a valid argument, but given the other arguments (e.g. those above), I still think it's usually right for EAAs to focus on farmed animal advocacy, including Sentience Institute at least for the next year or two.

(FYI for readers, Gregory and I also discussed these things before the post was published when he gave feedback on the draft. So our comments might seem a little rehearsed.)
