This is a special post for quick takes by Babel. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Proposal: I think building epistemic health infrastructure is currently the most effective way to improve EA epistemic health, and is the biggest gap in EA epistemics.

  • My definition of epistemic health infrastructure: a social, digital, or organizational structure that provides systematic safeguards against one or more epistemic health issues, by regulating some aspect of the intellectual processes within the community.
    • They can have different forms (social, digital, organizational, and more) or different focuses (individual epistemics, group epistemology, and more), but one thing that unites them is that they're reliable structures and systems, rather than ad hoc patches.

(Note: to keep this shortform short, I tried to be terse when writing the content below; as a result, the tone may come across as harsher than I intended.)

We talk a lot about epistemic health, but we have massively underinvested in infrastructure that safeguards it. While things like the EA Forum, EAG, and social circles at EA hubs are effective at spreading information and communicating ideas, to my knowledge there has been no systematic attempt to understand (and subsequently improve) how they affect epistemic health.

Examples of things not-currently-existing that I consider epistemic health infrastructure: 

  • (Not saying that these are the most valuable ones, just that they fall into this category; they are examples and only examples.)
  • mechanisms to poll/aggregate community opinions (e.g. a more systematized version of Nathan Young's Polis polls) on all kinds of important topics, with reliable mechanisms to execute actions according to poll results
  • something like the CEA community health team, but focusing on epistemic health, with better-defined duties and workflows, and with better transparency
  • EA forum features and (sorting/recommending/etc.) algorithms aiming to minimize groupthink/information cascades
  • (Many proposals from other community members about improving community epistemic health also fall into this category. I won't repeat them here.)

I plan to coordinate a discussion/brainstorming on this topic among people with relevant interests. Please do PM me if you're interested!

Four podcasts on animal advocacy that I recommend:

  • Freedom of Species (part of 3CR radio station)
    Covers a wide range of topics relevant to animal advocacy, from protest campaigns to wild animal suffering to VR. More of its episodes are on the "protest campaigns" end, which is less popular in EA, but I think it's good to have an alternative perspective, if only for some diversification.
  • Knowing Animals (hosted by Josh Milburn)
    An academic-leaning podcast that focuses on Critical Animal Studies, which IMO is roughly the academic equivalent of animal advocacy. Most guests are academics in philosophy, the humanities, and the social sciences. (One episode discussed wild animal suffering, and I liked that episode quite a lot.)
  • The Sentience Institute Podcast
    EA-aligned. Covers topics ranging from alt proteins to animal-focused impact investing to local animal advocacy groups to digital sentience.
  • Animal Rights: The Abolitionist Approach Commentary (by Gary L. Francione)
    A valuable perspective that's not commonly seen in EA. Recommended for diversification.

Off-topic: I also recommend the Nonlinear Library podcasts; they turn posts on the EA Forum and adjacent forums (LW, AF) into audio. There are different versions that form a series, including one containing the all-time top posts of the EA Forum. There's also a version containing the latest posts that meet a not-very-high karma bar - I use that version to keep track of EA news, and it has saved me a lot of time.

Hypothesis: in the face of cluelessness caused by flow-through effects, "paving the path for future progress" may be a robust benefit of altruistic actions.

Epistemic status: off-the-cuff thoughts, highly uncertain, a hypothesis instead of a conclusion

(In this short-form I will assume a consequentialist perspective.)

Take slavery abolition as an example. The abolition of slavery seems obviously positive at the object level. But when we take into account second-order effects, things become less clear (e.g. the meat-eater problem). However, I think the bad second-order effects (if any) can plausibly be outweighed by one big second-order benefit: that the abolition of slavery paves the way for future moral progress, including (but not limited to) progress in our treatment of animals. For example, it seems likely to me that in a world with slavery, it would be much harder to advocate for the rights of human minorities, of animals, and of digital sentience.

I guess this applies to many other cases too, including cases irrelevant to moral progress but relevant to some other kind of progress. This hypothesis might not change how we act by much, as we usually tend to ignore hard-to-evaluate second-order effects. It may provide a reason why an action is sometimes justified despite seemingly negative second-order effects, but I also worry that it may be abused as a rationalization for ignoring flow-through effects.

Summary: This is a slightly steelmanned version of an argument for creating a mass social movement as an effective intervention for animal advocacy (which I think is neglected by EA animal advocacy), based on a talk by people at Animal Think Tank. (Vote on my comment below to indicate if you think it's worth expanding into a top-level post)

link to the talk; alternative version with clearer audio, whose contents are, I guess, similar, though I'm not sure. (This shortform doesn't cover all the content of the talk, and has likely misinterpreted something in it; I recommend listening to the full talk.)

Epistemic status: An attempt at steelmanning the arguments, though I didn't really try hard - I just wrote down some arguments that occurred to me.

The claim: Creating a mass social movement around animals is more effective than top-down interventions (e.g. policy) and other interventions like vegan advocacy, at least on current margins.

  • This is not to say policy work isn't important. Just that it comes into the picture later.
  • My impression is that the track record of mass movements in creating change is no less impressive than that of policy reforms, but EA seems to have completely neglected the former.

A model of mass movements:

  • Analogous to historic movements like the civil rights movement in the US, and recent movements like Extinction Rebellion. Both examples underwent exponential growth, which will be explained in the next bullet point.
  • You start with a pool of people in the movement, and these people go out and try to grab attention for the movement, using tactics like civil disobedience and protests. Exposure to the ideas leads to more people thinking about them, which in turn leads to more people joining. With the enlarged people pool, you start the cycle again. This then leads to an exponentially growing pool.
  • After the movement is large enough and has enough influence, policy reforms and other interventions aimed at the top of society will become viable.
    • Research suggests that few, if any, movements have failed after reaching a size threshold of 3.5% of the population.
  • Many movements died down because their growth multiplier (the base of the exponential) was below 1, but successful movements can have a much higher multiplier (see the sketch after this list). 
    • Other interventions like vegan outreach and policy work may also exhibit similar exponential growth, but it's plausible that their multipliers are much less likely to exceed 1 (or to be very high) compared with mass social movements.
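
To make the "base" point concrete, here is a minimal sketch in Python of the growth cycle described above. All numbers are made-up illustrations, not estimates of any real movement.

```python
# Toy model of the growth dynamic described above (illustrative numbers only):
# each cycle, the movement's size is multiplied by a "base" -- the average number
# of engaged members next cycle per member in the current cycle.

def simulate_movement(initial_size, base, cycles):
    """Return the movement's size after each attention -> recruitment cycle."""
    sizes = [float(initial_size)]
    for _ in range(cycles):
        sizes.append(sizes[-1] * base)
    return sizes

if __name__ == "__main__":
    dying = simulate_movement(1000, base=0.8, cycles=10)    # base < 1: the movement fizzles out
    growing = simulate_movement(1000, base=1.5, cycles=10)  # base > 1: exponential growth
    print(f"base 0.8 after 10 cycles: {dying[-1]:,.0f} members")
    print(f"base 1.5 after 10 cycles: {growing[-1]:,.0f} members")
```

With a base below 1 the pool shrinks toward zero; with a base above 1 it grows exponentially, which is what would eventually carry a movement toward thresholds like the 3.5% figure above.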

Strategies for mass movements:

  • Strategy is super important! 
    • Start from your ultimate goal (e.g. stop animal exploitation), and then set milestones for achieving this goal, and then design concrete actions and campaigns in service of milestones.
  • One key point is "escalation" - how to make the movement grow exponentially starting from the initial pool
    • You need to be momentum-driven: convert the attention you get to new movement members, seize more attention with your enlarged membership, and repeat the cycle
    • You need to force people in the general public to take sides, possibly by non-violent disruptions and making salient sacrifices (e.g. arrests)
    • You may need to show concrete demands (rather than abstract ones) that resonate with people
  • Another key point is "absorption" - when large numbers of new members join the movement, how to rapidly and effectively absorb them
    • Decentralized movements can absorb new members more rapidly (e.g. newly trained members can go off independently and train others).
  • There's no silver bullet; we still need deep thinking and discussions and coordination to guide our strategy.

Do you think it's worth expanding into a top-level post? Please vote on my comment below.

Statement: This shortform is worth expanding into a top-level post.

Please upvote or downvote this comment to indicate agreement or disagreement with the above statement. Please don't hesitate to cast downvotes.

If you think it's valuable, it'll be really great if you are willing to write this post, as I likely won't have time to do that. Please reach out if you're interested - I'd be happy to help by providing feedback etc., though I'm no expert on this topic.

A counter-argument: Here it is argued that the research supporting the 3.5% figure may not apply to the animal advocacy context.

Apologies for posting four shortforms in a row. I accumulated quite a few ideas in recent days, and I poured them all out.

Summary: When exploring/prioritizing causes and interventions, EA might be neglecting alternative future scenarios, especially along dimensions orthogonal to popular EA topics. We may need to consider causes/interventions that specifically target alternative futures, as well as add a "robustness across future worlds" dimension to the ITN framework.

Epistemic status: low confidence

In cause/intervention exploration, evaluation and prioritization, EA might be neglecting alternative future scenarios, e.g.

  • alternative scenarios of the natural environment: If the future world experiences severe climate change or environmental degradation (with serious downstream socioeconomic effects), what are the most effective interventions now to positively influence such a world? 
  • alternative scenarios of social forms: If the future world isn't a capitalist world, or is different from the current world in some other important aspect, what are the most effective interventions now to positively influence such a world? 
  • ...

This is not about pushing for certain futures to be realized; instead, it's about what to do given such a future. Therefore, arguments against pushing for certain futures (e.g. low neglectedness) do not apply.

For example, an EA might de-prioritize pushing for future X due to its low neglectedness, but if they think X has a non-trivial probability of being realized, and its realization has rich implications for cause/intervention prioritization, then whenever doing prioritization they need to ask "what should I do in a world where X will be realized?". This could mean:

  • finding causes/interventions that are robustly impactful across future scenarios, or
  • finding causes/interventions that specifically target future X.

In theory, the consideration of alternative futures should be captured by the ITN framework, but in practice it usually isn't. Therefore it could be valuable to add one more dimension to the ITN framework: "robustness across future worlds".
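
As a rough illustration of what such a "robustness" dimension could look like, here is a minimal Python sketch; every scenario probability and impact score in it is a made-up placeholder. In this toy example the two interventions happen to have the same expected impact, but very different worst-case (robust) impact.

```python
# Toy illustration of adding a "robustness across future worlds" dimension.
# All scenario probabilities and impact scores below are made-up placeholders.

scenarios = {  # hypothetical future worlds and subjective probabilities
    "business_as_usual": 0.5,
    "severe_climate_disruption": 0.3,
    "major_change_in_social_forms": 0.2,
}

impact = {  # hypothetical impact of each intervention in each world (arbitrary units)
    "intervention_A": {"business_as_usual": 10, "severe_climate_disruption": 1, "major_change_in_social_forms": 0},
    "intervention_B": {"business_as_usual": 6, "severe_climate_disruption": 5, "major_change_in_social_forms": 4},
}

for name, by_world in impact.items():
    expected = sum(p * by_world[w] for w, p in scenarios.items())
    worst_case = min(by_world.values())  # one crude measure of robustness
    print(f"{name}: expected impact {expected:.1f}, worst-case impact {worst_case}")
```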

Also, there are different dimensions along which futures can differ. EA tends to have already considered the dimensions related to EA topics (e.g. which trajectory of AI is actualized), but tends to ignore the dimensions that aren't. This is unreasonable, as EA-topic-related dimensions aren't necessarily the dimensions along which futures have the largest variance.

Finally, note that in some future worlds it's easier to have high altruistic impact than in others. For example, in a capitalist world, altruists seem to be at quite a disadvantage relative to profit-seekers; in some alternative social forms, altruism plausibly becomes much easier and more impactful, while in others it may become even harder. In such cases, we may want to prioritize the futures that have the most potential for current altruistic interventions.

Epistemic status: I only spent 10 minutes thinking about this before I started writing.

Idea: Funders may want to pre-commit to awarding whoever accomplishes a certain goal. (e.g. a funder like Open Phil could commit to awarding a pool of money to people/orgs who reduce meat consumption to a certain level, with the pool split in proportion to contribution)

Detailed considerations:

This can be seen as a version of retroactive funding, but it's special in that the funder makes a pre-commitment.

(I don't know a lot about retroactive funding/impact markets, so please correct me if I'm wrong on the comparisons below)

Compared to other forms of retroactive funding, this leads to the following benefits:

  • less prebuilt infrastructure is needed
  • provides stronger incentives to prospective "grantees"
  • better funder coordination
  • better grantee coordination

... but also the following detriments:

  • much less flexibility
  • perhaps stronger funding centralization
  • potentially unhealthy competition between grantees

Compared to classical grant-proposal-based funding mechanisms, this leads to the following benefits:

  • better grantee coordination
  • stronger incentives for grantees
  • more flexibility (i.e. grantees can use whatever strategy that works, rather than whatever strategy the funder likes)

... but also the following detriments:

  • lack of funds to kickstart new projects that otherwise (i.e. without upfront funding) wouldn't be started
  • perhaps stronger funding centralization
  • potentially unhealthy competition between grantees

Important points:

  • The goals should probably be high-level but achievable, while being strategy-agnostic (i.e. grantees can use whatever morally acceptable strategies to achieve the goal). Otherwise, you lose a large part of the value of pre-committed awards - sparking creativity in prospective grantees.
  • If your ultimate goal is too large and you need to decompose it into subgoals and award the subgoals, make sure your subgoals are dispersed across a diverse range of tracks/strategies. For example, if your ultimate goal is to reduce meat consumption, you may want to set subgoals on the alt protein track, as well as on the vegan advocacy track, and various other tracks.
  • Explicitly emphasize that foundation-building work will be rewarded, rather than rewarding only the work that completes the final step toward the goal.
  • Attribute contribution using an open and transparent research process. Maybe crowdsource opinions from a diverse group of experts. (A toy sketch of proportional payouts follows this list.)
    • Such research will be hard. This is IMO one of the biggest barriers to this approach, but I think it applies to other versions of retroactive funding/impact markets too.
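
Here is a minimal sketch of the "split in proportion to contribution" step, assuming contribution scores have already been produced by the attribution research above; the org names and numbers are hypothetical.

```python
# Toy sketch of splitting a pre-committed prize pool in proportion to
# estimated contribution. The contribution scores here are hypothetical and
# would come from the open, transparent attribution research described above.

def split_pool(pool, contributions):
    """Allocate `pool` proportionally to each contributor's estimated share."""
    total = sum(contributions.values())
    return {name: pool * score / total for name, score in contributions.items()}

if __name__ == "__main__":
    estimated_contributions = {"org_A": 5.0, "org_B": 3.0, "org_C": 2.0}  # hypothetical scores
    payouts = split_pool(pool=1_000_000, contributions=estimated_contributions)
    for org, amount in payouts.items():
        print(f"{org}: ${amount:,.0f}")
```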

One doubt on superrationality:

(I guess similar discussions must have happened elsewhere, but I can't find them. I am new to decision theory and superrationality, so my thinking may very well be wrong.)

First I present an inaccurate summary of what I want to say, to give a rough idea:

  • The claim that "if I choose to do X, then my identical counterpart will also do X" seems to imply (though it doesn't necessarily; see the example for details) that there is no free will. But if we indeed assume determinism, then no decision theory is practically meaningful.

Then I shall elaborate with an example:

  • Two AIs with identical source code, Alice and Bob, are engaging in a prisoner's dilemma. (A toy sketch after this list illustrates the three cases below.)
  • Let's first assume they have no "free will", i.e. their programs are completely deterministic.
    • Suppose that Alice defects; then Bob also defects, due to their identical source code.
    • Now, we can vaguely imagine a world in which Alice had cooperated, and then Bob would also cooperate, resulting in a better outcome.
    • But that vaguely imagined world is not coherent, as it's just impossible that, given the way her source code was written, Alice had cooperated.
    • Therefore, it's practically meaningless to say "It would be better for Alice to cooperate".
  • What if we assume they have free will, i.e. they each have a source of randomness, feeding random numbers into their programs as input?
    • If the two sources of randomness are completely independent, then decisions of Alice and Bob are also independent. Therefore, to Alice, an input that leads her to defect is always better than an input that leads her to cooperate - under both CDT and EDT.
    • If, on the other hand, the two sources are somehow correlated, then it might indeed be better for Alice to receive an input that leads her to cooperate. This is the only case in which superrationality is practically meaningful, but here the assumption of correlation is quite a strong claim and IMO dubious:
      • Our initial assumption on Alice and Bob is only that they have identical source codes. Conditional on Alice and Bob having identical source codes, it seems rather unlikely that their inputs would also be somehow correlated.
      • In the human case: conditional on my counterpart and I having highly similar brain circuits (and therefore way of thinking), it seems unreasonable to assert that our "free will" (parts of our thinking that aren't deterministically explainable by brain circuits) will also be highly correlated.
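
A toy sketch of the setup above, with a hypothetical decision rule and a standard prisoner's dilemma payoff table (nothing here is meant as a claim about real agents): with no randomness, two agents running identical code necessarily make the same move; with independent random inputs their moves come apart; only a correlated source of randomness (represented crudely below by a shared seed) restores the link that superrationality relies on.

```python
import random

# Toy illustration of the Alice/Bob example above. The decision rule and payoff
# table are hypothetical placeholders.

PAYOFFS = {  # (my_move, their_move) -> my payoff, standard prisoner's dilemma ordering
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def deterministic_agent():
    """A fixed program: with no random input, every copy of it defects."""
    return "D"

def randomized_agent(rng):
    """The same kind of program fed a source of randomness ('free will' in the example)."""
    return "C" if rng.random() < 0.5 else "D"

# Deterministic case: identical source code forces identical moves.
alice, bob = deterministic_agent(), deterministic_agent()
print("deterministic:", alice, bob, "payoff to Alice:", PAYOFFS[(alice, bob)])

# Independent randomness: identical code, but the moves are now independent.
alice, bob = randomized_agent(random.Random(1)), randomized_agent(random.Random(2))
print("independent randomness:", alice, bob, "payoff to Alice:", PAYOFFS[(alice, bob)])

# Correlated randomness: a shared source, so the moves coincide again.
alice, bob = randomized_agent(random.Random(42)), randomized_agent(random.Random(42))
print("correlated randomness:", alice, bob, "payoff to Alice:", PAYOFFS[(alice, bob)])
```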

After writing this down, I'm seeing a possible response to the argument above:

  • If we observe that Alice and Bob had, in the past, made similar decisions under equivalent circumstances, then we can infer that:
    • There's an above-baseline likelihood that Alice and Bob have similar source codes, and
    • There's an above-baseline likelihood that Alice and Bob have correlated sources of randomness.
    • (where the "baseline" refers to our prior)

 However:

  • It still rests on the non-trivial metaphysical claim that different "free wills" (i.e. different sources of randomness) could be correlated.
  • The extent to which we update our prior (on the likelihood of correlated inputs) might be small, especially if we consider it unlikely that inputs could be correlated. This may mean superrational considerations carry much less weight in our decision-making.