
I was talking with a new university group organizer recently, and the topic of heavy-tailed impact came up.  Here I’ll briefly explain what heavy tails are and what I think they imply about university group community building.

What’s a heavy tail?

In certain areas, the (vast) majority of the total effect comes from a (small) minority of the causes.  In venture capital, for example, a fund will invest in a portfolio of companies.  Most are expected to fail completely.  A small portion will survive but not change significantly in value.  Just one or two will hopefully grow a lot, not only compensating for the failures, but returning the value of the fund multiple times over.  These one or two companies can determine the overall return of the fund.
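To make the pattern concrete, here is a minimal sketch in Python. The portfolio size, the Pareto distribution, and all of the numbers are my own assumptions for illustration, not data from any real fund:

```python
# Hypothetical illustration: a toy "portfolio" of 20 startups whose return
# multiples are drawn from a heavy-tailed (Pareto-type) distribution.
# The numbers are invented purely to show the pattern described above.
import numpy as np

rng = np.random.default_rng(seed=0)

# Shape parameter 1.2 gives a very heavy tail: most draws are small,
# but occasional draws are enormous.
multiples = rng.pareto(a=1.2, size=20)

total = multiples.sum()
top_two = np.sort(multiples)[-2:].sum()

print(f"Total return multiple across 20 companies: {total:.1f}")
print(f"Share contributed by the top 2 companies:  {top_two / total:.0%}")
```

Re-running with different seeds changes the exact figures, but with a distribution this skewed, the top one or two draws often account for well over half of the total, mirroring the venture capital example above.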

How does this apply to community building?

A few people that come out of your university group may well end up being responsible for the vast majority of your group’s impact.  Those people may be extraordinarily high earners, top AI safety researchers, or strong leaders who build up effective animal advocacy organizations.  Group members who aren’t in this category can certainly end up having meaningful impact, but they are not the primary drivers of the “return” of your “portfolio.”

If you could just find those top people and do everything possible to make sure they succeed, that would be the best thing to do.  The problem is, you don’t know who is going to be on the tail.  You don’t know for sure whether interpretability or RLHF is a more promising alignment direction, or whether people should be working on fish or insect welfare.  You don’t know who is going to earn a bunch of money, or who would actually donate it (well) once they do.

The goal is to find and support people who could plausibly end up being on the tail end of impact, just as the venture capitalist invests in all the companies that have a shot at increasing a lot in value very quickly.

To me, this means starting with broad outreach for introductory programs, with some special focus on groups that are likely to contain especially talented people (Stamps Scholars at Georgia Tech, for example).  It’s important not to select too harshly yet, because many people who have a serious shot at being on the tail are not in these groups, especially if you’re already at an institution that selects for a higher baseline level of talent.  Also, the cost of missing out on a big hit is much higher than the cost of cultivating someone who doesn’t end up having much of an impact.  This type of broad outreach also gets rid of some of the icky elitism feelings people sometimes have when talking about heavy-tailed impact.

Introductory programs are great because 1) they help participants understand the project of effective altruism and 2) they help facilitators figure out who might end up on the tail.  Those who show up, do the readings, and engage thoughtfully and critically with the ideas are all worth investing in.  The important idea here is that it’s probably not worth trying to invest in people who don’t fit in that category.  Design your programming to support those with interest, an open mind, and a desire to learn.  Others may attend the occasional social or discussion event, which is absolutely fine, but don’t waste your time trying to convince them to do more seminars just to have more people participating.  These people may eventually grow and change in ways that make them more interested in doing impactful work.  My guess is that having already introduced them to EA people and EA thinking meaningfully increases the probability that they engage with the community if and when that shift occurs, without pushing ideas on them before they’re ready.  That increased likelihood of engagement, in turn, makes them more likely to end up on the tail end of impact.

What this doesn’t mean

I want to emphasize again that the idea that community building is heavy-tailed doesn’t mean that you should find only the best students at your university to join the introductory program.  If you think you can predict who will end up being the most engaged participants, and you don’t want the less engaged to ruin the atmosphere for the others, form groups based on expected engagement and still provide a cohort for the bottom group.  Only cut applicants who didn’t answer your questions or seem problematic.  Running a marginal cohort is super low cost, and you could very well find someone great.

You can, if you want to, still maintain a perception of selectivity and/or formality through an application process and consistent, high-quality communication.  And the selectivity thing can still be accurate – you’re just picking people to be in the strongest cohorts instead of picking people to accept.

Rawls’ veil of ignorance supports maximizing expected value

One common topic in effective altruism introductory seminars is expected value, specifically the idea that we should usually maximize it. It’s intuitive for some participants, but others are less sure. Here I will offer a simple justification for expected value maximization using a variation of the veil of ignorance thought experiment. This line of thinking has helped make my introductory seminar participants (and me) more confident in the legitimacy of expected value.

The thought experiment begins with a group of rational agents in the “original position”. Here they have no knowledge of who or what they will be when they enter the world. They could be any race, gender, species, or thing. Because they don’t know who or what they will be, they have no unfair biases, and should be able to design a just society and make just decisions.

Now for an expected value thought experiment from the Cambridge EA introductory seminar discussion guide, presented in two versions. Suppose that a disease, or a war, or something, is threatening the lives of 500 people. And suppose you only have enough resources to implement one of the following two options:

Version A…

  1. Save 400 lives, with certainty [EV: 400 saved, 100 dead]
  2. Save 500 lives, with 90% probability; save no lives, 10% probability [EV: 450 saved, 50 dead]

Version B…

  1. 100 people die, with certainty [EV: 400 saved, 100 dead]
  2. 90% chance no one dies; 10% chance 500 people die [EV: 450 saved, 50 dead]

Now imagine that you’re an agent behind the veil of ignorance. You could enter the world as any of the 500 individuals. What do you want the decision-maker to choose? In both versions of the thought experiment, option 1 gives you an 80% chance of surviving, but option 2 gives you a 90% chance. The clear choice is option 2.
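To spell out the arithmetic in that paragraph, here is a minimal sketch in Python. It simply restates the numbers above; nothing here comes from the discussion guide itself:

```python
# 500 people are at risk in both framings of the thought experiment.
population = 500

# Option 1: 400 people are saved with certainty (so 100 die).
ev_saved_option_1 = 400
survival_chance_option_1 = ev_saved_option_1 / population   # 400/500 = 80%

# Option 2: 90% chance everyone is saved, 10% chance no one is.
ev_saved_option_2 = 0.9 * 500 + 0.1 * 0                     # 450 expected lives saved
survival_chance_option_2 = 0.9                              # you survive iff the gamble succeeds

print(f"Option 1: {ev_saved_option_1} expected lives saved, "
      f"{survival_chance_option_1:.0%} chance you survive")
print(f"Option 2: {ev_saved_option_2:.0f} expected lives saved, "
      f"{survival_chance_option_2:.0%} chance you survive")
```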

This framework bypasses the common objection that it’s wrong to take risks with others’ lives by turning both options into a risk.  In my experience, part of this objection often has to do with understandable feelings of discomfort with risk-taking in high-stakes scenarios.  But here risk-taking is the altruistic approach, so a refusal to accept risk would ultimately be for the emotional benefit of the decider.  This topic can also lead to discussion about the meaning of altruism, which is a highly relevant idea for intro seminar participants.

This argument isn’t new (reviewers noted that John Harsanyi was the first to make this argument, and Holden Karnofsky discusses it in his post on one-dimensional ethics), but I hope you find this short explanation useful for your own thinking and for your communication of effective altruism.

Destroying viruses in at-risk labs

Thanks to Garrett Ehinger for feedback and for writing the last paragraph.

Military conflict in or around biological research laboratories could substantially increase the risk of releasing a dangerous pathogen into the environment.  Fighting and the mass movement of refugees combine with other risk factors to magnify the potential consequences of such a release.  Garrett Ehinger elaborates on this issue in his excellent Chicago Tribune piece and proposes the creation of nonaggression treaties for biological labs in war zones as additional pillars to shore up biosecurity norms.

This seems like a great option, but I think there may also be a more immediate technical solution.  Viruses, bacteria, and other dangerous materials in at-risk labs could be stored in containers with built-in methods for destroying their contents.  A strong heating element could be integrated into each pathogen’s storage compartment and activated by scientists at the lab if a threat seems imminent.  Vibration sensors could also trigger the system automatically in the event of a bombing or an earthquake.  This solution would require funding and engineering expertise, and I don’t know how much convincing labs would need before integrating it into their existing setups.

If purchasing and installing entirely new heating elements for their existing containers is too tall an order for labs, there are other alternatives.  For example, autoclaves (pressurized chambers that use steam and heat to sterilize) are already commonplace in many biological laboratories for purposes such as preparing growth media and sterilizing equipment.  There could be value for these labs in developing standard operating procedures (SOPs) and recommendations for the safe disposal of risky pathogens via autoclave.  This solution would be quicker and easier to implement, but in an emergency it could require slightly more time to safely destroy all of a lab’s pathogens.

Different group organizers have widely varying beliefs that affect what work they think is valuable.  From certain perspectives, work that’s generally espoused by EA orgs looks quite negative.  For example, someone may believe that the harms of global health work via the meat eater problem outweigh the benefits of reducing human suffering and saving lives.  Someone else may believe that the expected value of a future with humans is negative, and that biosecurity work reducing human extinction risk is therefore net negative.  In this post I’ll briefly consider how this issue can affect how community builders (CBs) do their work.

Obligations to others

Since many major EA orgs and community members provide support to groups, there may be obligations to permit and/or support certain areas of work in the group.  Open Phil, for example, funds EA groups and supports biosecurity work.  There’s no mandate that organizers conduct any particular activities, but it’s unclear to me what degree of support for certain work is merited.  It currently seems to me that there is no obligation to support work in any given area (e.g. running a biosecurity seminar), but there may be an obligation to not prevent another organizer from engaging in that activity.  This seems like a simple solution, but there is some moral conflict when one organizer is providing background support such as managing finances, conducting outreach, and running social events that facilitate the creation and success of the controversial work.

Deferring

CBs could choose to accept that we (generally) aren’t philosophy PhDs or global priorities researchers, and to weigh heavily the opinions of the people who are and of the main organizations that employ them.  This sort of decision-making attempts to shift responsibility to other actors and can contribute to the problem of monolithic thinking.

Gains from trade

Maybe the organizers of groups A, B, and C think that the meat eater problem makes global health work net negative, but the organizers of groups D, E, and F prioritize humans more, which makes global health look positive.  If everyone focuses on their priorities, organizers from A, B, and C miss out on great animal welfare promoters from D, E, and F, and organizers from D, E, and F miss out on great global health supporters from A, B, and C.  On the other hand, if everyone agrees to support and encourage both priorities, everyone’s group members get into their comparative advantage areas and everyone is better off.  This plan does ignore counteracting forces between interventions and the possibility that organizers will better prepare people for areas that they believe in.  Coordinating this sort of trade also seems quite difficult.

Conclusion

I don’t see a simple way to solve these issues.  My current plan is to reject the “deferring” solution, not prevent other organizers from working on controversial areas, accept that I’ll be providing them with background support, and focus on making, running, and sharing programming that reflects my suffering-focused values.
