
Here are some areas where I've felt for a long time that fellow EA community members are making systematic mistakes.

Summary:

1. Don't worry much about diet change.
2. Be generally cynical or skeptical about AI ethics and safety initiatives that are not closely connected to the core long-run issue of AGI alignment and international cooperation.
3. Worry more about object-level cause prioritization and charity evaluation, and worry less about meta-level methodology.
4. Be more ruthless in promoting Effective Altruism.

Over-emphasis on diet change

EAs seem to place consistently high emphasis on adopting vegan, vegetarian and reducetarian diets.

However, the benefits of going vegan are equivalent to less than a nickel per day donated to effective charities. Other EAs have raised this point before; the only decent response given at the time was that the estimates for the effectiveness of animal charities were likely over-optimistic. However, in the linked post I took the numbers displayed by ACE in 2019, and scaled them back a few times to be conservative, so it would be tough to argue that they are over-optimistic. I also used conservative estimates of climate change charities to offset the climate impacts, and also toyed with using climate change charities to offset animal suffering by using the fungible welfare estimates (I didn't post that part but it's easy to replicate). In both cases, the vegan diet is still only as good as donations of pennies per day, suggesting that there is nothing particularly optimistic about animal charity ratings; it's just the nature of individual consumption decisions to make a tiny impact. And then we have to contend with other effective charities like x-risk and global poverty alleviation possibly being better than animal and climate change charities. Therefore, this response is now very difficult to substantiate.
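To make the structure of that comparison concrete, here is a minimal sketch of the arithmetic with placeholder numbers (not ACE's actual 2019 figures, and not the scaled-back estimates from the linked post); only the shape of the calculation is taken from the argument above.

```python
# Illustrative only: the structure of the diet-vs-donation comparison,
# using placeholder inputs rather than any organization's real estimates.

animals_spared_per_year_vegan = 100        # assumed: farmed animals spared by one person's vegan diet
animals_helped_per_dollar_charity = 10     # assumed: animals helped per dollar to a top animal charity,
                                           # after scaling the headline figure back to be conservative

# Dollars of donations that would match a year of veganism:
equivalent_donation_per_year = animals_spared_per_year_vegan / animals_helped_per_dollar_charity
equivalent_donation_per_day = equivalent_donation_per_year / 365

print(f"${equivalent_donation_per_year:.2f} per year, "
      f"or about {100 * equivalent_donation_per_day:.1f} cents per day")
# With placeholder inputs like these, the diet works out to pennies per day of donations;
# the conclusion is driven by the ratio of the two inputs, not by optimism about any particular charity.
```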

The basic absolute merit of veganism is of course not being debated here - it saves a significant number of animals, which is sufficient to prefer that a generic member of society be vegan (given current farming practices at least).

However, the relative impact of other efforts seems to be much much higher, so there are other implications. First, putting public emphasis on being vegan/vegetarian is a bad choice, compared to placing that emphasis on donations (or career changes, etc). This study suggests that nudges to "turn off the lights" etc can reduce people's support for a carbon tax, as they feel like there is an alternative and easier solution for the environment besides legislation. What if a similar effect applies to animal welfare legislation or donations? The effect goes away when people know just how little of an impact they are actually having, but such messages are rarely given when it comes to veg*n activism - even when EAs are doing it. In addition to a possibly detrimental impact on the political attitudes and donation habits of our audience (committed EAs themselves, almost certainly, are not so vulnerable to these nudges) there is a risk that it reduces the popular appeal of the EA movement. While veg*nism seems to be significantly more accepted in public discourse now than it was ~10 years ago, it's still quite controversial.

Second, actually being vegan/vegetarian may be a bad choice for someone who is doing productive things with their career and donations. If a veg*n diet is slightly more expensive, more time consuming, or less healthy, then adopting it is a poor choice. Of course, many people have adequately pointed out that veg*n diets need not be more expensive, time consuming, or unhealthy than omnivorous diets. However, it's substantially more difficult to make them satisfy all three criteria at the same time. As for expense and time consumption - that's really something for people to decide for themselves, based on their local food options and habits. As for health:

Small tangent on the healthiness of vegan/vegetarian diets

I am not a nutritionist but my very brief look at the opinions of expert and enthusiast nutritionists and the studies they cite has told me that the healthiest diet is probably not vegetarian.

First, not all animal products are equal, and the oft-touted pro-veg*n studies overlook these differences. Many of the supposed benefits of veg*n diets seem to come from the exclusion of processed meat, which is meat that has been treated with modern preservatives, flavorings, etc. This is really backed up by studies, not just anti-artificial sentiment. Good studies looking at the health impacts of unprocessed meat (which, I believe, generally includes ground beef) are rare. I've only found one, a cohort study, and it did find that unprocessed red meat increased mortality, but not as much as processed red meat. Whether unprocessed white meat and fish have detrimental impacts seems like a very open question. And even when it comes to red meat, nutritional findings backed by evidence of similar strength have, I believe, been overturned in the past. Then there are a select few types of meat which seem particularly healthy, like sardines, liver and marrow, and there is still less reason to believe that they are harmful. Moving on to dairy products, it seems that fermented dairy products are significantly superior to nonfermented ones.

Second, vegan diets miss out on creatine, omega-3 fat in its proper EPA/DHA form, Vitamin D, taurine, and carnosine. Dietary intake of these is not generally necessary for a basically decent life as far as I know, but being fully healthy (longest working life + highest chance of living to a longevity horizon + best cognitive function) is a different story, and these chemicals are variously known or hypothesized to be beneficial. You can of course supplement, but at the cost of extra time and money - and that's assuming that you remember to supplement. For some people who are simply bad at keeping habits - me, at least - supplementing for an important nutrient just isn't a reliable option; I can set my mind to do it but I predictably fail to keep up with it.

Third, vegan/vegetarian diets reduce your flexibility to make other healthy changes. As an omnivore, it's pretty easy for me to minimize or avoid unhealthy foods such as store-bought bread (with so many preservatives, flavorings etc) and fortified cereal. As a vegetarian or vegan, this would be significantly more difficult. When I was vegan and when I was vegetarian, I made it work both times by eating some less-than-healthy foods; otherwise I would have had to spend more time and/or money putting my diet together.

Finally, nutritional science is frankly a terrible mess, and not necessarily due to ill motives and practices on the part of researchers (though there is some of that) but also because of just how difficult it is to tease out correlation from causation in this business. There's a lot that we don't understand, including chemicals that may play a valuable health role but haven't been properly identified as such. Therefore, in the absence of clear guidance it's wise to defer to eating (a) a wide variety of foods, which is enhanced by including animal products, and (b) foods that we evolved to eat, which has usually included at least a small amount of meat.

For these reasons, I weakly feel that the healthiest diet will include some meat and/or fish, and feel it more strongly if we consider that someone is spending only a limited amount of time and money on their diet. Of course that doesn't mean that a typical Western omnivorous diet is superior to a typical Western veg*n diet (it probably isn't).

Too much enthusiasm for AI ethics

The thesis of misaligned AGI risk, developed by researchers like Yudkowsky and Bostrom, has motivated a rather wide range of efforts to establish near-term safety and ethics measures in AI. The idea is that by starting conversations and institutions and regulatory frameworks now, we're going to be in a better position to build safe AGI in the future.

There is some value in that idea, but people have taken it too far and willingly signed onto AI issues without a clear benefit for long-run AI safety or even for near-term AI use in its own right. (I've been guilty of this.) The problem is a lack of good reason to believe that better outcomes are achieved when people put a greater emphasis on AI ethics. Most people outside of EA do not engage in robust consequentialist analysis for ethics. One example would be the fact that Google's ethics board was dissolved because of outrage against the inclusion of the conservative Kay Coles James, largely on the basis of her views on gender politics; an EA writing for Vox, Kelsey Piper, mildly fanned the flames by describing (but, commendably, not endorsing) the outrage while simultaneously taking Google to task for not assigning substantial power to the ethics board. Yet it's not really clear if a powerful ethics board - especially one which is composed only of people approved by Google's constituency - is desirable, as I shall argue. An example of AI ethics boards in action would be an ethics report produced by the ethics board at the policing technology company Axon, which recommended against using facial recognition technology on body cams. While it purports to perform a "cost-benefit analysis", and included the participation of one Miles Brundage who is affiliated with the EA community, the recommendation was developed on a wholly rhetorical and intuitive basis without any quantification or explicit qualitative comparison of costs and benefits. It had a dubious and partisan emphasis on improving the relative power and social status of racial minorities as opposed to a cleaner emphasis on improving aggregate welfare, and an utterly bizarre omission of the benefit that facial recognition tech could make it easier to identify suspects and combat crime. My attempts to question two of the authors about some of these problems led nowhere.

EAs have piled onto the worries over "killer robots" without adequate supporting argument. I have seen EAs circulate half-baked fears of suicide drones making it easy to murder people (just carry a tennis racket, or allow municipalities to track or ban drone flights if they so choose) or assassinate political leaders (they already speak behind bullet-resistant plexiglass sometimes, so this is not a problem) or overwhelm defenses (just use turrets with lasers or guns; every measure has a countermeasure). As I argued here, introducing AI into international warfare does not seem bad overall. This point was generally accepted; the remaining quarrel was that AI could facilitate more totalitarian rule, as the government could take domestic actions without the consent of human police/militaries. I think this argument is potentially valid but unresolved; maybe stronger policing is better for countries, and it needs more investigation. These robots will be subject to democratic oversight and approval, not totalitarian command. When unethical police behavior is restrained, it is almost always done by public outrage and oversight, not by freethinking police officers disobeying their orders.

For a more extreme hypothesis, Ariel Conn at FLI has voiced the omnipresent Western fear of resurgent ethnic cleansing, citing the ease of facial recognition of people's race - but has that ever been the main obstacle to genocide? Moreover, the idea of thoughtless machines dutifully carrying out a campaign of mass murder takes a rather lopsided view of the history of ethnic cleansing and genocide, in which the real death and suffering has less often been mitigated by the presence of humans in the loop than it has been caused or exacerbated by human passions, grievances, limitations, and incompetence. To be clear, I don't think weaponized AI would make the risks of genocide or ethnic cleansing smaller; there just seems to be no good reason to expect it to make the risks bigger.

On top of all this, few seem to have seriously grappled with the fact that we only have real influence in the West, and producing fewer AI weapons mainly just means fewer AI weapons in the West. You can wish for a potent international treaty, but even if that pans out (history suggests it probably won't) it doesn't change the fact that EAs and other activists are incorrectly calling to stop AI weapon development now. And better weapons for the West does mean better global outcomes - especially now that the primary question for Western strategic thinkers is probably not about expanding or even maintaining a semblance of Western global hegemony, but just determining how much Western regional security and influence can be saved from falling victim to rising Russian, Chinese and other challenges. But even when the West was engaging in very dubious wars of global policing (Vietnam, Iraq) it still seems that winning a bad war would have been much better than losing a bad war. Even Trump's recently speculated military adventures in Venezuela and Iran, if they had occurred, would be less bad if they resulted in American victory than American defeat. True, there is moral hazard involved in giving better tools for politicians to commit to bad policies, but on my intuition that seems unlikely to outright outweigh the benefits of success - it would just partially counterbalance them. (Piper, writing for Vox, did mention improved military capability as a benefit of AI weapons.)

So generally speaking, giving more power to philosophers and activists and regulators to restrict the development and applications of AI doesn't seem to lead anywhere good in the short or medium run. EA-dominated institutions would be mostly trustworthy to do it well (I hesitate slightly because of FLI's persistent campaigning against AI weaponry), but an outside institution/network with a small amount of EA participation (or even worse, no EA participation) is a different story.

The real argument for near-term AI oversight is that it will lead to better systems in the long run. But I am rather skeptical that, in the long run, we will suffer from a dearth of public scrutiny of AI ethics and safety. AI ethics and safety for current systems is not neglected; arguably it's over-emphasized at the expense of liberty and progress. Why think it will be neglected in the future? As AI advances and proliferates, it will likely gain more public attention, and by the time that AGI comes around, we may well find ourselves being restrained by too much caution and interference from activists and philosophers. Of course Bostrom and Yudkowsky's thesis on AGI misalignment will not be so neglected when people see AI on the verge of surpassing humans! Yes, AI progress can be unexpectedly rapid, so there may be some neglect, but there will still be less neglect than there is now. And faster AGI rollout could be preferable because AI might reduce global risk, or because Bostrom's 'astronomical waste' argument for great caution at the expense of growth is flawed. I think it likely is, because it relies on the debatable assumptions of (a) existential risks being concentrated in the near/medium term future and (b) a logistic (as opposed to exponential) growth in the value of humanity as time goes by. Tyler Cowen has argued for treating growth as comparably important to risk management. Nick Beckstead casts further doubt on the astronomical waste argument. Therefore even AGI/ASI rollout should arguably follow the status quo or be accelerated, so more ethics/safety oversight and regulation on the margin will possibly be harmful.

To be sure, international institutions for cooperation on AI and actual alignment research, ahead of time, are both robustly good things where we can reliably expect society to err on the side of doing too little. But the other stuff has minimal or possibly negative value.

Top-heavy emphasis on methodology at the expense of object level progress (edit: OK, few people are actually inhibited by this, not a big deal)

It pains me to see so much effort going into writeups and arguments along the lines of "EA needs more of [my favorite type of research]" or "EA needs to rely less on quantitative expected value estimates" and so on. This is often cheap criticism which doesn't really lead anywhere but intractable arguments, and can weaken the reputation of the EA movement. This seems reminiscent of the perennial naive-science versus philosophy-of-science wars, but where most science fields seem to have fifty scientists for every philosopher of science, we seem to have two or three EA researchers for every methodology-of-EA philosopher. Probably an exaggeration, but you get the point.

EA has made exactly one major methodological step forward since its beginnings, which was identifying the optimizer's curse about eight years ago, something which had the benefit of a mathematical proof. I can't think of any meta level argument that has substantially contributed to the EA cause prioritization and charity evaluation process since then. I at least have not benefited from other such arguments. To be clear, such inquiry is better than nothing. But what's much better is for people to engage in real, object level arguments about causes and charities. If you think that EA can benefit by paying more attention to, say, psychoanalytic theory, then great! Don't tell us or berate us about it; instead, lead by example, use psychoanalytic theory, and show us what it says about a charity or cause area. If you're right about the value of this research methodology, then this should be easy for you to do. And then we will see your point and we'll know how to look into this research for more ideas. This is very similar to Noah Smith's argument on the two-paper rule. It's a much more epistemically and socially healthy way of doing things. And along the way, we can get directly useful information about cause areas that we may be missing. Until then, don't write me off as an ideologue just because I'm not inclined to spend my limited free time struggling through Deleuze and Guattari.
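For readers unfamiliar with the optimizer's curse mentioned above, here is a minimal simulation sketch (my own illustration with arbitrary numbers, not drawn from any EA analysis) of the basic phenomenon: selecting the option with the highest noisy estimate systematically overstates how good the selected option really is, even when every individual estimate is unbiased.

```python
# The optimizer's curse in miniature: pick the charity with the highest *estimated*
# cost-effectiveness, and the winner's estimate is biased upward by selection on noise.
import random

random.seed(0)
n_charities = 20
true_value = 1.0          # assume every charity is actually equally good
noise_sd = 0.5            # noise in each cost-effectiveness estimate
n_trials = 10_000

overestimate = 0.0
for _ in range(n_trials):
    estimates = [random.gauss(true_value, noise_sd) for _ in range(n_charities)]
    best = max(estimates)              # the value we *think* the chosen charity has
    overestimate += best - true_value  # how much the winner's estimate exceeds reality

print(f"Average overestimate of the selected option: {overestimate / n_trials:.2f}")
# Roughly 0.9 here: the selected option looks almost twice as good as it really is,
# purely because we chose it for having the highest noisy estimate.
```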

Not ruthless enough

This post suggested the rather alarming idea that EA's growth is petering out in a sort of logistic curve. It needs to be taken very seriously. In my biased opinion, this validates some of my longtime suspicions that EAs are not doing enough to actively promote EA as something to be allied with. We've been excessively nice and humble in the face of criticism, and allowed outsiders' ideas to dominate public conversations about EA. We've over-estimated the popular appeal that comes from being unusually nice and deferential, neglected the popular appeal that comes from strength and condemnation, imagined everything in terms of 'mistake theory' instead of developing a capacity to wield 'conflict theory', and assumed that the popular human conception of "ethics" and "niceness" was as neurotic, rigid and impartial as the upper-class urban white Bay Area/Oxford academic conception of "ethics" and "niceness". In today's world, people don't care how "ethical" or "nice" you are if you are on the wrong team, and people who don't have a team won't be motivated to action unless you give them one.

I can't spell out more precisely what I think EAs should do differently, not because I'm trying to be coy about some unspeakable subversive plot, but because every person needs to look in their own life and environment to decide for themselves what they should do to develop a more powerful EA movement, and this is going to vary person to person. Generally speaking, I just think EAs should have a change in mindset and take leaves out of the books of more powerful social movements. We should absolutely be very nice and fair to each other, and avoid some of the excesses of hostility displayed by other social movements, but there's more to the issue than that.

Comments

Given that ruthlessness has downside risks, maybe we should brainstorm a number of new ideas for movement growth (assuming movement growth is, in fact, valuable) instead of jumping straight to ruthlessness?

In today's world, people don't care how "ethical" or "nice" you are if you are on the wrong team, and people who don't have a team won't be motivated to action unless you give them one.

This is a terrible incentive gradient. I would much rather we make an EA project out of changing or mitigating this incentive gradient than give in to it.

Yes, we could have a large number of people who call themselves "EAs", and all they care about is whether you are on the right team... but would it be an EA movement worth the name?

Please read this post: https://www.effectivealtruism.org/articles/hard-to-reverse-decisions-destroy-option-value/

I work for CEA, but these views are my own.

Ruthlessness comment:

Short version of my long-winded response: I agree that promotion is great and that we should do more of it if we see growth slowing down, but I don't see an obvious reason why promotion requires "ruthlessness" or more engagement with criticism.

  • I'm in favor of promoting EA, and perhaps being a bit less humble than we have been in our recent public communication. I really like cases where someone politely but firmly rebukes bad criticism. McMahan 2016 is the gold standard, but Will MacAskill recently engaged in some of this himself.
  • At the same time, I've had many interactions, and heard of many more interactions, where someone with undeniable talent explained that EA came across to them as arrogant or at least insufficiently humble, to an extent that they were more reluctant to engage than they would have been otherwise.
    • Sometimes, they'd gotten the wrong idea secondhand from critics, but they were frequently able to point to specific conversations they'd had with community members; when I followed up on some of those examples, I saw a lot of arrogant, misguided, and epistemically sketchy promotion of EA ideas.
    • The latter will happen in any movement of sufficient size, but I think that a slight official move toward ruthlessness could lead to a substantial increase in the number of Twitter threads I see where someone responds to a reasonable question/critique with silly trolling rather than gentle pushback or a link to a relevant article.
  • How frequently are people who take action for their "teams" actually doing something effective? For all the apparent success of, say, extremist American political movements, they are much smaller and weaker in reality than the media paints them as. Flailing about on Twitter rarely leads to policy change; donor money often goes to ridiculous boondoggle projects rather than effective movement-building efforts. I can't think of many times I've seen two groups conflict-theorying at each other and thought "ah, yes, I see that group X is winning, and will therefore get lasting benefit from this fight".
    • So far, EA has done a pretty good job of avoiding zero-sum fights in favor of quiet persuasion on the margins, and in doing so, we've moved a lot of money/talent/attention.
    • If we start picking more fights (or at least not letting fights die down), this ties up our attention and tosses us into a very crowded market.
      • Can EA get any appreciable share of, say, anti-Trump dollars, when we are competing with the ACLU and Planned Parenthood?
      • Will presidential candidates (aside from Andrew Yang) bother to call for more aid funding or better AI policy when constituencies around issues like healthcare and diversity are still a hundred times our size?

It is likely that popular appeal can help EA achieve some of its aims; it could grow our talent pool, increase available fundraising dollars, and maybe help us push through some of our policy projects.

On the other hand, much of the appeal that EA already has is tied to the way it differs from other social movements. Being "nice" in a Bay Area/Oxford sense has helped us attract hundreds of skilled people from around the world who share that particular taste (and often wound up moving to Oxford or the Bay Area). How many of these people would leave, or never be found at all, if EA shifted in the direction of "ruthlessness"?

----

But this all feels like I'm nitpicking at one half of your point. I'm on board with this:

Every person needs to look in their own life and environment to decide for themselves what they should do to develop a more powerful EA movement, and this is going to vary person to person.

Some people are really good at taking critics apart, and more power to them. Even more power to people who can produce wildly popular pro-EA content that brings in lots of new people; Peter Singer has been doing this for decades, and people like Julia Galef and Max Roser and Kelsey Piper are major assets.

But "being proud of EA and happy to promote it" doesn't have to mean "getting into fights". Total ignorance of EA is a much larger (smaller?) bottleneck to our growth than "misguided opposition that could be reversed with enough debate".

So far, the "official"/"formal" EA approach to criticism has been a mix of "polite acknowledgement as we stay the course", "crushing responses from good writers", and "ignoring it to focus on changing the world". This seems basically fine.

What leads you to believe that the problem of "growth tapering off" is linked to "insufficient ruthlessness" rather than "insufficient cheerful promotion without reference to critics"?

  • This need not be about ruthlessness directed right at your interlocutor, but rather towards a distant or ill-specified other.
  • I think it would be uncontroversial that a better approach is not to present yourself as authoritative, but instead to present EA scholarship and consensus as a general authority, and to demand that it be recognized, engaged with, cited and so on.
  • Ruthless content drives higher exposure and awareness in the very first place.
  • There seems to be an inadequate rate of people sticking around after they are first exposed to EA; consider, for instance, the high school awareness project.
  • Also, there seems to be a shortage of new people who will gather other new people. When you just present the nice message, you get a wave of people who may follow EA in their own right but don't go out of their way to continue pushing it further, because it was presented to them merely as part of their worldview rather than as part of their identity. (Consider whether the occasionally popular phrase "aspiring Effective Altruist" obstructs one from having a real EA identity.) How much movement growth is being done by people who joined in the last few years compared to the early core?

For a more extreme hypothesis, Ariel Conn at FLI has voiced the omnipresent Western fear of resurgent ethnic cleansing, citing the ease of facial recognition of people's race - but has that ever been the main obstacle to genocide? Moreover, the idea of thoughtless machines dutifully carrying out a campaign of mass murder takes a rather lopsided view of the history of ethnic cleansing and genocide, in which the real death and suffering has less often been mitigated by the presence of humans in the loop than it has been caused or exacerbated by human passions, grievances, limitations, and incompetence.

I am not a historian, but during the Nazi regime, the Netherlands had among the highest percentages of Jews killed in all of Western Europe. I remember historians blaming this on the Dutch having thorough records of who the Jews were and where they lived. Access to information is definitely a big factor in how successful a genocidal regime can be.

The worry is not so much about killer robots enacting a mass murder campaign. The worry is that humans will use facial recognition algorithms to help state-sanctioned ethnic cleansing. This is not a speculative worry. There are a lot of papers on Uyghur facial recognition.

But who is talking about banning facial recognition itself? It is already too widespread and easy to replicate.

Just in the past few weeks: San Francisco, Oakland and Cambridge.

Okay, very well then. But if a polity wanted to do something really bad like ethnic cleansing, they would just allow facial recognition again, and get it easily from elsewhere. If a polity is liberal and free enough to keep facial recognition banned then they will not tolerate ethnic cleansing in the first place.

It's like the Weimar Republic passing a law forbidding the use of Jewish Star armbands. Could provide a bit of beneficial inertia and norms, but not much besides that.

As per my initial comment, I'd compare it to pre-WWII Netherlands banning government registration of religion. It could have saved tens of thousands of people from deportation and murder.

OK, sounds like the biggest issue is not the recognition algorithm itself (can be replicated or bought quickly) but the acquisition of databases of people's identities (takes time and maybe consent earlier on). They can definitely come together, but otherwise, consider the possibilities (a) a city only uses face recognition for narrow cases like comparing video footage to a known suspect while not being able to do face-rec for the general population, and (b) a city has profiles and the ability to identify all its citizens for some other purpose but just doesn't have the recognition algorithms (yet).

It seems like a big distinction between the two lies in how quickly they could be rolled out. A pre-WWII database of religion would have taken a long time to create, so pre-emptively not creating one significantly inhibited the Germans, while the US already had the census data so could intern the Japanese. But it doesn't seem likely that not using facial recognition now would make it significantly harder to use later.

I found this to be thought-provoking and I'm glad you posted it. With that in mind, this list of points will skew a bit critical, as I'm more interested to see responses in cases where I disagree.

Diet change comment:

  • I haven't seen much general EA advocacy for going veg*n, even from organizations focused on animal welfare (has a single person tried to make this argument on the Forum since the new version was launched?).
    • Instead, I think that most veg*n people in EA are doing so out of a personal desire to avoid action they see as morally wrong, rather than because they have overinflated estimates of how much good they are accomplishing. I made a poll to test this assumption.
    • Anecdotally, I basically agree with your numerical argument. I eat a lot of cheese, some other dairy products, and occasionally meat (maybe once or twice a month, mostly when I'm a guest at the house of a non-vegetarian cook, so that they don't have to worry about accommodating me). But I still eat less meat than I used to (by a factor of at least ten), for a variety of EA-ish reasons:
      • I feel worse about minor immoral action than I used to.
      • I'm surrounded by people who have much stronger views about animal suffering than I do and who eat veg*n diets.
      • I enjoy supporting (read: taste-testing) meat alternatives.
      • I think there's a chance that Future Me will someday look back at Past Me with Future Ethics Glasses and be annoyed about the meat I was consuming.

it's pretty easy for me to minimize or avoid unhealthy foods such as ... fortified cereal

Sorry for the tangent to the main point of the post, but is fortified cereal bad? I had assumed that public health authorities + food companies were adding useful nutrients that most people's diets lacked.


To be sure, it is better than unfortified cereal (ceteris paribus), but fortified cereals usually have a lot of refined grains + added sugar.

Methodology comment:

  • I've been saying this in various comments for a long time, and I was glad to see the point laid out here in more detail.
    • My comments often look like: "When you say that 'EA should do X', which people and organizations in EA are you referring to? What should we do more/less of in order to do less/more of X? What are cases where X would clearly be useful?"
  • I'm a big fan of the two-paper rule, and will try to remember to apply it when I respond to methodology-driven posts in the future.

Regarding this claim:

EA has made exactly one major methodological step forward since its beginnings, which was identifying the optimizer's curse about eight years ago, something which had the benefit of a mathematical proof.

I appreciate that you went on to qualify this statement, but I'd still have appreciated some more justification. Namely, what are some popular ideas that many people thought were a step forward, but that you believe were not?

If methodological ideas generally haven't been popular, EA wouldn't be emphasizing methodology; if they have been popular, I'd be curious to see any other writing you've done on reasons you don't think they helped. (I realize that would be a lot of work, and it may not be a good use of your time to satisfy my curiosity.)

When I look at the top ~50 Forum posts of all time (sorted by karma), I only see one that is about methodology, and it's not as much prescriptive as it is descriptive ("EA is biased towards some methodologies, other methodologies exist, but I'm not actively recommending any particular alternatives"). Almost all the posts are about object-level research or community work, at least as far as I understand the term "object-level".

I can only think of a few cases when established EA orgs/researchers explicitly recommended semi-novel approaches to methodology, and I'm not sure whether my examples (cluster thinking, epistemic modesty) even count. People who recommend, say, using anthropological methods in EA generally haven't gotten much attention (as far as I can recall).

I am also thinking of how there has been more back-and-forth about the optimizer's curse, with people saying it needs to be taken more seriously, etc.

I don't think that the prescriptive vs descriptive distinction really changes things; descriptive philosophizing about methodology is arguably not as good as just telling EAs what to do differently and why.

I grant that #3 on this list is the rarest out of the 4. The established EA groups are generally doing fine here AFAIK. There is a CSER writeup on methodology here which is perfectly good: https://www.cser.ac.uk/resources/probabilities-methodologies-and-evidence-base-existential-risk-assessments-cccr2018/ - but it's about a specific domain that they know, rather than EA stuff in general.

On your final point: I've often been torn on the question of "how big should EA get?" (cf. Buck Shlegeris' point about EA staying 'small and weird'). For what it's worth, I asked Peter Singer this and he emphatically said we should be trying to grow the movement as much as possible.

Relatedly, I often notice that most EAs are media-shy. I can recall a handful of occasions where an EA (individual or org) had the chance to speak with the press and declined for fear of a negative outcome. Maybe it's time to embrace the limelight?

I can't speak for any individual, but being careful in how one engages with the media is prudent. Journalists often have a larger story they are trying to tell over the course of multiple articles and they are actively cognitively biased towards figuring out how what you're saying confirms and fits in with that story (or goes against it such that you are now Bad because you're not with whatever force for Good is motivating their narrative). This isn't just an idle worry either: I've talked to multiple journalists and they've independently told me as much straight out, e.g. "I'm trying to tell a story, so I'm only interested if you can tell me something that is about that story".

Keeping quiet is probably a good idea unless you have media training so you know how to interact with journalists. Otherwise you function like a random noise generator that might accidentally generate noise that confirms what the journalist wanted to believe anyway and if you don't endorse whatever the journalist believes you've just done something that works against your own interests and you probably didn't even realize it!


[Note: I’m a staff member at CEA]

I have been thinking a lot about this exact issue lately and agree. I think that as EA is becoming more well-known in some circles, it’s a good time to consider if — at a community level — EA might benefit from courting positive press coverage. I appreciate the concern about this. I also think that for those of us without media training (myself included), erring on the side of caution is wise, so being media-shy by default makes sense.

I think that whether or not the community as a whole or EA orgs should be more proactive about media coverage is a good question that we should spend time thinking about. The balance of risks and rewards there is an open question.

At an individual level though, I feel like I’ve gotten a lot of clarity recently on best practices and can give a solid recommendation that aligns with Gordon’s advice here.

For the past several months, I’ve sought to get a better handle on the media landscape, and I’ve been speaking with journalists, media advisors, and PR-type folks. Most experts I’ve spoken to (including journalists and former journalists) converge on this advice: For any individual community member or professional (in any movement, organization, etc), it is very unwise to accept media engagements unless you’ve had media training and practice.

I’m now of the mind that interview skills are skills like any other, which need to be learned and practiced. Some of us may find them easier to pick up or more enjoyable than others, but very few of us should expect to be good at interviews without preparation. Training, practice, and feedback can help someone figure out their skills and comfort level, and then make informed decisions if and when media inquiries come up.

To add on to Gordon’s good advice for those interested, here is a quick summary of what I’ve learned about the knowledge and skills required for media engagements:

  • General understanding of a journalist’s role, an interviewee’s role, and journalistic ethics (what they typically will and will not do; what you can and cannot ask or expect when participating in a story)
  • An understanding of the story’s particular angle and where you do or don’t fit
  • Researching the piece and the journalist’s credibility in advance, so that you can…
    • evaluate and choose opportunities where your ideas are more likely to be understood or represented accurately versus opportunities where you’re more likely to be misrepresented; and
    • predict the kinds of questions you’re likely to be asked so that you can practice meaningful responses. (Even simple questions like “what is EA?” can be surprisingly hard to answer briefly and well).
  • Conveying key ideas in a clear, succinct way so that the most important things you want to say are more likely to be what is reported
    • This includes the tricky business of predicting the ways in which certain ideas might be misunderstood by a variety of audiences and practicing how to convey points in a way that avoids such misunderstandings
  • Clearly understanding the scope of your own expertise and only speaking about related issues, while referring questions outside your expertise to others

I think having more community members with media training could be useful, but I also think only some people will find it worth their time to do the significant amount of preparation required.

This feels very timely, because several of us at CEA have recently been working on updating our resources for media engagement. In our Advice for talking with journalists guide, we go into more depth about some of the advice we've received. I’d be happy to have people’s feedback on this resource!

This feels very timely, because several of us at CEA have recently been working on updating our resources for media engagement. In our Advice for talking with journalists guide, we go into more depth about some of the advice we've received. I’d be happy to have people’s feedback on this resource!

This seems to be a private document. When I try to follow that link I get a page asking for me to log in to Google Drive with a @centreforeffectivealtruism.org Google account, which I don't have (I'm already logged into Google with two other Google accounts, so those don't seem to give me enough permission to access this document).

Maybe this document is intended to be private right now, but if it's allowed to be accessed outside CEA, it doesn't seem that it currently can be.

Thanks, Gordon; I've fixed the sharing permissions so that this document is public.

In our Advice for talking with journalists guide, we go into more depth about some of the advice we've received.

The Media Training Bible is also good for this.

See On the construction of beacons (a):


Finally, some advice for geeks, founders of subcultures, constructors of beacons. Make your beacon as dim as you can get away with while still transmitting the signal to those who need to see it. Attracting attention is a cost. It is not just a cost to others; it increases the overhead cost you pay, of defending this resource against predatory strategies. If you have more followers, attention, money, than you know how to use right now - then either your beacon budget is unnecessarily high, or you are already being eaten.
However, in the linked post I took the numbers displayed by ACE in 2019, and scaled them back a few times to be conservative, so it would be tough to argue that they are over-optimistic. I also used conservative estimates of climate change charities to offset the climate impacts, and also toyed with using climate change charities to offset animal suffering by using the fungible welfare estimates (I didn't post that part but it's easy to replicate).

With a skeptical prior, multiplying by factors like this might not be enough. A charity could be hundreds of times (or literally any number of times) less cost-effective than its expected value computed without such a prior if the evidence is weak, and if there are negative effects with more robust evidence than the positive ones, these might come to dominate and turn your positive EV negative. From "Why we can’t take expected value estimates literally (even when they’re unbiased)":

I have seen some using the EEV framework who can tell that their estimates seem too optimistic, so they make various “downward adjustments,” multiplying their EEV by apparently ad hoc figures (1%, 10%, 20%). What isn’t clear is whether the size of the adjustment they’re making has the correct relationship to (a) the weakness of the estimate itself (b) the strength of the prior (c) distance of the estimate from the prior. An example of how this approach can go astray can be seen in the “Pascal’s Mugging” analysis above: assigning one’s framework a 99.99% chance of being totally wrong may seem to be amply conservative, but in fact the proper Bayesian adjustment is much larger and leads to a completely different conclusion.

On the other hand, the more direct effects of abstaining from specific animal products rely largely on estimates of elasticities, which are much more robust.
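To illustrate the quoted point about ad hoc discounts versus a proper Bayesian adjustment, here is a toy sketch (my own, not GiveWell's model) using a conjugate normal-normal update with placeholder numbers; everything about the inputs is an assumption chosen for illustration.

```python
# A toy Bayesian adjustment: with a normal prior and normally distributed estimate error,
# the posterior mean shrinks the explicit estimate toward the prior in proportion to how
# noisy the estimate is, which can be a far bigger correction than an ad hoc multiplier.

def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    """Posterior mean for a normal prior and a normal likelihood (conjugate update)."""
    weight_on_estimate = prior_var / (prior_var + estimate_var)
    return prior_mean + weight_on_estimate * (estimate - prior_mean)

# Placeholder numbers: a skeptical prior centred at 1 (arbitrary cost-effectiveness units,
# sd 2) and an explicit estimate of 1000 with a large error sd of 30.
prior_mean, prior_var = 1.0, 4.0
estimate, estimate_var = 1000.0, 900.0

print(posterior_mean(prior_mean, prior_var, estimate, estimate_var))
# ~5.4: the proper adjustment shrinks the naive estimate by a factor of roughly 200,
# whereas an "amply conservative" 99% discount (multiplying 1000 by 1%) would still leave 10.
```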

"I have seen EAs circulate half-baked fears of suicide drones making it easy to murder people (just carry a tennis racket, or allow municipalities to track or ban drone flights if they so choose) or assassinate political leaders (they already speak behind bullet-resistant plexiglass sometimes, this is not a problem)" - Really not that easy. A tennis racket? Not like banning drones stops someone flying a drone from somewhere else. And political leaders sure you can speak behind the glass, but are you going to spend your whole life behind a screen?

Maybe EA should grow more, but I don't think that the issue is that we are "not ruthless enough". Instead I'd argue that meta is currently undervalued, at least in terms of donations.

Yes, the "slaughterbots" video produced by Stuart Russell and FLI presented a dystopian scenario about drones that could be swatted down with tennis rackets. Because the idea is that they would plaster to your head with an explosive.

Not like banning drones stops someone flying a drone from somewhere else.

Yes, but it means that on the rare occasion that you see a drone, you know it's up to no good and then you will readily evade or shoot it down.

And political leaders sure you can speak behind the glass, but are you going to spend your whole life behind a screen?

No... but so what? I don't travel in an armored limousine either. If someone really wants to kill me, they can.

More donations for movement growth: I would tentatively agree.

Anecdote re: ruthlessness:

During my recent undergrad, I was often openly critical of the cost effectiveness of various initiatives being pushed in my community. I think anyone who has been similarly ruthless is probably familiar with the surprising amount of pushback and alienation that comes from doing this. I think I may have convinced some small portion of people. I ended up deciding that I should focus on circumventing defensiveness by proactively promoting what I thought were good ideas and not criticizing other people's stupid ideas, which essentially amounts to being very nice.

I wonder how well a good ruthlessness strategy about public contexts generalizes to private contexts and vice versa.

Is veganism a foot in the door towards effective animal advocacy (EAA) and donation to EAA charities? Maybe it's an easier sell than getting people to donate while remaining omnivores, because it's easier to rationalize indifference to farmed animals if you're still eating them.

Maybe veganism is also closer to a small daily and often public protest than turning off the lights, and as such is more likely to lead to further action later than be used as an excuse to accomplish less overall.

Of course, this doesn't mean we should push for EAs to go vegan. However, if we want the support (e.g. donations) of the wider animal protection movement, it might be better to respect their norms and go veg, especially or only if you work at an EA or EAA org or are fairly prominent in the movement. (And, the norm itself against unnecessary harm is probably actually valuable to promote in the long-term.)

Finally, in trying to promote donating to animal charities face-to-face, will people take you more or less seriously if you aren't yourself vegan? I can see arguments each way. If you're not vegan, then this might reduce their fear of becoming or being perceived as a hypocrite if they donate to animal charities but aren't vegan, so they could be more likely to donate. On the other hand, they might see you as a hypocrite, and feel that if you don't take your views seriously enough to abstain from animal products, then they don't have to take your views seriously either.

Although I think this post says some important things, I downvoted because some conclusions appear to be reached very quickly, without what to my mind is the right level of consideration.

For example, "True, there is moral hazard involved in giving better tools for politicians to commit to bad policies, but on my intuition that seems unlikely to outright outweigh the benefits of success - it would just partially counterbalance them." My intuition says the opposite of this. I don't think it's at all clear (whether increasing the capability of the U.S. military is a good or bad thing).

I agree that object-level progress is to be preferred over meta-level progress on methodology.

Here's some support for that claim which I didn't write out.

There was a hypothesis called "risk homeostasis" where people always accept the same level of risk. E.g. it doesn't matter that you give people seatbelts, because they will drive faster and faster until the probability of an accident is the same. This turned out to be wrong; for instance, people did drive faster, but not so much faster as to cancel out (let alone exceed) the safety benefits. The idea of moral hazard from victory leading to too many extra wars strikes me as very similar to this. It's a superficially attractive story that allows one to simplify the world and not have to think about complex tradeoffs as much. In both cases you are taking another agent and oversimplifying their motivations. The driver - just has a fixed risk constraint, and beyond that wants nothing but speed. The state - just wants to avoid bleeding too much, and beyond that threshold it wants nothing but foreign influence. But the driver has a complex utility function, or maybe a more inconsistent set of goals, about the relative value of more safety vs less safety, more speed vs less speed; therefore, when you give her some new capacities, she isn't going to spend all of them on going faster. She'll spend some on going faster, then some on being safer.

Likewise the state does not want to spend too much money, does not want to lose its allies and influence, does not want to face internal political turmoil, etc. When you give the state more capacities, it spends some of it on increasing bad conquests, but also spends some of it on winning good wars, on saving money, on stabilizing its domestic politics, and so on. The benefits of improved weaponry for the state are fungible, as it can e.g. spend less on the military while obtaining a comparable level of security.

Security dilemmas throw a wrench into this picture, because what improves security for one state harms the security of another. However in the ultimate theoretical case I feel that this just means that improvements in weaponry have neutral impact. Then in the real world, where some US goals are more positive sum in nature, the impacts of better weapons will be better than neutral.

Thanks for the response. That theory seems interesting and reasonable, but to my mind it doesn't constitute strong evidence for the claim. The claim is about a very complex system (international politics) and requires a huge weight of evidence.

I think we may be starting from different positions: if I imagine believing that the U.S. military is basically a force for good in the world, what you're saying sounds more intuitively appealing. However, I do not believe (nor disbelieve) this.

Well, I'm not trying to convince everyone that society needs a looser approach to AI. Just that this activism is dubious, unclear, plausibly harmful etc.

This post suggested the rather alarming idea that EA's growth is petering out in a sort of logistic curve.

Is this the right link? I don't see that claim in the post, but maybe I'm missing it.


There's an incorrect link in this sentence:

This post suggested the rather alarming idea that EA's growth is petering out in a sort of logistic curve.

The link goes to Noah Smith's blog post advocating the two paper rule.

(I'm not disagreeing with your overall point about the emphasis on the vegan diet)

You can of course supplement, but at the cost of extra time and money - and that's assuming that you remember to supplement. For some people who are simply bad at keeping habits - me, at least - supplementing for an important nutrient just isn't a reliable option; I can set my mind to do it but I predictably fail to keep up with it.

One way to make this easier could be to keep your supplements next to your toothbrush, and take them around the first time you brush your teeth in a day.

I actually have most of my supplements (capsules/pills) on my desk in front of or next to my laptop. I also keep my toothbrush and toothpaste next to my desk in my room.

I would usually put creatine powder in my breakfast, but I've been eating breakfast at work more often lately, so I haven't been consistent. Switching to capsules/pills would probably be a good idea.

I think you could keep your supplements under $2 a day. Some of these supplements you might want to take anyway, veg*n or not. So I don't think you'd necessarily be spending more on a vegan diet than an omnivorous one, if you're very concerned with cost, since plant proteins and fats are often cheaper than animal products. If you're not that concerned with cost in the first place, then you don't need to be that concerned with the cost of supplements.

There's a lot that we don't understand, including chemicals that may play a valuable health role but haven't been properly identified as such. Therefore, in the absence of clear guidance it's wise to defer to eating (a) a wide variety of foods, which is enhanced by including animal products, and (b) foods that we evolved to eat, which has usually included at least a small amount of meat.

You could also be bivalvegan/ostrovegan, and you don't need to eat bivalves every day; just use them to fill in any missing unknowns in your diet, so the daily cost can be reduced even if they aren't cheap near you. Bivalves also tend to have relatively low mercury concentrations among sea animals, and some are good sources of iron or omega-3.

Here's a potentially useful meta-analysis of studies on food groups and all-cause mortality, but the weaknesses you've already pointed out still apply, of course. See Table 1, especially, and, of course, the discussions of the limitations and strength of the evidence. They also looked at processed meats separately, but I don't think they looked at unprocessed meats separately.

Another issue with applying this meta-analysis to compare vegan and nonvegan diets, though, is that the average diet with 0 servings of beef probably has chicken in it, and possibly more than the average diet with some beef in it. Or maybe they adjusted for these kinds of effects; I haven't looked at the methodology that closely.

unhealthy foods such as store-bought bread (with so many preservatives, flavorings etc)

Do you think it's better to not eat any store-bought whole grain bread at all? I think there's a lot of research to support their benefits. See also the meta-analysis I already mentioned; even a few servings of refined grains per day were associated with reduced mortality. (Of course, you need to ask what people were eating less of when they ate more refined grains.)

How bad are preservatives and flavourings?

On being ruthless, do you think we should focus on framing EA as a moral obligation instead of a mere opportunity? What about using a little shaming, like this? I think the existence of the Giving Pledge with its prominent members, and the fact that most people aren't rich (although people in the developed world are in relative terms) could prevent this light shaming from backfiring too much.

I've long preferred expressing EA as a moral obligation and support the main idea of that article.

On point 4, I wonder if more EAs should use Twitter. There are certainly many options to do more "ruthless" communication there, and it might be a good way to spread and popularize ideas. In any case it's a pretty concrete example of where fidelity vs. popularity and niceness vs. aggressive promotion trade off.

Keep in mind that Twitter users are a non-representative sample of the population... Please don't accept kbog's proposed deal with the devil in order to become popular in Twitter's malign memetic ecosystem.

Absolutely, EAs shouldn't be toxic, inaccurate, or uncharitable on Twitter or anywhere else. But I've seen a few examples of people effectively communicating about EA issues on Twitter, such as Julia Galef and Kelsey Piper, at a level of fidelity and niceness far above the average for that website. On the other hand they are briefer, more flippant, and spend more time responding to critics outside the community than they would on other platforms.

I've recently started experimenting with that, I think it's good. And Twitter really is not as bad a website as people often think.

Yep, though I think it takes a while to learn how to tweet, whom to follow, and whom to tweet at before you can get a consistently good experience on Twitter and avoid the nastiness and misunderstandings it's infamous for.

There's a bit of an extended universe of Vox writers, economists, and "neoliberals" that are interested in EA and sometimes tweet about it, and I think it would be potentially valuable to add some people who are more knowledgeable about EA into the mix.
