JacobS

Comments
Scott Alexander has a very interesting response to this post on Reddit: see here.

I wrote something about campaign contributions in federal US elections earlier this year. I could be wrong, but based on my (non-expert) survey of the campaign finance literature, it doesn't seem like donating to political campaigns has a very substantial impact on election outcomes (most of the time). The main takeaway is that spending and success are correlated, but the former doesn't cause the latter. Spending is simply a useful heuristic for the size/traction/etc. of a campaign.

This is very similar to the comment I was going to make.

I admit that it has crossed my mind that even a moderate EA lifestyle is unusually demanding, especially in the long term, and therefore could make finding a long-term partner more difficult. However, I do resonate with that last bit – encouraging inter-EA dating also seems culty and insular to me, and I’d like to think that most of us could integrate EA (as a project and set of values) into our lives in a way that allows us to have other interests, values, friends, and so on (i.e., our lives don’t have to entirely revolve around our EA-esque commitments!). I don’t see why an EA and a non-EA who were romantically compatible couldn’t find comfortable ways to compromise on lifestyle questions – after all, plenty of frugal people find love, and plenty of vegan people find love, so who’s to say a frugal vegan couldn’t find love?

Answer by JacobS

There are two different angles on this question: one is whether the level of response within EA has been appropriate; the second is whether the level of response outside of EA (i.e., by society at large) has been appropriate.

I really don't know about the first one. People outside of EA radically underestimate the scale of ongoing moral catastrophes, but once you take those into account, it's not clear to me how to compare -- as one example -- the suffering produced by factory farming to the suffering produced by a bad response to coronavirus in developed countries (replace "suffering" with "negative effects" or something else if "suffering" isn't the locus of your moral concern). My guess is that many of the best EA causes should still be the primary focus of EAs, as non-EAs are counterfactually unlikely to be motivated to work on them. I do think, however, that at the very beginning of the coronavirus timeline (January to early March), the massive EA focus on coronavirus was by and large appropriate, given how nonchalant most of society seemed to be about it.

Now for the second one -- has the response of society been appropriate? I'm also under-informed here, but my very unoriginal answer is that the response to the coronavirus has been appropriate if you consider it proportional, not to the deadliness of the disease, but to (1) the infectivity of the disease and (2) the corresponding inability of the healthcare system to handle a lot of simultaneous infections. You wrote:

I read the news, too, but there’s something about the level of response to coronavirus given the very moderate deadliness — especially within EA — that just does not add up to me.

And it seems like you're probably not accounting for (1) and (2). It does not seem like a particularly deadly disease (when compared to other, more dangerous pathogens), but it is very easily spread, which is where the worry comes from.

Glad the alienation objection is getting some airtime in EA. I wanted to add two very brief notes in defense of consequentialism:

1) The alienation objection seems generalizable beyond consequentialism to any moral theory which (as you put it) inhibits you from participating in a normative ideal. I am not too familiar with other moral traditions, but I can see how following certain deontological or contractualist theories too far could also result in a kind of alienation. (Virtue ethics may be the safest here!)

2) The normative ideals that deal with interpersonal relationships are, as you mentioned, not the only normative ideals on offer. And while those ideals may deserve special weight, it’s still not clear how to weigh them relative to other normative ideals. Some of these other normative ideals may actually be bolstered by updating more in favor of following some kind of consequentialism. For example, consider the quote below from Peter Railton’s “Alienation, Consequentialism, and the Demands of Morality,” which deeply resonated with me when I first read it:

Individuals who will not or cannot allow questions to arise about what they are doing from a broader perspective are in an important way cut off from their society and the larger world. They may not be troubled by this in any very direct way, but even so they may fail to experience that powerful sense of purpose and meaning that comes from seeing oneself as part of something larger and more enduring than oneself or one's intimate circle. The search for such a sense of purpose and meaning seems to me ubiquitous — surely much of the impulse to religion, to ethnic or regional identification (most strikingly, in the ‘rediscovery’ of such identities), or to institutional loyalty stems from this desire to see ourselves as part of a more general, lasting and worthwhile scheme of things. This presumably is part of what is meant by saying that secularization has led to a sense of meaninglessness, or that the decline of traditional communities and societies has meant an increase in anomie.

This was basically going to be my response -- but to expand on it in a slightly different direction, I would say that, even if we shouldn't be more concerned about biorisk in general, young EAs who are interested in biorisk should update in favor of pursuing a career in (or otherwise getting involved with) the field. My two reasons for this are:

1) There will likely be more opportunities in biorisk (in particular around pandemic preparedness) in the near future.

2) EAs will still be unusually invested, relative to non-EAs, in lower-probability, higher-consequence problems (like GCBRs).

(1) means talented EAs will have more access to potentially high-impact career options in this area, and (2) means EAs may have a higher counterfactual impact than non-EAs by getting involved.

Some low-effort thoughts (I am not an economist so I might be embarrassing myself!):

  • My first inclination is something like "find the average output of the field per unit time, then find the average growth rate of the field, and then calculate the 'extra' output you'd get with a higher growth rate." In other words: (1) what is the field currently doing of value? (2) how much more value would the field produce if it did whatever it's currently doing, faster? (There's a toy sketch of this model after this list.)
    • It would be interesting to see someone do a quantitative analysis of the history of progress in some particular field. However, because so much intellectual progress has been made in the last ~300 years by so few people (relatively speaking), my guess is we might not have enough data in many cases.
  • The more something like the "great man theory" applies to a field (i.e. the more stochastic progress is), the more of a problem you have with this model. [Had an example here, removed it because I no longer think it's appropriate.]
  • With regard to that latter question (also your second set-up), I wonder how reliably we could apply heuristics for determining the EV of particular contributions (i.e. how much value do we usually get from papers in field Y with ~X citations?).
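
To make the first bullet concrete, here is a minimal sketch of that back-of-the-envelope model in Python, assuming field output compounds exponentially at a constant annual growth rate. The function name and every number below are hypothetical, chosen purely for illustration:

```python
# Toy model: the "extra" output from speeding up a field's growth rate.
# Assumes output compounds exponentially; all values are illustrative.

def extra_output(base_output: float, g_base: float, g_boosted: float,
                 years: int) -> float:
    """Cumulative extra output over `years` from raising the field's
    annual growth rate from g_base to g_boosted, starting from
    base_output units of output per year."""
    total = 0.0
    for t in range(1, years + 1):
        # Gap between the boosted and baseline compounding curves in year t.
        total += base_output * ((1 + g_boosted) ** t - (1 + g_base) ** t)
    return total

# A field producing 100 "units" of output per year, growing at 2%/year,
# hypothetically boosted to 2.5%/year for 30 years:
print(extra_output(100, 0.02, 0.025, 30))  # ~360 extra units
```

One thing the sketch makes obvious: the gap between the two curves starts tiny and widens over the horizon, so the estimate is very sensitive to how long you think the speed-up persists (and to the stochasticity concern in the "great man theory" bullet above).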

I dug up a few other places where 80,000 Hours mentions law careers, but I couldn't find any article where they discuss US commercial law for earning-to-give. The other mentions I found include:

  • In their profile on US AI Policy, one of their recommended graduate programs is a "prestigious law JD from Yale or Harvard, or possibly another top 6 law school."
  • In this article for people with existing experience in a particular field, they write: “If you have experience as a lawyer in the U.S. that’s great because it’s among the best ways to get positions in government & policy, which is one of our top priority areas.”
  • It's also mentioned in this article that Congress has a lot of HLS graduates.

You mentioned in the answer to another question that you made the transition from being heavily involved with social justice in undergrad to being more involved with EA in law school. This makes me kind of curious -- what's your EA "origin story"? (How did you find out about effective altruism, how did you first become involved, etc.)
