det
238 karma · Joined Jan 2023 · Comments: 17

det · 4d

Speaking as an outsider to the field, here are some impressions:

  • The NY declaration is very short and uses simple language, which makes it a useful tool for communicating with the public. Compare this sentence from the Cambridge declaration:

The neural substrates of emotions do not appear to be confined to cortical structures. In fact, subcortical neural networks aroused during affective states in humans are also critically important for generating emotional behaviors in animals.

  • The Cambridge declaration is over a decade old. Releasing a similar statement is another chance for media attention and indicates that the consensus hasn't shifted in the opposite direction.
  • The NY declaration places a clearer emphasis on "cephalopod mollusks, decapod crustaceans, and insects." 

    A footnote to the Cambridge declaration mentions "decapod crustaceans, cephalopod mollusks, and insects" in a somewhat confusing way: it says there is "very strong evidence" that these animals also "possess the neurological substrates of consciousness," but that they aren't mentioned in the main declaration because there were no presentations on them at the conference where it was signed.

    Insofar as the NY declaration is meant to support shrimp or insect welfare, this seems like a plus.
det · 6d

Feedback on the third episode: also really liked it! Felt different from the first two: less free-wheeling, more clearly useful. (Still much more on the relaxed, informal side than main-feed 80k podcasts.)

Felt very useful to get an inside perspective on what 80k thinks it's doing with career advising. I really appreciated Dwarkesh kicking the tires on the theory of change ("why not focus 100% on the tails?"), as well as the responses.

It wasn't entirely an easy listen. I identify with some common EA tropes: trying to push myself to be more ambitious, even though it doesn't come naturally, and often ending up feeling bad about how non-agentic I am; trying things ex ante to see if I'm in the right tail of the distribution, figuring I'm probably not, and feeling kind of upset and adrift about it.

I personally appreciate that 80k thinks a lot about doing right by people like me. It was somewhat hard to hear Dwarkesh focus so intently on people at the tails, as if the other 99% of us are a rounding error, but I see the case for it and I'm not sure it's completely wrong. (I'm not supposed to be the primary beneficiary of 80k advising / other EA resources. If I voluntarily sign up to try being an ambitious altruist, and later feel bad about not (yet) succeeding, I'm not sure I get to blame anyone except myself.)

det · 6d

Feedback on the first two episodes: I really enjoyed them, and was instantly sold on this series. I felt like I was sitting in on fun people having great conversations. I wasn't really sure what the impact case was for these, but they gave me the feeling I get at the best EA meetups: oh my gosh, these are my people. [1]

(Feedback on third episode in another comment)

  1. ^

    I have some reservations about this: the cultural characteristics that set off the "my people" sense don't seem strongly connected to doing the most good. So while I love finding "my people," it's strange that they make up such a big fraction of EA, both at local meetups and apparently at 80k.

det · 14d

Nick Bostrom's website now lists him as "Principal Researcher, Macrostrategy Research Initiative."

Doesn't seem like they have a website yet.


This seems relevant to any intervention premised on "it's good to reduce the amount of net-negative lives lived."

If factory-farmed chickens have lives that aren't worth living, then one might support an intervention that reduces the number of factory-farmed chickens, even if it doesn't improve the lives of any chickens that do come to exist. (It seems to me this would be the primary effect of boycotts, for instance, although I don't know empirically how true that is.)

I agree that this is irrelevant to interventions that just seek to improve conditions for animals, rather than changing the number of animals that exist. Those seem equally good regardless of where the zero point is.

det · 1mo

I wholeheartedly agree, and think we need to look elsewhere to apply this model.

Donor Lotteries unhealthily exhibit winner-take-all dynamics, centralizing rather than distributing power. If the winner makes a bad decision, then the impact of that money evaporates -- it's a very risky proposition.

A more robust solution would be to proportionally distribute the funds to everyone who joins, based on the amount they put in. This would democratize funding ability throughout the EA ecosystem and lead to a much healthier funding ecosystem.
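
A minimal sketch of the proposed mechanism (contribution amounts are hypothetical):

```python
# Proportional distribution: each participant receives the fraction of the
# pot corresponding to their share of total contributions. Numbers are
# hypothetical.
contributions = {"alice": 100, "bob": 300, "carol": 600}

pot = sum(contributions.values())  # 1000
payouts = {name: pot * (amount / pot) for name, amount in contributions.items()}

print(payouts)  # {'alice': 100.0, 'bob': 300.0, 'carol': 600.0}
# Every participant receives exactly their own contribution back.
```

That is, under proportional distribution each donor simply gets their own money back -- the pooling does nothing at all.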

The concrete suggestions here seem pretty wild, but I think the possible tension between computationalism and shrimp welfare is interesting. I don't think it's crazy to conclude "given x% credence on computationalism (plus these moral implications), I should reduce my prioritization of shrimp welfare by nearly x%."
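
To make the arithmetic explicit, here's a toy expected-value version, assuming (hypothetically) a 30% credence and that the work is worthless conditional on computationalism plus those implications:

```python
# Toy discounting calculation. The 0.30 credence is hypothetical, and I
# assume shrimp welfare work has zero value if computationalism (plus the
# post's moral implications) is true.
credence = 0.30        # x: credence in computationalism + its implications
value_if_true = 0.0    # value of the work if they hold
value_if_false = 1.0   # baseline value otherwise

expected_value = credence * value_if_true + (1 - credence) * value_if_false
print(expected_value)  # 0.7 -> a ~30% reduction, i.e. "nearly x%"
```

The "nearly" does real work here: the reduction equals x% exactly only if the work is completely worthless conditional on computationalism; any residual value makes the discount smaller.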

That said, the moral implications are still quite wild. To paraphrase Parfit, "research in [ancient Egyptian shrimp-keeping practices] cannot be relevant to our decision whether to [donate to SWP today]." The Moral Law keeping a running tally of previously-done computations and giving you a freebie to do a bit of torture if it's already on the list sounds like a reductio.

A hazy guess is that something like "respecting boundaries" is a missing component here? Maybe there is something wrong with messing around with a water computer that's instantiating a mind, because that mind has a right to control its own physical substrate. Seems hard to fit with utilitarianism though.

Thanks for posting, these look super interesting!

I'm hoping to read (and possibly respond to) more, but I ~randomly started with the final article "Saving the World Starts at Home." 

My thoughts on this one are mostly critical: I think it fundamentally misunderstands what EA is about (due to relying too heavily on a single book for its conception of EA), and will not be persuasive to many EAs. But it raises a few interesting critiques of EA prioritization at the end.

Summary

  • The Most Good You Can Do has a list (referred to as "The List") of some prototypical EA projects; roughly: "earn to give, community building, working in government, research, organizing, organ donation."
    • Thesis of the piece: "Building a good home" should be on The List.
  • Some reasons it's good to build a good home: having a refuge (physical and psychological safety), showing hospitality to others, raising a family.
    • I was expecting to see discussion of externalities here; perhaps focusing on how creating a good home can boost effectiveness in other altruistic endeavors, or how there are more spillovers to society than might be expected. The latter shows up a bit, but this mostly discusses benefits to the people who physically enter your home.
  • Traditional EA priorities have been critiqued on the following grounds:
    • Demandingness
    • Motivational obstacles / they're psychologically difficult
    • Epistemic limits: the world is very complicated
    • Ineffectiveness
    • Grift
  • Building a good home is not subject to these criticisms: it's not overly demanding, it's intrinsically motivating (or at least more so than traditional EA interventions), and it clearly produces direct good outcomes without difficult-to-determine n-th order effects.
  • According to Singer, EAs don't need to maximize the good at all times, and don't have to be perfectly impartial. So it's not necessary to discuss whether this is among the most effective interventions in order to argue that this should be an EA priority -- effectively creating some good is enough.
    • (IMO this is simply a misunderstanding of EA, and undermines much of the article.)
  • Why do EAs ignore this issue? Some suggestions:
    • It's not effective enough to count as an EA priority. (The rest of the article is arguing against this point.)
    • Status: It's lower-status than other EA priorities, like donating lots of money to charity or producing interesting research
    • It's less amenable to calculation
    • EAs have a bias toward "direct" rather than "indirect" forms of benevolence
      • (This seems in tension with the point from earlier about how reading to your kid produces clear, direct value, in contrast to the unclear and more-prone-to-backfire approach of donating to Oxfam. I also think EAs are super willing to consider indirect benevolence, but I digress.)
    • Politics: "Building a home" is conservative-coded in the US, and EA is left-leaning.

What I liked best

I think the "status" and "politics" critiques of EA prioritization are useful and probably under-discussed. 

Certain fields (e.g. AI safety research) are often critiqued for being suspiciously interesting / high-status / high-paying, but this makes the case that even donating to GiveWell is a little suspicious in how much status it can buy. (But I think there are likely much more efficient ways to buy status; donating 1% of your income probably buys much more than 1/10 the status you'd get from donating 10%.)
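
One illustrative way to see this (the logarithmic form is my assumption, not from the article): if status grows roughly like the log of the amount donated, small donations buy disproportionately much status.

```python
import math

# Illustrative only: suppose status ~ log(1 + percent of income donated).
def status(pct: float) -> float:
    return math.log(1 + pct)

print(status(1) / status(10))  # ~0.29: donating 1% buys ~29% of the status
                               # of donating 10%, well above the linear 1/10
```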

I also think it's reasonably likely that there are some conservative-coded causes that EAs undervalue for purely political reasons (but I don't have any concrete examples at hand).

Critiques

There are a few fundamental issues with the analysis that keep it from connecting for me.

(this is a bit scattershot; I tried to narrow it down to a few points to prevent this from being 3x longer)

  • It's too anchored on Singer's description of EA in The Most Good You Can Do, rather than the current priorities of the community.
    • A recurring example is "should you work an extra hour at Starbucks to donate $10 to Oxfam, or spend that hour hosting friends or reading a story to your kid?" 
      • Oxfam is not currently a frequently-recommended charity in EA circles (it's not recommended by GWWC, although Singer's org The Life You Can Save does recommend it). 
      • I've never heard "work a low-wage job to give" advocated as a top EA recommendation, so this isn't a strong point of comparison.
  • It doesn't engage with the typical criteria for EA causes (e.g. the ITN framework), and especially fails to engage with on-the-margin thinking.
    • “Are we to believe that effective altruists think that if we had more bad homes, this would not affect how much people care about the global poor or give to charities? Surely not.”
      • The question of how big this impact is, or how much a marginal increase in "good homes" creates a marginal increase in charitable giving (and how that compares to other approaches to increasing donations) is not discussed.
    • "If large numbers of people were regularly giving much of their income to charity and donating their kidneys, these activities would not thereby cease being acts of effective altruism. So, home life cannot be excluded from the List simply because many people already do it."
      • Neglectedness is a key consideration for determining EA priorities: if there were no shortage of kidney donors, the argument for kidney-donation-as-effective-altruism would indeed be much weaker.
  • Rather than arguing directly that "building a good home" has positive externalities on par with the good done by other EA priorities, the main argument seems to be something like "this is technically compatible with the definition of effective altruism in TMGYCD."
    • From the conclusion of section VII: "Assuming home life is an effective way of [creating] great good for the world, then effective altruists should have no complaint about recommending it as one potential expression of effective altruism. ... [Otherwise,] the effective altruist commits to a very demanding view, one they should state and defend."
      • I think this conflates "demandingness" (asking people to sacrifice a lot) with "having a high bar for declaring something an EA intervention." For instance, you can recommend only the top 0.01% of charities, but still only ask people to give 10%.
      • EAs do state and defend the view that there should be a very high bar for what counts as an EA intervention.
det · 3mo

Two more nitpicky points:

hosts and guests of the 80k podcast laughing at the 'wokeness' of this or that when civil rights/feminism are being brought in a conversation

A Google search turned up one instance of a guest discussing wokeness -- Bryan Caplan, on why not to read the news:

(15:45) But the main thing is they’re just giving this overwhelmingly skewed view of the world. And what’s the skew exactly? The obvious one, which I will definitely defend, is an overwhelming left-wing view of the world. Basically, the woke Western view is what you get out of almost all media. Even if you’re reading media in other countries, it’s quite common: the journalists in those other countries are the most Westernised, in the sense of they are part of the woke cult. So there’s that. That’s the one that people complain about the most, and I think those complaints are reasonable.
But long before anyone was using the word “woke,” there’s just a bunch of other big problems with the news. The negativity bias: bad, bad, bad, sad, sad, sad, angry, angry, angry.

This wasn't in the context of civil rights or feminism being discussed, and I couldn't find any other instances where that was the case. Rob doesn't comment on the "woke" bit here one way or another, and doesn't laugh during these paragraphs. So unless there's an example I missed, I think this characterization is incorrect.

posts on LessWrong talking about foetus's sentience without mentioning ONCE reproductive rights

This is probably an example of decoupling vs contextualizing norms clashing, but I don't think I see anything wrong here. Whether or not a fetus is sentient is a question about the world with some correct answer. Reproductive rights also concern fetuses, but don't have any direct bearing on the factual question; they also tend to provoke heated discussion. So separating out the scientific question and discussing it on its own seems fine.
