
nonn

263 karma · Joined Feb 2018

Posts (1)

3 · nonn · 2y ago · 1m read

Comments (27)

Very cool!

Random thought: you could include some of Yoshua Bengio's or Geoffrey Hinton's writings/talks on AI risk concerns in week 10 (& could include LeCun as a counterpoint, to get all 3), since they're very well-cited academics & Turing Award winners for deep learning.

I haven't looked through their writings/talks to find the most directly relevant ones, but some examples: https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/ and https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/

My experience is that it's more that group leaders & other students in EA groups might reward poor epistemics in this way.

And that when people are being more casual, it 'fits in' to say AI risk; people won't press for reasons as much in those contexts, but would push back if you said something unusual.

Agree. My experience with senior EAs in the SF Bay was often the opposite: I was pressed to explain why I'm concerned about AI risk & to respond to various counterarguments.

No, though maybe you're using the word "intrinsically" differently? For the (majority) consequentialist part of my moral portfolio: the main intrinsic bad is suffering, and wellbeing (a somewhat broader category) is intrinsically good.

I think any argument about creating people etc. is instrumental: will they or won't they increase wellbeing? They can both potentially contain suffering/wellbeing themselves, and affect the world in ways that affect wellbeing/suffering now & in the future. This includes effects before they are born (e.g. on women's lives). TBH, given your above arguments, I'm confused about the focus on abortion: it seems like you should be just as opposed to people choosing not to have children, and focus instead on encouraging/supporting people having kids.

For now, I think the ~main thing that matters from a total-view longtermist perspective is making it through "the technological precipice", where a permanent loss of sentient life/our values is somewhat likely, so other total-view longtermist arguments arguably flow through effects on this + on influencing the trajectory for good. Abortion access seems good for civilization's trajectory: women can have children when they want & don't have their lives & health derailed; more women involved in the development of powerful technology probably makes those fields more cautious/less rash; there are fewer 'unwanted children' (who probably have worse life outcomes); etc. So abortion access seems good.

Maybe related: in general, when maximizing, I think it's probably best to find the most important 1-3 things, then focus on those. (E.g. for the temperature of my house, focus on the thermostat setting + the outside temperature + insulation quality, and ignore body heat & similarly small factors.)

I don't think near-term population is helpful for long-term population or wellbeing, e.g. >10,000 years from now. A negative effect seems more likely than a positive one imo, especially if the mechanism for trying to increase near-term population is restricting abortion (this is not a random sample of lives!).

I also think it seems bad for the general civilization trajectory (partially norm-damaging, but mostly just the direct effects on women & children), and probably bad for our ability to invest in resilience & be careful with powerful new technology. These seem like the most important effects from a longtermist perspective, so I think abortion restriction is bad from a total-view longtermist perspective.

I guess I did mean aggregate in the 'total' wellbeing sense. I just feel pretty far from neutral about creating people who will live wonderful lives, and also pretty strongly disagree with the belief that restricting abortion will create more total wellbeing in the long run (or the short run, tbh).

For total-view longtermism, I think the most important things are roughly: civilization is on a good trajectory, people are prudent/careful with powerful new technology, the world is lower-conflict, investments are made to improve resilience to large catastrophes, etc. Restricting abortion seems kinda bad for several of those things, and positive for none. So it seems like total-view longtermism, even ignoring all other reasons to think this, says abortion restriction is bad.

I guess part of this is a belief that in the long run, the number of morally-valuable lives & total wellbeing (e.g. in 10 million years) is largely uncorrelated or anti-correlated with near-term world population. (Though I also think restricting abortion is one of the worst ways to go about increasing near-term population, even for those who do think near-term & very-long-term population are pretty positively correlated.)

"abortion is morally wrong is a direct logical extension of a longtermist view that highly values maximizing the number of people on assumption that the average existing persons life will have positive value"

I'm a bit confused by this statement. Is a world where people don't have access to abortion likely to have more aggregate well-being in the very long run? Naively, it feels like the opposite to me.

To be clear, I don't think it's worth discussing abortion at length, especially considering bruce's comment. But I really don't think the number of people currently existing says much about well-being in the very long run (they're arguably negatively correlated). And even if you wanted to increase near-term population, reducing access to abortion is a very bad way to do that, with lots of negative knock-on effects.

Agreed, that was a weird example.

Other people around the group (e.g. many of the non-Stanford people who sometimes came by & worked at tech companies) are better examples. Several weren't obviously promising at the time, but are doing good work now.

typo, imo. (in my opinion)


I'm somewhat more pessimistic that disillusioned people have useful critiques, at least on average. EA asks people to swallow a hard pill: "set X is probably the most important stuff, by a lot", where X doesn't include that many things. I think this is correct (i.e. the set will be somewhat small), but it means that a lot of people's talents & interests probably aren't as [relatively] valuable as they previously assumed.

That sucks, and it creates some obvious & strong motivated reasons to lean into not-great criticisms of set X. I don't even think this is conscious, just a vague 'feels like this is wrong' when people say [thing I'm not the best at/dislike] is the most important. This is not to say set X doesn't have major problems.

They might more often have useful community critiques imo, e.g. more likely to notice social blind spots that community leaders are oblivious to.

Also, I am concerned about motivated reasoning within the community, but don't really know how to correct for it. I expect the most-upvoted critiques will be the easy-to-understand, plausible-sounding ones that assuage the problem above or soothe social feelings, not the correct ones about our core priorities. See some points here: https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism


I'd add a much more boring cause of disillusionment: social stuff.

It's not all that uncommon for someone to get involved with EA, make a bunch of friends, and then watch the friend group gradually get filtered by who gets accepted to prestigious jobs or does 'more impactful' things in the community's estimation (often genuinely more impactful!).

Then sometimes they just start hanging out with cooler people they meet at their jobs, or get genuinely busy with work, while their old EA friends are left on the periphery (+ the gender imbalance piles relationship dynamics on top). This happens in normal society too, but there seem to be more norms/taboos there that blunt the impact.
