Quick takes

Marcus Daniell appreciation note

@Marcus Daniell, cofounder of High Impact Athletes, came back from knee surgery and is donating half of his prize money this year. He projects raising $100,000. Through a partnership with Momentum, people can pledge to donate for each point he gets; he has raised $28,000 through this so far. It's cool to see this, and I'm wishing him luck for his final year of professional play!
harfe · 19h
FHI shut down yesterday: https://www.futureofhumanityinstitute.org/
Why are April Fools jokes still on the front page? On April 1st, you expect to see April Fools' posts and know to be extra cautious when reading strange things online. However, April 1st was 13 days ago, and two April Fools posts are still on the front page. I think they should be clearly labelled as April Fools jokes so people can more easily differentiate EA weird stuff from EA weird stuff that's a joke. Sure, if you check the details you'll see that things don't add up, but we all know most people just read the title or first few paragraphs.
I am not confident that another FTX-level crisis is less likely to happen, other than that we might all say "oh this feels a bit like FTX".

Changes:

  • Board swaps. Yeah maybe good, though many of the people who left were very experienced. And it's not clear whether there are due diligence people (which seems to be what was missing).
  • Orgs being spun out of EV and EV being shuttered. I mean, maybe good, though it feels like it's swung too far. Many mature orgs should run on their own, but small orgs do have many replicable features.
  • More talking about honesty. Not really sure this was the problem. The issue wasn't the median EA, it was in the tails. Are the tails of EA more honest? Hard to say.
  • We have now had a big crisis, so it's less costly to say "this might be like that big crisis". Though notably this might also be too cheap - we could flinch away from doing ambitious things.
  • Large orgs seem slightly more beholden to comms/legal to avoid saying or doing the wrong thing.
  • OpenPhil is hiring more internally.

Non-changes:

  • Still very centralised. I'm pretty pro-elite, so I'm not sure this is a problem in and of itself, though I have come to think that elites in general are less competent than I thought before (see FTX and the OpenAI crisis).
  • Little discussion of why or how the affiliation with SBF happened despite many well-connected EAs having a low opinion of him.
  • Little discussion of what led us to ignore the base rate of scamminess in crypto and how we'll avoid that in future.
The TV show Loot, in Season 2 Episode 1, introduces an SBF-type character named Noah Hope DeVore, a billionaire wunderkind who invents "analytic altruism", which uses an algorithm to determine "the most statistically optimal ways" of saving lives and naturally comes up with malaria nets. However, Noah is later arrested by the FBI for wire fraud and various other financial offenses.


Recent discussion

Summary

  1. Many views, including even some person-affecting views, endorse the repugnant conclusion (and very repugnant conclusion) when set up as a choice between three options, with a benign addition option.
  2. Many consequentialist(-ish) views, including many person-affecting
...
MichaelStJules · 8h
Then, I think there are ways to interpret Dasgupta's view as compatible with "ethics being about affecting persons", step by step:

  1. Step 1 rules out options based on pairwise comparisons within the same populations, or same number of people. Because we never compare existence to nonexistence — we only compare the same people or with the same number like in nonidentity — at this step, this step is arguably about affecting persons.
  2. Step 2 is just necessitarianism on the remaining options. Definitely about affecting persons.

These other views also seem compatible with "ethics being about affecting persons":

  1. The view that makes (wide or narrow) necessitarian utilitarian comparisons pairwise while ignoring alternatives, so it gives A<A+, A+<Z, Z<A, a cycle.
  2. Actualism
  3. The procreation asymmetry

Anyway, I feel like we're nitpicking here about what deserves the label "person-affecting" or "being about affecting persons".
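The two-step procedure described above can be sketched in code. The welfare numbers and population setup below are illustrative assumptions (a standard A / A+ / Z repugnant-conclusion setup), not figures from this thread; the sketch just shows how step 1's same-number comparisons and step 2's necessitarianism compose:

```python
# Illustrative sketch of the two-step (Dasgupta-style) procedure.
# Welfare numbers are made up for illustration.
# Each option maps person id -> welfare level.
A = {"p1": 100}                      # small population, high welfare
Aplus = {"p1": 100, "p2": 50}        # A plus an extra person at lower welfare
Z = {"p1": 80, "p2": 80}             # same size as A+, higher total welfare
options = {"A": A, "A+": Aplus, "Z": Z}

def step1(options):
    """Rule out any option whose total welfare is beaten by another
    option with the same population size (same-number comparisons only,
    so existence is never compared with nonexistence)."""
    survivors = dict(options)
    for n1, o1 in options.items():
        for n2, o2 in options.items():
            if n1 != n2 and len(o1) == len(o2):
                if sum(o1.values()) < sum(o2.values()):
                    survivors.pop(n1, None)
    return survivors

def step2(options):
    """Necessitarianism on the survivors: rank by the welfare of the
    people who exist in every remaining option."""
    necessary = set.intersection(*(set(o) for o in options.values()))
    return max(options, key=lambda name: sum(options[name][p] for p in necessary))

remaining = step1(options)  # A+ is ruled out by Z (same size, lower total)
print(step2(remaining))     # -> A (the only necessary person, p1, fares best in A)
```

With these numbers, step 1 eliminates A+ (its total of 150 loses the same-size comparison against Z's 160), and step 2 then picks A over Z because the one person common to both is better off in A — matching the claim that each step only ever compares counterparts.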
Kaspar Brandner · 5h
I wouldn't agree on the first point, because making Dasgupta's step 1 the "step 1" is, as far as I can tell, not justified by any basic principles. Ruling out Z first seems more plausible, as Z negatively affects the present people, even quite strongly so compared to A and A+. Ruling out A+ is only motivated by an arbitrary-seeming decision to compare just A+ and Z first, merely because they have the same population size (...so what?). The fact that non-existence is not involved here (a comparison to A) is just a result of that decision, not of there really being just two options.

Alternatively there is the regret argument: that we would "realize", after choosing A+, that we made a mistake. But that intuition doesn't seem based on a strong principle either. (It could also be misleading because we perhaps don't tend to imagine A+ as locked in.)

I agree though that the classification "person-affecting" alone probably doesn't capture a lot of the potential intricacies of various proposals.

We should separate whether the view is well-motivated from whether it's compatible with "ethics being about affecting persons". It's based only on comparisons between counterparts, never between existence and nonexistence. That seems compatible with "ethics being about affecting persons".

We should also separate plausibility from whether it would follow on stricter interpretations of "ethics being about affecting persons". An even stricter interpretation would also tell us to give less weight to or ignore nonidentity differences using essentially the same a...

Summary

  • In this post, I hope to inspire other Effective Altruists to focus more on donation and commiserate with those who have been disappointed in their ability to get an altruistic job.
  • First, I argue that the impact of having a job that helps others is complicated. In
...

Thanks for writing this, Elijah. I agree that it’s really difficult to get an “EA job” (it took me five years). I wish this felt more normalized, and that there was better scoped advice on what EA jobseekers should do. I wrote about this last year and included a section on ways to contribute directly to EA projects even without an EA job. I'd also recommend Aaron Gertler's post on recovering from EA job rejection, probably my favorite ever EA Forum post.

On Aaron Bergman's comment about finding a higher paying role, certain tipped positions can be surprisin...

André Kirschner · 10h
Thank you for sharing your story! I have been working for 5-6 years since graduating. Now I am getting more and more into EA and have tried to find a meaningful job. It turns out that changing into a really meaningful position at another company is really hard, maybe comparable to your situation. Since I have specific experience now, and most meaningful companies have specific experience requirements (of course different from the ones I have), I have already been rejected in the first round a few times. So, rather than changing jobs, I am now trying to win over my bosses so that I can work on impactful things. Donating is at least something that gives me the feeling of having an impact. When reading your text I remembered the career capital chapter in the 80,000 Hours book. Maybe that helps you to value smaller impact now as well: since we have 80,000 hours, you don't have to have an extraordinary impact now and live a perfectly frugal life, as in later stages of your career you will have a much higher impact. And knowing that you are preparing for that might get you through this situation in a better mood? All the best!!
Sara Carrillo- PE · 10h
Is it harder to find an EA job if you are from LATAM, considering there are more EA opportunities in the USA and Europe? I'm starting my search for EA jobs as a Project Management Professional. I'll try it!

People around me are very interested in AI taking over the world, so a big question is under what circumstances a system might be able to do that—what kind of capabilities could elevate an entity above the melange of inter-agent conflict and into solipsistic hegemony?

We theorize about future AI systems hiding their motives until they are in a position to take over the world, so they don’t get noticed and shut down beforehand.

But humans and their institutions aren’t very fixed. They might (arguably) have the same deep human values over time and space. But surface-level, sometimes they like little moustaches and the opera and delicate etiquette and sometimes they like ecstatic rock’n’roll re-negotiations of social reality. Sometimes they want big communal houses with their extended kin, and sometimes quiet condos. Eleven children or cushions that look like cats. The same person born in different...


This was originally posted on Nathaniel's and Nuno's substacks (Pending Survival and Forecasting Newsletter, respectively). Subscribe here and here!

Discussion is also occurring on LessWrong here (couldn't link the posts properly for technical reasons).

Introduction

When the...


I think the main reason that EA focuses relatively little effort on climate change is that so much money is going to it from outside of EA. So in order to be cost effective, you have to find very leveraged interventions, such as targeting policy, or addressing extreme versions of climate change, particularly resilience, e.g. ALLFED (disclosure, I'm a co-founder).

Carl Robichaud mentioned in his EAGxVirtual talk that the nuclear risk space is funding constrained. Dylan Matthews has also written about this at Vox.

There also seems to be a consensus that nuclear risk is higher than it has been in the recent past - with the Russia/Ukraine...


I would suggest the Back from the Brink campaign in the United States (www.preventnuclearwar.org) or the International Campaign to Abolish Nuclear Weapons (https://www.icanw.org/) 

Both organizations are bringing a grassroots advocacy approach to push for multilateral efforts to prevent nuclear war. Grassroots advocacy is the most critically underfunded sector in the nuclear security space. 

I am not confident that another FTX level crisis is less likely to happen, other than that we might all say "oh this feels a bit like FTX".

  • Little discussion of why or how the affiliation with SBF happened despite many well connected EAs having a low opinion of him
  • Little discussion of what led us to ignore the base rate of scamminess in crypto and how we'll avoid that in future

For both of these comments, I want a more explicit sense of what the alternative was. Many well-connected EAs had a low opinion of Sam. Some had a high opinion. Should we have stopped the high-opinion ones from affiliating with him? By what means? Equally, suppose he finds skepticism from (say) Will et al, instead of a warm welcome. He probably still starts the FTX future fund, and probably still tries to make a bunch of people regranters. He probably still talks up EA in public. What would it have taken to prevent any of the resultant harms?

Likewise, what does not ignoring the base rate of scamminess in crypto actually look like? Refusing to take any money made through crypto? Should we be shunning e.g. Vitalik Buterin now, or any of the community donors who made money speculating?

For both of these comments, I want a more explicit sense of what the alternative was.

Not a complete answer, but I would have expected communication and advice for FTXFF grantees to have been different. From many well connected EAs having a low opinion of him, we can imagine that grantees might have been urged to properly set up corporations, not count their chickens before they hatched, properly document everything and assume a lower-trust environment more generally, etc. From not ignoring the base rate of scamminess in crypto, you'd expect to have seen stronger and more developed contingency planning (remembering that crypto firms can and do collapse in the wake of scams not of their own doing!), more decisions to build more organizational reserves rather than immediately ramping up spending, etc.

It's likely that no single answer is "the" sole answer. For instance, it's likely that people believed they could assume that trusted insiders were significantly more ethical than the average person. The insider-trusting bias has bitten any number of organizations and movements (e.g., churches, the Boy Scouts). However, it seems clear from Will's recent podcast that the downsides of being linked to crypto were appreciated at some level. It would take a lot for me to be convinced that all that $$ wasn't a major factor.

Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI’s closure. The abstract and an excerpt follow.


Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse

...

For anyone wondering about the definition of macrostrategy, the EA Forum defines it as follows:

Macrostrategy is the study of how present-day actions may influence the long-term future of humanity.[1]

Macrostrategy as a field of research was pioneered by Nick Bostrom, and it is a core focus area of the Future of Humanity Institute.[2] Some authors distinguish between "foundational" and "applied" global priorities research.[3] On this distinction, macrostrategy may be regarded as closely related to the former. It is concerned with the assessment of

...
DominikPeters · 8h
From Bostrom's website, an updated "My Work" section reads:
Arepo · 8h
That's sad. For anyone interested in why they shut down (I'd thought they had an indefinitely sustainable endowment!), the archived version of their website gives some info:

Super broad question, I know.

I've been going down the rabbit hole of critical psychiatry lately and I'm finding it fascinating. Parts of it seem convincing and anecdotally align with my (admittedly extensive) interactions with the psychiatric system. But the evidence in...

Answer by huw · Apr 18, 2024

G'day Marissa! I'm admittedly not the best-versed in psychiatry specifically, since I've focused more on psychotherapy in the past. My general vibe from reading & research I've done is that (for pharmacotherapy only, can't speak to crisis care):

...

At Giving What We Can, we're hoping to speak to people who are interested in taking the Giving What We Can Pledge at some point, but haven't yet.

We're conducting 45 min calls to understand your journey a bit more, and we'll donate $50 to a charity of your choice on our ...


Oh I thought I responded to this already!

I'd like to say that people often have very good reasons for not pledging, that are sometimes visible to us, and other times not - and no one should feel bad for making the right choice for themselves! 

I do of course think many more people in our community could take the GWWC Pledge, but I wouldn't want people to do that at the expense of them feeling comfortable with making that commitment.

We should respect other people's journeys, lifestyles and values in our pursuits to do good.

And thanks Lizka for sharing your previous post in this thread too! Appreciate you sharing your perspective!