New & upvoted


Posts tagged community

Quick takes

Marcus Daniell appreciation note @Marcus Daniell, cofounder of High Impact Athletes, came back from knee surgery and is donating half of his prize money this year. He projects raising $100,000. Through a partnership with Momentum, people can pledge to donate for each point he gets; he has raised $28,000 through this so far. It's cool to see this, and I'm wishing him luck for his final year of professional play!
harfe · 21h
FHI shut down yesterday: https://www.futureofhumanityinstitute.org/
Why are April Fools jokes still on the front page? On April 1st, you expect to see April Fools' posts and know to be extra cautious when reading strange things online. But April 1st was 13 days ago, and there are still two April Fools posts on the front page. I think they should be clearly labeled as jokes, so people can more easily tell EA weird stuff apart from EA weird stuff that's a joke. Sure, if you check the details you'll see that things don't add up, but we all know most people just read the title or the first few paragraphs.
I am not confident that another FTX-level crisis is less likely to happen, other than that we might all say "oh, this feels a bit like FTX".

Changes:
* Board swaps. Maybe good, though many of the people who left were very experienced. And it's not clear whether there are due diligence people (which seems to be what was missing).
* Orgs being spun out of EV and EV being shuttered. Maybe good, though it feels like it's swung too far: many mature orgs should run on their own, but small orgs do have many replicable features.
* More talking about honesty. Not really sure this was the problem. The issue wasn't the median EA, it was in the tails. Are the tails of EA more honest? Hard to say.
* We have now had a big crisis, so it's less costly to say "this might be like that big crisis". Though notably this might also be too cheap - we could flinch away from doing ambitious things.
* Large orgs seem slightly more beholden to comms/legal to avoid saying or doing the wrong thing.
* OpenPhil is hiring more internally.

Non-changes:
* Still very centralised. I'm pretty pro-elite, so I'm not sure this is a problem in and of itself, though I have come to think that elites in general are less competent than I thought before (see the FTX and OpenAI crises).
* Little discussion of why or how the affiliation with SBF happened despite many well-connected EAs having a low opinion of him.
* Little discussion of what led us to ignore the base rate of scamminess in crypto and how we'll avoid that in future.
The TV show Loot, in Season 2 Episode 1, introduces an SBF-type character named Noah Hope DeVore, a billionaire wunderkind who invents "analytic altruism", which uses an algorithm to determine "the most statistically optimal ways" of saving lives and naturally comes up with malaria nets. However, Noah is later arrested by the FBI for wire fraud and various other financial offenses.

Popular comments

Recent discussion

Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI’s closure. The abstract and an excerpt follow.


Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse

...

I'm awestruck; that is an incredible track record. Thanks for taking the time to write this out.

These are concepts and ideas I regularly use throughout my week and which have significantly shaped my thinking. A deep thanks to everyone who has contributed to FHI, your work certainly had an influence on me.

Chris Leong · 7h
For anyone wondering about the definition of macrostrategy, the EA forum defines it as follows:
DominikPeters · 10h
From Bostrom's website, an updated "My Work" section reads:

People around me are very interested in AI taking over the world, so a big question is under what circumstances a system might be able to do that—what kind of capabilities could elevate an entity above the melange of inter-agent conflict and into solipsistic hegemony?

We...


Nice post!

We might then expect a lot of powerful attempts to change prevailing ‘human’ values, prior to the level of AI capabilities where we might have worried a lot about AI taking over the world. If we care about our values, this could be very bad. 

This seems like a key point to me, and one that is hard to get good evidence on. The red stripes are rather benign, so we are in luck in a world like that. But if the AI values something in a more totalising way (not just satisficing, with a lot of x's and red stripes being enough, but striving to make all hum...


U.S. Secretary of Commerce Gina Raimondo announced today additional members of the executive leadership team of the U.S. AI Safety Institute (AISI), which is housed at the National Institute of Standards and Technology (NIST). Raimondo named Paul Christiano as Head of AI

...

Is this a use case for Reproducible Builds?

huw commented on harfe's quick take 8h ago

FHI shut down yesterday: https://www.futureofhumanityinstitute.org/


Thank you! I framed it as a question for this reason ❤️

Nick Bostrom's website now lists him as "Principal Researcher, Macrostrategy Research Initiative."

Doesn't seem like they have a website yet.

Except they should maximize confusion by calling it the "Macrostrategy Interim Research Initiative" ;)

Summary

  1. Many views, including even some person-affecting views, endorse the repugnant conclusion (and very repugnant conclusion) when set up as a choice between three options, with a benign addition option.
  2. Many consequentialist(-ish) views, including many person-affecting
...
MichaelStJules · 10h
Then, I think there are ways to interpret Dasgupta's view as compatible with "ethics being about affecting persons", step by step:

1. Step 1 rules out options based on pairwise comparisons within the same populations, or same number of people. Because we never compare existence to nonexistence — we only compare the same people, or the same number of people as in nonidentity — this step is arguably about affecting persons.
2. Step 2 is just necessitarianism on the remaining options. Definitely about affecting persons.

These other views also seem compatible with "ethics being about affecting persons":

1. The view that makes (wide or narrow) necessitarian utilitarian comparisons pairwise while ignoring alternatives, so it gives A<A+, A+<Z, Z<A, a cycle.
2. Actualism
3. The procreation asymmetry

Anyway, I feel like we're nitpicking here about what deserves the label "person-affecting" or "being about affecting persons".
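To make the two steps concrete, here is a toy numeric sketch of that procedure. The welfare numbers, the population sizes, and the modelling of "necessary people" as the first entries of each population are my own illustrative assumptions, not anything from the original papers:

```python
# Toy sketch of a Dasgupta-style two-step choice among A, A+, Z.
# Each option is a list of welfare levels, one entry per person.
# Assumed setup: the first 100 people are the "necessary" people
# who exist in every option.

options = {
    "A":  [10] * 100,                 # 100 people at high welfare
    "A+": [9] * 100 + [2] * 900,      # same 100 slightly worse, plus 900 at low welfare
    "Z":  [3] * 1000,                 # 1000 people at barely-positive welfare
}

def step1_same_number(options):
    """Rule out any option beaten on total welfare by another option
    with the same population size (same-number comparisons only)."""
    survivors = {}
    for name, pop in options.items():
        beaten = any(
            len(other) == len(pop) and sum(other) > sum(pop)
            for other_name, other in options.items()
            if other_name != name
        )
        if not beaten:
            survivors[name] = pop
    return survivors

def step2_necessitarian(options, necessary_count):
    """Among survivors, pick the option best for the necessary people,
    modelled here as the first `necessary_count` entries."""
    return max(options, key=lambda name: sum(options[name][:necessary_count]))

survivors = step1_same_number(options)  # A+ is beaten by Z (same size, lower total)
choice = step2_necessitarian(survivors, necessary_count=100)
```

On these made-up numbers, step 1 eliminates A+ (Z has the same population size and a higher total), and step 2 then picks A over Z, since the necessary people fare better in A. Note that existence is never compared to nonexistence at either step.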
Kaspar Brandner · 8h
I wouldn't agree on the first point, because making Dasgupta's step 1 the first step is, as far as I can tell, not justified by any basic principles. Ruling out Z first seems more plausible, as Z negatively affects the present people, even quite strongly so compared to A and A+. Ruling out A+ is only motivated by an arbitrary-seeming decision to compare just A+ and Z first, merely because they have the same population size (...so what?). The fact that non-existence is not involved here (a comparison to A) is just a result of that decision, not of there really being only two options. Alternatively, there is the regret argument: that we would "realize", after choosing A+, that we made a mistake. But that intuition doesn't seem to be based on any strong principle either. (The intuition could also be misleading because we perhaps don't tend to imagine A+ as locked in.) I agree, though, that the classification "person-affecting" alone probably doesn't capture many of the potential intricacies of various proposals.

We should separate whether the view is well-motivated from whether it's compatible with "ethics being about affecting persons". It's based only on comparisons between counterparts, never between existence and nonexistence. That seems compatible with "ethics being about affecting persons".

We should also separate plausibility from whether it would follow on stricter interpretations of "ethics being about affecting persons". An even stricter interpretation would also tell us to give less weight to or ignore nonidentity differences using essentially the same a...

Summary

  • In this post, I hope to inspire other Effective Altruists to focus more on donation and commiserate with those who have been disappointed in their ability to get an altruistic job.
  • First, I argue that the impact of having a job that helps others is complicated. In
...

Thanks for writing this, Elijah. I agree that it’s really difficult to get an “EA job” (it took me five years). I wish this felt more normalized, and that there was better scoped advice on what EA jobseekers should do. I wrote about this last year and included a section on ways to contribute directly to EA projects even without an EA job. I'd also recommend Aaron Gertler's post on recovering from EA job rejection, probably my favorite ever EA Forum post.

On Aaron Bergman's comment about finding a higher paying role, certain tipped positions can be surprisin...

André Kirschner · 12h
Thank you for sharing your story! I have been working for five or six years since graduating. I am now getting more and more into EA and have tried to find a meaningful job. It turns out that moving into a really meaningful position at another company is very hard, maybe comparable to your situation. Since I now have one specific kind of experience, and most impactful organisations have their own specific requirements (different from mine, of course), I have already been rejected in the first round several times. So rather than changing jobs, I am now trying to persuade my bosses to let me work on impactful things. Donating at least gives me the feeling of having an impact. Reading your text reminded me of the career capital chapter in the 80,000 Hours book. Maybe that helps you to value smaller impact now: with 80,000 hours of career ahead, you don't need to have extraordinary impact now or live a perfectly frugal life, since in later stages of your career you will have a much higher impact. And knowing that you are preparing for that might get you through this situation in a better mood? All the best!!
Sara Carrillo- PE · 13h
Is it harder to find an EA job if you are from LATAM? It seems there are more EA opportunities in the USA and Europe. I'm starting my search for EA jobs as a Project Management Professional. I'll give it a try!

This was originally posted on Nathaniel's and Nuno's substacks (Pending Survival and Forecasting Newsletter, respectively). Subscribe here and here!

Discussion is also occurring on LessWrong here (couldn't link the posts properly for technical reasons).

Introduction

When the...


I think the main reason that EA focuses relatively little effort on climate change is that so much money is going to it from outside of EA. So in order to be cost effective, you have to find very leveraged interventions, such as targeting policy, or addressing extreme versions of climate change, particularly resilience, e.g. ALLFED (disclosure, I'm a co-founder).

Carl Robichaud mentioned in his EAGxVirtual talk that the nuclear risk space is funding constrained. Dylan Matthews has also written about this at Vox.

There also seems to be a consensus that nuclear risk is higher than it has been in the recent past - with the Russia/Ukraine...


I would suggest the Back from the Brink campaign in the United States (www.preventnuclearwar.org) or the International Campaign to Abolish Nuclear Weapons (https://www.icanw.org/).

Both organizations are bringing a grassroots advocacy approach to push for multilateral efforts to prevent nuclear war. Grassroots advocacy is the most critically underfunded sector in the nuclear security space.