Vaipan

298 karma · Joined · Working (0-5 years)

Participation
5

  • Completed the In-Depth EA Virtual Program
  • Attended an EA Global conference
  • Attended an EAGx conference
  • Attended more than three meetings with a local EA group
  • Received career coaching from 80,000 Hours

Comments
88

Well, you said it: STEM is what makes the very big difference here. A 'leftwing' STEM student will not have the same priorities at all as a social science student, so the leftwing label is very misleading, no matter how much people like to use it here to claim that EA is leftist.

A STEM student will have much more contempt for protests, and what you conveniently forget to say is that STEM students generally earn much more and come from much more privileged backgrounds. It's all about resources and how they are distributed: these students have much less need to go out in the streets. That makes it easier to look down on protests and to assume they are just noisy and useless.

So my answer still stands and explains why EA is not protest-friendly.

Answer by Vaipan

Protests are usually held by those in dire need of change: minorities, poor people, people whose identity is under attack, and so on. AI risks are overwhelmingly highlighted by rich white male engineers: not those who usually have a reason to go out in the streets and, as Geoffrey says, often the ones who despise those who do. It's easier to mock those who struggle when you don't, and to assume they are making unnecessary noise because you don't feel at all part of their fight.

And now EAs realize that profit is overtaking safety concerns; it took a lot of time! It was painful to read the praise of Altman right up until the board shuffle at OpenAI. People have been protesting for years because greed and the unequal distribution of money make their lives poorer and harder; but now greed creates survival risks that extend even to rich engineers, so they have to do something.

Of course. It is much easier for privileged individuals to relate to the suffering of minds that do not yet exist than to the very real suffering of people and animals today, which forces you to confront your emotions and your uneasiness towards those who have so little when you have so much.

The divide between gender and cause area is obvious (not just in this study but also in my own EA group!). Women in general care much more about GHD and animal welfare, and they dislike fixing technological problems with yet another technology; they want more systemic change. It is hard to deny that some privileged men who benefit from the status quo do not want to change current power dynamics, and prefer to think about future beings who do not yet have a voice in order to feel useful.

Sadly, I have not seen any research combining gender dynamics and longtermist urgency.

I agree. We have to take into account that 80k strongly pushed for careers in AI safety, encouraged field building specifically for AI safety, and that its job board has become increasingly dominated by AI safety job offers. The trend is not likely to be reversed soon.

However, that does not keep people outside of EA from obtaining jobs in the GHD field (which is not just development economics, as someone once wrote); they are simply not counted. And if the movement keeps directing opportunities and funding specifically towards AI safety, we will of course get fewer and fewer GHD people. So it is still impressive, given this concentration of funding, that so many EAs still consider GHD the most pressing cause area.

It is always appalling to see tech lobbying power shut down all the careful work done by safety people.

Yet the article highlights a very fair point: safety people have not succeeded in being clear and convincing enough about the existential risks posed by AI. Yes, it's hard; yes, it involves a lot of speculation. But that is exactly where the impact lies: building a consistent, pragmatic discourse about AI risks that is neither uselessly alarmist nor needlessly vague.

The state of the EA community is a good example of that. I often hear that yes, risks are high, but which risks exactly, and how can they be quantified? Impact measurement is awfully vague when it comes to AI safety (and, to a lesser extent, AI governance).

I wish this were better known and more widely read in the EA community. So far I have not seen any credible objections to these three compelling arguments. Perils or no perils, the arguments stand on their own.

Hey Joseph, 

I am in exactly the same boat: a very specialized path and a lack of financial visibility. I also work for an EA org, which means I chose a pay cut (and the role's funding is time-limited) compared to other jobs that could be safer (consulting, etc.).

But recently I've been thinking that donating is a bit like starting a new sports class or any new habit: if you don't start, you never will (except under ideal conditions, and those rarely arrive!). Accepting a bit of risk to accomplish something I care a lot about makes sense to me, which is why I will start giving soon. There will never be a threshold of financial safety at which I feel completely secure, so waiting will do me no good.

Also, inflation means that all my careful savings are losing value right now, so I realize I would be better off spending part of them now rather than watching their value slowly disappear.

This is only my choice; I just wanted to comment since I am in a similar situation and have come to think differently about it recently. I also want to empathize with your situation. Sometimes I feel bad when I see that some of my colleagues have been giving for ten years, but again, we clearly were not given the same set of circumstances at birth.

Thanks for saying it, though! It feels validating to hear, instead of listening to the internal voice hammering that time is being wasted and that I'm letting everyone and everything down. I might do just that!
