Ethan Beri

I'm not too concerned about quality - I'm much more worried about time. Actually, I think a lot of the stuff on the EA Forum/LW is really good, and I've learned a lot here. It just seems like an awful lot of people write an awful lot of stuff, and I'm not sure everyone needs to spend so long writing it. That said, I'm not sure how you'd fix this other than with something like the existing quick takes feature.

I've recently been writing a long-form post, and I realised that it's taking a while. I was sort of struck by the thought: is everyone doing this? When I see people talk about spending too much time on the forum (which is rare - I think I've only seen two or three people say this), it's usually to do with doomscrolling or something like that. But might people also be spending a lot of time just writing stuff? I'm not sure of the concrete opportunity cost here, but I'm sure there's some. I'm not especially well-versed in the "meta trap" thing - I think this was a debate people had before I got interested in EA - but it seems like this is one way it could (or does!) happen. Thoughts?

After a quick google, I'm pleasantly surprised by how much this sort of thing seems to happen - thanks for the pointer!

Hey, thanks for your comment! I hadn't really realised the extent to which someone can study full-time while also skilling up in research engineering - that definitely makes me feel more willing to go for PPE. 

Re your third paragraph, I wouldn't have a year off - it'd just be like doing a year of PPE, followed by three years of CS & philosophy. I do have a scholarship, and would do the first year of PPE anyway in case I didn't get into CS & phil.

Either way, your first point does point me more in the direction of just sticking with PPE :)

Hey - I’d be really keen to hear people's thoughts on the following career/education decision I'm considering (esp. people who think about AI a lot):

  • I’m about to start my undergrad studying PPE at Oxford.
  • I’m wondering whether re-applying this year to study CS & philosophy at Oxford (while doing my PPE degree) is a good idea.
    • This doesn’t mean I have to quit PPE or anything. 
    • I’d also have to start CS & philosophy from scratch the following year.
  • My current thinking is that I shouldn’t do this - I think it’s unlikely that I’ll be sufficiently good to, say, get into a top 10 ML PhD or anything, so the technical knowledge that I’d need for the AI-related paths I’m considering (policy, research, journalism, maybe software engineering) is either pretty limited (the first three options) or much easier to self-teach and less reliant on credentials (software engineering).
    • I should also add that I’m currently okay at programming anyway, and plan to develop this alongside my degree regardless of what I do - it seems like a broadly useful skill that’ll also give me more optionality.
    • I do have a suspicion that I’m being self-limiting re the PhD thing - if everyone else is starting from a (relatively) blank slate, maybe I’d be on equal footing? 
      • That said, I also have my suspicions that the PhD route is actually my highest-impact option: I’m stuck between 1) deferring to 80K here, and 2) my other feeling that enacting policy/doing policy research might be higher-impact/more tractable.
      • They’re also obviously super competitive, and seem to only be getting more so.
  • One major uncertainty I have is whether, for things like policy, a PPE degree (or anything politics-y/economics-y) really matters. I’m a UK citizen, and given the record of UK politicians who did PPE at Oxford, it seems like it might?

What mistakes am I making here/am I being too self-limiting? I should add that (from talking to people at Oxford) I’ll have quite a lot of time to study other stuff on the side during my PPE degree. Thanks for reading this, if you’ve got this far! I’d greatly appreciate any comments.

Would an AI governance book that covered the present landscape of gov-related topics (maybe like a book version of the FHI's AI Governance Research Agenda?) be useful?

We're currently at a weird point where there's a lot of interest in AI - news coverage, investment, etc. It feels strange not to be trying to shape the conversation on AI risk more than we are now. I'm well aware that this sort of thing can backfire, and that many people are very wary of "politicising" issues like these, but it might be a good idea.

If it were written by, say, Toby Ord - or anyone sufficiently detached from American left/right politics, with enough prestige, background, and experience writing books like these - I feel like it might be really valuable.

It might also be more approachable than other books covering AI risk, like, say, Superintelligence, and it could feel a little more concrete, since it might cover scenarios that are easier for most people to imagine - scenarios that are more near-term and less "sci-fi".

Thoughts on this? 

This is an older post now, so I have no idea if anyone will see this, but it seems to me that you almost need "pockets" of cultishness in the broader EA movement. This follows on from the final sentence of Geoffrey Miller's comment, about how a lot of impactful movements do seem a bit cultish. Peter Thiel writes really well in Zero to One about why some start-ups seem cultish (and why they should be), and I think I agree with him: a sense of unity/mission-alignment and we're-better-than-everyone-else really can produce extraordinary results. Sometimes these are extraordinarily bad (like Adam Neumann and WeWork) and sometimes extraordinarily good (like Steve Jobs and Apple, Jack Dorsey and Twitter, Bill Gates and Microsoft, etc): certain people can motivate others to tirelessly perform extremely high-value work.

Obviously, cultishness has major downsides. One is external: proponents of the environmental movement, for example, were frequently dismissed as hippies before environmentalism went mainstream, and I wouldn't want the same to happen to EA. The second is internal: as you've noted, problems like sexual harassment and abuse come up, which is obviously extremely traumatising for the victims involved.

One thing I'm saddened by is the relative lack of public awareness of EA and the big EA causes (x-risk, global health, animal welfare, etc), and in a way, solving that problem may require us to become more cultish. There's a kind of optimal stopping problem at play here: once EA becomes more cultish, it's hard to make it less cultish again (at least in the eyes of the non-EA public), but if EA is too non-cultish, I fear we won't be able to spread the word effectively. I'm also afraid that a lot of our community-building efforts aren't very high-leverage, and they often seem to fizzle out, particularly at universities. It's great that we have such a huge collection of smart people working on important stuff, but we might need a few cult-leader personalities (or, to use Ayn Rand's words, "prime movers") to really move the needle.

One thing I've been thinking about - which perhaps flies in the face of what I've just said about spreading awareness - is how to limit reputational tail risk to the EA movement. For example, can we spread awareness of key issues without mentioning EA, and can we get more people to commit their careers to doing good without mentioning EA? In some sense, the blanket term "EA" is a blessing and a curse: a blessing in that it's a very versatile calling card to put in social media bios, introductory blurbs, etc. (e.g. "...I'm really into effective giving..."), but a curse in that it gives hit-piece journalists a single target, creating huge collateral damage if anything seriously bad does happen (like when environmentalists would get called dope-smoking hippies).

As with everything, there's a crucial balancing act here: how can we be rational, but also highly motivated and aligned? I'm curious to hear people's thoughts on this one, because I still feel like EA (as a community) is in its early days and could become so much more.