Arepo

4362 karma · Joined

Sequences (4): EA advertisements · Courting Virgo · EA Gather Town · Improving EA tech work

Comments: 625 · Topic contributions: 17

Very strong agree. The 'cons' in the above list are not clearly negatives from an overall 'make sure we actually do the most good, and don't fall into epistemic echo chambers' perspective.

I don't know if they're making a mistake - my question wasn't meant to be rhetorical.

I take your point about capacity constraints, but if no-one else is stepping up, it seems like it might be worth OP expanding their capacity.

I continue to think the EA movement systematically underestimates the x-riskiness of non-extinction events in general, and nuclear risk in particular, by ignoring much of the increased difficulty of becoming interstellar after civilisational destruction, given our exploitation of key resources. I gave some example scenarios of this here (see also David's results) - not intended to be taken too seriously, but nonetheless incorporating what I think are significant factors that other longtermist work omits. (E.g. in The Precipice, Ord defines x-risk very broadly, but when he comes to estimate the x-riskiness of 'conventional' GCRs, he discusses them almost entirely in terms of their probability of making humans immediately go extinct, which I suspect constitutes a tiny fraction of their EV loss.)
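To make the point concrete, here's a toy decomposition with entirely made-up numbers - none of these probabilities come from the literature, they're just there to show how the accounting works:

```python
# Toy x-risk accounting with made-up numbers. Suppose a nuclear war has a
# 1% chance of causing immediate extinction, but a 50% chance of a
# civilisational collapse short of extinction, from which (given depleted
# easily-accessible resources) we fail to ever become interstellar 20% of
# the time. All three numbers are invented for illustration.

p_extinction = 0.01   # immediate extinction (assumed)
p_collapse = 0.50     # collapse short of extinction (assumed)
p_no_recovery = 0.20  # never regaining an interstellar trajectory, given collapse (assumed)

# Expected loss of the long-term future, in units of 'whole futures lost':
ev_loss_extinction = p_extinction
ev_loss_collapse = p_collapse * p_no_recovery

total = ev_loss_extinction + ev_loss_collapse
print(f"extinction's share of total EV loss: {ev_loss_extinction / total:.0%}")
# -> extinction's share of total EV loss: 9%
```

On numbers like these, an estimate that only counts immediate extinction misses roughly 90% of the expected loss.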

You might be right, but that might also just be a failure of imagination. 20 years ago, I suspect many people would have assumed that by the time we got AI at the level of ChatGPT, it would basically be agentic - as I understand it, the Turing test was basically predicated on that idea, and ChatGPT has pretty much nailed that test while having very few characteristics that we might recognise in an agent. I'm less confident here, but I also have the sense that people would have believed something similar about calculators before they appeared.

I'm not asserting that this is obviously the most likely outcome, just that I don't see convincing reasons for thinking it's extremely unlikely.

It doesn't seem too conceptually murky. You could imagine a super-advanced GPT which, when you ask it a question like 'how do I become world leader?', gives in-depth practical advice, but which never itself outputs anything other than token predictions.

nuclear security is getting almost no funding from the community, and perhaps only ~$30m of philanthropic funding in total.

Do we know why OP aren't doing more here? They could double that amount and it would barely register on their recent annual expenditures.

I'm curious in which direction the disagree-voters are disagreeing - are they expressing the view that quantifying people like this at all is bad, or that if you're going to do it, this is a more effective way to do it?

For what it's worth, I sympathise with the need to make some hard prioritisation decisions - that's what EA is about, after all. Nonetheless, the choice to focus on top universities seems to rest on an insufficiently examined heuristic. After all, the following claim...

top universities are the places with the highest concentrations of people who ultimately have a very large influence on the world.

... is definitely false unless the only way we categorise people is by the university they go to. We can subdivide people into any categories we have data on, and while 'university' provides a convenient starting point for a young impact-focused organisation, it seems like a now-maturing one should aspire to do better.

For a simple example, staying focused on universities, most university departments receive their own individual rankings, which are also publicly available (I think the final score for the university is basically some weighted average of these, possibly with some extra factors thrown in). 

I'm partially motivated to write this comment because I know of someone who opted to go to the university with the better department for their subject, and has recently found out that, because that university has the lower overall ranking, they're formally downgraded by both immigration departments and EA orgs.

So it seems like EA orgs could do better simply by running a one-off project that pooled departmental rankings, and then prioritising based on those. It would probably be a reasonably substantial (but low-skill) one-off cost with a slight ongoing maintenance cost, but if 'finding the best future talent' is so important to EA orgs, it seems worth putting some ongoing effort into doing it better. [ETA - apparently there are some premade rankings that do this!]

This is only one trivial suggestion - I suspect there are many more sources of public data that could be taken into account to make a fairer and (which IMO is equivalent) more accurate prioritisation system. Since, as the OP points out, selecting for the top 100 universities is a form of strong de facto prejudice against people from countries that don't host one, it might also be worth adding some multiplier for people at the top departments in their country - and so on. There might be quantifiable considerations that have nothing to do with university choice.
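To illustrate the sort of thing I mean, here's a minimal sketch of such a scoring model. Every field name, weight, and number here is an assumption invented for illustration - it's not anyone's actual model or dataset:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    university: str
    department: str  # the candidate's subject
    country: str

# Assumed input: publicly available departmental rankings, keyed by
# (university, department) and normalised so 1.0 is the top department
# worldwide. All entries are invented.
DEPT_SCORE: dict[tuple[str, str], float] = {
    ("Uni A", "Philosophy"): 0.95,
    ("Uni B", "Philosophy"): 0.60,
}

# Assumed mapping from university to country - again, invented.
UNI_COUNTRY = {"Uni A": "UK", "Uni B": "Ruritania"}

# Assumed multiplier for being at the strongest department in your own
# country, per the de facto prejudice point above. The value 1.2 is
# arbitrary and would itself need justifying from data.
COUNTRY_TOP_MULTIPLIER = 1.2

def best_in_country(country: str, department: str) -> float:
    """Best departmental score available within a given country."""
    scores = [s for (uni, dept), s in DEPT_SCORE.items()
              if dept == department and UNI_COUNTRY.get(uni) == country]
    return max(scores, default=0.0)

def priority_score(c: Candidate) -> float:
    """Score a candidate by their department's ranking rather than their
    university's overall ranking, with a boost for being at the top
    department in their own country."""
    base = DEPT_SCORE.get((c.university, c.department), 0.0)
    if base > 0 and base == best_in_country(c.country, c.department):
        base = min(base * COUNTRY_TOP_MULTIPLIER, 1.0)
    return base
```

The point isn't these particular numbers, but that any such weighting is explicit and therefore checkable - which matters for point b) below.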

Having said that, if CEA or any other org does do something like this, I hope they'll

a) have the courage to make unpopular weighting decisions when the data clearly justifies them and

b) do it publicly, open-sourcing their weighted model, so that anyone interested can see that the data does clearly justify it - hopefully avoiding another PELTIVgate.

When did he get feedback from kings? Googling it, the only thing I can see is that he was invited to an event that the Swedish king also attended.

Also, most of Bostrom's extra-academic prestige is based on a small handful of the papers listed. That might justify making him something like a public communicator of philosophy, but it doesn't obviously merit sponsoring an entire academic department indefinitely.

To be clear, I have no strong view on whether the university acted reasonably a) in the abstract or b) according to incentives in the unique prestige ecosystem which universities inhabit. But I don't think listing a handful of papers our subgroup approves of is a good rationale for claiming that it did neither.

Fwiw I downvoted this post because it doesn't say anything substantial about what you think CSER and Leverhulme are doing wrong, so it just comes across as abuse. 
