Arepo

4351 karma · Joined


I'm curious which direction the disagree voters are disagreeing - are they expressing the view that quantifying people like this at all is bad, or that if you're going to do it, this is a more effective way?

For what it's worth, I sympathise with the need to make some hard prioritisation decisions - that's what EA is about, after all. Nonetheless, it seems like the choice to focus on top universities has been an insufficiently examined heuristic. After all, the following claim...

top universities are the places with the highest concentrations of people who ultimately have a very large influence on the world.

... is definitely false unless the only categorisation we're doing of people is 'the university they go to'. We can subdivide people into any categories we have data on, and while 'university' provides a convenient starting point for a young impact-focused organisation, it seems like a now-maturing impact-focused organisation should aspire to do better. 

For a simple example, staying focused on universities, most university departments receive their own individual rankings, which are also publicly available (I think the final score for the university is basically some weighted average of these, possibly with some extra factors thrown in). 

I'm partially motivated to write this comment because I know of someone who opted to go to the university with the better department for their subject, and has recently found out that, by opting to go to the university with the lower overall ranking, they're formally downgraded by both immigration departments and EA orgs.

So it seems like EA orgs could do better simply by running a one-off project that pooled departmental rankings and prioritised based on those. It would probably be a reasonably substantial (but low-skill) one-off cost with a slight ongoing maintenance cost, but if 'finding the best future talent' is so important to EA orgs, it seems worth putting some ongoing effort into doing it better. [ETA - apparently there are some premade rankings that do this!]

This is only one trivial suggestion - I suspect there are many more sources of public data that could be taken into account to make a fairer and (which IMO is equivalent) more accurate prioritisation system. Since, as the OP points out, selecting for the top 100 universities is a form of strong de facto prejudice against people from countries that don't host one, it might also be worth adding some multiplier for people at the top departments in their country - and so on. There might be quantifiable considerations that have nothing to do with university choice.
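To make the suggestion concrete, here is a toy sketch of the kind of open, weighted model being proposed. Everything in it is invented for illustration - the ranks, the 1/rank scoring rule, and the country multiplier are all assumptions, not anything CEA or another org actually uses; a real model would pool published departmental rankings.

```python
# Toy prioritisation model: score candidates by the global rank of their
# university department (1 = best), with an optional multiplier for people
# at the top department in a country hosting no globally top-ranked one.
# All numbers here are hypothetical placeholders.

def candidate_score(dept_rank, top_dept_in_country=False,
                    country_multiplier=1.25):
    """Map a departmental rank to a score in (0, 1], optionally boosted."""
    score = 1.0 / dept_rank
    if top_dept_in_country:
        score *= country_multiplier
    return score

candidates = [
    ("A", 3, False),   # rank-3 department globally
    ("B", 40, True),   # lower global rank, but top in their country
    ("C", 10, False),
]

ranked = sorted(candidates,
                key=lambda c: candidate_score(c[1], c[2]),
                reverse=True)
print([name for name, *_ in ranked])  # → ['A', 'C', 'B']
```

Publishing a model like this - data, weights, and multipliers - is what would let anyone interested check whether the weighting decisions are actually justified by the data.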

Having said that, if CEA or any other org does do something like this, I hope they'll

a) have the courage to make unpopular weighting decisions when the data clearly justifies them and

b) do it publicly, open sourcing their weighted model, so that anyone interested can see that the data does clearly justify it - hopefully avoiding another PELTIVgate.

When did he get feedback from kings? Googling it, the only thing I can find is that he was invited to an event which the Swedish king also attended.

Also, most of Bostrom's extra-academic prestige is based on a small handful of the papers listed. That might justify making him something like a public communicator of philosophy, but it doesn't obviously merit sponsoring an entire academic department indefinitely.

To be clear, I have no strong view on whether the university acted reasonably a) in the abstract or b) according to incentives in the unique prestige ecosystem which universities inhabit. But I don't think listing a handful of papers our subgroup approves of is a good rationale for claiming that it did neither.

Fwiw I downvoted this post because it doesn't say anything substantial about what you think CSER and Leverhulme are doing wrong, so it just comes across as abuse. 

This seems like quite an in-group perspective. From the perspective of a generic philosophy faculty, that looks like a very small list of papers for a department that was running for nearly two decades. Without knowing their impact factor (which I'd guess was higher than average, but not extreme) it's hard to say whether this was reasonable from a prestige perspective.

Thanks for the support :)

Is there any other marketing that is going along with these posts? e.g. posting on Slack channels, Facebook groups, or the like? I think that could potentially multiply the benefits and help get the word out even more. 

I tend to link the posts in the main Facebook groups and the EA Anywhere Slack channel, but nothing too lavish - it takes a nontrivial amount of time to do this, and I don't have a lot of spare bandwidth.

I see the Mental Health Navigator is mentioned under the Mental Health section, but I suggest that the Coaching session also have a reference to it at the top mentioning that there are a bunch of coaches on that resource as well, in the same Providers list. 

As in there are life/other types of coaches on MHN?

More strongly, I don't think it makes sense to only list a few coaches explicitly by name in this post. There are a ton of coaches who specifically target EAs on the Mental Health Navigator, virtually all (if not all) of whom meet all the criteria. The MHN is intended to be the single source of truth and provides useful features like filtering. It also only lists providers who have some review/recommendation (or, if they've listed themselves, that has been reviewed by a gatekeeper at the Mental Health Navigator). Another list here could only ever amount to a less complete, and as such potentially confusing or less valuable, one. I myself am a leadership coach to EAs and would love to be added to this list, but I also know of a couple dozen other coaches who would also love to get more visibility, many of whom will never discover this post. I'd rather help out the entire coaching community, and also provide a more valuable interface to EAs seeking coaches, which helps out the entire EA community.

I'm not sure how I feel here. The coaches section has definitely got a bit bloated. I could do some rotation to highlight individuals, but that sounds like quite a pain. I'm also a little wary of deferring to a single other resource, since I'm generally worried about the EA groupthink that comes from deferred epistemics. Maybe a reasonable approach would be to list coaches only if they fit the opt-in criterion and, for some (non-egregious) reason (e.g. that they don't deal with mental health), aren't listed on MHN?

Happy to be persuaded there's a better approach, though. If you do think that's reasonable, feel free to reach out to any coaches whom that would include, or just link me to their websites (including you, though it sounds like you're on there?).

Nitpick: You use "or" a couple of times in your criteria. I believe in both cases the "or" conjoins only the bullet with the single adjacent bullet. But just to make it a tiny bit clearer what two things are conjoined by "or", you could use indenting, or include both things in the same line item. 

Good suggestion, thanks.

Do you know what kind of management she does of it? Can anyone add themselves, or does she curate it in some way?

I've put them all in a sequence, whose link is at the very top, but I guess they need something more visible?

I hadn't seen this until now. I still hope you'll do a follow-up on the most recent round, since, as I've said (repeatedly) elsewhere, I think you guys are the gold standard in the EA movement for how to do this well :)

One not necessarily very helpful thought:

Our work trial was overly intense and stressful, and unrepresentative of working at GWWC.

is a noble goal, but somewhat in tension with this goal:

In retrospect, we could have ensured this was done on a time-limited basis, or provided a more reasonable estimate.

It's really hard to make a strictly timed test - especially a sub-one-day one - unstressful and unintense.

This isn't to say you shouldn't do the latter, just to recognise that there's a natural tradeoff between two imperatives here. 

Another problem with timing is that you don't get to equalise across all axes, so you can trade one bias for another. For example, you're going to bias towards people who have access to an extra monitor or two at the time of taking the test, whose internet is faster or who are just in a less distracting location.

I don't know that that's really a solvable problem, and if not, the timed test is probably the least of all evils, but again it seems like a tradeoff worth being aware of.

The dream is maybe some kind of self-contained challenge where you ask them to showcase some relevant way of thinking in a way in which time isn't super important, but I can't think of any good version of that.
