
This post reflects my own personal opinions and not the opinions of the orgs I’m mentioning.

The argument goes:
If you get an application for your [job opening, grant program, etc.] and you spend some amount of time evaluating it, it can't take that much extra time to just write down the reason for your decision. You made the decision, so you know why you made it. So why isn't feedback better integrated into more application processes? Why do many EA orgs say they don't have time to give personal feedback to applicants?

I have heard this argument several times, in writing and in person, and I have thought this exact thing many times myself. It makes sense in theory. But unfortunately reality is messier, or at least it sometimes is.
 

Two types of application processes

I’ve been on both sides of applications. I’ve handled applications for various events, I was involved in one hiring round at AISS before leaving, and most recently I’ve been on the evaluation side of applications for both research leads and team members for AISC. All the applications I’ve dealt with on the evaluator side can neatly be divided into two categories:

  • Evaluative applications - There is a fixed bar. If you meet this requirement, you’re in.
  • Competitive applications - There is a fixed (or fixed-ish) number of acceptances, and therefore the success of an application primarily depends on how it compares with the other applications.

For evaluative applications it really is very little extra work to provide feedback once the evaluation is done; the argument that started this post lines up with reality. For competitive applications, everything is a mess.
 

Competitive applications

Imagine you want to recruit one or only a few people. You have many more applicants than you are able to accept, so you skim the list for the top candidates, which you will take the time to read more carefully. One application gives you a negative vibe, but you can’t quite put your finger on why. It just feels like this person would be a bad personal fit. You could probably try to articulate it, but it would come out as super judgmental. You’re not sure your judgement is correct, but it is something you would have to look into before accepting them, which seems like a lot of extra work. On the other hand, there are lots of other, similarly qualified people you can pick instead.

In a competitive application process there is typically not enough time to give everyone a fair evaluation. The focus is more on finding some safe and good enough people to accept. For most rejections the reason is simply that the application did not stand out enough. In the few cases where this is not the reason, it can be hard to articulate. Vibe-based judgements like the one described above probably don’t generalise to other evaluators anyway, which means the feedback is of low value.

How to give feedback to rejected applications (my opinion)
For competitive applications, I think trying to give individual feedback is often more trouble than it’s worth. A better approach is to give general feedback: let people know how competitive the application process turned out to be, and what specifically it took to get accepted this time.
 

Evaluative applications

When evaluating competitive applications I will spend most of my time on the top applications, but for evaluative applications I will spend most of my time on the applications that are on the edge of being accepted or rejected. It’s also just easier to evaluate applications when the bar for acceptance isn’t shifting based on my evaluation of other applications. 

When doing evaluative applications, if it is not clear to me why I’m accepting or rejecting someone, then I’m not done with their application. For this reason, telling a rejected applicant exactly why their application got rejected is not hard. 

How to give feedback to rejected applications (my opinion)
Tell people exactly why their application got rejected. 

I don’t like giving people negative feedback. It doesn’t feel nice. Here’s a re-framing trick I’ve come up with to overcome this aversion: I imagine myself in their position and think about what I would want if I were them. I notice that I would want to know why my application got rejected. After this thought experiment, telling them the feedback no longer feels rude; it feels helpful.
 

Transparency

For both types of applications, I think it is worth the effort to be transparent about the application process, preferably before people apply. I’ve not always lived up to this ideal, but it’s something I want to improve the next time I run an application process.

I have had to deal both with applications I could not evaluate because they did not provide enough information, and with applications that wasted my time by being unnecessarily lengthy. I consider both of these failure modes to be at least partly the fault of me and my colleagues, for not being clearer about our expectations for applications. If applicants understand the application process and evaluation criteria, and can tailor their applications based on this information, it saves time for everyone.

There are some exceptions where you do want to ask a trick question, such that explaining the evaluation criteria would undermine the point of the question. 
 

Some more reflections/opinions

I much prefer evaluative applications over competitive ones, to the extent that I have sometimes designed an application process to be evaluative even though this probably meant missing out on top applicants. For example, I’ve used “first come, first served” for one event, i.e. the first 20 applicants that met the minimum bar got accepted. At another event, where the number of spots was somewhat flexible but not infinitely so, I kept the number of applications down by not advertising too widely, in order to not have to reject anyone for lack of capacity.

Most times I’ve evaluated competitive applications, I didn’t feel very motivated to spend a lot of time optimising the application process to make sure I got the absolute best candidates. It seems to me that improving the accuracy of the application process adds a lot of work for everyone involved, and you still end up with the same number of successful applicants; good applicants will still be rejected. I feel more motivated to spend that effort on expanding the programs and events I’m involved with, to make room for more participants.

Applications are very low bandwidth. This is a problem I did not understand until I had some experience on the evaluator side. When applying, it’s easy to fall for the illusion of transparency and think that your application conveys much more information than is actually there. But in reality the application leaves out sooo much. Because of this, I think that focusing too much on refining the application process is hubris. I don’t believe anyone is good at this. The best we can do is to give as many people as possible the opportunity to participate and contribute. If you disagree and if you know how to set up a great application process, then please message me and teach me your magic.

Edit: The above paragraph makes a stronger claim than I believe on reflection. I would like to clarify, except I notice that my opinions here are unstable; I don’t know yet what I think on reflection. Sorry about this. I still welcome discussion and pushback on the paragraph as written.

I know there are bottlenecks and trade-offs. I’m not saying that your EA org needs to hire everyone. Obviously that would be a bad idea. In most cases, orgs should hire carefully and avoid risky hires, even if that means often turning away good candidates.

When it comes to grants, I’d be excited to see grant programs that give smaller amounts to a larger number of people.

As for events, I want bigger EAGs and many more online events.

Also, not all opportunities happen via applications. One of the best ways to make sure lots of EA opportunities are accessible to lots of people is to be a distributed and open network.

I’m noticing I’m starting to drift off on a tangent at this point in my writing, so I will end here. 



 

Comments

I think that focusing too much on refining the application process is hubris. I don’t believe anyone is good at this. The best we can do is to give as many people as possible the opportunity to participate and contribute. If you disagree and if you know how to set up a great application process, then please message me and teach me your magic.

Hi there! I think I disagree with you. :) I have some broad ideas about setting up a great application process. I guess a high-level summary would be something like:

  • know what you are looking for
  • know what criteria/traits/characteristics/skills/etc. predict what you are looking for
  • have methods you can use to assess/measure those criteria
  • assess the applicants using those methods

The implementation of it can be quite complicated and the details will vary massively depending on circumstances, but at a basic level that is what it is: know what you are looking for, and measure it. I think this is a lot harder in a small organization, but there are still aspects that can be used.
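
As a rough sketch of this (not something the comment spells out, and with entirely made-up criteria, weights, and ratings), "know what you are looking for, and measure it" could look like a simple weighted rubric rather than a gut-feel ranking:

```python
# A hypothetical weighted scoring rubric. The criteria, weights, and ratings
# below are invented for illustration; a real process would derive them from
# what the role or program actually needs.
CRITERIA_WEIGHTS = {
    "relevant_experience": 0.4,
    "writing_clarity": 0.2,
    "motivation_fit": 0.4,
}

def score_applicant(ratings: dict) -> float:
    """Combine per-criterion ratings (e.g. 0-5) into one weighted score."""
    return sum(weight * ratings.get(criterion, 0.0)
               for criterion, weight in CRITERIA_WEIGHTS.items())

# Example: two applicants rated on the same rubric by the same evaluator.
applicants = {
    "Applicant A": {"relevant_experience": 4, "writing_clarity": 3, "motivation_fit": 5},
    "Applicant B": {"relevant_experience": 5, "writing_clarity": 4, "motivation_fit": 2},
}
for name, ratings in sorted(applicants.items(),
                            key=lambda item: score_applicant(item[1]),
                            reverse=True):
    print(f"{name}: {score_applicant(ratings):.2f}")
```

A side benefit is that each rejection then has a legible reason attached ("scored low on X"), which also makes the kind of general feedback discussed in the post easier to write.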

I don't want anyone to think that I am an expert who knows everything about applications. I'm just a guy that reads about this kind of thing and thinks about this kind of thing. Then in early 2023 I started to learn a bit about organizational behavior and industrial-organizational psychology. But I'd be happy to bounce around ideas if you'd like to have a call to explore this topic more.

I think the type of application you have in mind is when you're hiring for a specific role? I think you're right that there are circumstances where good evaluation is possible. Looking back at what I wrote, I was making too strong a claim.

The type of applications I had in mind for that paragraph is things like accepting people to an AI safety research program, or grants, and stuff like that. Although there are probably some lessons from hiring for specific roles that generalise to those situations.

Hmmmm. I'm wondering what part of the "selecting people for a job" model is transferable and applicable to the "selecting people for a research program, grant, etc." situation.

In those circumstances, I'm guessing that there are specific criteria you are looking for, and it might just be a matter of shifting away from vibes & gut feelings and towards trying to verbalize/clarify what the criteria are. I'm guessing that even if you won't have performance reviews for these people (like you would with employees), you still have an idea as to what counts as success.

Here is a hypothetical that might be worth exploring (this is very rough and was written in only a few minutes fairly off the top of my head, so don't take it too seriously):

The next cohort for AI Safety Camp is very large (large enough to be a good sample size for social science research), and X years in the future you look at all the individuals from that cohort to see what they are doing. The goals of AI Safety Camp are to provide people with both the motivation and the skills to work on AI safety, so let's see A) how many people in the cohort are working on AI safety, and B) how much they are contributing or how much of a positive impact they are having. Then we look at the applications they submitted X years ago to join AI Safety Camp, and see which criteria those applications had in common.

I'm not good enough at data analysis to be able to pull much info, but there likely would be differences (of course, in reality it would have to be a pretty big sample size in order for any effects to not be overwhelmed by the random noise of life that has happened in the intervening X years). So although this little thought experiment is a bit silly and simplistic, we can still imagine the idea.
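
To make the thought experiment slightly more concrete, here is a minimal sketch of how such an analysis might look. Everything in it (the file name, the column names, the outcome definition) is hypothetical, and as noted above, the real obstacles would be sample size and the noise accumulated over the intervening years:

```python
# A very rough sketch of the hypothetical follow-up study: did anything in the
# original applications predict who was working on AI safety X years later?
# The file name, feature columns, and outcome column are all invented here.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("cohort_applications_with_outcomes.csv")  # hypothetical data

features = ["prior_research_experience", "technical_background", "hours_pledged"]
X = df[features]
y = df["working_on_ai_safety"]  # 0/1 outcome, defined before looking at the data

model = LogisticRegression().fit(X, y)
for feature, coef in zip(features, model.coef_[0]):
    print(f"{feature}: {coef:+.2f}")
```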

Thanks for writing this analysis! I agree with most of it. One other argument I've heard for not providing feedback after job applications is that it carries legal risk. What the risk is specifically, I don't know – perhaps a candidate could sue you for discrimination if you inadvertently choose the wrong words? A way to mitigate this risk is the phrase ‘You didn't demonstrate… [mastery of Python, facility in calming down angry customers, whatever]’. It avoids making statements about the candidate in general and instead focuses on their behaviour during the application process, which you've observed. [1] It can be an addition to your points about how to give feedback.

[1] https://www.manager-tools.com/2013/07/you-did-not-demonstrate-part-1-hall-fame-guidance

One thing that I think is present here, but perhaps not stated forcefully enough, is that there should always be a better way to message than a bare-bones "sorry, you were not accepted" rejection. I'm not entirely sure I agree with your analysis, but given my lack of experience on the other side I'll punt on that and say that the least you can do is provide general feedback. It really sucks to not get any feedback on a rejection for an application you thought might make it somewhere. Here are a few things that have made for good rejections in my experience:

  • Listing the number of spots and number of applicants
  • Providing general characteristics of applications that were successful, and noting trends in those that were unsuccessful if possible
  • Giving resources one can engage with to possibly become a competitive applicant in the future (e.g. sending someone who got rejected from a role in nuclear research to Aird's compilation of nuclear research questions, and encouraging them to engage with them)

Post forthcoming on my thoughts on this from an extensive end user side, but thanks for opening up dialogue, much appreciated.
