
weeatquince

6144 karma · Joined Sep 2014

Comments (465)

Antony, if you are looking for early-stage funding and support for your charity or project, you could consider applying to the Charity Entrepreneurship program when applications re-open in a few months. There is an option to apply with your own idea.

See https://www.charityentrepreneurship.com/

(Disclaimer: commenting in a personal capacity)

Hi, I am Charity Entrepreneurship's (CE, now AIM) Director of Research. I wanted to quickly respond to this point.

– – 

Quality of our reports

I would like to push back a bit on Joey's response here. I agree that our research is quicker, scrappier, and goes into less depth than that of other orgs, but I am not convinced that our reports have more errors or worse reasoning than the reports of other organisations (thinking of non-peer-reviewed global health and animal welfare organisations like GiveWell, OpenPhil, Animal Charity Evaluators, Rethink Priorities, Founders Pledge).

I don’t have strong evidence for thinking this. Mostly I am going off the number of errors that incubatees find in the reports. In each cohort we have ~10 potential founders digging into ~4-5 reports for a few weeks. I estimate that on average they highlight roughly 0.8 non-trivial but non-major errors (i.e. something that would change a CEA by ~20%) and 0 major errors. This seems to be in the same order of magnitude as the number of errors GiveWell gets on scrutiny (e.g. here).

And ultimately all our reports are tested in the real world by people putting the ideas into practice. If our reports do not line up with reality in any major way, we expect to find out when founders do their own research or a charity pivots or shuts down, as MHI has done recently.

One caveat to this is that I am more confident about the reports on the ideas we do recommend than about the reports on non-recommended ideas, which receive less oversight internally (as they are less decision-relevant for founders) and less scrutiny from incubatees and from being put into action.

I note also that in this entire critique, and having skimmed the threads here, no one appears to have pointed out any actual errors in any CE report. So I find it hard to update on anything written here. (The possible exception is me, in this post, pointing to MHI, which unfortunately does seem to have shut down in part due to an error in the initial research.)

So I think the quality of our research is comparable to other orgs, but my evidence for this is weak and I have not done a thorough benchmarking. I would be interested in ways to test this. It could be a good idea for CE to run a change-our-mind contest like GiveWell's in order to test the robustness of our research. Something for me to consider. It could also be useful (although I doubt it would be worth the effort) to have some external research evaluator review our work and benchmark us against other organisations.

 

[EDIT: To be clear, I am talking here about quality in terms of the number of mistakes/errors. I agree our research is often shorter and as such more willing to take shortcuts to reach conclusions.]

 

– – 

That said, I do agree that we should make it very, very clear in all our reports who the report is written for, why it was written, and what the reader should take from it. We do this in the introduction section of all our reports, and I will review the introductions of future reports to make sure this is absolutely clear.

I went through the old emails today and I am confident that my description accurately captured what happened and that everything I said can be backed up.

Another animal advocacy research organization supposedly found CE plagiarizing their work extensively including in published reports, and CE failed to address this.

Hi, I am Charity Entrepreneurship's (CE, now AIM) Director of Research. I wanted to quickly respond to this point.

I believe this refers to an incident that happened in 2021. CE had an ongoing relationship with an animal advocacy policy organisation, occasionally providing research to support their policy work. We received a request for some input, and over the next 24 hours we helped that policy organisation draft a note on the topic at hand. In doing so, a CE staff member copied and pasted text from a private document shared by another animal advocacy research organisation. This was plagiarism and should not have happened. I would like to note two things: firstly, that this did not happen in the course of our business-as-usual research process but in a rushed piece of work that bypassed our normal review process; and secondly, that this report was not published directly by us, and it was not made clear to the CE staff member involved that the content was going to be made into a public report (most other work for that policy organisation was used privately), although we should of course have considered this possibility. These facts do not excuse our mistake, but they are relevant for assessing the risk that this was anything more than a one-off mistake.

I was involved in responding when this issue came to light. On the day the mistake was realised we: acknowledged the mistake, apologised to the injured party, pulled all publicity for the report, and drafted an email to the policy org asking to have the person whose text was copied added as a co-author (the email was not sent until the following day, as we waited for approval from the injured party). The published report was updated. Over the next three weeks we carried out a thorough internal risk assessment, including reviewing all past reports by the same author. The other animal rights research organisation acknowledged they were satisfied with the steps taken. We found no cases of plagiarism in any other reports (the other research org agreed with this assessment), although one other tweak was made to a report to make an acknowledgement clearer.

FWIW, I find mildlyanonymous' description of this event somewhat misleading, referring as it does to multiple "reports" and claiming that "CE failed to address this".

 

CE's reports on animal welfare consistently contain serious factual errors ... noticing immediately that it had multiple major errors, sharing that feedback, and having it ignored due to their internal time capping practices.

I don’t know what this is about. I know of no case where we have ignored feedback. We are always open to receiving feedback on any of our reports from anyone at any time. I am very sorry if I or any CE staff ignored you, and I am open to hearing more about this and/or hearing about any errors you have spotted in our research. If you can share any more information, I can look into it; please contact me (I will PM you my email address; note I am about to go on a week's leave). It is often the case that, if we receive minor critical feedback after a report is published, we do not go back and edit the report but instead note the feedback in our Implementation Note for future Charity Entrepreneurship founders; maybe that is what happened.

This is a great post and captures something that I feel. Thank you for writing it, Michelle!!

Thank you for a nuanced and interesting reply.

Thank you so much for an excellent post.

I just wanted to pick up on one of your suggested lessons learned that, at least in my mind, doesn’t follow directly from the evidence you have provided.

You say:

These wins suggest a few lessons. ... the value of cross-party support. Every farm animal welfare law I’m aware of, globally, passed with cross-party support. ... We should be able to too: there are many more conservative animal lovers than liberal factory farmers.

To me, there are two opposing ways you could take this. The animal-ag industry is benefiting from cross-party support, so:
A] Animal rights activists need to work more with the political right so that we get cross-party support too, essentially depoliticising animal rights policy, with the aim of animal activists also getting the benefits of cross-party support.
B] Animal rights activists need to work more with the political left so that supporting animal farming is an unpalatable opinion or action for anyone on the left to hold, essentially politicising animal rights policy, with the aim of industry losing the benefits of cross-party support.

Why do you suggest strategy A], depoliticisation, i.e. working with conservative animal lovers? Do you have any evidence this is the correct lesson to draw?

I have not yet done much analysis of this question, but my initial sense from the history of social change in the US is that the path to major change through an issue becoming highly politicised and championed by one half of the political spectrum is likely to be the quicker (albeit less stable) route to success, and in some cases where entrenched interests are very strong, it might be the only path to success (e.g. with slavery). I worry that a focus on depoliticisation could be a strategic blunder. I have been pondering this for a while and am keen to understand what research, evidence, and reasoning there is for keeping animal rights depoliticised.

Have you considered blinded casework / decision making? For example, one person collects the key information and anonymises it, and then someone else decides the appropriate response without knowing the names/orgs of the people involved.

Could be good for avoiding some CoIs. It has worked for me in the past in similar situations.

Thank you Saulius. Very helpful to hear. This sounds like a really positive story of good management of a difficult situation. Well done to Marcus.

If I read between the lines a bit, I get the impression that more junior managers at Rethink (be that less competent or just newer to the org), with less confidence that their actions would not rock the Rethink<->funder relationship, were perhaps more likely to put unwelcome pressure on researchers about what to publish. Just a hypothesis, so it might be wrong. But it is also the kind of thing that good internal policies, good onboarding, good senior example-setting, or just discussions of this topic can all help with.

[This comment is no longer endorsed by its author]