Comment author: Michael_PJ 27 October 2017 12:53:04AM *  7 points

Maybe voters on the EA forum should be blinded to the author of a post until they've voted!

Comment author: mhpage 27 October 2017 02:24:45AM 10 points

Variant on this idea: I'd encourage a high-status person and a low-status person, both of whom regularly post on the EA Forum, to trade accounts for a period of time and see how that affects the votes their posts receive.

Variant on that idea: No one should actually do this, but several people should talk about it, thereby making everyone paranoid that they might be part of a social experiment (and of course the response of a paranoid person would be to actually vote based on the content of the article).

Comment author: Askell 26 October 2017 10:42:34PM 25 points

An example of a particular practice that I think might look kind of innocuous but can be quite harmful to women and minorities in EA is what I'm going to call "buzz talk". Buzz talk involves making highly subjective assessments of people's abilities, putting a lot of weight on those assessments, and communicating them to others in the community. Buzz talk can be very powerful, but the beneficiaries of buzz seem disproportionately to be those who conform to a stereotype of brilliance: a white, upper-class male might be "the next big thing" when his black, working-class female counterpart wouldn't even be noticed. These are the sorts of small, unintentional behaviors that I think it can be good for people to try to be conscious of.

I also think it's really unfortunate that there's such a large schism between those involved in the social justice movement and those who largely disagree with it (think: SJWs and anti-SJWs). The EA community attracts people from both of these groups, and I think this can cause people to see the whole issue through the lens of whichever group they identify with. It might be helpful if people tried to drop this identity baggage when discussing diversity issues in EA.

Comment author: mhpage 26 October 2017 10:51:34PM *  15 points

I strongly agree. Put another way, I suspect we, as a community, are bad at assessing talent. If true, that manifests as both a diversity problem and a suboptimal distribution of talent, but the latter might not be as visible to us.

My guess re the mechanism: Because we don't have formal credentials that reflect relevant ability, we rely heavily on reputation and intuition. Both sources of evidence allow lots of biases to creep in.

My advice would be:

  1. When assessing someone's talent, focus on the content of what they're saying/writing, not the general feeling you get from them.

  2. When discussing how talented someone is, always explain the basis of your view (e.g., I read a paper they wrote; or Bob told me).

Comment author: mhpage 06 October 2017 11:46:00PM *  3 points

Thanks for doing these analyses. I find them very interesting.

Two relatively minor points, which I'm making here only because they refer to something I've seen a number of times, and I worry that it reflects a more fundamental misunderstanding within the EA community:

  1. I don't think AI is a "cause area."
  2. I don't think there will be a non-AI far future.

Re the first point, people use "cause area" differently, but I don't think AI -- in its entirety -- fits any of the usages. The alignment/control problem does: it's a problem we can make progress on, like climate change or pandemic risk. But that's not all of what EAs are doing (or should be doing) with respect to AI.

This relates to the second point: I think AI will impact nearly every aspect of the long-run future. Accordingly, anyone who cares about positively impacting the long-run future should, to some extent, care about AI.

So although there are one or two distinct global risks relating to AI, my preferred framing of AI generally is as an unusually powerful and tractable lever on the shape of the long-term future. I actually think there's a LOT of low-hanging fruit (or near-surface root vegetables) involving AI and the long-term future, and I'd love to see more EAs foraging those carrots.

In response to comment by Peter_Hurford on EAGx Relaunch
Comment author: Maxdalton 25 July 2017 07:44:32AM *  10 points

The main reason we could not interview more people for EA Grants at this stage was that we had a limited amount of staff time to conduct interviews, not that we were constrained by funding.

I think you are right that the number of excellent EA Grants proposals suggests that small projects are currently often constrained by their access to funding. However, I think this is less because there is not enough money, and more because there aren't good mechanisms for matching small projects up with money; you could say the space is funding-mechanism-constrained. Of course, EA Grants is trying to address this. This was a smaller trial round, to see how promising the project is and to work out how to run it well. We will reassess after we've completed this round, but I think it's very possible that we will scale the program up to begin to address this issue.

[I'm working for CEA on EA Grants.]

In response to comment by Maxdalton on EAGx Relaunch
Comment author: mhpage 26 July 2017 05:49:07PM 7 points

Max's point can be generalized to mean that the "talent" vs. "funding" constraint framing misses the real bottleneck, which is institutions that can effectively put more money and talent to work. We of course need good people to run those institutions, but if you gave me a room full of good people, I couldn't just put them to work.

Comment author: mhpage 24 February 2017 10:59:22AM 13 points

"and I wonder how the next generation of highly informed, engaged critics (alluded to above) is supposed to develop if all substantive conversations are happening offline."

This is my concern (which is not to say it's Open Phil's responsibility to solve it).

Comment author: joshjacobson 07 December 2016 10:16:30PM *  10 points

I find it difficult to evaluate CEA, especially after the reorganization, though I found it difficult beforehand as well.

The most significant reason is that I feel CEA has been exceedingly slow to embrace metrics regarding many of its activities. As an example, I'll speak to outreach.

Big-picture metrics: I would have expected one of CEA's very first activities, years ago when EA Outreach was established, to be trying to measure subscription to the EA community: gathering statistics on the number of people donating, the sizes of donations, the number who self-identify as EAs, the percentage who become EAs after exposure to different organizations/media, the number of chapters, the sizes of chapters, the number who leave EA, etc.

Obviously, some of these are difficult to measure, and others involve assumptions, gaining access to properties other organizations run, or gathering data yourselves, but I would expect to see a concerted effort to do so, at least in part. The community has embraced Fermi estimates where appropriate, and these metrics could be estimated with much more information than Fermi estimates often require.

So, a few years in, I find it a bit mind-blowing that I'm unaware of any attempt to do this by the only organization that has had teams dedicated specifically to the improvement and growth of the movement. Were these statistics gathered, we'd be much better able to evaluate CEA's outreach activities, which are now central to its purpose as an organization.

With regard to metrics on specific CEA activities, I've also been disappointed by the seeming lack of measurement (though this may be a transparency issue; more on that later). For example, there have been repeated instances where outreach has actively turned people off, in ways that I've been told have been expressed to CEA. Multiple friends who applied to the Pareto Fellowship felt it was quite unprofessionally run, and potential speakers at EA Global mentioned they'd found some of the movement's actions immature. In each instance, I'm aware of the people involved becoming significantly less engaged as a result.

At times concerns such as these have been acknowledged, but given the level of my (admittedly highly anecdotal) exposure to them, it feels like they have mostly not been examined to see whether they are at a magnitude that should give pause. It would be nice to see them fully acknowledged through quantification, so we could understand whether these experiences represent a small minority (which of course still matters) or are actually of great concern. Quantification could involve, for example, getting feedback on the process from everyone who applied to the Pareto Fellowship or EA Global, or everyone who considered applying. I do believe that some satisfaction measurements for EAGx and EA Global did in fact come out recently; I was glad to see those, and I also hope they are seen as starting points rather than as representing the majority of CEA's growth in measurement.

Another example of where quantification could be helpful is the relative prominence of various communication vehicles. The cause prioritization tool, for example, is shared quite prominently, but has its success been measured? Have any alternatives been considered? Measuring and sharing this could be beneficial both for CEA's decision-making and for the community's understanding of what works best for its own outreach activities.

The second most significant reason I find CEA tough to evaluate, which is interconnected with much of what I said regarding the first, is that I feel transparency, especially around decision-making, is lacking. I feel that other EA organizations better document why they are pursuing much of what they do, but CEA too often feels like a collection of projects without central filtering or direction. I believe the reorganization may have been intended to address a similar feeling, but new projects launched after the reorganization, such as EA Concepts, have similarly seemed to come out of nowhere and without justification of their resourcing. It'd be quite helpful to better understand the set of projects CEA considers and how its decision-making leads to what we observe. So many of us have been exposed to the book giveaway… what was the decision-making behind doing it? Should such a proactive action make us update toward believing CEA has found a quite effective promotion vehicle, or was it a trial to determine the effects of distribution?

CEA has taken initial steps toward improvement with the monthly updates, and I'd like to see those greatly expanded and decision-making specifically addressed.

Could CEA speak to its planned approach to growing measurement and transparency moving forward?

I have many additional strong feelings and beliefs in favor of CEA as a donation target, have had many strong anecdotal experiences, and have a few beliefs that give me great pause as well. But I think measurement and transparency could do a great deal toward putting those in proper context.

Comment author: mhpage 09 December 2016 10:44:00AM 5 points

Hey Josh,

As a preliminary matter, I assume you read the fundraising document linked in this post, but for those reading this comment who haven’t, I think it’s a good indication of the level of transparency and self-evaluation we intend to have going forward. I also think it addresses some of the concerns you raise.

I agree with much of what you say, but as you note, I think we’ve already taken steps toward correcting many of these problems. Regarding metrics on the effective altruism community, you are correct that we need to do more here, and we intend to. Before the reorganization, this responsibility didn’t fall squarely within any team’s jurisdiction, which was part of the problem. (For example, Giving What We Can collected a lot of this data for a subset of the effective altruism community.) This is a priority for us.

Regarding measuring CEA activities, internally, we test and measure everything (particularly with respect to community and outreach activities). We measure user engagement with our content (including the cause prioritization tool), the newsletter, Doing Good Better, Facebook marketing, etc., trying to identify where we can most cost-effectively get people most deeply engaged. As we recently did with EAG and EAGx, we’ll then periodically share our findings with the effective altruism community. We will soon share our review of the Pareto Fellowship, for example.

Regarding transparency, our monthly updates, project evaluations (e.g., for EAG and EAGx, and the forthcoming evaluation of the Pareto Fellowship), and the fundraising document linked in this post are indicative of the approach we intend to take going forward. Creating all of this content is costly, and so while I agree that transparency is important, it’s not trivially true that more is always better. We’re trying to strike the right balance and will be very interested in others’ views about whether we’ve succeeded.

Lastly, regarding centralized decision-making, that was the primary purpose of the reorganization. As we note in the fundraising document, we’re still in the process of evaluating current projects. I don’t think the EA Concepts project is to the contrary: that was simply an output of the research team, which it put together in a few weeks, rather than a new project like Giving What We Can or the Pareto Fellowship (the confusion might be the result of using "project" in different ways). Whether we invest much more in that project going forward will depend on the reception and use of this minimum version.

Regards, Michael

Comment author: MichaelDickens 07 December 2016 05:10:38AM 1 point

I don't believe organizations should post fundraising documents to the EA Forum. As a quick heuristic, if all EA orgs did this, the forum would be flooded with posts like this one and it would pretty much kill the value of the forum.

It's already the case that a significant fraction of recent content is CEA or CEA-associated organizations talking about their own activities, which I don't particularly want to see on the EA Forum. I'm sure some other people will disagree but I wanted to contribute my opinion so you're aware that some people dislike these sorts of posts.

Comment author: mhpage 07 December 2016 08:55:38PM 4 points

This document is effectively CEA's year-end review and plans for next year (which I would expect to be relevant to people who visit this forum). We could literally delete a few sentences, and it would cease to be a fundraising document at all.

Comment author: MichaelDickens 06 December 2016 04:33:39AM 3 points

Some of the articles seem to emphasize weird things. The first example I noticed was that the page on consuming animal products has three links to fairly specific points related to eating animals, but no links to articles that present an actual case for veg*anism, and the article itself does not contain such a case. This post is the sort of thing I'm talking about.

Comment author: mhpage 06 December 2016 09:01:25AM 1 point

Fixed, at least with respect to adding and referencing the Hurford post (more might also be needed). Please keep such suggestions coming.

Comment author: [deleted] 30 November 2016 04:19:04PM *  5 points

"The amount of money employees at EA organisations can give is fairly small"

Agreed. Is there any evidence that employee donation is a significant problem, or that it will become one in the near future? If not, and given there is no obvious solution, I suggest focusing on higher priorities (e.g., VIP outreach).

"Thanks to Max Dalton, Sam Deere, Will MacAskill, Michael Page, Stefan Schubert, Carl Shulman, Pablo Stafforini, Rob Wiblin, and Julia Wise for comments and contributions to the conversation."

I think too many (brain power × hours) have been expended here.

Sorry to be a downer, just trying to help optimize.

Comment author: mhpage 05 December 2016 09:59:06PM 0 points

This came out of my pleasure budget.

Comment author: mhpage 31 August 2016 09:30:50AM *  2 points

As you explain, the key tradeoff is organizational stability vs. donor flexibility to chase high-impact opportunities. There are a couple of different ways to strike the right balance. For example, organizations can try to secure long-term commitments sufficient to cover a set percentage of their projected budget but no more, e.g., 100% one year out, 50% two years out, 25% three years out [disclaimer: these numbers have not been carefully considered].

Another possibility is for donors to commit to donating a certain amount in the future, but not to commit to where it will go. For example, imagine EA organizations x, y, and z are funded in significant part by donors a, b, and c. The uncertainty for each organization comes from both (i) how much a, b, and c will donate in the future (e.g., for how long do they plan to earn to give?), and (ii) which organization (x, y, or z) they will donate to. The option value for the donors comes primarily* from (ii): the flexibility to donate more to x, y, or z depending on how good each looks relative to the others. And I suspect much (if not most) of the uncertainty for x, y, and z comes from (i): not knowing how much "EA money" there will be in the future. If that's the case, we can get most of the good with little of the bad via general commitments to donate, without naming the beneficiary. One way to accomplish this would be an EA fund.

  * I say "primarily" because there is option value in being able to switch from earning to give to direct work, for example.
