Comment author: rohinmshah (EA Profile) 13 October 2017 12:10:46AM * 4 points

EA Berkeley seemed more positive about their student-led EA class, calling it “very successful”, but we believe it was many times less ambitious

Yeah, that's accurate. I doubt that any of our students are more likely to go into prioritization research as a result of the class. I could name a few people who might change their career as a result of the class, but that would also be a pretty low number, and for each individual person I'd put the probability at less than 50%. "Very successful" here means that a large fraction of the students were convinced of EA ideas and were taking actions in support of them (such as taking the GWWC pledge, and going veg*n). It certainly seems a lot harder to cause career changes, without explicitly selecting for people who want to change their career (as in an 80K workshop).

We implicitly predicted that other team members would also be more motivated by the ambitious nature of the Project, but this turned out not to be the case. If anything, motivation increased after we shifted to less ambitious goals.

We observed the same thing. In the first iteration of EA Berkeley's class, a fairly large amount of money (probably ~$5000) was allocated for the final project, and students were asked to propose projects that they could run with that money. This was in some sense even more ambitious than OxPrio, since donating it to a charity was only the baseline -- students were encouraged to think of more out-of-the-box ideas as well. What ended up happening was that the project was too open-ended for students to really make progress on: people proposed projects because it was required to pass the course, but the projects never actually got implemented, and we used the $5000 to fund costs for EA Berkeley in future semesters.

Comment author: rohinmshah (EA Profile) 30 September 2017 05:56:31AM 2 points

CEA distributed £20,000 per hour worked by the grants team, whereas we estimate Open Phil distributes ~£600 per hour.

Those numbers are switched around, right?

Comment author: DonyChristie 11 July 2017 08:12:32PM 0 points

I'm interested to know whether Antigravity Investments is really needed when EAs have the option of using the existing investment advice that's out there.

Trivial inconveniences.

Comment author: rohinmshah (EA Profile) 14 July 2017 05:30:26AM 3 points

Is Antigravity Investments less of an inconvenience than Wealthfront or Betterment?

(I agree that roboadvisors are better than manual investing because they reduce trivial inconveniences, if that's what you were saying. But I think the major part of this question is why not be a for-profit roboadvisor and then donate the profits.)

Comment author: ThomasSittler 23 May 2017 10:37:49AM 1 point

Hi Rohin, thanks for the comment! :) My hunch is also that 80,000 Hours and most organisations have diminishing marginal cost-effectiveness. As far as I know from our conversations, on balance this is Sindy's view too.

The problem with qualitative considerations is that while they are in some sense useful standing on their own, they are very difficult to aggregate into a final decision in a principled way.

Modelling the potential for growth quantitatively would be good. Do you have a suggestion for doing so? The counterfactuals are hard.

Comment author: rohinmshah (EA Profile) 25 May 2017 02:04:49AM 0 points

Actually I was suggesting you use a qualitative approach (which is what the quoted section says). I don't think I could come up with a quantitative model that I would believe over my intuition, because as you said the counterfactuals are hard. But just because you can't easily quantify an argument doesn't mean you should discard it altogether, and in this particular case it's one of the most important arguments and could be the only one that matters, so you really shouldn't ignore it, even if it can't be quantified.

Comment author: rohinmshah (EA Profile) 14 May 2017 12:56:54AM 5 points

Attracting more experienced staff with higher salary and nicer office: more experienced staff are more productive which would increase the average cost-effectiveness above the current level, so the marginal must be greater than the current average.

Wait, what? The costs are also increasing, so it's definitely possible for marginal cost-effectiveness to be lower than the current average. In fact, I would strongly predict it's lower -- if there's an opportunity to get better marginal cost-effectiveness than average cost-effectiveness, that raises the question of why you don't just cut funding from some of your less effective activities and repurpose it for this opportunity.
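To make the marginal-versus-average point concrete, here is a toy calculation; the numbers are invented purely for illustration and are not figures from the post:

\[
\begin{aligned}
\text{current: } & \$100\text{k} \to 100 \text{ units of impact} && \text{(average CE } = 1.0 \text{ units per \$1k)} \\
\text{new hire: } & +\$50\text{k} \to +30 \text{ units} && \text{(marginal CE } = 0.6 \text{ units per \$1k)} \\
\text{combined: } & \$150\text{k} \to 130 \text{ units} && \text{(new average CE } \approx 0.87 \text{ units per \$1k)}
\end{aligned}
\]

In this toy case the extra spending is productive in absolute terms, yet its marginal cost-effectiveness sits below the old average, so the average falls rather than rises; the claim that the average would increase is precisely the step that needs defending.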

Given the importance of such considerations and the difficulty of modelling them quantitatively, to holistically evaluate an organization, especially a young one, there is an argument for using a qualitative approach and “cluster thinking”, in addition to a quantitative approach and “sequential thinking.”

Please do; I think an analysis of the potential for growth (qualitative or quantitative) would significantly improve this post, since that consideration could easily swamp all others.

Comment author: rohinmshah (EA Profile) 19 March 2017 04:31:55AM 2 points

Robin Shah

*Rohin Shah (don't worry about it, it's a ridiculously common mistake)

I also find Ben Todd's post on focusing on growth rather than upfront provable marginal impact to be promising and convincing

While I generally agree with the argument that you should focus on growth rather than upfront provable marginal impact, I think you should take the specific comparison with vegetarianism with many grains of salt. That's speculative enough that there are lots of similarly plausible arguments in both directions, and I don't see strong reasons to prefer any specific one.

For example: Perhaps high growth is bad because people don't have deep engagement and it waters down the EA movement. Perhaps vegetarianism is about as demanding as GWWC, but vegetarianism fits more people's values than GWWC does (environmentalism, animal suffering, health vs. caring about everyone equally). Perhaps GWWC is as demanding and as broadly applicable as vegetarianism, but it actually took hundreds of years to get 1% of the developed world to be vegetarian, and it will take a similar amount of effort here. And so on.

I think looking specifically at how a meta-charity plans to grow, and how well its plans to grow have worked in the past, is a much better indicator than these sorts of speculative arguments. (The speculative arguments are a good way to argue against "we have reached capacity", which I think was how Ben intended them, but they're not a great argument in favor of the meta-charity.)

Comment author: MichaelDello 14 March 2017 11:25:59AM 2 points

For some of the examples, it seems unclear to me how they differ from just reacting quickly generally. In other words, what makes these examples of 'ethical' reactions and not just 'technical' reactions?

Comment author: rohinmshah (EA Profile) 14 March 2017 05:14:55PM 1 point

^ Yeah, I can certainly come up with examples where you need to react quickly; it's just that I couldn't come up with any where you had to make decisions based on ethics quickly. I think I misunderstood the post as "You should practice thinking about ethics and ethical conundrums so that when these come up in real life you'll be able to solve them quickly", whereas it sounds like the post is actually "You should consider optimizing around the ability to generally react faster, as this leads to good outcomes overall, including for anything altruistic that you do". Am I understanding this correctly?

Comment author: rohinmshah (EA Profile) 12 March 2017 04:05:33AM 5 points

My main question when I read the title of this post was "Why do I expect that there are ethical issues that require a fast reaction time?" Having read the body, I still have the same question. The bystander effect counts, but are there any other cases? What should I learn from this besides "Try to eliminate the bystander effect"?

But other times you will find out about a threat or an opportunity just in time to do something about it: you can prevent some moral dilemmas if you act fast.

Examples?

Sometimes it’s only possible to do the right thing if you do it quickly; at other times the sooner you act, the better the consequences.

Examples?

Any time doing good takes place in an adversarial environment, this concept is likely to apply.

Examples? One example I came up with was negative publicity for advocacy of any sort, but you don't make any decisions about ethics in that scenario.

Comment author: RobBensinger 07 February 2017 11:00:02PM 8 points

Anonymous #32(e):

I'm generally worried about how little most people actually seem to change their minds, despite being in a community that nominally holds the pursuit of truth in such high esteem.

Looking at the EA Survey, the best determinant of what cause a person believes to be important is the one that they thought was important before they found EA and considered cause prioritization.

There are also really strong founder effects in regional EA groups. That is, locals of one area generally seem to converge on one or two causes or approaches being best. Moreover, they often converge not because they moved there to be with those people, but because they 'became' EAs there.

Excepting a handful of people who have switched cause areas, it seems like EA as a brand serves more to justify what one is already doing and optimize within one's comfort zone in it, as opposed to actually changing minds.

To fix this, I'd want to lower the barriers to changing one's mind by, e.g., translating the arguments for one cause to the culture of a group often associated with another cause, and encouraging thought leaders and community leaders to be more open about the ways in which they are uncertain about their views so that others are comfortable following suit.

Comment author: rohinmshah (EA Profile) 08 February 2017 06:22:29PM 3 points

I agree that this is a problem, but I don't agree with the causal model and so I don't agree with the solution.

Looking at the EA Survey, the best determinant of what cause a person believes to be important is the one that they thought was important before they found EA and considered cause prioritization.

I'd guess that the majority of the people who take the EA Survey are fairly new to EA and haven't encountered all of the arguments etc. that it would take to change their minds, not to mention all of the rationality "tips and tricks" to become better at changing your mind in the first place. It took me a year or so to get familiar with all of the main EA arguments, and I think that's pretty typical.

TL;DR I don't think there's good signal in this piece of evidence. It would be much more compelling if it were restricted to people who were very involved in EA.

Moreover, they often converge not because they moved there to be with those people, but because they 'became' EAs there.

I'd propose a different model for the regional EA groups. I think that the founders are often quite knowledgeable about EA, and then new EAs hear strong arguments for whichever causes the founders like and so tend to accept that. (This would happen even if the founders try to expose new EAs to all of the arguments -- we would expect the founders to be able to best explain the arguments for their own cause area, leading to a bias.)

In addition, it seems like regional groups often prioritize outreach over gaining knowledge, so you'll have students who have heard a lot about global poverty and perhaps meta-charity who then help organize speaker events and discussion groups, even though they've barely heard of other areas.

Based on this model, the fix could be making sure that new EAs are exposed to a broader range of EA thought fairly quickly.

Comment author: Linch 04 January 2017 05:36:59AM 6 points

It might be worth doing a pre-registered experiment with people who have never directly talked to non-GWWC members about GWWC before, or at least not in the last 6 months.

Say we get 10 such people, and they each message 5-20 of their friends. If our initial models are correct (and I agree with you that from the outside view this looks unusually high), we would expect 5-20 new pledges to come out of this.
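To spell out the arithmetic behind that expectation (the per-conversation rate below is inferred from these numbers, not stated anywhere in the thread):

\[
10 \text{ people} \times (5 \text{ to } 20) \text{ friends each} = 50 \text{ to } 200 \text{ conversations}, \qquad \frac{5 \text{ to } 20 \text{ pledges}}{50 \text{ to } 200 \text{ conversations}} \approx 10\% .
\]

So the implicit model being tested is roughly a 10% pledge rate per friend messaged.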

Do you happen to be somebody in this category? If so, would you be interested in participating in such an experiment?

Comment author: rohinmshah (EA Profile) 05 January 2017 12:02:02AM 0 points

from the outside view this looks unusually high

I would have said this a little over a year ago, but I'm less surprised by it now and I do expect it would replicate. I also expect that it becomes less effective as it scales (I expect the people who currently do it are above average at this, due to selection effects), but not by that much.

This is based on running a local EA group for a year and constantly being surprised by how much easier it is to get a pledge than I thought it would be.
