Comment author: DonyChristie 11 July 2017 08:12:32PM 1 point

I'm interested to know whether Antigravity Investments is really needed when EAs have the option of using the existing investment advice that's out there.

Trivial inconveniences.

Comment author: rohinmshah  (EA Profile) 14 July 2017 05:30:26AM 2 points

Is Antigravity Investments less of an inconvenience than Wealthfront or Betterment?

(I agree that roboadvisors are better than manual investing because they reduce trivial inconveniences, if that's what you were saying. But I think the major part of this question is why not be a for-profit roboadvisor and then donate the profits.)

Comment author: ThomasSittler 23 May 2017 10:37:49AM 1 point

Hi Rohin, thanks for the comment! :) My hunch is also that 80,000 Hours and most organisations have diminishing marginal cost-effectiveness. As far as I know from our conversations, on balance this is Sindy's view too.

The problem with qualitative considerations is that while they are in some sense useful standing on their own, they are very difficult to aggregate into a final decision in a principled way.

Modelling the potential for growth quantitatively would be good. Do you have a suggestion for doing so? The counterfactuals are hard.

Comment author: rohinmshah  (EA Profile) 25 May 2017 02:04:49AM 0 points

Actually, I was suggesting you use a qualitative approach (which is what the quoted section says). I don't think I could come up with a quantitative model that I would believe over my intuition, because, as you said, the counterfactuals are hard. But just because you can't easily quantify an argument doesn't mean you should discard it altogether. In this particular case it's one of the most important arguments, and could be the only one that matters, so you really shouldn't ignore it even if it can't be quantified.

Comment author: rohinmshah  (EA Profile) 14 May 2017 12:56:54AM 5 points

Attracting more experienced staff with higher salary and nicer office: more experienced staff are more productive which would increase the average cost-effectiveness above the current level, so the marginal must be greater than the current average.

Wait, what? The costs are also increasing, so it's definitely possible for marginal cost-effectiveness to be lower than the current average. In fact, I would strongly predict it's lower: if there's an opportunity to get better marginal cost-effectiveness than average cost-effectiveness, that raises the question of why you don't just cut funding from some of your less effective activities and repurpose it for this opportunity.
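To make this concrete, here's a toy calculation with invented numbers (none of these figures are real 80k data):

```python
# All figures are hypothetical, purely to illustrate marginal vs. average.
current_cost = 500_000    # current annual budget in dollars
current_impact = 100      # impact-weighted plan changes per year
average_ce = current_impact / current_cost   # 2.0e-4 impact per dollar

# Suppose higher salaries and a nicer office cost an extra $100k and,
# via more experienced hires, add 15 impact-weighted plan changes.
extra_cost = 100_000
extra_impact = 15
marginal_ce = extra_impact / extra_cost      # 1.5e-4 impact per dollar

# Total impact rises (115 > 100), yet the marginal dollar buys less
# impact than the average dollar did.
assert marginal_ce < average_ce
```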

Given the importance of such considerations and the difficulty of modelling them quantitatively, to holistically evaluate an organization, especially a young one, there is an argument for using a qualitative approach and “cluster thinking”, in addition to a quantitative approach and “sequential thinking.”

Please do. I think an analysis of the potential for growth (qualitative or quantitative) would significantly improve this post, since that consideration could easily swamp all others.

Comment author: rohinmshah  (EA Profile) 19 March 2017 04:31:55AM 2 points

Robin Shah

*Rohin Shah (don't worry about it, it's a ridiculously common mistake)

I also find Ben Todd's post on focusing on growth rather than upfront provable marginal impact to be promising and convincing

While I generally agree with the argument that you should focus on growth rather than upfront provable marginal impact, I think you should take the specific argument comparing EA with vegetarianism with many grains of salt. It's speculative enough that there are lots of similarly plausible arguments in both directions, and I don't see strong reasons to prefer any specific one.

For example: perhaps high growth is bad because people don't have deep engagement and it waters down the EA movement. Perhaps vegetarianism is about as demanding as GWWC, but vegetarianism fits more people's values than GWWC (environmentalism, animal suffering, health vs. caring about everyone equally). Perhaps GWWC is as demanding and as broadly applicable as vegetarianism, but it actually took hundreds of years to get 1% of the developed world to be vegetarian, and it will take similar amounts of effort here. And so on.

I think looking at specifically how a metacharity plans to grow, and how well their plans to grow have worked in the past, is a much better indicator than these sorts of speculative arguments. (The speculative arguments are a good way to argue against "we have reached capacity", which I think was how Ben intended them, but they're not a great argument for the meta charity.)

Comment author: MichaelDello 14 March 2017 11:25:59AM 2 points

For some of the examples, it seems unclear to me how they differ from just reacting quickly generally. In other words, what makes these examples of 'ethical' reactions and not just 'technical' reactions?

Comment author: rohinmshah  (EA Profile) 14 March 2017 05:14:55PM 1 point

^ Yeah, I can certainly come up with examples where you need to react quickly, it's just that I couldn't come up with any where you had to make decisions based on ethics quickly. I think I misunderstood the post as "You should practice thinking about ethics and ethical conundrums so that when these come up in real life you'll be able to solve them quickly", whereas it sounds like the post is actually "You should consider optimizing around the ability to generally react faster as this leads to good outcomes overall, including for anything altruistic that you do". Am I understanding this correctly?

Comment author: rohinmshah  (EA Profile) 12 March 2017 04:05:33AM 5 points

My main question when I read the title of this post was "Why do I expect that there are ethical issues that require a fast reaction time?" Having read the body, I still have the same question. The bystander effect counts, but are there any other cases? What should I learn from this besides "Try to eliminate bystander effect?"

But other times you will find out about a threat or an opportunity just in time to do something about it: you can prevent some moral dilemmas if you act fast.

Examples?

Sometimes it’s only possible to do the right thing if you do it quickly; at other times the sooner you act, the better the consequences.

Examples?

Any time doing good takes place in an adversarial environment, this concept is likely to apply.

Examples? One example I came up with was negative publicity for advocacy of any sort, but you don't make any decisions about ethics in that scenario.

Comment author: RobBensinger 07 February 2017 11:00:02PM 7 points

Anonymous #32(e):

I'm generally worried about how little most people actually seem to change their minds, despite being in a community that nominally holds the pursuit of truth in such high esteem.

Looking at the EA Survey, the best determinant of what cause a person believes to be important is the one that they thought was important before they found EA and considered cause prioritization.

There are also really strong founder effects in regional EA groups. That is, locals of one area generally seem to converge on one or two causes or approaches being best. Moreover, they often converge not because they moved there to be with those people, but because they 'became' EAs there.

Excepting a handful of people who have switched cause areas, it seems like EA as a brand serves more to justify what one is already doing and optimize within one's comfort zone in it, as opposed to actually changing minds.

To fix this, I'd want to lower the barriers to changing one's mind by, e.g., translating the arguments for one cause to the culture of a group often associated with another cause, and encouraging thought leaders and community leaders to be more open about the ways in which they are uncertain about their views so that others are comfortable following suit.

Comment author: rohinmshah  (EA Profile) 08 February 2017 06:22:29PM 2 points

I agree that this is a problem, but I don't agree with the causal model and so I don't agree with the solution.

Looking at the EA Survey, the best determinant of what cause a person believes to be important is the one that they thought was important before they found EA and considered cause prioritization.

I'd guess that the majority of the people who take the EA Survey are fairly new to EA and haven't encountered all of the arguments etc. that it would take to change their minds, not to mention all of the rationality "tips and tricks" to become better at changing your mind in the first place. It took me a year or so to get familiar with all of the main EA arguments, and I think that's pretty typical.

TL;DR I don't think there's good signal in this piece of evidence. It would be much more compelling if it were restricted to people who were very involved in EA.

Moreover, they often converge not because they moved there to be with those people, but because they 'became' EAs there.

I'd propose a different model for the regional EA groups. I think that the founders are often quite knowledgeable about EA, and then new EAs hear strong arguments for whichever causes the founders like and so tend to accept that. (This would happen even if the founders try to expose new EAs to all of the arguments -- we would expect the founders to be able to best explain the arguments for their own cause area, leading to a bias.)

In addition, it seems like regional groups often prioritize outreach over gaining knowledge, so you'll have students who have heard a lot about global poverty and perhaps meta-charity who then help organize speaker events and discussion groups, even though they've barely heard of other areas.

Based on this model, the fix could be making sure that new EAs are exposed to a broader range of EA thought fairly quickly.

Comment author: Linch 04 January 2017 05:36:59AM 6 points

It might be worth doing a pre-registered experiment with people who have never directly talked to non-GWWC members about GWWC before, or at least not in the last 6 months.

Say we get 10 such people, and they each message 5-20 of their friends. If our initial models are correct (and I agree with you that from the outside view this looks unusually high), we would expect 5-20 new pledges to come out of this.
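Spelling out the arithmetic (the ~10% pledge rate per friend messaged is an assumption implied by the numbers above, not something we've measured):

```python
# Hypothetical reconstruction of the expected-pledge range.
people = 10
friends_low, friends_high = 5, 20   # each participant messages 5-20 friends
assumed_conversion = 0.10           # assumed pledge rate per friend messaged

print(people * friends_low * assumed_conversion)    # 5.0 pledges
print(people * friends_high * assumed_conversion)   # 20.0 pledges
```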

Do you happen to be somebody in this category? If so, would you be interested in participating in such an experiment?

Comment author: rohinmshah  (EA Profile) 05 January 2017 12:02:02AM 0 points

from the outside view this looks unusually high

I would have said this a little over a year ago, but I'm less surprised by it now and I do expect it would replicate. I also expect that it becomes less effective as it scales (I expect the people who currently do it are above average at this, due to selection effects), but not by that much.

This is based on running a local EA group for a year and constantly being surprised by how much easier it is to get a pledge than I thought it would be.

Comment author: Ben_Todd 24 December 2016 09:37:37PM *  20 points

Hi Michael,

Thanks for the comments.

Our last chain got very long, and it didn't feel like we made much progress, so I'm going to limit myself to one reply.

My fundamental concern with 80K is that the evidence in its favor is very weak. My favorite meta-charity is REG because it has a straightforward causal chain of impact, and it raises a lot of money for charities that I believe do much more good in expectation than GiveWell top charities. 80K can claim the latter to some extent but cannot claim the former.

I agree that the counterfactuals and chain of impact are clearer with REG; however, strength of evidence is only one criterion I'd use when assessing a charity. To start with, I'm fairly risk-neutral, so I'm open to accepting weak evidence of high upside. But there are a lot of other considerations to use when comparing charities, including the following:

  1. A larger multiplier - REG's multiplier is about 4x once you include opportunity costs (see the figures at https://80000hours.org/2016/12/has-80000-hours-justified-its-costs/#raising-for-effective-giving), whereas I've argued our multiplier at the margin is at least 15x. A toy version of this calculation is sketched after this list.

  2. 80k's main purpose is to solve talent gaps rather than funding gaps, and about 70% of the people who make plan changes don't donate, so probably most of the benefits of 80k aren't captured by the donation multiplier. We've also argued that talent gaps are more pressing than funding gaps. The nature of solving talent gaps means that your evidence will always be weak, so if donors only accept strong evidence, the EA community would never systematically deal with talent gaps.

  3. Better growth prospects - REG hasn't grown in the last few years, whereas we've grown 20-fold; the remaining upside is also much better with 80k (for the reasons in the post).

  4. Better movement building benefits. 80k is getting hundreds of people into the EA movement, who are taking key positions at EA orgs, have valuable expertise etc. REG has only got a handful of poker players into EA.

  5. 80k produces valuable research for the movement, REG doesn't.

  6. Donations to REG might funge with the rest of EAS, which mostly works on stuff with similar or even weaker evidence than 80k (EA movement building in German-speaking countries, Foundational Research Institute, animal-focused policy advocacy).

  7. Larger room for more funding. REG has been struggling to find someone to lead the project, so has limited room for funding. I'm keen to see that REG gets enough to cover their existing staff and pay for an Executive Director, but beyond that, 80k is better able to use additional funds.
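As a rough sketch of how a multiplier like the 4x figure is computed once opportunity costs are included (all dollar amounts below are invented for illustration, not REG's actual numbers):

```python
# Invented figures for a donation-multiplier sketch.
money_moved = 2_000_000           # dollars counterfactually moved to effective charities
org_budget = 400_000              # dollars spent running the organisation
staff_opportunity_cost = 100_000  # value of what staff would otherwise have produced

multiplier = money_moved / (org_budget + staff_opportunity_cost)
print(f"{multiplier:.0f}x")       # 4x
```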

On your specific criticisms of the donation multiplier calculation via getting people to pledge:

These people would not have signed the pledge without 80K.

We ask them whether they would have taken it otherwise, and don't count people who said yes. More discussion here: http://effective-altruism.com/ea/154/thoughts_on_the_meta_trap/9hk And here: https://80000hours.org/2016/12/has-80000-hours-justified-its-costs/#giving-what-we-can-pledges

These people would not have done something similarly or more valuable otherwise.

It seems like people take the GWWC pledge independently from their choice of job, so there isn't an opportunity cost problem here.

The GWWC pledge is as valuable as GWWC claims it is.

I'm persuaded by GWWC's estimates: https://www.givingwhatwecan.org/impact/. In fact, I think they undercount the value, because they ignore the chance of GWWC landing a super-large donor, which seems likely to happen in the next 10 years. GWWC/CEA has already advised multiple billionaires.

On the opportunity cost point:

When someone switches from (e.g.) earning to give to direct work, 80K adds this to its impact stats. When someone else switches from direct work to earning to give, 80K also adds this to its impact stats.

Most of the plan changes are people moving from "regular careers" to "effective altruist style careers", rather than people who are already EAs switching between great options.

First, if someone switched from one good option to another good option, we may not count it as a plan change in the first place. Second, we wouldn't give it an equally high "impact rating".

In specific cases, we also say what the counterfactuals were e.g. see the end of the section on earning to give here: https://80000hours.org/2016/12/has-80000-hours-justified-its-costs/#the-earning-to-give-community

I would like to see more effort on 80K's part to figure out whether its plan changes are actually causing people to do more good.

In terms of whether people are going into the right options, our core purpose is research into which careers are highest impact, so that's where most of our effort already goes.

In terms of the specific cases, there's more detail here: https://80000hours.org/2016/12/has-80000-hours-justified-its-costs/

Questionable marketing tactics.

We're highly concerned by this, and think about it a lot. We haven't seen convincing evidence that we've been turning lots of people off (the counterfactuals are very unclear!). Our bounce rates etc. are if anything better than those of standard websites, and I expect most people leave simply because they're not interested. Note as well that our intro advice isn't EA-branded, which makes it much less damaging for the rest of the movement if we do make a bad impression.

We've also thought a lot about the analogy with GiveWell, and even discussed it with them. I think there are some important differences. Our growth has also been faster than GiveWell's over the last few years.

Do we have any reason to believe that 80K will cause more organizations to be created, and that they will be as effective as the ones it contributed to in the past?

4 new organisations seems like a pretty good track record - it's about one per year.

I expect more because we actively encourage people to set up new EA organisations (or work for the best existing ones) as part of our career advice.

Also note that our impact doesn't rest on this - we have lots of other pathways to impact.

but writing new articles has diminishing utility as you start to cover the most important ideas.

You can also get increasing returns. The more articles you have, the more impressive the site seems, so the more convincing it is. More articles can also bring the domain more traffic, which benefits all the other articles. More articles mean you cover more situations, which makes a larger fraction of users happy, increasing your viral coefficient.
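As a toy model of those increasing returns (the link between article count and the viral coefficient is assumed, and all numbers are invented):

```python
# Hypothetical: each extra article slightly raises the viral coefficient k,
# the number of new readers each existing reader brings in per month.
readers = 1_000
for articles in (50, 60, 70):
    k = 1.0 + 0.002 * articles   # assumed coverage-to-virality link
    print(articles, round(readers * k ** 12))
# 50 -> 3138, 60 -> 3896, 70 -> 4818: readership grows faster than
# linearly in article count, i.e. increasing returns.
```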

You need to convince me that it has sufficiently high leverage that it does more good than the single best direct-work org, and it has higher leverage than any other meta org.

Yes. Though where it gets tricky is making the assessment at the margin.

Comment author: rohinmshah  (EA Profile) 25 December 2016 06:36:06PM 3 points

Yes. Though where it gets tricky is making the assessment at the margin.

I was wondering about this too. Is your calculation of the marginal cost per plan change just the costs for 2016 divided by the plan changes in 2016? That doesn't seem to be an assessment at the margin.
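To illustrate the distinction with invented year totals (not 80k's actual figures): an average divides one year's costs by that year's plan changes, while a marginal estimate compares the changes between years.

```python
# Invented figures to distinguish average from marginal cost per plan change.
costs = {2015: 200_000, 2016: 300_000}    # dollars spent per year
plan_changes = {2015: 800, 2016: 1_100}   # plan changes per year

average_2016 = costs[2016] / plan_changes[2016]
marginal = (costs[2016] - costs[2015]) / (plan_changes[2016] - plan_changes[2015])
print(round(average_2016), round(marginal))   # 273 333
# In this toy case the marginal cost per plan change (~$333) is well
# above the 2016 average (~$273).
```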

Comment author: MichaelPlant 23 December 2016 02:08:18PM 0 points

Just a small comment. Shouldn't we really be calling this worry about 'movement building' rather than 'meta'? Meta to me means things like cause prioritisation.

Comment author: rohinmshah  (EA Profile) 23 December 2016 07:08:17PM 0 points

Yeah, I didn't have a great term for it, so I just went with the term that was used previously and made sure to define what I meant by it. I think this is a little broader than movement building -- I like the suggestion of "promotion traps" above.
