Comment author: MichaelPlant 12 April 2018 10:16:26AM 16 points [-]

However, we can also err by thinking about a too narrow reference class

Just to pick up on this, a worry I've had for a while - which I don't think I'm going to do a very good job of explaining here - is that the reference class people use is "current EAs" rather than "current and future EAs". To explain: when I started to get involved in EA back in 2015, 80k's advice, in caricature, was that EAs should become software developers or management consultants and earn to give, whereas research roles, such as becoming a philosopher or historian, were low priority. Now the advice has, again in caricature, swung the other way: management consultancy looks very unpromising, and people are being recommended to do research. There's even occasional discussion (see MacAskill's 80k podcast) that, on the margin, philosophers might be useful. If you'd taken 80k's advice seriously and gone into consultancy, it seems you would have done the wrong thing. (Objection, imagining Wiblin's voice: but what about personal fit? We talked about that. Reply: if personal fit does all the work - i.e. "just do the thing that has the greatest personal fit" - then there's no point in making more substantive recommendations.)

I'm concerned that people will funnel themselves into jobs that are high-priority now, in which they have only a small comparative advantage over other EAs, rather than jobs in which they will later have a much bigger comparative advantage over other EAs. At the present time, the conversation is about EA needing more operations roles. Suppose two EAs, C and D, are thinking about what to do. C realises he's 50% better than D at ops and 75% better at research, so C goes into ops because that's higher priority. D goes into research. Time passes and the movement grows. E now joins. E is better than C at ops. The problem is that C has taken an ops role and it's much harder for C to transition to research. C only has a comparative advantage at ops in the first time period; thereafter he doesn't. Overall, it looks like C should just have gone into research, not ops.

In short, our comparative advantage is not fixed, but will change over time simply based on who else shows up. Hence we should think about comparative advantage over our lifetimes rather than the shorter term. This likely changes things.

Comment author: Michael_PJ 12 April 2018 08:46:53PM 1 point [-]

This is a good point, although talent across time is comparatively harder to estimate. So "act according to present-time comparative advantage" might be a passable approximation in most cases.

We also need to consider the interim period when thinking about trades across time. If C takes the ops job, then in the period between C taking the job and E joining the movement, we get better ops coverage. It's not immediately obvious to me how this plays out; it might need a little bit of modelling.
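Here's a minimal sketch of what that modelling might look like, with entirely invented productivity numbers and an assumed premium on ops output before E arrives (none of these figures come from the comments above):

```python
# Toy model of the C/D/E story: all numbers are made up.
# Ops output is weighted 2x before E arrives (it's the high-priority
# role right now) and 1x afterwards.

OPS = {"C": 1.5, "D": 1.0, "E": 2.0}        # productivity in ops
RESEARCH = {"C": 1.75, "D": 1.0, "E": 1.0}  # productivity in research

def total_value(roles, horizon, e_arrives=3):
    """Sum weighted output over `horizon` periods; E joins at `e_arrives`."""
    total = 0.0
    for t in range(1, horizon + 1):
        ops_weight = 2.0 if t < e_arrives else 1.0
        people = ["C", "D"] + (["E"] if t >= e_arrives else [])
        for person in people:
            if roles[person] == "ops":
                total += ops_weight * OPS[person]
            else:
                total += RESEARCH[person]
    return total

# Strategy 1: C locks into ops now, so E (the better ops person) ends up in research.
s1 = {"C": "ops", "D": "research", "E": "research"}
# Strategy 2: C goes into research; D covers ops until E arrives and joins ops.
s2 = {"C": "research", "D": "ops", "E": "ops"}

for horizon in (2, 3, 10):
    print(horizon, total_value(s1, horizon), total_value(s2, horizon))
```

With these numbers, strategy 1 only wins on a two-period horizon (8.0 vs 7.5); once E arrives, strategy 2 pulls ahead (12.25 vs 11.5 at three periods, 45.5 vs 36.0 at ten). So the answer really does hinge on how long the interim period is relative to the movement's future.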

Comment author: JohnGalt 12 April 2018 01:45:05PM 1 point [-]

Hi Michael_PJ.

I am interested in working on the social benefit entrepreneurship problem. I drafted a white paper on the design of an organization to conduct socially beneficial entrepreneurship. I am a bit overexcited about it and am most likely blind to its flaws. I also have limited knowledge of the work done by others in this area. I am looking for reviewers to identify its flaws so I can bulletproof it prior to seeking funding. It seems like you have very good qualifications to review the paper. Would you be willing to look it over? If so, what is the best way to get it to you?

Comment author: Michael_PJ 12 April 2018 08:42:00PM 0 points [-]

I've DM'd you.

Comment author: MichaelPlant 23 January 2018 09:16:01PM 0 points [-]

FWIW, I think this is way too broad. Even if, a priori, systemic interventions are more cluelessness-y (?) than atomic ones, it's not that useful to talk about them as a category. It would be more useful to argue the toss on particular cases.

Comment author: Michael_PJ 24 January 2018 12:08:42AM 0 points [-]

Sure - I don't think "systematic change" is a well-defined category. The relevant distinction is "easy to analyze" vs "hard to analyze". But in the post you've basically just stipulated that your example is easy to analyze, and I think that's doing most of the work.

So I don't think we should conclude that "systematic changes look much more effective" - as you say, we should look at them case by case.

Comment author: WillPearson 23 January 2018 08:00:40PM 1 point [-]

There are some systemic reforms that seem easier to reason about than others. Getting governments to agree on a tax scheme such that the Googles and Facebooks of the world can't hide their profits seems like a pretty good idea. Their money piles suggest that they aren't hurting for cash to invest in innovation. It is hard to see the downside.

The upside is going to be smaller in the developing world than the developed (due to more profits occurring in the developed world), so it may not be ideal. The Tax Justice Network is something I want to follow more. They had a conversation with GiveWell.

Comment author: Michael_PJ 23 January 2018 09:03:33PM 0 points [-]

There's a sliding scale of what people consider "systematic reform". Often people mean things like "replace capitalism". I probably wouldn't even have classed drug policy reform or tax reform as "systematic reform", but it's a vague category. Of course the simpler ones will be easier to analyze.

Comment author: MichaelPlant 23 January 2018 08:29:04PM 0 points [-]

the core of my problem with "systematic reform" is that we're "clueless" about its effects - it could have good effects, but could also have quite bad effects, and it's extremely hard for us to tell which.

I think this can also apply to the atomic interventions EAs tend to like, namely those from GiveWell. You can tell a story about how GiveDirectly increases meat consumption, so that's bad. For life-saving charities, there's the same worry about meat, in addition to concerns about overpopulation. I'm not claiming we can't sensibly work through these and conclude they all do more good than bad, only that cluelessness isn't just a systemic-intervention worry.

Comment author: Michael_PJ 23 January 2018 09:01:28PM 0 points [-]

Frame it as a matter of degree if you like: I think we're drastically more clueless about systematic reform than we are about atomic interventions.

Comment author: MichaelPlant 22 January 2018 11:28:05PM 1 point [-]

These numbers are just illustrative, intended to get people thinking rather than to be taken literally.

Nevertheless, in some sense, it's not the 0.01 that's so important, it's the ratio between that and the GiveDirectly score. I'm assuming the intervention, whatever it is, has 1/50th of the effect GiveDirectly does. That seems pretty believable: a massive campaign to restructure the trade and subsidy system could do quite a bit to shift people out of poverty.

We could make the average effect 1/500th of the GD average effect and the mystery campaign would still be cost-effective up to $14.6bn. That's still a lot of sauce.
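To make the structure of that claim explicit: the ceiling cost is the spend at which the campaign's cost-effectiveness just matches GiveDirectly's, so (assuming the campaign's total effect is fixed) it scales linearly with the assumed effect size. A quick sketch using only the figures in this comment:

```python
# The $14.6bn figure is from the comment above; the linear scaling is an
# assumption about how the ceiling-cost calculation works, not a quote.
ceiling_at_1_in_500 = 14.6e9  # ceiling cost if effect is 1/500th of GD's

scale_up = (1 / 50) / (1 / 500)  # moving back to the 1/50th assumption
ceiling_at_1_in_50 = ceiling_at_1_in_500 * scale_up
print(f"${ceiling_at_1_in_50 / 1e9:.0f}bn")  # $146bn under the same assumptions
```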

But yes, if you don't think the intervention would do good, that would be a substantial reason to dodge it (presumably in favour of another systemic intervention).

Comment author: Michael_PJ 23 January 2018 06:57:02PM 0 points [-]

You seem to be assuming that the "bad case" for systematic reform is that it's, say, 1/500th of the benefit of the GD average effect. But I don't think that's the bad case for most systematic reforms: the bad case is that they're actively harmful.

For me, at least, the core of my problem with "systematic reform" is that we're "clueless" about its effects - it could have good effects, but could also have quite bad effects, and it's extremely hard for us to tell which.

I think the ceiling cost estimate is a nice way of framing the comparison, but I agree with Milan that the hard bit is working out the expected effect.

Comment author: MichaelPlant 13 January 2018 07:17:48PM 1 point [-]

Interesting thoughts, actually...

MH&HR.

What does the R stand for?

Comment author: Michael_PJ 18 January 2018 09:53:58PM 1 point [-]

"Mental Health and Happiness Research". Coin your own meaningless acronym if you don't like it :)

Comment author: MichaelPlant 12 January 2018 12:11:13AM *  9 points [-]

I worry you've missed the most important part of the analysis. If we think about what it means for a "new cause to be accepted by the effective altruism movement", that would probably be either:

  1. It becomes a cause area touted by EA organisations like GiveWell, CEA, or GWWC. In practice, this involves convincing the leadership of those organisations. If you want to get a new cause in via this route, that's the end goal you need to achieve; writing good arguments is a means to that end.

  2. You convince individual EAs to change what they do. To a large extent, this also depends on convincing EA-org leadership, because that's who people look to for confirmation that a new cause has been vetted. It isn't necessarily stupid for individual EAs to defer to expert judgement: they might think "Oh, well, if so-and-so aren't convinced about X, there's probably a reason for it".

This seems as good a time as any to re-plug the stuff I've done. I think these mostly meet your criteria, but fail in some key ways.

I first posted about mental health and happiness 18 months ago and explained why poverty is less effective than most think and mental health more effective. I think I was, at the time, lacking a particular charity recommendation though (I now think BasicNeeds and StrongMinds look like reasonable picks); I agree it's important that new cause suggestions have a 'shovel ready' project.

I argued you, whoever you are, probably don't want to donate to the Against Malaria Foundation. I explained why it's probably a mistake for EAs to focus too much on 'saving lives' at the expense of either 'improving lives' or 'saving humanity'.

Back in August I explained why drug policy reform should be taken seriously as a new cause. I agree that lacks a 'shovel ready' project too, but, if anything, I think there was too much depth and rigour there. I'm still waiting for anyone to tell me where my EV calculations have gone wrong and why drug policy reform wouldn't be more cost-effective than anything in GiveWell's repertoire.

Comment author: Michael_PJ 13 January 2018 12:32:59PM 4 points [-]

I think you're right that having "an organization" talking about X is necessary for X to reach "full legitimacy", but it's worth pointing out that many pioneers in new areas within EA just started their own orgs (ACE, MIRI etc.) rather than trying to persuade others to support them.

Having even a nominal "project" allows you to collaborate more easily with others and starts to build credibility that isn't just linked to you. I think perhaps you should just start MH&HR.

Comment author: JohnGreer 01 December 2017 04:10:34PM 1 point [-]

I agree that that's a possibility regarding problems and solutions, but I wish I saw it more in practice.

Re: certificates of impact. I talked to my team about this. One of my cofounders said:

“That's an interesting idea. It'd be really cool to create a currency that incentivized people to do good things and pay for good things! But it seems like coordinating that would be extremely difficult unless you had a central institution that was doling these things out. Otherwise how could anyone agree on the utility of anything? Like, I ate lunch, and therefore reduced my suffering from hunger, so do I get a certificate for that? Maybe you can make a certificate for anything, but it depends on who's willing to buy it. Say I cure cancer, and I make myself a certificate: “I cured cancer!” Someone buys it from me for $100. But then someone else wants to buy it for $1 million. So I end up with less money than the middleman who did nothing but bet on which certificates were worth something. And I don't know why people would want these certificates in the first place if they're so divorced from the actual deed on them that they have no value but bragging rights. I know people collect high-status bragging-type things all the time, but it seems kind of a stretch to say, “Hey, here's this new virtual thing! Want it!””

Do you have thoughts on this? We're seriously considering trying to implement something provided it would be useful.

Comment author: Michael_PJ 01 December 2017 10:36:45PM 0 points [-]

I think the easiest way to understand them is by analogy to carbon offsets, which are a kind of limited form of CoI that currently exists.

Carbon offsets are generally certified by an organization to say that they actually correspond to what happened, and that the issuer is not producing too many. I don't think there's a fundamental problem with allowing un-audited certificates in the market, but they'd probably be worth a lot less!

I think the middle man making money is exactly what you want to happen. The argument is much the same as for investment and speculation in the for-profit world: people who're willing to take on risk by investing in uncertain things can profit from it, thus increasing the supply of money to people doing risky things.

Here's a concrete example: suppose I want to start a new bednet-distributing charity. I think I can get them out for half the price of AMF. Currently, there is little incentive for me to do this, and I have to go and persuade grantmakers to give me money.

In the CoI world, I can go to a normal investor, and point out that AMF sells (audited) bednet CoIs for $X, and that I can produce them at half the cost, allowing us to undercut AMF. I get investment and off we go. So things behave just like they do in the for-profit world (which you may or may not think is good).
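To put toy numbers on that (all hypothetical; the comment's "$X" is left unspecified, so these figures are purely illustrative):

```python
# Hypothetical prices for the bednet CoI example; none of these figures
# come from AMF or the comment above.
amf_price = 5.00          # assumed price of one audited bednet CoI from AMF
my_cost = amf_price / 2   # I claim I can produce the same certificate for half that
my_price = 4.00           # undercut AMF while keeping a margin

investment = 100_000
certificates = int(investment / my_cost)  # bednet CoIs I can produce
revenue = certificates * my_price
profit = revenue - investment
print(certificates, revenue, profit)  # 40000 certificates, $160,000, $60,000
```

The $60,000 margin is what attracts the ordinary investor, and buyers get bednet CoIs 20% cheaper than AMF's: the same mechanics as for-profit competition.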

What you do need is for people to want to buy these and "consume" them. I think that's the really hard problem: getting people to treat them like "real" certificates.

Happy to talk about this more - PM me if you'd like to have a chat.

Comment author: Ben_Todd 27 November 2017 01:14:08AM 1 point [-]

Yes, there are other instrumental reasons to be involved in new tech. It's not only the money, but it also means you'll learn about the tech, which might help you spot new opportunities for impact, or new risks.

I also think I disagree with the reasoning. If you consider neglectedness over all time, then new tech is far more neglected, since people have only just started using it. With tech that has been around for decades, people have already had a chance to find all its best applications. E.g. when we interviewed biomedical researchers, several mentioned that breakthroughs often come when people apply new tech to a research question.

My guess is that there are good reasons for EAs to aim to be on the cutting edge of technology.

Comment author: Michael_PJ 27 November 2017 08:05:17PM 1 point [-]

Let me illustrate my argument. Suppose there are two opportunities, A and B. Each of them contributes some value at each time step after it's been taken.

In the base timeline, A is never taken, and B is taken at time 2.

Now, it is time 1 and you have the option of taking A or B. Which should you pick?

In one sense, both are equally neglected, but in fact taking A is much better, because B will be taken very soon, whereas A will not.

The argument is that new technology is more likely to be like B, and any remaining opportunities in old technology are more likely to be like A (simply because if they were easy to do, we would have expected someone to have done them already).

So even if most breakthroughs occur at the cutting edge, so long as we expect other people to make them soon, and they are not so big that we really value even a small speedup, it can be better to find things that are more "persistently" neglected. (I used to use "persistent neglectedness" and "temporary neglectedness" for these concepts, but I thought it was confusing.)
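To make the counterfactual arithmetic explicit, here's a minimal sketch with invented numbers (one unit of value per period, a ten-period horizon):

```python
# Toy version of the A/B example: all numbers are illustrative.
HORIZON = 10

def value_from(start):
    """Total value if an opportunity is taken at `start` (None = never taken)."""
    return 0 if start is None else HORIZON - start + 1

# Base timeline: A is never taken; someone else takes B at time 2.
base = value_from(None) + value_from(2)

# If I take A at time 1, B still gets taken at time 2 by someone else.
take_a = value_from(1) + value_from(2)
# If I take B at time 1 instead, A still never gets taken.
take_b = value_from(None) + value_from(1)

print(take_a - base)  # 10: taking A adds a full decade of value
print(take_b - base)  # 1: taking B only speeds things up by one period
```

The raw "neglectedness" of A and B at time 1 is identical; the entire difference comes from what the rest of the world would have done anyway.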
