Comment author: MichaelPlant 23 January 2018 09:16:01PM 0 points

FWIW, I think this is way too broad. Even if, a priori, systemic interventions are more subject to cluelessness than atomic ones, it's not that useful to talk about them as a category. It would be more useful to argue the toss on particular cases.

Comment author: Michael_PJ 24 January 2018 12:08:42AM 0 points

Sure - I don't think "systematic change" is a well-defined category. The relevant distinction is "easy to analyze" vs "hard to analyze". But in the post you've basically just stipulated that your example is easy to analyze, and I think that's doing most of the work.

So I don't think we should conclude that "systematic changes look much more effective" - as you say, we should look at them case by case.

Comment author: WillPearson 23 January 2018 08:00:40PM 1 point

There are some systemic reforms that seem easier to reason about than others. Getting governments to agree on a tax scheme such that the Googles and Facebooks of the world can't hide their profits seems like a pretty good idea. Their money piles suggest they aren't hurting for cash to invest in innovation. It is hard to see the downside.

The upside is going to be smaller in the developing world than the developed one (since more of the profits occur in the developed world), so it may not be ideal. The Tax Justice Network is something I want to follow more; they had a conversation with GiveWell.

Comment author: Michael_PJ 23 January 2018 09:03:33PM 0 points

There's a sliding scale of what people consider "systematic reform". Often people mean things like "replace capitalism". I probably wouldn't even have classed drug policy reform or tax reform as "systematic reform", but it's a vague category. Of course the simpler ones will be easier to analyze.

Comment author: MichaelPlant 23 January 2018 08:29:04PM 0 points

the core of my problem with "systematic reform" is that we're "clueless" about its effects - it could have good effects, but could also have quite bad effects, and it's extremely hard for us to tell which.

I think this can also apply to the atomic interventions EAs tend to like, namely those from GiveWell. You can tell a story about how GiveDirectly increases meat consumption, so that's bad. For life-saving charities, there's the same worry about meat, in addition to concerns about overpopulation. I'm not claiming we can't sensibly work through these and conclude they all do more good than bad, only that cluelessness isn't just a systemic-intervention worry.

Comment author: Michael_PJ 23 January 2018 09:01:28PM 0 points

Frame it as a matter of degree if you like: I think we're drastically more clueless about systematic reform than we are about atomic interventions.

Comment author: MichaelPlant 22 January 2018 11:28:05PM 1 point

These numbers are just illustrative, there to get people thinking rather than to be taken literally.

Nevertheless, in some sense it's not the 0.01 that's so important, it's the ratio between that and the GiveDirectly score. I'm assuming the intervention, whatever it is, has 1/50th of the effect GiveDirectly does. That seems pretty believable: a massive campaign to restructure the trade and subsidy system could do quite a bit to shift people out of poverty.

We could make the average effect 1/500th of the GD average effect and the mystery campaign would still be cost-effective up to $14.6bn. That's still a lot of sauce.
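To make the ceiling-cost logic explicit, here's a minimal sketch of the arithmetic (the function and every input value below are placeholders of mine, not figures from the post; only the 1/50 and 1/500 ratios come from the discussion above):

```python
# Minimal sketch: the largest budget at which a campaign with some
# fraction of GiveDirectly's per-person effect still matches
# GiveDirectly's cost-effectiveness. All numbers are hypothetical.

def ceiling_budget(effect_ratio, people_reached,
                   gd_effect_per_person, gd_wellbeing_per_dollar):
    # Total benefit of the campaign, in the same well-being units
    # used for GiveDirectly.
    total_benefit = effect_ratio * gd_effect_per_person * people_reached
    # Spend up to the point where dollars-per-unit equals GD's.
    return total_benefit / gd_wellbeing_per_dollar

# The ceiling scales linearly with the effect ratio, so the 1/50
# scenario supports ten times the budget of the 1/500 one.
for ratio in (1 / 50, 1 / 500):
    print(ratio, ceiling_budget(ratio,
                                people_reached=500_000_000,
                                gd_effect_per_person=1.0,
                                gd_wellbeing_per_dollar=0.001))
```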

But yes, if you don't think the intervention would do good, that would be a substantial reason to dodge it (presumably in favour of another systemic intervention).

Comment author: Michael_PJ 23 January 2018 06:57:02PM 0 points

You seem to be assuming that the "bad case" for systematic reform is that it's, say, 1/500th of the benefit of the GD average effect. But I don't think that's the bad case for most systematic reforms: the bad case is that they're actively harmful.

For me, at least, the core of my problem with "systematic reform" is that we're "clueless" about its effects - it could have good effects, but could also have quite bad effects, and it's extremely hard for us to tell which.

I think the ceiling cost estimate is a nice way of framing the comparison, but I agree with Milan that the hard bit is working out the expected effect.

Comment author: MichaelPlant 13 January 2018 07:17:48PM 0 points

Interesting thoughts, actually...

What does the R stand for?

Comment author: Michael_PJ 18 January 2018 09:53:58PM 0 points

"Mental Health and Happiness Research". Coin your own meaningless acronym if you don't like it :)

Comment author: MichaelPlant 12 January 2018 12:11:13AM 8 points

I worry you've missed the most important part of the analysis. If we think about what it means for a "new cause to be accepted by the effective altruism movement", that would probably be either:

  1. It becomes a cause area touted by EA organisations like GiveWell, CEA, or GWWC. In practice, this involves convincing the leadership of those organisations. If you want to get a new cause in via this route, that's the end goal you need to achieve; writing good arguments is a means to that end.

  2. You convince individual EAs to change what they do. To a large extent, this also depends on convincing EA-org leadership, because that's who people look to for confirmation that a new cause has been vetted. It isn't necessarily stupid for individual EAs to defer to expert judgement: they might think "Oh, well, if so-and-so aren't convinced about X, there's probably a reason for it".

This seems as good a time as any to re-plug the stuff I've done. I think these mostly meet your criteria, but fail in some key ways.

I first posted about mental health and happiness 18 months ago, explaining why poverty is a less effective cause than most people think and mental health a more effective one. I think I was, at the time, lacking a particular charity recommendation (I now think BasicNeeds and StrongMinds look like reasonable picks); I agree it's important that new cause suggestions have a 'shovel-ready' project.

I argued you, whoever you are, probably don't want to donate to the Against Malaria Foundation. I explained why it's probably a mistake for EAs to focus too much on 'saving lives' at the expense of either 'improving lives' or 'saving humanity'.

Back in August I explained why drug policy reform should be taken seriously as a new cause. I agree that lacks a shovel-ready project too, but, if anything, I think there was too much depth and rigour there. I'm still waiting for anyone to tell me where my EV calculations have gone wrong and why drug policy reform wouldn't be more cost-effective than anything in GiveWell's repertoire.

Comment author: Michael_PJ 13 January 2018 12:32:59PM 3 points

I think you're right that having "an organization" talking about X is necessary for X to reach "full legitimacy", but it's worth pointing out that many pioneers in new areas within EA just started their own orgs (ACE, MIRI etc.) rather than trying to persuade others to support them.

Having even a nominal "project" allows you to collaborate more easily with others and starts to build credibility that isn't just linked to you. I think perhaps you should just start MH&HR.

Comment author: JohnGreer 01 December 2017 04:10:34PM 1 point

I agree that that's a possibility regarding problems and solutions, but I wish I saw it more in practice.

Re: certificates of impact. I talked to my team about this. One of my cofounders said:

“That's an interesting idea. It'd be really cool to create a currency that incentivized people to do good things and pay for good things! But it seems like coordinating that would be extremely difficult unless you had a central institution that was doling these things out. Otherwise how could anyone agree on the utility of anything? Like, I ate lunch, and therefore reduced my suffering from hunger, so do I get a certificate for that? Maybe you can make a certificate for anything, but it depends on who's willing to buy it. Say I cure cancer, and I make myself a certificate. “I cured cancer!” Someone buys it from me for $100. But then someone else wants to buy it for $1 million. So I end up with less money than the middle man who did nothing but bet on which certificates were worth something. And I don't know why people would want these certificates in the first place if they're so divorced from the actual deed on them that they have no value but bragging rights. I know people collect high-status bragging-type things all the time, but it seems kind of a stretch to say, “Hey, here's this new virtual thing! Want it!””

Do you have thoughts on this? We're seriously considering trying to implement something provided it would be useful.

Comment author: Michael_PJ 01 December 2017 10:36:45PM 0 points

I think the easiest way to understand them is by analogy to carbon offsets, which are a kind of limited form of certificate of impact (CoI) that currently exists.

Carbon offsets are generally certified by an organization to say that they actually correspond to what happened, and that the issuer is not producing too many. I don't think there's a fundamental problem with allowing un-audited certificates in the market, but they'd probably be worth a lot less!

I think the middle man making money is exactly what you want to happen. The argument is much the same as for investment and speculation in the for-profit world: people who're willing to take on risk by investing in uncertain things can profit from it, thus increasing the supply of money to people doing risky things.

Here's a concrete example: suppose I want to start a new bednet-distributing charity. I think I can get them out for half the price of AMF. Currently, there is little incentive for me to do this, and I have to go and persuade grantmakers to give me money.

In the CoI world, I can go to a normal investor, and point out that AMF sells (audited) bednet CoIs for $X, and that I can produce them at half the cost, allowing us to undercut AMF. I get investment and off we go. So things behave just like they do in the for-profit world (which you may or may not think is good).

What you do need is for people to want to buy these and "consume" them. I think that's the really hard problem, getting people to treat them like "real" certificates.
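As a toy illustration of those mechanics (the dataclass, the names, and the prices are all invented; the $100 and $1m figures just mirror the cancer example above):

```python
# Toy model of a certificate changing hands: issuance, early sale,
# and resale by a speculator. Everything here is illustrative.
from dataclasses import dataclass

@dataclass
class Certificate:
    deed: str       # what was done
    issuer: str     # who claims to have done it
    audited: bool   # whether an auditor vouches for the claim
    owner: str      # current holder

def trade(cert, buyer, price, ledger):
    """Record a sale on the ledger and transfer ownership."""
    ledger.append((cert.deed, cert.owner, buyer, price))
    cert.owner = buyer

ledger = []
cert = Certificate(deed="cured cancer", issuer="researcher",
                   audited=True, owner="researcher")

trade(cert, "speculator", 100, ledger)            # early, risky purchase
trade(cert, "philanthropist", 1_000_000, ledger)  # resale once value is clear

for deed, seller, buyer, price in ledger:
    print(f"{seller} -> {buyer}: '{deed}' for ${price:,}")
```

The speculator's profit here is the point, not a bug: the prospect of it is what pushes early prices up towards the certificate's eventual value, so the next researcher can sell early for something closer to what the work is worth.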

Happy to talk about this more - PM me if you'd like to have a chat.

Comment author: Ben_Todd 27 November 2017 01:14:08AM 1 point

Yes, there are other instrumental reasons to be involved in new tech. It's not only the money, but it also means you'll learn about the tech, which might help you spot new opportunities for impact, or new risks.

I also think I disagree with the reasoning. If you consider neglectedness over all time, then new tech is far more neglected, since people have only just started using it. With tech that has been around for decades, people have already had a chance to find all its best applications. For example, when we interviewed biomedical researchers, several mentioned that breakthroughs often come when people apply new tech to a research question.

My guess is that there are good reasons for EAs to aim to be on the cutting edge of technology.

Comment author: Michael_PJ 27 November 2017 08:05:17PM 1 point

Let me illustrate my argument. Suppose there are two opportunities, A and B. Each of them contributes some value at each time step after it's been taken.

In the base timeline, A is never taken, and B is taken at time 2.

Now, it is time 1 and you have the option of taking A or B. Which should you pick?

In one sense, both are equally neglected, but in fact taking A is much better, because B will be taken very soon, whereas A will not.
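Here's a minimal sketch of that toy model (the one-unit-per-step payoff and the ten-step horizon are arbitrary choices of mine):

```python
# Counterfactual value of taking an opportunity at time 1, when the
# opportunity pays 1 unit of value per time step once taken.

HORIZON = 10  # arbitrary number of time steps to evaluate over

def value_from(step_taken, horizon=HORIZON):
    """Total value if the opportunity is taken at step_taken
    (None means it is never taken)."""
    if step_taken is None:
        return 0
    return max(0, horizon - step_taken)

# Base timeline: A is never taken, B is taken at time 2.
base = value_from(None) + value_from(2)

# At time 1 we can take exactly one of them ourselves.
take_a = value_from(1) + value_from(2)     # B still happens at time 2
take_b = value_from(None) + value_from(1)  # A still never happens

print("gain from taking A:", take_a - base)  # 9: a whole value stream
print("gain from taking B:", take_b - base)  # 1: just a one-step speedup
```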

The argument is that new technology is more likely to be like B, and any remaining opportunities in old technology are more likely to be like A (simply because if they were easy to do, we would have expected someone to do them already).

So even if most breakthroughs occur at the cutting edge, so long as we expect other people to make them soon, and they are not so big that we really value even a small speedup, it can be better to find things that are more "persistently" neglected. (I used to use "persistent neglectedness" and "temporary neglectedness" for these concepts, but I thought it was confusing.)

Comment author: MichaelPlant 26 November 2017 10:53:12PM 1 point

Thanks very much for this. I just want to add a twist:

Counterintuitively, this suggests that you should stay away from new technologies: it is very likely that someone will try “machine learning for X” relatively soon, so it is unlikely to be neglected.

EAs don't have to stay away from new tech. You could plan to have impact by getting rich through being the first to build cutting-edge tech and then giving your money away; basically doing a variant of 'earn to give'. In this case your company wouldn't have done much good directly - because what you call the 'time advantage' would be so tiny - and the value would come from your donations. This presumes the owners of the company you beat wouldn't have given their money away.

Comment author: Michael_PJ 27 November 2017 07:58:02PM 1 point

Yes - I should have clarified that this is deliberately not addressing the "earning to give through entrepreneurship" route. I should have mentioned it, though, because it's quite important: I think for a lot of people it's going to be the best route.

Aside: if I think earning to give is so great, why have I been spending so much time talking about direct work? Because I think we need to do more exploration.

Comment author: Michiel 27 November 2017 06:20:34PM 0 points

One of the things I find hard is the externalities, because often there are tons of things that a company is influencing. For example, with Heroes & Friends (our company) we try to build a platform for social movements (NGOs, social enterprises, etc.) and we don't control who uses it. So it can be used for ineffective movements but also highly effective ones. However, we see a new society emerging where people take action themselves and take responsibility for improving their own community and helping other people too. So on the surface it might have less direct impact (depending on the users), but in the long term we want to be the marketplace of the 'informal economy', where people can 'harvest goodwill'. For this bottom-up economy to self-organize it needs a system or marketplace that provides the technology to do so, and we are basically building the best software for social movements to grow. But how would you include or exclude externalities? Which ones do you count and which ones do you leave out?

Is it a positive externality that more than 1 million people read good news stories and opportunities to act in their social media feeds because of our platform? Is it a negative externality that many projects are not optimized for 'doing the most good'? I'm just wondering how we could measure this for our own company, but also for many others, because I think a lot of data points should be included.

Comment author: Michael_PJ 27 November 2017 07:54:56PM 1 point

I think it's worth trying to have a toy model of this, even if it's mostly big boxes full of question marks. Going down to the gears level can be very helpful.

For example, it can help you answer questions like "how much good does doing X for one person have to do for this to be worth it?", or "how many people do we need to reach for this to be worth it?". You might also realise that all your expected impact comes from a certain class of thing, and then try and do more of that or measure it more carefully.
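For example, a toy model for a platform like this might look something like the following (a sketch of mine, not a recommendation; every parameter is one of those "question marks" and the units are arbitrary impact units):

```python
# Toy expected-impact model for a movement-building platform.
# All parameter values are placeholder question marks.

def expected_impact(projects, p_effective, value_if_effective,
                    p_harmful, harm_if_harmful,
                    readers, value_per_reader):
    # Direct term: projects hosted on the platform, some effective,
    # a few actively harmful (e.g. reputational damage).
    project_term = projects * (p_effective * value_if_effective
                               - p_harmful * harm_if_harmful)
    # Audience term: people who read good-news stories and calls to act.
    audience_term = readers * value_per_reader
    return project_term + audience_term

# Placeholder guesses let you ask questions like "how much value per
# reader would this need to create to be worth it?"
print(expected_impact(projects=1_000, p_effective=0.05,
                      value_if_effective=10_000.0,
                      p_harmful=0.01, harm_if_harmful=2_000.0,
                      readers=1_000_000, value_per_reader=0.01))
```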

Which externalities to include is a tough question! In most examples I think there are a few that are "obviously" the most important, but that's just pumping my intuition and probably missing some things. I think often this is a case of building out your "informal model" of the project: presumably you think it will be good, but why? What is it about the project that could be good (or bad)? If you can answer those questions you have at least a starting point.

One final thing: when I say "negative externality" I mean something that's actively bad. It seems unlikely that people using your platform for ineffective projects is bad, but rather neutral (since we think they're not very effective). What might be bad could be e.g. reputational damage from being associated with such things.
