Comment author: JohnGreer 01 December 2017 04:10:34PM 1 point [-]

I agree that that's a possibility regarding problems and solutions, but I wish I saw it more in practice.

Re: certificates of impact. I talked to my team about this. One of my cofounders said:

“That's an interesting idea. It'd be really cool to create a currency that incentivized people to do good things and pay for good things! But it seems like coordinating that would be extremely difficult unless you had a central institution that was doling these things out. Otherwise how could anyone agree on the utility of anything? Like, I ate lunch, and therefore reduced my suffering from hunger, so do I get a certificate for that? Maybe you can make a certificate for anything, but it depends on who's willing to buy it. Say I cure cancer, and I make myself a certificate: 'I cured cancer!' Someone buys it from me for $100. But then someone else wants to buy it for $1 million. So I end up with less money than the middleman who did nothing but bet on which certificates were worth something. And I don't know why people would want these certificates in the first place if they're so divorced from the actual deed on them that they have no value but bragging rights. I know people collect high-status bragging-rights things all the time, but it seems like a stretch to say, 'Hey, here's this new virtual thing! Want it!'”

Do you have thoughts on this? We're seriously considering trying to implement something provided it would be useful.

Comment author: Michael_PJ 01 December 2017 10:36:45PM 0 points [-]

I think the easiest way to understand this is by analogy to carbon offsets, which are a limited form of CoI that already exists.

Carbon offsets are generally certified by an organization, which attests that they actually correspond to what happened and that the issuer isn't producing too many of them. I don't think there's a fundamental problem with allowing unaudited certificates in the market, but they'd probably be worth a lot less!

I think the middleman making money is exactly what you want to happen. The argument is much the same as for investment and speculation in the for-profit world: people who are willing to take on risk by investing in uncertain things can profit from it, which increases the supply of money to people doing risky things.

Here's a concrete example: suppose I want to start a new bednet-distributing charity, and I think I can distribute bednets at half the price of AMF. Currently there is little incentive for me to do this, and I have to go and persuade grantmakers to give me money.

In the CoI world, I can go to a normal investor, and point out that AMF sells (audited) bednet CoIs for $X, and that I can produce them at half the cost, allowing us to undercut AMF. I get investment and off we go. So things behave just like they do in the for-profit world (which you may or may not think is good).

What you do need is for people to want to buy these and "consume" them. I think that's the really hard problem, getting people to treat them like "real" certificates.
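To make the mechanics concrete, here's a minimal sketch of the roles involved, in Python. Everything here is made up for illustration (the names, prices, and API are placeholders, not a proposal): a producer mints an audited certificate, a speculator buys it early and resells it, and a final buyer "consumes" it, much like retiring a carbon offset.

```python
# Illustrative sketch of a certificate-of-impact market. All names and prices
# are hypothetical; the point is only to show the producer/speculator/consumer roles.

from dataclasses import dataclass, field

@dataclass
class Certificate:
    description: str        # what was done, e.g. "10,000 bednets distributed"
    issuer: str             # who did the work
    audited: bool           # whether an auditor vouches for it
    owner: str              # current holder
    consumed: bool = False  # a consumed certificate can no longer be resold

@dataclass
class Market:
    certificates: list = field(default_factory=list)

    def mint(self, issuer, description, audited):
        cert = Certificate(description, issuer, audited, owner=issuer)
        self.certificates.append(cert)
        return cert

    def sell(self, cert, buyer, price):
        # Ownership transfers; the price is whatever buyer and seller agree on.
        assert not cert.consumed, "consumed certificates are off the market"
        cert.owner = buyer
        return price

    def consume(self, cert):
        # The final buyer "retires" the certificate, claiming the impact for themselves.
        cert.consumed = True


market = Market()
cert = market.mint("NewBednetOrg", "10,000 bednets distributed", audited=True)
market.sell(cert, buyer="speculator", price=5_000)       # speculator takes on early risk...
market.sell(cert, buyer="philanthropist", price=8_000)   # ...and profits on resale
market.consume(cert)  # the philanthropist consumes it, like retiring a carbon offset
```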

Happy to talk about this more - PM me if you'd like to have a chat.

Comment author: Ben_Todd 27 November 2017 01:14:08AM 1 point [-]

Yes, there are other instrumental reasons to be involved in new tech. It's not only the money, but it also means you'll learn about the tech, which might help you spot new opportunities for impact, or new risks.

I also think I disagree with the reasoning. If you consider neglectedness over all time, then new tech is far more neglected, since people have only just started using it. With tech that has been around for decades, people have already had a chance to find all its best applications. For example, when we interviewed biomedical researchers, several mentioned that breakthroughs often come when people apply new tech to a research question.

My guess is that there are good reasons for EAs to aim to be on the cutting edge of technology.

Comment author: Michael_PJ 27 November 2017 08:05:17PM 1 point [-]

Let me illustrate my argument. Suppose there are two opportunities, A and B. Each of them contributes some value at each time step after it has been taken.

In the base timeline, A is never taken, and B is taken at time 2.

Now, it is time 1 and you have the option of taking A or B. Which should you pick?

In one sense, both are equally neglected, but in fact taking A is much better, because B will be taken very soon, whereas A will not.

The argument is that new technology is more likely to be like B, and any remaining opportunities in old technology are more likely to be like A (simply because if they were easy to do, we would expect someone to have done them already).

So even if most breakthroughs occur at the cutting edge, so long as we expect other people to make them soon, and they are not so big that even a small speedup is really valuable, it can be better to find things that are more "persistently" neglected. (I used to use "persistent neglectedness" and "temporary neglectedness" for these concepts, but I thought it was confusing.)
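For concreteness, here's a toy numerical version of the A/B example. The per-step value and time horizon are made up; only the counterfactual comparison matters.

```python
# Toy version of the A/B example above. Values and horizon are placeholders.

HORIZON = 10        # how many time steps we count value over
VALUE_PER_STEP = 1  # each opportunity yields this much per step once taken

def total_value(taken_at, horizon=HORIZON):
    """Value produced by an opportunity taken at `taken_at` (None = never taken)."""
    if taken_at is None:
        return 0
    return VALUE_PER_STEP * (horizon - taken_at)

# Base timeline: A is never taken, B is taken at time 2.
baseline = total_value(None) + total_value(2)

# If you take A at time 1, B still gets taken at time 2 by someone else.
take_A = total_value(1) + total_value(2)

# If you take B at time 1, A still never gets taken; you only move B forward one step.
take_B = total_value(1) + total_value(None)

print("counterfactual value of taking A:", take_A - baseline)  # 9
print("counterfactual value of taking B:", take_B - baseline)  # 1
```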

Comment author: MichaelPlant 26 November 2017 10:53:12PM 1 point [-]

Thanks very much for this. I just want to add a twist to this:

Counterintuitively, this suggests that you should stay away from new technologies: it is very likely that someone will try “machine learning for X” relatively soon, so it is unlikely to be neglected.

EAs don't have to stay away from new tech. You could plan to have impact by getting rich by being the first to build cutting-edge tech and then giving your money away; basically doing a variant of 'earn to give'. In this case your company wouldn't have done much good directly - because what you call the 'time advantage' would be so tiny - and the value would come from your donations. This presumes the owners of the company you beat wouldn't have given their money away.

Comment author: Michael_PJ 27 November 2017 07:58:02PM 1 point [-]

Yes - I should have clarified that this post deliberately doesn't address the "earning to give through entrepreneurship" route. I should have mentioned it, because it's quite important: I think for a lot of people it's going to be the best route.

Aside: if I think earning to give is so great, why have I been spending so much time talking about direct work? Because I think we need to do more exploration.

Comment author: Michiel 27 November 2017 06:20:34PM 0 points [-]

One of the things I find hard is the externalities, because there are often tons of things that a company is influencing. For example, with Heroes & Friends (our company) we try to build a platform for social movements (NGOs, social enterprises, etc.), and we don't control who uses it. So it can be used for ineffective movements but also for highly effective ones. However, we see a new society emerging where people take action themselves and take responsibility for improving their own community and helping other people too. So on the surface the platform might have less direct impact (depending on the users), but in the long term we want to be the marketplace of the 'informal economy' where people can 'harvest goodwill'. In order for this bottom-up economy to self-organize, it needs a system or marketplace that provides the technology to do so, and we are basically building the best software for social movements to grow. But how would you include or exclude externalities? Which ones do you count and which ones do you leave out?

Is it a positive externality that more than 1 million people read good-news stories and see opportunities to act in their social media feeds because of our platform, or not? Is it a negative externality that many projects are not optimized for 'doing the most good'? I'm just wondering how we could measure this, for our own company but also for many others, because I think there would be a lot of data points to include.

Comment author: Michael_PJ 27 November 2017 07:54:56PM 1 point [-]

I think it's worth trying to have a toy model of this, even if it's mostly big boxes full of question marks. Going down to the gears level can be very helpful.

For example, it can help you answer questions like "how much good does doing X for one person have to do for this to be worth it?", or "how many people do we need to reach for this to be worth it?". You might also realise that all your expected impact comes from a certain class of thing, and then try and do more of that or measure it more carefully.
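As a sketch of what such a toy model might look like, here's a back-of-envelope version in Python. Every number below is a placeholder (a big box full of question marks), not an estimate of Heroes & Friends.

```python
# Deliberately crude toy model of the kind described above. All numbers are placeholders.

users_reached = 100_000   # people the platform reaches per year (guess)
fraction_who_act = 0.01   # fraction who take a meaningful action (guess)
benchmark = 50_000        # "units of good" the same effort could produce elsewhere (guess)

actions = users_reached * fraction_who_act

# Turn the question around: how much good must one action do for the project
# to beat the benchmark use of your time?
required_value_per_action = benchmark / actions
print(f"each action needs to be worth >= {required_value_per_action:g} units")  # 50
```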

Which externalities to include is a tough question! In most examples I think there are a few that are "obviously" the most important, but that's just pumping my intuition and probably missing some things. I think often this is a case of building out your "informal model" of the project: presumably you think it will be good, but why? What is it about the project that could be good (or bad)? If you can answer those questions you have at least a starting point.

One final thing: when I say "negative externality" I mean something that's actively bad. It seems unlikely that people using your platform for ineffective projects is actively bad - more likely it's neutral (since we think they're not very effective). What might be bad is, e.g., reputational damage from being associated with such things.

Comment author: JohnGreer 26 November 2017 06:04:40PM 2 points [-]

I've been thinking about similar things, as I co-founded a crypto startup that just got into an incubator. Our original idea was a personal finance app for cryptocurrency, but we started building it and realized it might be too early to make money because there wasn’t a market for it yet -- very few people use cryptocurrency as an actual currency! We have sort of the reverse of the problem most people have, where they have a great idea and just need someone to back it: we have the backing now but no good idea yet. We’ll probably build some moderately useful cryptocurrency financial tools and use them to get users while we explore the space more and potentially pivot. We're very interested in EA and life extension, but we don't seem to have any domain expertise that would allow us to work directly in that space without making money first and having more flexibility, like the serial entrepreneur you mention.

The problem of coming up with better ideas to work on seems like a difficult one. Having a master doc of world problems separated by domain, as you describe, would be really useful. For example, I could see my team going through it and seeing if blockchain tech could contribute anything. This doc from a comment in the plant-based startup post seems like a great representation: https://docs.google.com/document/d/1zCwLkwqwYzfzxwIm1-iHrvheRhMhbLYBxqEiz_7bHdE/edit That said, it almost seems that if people could formulate the problems that well, they might very well already be working on them.

Thank you for taking the time to write this up!

Comment author: Michael_PJ 27 November 2017 07:43:29PM 1 point [-]

I'm pretty interested in blockchain-based tools as platforms for improved institutions. For example, I'd love to see a well-thought-out implementation of certificates of impact.

I think that there is an important distinction between problems and solutions, so I'm optimistic that it could be possible to make a useful breakdown of problems without having to (or being able to) say much about solutions. However, I'm largely speculating.

Comment author: RyanCarey 27 November 2017 02:18:29PM 1 point [-]

This points to another feature of the landscape model: entrepreneurs are locally situated by their existing background knowledge, and this is part of what lets them do what they do. Attempts to “move” them are likely to both meet with resistance and be ultimately counterproductive.... So what we need is much more granular cause prioritization, ideally right down to the size of a problem that can be worked on by an individual or team.

Or maybe we just need to work with people who are actually cause neutral?

Comment author: Michael_PJ 27 November 2017 07:38:53PM 2 points [-]

It's true that entrepreneurs often aren't cause-neutral. They may be working in an area because it interests them, or because they care about it.

But often they're more constrained by what they can do: typically they start from their existing skills and domain, and then look for things they can fix in that area.

The second problem is, I think, more fundamental. We can try to convince people to be more cause-neutral (although the fewer population-level changes we need to make, the better!), but moving expertise and knowledge is just hard, per-individual work.


Towards effective entrepreneurship: what makes a startup high-impact?

Introduction: This post owes a great deal to prior work and thought by Spencer Greenberg, Eric Gastfriend, and Peter Hartree. This post is a summary of the object-level thought on what makes a startup high-impact which we developed while working on the Good Technology Project. A lot of this...

Towards effective entrepreneurship: Good Technology Project post-mortem

Introduction: This document aims to be two things: a summary of the things that we learned from the Good Technology Project (GTP), and a post-mortem of the project itself. I'm going to simply state my beliefs in this post, but I should clarify beforehand that I am not very certain...
Comment author: Michael_PJ 18 November 2017 12:13:35PM 1 point [-]

I am excited about this! I have some technical questions, but I'll save them until I've read part II.

Comment author: Michael_PJ 30 October 2017 10:36:58PM 2 points [-]

Great post!

I think the question of "how do we make epistemic progress" needs a bit more attention.

Continuing the analogy with the EMH (which I like), I think the usual answer is that there are some $20 bills on the floor, and that some individuals are either specially placed to see them, or have a higher risk tolerance and so are willing to take the risk that it's a prank.

This suggests similar cases in epistemics: some people really are better situated (the experts), but perhaps we should also consider the class of "epistemic risk-takers", who hold riskier beliefs in an attempt to get "ahead of the curve". However, as with startup founders, we should take what such people say with more than a pinch of salt. We may want to "fund" the ecosystem as a whole, because on average the one successful epistemic entrepreneur pays for the rest, but any given individual is still likely to do worse than the consensus.

So that suggests we should encourage people to think riskily, but usually discount their "risky" beliefs when making practical decisions until they have proven themselves. And this actually seems to reflect our behaviour: people are much more epistemically modest when money is on the table. Being more explicit about which beliefs are "speculative" seems like it would be an improvement, though.

Finally, we also have to ask how people become experts. Again, in the economic analogy, people end up well situated to start businesses often through luck, sometimes through canny positioning, and sometimes through irrational pursuit of an idea. Similarly, to become an expert in X one has to invest a lot of time and effort, but we may want people to speculate in this domain too, and become experts in unlikely things so that we can get good credences on topics that may, with low probability, turn out to be important.

(Meta: I was confused about whether to comment on LW2 or here. Cross-posting comments seems silly... or is it?)
