Comment author: MichaelDello 14 March 2017 11:18:38AM 0 points

Thanks for this John. I agree that even if you use some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co don't consider this possibility when they argue for the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to X-risk, assuming those lives are all (or mostly) positive.

Comment author: MichaelPlant 14 March 2017 05:09:38PM 0 points

Hello Michael,

I think the key point of John's argument is that he's departing from classical utilitarianism in a particular way. That way is to say future happy lives have no value, but future bad lives have negative value. The rest of the argument then follows.

Hence John's argument isn't a dissent from any of the empirical predictions about the future. The idea is that you, the ANU, can agree with Bostrom et al. about what will actually happen, but disagree about how good it is.

Comment author: rohinmshah 12 March 2017 04:05:33AM 4 points

My main question when I read the title of this post was "Why do I expect that there are ethical issues that require a fast reaction time?" Having read the body, I still have the same question. The bystander effect counts, but are there any other cases? What should I learn from this besides "Try to eliminate the bystander effect"?

> But other times you will find out about a threat or an opportunity just in time to do something about it: you can prevent some moral dilemmas if you act fast.

Examples?

> Sometimes it's only possible to do the right thing if you do it quickly; at other times the sooner you act, the better the consequences.

Examples?

> Any time doing good takes place in an adversarial environment, this concept is likely to apply.

Examples? One example I came up with was negative publicity for advocacy of any sort, but you don't make any decisions about ethics in that scenario.

Comment author: MichaelPlant 13 March 2017 10:13:51PM 1 point

I agree with Rohinmshah. I can see how reaction time could be important, but I don't think you've demonstrated that this is actually the case.

One case I can think of where you'd have to give a time-pressured ethical view is debating, but I'm not sure how high stakes that really is.

Comment author: JamesSnowden 11 March 2017 02:56:24PM 4 points

I would deprioritise looking at BasicNeeds (in favour of StrongMinds). They use a franchised model and aren't able to provide financials for all their franchisees. This makes it very difficult to estimate cost-effectiveness for the organisation as a whole.

The GWWC research page is out of date (it was written before StrongMinds' internal RCT was released), and I would now recommend StrongMinds above BasicNeeds on the basis of its greater transparency and focus on cost-effectiveness.

Comment author: MichaelPlant 13 March 2017 09:57:34PM 0 points

Very interesting that you say this. I recently suggested to Basic Needs' CEO that he get in contact with GW, and hopefully this will lead to BN focusing more on cost-effectiveness and transparency.

Did you and I not discuss the StrongMinds RCT ages ago? I thought we agreed it was too good to be true and we really wanted to see something independent, but maybe I misremember/was talking to someone else. If it's the case that the best evidence for mental health in the developing world is an internal RCT, that shows 1. how far behind mental health is and 2. the urgent need for a better evidence base.

Comment author: MichaelPlant 13 March 2017 09:54:00PM 4 points

Great post John. I don't think I'd seen the long-term implications of the neutrality intuition pointed out elsewhere. Most people who endorse it seem to think it permits focusing on the present, which I agree isn't correct.

Comment author: ThomasSittler 11 March 2017 11:04:10AM 0 points

To discuss Lovisa Tengberg's post on StrongMinds, please use this thread.

Comment author: MichaelPlant 11 March 2017 01:38:21PM 2 points

Lovisa, have you looked into Basic Needs? http://www.basicneeds.org/

When I spoke to Eric Gastfriend about the Harvard EA report a while ago, I asked why StrongMinds and BasicNeeds weren't on the list. As far as I recall, they just hadn't looked at them, rather than having looked at them and decided they were bad options.

I'd also be really curious to have someone do a cost-effectiveness comparison for Action for Happiness. http://www.actionforhappiness.org/ The thought is that it might be more effective, if happiness is your goal, to fund broad but shallow happiness education programmes for the general public, rather than funding deep mental health interventions for a few people.

I have no idea how the numbers would come out and would probably be biased (disclaimer: I know some of the people at both orgs and might do some work for Action for Happiness at some point). Hence it would be great to get some fresh eyes on the topic.

Comment author: MichaelPlant 11 March 2017 01:28:39PM 0 points

Oh. It's also ambiguous whether you want us to discuss the blog posts you linked here on the EA Forum or on the OxPrio website where they are posted.

Comment author: MichaelPlant 11 March 2017 01:27:04PM 2 points

Thanks for this. It was great, particularly hearing about how people think about things rather than just the outcomes reached.

A couple of comments.

  1. Could you explain what you meant about beliefs? I'm unclear on what you think a belief is and on what would be a good account of forming, having, or reporting beliefs. This isn't a critical comment asking you to produce a full theory of mind; it's more that what you say sounds interesting but unclear, and I'd like you to expand.

  2. Reading this, I got the sense you were having to reinvent the philosophical wheel whilst trying to avoid doing so. You seem to be doing lots of what is straightforwardly, if implicitly, moral philosophy without making this explicit. Whilst that has an appeal - people don't agree in moral philosophy, and educating people is not really what OxPrio is about - I think it might just be easier to get people's assumptions on the table so you can see what follows from them.

As a couple of examples: if you're comparing GiveDirectly to MIRI, you have to make implicit assumptions about population axiology (i.e. how much future people matter). It's not that your view on future people is one part of the calculation; it basically is the whole calculation. Alternatively, if you're comparing AMF to GiveDirectly and just considering present people, that's going to be very substantially determined by your view about the badness of death.
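To make the first point concrete, here is a minimal sketch of how the axiology dominates such a comparison. Every number in it is an invented assumption purely for illustration, not an estimate of anything:

```python
# Toy expected-value comparison: an x-risk charity vs GiveDirectly.
# All numbers below are invented assumptions, purely for illustration.

future_lives = 1e15            # assumed number of potential future people
p_averted_per_dollar = 1e-14   # assumed marginal x-risk reduction per dollar
value_per_present_life = 1.0   # a present life helped counts fully

# The contested parameter: how much does a future life count?
for value_per_future_life in (1.0, 0.0):  # total view vs person-affecting view
    ev_xrisk = future_lives * p_averted_per_dollar * value_per_future_life
    ev_givedirectly = 1e-3 * value_per_present_life  # assumed welfare per dollar now
    print(value_per_future_life, ev_xrisk, ev_givedirectly)

# With the total view the x-risk term (10.0) swamps GiveDirectly (0.001);
# with the person-affecting view it is exactly 0. The axiology isn't one
# input among many - it flips the entire answer.
```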

I wonder if it would help to run through some candidate theories in moral philosophy so that people can use that to form part of their model, rather than having to generate a new theory for themselves on the fly.

A further thought: it would be really nice to get a handle on which prioritisation questions are truly empirical and which are philosophical.

Excellent stuff, look forward to reading more.

Comment author: MichaelPlant 05 March 2017 08:04:21PM 2 points

This doesn't apply to me because I'm not a US citizen, but if I were able to do this, I'd first want to know more about where the money would be likely to go without my intervention.

Comment author: John_Maxwell_IV 01 March 2017 11:52:33PM 17 points

Interesting post.

I wonder if it'd be useful to make a distinction between the "relatively small number of highly engaged, highly informed people" vs "insiders".

I could easily imagine this causal chain:

  1. Making your work open acts as an advertisement for your organization.

  2. Some of the people who see the advertisement become highly engaged & highly informed about your work.

  3. Some of the highly engaged & informed people form relationships with you beyond public discourse, making them "insiders".

If this story is true, public discourse represents a critical first step in a pipeline that ends with the creation of new insiders.

I think this story is quite plausibly true. I'm not sure the EA movement would have ever come about without the existence of GiveWell. GiveWell's publicly available research regarding where to give was a critical part of the story that sold people on the idea of effective altruism. And it seems like the growth of the EA movement led to growth in the number of insiders, whose opinions you say you value.

I can easily imagine a parallel universe "Closed Philanthropy Project" with the exact same giving philosophy, but no EA movement that grew up around it due to a lack of publicly available info about its grants. In fact, I wouldn't be surprised if many foundations already had giving philosophies very much like OpenPhil's, but we don't hear about them because they don't make their research public.

> I didn't quite realize when I signed up just what it meant to read ten thousand applications a year. It's also the most important thing that we do because one of Y Combinator's great innovations was that we were one of the first investors to truly equalize the playing field to all companies across the world. Traditionally, venture investors only really consider companies who come through a personal referral. They might have an email address on their site where you can send a business plan to, but in general they don't take those very seriously. There's usually some associate who reads the business plans that come in over the transom. Whereas at Y Combinator, we said explicitly, "We don't really care who you know or if you don't know anyone. We're just going to read every application that comes in and treat them all equally."

Source. Similarly, Robin Hanson thinks that a big advantage academics have over independent scholars is the use of open competitions rather than personal connections in choosing people to work with.

So, a power law distribution in commenter usefulness isn't sufficient to show that openness lacks benefits.

As an aside, I hadn't previously gotten a strong impression that OpenPhil's openness was for the purpose of gathering feedback on your thinking. GiveWell was open with its research for the purpose of advising people where to donate. I guess now that you are partnering with Good Ventures, that is no longer a big goal. But if the purpose of your openness has changed from advising others to gathering advice yourself, this could probably be made more explicit.

For example, I can imagine OpenPhil publishing a list of research questions on its website for people in the EA community to spend time thinking & writing about. Or highlighting feedback that was especially useful, to reinforce the behavior of leaving feedback/give examples of the kind of feedback you want more of. Or something as simple as a little message at the bottom of every blog post saying you welcome high quality feedback and you continue to monitor for comments long after the blog post is published (if that is indeed true).

Maybe the reason you are mainly gathering feedback from insiders is simply that only insiders know enough about you to realize that you want feedback. I think it's plausible that the average EA puts commenting on OpenPhil blog posts in the "time wasted on the internet" category, and it might not require a ton of effort to change that.

To relate back to the Y Combinator analogy, I would expect that Y Combinator gets many more high-quality applications through the form on its website than the average VC firm does, and this is because more people think that putting their info into the form on Y Combinator's website is a good use of time. It would not be correct for a VC firm to look at the low quality of the applications they were getting through the form on their website and infer that a startup funding model based on an online form is surely unviable.

More broadly speaking this seems similar to just working to improve the state of online effective altruism discussion in general, which maybe isn't a problem that OpenPhil feels well-positioned to tackle. But I do suspect there is relatively low-hanging fruit here.

Comment author: MichaelPlant 05 March 2017 07:48:06PM 1 point

I'd like to build on the causal chain point. I think there's something unsatisfying about the way Holden's set up the problem.

I took the general thought as: "we don't get useful comments from the general public; we get useful comments from those few people who read lots of our stuff and then talk to us privately". But suppose the general way things work is that 1. people read the OPP blog (public) and then 2. talk to OPP privately (perhaps because they don't believe anyone takes public discourse seriously). Since doing 2 means you are no longer part of the general public, almost by definition public discourse isn't going to look useful: those motivated enough to engage in private correspondence are no longer counted as part of public discourse!

Maybe I've misunderstood something, but it seems very plausible to me that the public discourse generates those useful private conversations even if the useful comments don't happen on public forums themselves.

I'm also uncertain whether the EA Forum counts as public discourse, which Holden doesn't expect to be useful, or private discourse, which might be; this puts pressure on the general point. If you typify 'public discourse' as 'talking to people who don't know much', then of course you wouldn't expect it to be useful.

Comment author: [deleted] 26 December 2016 07:45:26PM 1 point

Imagine two worlds:

In world 1, Alice is born. She sleeps under bednets, lives, and goes on to have children of her own 15-45 years after her birth. Alice's children have some more children, and they have some more, and more… And by the time our universe dies, or the Earth is destroyed, or humans are no more, or humans stop having children, a million people with Alice's genes will have lived.

In world 2, Alice is born. She doesn't get a bednet and dies from malaria at age 4. Some (0-15) years after Alice's birth her parents have more children; on average, one more child. That child, too, eventually has children of their own, but about 10 years later than Alice would have. Because of that, there will be only 500,000 people with this child's genes. The total human population throughout the future will be smaller, and therefore there will be fewer utilons (assuming total utilitarianism and some other ethical systems).
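To see how the delay alone shrinks the total, here is a minimal toy model of the two worlds. Every parameter (the fixed time horizon, the generation gap, the constant number of children) is an illustrative assumption, not a demographic estimate:

```python
# Toy model: total people ever born in one lineage before a fixed horizon.
# All parameters are illustrative assumptions, not demographic estimates.

def lineage_population(horizon, gap, kids_per_parent, start=0):
    """Sum the sizes of all generations born before the horizon (in years)."""
    total, cohort, year = 0.0, 1.0, start
    while year < horizon:
        total += cohort                # count this generation
        cohort *= kids_per_parent / 2  # two parents per child
        year += gap                    # next generation is born
    return total

world_1 = lineage_population(horizon=510, gap=25, kids_per_parent=3)            # Alice
world_2 = lineage_population(horizon=510, gap=25, kids_per_parent=3, start=10)  # sibling, 10 yrs late
print(world_1, world_2)  # ~9974 vs ~6649: the delay drops the (largest) final generation
```

On these made-up numbers the ten-year delay cuts the lineage by about a third; with different assumptions it could easily be the factor of two suggested above.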

How come these researchers don't calculate stuff like this? I wish they did. It seems extremely important, even though I haven't figured out which population ethics I prefer.

I am not claiming that AMF is actually better than everything else. I am just making an objection, and hopefully someone will research different charities' impact on population size and happiness from now till the end of time.

Comment author: MichaelPlant 05 January 2017 10:46:01AM 0 points

Some researchers do consider this sort of thing, such as Bostrom: http://www.nickbostrom.com/astronomical/waste.html

As I argued, though, if you care about total utils in this impersonal sense, you should probably support work on x-risk, not AMF.
