Comment author: casebash 17 November 2017 07:12:44AM 2 points [-]

I'm not sure how useful this data is given that there are major distribution effects, i.e., if I distribute the survey through Less Wrong, I'll find a lot of people who first heard of the movement through Less Wrong, etc.

Comment author: Peter_Hurford  (EA Profile) 17 November 2017 03:59:58PM 2 points [-]

Yep, that is an issue. One idea might be to look at the data for each referral source (e.g., how everyone who heard about the survey through Facebook heard about EA, then how everyone who heard about the survey through SlateStarCodex heard about EA, etc.).
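A minimal sketch of that cross-tabulation in pandas, with hypothetical column names ("referral_source", "heard_of_ea_via") and made-up rows, just to show the shape of the analysis:

```python
import pandas as pd

# Hypothetical survey data: how each respondent found the survey vs. how they first heard of EA.
responses = pd.DataFrame({
    "referral_source": ["Facebook", "Facebook", "SlateStarCodex", "LessWrong"],
    "heard_of_ea_via": ["LessWrong", "Friend", "SlateStarCodex", "LessWrong"],
})

# Share of each "heard of EA" answer within each referral source, so distribution
# effects can be inspected one referral channel at a time.
breakdown = pd.crosstab(
    responses["referral_source"],
    responses["heard_of_ea_via"],
    normalize="index",  # rows sum to 1: proportions within each referral source
)
print(breakdown)
```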

Comment author: Peter_Hurford  (EA Profile) 17 November 2017 05:34:03AM *  3 points [-]

Is it just me, or does the "excited altruism" frame sound perverse to anyone else? I can understand excitement about helping people, but it can easily sound like deriving excitement from other people being in unfortunate situations. Like if no one needed help, you'd be less excited?

...I find it hard to imagine people who just wish there was a building burning down somewhere nearby, so they could play the hero.

Comment author: Peter_Hurford  (EA Profile) 07 November 2017 04:44:01AM *  0 points [-]

An interesting comment on this piece from Amon Elders, posted here with permission:

I'm basing myself mostly on the Kauffman Foundation's 2012 venture capital report, "We Have Met the Enemy... and He Is Us". The data in your report is mostly from venture funds raised between 1980 and 2001, which was the most profitable period of VC investing. Afterwards, returns took a nosedive (see the Kauffman report). There seems to be one source in your report, Robinson and Sensoy (2011), that uses a database extending up to 2010, and its median figures are a PME of 0.82 and a 2% net IRR, which, given that this is an illiquid asset class, is bad. The average seems to be around a 1.0 PME and a 9% IRR. Note that this is 'up to' 2010; what really should be done is to remove the period between 1980 and 2000, as the distribution of the data has shifted. Moreover, it wouldn't surprise me if the downward trend in VC returns has continued over the past 7 years.

Either way, the median returns are quite bad, and my question was mostly why OpenPhil believes it can find these right opportunities, given that it seems almost impossible to predict whether a new venture capital fund is going to be successful. Which makes sense: anything beyond a ~5-year horizon seems almost impossible for humans to predict (I'm basing myself on the book Superforecasting here). And the data in venture capital investing is highly noisy, highly limited, and highly biased, and it might be the same for these high-impact, high-risk opportunities.
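For readers unfamiliar with the PME figures cited above, a rough sketch of the Kaplan-Schoar PME calculation with made-up cash flows (and ignoring any remaining NAV for simplicity); a PME below 1 means the fund underperformed the public index:

```python
def kaplan_schoar_pme(contributions, distributions, index_levels):
    """contributions[t], distributions[t]: fund cash flows at period t.
    index_levels[t]: public market index level at period t (same length).
    Both cash-flow streams are grown at the index's return to the final
    period; PME is the ratio of distributions to contributions in those terms."""
    final = index_levels[-1]
    fv_contrib = sum(c * final / idx for c, idx in zip(contributions, index_levels))
    fv_distrib = sum(d * final / idx for d, idx in zip(distributions, index_levels))
    return fv_distrib / fv_contrib

# Hypothetical fund: invest 100 now, receive 60 in period 3 and 70 in period 5,
# while the public index rises from 100 to 140 over the same window.
contributions = [100, 0, 0, 0, 0, 0]
distributions = [0, 0, 0, 60, 0, 70]
index_levels  = [100, 108, 115, 122, 130, 140]
print(round(kaplan_schoar_pme(contributions, distributions, index_levels), 2))  # ~0.99
```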

Comment author: Peter_Hurford  (EA Profile) 06 November 2017 03:17:28AM 5 points [-]

Thanks for your interest. We had to shut it down because Hackpad itself shut down. A copy of the Hackpad has been retained. We had been working on migrating it to another service.

Comment author: Peter_Hurford  (EA Profile) 28 October 2017 01:29:41AM *  5 points [-]

Awesome that you're launching and doing this! :)

Not that I disagree, but can you elaborate more on why you think the space is uncrowded enough for a new charity? Can you also elaborate on why you decided to create a new charity rather than join an existing one?

(Disclaimer: I'm affiliated with Charity Science Health and work closely with Joey Savoie, so this is more of a devil's advocate question, but I'm still genuinely curious about the thinking behind this.)

In response to Open Thread #39
Comment author: casebash 23 October 2017 11:53:40PM *  5 points [-]

LW 2.0 now exists. It's still in beta, with a significant number of bugs left to fix and many features that haven't been added yet, but at some point it will become stable enough that it would be reasonable to consider switching. I'm curious what people think about this. Just thought I'd flag it now.

In response to comment by casebash on Open Thread #39
Comment author: Peter_Hurford  (EA Profile) 26 October 2017 02:06:44PM 2 points [-]

I'd want to wait for more info on how LW 2.0 plays out. The EA Forum doesn't seem broken in any way right now, so I don't think there's a rush to switch over the backend.

In response to Open Thread #39
Comment author: ELW 26 October 2017 11:05:18AM 1 point [-]

I have a query regarding DALYs which I've been unable to find an answer to, but I suspect there is literature on it that I'd know of were I more familiar with econ/global health:

By my understanding, one of the main advantages of DALYs is that they capture the intuitively right action in cases like: "You may either extend person A's life by 5 healthy years or extend person B's life by 5 mediocre years (let's say they go blind due to the treatment)."

However, there seems to be no way of distinguishing the case where persons A and B both start off perfectly healthy and we may help the former more from the case where B is already blind and we may add "five years at their current state of well-being". This does not seem ideal.

Is there any talk or use of "marginal DALYs", for want of a better term, where the intervention is considered relative to the previous level of wellbeing? Alternatively, is it simply common practice to use QALYs in the kind of case I am concerned with?

In response to comment by ELW on Open Thread #39
Comment author: Peter_Hurford  (EA Profile) 26 October 2017 02:05:25PM 0 points [-]

However, there seems to be no way of distinguishing the case where persons A and B both start off perfectly healthy and we may help the former more

If you improve the number of years lived for a healthy person, that is "straightforward" on the DALY view -- it's +1 DALY for every extra year of life added.

The question of improving the quality of their life is a harder one -- I think the suggestion from the DALY framework is that if the person has perfect health, there isn't any way to improve the quality of their life (because it's already perfect). ...However, we know that's not actually true, because there is no DALY weight for getting tickets to see Hamilton, while I think that would improve nearly anyone's life. That's just an area where DALY metrics are incomplete, but you could extend the DALY framework that way, by asking people questions like "If you could get free Hamilton tickets, but accepting them came with a 1% chance of death, would you take the tickets?" (I'd probably take the tickets at a 0.005% chance of death.)
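As a quick illustration of how such an answer could be converted into an implicit weight, under my own assumptions (roughly 40 remaining healthy years and risk neutrality, neither of which is in the original question):

```python
def implied_value_in_healthy_days(indifference_p, remaining_healthy_years=40):
    # If someone is indifferent between the tickets and a death probability p,
    # a risk-neutral valuation puts the tickets at about p * (remaining healthy
    # life), expressed here in days.
    return indifference_p * remaining_healthy_years * 365

# At the 0.005% (0.00005) indifference point mentioned above:
print(round(implied_value_in_healthy_days(0.00005), 1))  # ~0.7 healthy days
```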

-

from the case where B is already blind and we may add "five years at their current state of well-being". This does not seem ideal.

This one is also "straightforward" on the DALY view -- you're adding more years at their current disability weight. If I recall correctly, an extra year of life that would otherwise not have been lived, but is lived with blindness, is worth +0.8 DALYs. Thus adding "five years at their current state of well-being" (that is, blindness but no other issues) would be +4 DALYs.
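A minimal sketch of that arithmetic, using the recalled (not authoritative) 0.8 figure, i.e. a disability weight of 0.2 for blindness:

```python
def dalys_averted(extra_years, disability_weight=0.0):
    # Years of life added, discounted by the disability weight of the state in
    # which those years are lived (0 = perfect health, 1 = equivalent to death).
    return extra_years * (1 - disability_weight)

print(dalys_averted(1))       # +1.0 for one extra healthy year
print(dalys_averted(5, 0.2))  # +4.0 for five extra years lived with blindness
```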

Comment author: Kelly_Witwicki 24 October 2017 03:20:36PM 0 points [-]

Also, could respondents not say anything about being e.g. Native American, Middle Eastern, or at least "Other"? I'm sure the structure of these questions has been thoroughly discussed in social sciences literature and I don't think the options shown here are in line with the standard style.

Comment author: Peter_Hurford  (EA Profile) 25 October 2017 12:37:27AM 0 points [-]

These were not options that we presented, but people could implicitly answer "Other" by not answering "Yes" to any of the race questions. We'd be happy to revisit this question if you think we should include additional races or an explicit "Other" option.

Comment author: Peter_Hurford  (EA Profile) 13 October 2017 05:03:43AM 1 point [-]

I found the case studies really valuable in their own right and as ways of explaining your points. Awesome!

Comment author: RyanCarey 10 October 2017 02:18:07AM 6 points [-]

Hey Zack,

I agree that we lose a bunch by moving our movement's centre of gravity away from poverty and development econ. But if we make the move properly, we gain a lot from the new areas we settle in. What rigor we lost, we should be able to patch up with Bayesian rationalist thinking. What institutional capital we might have lost from the World Bank / Gates, we might be able to pick up with RAND/IARPA/Google/etc., a rather more diverse yet still impressive group of possible contributors. On the organizational side, yes, a lot of experience, like that of Evidence Action, will be lost, but much will also be gained, for example by working instead at technology think tanks and elsewhere.

I don't think your conclusion that people should start in the arena of poverty is very well-supported either, if you're not comparing it to other arenas that people might be able to start out in. Do you think you might be privileging the hypothesis that people should start in the management of poverty just because that's salient to you, possibly because it's the status quo?

Comment author: Peter_Hurford  (EA Profile) 10 October 2017 05:02:03PM 6 points [-]

What rigor we lost, we should be able to patch up with Bayesian rationalist thinking

Can you elaborate more on this?
