Comment author: John_Maxwell_IV 11 June 2017 08:29:45PM 0 points [-]

With all the talk of "societal social capital", it's interesting to consider groups like the Make-A-Wish Foundation that EA has historically dismissed. It seems plausible to me that improving societal social capital is a really hard problem and that the Make-A-Wish Foundation accomplishes it in a relatively cost-effective way.

Comment author: John_Maxwell_IV 16 May 2017 05:58:27AM 0 points [-]

I would like to understand more about why children are punished physically. My impression is that it's something that occurs in many different cultures and becomes less common as people become wealthier. These facts suggest to me that it's not something parents want to do (because as they get richer, they stop doing it) but that it has some kind of utility (because it's done in so many different cultures).

For what it's worth, I was punished physically as a child (more than is typical in the developed world, I think) and I'm pretty skeptical that this should be a top EA cause area, for various reasons. But it sounds like you don't think anecdotes like these count for much.

Comment author: John_Maxwell_IV 16 May 2017 05:54:30AM 0 points [-]

When I visit the model page I see errors about improper syntax. (I assume this is because it's publicly editable and someone accidentally messed up the syntax?)

Comment author: John_Maxwell_IV 22 April 2017 07:50:55AM 1 point [-]

The upside of centralization is that it helps prevent the unilateralist's curse for funding bad projects.

This is an interesting point.

It seems to me like mere veto power is sufficient to defeat the unilateralist's curse. The curse doesn't apply in situations where 99% of the thinkers believe an intervention is useless and 1% believe it's useful, only in situations where the 99% think the intervention is harmful and would want to veto it. So technically speaking we don't need to centralize power of action, just power of veto.

That said, my impression is that the EA community has such a strong allergic reaction to authority that anything that looks like an official decision-making group with an official veto would be resisted. So it seems like the result is that we go past centralization of veto into centralization of action, because (ironically) it seems less authority-ish.
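The claim that veto power alone defeats the curse can be illustrated with a toy simulation. Everything here is hypothetical: each of n funders independently misjudges a harmful project as good with probability p (an invented parameter), and we compare a regime where any one funder can act unilaterally against one where any funder can veto.

```python
import random

random.seed(0)

def simulate(n_funders, p_mistake, trials=100_000):
    """Compare two regimes for a project that informed consensus
    considers harmful. Each funder independently misjudges it as
    good with probability p_mistake (a purely hypothetical figure).

    - 'unilateral': the project is funded if ANY funder misjudges it.
    - 'veto': the project is funded only if EVERY funder misjudges
      it, i.e. nobody exercises a veto.
    """
    funded_unilateral = 0
    funded_veto = 0
    for _ in range(trials):
        judgments = [random.random() < p_mistake for _ in range(n_funders)]
        if any(judgments):
            funded_unilateral += 1
        if all(judgments):
            funded_veto += 1
    return funded_unilateral / trials, funded_veto / trials

for n in (1, 5, 20):
    uni, veto = simulate(n, p_mistake=0.05)
    print(f"n={n:2d}  unilateral={uni:.3f}  veto={veto:.5f}")
```

As n grows, the unilateral-action regime funds the harmful project more and more often, while the veto regime funds it essentially never, which is the sense in which centralizing only the veto suffices.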

Comment author: John_Maxwell_IV 22 April 2017 07:55:12AM *  1 point [-]

On second thought, perhaps it's just an issue of framing.

Would you be interested in an "EA donors league" that tried to overcome the unilateralist's curse by giving people in the league the power to collectively veto the donations made by other people in the league? You'd get the power to veto the donations of other people in exchange for giving others the power to veto your donations (details to be worked out).

(I guess the biggest detail to work out is how to prevent people from simply quitting the league when they want to make a non-kosher donation. Perhaps a cash deposit of some sort would work.)
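The league mechanism could be sketched as follows. This is a minimal toy model, not a proposal: the names, the deposit amount, and the majority-veto rule are all invented stand-ins for the "details to be worked out".

```python
class DonorLeague:
    """Toy model of the proposed league (all details hypothetical):
    members post a cash deposit on joining; a proposed donation is
    blocked if a strict majority of the other members vetoes it;
    leaving the league forfeits the deposit, which discourages
    quitting just to make a vetoed donation."""

    def __init__(self, deposit, veto_threshold=0.5):
        self.deposit = deposit
        self.veto_threshold = veto_threshold  # fraction of others needed to block
        self.members = set()
        self.deposits = {}

    def join(self, member):
        self.members.add(member)
        self.deposits[member] = self.deposit

    def propose_donation(self, donor, vetoes):
        """Return True if the donation proceeds. `vetoes` is the set
        of members (other than the donor) who object; the donation is
        blocked only when a strict majority of the others vetoes."""
        eligible = self.members - {donor}
        if not eligible:
            return True
        return len(vetoes & eligible) / len(eligible) <= self.veto_threshold

    def quit(self, member):
        """Leaving forfeits the deposit (the anti-exit incentive)."""
        self.members.discard(member)
        return 0  # deposit is not returned

league = DonorLeague(deposit=1000)
for name in ("alice", "bob", "carol"):
    league.join(name)
print(league.propose_donation("alice", vetoes={"bob", "carol"}))  # → False (majority vetoed)
print(league.propose_donation("alice", vetoes={"bob"}))           # → True
```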


Comment author: Kerry_Vaughan 21 April 2017 05:11:07PM 8 points [-]

But if I can't convince them to fund me for some reason and I think they're making a mistake, there are no other donors to appeal to anymore. It's all or nothing.

The upside of centralization is that it helps prevent the unilateralist's curse for funding bad projects. As the number of funders increases, it becomes increasingly easy for bad projects to find someone who will fund them.

That said, I share the concern that EA Funds will become a single point of failure for projects such that if EA Funds doesn't fund you, the project is dead. We probably want some centralization but we also want worldview diversification. I'm not yet sure how to accomplish this. We could create multiple versions of the current funds with different fund managers, but that is likely to be very confusing to most donors. I'm open to ideas on how to help with this concern.
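The scaling effect described above has a simple closed form: if each of n independent funders mistakenly approves a bad project with probability p (p = 0.05 here is purely illustrative), the chance that at least one of them funds it is 1 - (1 - p)^n.

```python
# Chance that at least one of n independent funders approves a bad
# project, if each approves with illustrative probability p = 0.05.
p = 0.05
for n in (1, 2, 5, 10, 50):
    print(f"{n:2d} funders -> P(funded) = {1 - (1 - p) ** n:.3f}")
```

Even a small per-funder error rate makes funding of bad projects nearly certain once the funder pool is large, which is the case for some centralization.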


Comment author: John_Maxwell_IV 27 March 2017 12:53:53AM 0 points [-]

Nice work!!

Comment author: John_Maxwell_IV 23 March 2017 02:01:35AM *  0 points [-]

Previously I had wondered whether effective altruism was a "prediction-complete" problem--that is, whether learning to predict things accurately should be considered a prerequisite for EA activity (if you're willing to grant that the far future is of tremendous importance). But the other day it occurred to me that it might be sufficient to simply be well-calibrated. If you really are well calibrated--if things you say are 90% probable actually happen 90% of the time--then you don't need to know how to predict everything. It should be sufficient to look for interventions you currently assign a 90% probability of being good, and focus your EA activities there.

(There's a flaw in this argument if calibration is domain-specific.)
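Whether someone is well calibrated in this sense can be checked empirically against a track record. A minimal sketch (the track record below is invented for illustration) bins predictions by stated probability and compares each bin with the observed frequency:

```python
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (stated_probability, outcome) pairs, where
    outcome is True if the event happened. Groups predictions by stated
    probability and reports the observed frequency for each group; a
    well-calibrated forecaster's 0.9 bucket should come out near 0.9."""
    buckets = defaultdict(list)
    for prob, outcome in predictions:
        buckets[prob].append(outcome)
    return {
        prob: sum(outcomes) / len(outcomes)
        for prob, outcomes in sorted(buckets.items())
    }

# Hypothetical track record: 90%-confidence claims that came true 9
# times out of 10, and 60%-confidence claims that came true 3 out of 5.
record = [(0.9, True)] * 9 + [(0.9, False)] + [(0.6, True)] * 3 + [(0.6, False)] * 2
print(calibration_report(record))  # → {0.6: 0.6, 0.9: 0.9}
```

The domain-specificity worry above would show up here as bins that look calibrated on one class of predictions but not on another.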

Comment author: kokotajlod 21 March 2017 08:53:51PM 1 point [-]

And I think normal humans, if given command of the future, would cause even less suffering than classical utilitarians.

Comment author: John_Maxwell_IV 21 March 2017 11:57:05PM 1 point [-]

Can you elaborate on this?
