Comment author: John_Maxwell_IV 22 April 2017 07:50:55AM 1 point [-]

The upside of centralization is that it helps prevent the unilateralist's curse for funding bad projects.

This is an interesting point.

It seems to me like mere veto power is sufficient to defeat the unilateralist's curse. The curse doesn't apply in situations where 99% of the thinkers believe an intervention is useless and 1% believe it's useful, only in situations where the 99% think the intervention is harmful and would want to veto it. So technically speaking we don't need to centralize power of action, just power of veto.
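
To make that intuition concrete, here's a toy simulation I sketched (the model and all the parameters are made up, so treat it as an illustration rather than evidence). Each funder sees a noisy estimate of a project's true value, wants to fund if their estimate is positive, and would veto if their estimate says the project is clearly harmful:

```python
import random

def simulate(n_funders=10, true_value=-1.0, noise=1.0, veto_threshold=0.5,
             trials=100_000):
    """Toy model of the unilateralist's curse under three funding regimes.

    unilateral : funded if ANY funder wants to fund it
    veto       : funded if some funder wants to fund it AND nobody vetoes
    central    : funded iff the median estimate is positive (a stand-in
                 for a single centralized decision-maker)
    """
    funded = {"unilateral": 0, "veto": 0, "central": 0}
    for _ in range(trials):
        est = [true_value + random.gauss(0, noise) for _ in range(n_funders)]
        wants_to_fund = any(e > 0 for e in est)
        vetoed = any(e < -veto_threshold for e in est)
        if wants_to_fund:
            funded["unilateral"] += 1
        if wants_to_fund and not vetoed:
            funded["veto"] += 1
        if sorted(est)[n_funders // 2] > 0:
            funded["central"] += 1
    return {k: round(v / trials, 3) for k, v in funded.items()}

# For a genuinely harmful project (true value -1), the unilateral regime funds
# it far more often than either the veto regime or the centralized decision.
print(simulate())
```

The thresholds and noise levels are pulled out of thin air; the point is just the qualitative gap between "anyone can act" and "anyone can veto" when the project really is harmful.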

That said, my impression is that the EA community has such a strong allergic reaction to authority that anything that looks like an official decision-making group with an official veto would be resisted. So it seems like the result is that we go past centralization of veto into centralization of action, because (ironically) it seems less authority-ish.

Comment author: John_Maxwell_IV 22 April 2017 07:55:12AM *  1 point [-]

On second thought, perhaps it's just an issue of framing.

Would you be interested in an "EA donors league" that tried to overcome the unilateralist's curse by giving people in the league the power to collectively veto the donations made by other people in the league? You'd get the power to veto the donations of other people in exchange for giving others the power to veto your donations (details to be worked out).

(I guess the biggest detail to work out is how to prevent people from simply quitting the league when they want to make a non-kosher donation. Perhaps a cash deposit of some sort would work.)

Comment author: Kerry_Vaughan 21 April 2017 05:11:07PM 8 points [-]

But if I can't convince them to fund me for some reason and I think they're making a mistake, there are no other donors to appeal to anymore. It's all or nothing.

The upside of centralization is that it helps prevent the unilateralist's curse for funding bad projects. As the number of funders increases, it becomes increasingly easy for the bad projects to find someone who will fund them.

That said, I share the concern that EA Funds will become a single point of failure for projects such that if EA Funds doesn't fund you, the project is dead. We probably want some centralization but we also want worldview diversification. I'm not yet sure how to accomplish this. We could create multiple versions of the current funds with different fund managers, but that is likely to be very confusing to most donors. I'm open to ideas on how to help with this concern.

Comment author: John_Maxwell_IV 27 March 2017 12:53:53AM 0 points [-]

Nice work!!

Comment author: John_Maxwell_IV 23 March 2017 02:01:35AM *  0 points [-]

Previously I had wondered whether effective altruism was a "prediction-complete" problem--that is, whether learning to predict things accurately should be considered a prerequisite for EA activity (if you're willing to grant that the far future is of tremendous importance). But the other day it occurred to me that it might be sufficient to simply be well-calibrated. If you really are well calibrated--if the things you say are 90% probable really do happen 90% of the time--then you don't need to know how to predict everything: it should be sufficient to look for areas where you currently assign a 90% probability to an intervention being a good thing, and then focus your EA activities there.

(There's a flaw in this argument if calibration is domain-specific.)
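
For concreteness, here's roughly what I mean by "checking calibration", as a sketch over an invented prediction log (the numbers are placeholders, not real data):

```python
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (stated_probability, actually_happened) pairs.

    Groups predictions into 10%-wide buckets and compares stated confidence
    with observed frequency; if you're well calibrated, the two should
    roughly match in every bucket that has enough data."""
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[round(p, 1)].append(outcome)
    for p in sorted(buckets):
        outcomes = buckets[p]
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {p:.0%}  observed {observed:.0%}  (n={len(outcomes)})")

# Hypothetical log: (probability I assigned, did it actually happen?)
log = [(0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
       (0.7, True), (0.7, False), (0.7, True), (0.5, False), (0.5, True)]
calibration_report(log)
```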

Comment author: kokotajlod 21 March 2017 08:53:51PM 1 point [-]

And I think normal humans, if given command of the future, would produce even less suffering than classical utilitarians.

Comment author: John_Maxwell_IV 21 March 2017 11:57:05PM 1 point [-]

Can you elaborate on this?

Comment author: John_Maxwell_IV 19 March 2017 07:17:10PM *  5 points [-]

[Reinforcing Alice for giving more attention to this consideration despite the fact that it's unpleasant for her]

Maybe something like spreading cooperative agents, which is helpful both if things go well or not well.

[speculative]

What is meant by "cooperative agents"? Personally, I suspect "cooperativeness" is best split into multiple dimensions, analogous to lawful/chaotic and good/evil in a roleplaying game. My sense is that

  • humanity is made up of competing groups

  • bigger groups tend to be more powerful

  • groups get big because they are made up of humans who are capable of large-scale cooperation (in the "lawful" sense, not the "good" sense)

There's probably some effect where humans capable of large-scale cooperation also tend to be more benevolent. But you still see lots of historical examples of empires (big human groups) treating small human groups very badly. (My understanding is that small human groups treat each other badly as well, but we hear about it less because such small-scale conflicts are less interesting and don't hang as well on grand historical narratives.)

If by "spreading cooperative agents" you mean "spreading lawfulness", I'm not immediately seeing how that's helpful. My prior is that the group that's made up of lawful people is already going to be the one that wins, since lawfulness enables large-scale cooperation and thus power. Perhaps spreading lawfulness could make conflicts more asymmetrical, by pitting a large group of lawful individuals against a small group of less lawful ones. In an asymmetrical conflict, the powerful group has the luxury of subduing the much less powerful group in a way that's relatively benevolent. A symmetrical conflict is more likely to be a highly destructive fight to the death. Powerful groups also have stronger deterrence capabilities, which disincentivizes conflict in the first place. So this could be an argument for spreading lawfulness.

Spreading lawfulness within the EA movement seems like a really good thing to me. More lawfulness will allow us to cooperate at a larger scale and be a more influential group. Unfortunately, utilitarian thinking tends to have a strong "chaotic good" flavor, and utilitarian thought experiments often pit our harm-minimization instincts against deontological rules that underpin large-scale cooperation. This is part of why I spent a lot of time arguing in this thread and elsewhere that EA should have a stronger central governance mechanism.

BTW, a lot of this thinking came out of these discussions with Brian Tomasik.

In response to Open Thread #36
Comment author: John_Maxwell_IV 17 March 2017 07:41:29AM *  7 points [-]

In addition to retirement planning, if you're down with transhumanism, consider attempting to maximize your lifespan so you can personally enjoy the fruits of x-risk reduction (and get your selfish & altruistic selves on the same page). Here's a list of tips.

With regard to early retirement, an important question is how you'd spend your time if you were to retire early. I recently argued that more EAs should be working at relaxed jobs or saving up funds in order to work on "projects", to solve problems that are neither dollar-shaped nor career-shaped (note: this may be a self-serving argument since this is an idea that appeals to me personally).

I can't speak for other people, but I've been philosophically EA for something like 10 years now. I started from a position of extreme self-sacrifice and have basically been updating continuously away from that for the past 10 years. A handwavey argument for this: If we expect impact to have a Pareto distribution, a big concern should be maximizing the probability that you're able to have a 100x or more impact relative to baseline. In order to have that kind of impact, you will want to learn a way to operate at peak performance, probably for extended periods of time. Peak performance looks different for different people, but I'm skeptical of any lifestyle that feels like it's grinding you down rather than building you up. (This book has some interesting ideas.)
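
A toy version of that handwavey argument, with a completely made-up Pareto shape parameter (so the exact numbers mean nothing, and they bounce around from run to run because the distribution is heavy-tailed; the tail-dominance pattern is the point):

```python
import random

def tail_share(alpha=1.2, threshold=100.0, samples=1_000_000):
    """Draw per-person lifetime impact from a Pareto(alpha) distribution
    (minimum value 1 = 'baseline' impact) and report what fraction of the
    total impact comes from the rare draws above `threshold` x baseline."""
    draws = [random.paretovariate(alpha) for _ in range(samples)]
    tail = [d for d in draws if d >= threshold]
    return len(tail) / samples, sum(tail) / sum(draws)

frac_people, frac_impact = tail_share()
print(f"{frac_people:.3%} of draws exceed 100x baseline, "
      f"yet they account for {frac_impact:.1%} of total impact")
```

Real-world impact obviously isn't literally Pareto with that shape parameter; this is just to show why "maximize the chance of ending up in the tail" can matter more than squeezing out small gains everywhere.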

In principle, I don't think there needs to be a big tradeoff between selfish and altruistic motives. Selfishly, it's nice to have a purpose that gives your life meaning, and EA does that much better than anything else I've found. Altruistically, being miserable is not great for productivity.

One form of self-sacrifice I do endorse is severely limiting "superstimuli" like video games, dessert, etc. I find that after allowing my "hedonic treadmill" to adjust for a few weeks, this doesn't actually represent much of a sacrifice. Here are some thoughts on getting this to work.

Comment author: John_Maxwell_IV 17 March 2017 06:57:47AM *  1 point [-]

It looks like this is the link to the discussion of "Vipul's paid editing enterprise". Based on a quick skim,

this has fallen afoul of the wikipedia COI rules in spectacular fashion - with wikipedia administrators condemning the work as a pyramid scheme

strikes me as something of an overstatement. For example, one quote:

In general, I think Vipul's enterprise illustrates a need to change the policy on paid editors rather than evidence of misconduct.

Anyway, if it's true that Vipul's work on Wikipedia has ended up doing more harm than good, this doesn't make me optimistic about other EA projects.

Comment author: Richard_Batty 02 March 2017 09:56:16AM 11 points [-]

This is really helpful, thanks.

Whilst I could respond in detail, instead I think it would be better to take action. I'm going to put together an 'open projects in EA' spreadsheet and publish it on the EA forum by March 25th or I owe you £100.

Comment author: John_Maxwell_IV 04 March 2017 06:08:25AM 3 points [-]

£100... sounds tasty! I'll add it to my calendar :D
