Comment author: John_Maxwell_IV 27 March 2017 02:23:23AM 1 point
Comment author: Richard_Batty 25 March 2017 06:18:18PM 3 points
Comment author: John_Maxwell_IV 27 March 2017 12:53:53AM 0 points

Nice work!!

Comment author: John_Maxwell_IV 23 March 2017 02:01:35AM 0 points

Previously I had wondered whether effective altruism was a "prediction-complete" problem--that is, whether learning to predict things accurately should be considered a prerequisite for EA activity (if you're willing to grant that the far future is of tremendous importance). But the other day it occurred to me that it might be sufficient to simply be well-calibrated. If you really are well-calibrated--if the things you say are 90% probable really do happen about 90% of the time--then you don't need to know how to predict everything. It should be enough to look for areas where you currently assign a 90% probability to a given activity being a good thing, and then focus your EA efforts there.

(There's a flaw in this argument if calibration is domain-specific.)
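
To make "well-calibrated" concrete, here is a minimal sketch (in Python, with made-up predictions and an arbitrary bucketing scheme, so purely illustrative rather than anything from the comment above) of what checking your own calibration might look like: group past predictions by the probability you stated at the time and compare against the observed frequency.

    # Hypothetical calibration check: each record is (stated probability,
    # whether the prediction came true). Data and bucketing are made up.
    from collections import defaultdict

    predictions = [
        (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
        (0.6, True), (0.6, False), (0.6, False), (0.6, True),
    ]

    buckets = defaultdict(list)
    for stated_p, came_true in predictions:
        buckets[round(stated_p, 1)].append(came_true)

    for stated_p in sorted(buckets):
        outcomes = buckets[stated_p]
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {stated_p:.0%} -> observed {observed:.0%} "
              f"({len(outcomes)} predictions)")

If your 90% bucket comes in well below 90%, the argument above weakens; splitting the buckets by domain would also surface the flaw mentioned in the parenthetical.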

Comment author: kokotajlod 21 March 2017 08:53:51PM 1 point

And I think normal humans, if given command of the future, would create even less suffering than classical utilitarians would.

Comment author: John_Maxwell_IV 21 March 2017 11:57:05PM 1 point

Can you elaborate on this?

Comment author: John_Maxwell_IV 19 March 2017 07:17:10PM 3 points

[Reinforcing Alice for giving more attention to this consideration despite the fact that it's unpleasant for her]

Maybe something like spreading cooperative agents, which is helpful whether things go well or not.

[speculative]

What is meant by "cooperative agents"? Personally, I suspect "cooperativeness" is best split into multiple dimensions, analogous to lawful/chaotic and good/evil in a roleplaying game. My sense is that

  • humanity is made up of competing groups

  • bigger groups tend to be more powerful

  • groups get big because they are made up of humans who are capable of large-scale cooperation (in the "lawful" sense, not the "good" sense)

There's probably some effect where humans capable of large-scale cooperation also tend to be more benevolent. But you still see lots of historical examples of empires (big human groups) treating small human groups very badly. (My understanding is that small human groups treat each other badly as well, but we hear about it less because such small-scale conflicts are less interesting and don't hang as well on grand historical narratives.)

If by "spreading cooperative agents" you mean "spreading lawfulness", I'm not immediately seeing how that's helpful. My prior is that the group that's made up of lawful people is already going to be the one that wins, since lawfulness enables large-scale cooperation and thus power. Perhaps spreading lawfulness could make conflicts more asymmetrical, by pitting a large group of lawful individuals against a small group of less lawful ones. In an asymmetrical conflict, the powerful group has the luxury of subduing the much less powerful group in a way that's relatively benevolent. A symmetrical conflict is more likely to be a highly destructive fight to the death. Powerful groups also have stronger deterrence capabilities, which disincentivizes conflict in the first place. So this could be an argument for spreading lawfulness.

Spreading lawfulness within the EA movement seems like a really good thing to me. More lawfulness will allow us to cooperate at a larger scale and be a more influential group. Unfortunately, utilitarian thinking tends to have a strong "chaotic good" flavor, and utilitarian thought experiments often pit our harm-minimization instincts against deontological rules that underpin large-scale cooperation. This is part of why I spent a lot of time arguing in this thread and elsewhere that EA should have a stronger central governance mechanism.

BTW, a lot of this thinking came out of these discussions with Brian Tomasik.

In response to Open Thread #36
Comment author: John_Maxwell_IV 17 March 2017 07:41:29AM 7 points

In addition to retirement planning, if you're down with transhumanism, consider attempting to maximize your lifespan so you can personally enjoy the fruits of x-risk reduction (and get your selfish & altruistic selves on the same page). Here's a list of tips.

With regard to early retirement, an important question is how you'd spend your time if you were to retire early. I recently argued that more EAs should be working at relaxed jobs or saving up funds in order to work on "projects", to solve problems that are neither dollar-shaped nor career-shaped (note: this may be a self-serving argument since this is an idea that appeals to me personally).

I can't speak for other people, but I've been philosophically EA for something like 10 years now. I started from a position of extreme self-sacrifice and have basically been updating away from that continuously ever since. A handwavy argument for this: if we expect impact to have a Pareto distribution, a big concern should be maximizing the probability that you're able to have 100x or more impact relative to baseline. To have that kind of impact, you'll want to learn a way to operate at peak performance, probably for extended periods of time. Peak performance looks different for different people, but I'm skeptical of any lifestyle that feels like it's grinding you down rather than building you up. (This book has some interesting ideas.)
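
As a rough numerical illustration of the Pareto point (my own back-of-the-envelope addition with an assumed shape parameter, not a figure from the comment above): for a Pareto distribution, the chance of exceeding 100x the baseline is 100^(-alpha), which for the commonly cited "80/20" value alpha ≈ 1.16 is roughly half a percent. Small improvements in your odds of landing in that tail can therefore swamp marginal gains from grinding harder.

    # Illustrative only: tail probabilities for a Pareto distribution with an
    # assumed shape parameter (alpha ~= 1.16 corresponds to the "80/20" rule).
    alpha = 1.16

    def prob_exceeding(multiple, alpha=alpha):
        """P(impact > multiple * baseline) for a Pareto distribution whose
        minimum value is the baseline."""
        return multiple ** (-alpha)

    for multiple in (10, 100, 1000):
        print(f"P(impact > {multiple}x baseline) = {prob_exceeding(multiple):.3%}")

The exact numbers depend entirely on the assumed alpha, but the qualitative point--that rare, very large outcomes dominate--is robust to the choice.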

In principle, I don't think there needs to be a big tradeoff between selfish and altruistic motives. Selfishly, it's nice to have a purpose that gives your life meaning, and EA does that much better than anything else I've found. Altruistically, being miserable is not great for productivity.

One form of self-sacrifice I do endorse is severely limiting "superstimuli" like video games, dessert, etc. I find that after allowing my "hedonic treadmill" to adjust for a few weeks, this doesn't actually represent much of a sacrifice. Here are some thoughts on getting this to work.

Comment author: John_Maxwell_IV 17 March 2017 06:57:47AM 1 point

It looks like this is the link to the discussion of "Vipul's paid editing enterprise". Based on a quick skim,

this has fallen afoul of the wikipedia COI rules in spectacular fashion - with wikipedia administrators condemning the work as a pyramid scheme

strikes me as something of an overstatement. For example, one quote:

In general, I think Vipul's enterprise illustrates a need to change the policy on paid editors rather than evidence of misconduct.

Anyway, if it's true that Vipul's work on Wikipedia has ended up doing more harm than good, this doesn't make me optimistic about other EA projects.

Comment author: Richard_Batty 02 March 2017 09:56:16AM 11 points

This is really helpful, thanks.

Whilst I could respond in detail, instead I think it would be better to take action. I'm going to put together an 'open projects in EA' spreadsheet and publish it on the EA forum by March 25th or I owe you £100.

Comment author: John_Maxwell_IV 04 March 2017 06:08:25AM 3 points

£100... sounds tasty! I'll add it to my calendar :D

Comment author: John_Maxwell_IV 01 March 2017 11:52:33PM 17 points

Interesting post.

I wonder if it'd be useful to make a distinction between the "relatively small number of highly engaged, highly informed people" and "insiders".

I could easily imagine this causal chain:

  1. Making your work open acts as an advertisement for your organization.

  2. Some of the people who see the advertisement become highly engaged & highly informed about your work.

  3. Some of the highly engaged & informed people form relationships with you beyond public discourse, making them "insiders".

If this story is true, public discourse represents a critical first step in a pipeline that ends with the creation of new insiders.

I think this story is quite plausibly true. I'm not sure the EA movement would have ever come about without the existence of GiveWell. GiveWell's publicly available research regarding where to give was a critical part of the story that sold people on the idea of effective altruism. And it seems like the growth of the EA movement led to growth in the number of insiders, whose opinions you say you value.

I can easily imagine a parallel-universe "Closed Philanthropy Project" with the exact same giving philosophy, but with no EA movement growing up around it, due to a lack of publicly available info about its grants. In fact, I wouldn't be surprised if many foundations already have giving philosophies very much like OpenPhil's; we just don't hear about them because they don't make their research public. Compare this description of how Y Combinator handles openness to applicants:

I didn't quite realize when I signed up just what it meant to read ten thousand applications a year. It's also the most important thing that we do because one of Y Combinator's great innovations was that we were one of the first investors to truly equalize the playing field to all companies across the world. Traditionally, venture investors only really consider companies who come through a personal referral. They might have an email address on their site where you can send a business plan to, but in general they don't take those very seriously. There's usually some associate who reads the business plans that come in over the transom. Whereas at Y Combinator, we said explicitly, "We don't really care who you know or if you don't know anyone. We're just going to read every application that comes in and treat them all equally."

Source. Similarly, Robin Hanson thinks that a big advantage academics have over independent scholars is the use of open competitions rather than personal connections in choosing people to work with.

So, a power law distribution in commenter usefulness isn't sufficient to show that openness lacks benefits.

As an aside, I hadn't previously gotten a strong impression that OpenPhil's openness was for the purpose of gathering feedback on your thinking. GiveWell was open with its research for the purpose of advising people where to donate. I guess now that you are partnering with Good Ventures, that is no longer a big goal. But if the purpose of your openness has changed from advising others to gathering advice yourself, this could probably be made more explicit.

For example, I can imagine OpenPhil publishing a list of research questions on its website for people in the EA community to spend time thinking & writing about. Or highlighting feedback that was especially useful, both to reinforce the behavior of leaving feedback and to give examples of the kind of feedback you want more of. Or something as simple as a little message at the bottom of every blog post saying that you welcome high-quality feedback and continue to monitor comments long after the post is published (if that is indeed true).

Maybe the reason you are mainly gathering feedback from insiders is simply that only insiders know enough about you to realize that you want feedback. I think it's plausible that the average EA puts commenting on OpenPhil blog posts in the "time wasted on the internet" category, and it might not require a ton of effort to change that.

To relate back to the Y Combinator analogy, I would expect that Y Combinator gets many more high-quality applications through the form on its website than the average VC firm does, and this is because more people think that putting their info into the form on Y Combinator's website is a good use of time. It would not be correct for a VC firm to look at the low quality of the applications they were getting through the form on their website and infer that a startup funding model based on an online form is surely unviable.

More broadly speaking, this seems similar to just working to improve the state of online effective altruism discussion in general, which maybe isn't a problem that OpenPhil feels well-positioned to tackle. But I do suspect there is relatively low-hanging fruit here.

Comment author: RomeoStevens 24 February 2017 02:27:03AM 5 points

  1. Sharing more things of dubious usefulness is what I advocate.
  2. I am not advocating transparency as their main focus. I am advocating skepticism towards things that the outside view says everyone in your reference class (foundations) does, specifically because I think that if your methods are highly correlated with others' you can't expect to outperform them by much.
  3. I think it is easy to underestimate the effect of the long tail. See Chalmers' comment on the value of the LW and EA communities in his recent AMA.
  4. I also don't care about optimizing for this, and I recognize that if you ask people to be more public, they will optimize for this because humans. Thinking more about this seems valuable. I think of it as a significant bottleneck.
  5. Disagree. Closed is the default for any dimension that relates to actual decision criteria. People push their public discourse into dimensions that don't affect decision criteria because [Insert Robin Hanson analysis here].

I'm not advocating a sea change in policy, but an increase in skepticism at the margin.

Comment author: John_Maxwell_IV 01 March 2017 11:30:32PM 0 points

I think it is easy to underestimate the effect of the long tail. See Chalmers' comment on the value of the LW and EA communities in his recent AMA.

Notably, it's easy for me to imagine that people who work at foundations outside the EA community spend time reading OpenPhil's work and the discussion of it in deciding what grants to make. (This is something that could be happening without us being aware of it. As Holden says, transparency has major downsides. OpenPhil is also running a risk by associating its brand with a movement full of young contrarians it has no formal control over. Your average opaquely-run foundation has little incentive to let the world know if discussions happening in the EA community are an input into their grant-making process.)
