Comment author: John_Maxwell_IV 23 March 2017 02:01:35AM *  0 points

Previously I had wondered whether effective altruism was a "prediction-complete" problem--that is, whether learning to predict things accurately should be considered a prerequisite for EA activity (if you're willing to grant that the far future is of tremendous importance). But the other day it occurred to me that it might be sufficient to simply be well-calibrated. If you really are well-calibrated--if the things you say are 90% probable really do happen 90% of the time--then you don't need to know how to predict everything. It should be sufficient to look for areas where you currently assign a 90% probability to a given activity being a good thing, and then focus your EA activities there.

(There's a flaw in this argument if calibration is domain-specific.)
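The sufficiency claim above can be sketched with a toy simulation (my own illustration, not from the comment): a perfectly calibrated forecaster's "90%" predictions should come true about 90% of the time, which is all the argument requires.

```python
import random

random.seed(0)

def simulate_calibrated_forecaster(n_predictions=10_000, stated_p=0.9):
    """Simulate a perfectly calibrated forecaster: each event to which
    they assign probability `stated_p` actually occurs with that
    probability. Returns the empirical hit rate."""
    hits = sum(random.random() < stated_p for _ in range(n_predictions))
    return hits / n_predictions

freq = simulate_calibrated_forecaster()
print(f"Empirical frequency for '90%' predictions: {freq:.3f}")  # ~0.900
```

With 10,000 predictions the empirical frequency lands within a fraction of a percent of the stated 90%, so calibration alone is checkable; the domain-specificity caveat above is about whether this holds across subject areas.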

Comment author: kokotajlod 21 March 2017 08:53:51PM 1 point

And I think normal humans, if given command of the future, would create even less suffering than classical utilitarians would.

Comment author: John_Maxwell_IV 21 March 2017 11:57:05PM 0 points

Can you elaborate on this?

Comment author: John_Maxwell_IV 19 March 2017 07:17:10PM *  3 points

[Reinforcing Alice for giving more attention to this consideration despite the fact that it's unpleasant for her]

Maybe something like spreading cooperative agents, which is helpful whether things go well or not.


What is meant by "cooperative agents"? Personally, I suspect "cooperativeness" is best split into multiple dimensions, analogous to lawful/chaotic and good/evil in a roleplaying game. My sense is that

  • humanity is made up of competing groups

  • bigger groups tend to be more powerful

  • groups get big because they are made up of humans who are capable of large-scale cooperation (in the "lawful" sense, not the "good" sense)

There's probably some effect where humans capable of large-scale cooperation also tend to be more benevolent. But you still see lots of historical examples of empires (big human groups) treating small human groups very badly. (My understanding is that small human groups treat each other badly as well, but we hear about it less because such small-scale conflicts are less interesting and don't fit as neatly into grand historical narratives.)

If by "spreading cooperative agents" you mean "spreading lawfulness", I'm not immediately seeing how that's helpful. My prior is that the group that's made up of lawful people is already going to be the one that wins, since lawfulness enables large-scale cooperation and thus power. Perhaps spreading lawfulness could make conflicts more asymmetrical, by pitting a large group of lawful individuals against a small group of less lawful ones. In an asymmetrical conflict, the powerful group has the luxury of subduing the much less powerful group in a way that's relatively benevolent. A symmetrical conflict is more likely to be a highly destructive fight to the death. Powerful groups also have stronger deterrence capabilities, which disincentivizes conflict in the first place. So this could be an argument for spreading lawfulness.

Spreading lawfulness within the EA movement seems like a really good thing to me. More lawfulness will allow us to cooperate at a larger scale and be a more influential group. Unfortunately, utilitarian thinking tends to have a strong "chaotic good" flavor, and utilitarian thought experiments often pit our harm-minimization instincts against deontological rules that underpin large-scale cooperation. This is part of why I spent a lot of time arguing in this thread and elsewhere that EA should have a stronger central governance mechanism.

BTW, a lot of this thinking came out of these discussions with Brian Tomasik.

In response to Open Thread #36
Comment author: John_Maxwell_IV 17 March 2017 07:41:29AM *  7 points

In addition to retirement planning, if you're down with transhumanism, consider attempting to maximize your lifespan so you can personally enjoy the fruits of x-risk reduction (and get your selfish & altruistic selves on the same page). Here's a list of tips.

With regard to early retirement, an important question is how you'd spend your time if you were to retire early. I recently argued that more EAs should be working at relaxed jobs or saving up funds in order to work on "projects", to solve problems that are neither dollar-shaped nor career-shaped (note: this may be a self-serving argument since this is an idea that appeals to me personally).

I can't speak for other people, but I've been philosophically EA for something like 10 years now. I started from a position of extreme self-sacrifice and have basically been updating continuously away from that for the past 10 years. A handwavey argument for this: If we expect impact to have a Pareto distribution, a big concern should be maximizing the probability that you're able to have a 100x or more impact relative to baseline. In order to have that kind of impact, you will want to learn a way to operate at peak performance, probably for extended periods of time. Peak performance looks different for different people, but I'm skeptical of any lifestyle that feels like it's grinding you down rather than building you up. (This book has some interesting ideas.)
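The Pareto-distribution point above can be made concrete with a quick sketch (parameters are my own illustrative choices, not from the comment): with the classic "80/20" shape parameter, a large share of total impact comes from the top 1% of draws, which is why maximizing your probability of being in that tail matters more than squeezing out marginal gains at the median.

```python
import random

random.seed(0)

# Shape alpha ~= 1.16 is the value for which the Pareto distribution
# reproduces the classic 80/20 rule.
ALPHA = 1.16

# Sample 100,000 hypothetical "impact" draws, largest first.
draws = sorted((random.paretovariate(ALPHA) for _ in range(100_000)),
               reverse=True)

top_1_percent = draws[:len(draws) // 100]
share = sum(top_1_percent) / sum(draws)
print(f"Share of total impact from the top 1% of draws: {share:.0%}")
```

Analytically the top 1% holds about half of the total under this shape parameter, so a strategy that slightly raises your odds of a tail outcome can dominate one that reliably improves a typical outcome.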

In principle, I don't think there needs to be a big tradeoff between selfish and altruistic motives. Selfishly, it's nice to have a purpose that gives your life meaning, and EA does that much better than anything else I've found. Altruistically, being miserable is not great for productivity.

One form of self-sacrifice I do endorse is severely limiting "superstimuli" like video games, dessert, etc. I find that after allowing my "hedonic treadmill" to adjust for a few weeks, this doesn't actually represent much of a sacrifice. Here are some thoughts on getting this to work.

Comment author: John_Maxwell_IV 17 March 2017 06:57:47AM *  1 point

It looks like this is the link to the discussion of "Vipul's paid editing enterprise". Based on a quick skim,

this has fallen afoul of the wikipedia COI rules in spectacular fashion - with wikipedia administrators condemning the work as a pyramid scheme

strikes me as something of an overstatement. For example, one quote:

In general, I think Vipul's enterprise illustrates a need to change the policy on paid editors rather than evidence of misconduct.

Anyway, if it's true that Vipul's work on Wikipedia has ended up doing more harm than good, this doesn't make me optimistic about other EA projects.

Comment author: Richard_Batty 02 March 2017 09:56:16AM 10 points

This is really helpful, thanks.

Whilst I could respond in detail, instead I think it would be better to take action. I'm going to put together an 'open projects in EA' spreadsheet and publish it on the EA forum by March 25th or I owe you £100.

Comment author: John_Maxwell_IV 04 March 2017 06:08:25AM 2 points

£100... sounds tasty! I'll add it to my calendar :D

Comment author: John_Maxwell_IV 01 March 2017 11:52:33PM *  17 points

Interesting post.

I wonder if it'd be useful to make a distinction between the "relatively small number of highly engaged, highly informed people" vs "insiders".

I could easily imagine this causal chain:

  1. Making your work open acts as an advertisement for your organization.

  2. Some of the people who see the advertisement become highly engaged & highly informed about your work.

  3. Some of the highly engaged & informed people form relationships with you beyond public discourse, making them "insiders".

If this story is true, public discourse represents a critical first step in a pipeline that ends with the creation of new insiders.

I think this story is quite plausibly true. I'm not sure the EA movement would have ever come about without the existence of GiveWell. GiveWell's publicly available research regarding where to give was a critical part of the story that sold people on the idea of effective altruism. And it seems like the growth of the EA movement led to growth in the number of insiders, whose opinions you say you value.

I can easily imagine a parallel universe "Closed Philanthropy Project" with the exact same giving philosophy, but no EA movement that grew up around it due to a lack of publicly available info about its grants. In fact, I wouldn't be surprised if many foundations already had giving philosophies very much like OpenPhil's, but we don't hear about them because they don't make their research public.

I didn't quite realize when I signed up just what it meant to read ten thousand applications a year. It's also the most important thing that we do because one of Y Combinator's great innovations was that we were one of the first investors to truly equalize the playing field to all companies across the world. Traditionally, venture investors only really consider companies who come through a personal referral. They might have an email address on their site where you can send a business plan to, but in general they don't take those very seriously. There's usually some associate who reads the business plans that come in over the transom. Whereas at Y Combinator, we said explicitly, "We don't really care who you know or if you don't know anyone. We're just going to read every application that comes in and treat them all equally."

Source. Similarly, Robin Hanson thinks that a big advantage academics have over independent scholars is the use of open competitions rather than personal connections in choosing people to work with.

So, a power law distribution in commenter usefulness isn't sufficient to show that openness lacks benefits.

As an aside, I hadn't previously gotten a strong impression that OpenPhil's openness was for the purpose of gathering feedback on your thinking. GiveWell was open with its research for the purpose of advising people where to donate. I guess now that you are partnering with Good Ventures, that is no longer a big goal. But if the purpose of your openness has changed from advising others to gathering advice yourself, this could probably be made more explicit.

For example, I can imagine OpenPhil publishing a list of research questions on its website for people in the EA community to spend time thinking & writing about. Or highlighting feedback that was especially useful, to reinforce the behavior of leaving feedback/give examples of the kind of feedback you want more of. Or something as simple as a little message at the bottom of every blog post saying you welcome high quality feedback and you continue to monitor for comments long after the blog post is published (if that is indeed true).

Maybe the reason you are mainly gathering feedback from insiders is simply that only insiders know enough about you to realize that you want feedback. I think it's plausible that the average EA puts commenting on OpenPhil blog posts in the "time wasted on the internet" category, and it might not require a ton of effort to change that.

To relate back to the Y Combinator analogy, I would expect that Y Combinator gets many more high-quality applications through the form on its website than the average VC firm does, and this is because more people think that putting their info into the form on Y Combinator's website is a good use of time. It would not be correct for a VC firm to look at the low quality of the applications they were getting through the form on their website and infer that a startup funding model based on an online form is surely unviable.

More broadly speaking this seems similar to just working to improve the state of online effective altruism discussion in general, which maybe isn't a problem that OpenPhil feels well-positioned to tackle. But I do suspect there is relatively low-hanging fruit here.

Comment author: RomeoStevens 24 February 2017 02:27:03AM *  5 points
  1. sharing more things of dubious usefulness is what I advocate.
  2. I am not advocating transparency as their main focus. I am advocating skepticism towards things that the outside view says everyone in your reference class (foundations) does specifically because I think if your methods are highly correlated with others you can't expect to outperform them by much.
  3. I think it is easy to underestimate the effect of the long tail. See Chalmers' comment on the value of the LW and EA communities in his recent AMA.
  4. I also don't care about optimizing for this, and I recognize that if you ask people to be more public, they will optimize for this because humans. Thinking more about this seems valuable. I think of it as a significant bottleneck.
  5. Disagree. Closed is the default for any dimension that relates to actual decision criteria. People push their public discourse into dimensions that don't affect decision criteria because [Insert Robin Hanson analysis here].

I'm not advocating a sea change in policy, but an increase in skepticism at the margin.

Comment author: John_Maxwell_IV 01 March 2017 11:30:32PM *  0 points

I think it is easy to underestimate the effect of the long tail. See Chalmers' comment on the value of the LW and EA communities in his recent AMA.

Notably, it's easy for me to imagine that people who work at foundations outside the EA community spend time reading OpenPhil's work and the discussion of it in deciding what grants to make. (This is something that could be happening without us being aware of it. As Holden says, transparency has major downsides. OpenPhil is also running a risk by associating its brand with a movement full of young contrarians it has no formal control over. Your average opaquely-run foundation has little incentive to let the world know if discussions happening in the EA community are an input into their grant-making process.)

Comment author: Richard_Batty 28 February 2017 04:04:46PM 7 points

I think we have a real problem in EA of turning ideas into work. There have been great ideas sitting around for ages (e.g. Charity Entrepreneurship's list of potential new international development charities, OpenPhil's desire to see a new science policy think tank, Paul Christiano's impact certificate idea) but they just don't get worked on.

Comment author: John_Maxwell_IV 01 March 2017 10:11:01PM *  6 points

Brainstorming why this might be the case:

  • Lack of visibility. For example, I'm pretty into EA, but I didn't realize OpenPhil wanted to see a new science policy think tank. Just having a list of open projects could help with visibility.

  • Bystander effects. It's not clear who has a comparative advantage to work on this stuff. And many neglected projects aren't within the purview of existing EA organizations.

  • Risk aversion. Sometimes I wonder if the "moral obligation" frame of EA causes people to shy away from high-risk do-gooding opportunities. Something about wanting to be sure that you've fulfilled your obligation. Earning to give and donating to AMF or GiveDirectly becomes a way to certify yourself as a good person in the eyes of as many people as possible.

  • EA has strong mental handles for "doing good with your donations" and "doing good with your career". "Doing good with your projects" is a much weaker handle, and it competes for resources with the other handles. Speculative projects typically require personal capital, since it's hard to get funding for a speculative project, especially if you have no track record. But if you're a serious EA, you might not have a lot of personal capital left over after making donations. And such speculative projects typically require time and focus. But many careers that are popular among serious EAs are not going to leave much time and focus for personal projects. I don't see any page on the 80K website for careers that leave you time to think so you can germinate new EA organizations in your spare time. Arguably, the "doing good with your career" framing is harmful because it causes you to zoom out excessively instead of making a series of small bets.

  • Lack of accountability. Maybe existing EA organizations are productive because the workers feel accountable to the leaders, and the leaders feel accountable to their donors. In the absence of accountability, people default to browsing Facebook instead of working on projects. Under this model, using personal capital to fund projects is an antipattern because it doesn't create accountability the way donations do. Another advantage of EAs donating money to each other is that charitable donations can be deducted from your taxes, but savings intended for altruistic personal projects cannot be. But note that accountability can have downsides.

  • It's not that there is some particular glitch in the process of turning ideas into work. Rather, there is no process in the first place. We can work to identify and correct glitches once we actually have a pipeline.

If someone made it their business to fix this problem, how might they go about it? Brainstorming:

  • Secure seed funding for the project, then have a competitive application process to be the person who starts the organization. Advantages: Social status goes to the winner of the application process. Comparing applicants side-by-side, especially using a standard application, should result in better hiring decisions/better comparative advantage mapping. Project founders can be selected more on the basis of project-specific aptitude and less on the basis of connections/fundraising ability. If the application process is open and widely advertised (the way e.g. Y Combinator does with their application), there's the possibility of selecting talented people outside the EA community and expanding our effective workforce. Disadvantages: Project founders less selected for having initiative/being agentlike/being really passionate about this particular project?

  • Alternatively, one can imagine more of a "headhunter" type approach. Maybe someone from the EA funds looks through the EA rolodex and gets in contact with people based on whether they seem like promising candidates.

  • Both the competitive application approach and the headhunter approach could also be done with organizations being the unit that's being operated on rather than individuals. E.g. publicize a grant that organizations can apply for, or contact organizations with a related track record and see if they'd be interested in working on the project if given the funding. Another option is to work through universities. In general, I expect that you're able to attract higher quality people if they're able to put the name of a prestigious university on their resume next to the project. The university benefits because they get to be associated with anything cool that comes out of the project. And the project has an easier time getting taken seriously due to its association with the university's brand. So, wins all around.

  • Some of these projects could work well as thesis topics. I know there was a push a little while ago to help students find EA-related thesis topics that ended up fading out. But this seems like a really promising idea to me.

Comment author: sdspikes 01 March 2017 01:50:13AM 1 point

As a Stanford CS (BS/MS '10) grad who took AI/Machine Learning courses in college from Andrew Ng, worked at Udacity with Sebastian Thrun, etc. I have mostly been unimpressed by non-technical folks trying to convince me that AI safety (not caused by explicit human malfeasance) is a credible issue.

Maybe I have "easily corrected, false beliefs" but the people I've talked to at MIRI and CFAR have been pretty unconvincing to me, as was the book Superintelligence.

My perception is that MIRI has focused in on an extremely specific kind of AI that to me seems unlikely to do much harm unless someone is recklessly playing with fire (or intentionally trying to set one). I'll grant that that's possible, but that's a human problem, not an AI problem, and requires a human solution.

You don't try to prevent nuclear disaster by making friendly nuclear missiles, you try to keep them out of the hands of nefarious or careless agents or provide disincentives for building them in the first place.

But maybe you do make friendly nuclear power plants? Not sure if this analogy worked out for me or not.

Comment author: John_Maxwell_IV 01 March 2017 09:10:43PM 2 points

I'd be interested to read you elaborate more on your views, for what it's worth.
