Comment author: aarongertler (EA Profile) 14 September 2018 11:37:13AM 4 points

To provide some context for this discussion, here's a 2017 overview of the cause prioritization landscape (not an intellectual summary -- more about the way resources are distributed, and what happens to the output).

That summary notes that existing cause-prioritization research is rarely used by non-EAs, but has influenced some government funding when it was spread by other parties (e.g. the Copenhagen Consensus Center talking to the British government). If a journal did come to exist for cause prioritization, much of its impact might come from how the results are shared, rather than the existence of the results in a journal format. And the EA community already has routes to sharing our results -- so to me, the main question at hand is: "How do we get better results?" Or, as the OP put it, how do we make intellectual progress?

If we want to focus on accelerating progress and helping discussions not become "lost", a journal doesn't seem like the optimal format. Something like the Cause Prioritization Wiki, which allows for rapid updating and the aggregation of content in a single place (rather than scattered through many articles) seems better for those goals.

This makes it a bit harder for some outsiders (e.g. academics) to contribute, but makes it much easier for non-academics to incorporate academic information into summaries. I suspect that an approach of "help EAs find good research and add it to our databases" would go better than an approach of "help good researchers find EA and publish in our journal", but each plan has its own pro/con list.

Comment author: Davis_Kingsley 28 May 2018 05:13:41PM 16 points

One thing I've noticed is that direct work tends to put you much more in contact with reality (for lack of a better term) than community-building; it's much easier to see what you're accomplishing and what is and isn't working. This can be especially important for people trying to build and/or demonstrate skills.

Comment author: aarongertler (EA Profile) 01 July 2018 05:26:54AM 2 points

I strongly second this. This doesn't even have to mean direct EA work -- I think you learn a lot even by volunteering for non-EA causes (a few hours knocking on doors for a political candidate, an evening at a soup kitchen, etc.). It's good to see how nonprofits of all stripes organize their events and volunteers, and also good to be able to discuss the different nonprofit experiences you've had. (It's easy to come across as "do-nothing philosopher idly speculating" when you talk about EA with someone who spends every weekend volunteering, and that's not a good look.)

Comment author: Lila 16 November 2017 09:52:36PM 1 point

I don't think most of these will convince people to share your views, often because they come from different moral perspectives. They seem too negative or directly contradictory for people to change their minds - particularly the ones on social justice. However, it might help people understand your personal choices. What have been your results?

Comment author: aarongertler (EA Profile) 31 May 2018 07:50:16PM 0 points

These are all composites -- I'm not giving these exact speeches, but I might borrow different examples at different times to use in conversations.

When used in context, with the specific people I think are likely to respond best, bits and pieces of these frames have been fairly effective; something like 20 people I've introduced to this have gone on to donate some amount of money to an EA-approved charity.

The idea of using "different moral perspectives" is specifically to convince as wide a range of people as possible. Too many common EA arguments assume that everyone is consequentialist, deep down. But you do have to match the perspective to the person -- otherwise, the conversation can certainly backfire!

Comment author: Peter_Hurford (EA Profile) 17 November 2017 05:34:03AM 3 points

Is it just me, or does the "excited altruism" frame sound perverse to anyone else? I can understand excitement about helping people, but it can easily sound like deriving excitement from other people being in unfortunate situations. Like if no one needed help, you'd be less excited?

...I find it hard to imagine people who just wish there was a building burning down somewhere nearby, so they could play the hero.

Comment author: aarongertler (EA Profile) 31 May 2018 07:47:04PM 0 points

I did like Holden's post of that name, though it would be easy to mangle the concept in translation.

One better way to phrase it might be in historical perspective: "If someone wanted to help people as much as they could a hundred years ago, they might be able to volunteer in their town, and -- unless they were Cornelius Vanderbilt -- that would be about as good as it got. Now, we know a lot more about the world, which means we can help a lot more people, and make better use of whatever time and energy we'd like to give."


Talking About Effective Altruism At Parties

(Cross-posted from my blog, with a few edits.) Many of the effective altruists I've known were first introduced to EA through some kind of interpersonal connection -- a friend got interested first, or they heard about it at a college event, or something along those lines. I've introduced a...
Comment author: aarongertler (EA Profile) 05 November 2017 07:54:10PM 4 points

One Molochian factor that was briefly mentioned in the dead-baby example: The people most skilled at generating outrage -- at least until well-aligned organizations get good at training people to generate outrage -- will typically generate outrage about more-or-less random topics that happen to affect them.

See, for example, the one-man campaign by a heart surgeon, whose wife died due to very rare complications, to reduce the odds of those rare complications ever happening -- a campaign that got unusually rapid support from the FDA, because he made a petition and writes in a style that is accessible yet sufficiently medical-sounding to draw attention from many different groups.

(I'm no medical expert, but the surgeon's suggestions are controversial, and many doctors seem to think they'll cause more harm than good by squeezing out the good uses of the procedure which caused the complications.)

If this person had been the father of a child who died of parenteral nutrition-associated liver disease, the FDA might well have acted on that issue instead. But it's hard to point people like this in the "right direction".

Comment author: aarongertler (EA Profile) 05 November 2017 07:01:15PM 3 points

People who have the requisite talent level to be a "top" or "senior-level" hire seem to be rare in general, given that there's a huge market for recruiting organizations whose only job is to refer promising senior-level hires to companies who will pay a five-or-six-figure bounty if they actually manage to hire someone at that level.

How many people connected to effective altruism are at this level, AND are not involved with some other key EA project, AND do not already have a job that generates enough money that they'd be very unlikely to take a low-salary job at a small EA organization? (Even if you care a lot about impact, it's probably tempting to make $150,000 and donate $50,000 for "someone else" to make that impact, rather than to take the $50,000 job yourself.)

It seems like we're talking around one aspect of the problem: What, exactly, defines a "top hire"? What are the differences between that person and the average enthusiastic recent college graduate? How many of those differences can be remedied with an internship and some skills training, and how many are inherent features of the way someone "turned out" after their first twentysomething years of being alive? What fraction of the EA population -- among people who are willing to go in for unpaid training and don't already have great jobs/positions -- might actually be able to become "top hires" with a reasonable amount of training?

I'd be interested to hear your thoughts on that, Joey. Having run a few small organizations myself, I've worked with people who were reliable vs. unreliable, or who had good vs. bad instincts, and I know what my own criteria look like, but I don't have a good sense for how many people actually fit those criteria, since I've done very little direct "hiring" (these were student orgs, so anyone who wanted to join was welcome).

Comment author: Benito 01 November 2017 05:38:22AM 5 points

For my own benefit I thought I'd write down examples of markets that I can see are inadequate yet inexploitable. I'm not sure all of these are actually true; some just fit the pattern.

  • I notice that most charities aren’t cost effective, but if I decide to do better by making a super cost-effective charity I shouldn’t expect to be more successful than the other charities.
  • I notice that most people at university aren't trying to learn but to get good signals for their careers; I can't easily do better in the job market by dropping the signaling and just learning better.
  • I notice most parenting technique books aren't helpful (because genetics), but I probably can’t make money by selling a shorter book that tells you the only parenting techniques that do matter.
  • If I notice that politicians aren’t trying to improve the country very much, I can’t get elected over them by just optimising for improving the country more (because they're optimising for being elected).
  • If most classical musicians spend a lot of money on high-status instruments and spend time with high-status teachers that don’t correlate with quality, you can’t be more successful by just picking high quality instruments and teachers.
  • If most rocket companies are optimising for getting the most money out of government, you probably can’t win government contracts by just making a better rocket company. (?)
  • If I notice that nobody seems to be doing research on the survival of the human species, I probably can't make it as an academic by making that my focus.
  • If I notice that most music recommendation sites are highly reviewing popular music (so that they get advance copies) I can’t have a more successful review site/magazine by just being honest about the music.

Correspondingly, if these models are true, here are groups/individuals who it would be a mistake to infer strong information about if they're not doing well in these markets:

  • Just because a charity has a funding gap doesn't mean it's not very cost-effective
  • Just because someone has bad grades at university doesn't mean they are bad at learning their field
  • Just because a parenting book isn't selling well doesn't mean it isn't more useful than others
  • Just because a politician didn't get elected doesn't mean they wouldn't have made better decisions
  • Just because a rocket company doesn't get a government contract doesn't mean it isn't better at building safe and cheap rockets than other companies
  • Just because an academic is low status / outside academia doesn't mean their views aren't true
  • Just because a band isn't highly reviewed in major publications doesn't mean it isn't innovative/great

Some of these seem stronger to me than others. I tend to think that academic fields are more adequate at finding truth and useful knowledge than music critics are at figuring out which bands are good.

Comment author: aarongertler (EA Profile) 05 November 2017 06:39:37PM 1 point

Good list! Makes me wonder whether there's some way to model the expected level of adequacy in a field. Factors we'd have to consider:

  • How much money is available within the field?
  • How much prestige is available within the field?
  • How many people are there in the field?
  • How much do participants care about the field for non-monetary, non-prestige reasons? That is, how inherently fun is it to work within this field?
  • How hard is it to work within the field? That is, to what extent do skills other than "having good ideas" matter? (Some scientific fields invest hundreds of hours of grunt work in every paper, while a philosophy paper requires little effort outside of writing.)
  • How fast does new information enter the field, compared to the amount of information that exists within the field already?
  • How many competing ideas/groups exist within the field, and how easy is it for a new idea/group to get started?
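The factors above could be combined into a rough scoring heuristic. Here's a minimal sketch in Python -- the function name, the specific weights, and the linear form are all hypothetical choices of mine, purely for illustration, not a validated model of adequacy:

```python
# Toy "expected adequacy" score for a field. The intuition: rewards and
# participation pull optimization pressure into a field, while high skill
# barriers and fast information turnover make it harder for that pressure
# to keep the field fully adequate; openness to new entrants pushes back up.
# All weights and factor names are made up for this sketch.

def adequacy_score(money, prestige, participants, intrinsic_fun,
                   skill_barrier, info_turnover, openness):
    """Each argument is a rough 0-1 rating of the field on that factor."""
    pressure = (0.3 * money + 0.3 * prestige
                + 0.2 * participants + 0.2 * intrinsic_fun)
    friction = 0.5 * skill_barrier + 0.5 * info_turnover
    return pressure * (1 - 0.5 * friction) + 0.2 * openness


# Example with made-up ratings: a high-reward, crowded, open field (stocks)
# should come out as more adequate than a small, obscure one (x-risk research).
stocks = adequacy_score(1.0, 0.6, 1.0, 0.4, 0.3, 0.8, 0.9)
xrisk = adequacy_score(0.1, 0.2, 0.05, 0.7, 0.6, 0.3, 0.5)
print(stocks > xrisk)  # True under these ratings
```

Even a crude model like this might be useful mainly for surfacing disagreements about the inputs (e.g. how much prestige a field really offers) rather than for the score itself.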

As a toy example, you could look at the metagame for the Modern format of Magic: The Gathering, which consists of roughly one hundred thousand players (maybe ten thousand of whom are really serious about winning). Rewards for being in the top 1% of the serious group equate to a few thousand dollars in profit per year and a few dozen fans; for the top 0.1%, a few tens of thousands of dollars in profit per year and a few thousand fans, plus a solid chance at a steady job if you want it (producing Magic-related media, working as a designer, etc.).

It's possible to generate a good burst of fame and profit by creating a new deck that matches up well against current popular decks (the "metagame").

Information enters the field rapidly, at a pace of a few hundred new cards (into a pool of 11,000) every three months. Building a new deck and thoroughly testing it against the metagame might take a few hundred dollars and a few dozen hours, but the cost is balanced by the fact that playing Magic: The Gathering is a lot of fun. The competitors you need to worry about are people who spend about 50 hours/week playing, and who are as skilled or a little more skilled than you are at the "basics" of the game. Maybe 5% of new decks that are tested this thoroughly turn out to generate any kind of positive return, relative to playing a deck someone else designed already.

In the end, we can observe that a new, good Modern deck (one capable of winning a 500-player, $5000 tournament) comes out once every couple of months, and that cards from sets more than one year old are almost never at the center of "new, good decks", relative to newer sets. Magic: The Gathering seems to be an adequate market: new ideas are absorbed quickly, and few people really get a chance to profit off of them. The evaluation system for cards and decks optimizes more slowly than the evaluation system for stocks, but more quickly than the evaluation system for Major League Baseball players circa 1990.

I'd be interested in viewing examples for other fields, with better data collection, perhaps gathered into some kind of "adequacy" database.

Comment author: Julia_Wise2 16 December 2013 07:01:00PM 8 points

Note that this essay was written several years ago, and the cost to e.g. save a life from malaria has changed.

Comment author: aarongertler (EA Profile) 10 March 2016 03:40:54AM 2 points

The cost, or our understanding of the cost? I don't think diminishing marginal utility has been achieved in such a drastic way; I think that the old Peter Singer quote from which that number originated has been taken out of context for decades. I could be wrong, though!

Comment author: aarongertler (EA Profile) 12 February 2015 04:56:53AM 1 point

This is great!

Do you have any "impact stories" to share about the group? That is, people who signed the Pledge but would not have if not for EA London, or donations given that otherwise would not have been, etc.? Getting 250 people into a Facebook group is definitely a good thing, and worth replicating, but any tricks for turning semi-passive followers into an active/impactful community would also be welcome.
