Comment author: Maxdalton 13 April 2017 04:15:40PM *  20 points [-]

[My views, not my employer's]

I appreciate the spirit in which this was written, and I think we should all be looking out for more ways to help each other, especially in ways that directly improve skills - e.g. through the advice and mentorship you generously offer.

However, some of this feels a little deceptive to me. If people see 'speaking at a top law school' as impressive, that's probably because they think that I was invited because I'm a great speaker/have expertise that lots of people in the law school value. If in fact I was invited just because I was involved in effective altruism, and I only gave a 10 minute talk, I might be giving someone a misleading impression of my talents. Similarly, people might think that receiving the award you describe would require a higher bar of achievement than the one you suggest.

I'm probably overreacting here - this is the sort of thing that people do on CVs all of the time, and so perhaps people automatically downgrade such claims on CVs. However, I think that it's valuable for our internal culture, and for the community's reputation, to hold ourselves to high standards, and I think this article would have been better if it had noted these issues. I'm not sure whether the benefits outweigh the costs.

Comment author: RyanCarey 11 February 2017 07:56:16PM *  5 points [-]

Great to see this!

My 2c on what research I and others like me would find useful from groups like this:

  • Overview empirical and planning-relevant considerations (rather than philosophical theorizing).
  • Focus on obstacles and major events on the path to "technological maturity", i.e. risky or transformative techs.
  • Investigate specific risky and transformative tech in detail. FHI has done a little of this but it is very neglected on the margin. Scanning microscopy for neural tissue, invasive brain-computer interfaces, surveillance, brain imaging for mind-reading, CRISPR, genome synthesis, GWAS in areas of psychology, etc.
  • Help us understand AI progress. AI Impacts has done a bit of this but they are tiny. It would be really useful to have a solid understanding of the growth of capabilities, funding and academic resources in a field like deep learning. How big is the current bubble compared to previous ones, et cetera.

Also, in its last year, GPP largely specialized in tech and long-run issues. This meant it did a higher density of work on prioritization questions that mattered. Prima facie, this and other reasons suggest the Oxford Prioritization Project should specialize in the same way.

Lastly, you'll get more views and comments if you use a (more beautiful) Medium blog.

Happy to justify these positions further.

Good luck!

Comment author: Maxdalton 12 February 2017 06:36:03AM 1 point [-]

Hey Ryan, I'd be particularly interested in hearing more about your reasons for your first point (about theoretical vs. empirical work).

Comment author: Maxdalton 18 January 2017 04:08:32PM *  1 point [-]

CEA is now considering where to take this project next: how much effort we should put into expanding it, and what new features/content we should focus on. We'd welcome feedback from anyone, regardless of whether you've used the site before, via this Google form.

In response to High Impact Science
Comment author: LaurenMcG  (EA Profile) 11 January 2017 06:22:18PM 0 points [-]

Amazing article! Are there any resources available to help identify, prioritise and facilitate opportunities for high-impact science? I'm currently researching cause prioritisation of, and within, biotechnology - especially its positive applications. Any ideas would be greatly appreciated :)

In response to comment by LaurenMcG  (EA Profile) on High Impact Science
Comment author: Maxdalton 12 January 2017 08:51:22AM 1 point [-]
Comment author: AlexMennen 12 December 2016 10:03:16PM 0 points [-]

Even though the last paragraph of the expected value maximization article now says that it's talking about the VNM notion of expected value, the rest of the article still seems to be talking about the naive notion of expected value that is linear with respect to things of value (in the examples given, years of fulfilled life). This makes the last paragraph seem pretty out of place in the article.

Nitpicks on the risk aversion article: "However, it seems like there are fewer reasons for altruists to be risk-neutral in the economic sense" is a confusing way of starting a paragraph about how it probably makes sense for altruists to be close to economically risk-neutral as well. And I'm not sure what "unless some version of pure risk-aversion is true" is supposed to mean.

Comment author: Maxdalton 13 December 2016 11:40:33AM 1 point [-]

Thanks, I've made some further changes, which I hope will clear things up. Re your first worry, I think that's a valid point, but it's also important to cover both concepts. I've tried to make the distinction clearer. If that doesn't address your worry, feel free to drop me a message or suggest changes via the feedback tab, and we can discuss further.

Comment author: AlexMennen 10 December 2016 08:57:55PM *  2 points [-]

The article on expected value theory incorrectly cites the VNM theorem as a defense of maximizing expected value. The VNM theorem says that for a rational agent, there must exist some measure of value for which the rational agent maximizes its expectation, but the theorem does not say anything about the structure of that measure of value. In particular, it does not say that value must be linear with respect to anything, so it does not give a reason not to be risk averse. There are good reasons for altruists to have very low risk aversion, but the VNM theorem is not a sufficient such reason.
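To make this concrete, here is a minimal sketch (with toy numbers of my own, not from the article) showing that a risk-averse agent with a concave utility function is still a perfectly good VNM expected-utility maximizer:

```python
import math

# Two gambles over "years of fulfilled life":
safe  = [(1.0, 40)]             # 40 years with certainty
risky = [(0.5, 0), (0.5, 100)]  # coin flip: 0 or 100 years

def expected(outcomes, u=lambda x: x):
    """Expected utility of a gamble, given a utility function u."""
    return sum(p * u(x) for p, x in outcomes)

# Linear utility (risk-neutral): the risky gamble wins, since 50 > 40.
assert expected(risky) > expected(safe)

# Concave utility (risk-averse), e.g. sqrt: the safe option wins
# (sqrt(40) ~ 6.32 vs 0.5 * sqrt(100) = 5), yet the agent still
# maximizes the expectation of *some* utility function, as VNM requires.
assert expected(safe, math.sqrt) > expected(risky, math.sqrt)
```

The theorem constrains the agent to maximize the expectation of some utility function; it is silent on whether that function is linear in years of fulfilled life.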

Edit: I see the article on risk aversion clarifies that "risk aversion" means in the psychological sense, but without that context, it looks like the expected value article is saying that many EAs think altruists should have low risk aversion in the economic sense, which is true, an important point, and not supported by the VNM theorem. The economic version of risk aversion is also an important concept for EAs, so I don't think it's a good idea to establish that "risk aversion" only refers to the psychological notion by default, rather than clarifying it every time.

Edit 2: Since this stuff is kind of a pet peeve of mine, I'd actually be willing to attempt to rewrite those articles myself, and if you're interested, I would let you use and modify whatever I write however you want.

Comment author: Maxdalton 12 December 2016 08:01:03AM *  2 points [-]

Hi Alex, thanks for the comment, great to pick up issues like this.

I wrote the article, and I agree with, and was aware of, your original point. Your edit is also correct in that we are using risk aversion in the psychological/pure sense, and so the VNM theorem does imply that this form of risk aversion is irrational. However, I think you're right that, given that people are more likely to have heard of the concept of economic risk aversion, the expected value article is likely to be misleading. I have edited to emphasise the way that we're using risk aversion in these articles, and to clarify that VNM alone does not imply risk neutrality in an economic sense. I've also added a bit more discussion of economic risk aversion. Further feedback welcome!

Comment author: Maxdalton 15 November 2016 12:57:25PM 0 points [-]

This seems to be an interesting approach to this question. However, for a top level post in this forum, I would like to see more of an attempt to link this directly to effective altruism, which, as many have noted, is not simply consequentialism. There is no mention of 'effective altruism', 'charity', 'career', 'poverty', 'animal' or 'existential risk' (of course effective altruism is broader than these things, but I think this is indicative).

(Writing in a personal capacity)

Comment author: Kerry_Vaughan 17 August 2016 01:29:01AM 0 points [-]

It's fair to suggest that we don't get carried away with NPS and it's fair to argue that NPS may not represent EA's brand as a whole.

But, for what it's worth, asking EAG attendees about EA doesn't seem like a stronger selection effect than the usual context for this question. NPS is about consumer loyalty. That means someone has to purchase the product before you can ask the question.

If you ask someone for their NPS on an Apple Laptop, they have to spend $1K+ on the laptop first. It's not clear that asking this question of people that attended a conference is substantially different.

Comment author: Maxdalton 18 August 2016 12:35:23PM 2 points [-]

I think the point is that for NPS, we're interested in what all effective altruists think, since they're the users of the product. But EAG attendees are not likely to be typical effective altruists: they will probably be more committed, and more positive about EA than a typical EA is.

To continue the Apple analogy, it's a bit like basing your NPS score not on everyone that buys a laptop, but on the people who comment most on Apple product forums: these people won't be typical of Apple's consumers.

Comment author: Maxdalton 20 January 2016 06:15:34AM *  9 points [-]

Thanks for the post! I mostly agree with your key points: some people are (unfortunately) a lot more powerful than others, and this seems like a reason to focus on recruiting them. I also agree that, for this reason and others, it's not obvious that EA should try to be a mass movement.

However, I think that you're missing some benefits of having a more diverse, non-elite movement, and so reaching a conclusion which is too strong. In short, my argument is that the accusation of elitism, and elitism itself can BE hurtful to EA, not just FEEL hurtful. I'll focus on three arguments about the consequences of elitism, then make a couple of other points.

First, I think that appearing like an 'elite' movement has ambiguous effects on how EA is presented in the media. Whilst it might increase how prestigious EA is, and so make it more attractive, it is also something that I could imagine negative articles about (in fact, I think that there may already be such articles, but I can't place them right now). Something along the lines of 'Look at these rich, white, Ivy-league educated men. What do they know about poverty? Why should we listen to them?'. I'm not saying that these arguments are necessarily particularly good ones, just that they could be damaging to EA's image, which might limit our ability to get more people involved, and retain people.

Second, we sadly currently live in a world where power (in the forms of wealth and political capital that you discussed) correlates with a lot of other characteristics - being white, being male, being cis, being straight, having privileged parents, etc. EA probably over-represents those characteristics already, and this can cause a variety of problems. Less privileged people might feel excluded from the community, which is not nice for them. It may also reduce their participation, and so EA may exclude perspectives or skillsets that are more common in underprivileged groups, and make worse decisions as a result.

Third, it is possible that diversity is correlated with avoiding movement collapse (I'm not sure of this though - perhaps others have done more research). I've hinted above at some ways in which this could be brought about: causing negative media attention, and causing individuals to feel excluded, and leave the movement. This might be a really important consideration.

So far I've been talking only about the consequences of making EA more elite, but I think it's important not to dismiss non-consequentialist considerations. It may be that it is just good to promote diversity and fairness whenever you have the chance. There may also be non-consequence based moral reasons to include less powerful people in important decisions that could affect them. (Again, I'm not committing to this position, but it seems worth considering seriously, if we admit some uncertainty about whether utilitarianism is the right moral theory.)

I think that given these considerations, it's no longer so obvious that EA should be an elite movement. You point out some good reasons that EA should be elite, but there are reasons pointing in the other direction.

But as you point out, the question is not 'Should EA be elite?', but 'Should EA try to be more or less elite, given where we are at the moment?'. Where are we? EA already seems to be a pretty elite movement: I mentioned the lack of diversity above, and I think we probably have an abnormally high number of billionaires engaged with EA.

So when we account for how elite EA already is, and the risks of being elite, it seems quite possible that EA should be trying to be less elite.

Edit: see the comments for even more reasons why this is a tricky question!

In response to EA's Image Problem
Comment author: Larks 13 October 2015 12:02:52AM 0 points [-]

Suppose Clare is on £30K and gives away £15K to AMF, while Flo is on £300K and gives away £30K. Clare is arguably a more virtuous person because she has made a much bigger personal sacrifice for others, despite the fact that Flo does more absolute good.

This argument seems to rely on the decision to donate as being morally significant, but one's income as having no merit. However, that's simply not the case; people can change their income! Choosing to study a liberal arts degree, or work for a not-for-profit, or not ask for a raise because it's scary, are all choices. Similarly, changing your degree, aggressively pushing for more money, and taking a job in finance that doesn't make you feel emotionally fulfilled, are all choices. In the same way that giving a large % is a property of Clare that she deserves credit for, so too is earning a lot a property of Flo that she deserves credit for.

Now suppose Clare mistakenly believes that the most moral action possible is to give the money to disaster relief. Plausibly, Clare is still a more virtuous person than Flo because she has made a huge personal sacrifice for what she believed was right, and Flo has only made a small sacrifice by comparison.

In a similar way people who make serious sacrifices to help the homeless in their area may be better people than EAs who do more absolute good by donating.

You seem to associate virtue with self-sacrifice. I think this is a very unhealthy idea - the purpose of life is to live, not to die! EA offers a positive view of morality, where we have a great opportunity to improve the world. The height of morality is not a wastrel who, never having sought to improve their lot, sacrificed their life to achieve some tiny goal. Far better to be a striving Elon Musk, living a full life that massively helps others.

In response to comment by Larks on EA's Image Problem
Comment author: Maxdalton 13 October 2015 04:17:54PM 2 points [-]

I think you make a good point about virtue not being self-sacrifice, and I definitely see your first point too, particularly for lots of people currently involved in effective altruism.

However, of course people can only vary their income within certain limits. There are lots of people who may be earning as much as they possibly can, and yet still be earning something close to £15k, through no fault of their own. I'd aspire to an effective altruism that can accommodate these people too, and I think it's for people like this that Tom's point comes into play. However, I think that your caveat is really important for the many other people who have a higher upper limit on their earnings.
