Comment author: arrowind 31 May 2015 05:50:07PM 0 points

Isn't that a common distinction among philosophers? I recall that there's a technical name for it.

Comment author: Katja_Grace 02 June 2015 04:23:29AM 1 point

Yeah, and among common intuitions I think. But I thought EAs were mostly consequentialists, so the intended role of obligations is not obvious to me.

Comment author: Katja_Grace 28 May 2015 04:22:24PM 3 points

I'm curious about the implicit framework where some things are obligatory and some things are choices.


Impact Purchase: Round 2

Round 2 of the impact purchase is over. At the deadline, we had twelve submissions. This round, we are buying a certificate for 1/70th of Ryan Carey and Brayden McLean's founding of and involvement in EA Melbourne during 2013, for $1000. The deadline for applications to round 3 is May 25th. Apply ... Read More
Comment author: Evan_Gaensbauer 10 April 2015 04:51:07AM 1 point

You didn't explain in your post your rationale for not purchasing Joao Fabiano's work. For what reasons did you rule it out? Difficulty in evaluation?

Comment author: Katja_Grace 10 April 2015 03:30:03PM 3 points

We evaluated all of the projects other than the three I specifically mentioned not evaluating. Sorry for not writing up the other evaluations - we just didn't have time. We bought the ones that gave us the most impact per dollar, according to our evaluations (and based on the prices people wanted for their work). So we didn't purchase Joao's work this round because we calculated that it was somewhat less cost-effective than the things we did purchase, given the price. We may still purchase it in a later round.


Impact purchase first round results

(Crossposted from The Impact Purchase) The first round of the 2015 Impact Purchase had eight submissions, including research, translation, party planning, mentoring, teaching, and money to GiveDirectly. We expected the evaluations would have to be rough, and would like to emphasize that they really were rough: we had to consider lots of things very quickly to get through... Read More

The economy of weirdness

It is often said that you should spend your weirdness budget wisely. You should wear a gender-appropriate suit, and follow culture-appropriate sports, and use good grammar, and be non-specifically spiritual, and support moderate policies, and not have any tattoos around either of your eyes. And then on the odd occasion, when it happens to come up,... Read More

When should an Effective Altruist be vegetarian?

Crossposted from Meteuphoric. I have lately noticed several people wondering why more Effective Altruists are not vegetarians. I am personally not a vegetarian because I don't think it is an effective way to be altruistic. As far as I can tell, the fact that many EAs are not vegetarians is surprising to some because they think 'animals... Read More
Comment author: Evan_Gaensbauer 17 October 2014 12:21:37PM * 0 points

They care more about the people around them than those far away, or they care more about some kinds of problems than others, and they care about how things are done, not just the outcome.

It seems to me that part of effective altruism has been not just increasing the effectiveness of altruism by recommending that people change their actions, or redirect their philanthropic dollars to interventions with higher leverage, but also pointing out that people would be more effective if they changed their values. For example, Peter Singer's 'expanding circle', meat-free diet advocacy, etc.

People don't like to be told they need to change their values, or that they should change their values, or that the world would be a better place if they had some values they don't already have. One's values tend to be near the core of one's social identity, so an attack on values can be perceived as an attack on the self. The obvious example is the friend who resents vegetarians for pointing out how bad eating meat is: that friend raises no particular philosophical objection, but simply doesn't like being called out for doing something they were raised to think of as normal.

Comment author: Katja_Grace 08 November 2014 10:14:09AM 2 points

Changing one's values does not more effectively promote the values one started with, so it seems one should be averse to it. I think the expanding circle case is more complicated: the advocates of a wider circle are trying to convince the others that those others are mistaken about their own existing values, and that by consistency they must care about some entities they think they don't care about. This is why the phenomenon looks like an expanding circle - points just outside a circle look a lot like points just inside it, so consistency pushes the circle outwards (though this doesn't explain why the circle expands rather than contracts).

Comment author: Katja_Grace 07 October 2014 04:09:02PM 1 point

It seems there are some common situations where this comes up. One person is doing a thing they think is good, given personal constraints which are hidden from their conversation partner, and worries that they are being harshly judged because the constraints are hidden. Or one person is trying out a thing because they think it might be very good, though they don't yet think it is very good (except for its value of information), and worries that others think they are actually advocating for something suboptimal. Or one person doesn't think what they are doing is likely to be optimal, but struggles to find something actually better that they could feasibly do.

Perhaps it would be helpful if there were a thing you could say in these recognized circumstances, to let your conversation partner know that you know that what you are doing doesn't look optimal, and that you are already aware of the situation.

Comment author: atucker 01 October 2014 12:02:36AM * 9 points

I agree with your points about there being disagreement about EA, but I don't think that they fully explain why people didn't come up with it earlier.

I think there are two things going on here. One is that the idea of thinking critically about how to improve other people's lives, without much consideration of who they are or where they live, and then acting on that thinking, isn't actually new. The other is that the particular style in which the EA community pursues that idea (looking for interventions with robust academic evidence of efficacy, and then supporting organizations implementing those interventions that accountably have a high amount of intervention per marginal dollar) is novel, but mostly because the cultural background for it seeming possible as an option at all is new.

To the first point, I'll just list Ethical Culture, the Methodists, John Stuart Mill's involvement with the East India Company, communists, Jesuits, and maybe some empires. I could go into more detail, but doing so would require more research than I want to do tonight.

To the second point, I don't think that anything resembling modern academic social science existed until relatively recently (around the 1890s?), and so prior to that there was nothing resembling peer-reviewed academic evidence about the efficacy of an intervention.

Given time for those methods to develop (and be interrupted by two world wars), we find that "evidence" in this sense was not actually available until fairly recently. Prior to that, people had reasons for thinking their ideas were likely to work (and maybe even be the most effective plans), but those reasons would not constitute well-supported evidence in the sense used by the current EA community.

Also, the internet makes it much easier for people with relatively rare opinions to find each other, and enables far more transparency than was possible before.

Comment author: Katja_Grace 02 October 2014 07:17:03AM 2 points

the other is that the particular style in which the EA community pursues that idea (looking for interventions with robust academic evidence of efficacy, and then supporting organizations implementing those interventions that accountably have a high amount of intervention per marginal dollar) is novel, but mostly because the cultural background for it seeming possible as an option at all is new.

The kinds of evidence available for some EA interventions, e.g. existential risk ones, don't seem different in kind from the evidence that was probably available earlier in history. Even in the best cases, EAs often have to lean on a combination of more rigorous evidence and some not very rigorous or evidenced guesses about how indirect effects work out, etc. So if the more rigorous evidence available were substantially less rigorous than it is, I would expect things to look pretty much the same, with us just having lower standards - e.g. only being willing to trust certain people's reports of how interventions were going. So I'm not convinced that some recently attained level of good evidence has much to do with the overall phenomenon of EA.
