In response to comment by Telofy  (EA Profile) on Why I left EA
Comment author: kbog (EA Profile) 20 February 2017 09:34:20PM 1 point

it’s also important to prevent the people who are not sufficiently aligned from taking it – for the sake of the movement

How so?

If they're not aligned then they'll eventually leave. Along the way, hopefully they'll contribute something.

It would be a problem if we loosened our standards and weakened the movement to accommodate them. But I don't see what's harmful about someone thinking that EA is for them, exploring it and then later deciding otherwise.

and for their own sake.

Seriously? We're trying to make the world a better place as effectively as possible. I don't think that ensuring convenience for privileged Western people who are wandering through social movements is important.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Telofy (EA Profile) 21 February 2017 09:20:04AM 2 points

There be dragons! Dragons with headaches!

I think the discussion that has emerged here is about an orthogonal point from the one I wanted to make.

Seriously? We're trying to make the world a better place as effectively as possible. I don't think that ensuring convenience for privileged Western people who are wandering through social movements is important.

A year ago I would’ve simply agreed or said the same thing, and there would’ve been no second level to my decision process, but after reading about religious and movement dynamics (e.g., most recently in The Righteous Mind), my perspective has been joined by a more cooperation-based, strategic one.

So I certainly agree with you that I care incomparably more about reducing suffering than about pandering to some privileged person’s divergent moral goals, but here are some more things I currently believe:

  1. The EA movement has a huge potential to reduce suffering (and further related moral goals).
  2. All the effort we put into strengthening the movement will fall far short of its potential if the movement degenerates into infighting/fragmentation, lethargy, value drift, signaling contests, a zero-sum game, or any of various other failure modes.
  3. People losing interest in EA, or even leaving with a loud, public bang, is really, really bad for cohesion within the movement.

When someone just sort of silently loses interest in EA, they’ll pull some of their social circle after them, at least to some degree. When someone leaves with a loud, public bang, they’ll likely pull even more people after them.

If I may, for the moment, redefine “self-interested” to include the “self-interested” pursuit of altruistic goals at the expense of other people’s (selfish and nonselfish) goals, then such a “self-interested” approach will run us into several of the walls or failure modes above:

  1. Lethargy will ensue when enough people publicly and privately drop out of the movement that those who remain are disillusioned, pessimistic, and unmotivated. They may come to feel like the EA project has failed or is about to, and so won’t want to invest in it anymore. Maybe they’d rather join some adjacent movement or an object-level organization, but the potential of the consolidated EA movement will be lost.
  2. Infighting or fragmentation will result when people try to defend their EA identity. Someone may think, “Yeah, I identify with core EA, but those animal advocacy people are all delusional, overconfident, controversy-seeking, etc.” because they want to defend their ingrained identity (EA) but are not cooperative enough to collaborate with people with slightly different moral goals. I increasingly have the feeling that the whole talk of ACE being overconfident is just a meme perpetuated by people who haven’t been following ACE or animal advocacy closely.
  3. Value drift can ensue when people with new moral goals join the movement and gradually change it to their liking. It happens when we moral-trade away too much of our actual moral goals.
  4. But if we trade away too little, we’ll create enemies, resulting in more and more zero-sum fights with groups with other moral goals.

The failure modes most relevant to this post are lethargy and zero-sum fights:

If they're not aligned then they'll eventually leave. Along the way, hopefully they'll contribute something.

Someone who finds out that they actually don’t care about EA will feel exploited by such an approach. They’ll further my moral goal of reducing suffering for the time they’re around, but if they’re, e.g., a Kantian, they’ll afterwards feel instrumentalized and become a more or less vocal opponent. That’s probably more costly for us than whatever they may have contributed along the way, unless their contribution was as trajectory-changing as I think movement building (or movement destroying) can be.

So I should’ve clarified, also in the interest of cooperation: I care incomparably more about reducing suffering than about pandering to the divergent moral goals of “privileged Western people.” But they are powerful, they’re reading this thread, and they want to be respected, or they’ll cause us great costs in suffering we’ll fail to reduce.

In response to Why I left EA
Comment author: Telofy (EA Profile) 20 February 2017 07:22:42PM 1 point

Should we maybe take this as a sign that EA needs to become more like Aspirin, or many other types of medicine? I just checked an Aspirin leaflet, and it clearly states exactly what Aspirin is for. The common “doing the most good” slogan falls short of that.

The definition from the FAQ is better, especially in combination with the additional clarifications below on the page:

Effective altruism is using evidence and analysis to take actions that help others as much as possible.

We’ve focused a lot on finding (with high recall) all the value aligned people who find EA to be exactly the thing they’ve been looking for all their lives, but just like with medicine, it’s also important to prevent the people who are not sufficiently aligned from taking it – for the sake of the movement and for their own sake.

Aspirin may be a good example because it’s not known for any terrible side effects, but if someone takes it for some unrelated ailment, they’ll be disillusioned and angry about their investment.

Do we need to be more clear not only about who EA is for but also who EA is probably not for?

In response to comment by Telofy  (EA Profile) on Anonymous EA comments
Comment author: Ben_Todd 09 February 2017 10:42:18AM 6 points

My impression is that many of the founders of the movement are moral realists and professional moral philosophers e.g. Peter Singer published a book arguing for moral realism in 2014 ("The Point of View of the Universe").

Comment author: Telofy (EA Profile) 10 February 2017 04:01:39PM 0 points

Ah, cool! I should read it.

Comment author: RobBensinger 07 February 2017 11:03:25PM 3 points

Anonymous #37:

I would like to see more humility from people involved in effective altruism regarding metaethics, or at least better explanations for why EAs' metaethical positions are what they are. Among smart friends and family members of mine whom I've tried to convince of EA ideas, the most common complaint is, 'But that's not what I think is good!' I think this is a reasonable complaint, and I'd like it if we acknowledged it in more introductory material and in more of our conversations.

More broadly, I think that rather than having a 'lying problem,' EA has an 'epistemic humility problem' -- both around philosophical questions and around empirical ones, and on both the community level and the individual level.

Comment author: Telofy (EA Profile) 08 February 2017 09:37:48PM 0 points

It's fascinating how diverse the movement is in this regard. I've only found a single moral realist EA who had thought about metaethics and could argue for it. Most EAs around me are antirealists or haven't thought about it.

(I'm antirealist because I don't know any convincing arguments to the contrary.)

Comment author: RobBensinger 07 February 2017 10:38:52PM 4 points

Anonymous #8:

If I could change the effective altruism community tomorrow, I would move it somewhere other than the Bay Area, or at least make it more widely known that moving to the Bay is defecting in a tragedy of the commons and makes you Bad.

If there were large and thriving EA communities all over the place, nobody would need to move to the Bay, we'd have better outreach to a number of communities, and fewer people would have to move a long distance, get US visas, or pay a high rent in order to get seriously involved in EA. The more people move to the Bay, the harder it is to be outside the Bay, because of the lack of community. If everyone cooperated in developing relatively local communities, rather than moving to the Bay, there'd be no need to move to the Bay in the first place. But we, a community that fangirls over 'Meditations on Moloch' (http://slatestarcodex.com/2014/07/30/meditations-on-moloch/) and prides itself on working together to get shit done, can't even cooperate on this simple thing.

I know people who are heartbroken and depressed because they need community and all their partners are in the Bay and they want to contribute, but they can't get a US visa or they can't afford Bay Area rent levels, so they're stuck friendless and alone in whatever shitty place they were born in. This should not be a hard problem to solve if we apply even a little thought and effort to it; any minimally competent community could pull this off.

Comment author: Telofy (EA Profile) 08 February 2017 08:46:34PM 2 points

I can recommend Berlin! Also biased. ;-)

Comment author: Daniel_Eth 08 February 2017 07:35:29AM 8 points

This. As a meat-eating EA who personally does think animal suffering is a big deal, I've found the attitude from some animal rights EAs to be quite annoying. I personally believe that the diet I eat is A) healthier than if I was vegan and B) allows me to be more focussed and productive than if I was vegan, allowing me to do more good overall. I'm more than happy to debate that with anyone who disagrees (and most EAs who are vegan are civil and respect this view), but I have encountered some EAs who refuse to believe that there's any possibility of either A) or B) being true, which feels quite dismissive.

Contrast that attitude to what happened recently at a Los Angeles EA meetup where we went for dinner. Before ordering, I asked around if anyone was vegan since if there was anyone who was, I didn't want to eat meat in front of them and offend them. The person next to me said he was vegan, but that if I wanted meat I should order it since "we're all adults and we want the community to be as inclusive as it can." I decided to get a vegan dish anyway, but having him say that made me feel more welcome.

Comment author: Telofy (EA Profile) 08 February 2017 08:30:42PM 6 points

Before ordering, I asked around if anyone was vegan since if there was anyone who was, I didn't want to eat meat in front of them and offend them.

Oh wow, thank you! That’s so awesome of you! I greatly appreciate it!

Comment author: RobBensinger 07 February 2017 10:49:49PM 13 points

Anonymous #28:

I have really positive feelings towards the effective altruism community on the whole. I think EA is one of the most important ideas out there right now.

However, I think that there is a lot of hostility in the movement towards those of us who started off as 'ineffective altruists,' as opposed to coming from the more typical Silicon Valley perspective. I have a high IQ, but I struggled through college and had to drop out of a STEM program as a result of serious mental health disturbances. After college, I wanted to make a difference, so I've spent my time since then working in crisis homeless shelters. I've broken up fistfights, intervened in heroin overdoses, received 2am death threats from paranoid meth addicts, mopped up the blood from miscarriages. I know that the work I've done isn't as effective as what the Against Malaria Foundation does, but I've still worked really hard to help people, and I've found that my peers in the movement have been very dismissive of it.

I'm really looking to build skills in an area where I can do more effective direct work. I keep hearing that the movement is talent-constrained, but it isn't clearly explained anywhere what the talent constraints are, specifically. I went to EA Global hoping for career advice -- an expensive choice for someone in social work! -- but even talking one-on-one with Ben Todd, I didn't get any actionable advice. There's a lot of advice out there for people who are interested in earning to give, and for anyone who already has great career prospects, but for fuck-ups like me, there doesn't seem to be any advice on skills to develop, how to go back to school, or anything of that kind.

When I've tried so hard to get any actionable advice whatsoever about what I should do, and nobody has any, and yet there's nothing but contempt for people in social work or doing local volunteer work to make a difference -- that's a movement that isn't accessible to me, and isn't accessible to a lot of people, and it makes me want to ragequit. If you don't respect the backbreaking work I've done for years while attempting to help people, that's fine, but please have some kind of halfway viable advice for what I should be doing instead if you're going to dismiss what I'm currently doing as ineffective.

Comment author: Telofy (EA Profile) 08 February 2017 08:15:16PM 10 points

I want to hug this person so much!

Comment author: Telofy (EA Profile) 05 February 2017 08:25:53PM 5 points

Agreed. You can also add the Effective Altruism Foundation to your list. One of its strategies is to try out many high-risk, high-reward interventions, especially in the animal advocacy space, to reap the value of information from these experiments and to profit from the potentially greater neglectedness due to the risk aversion of most other actors.

The Foundational Research Institute is also run by EAF.

(I used to work for EAF.)

Comment author: Linch 24 January 2017 08:22:11AM 1 point [-]

The hitchhiker is mentioned in Chapter One of Reasons and Persons. Interestingly, Parfit was more interested in the moral implications than the decision-theory ones.

Comment author: Telofy (EA Profile) 05 February 2017 09:54:42AM 1 point

Thanks!

Comment author: Kathy 20 January 2017 02:09:01PM 2 points

I agree that most people will not understand the most strange ideas until they understand the basic ideas. Ensuring they understand the foundation is a good practice.

I definitely agree that the instances of weirdness that are beneficial are only a tiny fraction of the weirdness that is present.

Regarding weirdness:

There are effective and ineffective ways to be weird.

There are several apparently contradictory guidelines in art: "use design principles", "break the conventions", and "make sure everything looks intentional".

The effective ways to be weird manage all three guidelines.

Examples: Picasso, Björk, Lady Gaga

One of the major and most observable differences between these three artists vs. many weird people is that the behavior of the artists can be interpreted as a communication about something specific, meaningful, and valuable. Art is a language. Everything strange we do speaks about us. If you haven't studied art, it might be rather hard to interpret the above three artists. The language of art is sometimes completely opaque to non-artists, and those who interpret art often find a variety of different meanings rather than a consistent one. (I guess that's one reason why they don't call it science.) Quick interpretations: In Picasso, I interpret an exploration of order and chaos. In Björk, I interpret an exploration of the strangeness of nature, the familiarity and necessity of nature, and the contradiction between the two. In Lady Gaga, I interpret an edgy exploration of identity.

These artists have the skill to say something of meaning as they follow principles and break conventions in a way that looks intentional. That is why art is a different experience from, say, looking at an odd-shaped mud splatter on the sidewalk, and why it can be a lot more special.

Ineffective weirdness is too similar to the odd-shaped mud splatter. There need to be signs of intentional communication. To interpret meaning, we need to see that combination of unbroken principles and broken conventions arranged in an intentional-looking pattern.

Comment author: Telofy (EA Profile) 05 February 2017 09:53:39AM 1 point

Fascinating! Thanks for the summary of how you interpret these artists! But even though I didn’t have any insight into their work, I think I still understand what you’re trying to explain based on other experiences. But there I encounter another hurdle, probably parallel to my lacking understanding of these artists’ work.

I’ve been surrounded by design all my life, so I can look at a poster and see that it looks intentional but I can try as I may to create something of the sort myself and still see that it’s not even close. But that’s not actually what I want to say. What I want to say is rather that my exposure seems to have taught me to recognize something even though I don’t understand how it works. That’s a huge advantage for designers or artists who want to speak to me or to any other nonspecialist.

I’m afraid, however, that a lot of EA concepts that I would like to impart are too far removed by inferential distance for most people to ever recognize any intentionality. I hope I’m wrong. My experience with the board game Othello is quite aligned, though: I used to be pretty good, so when looking at some games of players better than me, I would see a move that would give me shivers and make me stare at the board in awe. I didn’t understand it, but it was surprising (“break the conventions”) and looked perfectly intentional. At the same time, it was usually clear to me when one of these better players just accidentally clicked the wrong field. If I hadn’t been pretty good at the game, though, I would’ve seen just a random chaos of black and white chips.

There was some study where people were asked to solve a number of hard language tasks, some of them unsolvable. Somehow people had an intuition for which tasks were solvable long before they managed to actually solve them. Maybe that is related to the effect that artists are using. But again it only worked because these people already had a lot of background in language.

Maybe the only ones whose interest in EA we can possibly pique using the most fine-tuned types of weirdness are a small fraction of young progressives at universities, and not even just for reasons of moral differences but because we can’t communicate EA ideas effectively enough to anyone else.

I should’ve phrased this as a challenge. :-3
