Comment author: redmoonsoaring 18 March 2017 05:38:04PM 14 points [-]

While I see some value in detailing commonly-held positions like this post does, and I think the post is well-written, I want to flag my concern that it seems like a great example of a lot of effort going into creating content that nobody really disagrees with. This sort of cautious, heavily-qualified armchair writing doesn't seem to me like a very cost-effective use of EA resources, and I worry that we do a lot of it, partly because it's easy to do and gets a lot of positive social reinforcement, to a much greater degree than bold, empirical writing tends to get.

Comment author: Gentzel 18 March 2017 04:22:52PM 0 points [-]

That is the point.

The reason it is appropriate to call this ethical reaction time, rather than just reaction time, is that the focus of planning and optimization is on ethics and future goals. To react quickly to an opportunity that is hard to notice, you have to be looking for it.

Technical reaction time is a better name in some ways, but it implies too narrow a focus, while plain reaction time implies too wide a focus. There is probably a better name, though.

Comment author: remmelt  (EA Profile) 18 March 2017 11:21:30AM *  7 points [-]

Thank you for this article. My own concern is that I've personally had little access to guidance on movement building (in the Netherlands) from people more experienced or knowledgeable in this area. I've therefore had to work out for myself the risks and benefits of considerations like EA's 'strictness' vs. the number of people it appeals to. I don't think someone in a 'coordinator' role like mine should be expected to rigorously compile and evaluate research on movement building on his or her own.

The default position I started with last year was to get as many concrete EA actions happening as soon as possible (e.g. participants, donations, giving pledges, career changes, etc.) in order to create the highest multiplier that I could. What intuitively follows from that is doing lots of emotionally appealing marketing (and getting on TV if possible). I've encountered people at other local groups and national organisations who seemed to think this way, at least in part (I'm probably exaggerating this paragraph slightly, also to drive home my point).

I'd like to point out that in the last year, EA Flanders, EA France, EA Australia and EA Netherlands (and probably others I've missed) have launched. The number of local groups also still seems to be growing rapidly (I count 313 on EA Hubs, though some are missing and some are inactive). I think it would be a mistake to assume, even implicitly, that the entrepreneurial people taking initiative here will come to conclusions on their own that incorporate the currently known risks and benefits to the EA movement's future impact.

If the CEA Research Division is building expertise in this area, I would suggest it start offering the individual leaders of grassroots local and national groups (with, say, more than 20 members) short Skype calls in which it can share and discuss its insights on movement building as they apply to each local context.

I'd happily connect CEA researchers with other interested national group leaders to test this out (and be the first to sign up myself). Feel free to send me a quick message at remmeltellenis(at}gmail{dot)com (an alternative is to go through CEA Chapters/the Group Organisers call.)

Edits: some spelling & grammar nitpicks, further clarification and a call to action.

In response to comment by DiverWard on Open Thread #36
Comment author: Mac- 18 March 2017 01:08:42AM *  1 point [-]

I don't plan to retire, but I've recently been thinking about a related topic: what to do in very advanced age, when my health and abilities have deteriorated to the point that I am unable to cover the cost of living.

My current plan is to donate and gift my remaining assets and take a one-way trip on Mac's Morphine Express if I find that I've outlived my usefulness. But I'm not sure, and it's easier said than done.

Comment author: RobBensinger 17 March 2017 10:43:57PM *  3 points [-]

I think wild animal suffering isn't a long-term issue except in scenarios where we go extinct for non-AGI-related reasons. The three likeliest scenarios are:

  1. Humans leverage AGI-related technologies in a way that promotes human welfare as well as (non-human) animal welfare.

  2. Humans leverage AGI-related technologies in a way that promotes human welfare and is effectively indifferent to animal welfare.

  3. Humans accidentally use AGI-related technologies in a way that is indifferent to human and animal welfare.

In all three scenarios, the decision-makers are likely to have "ambitious" goals that favor seizing more and more resources. In scenario 2, efficient resource use almost certainly implies that biological human bodies and brains get switched out for computing hardware running humans, and that wild animals are replaced with more computing hardware, energy/cooling infrastructure, etc. Even if biological humans who need food stick around for some reason, it's unlikely that the optimal way to efficiently grow food in the long run will be "grow entire animals, wasting lots of energy on processes that don't directly increase the quantity or quality of the food transmitted to humans".

In scenario 1, wild animals might be euthanized, or uploaded to a substrate where they can live whatever number of high-quality lives seems best. This is by far the best scenario, especially for people who think (actual or potential) non-human animals might have at least some experiences that are of positive value, or at least some positive preferences that are worth fulfilling. I would consider this extremely likely if non-human animals are moral patients at all, though scenario 1 is also strongly preferable if we're uncertain about this question and want to hedge our bets.

Scenario 3 has the same impact on wild animals as scenario 2, and for analogous reasons: resource limitations make it costly to keep wild animals around. 3 is much worse than 2 because human welfare matters so much; even if the average present-day human life turned out to be net-negative, this would be a contingent fact that could be addressed by improving global welfare.

I consider scenario 2 much less likely than scenarios 1 and 3; my point in highlighting it is to note that scenario 2 is similarly good for the purpose of preventing wild animal suffering. I also consider scenario 2 vastly more likely than "sadistic" scenarios where some agent is exerting deliberate effort to produce more suffering in the world, for non-instrumental reasons.

In response to Open Thread #36
Comment author: kbog  (EA Profile) 17 March 2017 09:54:31PM 1 point [-]

It seems like the primary factor driving retirement planning for us is uncertainty over the course of civilization. We don't know when or if a longevity horizon will arrive, what kinds of work we'll be able to do in our old age, whether serious tech progress or a singularity will occur, whether humanity will survive, or what kinds of welfare policies we can expect. Generally speaking, welfare and safety nets are progressing in the West, and the economies of the US and other countries are expected to double within half a century, IIRC. Personally, I think that if you have a few decades left before retirement would be necessary, then it's reasonable to donate all income; if there still seems to be a need to save for retirement later on, you can forgo donations entirely and save a solid 30% or so of your income, roughly what you used to spend on donations.
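(A minimal back-of-the-envelope sketch of that last suggestion; the income, ages and return rate below are hypothetical assumptions for illustration, not figures from this comment:)

```python
# Rough compounding sketch, not a recommendation: what saving ~30% of income
# late in a career might accumulate. All numbers are assumed for illustration.

def savings_at_retirement(income=50_000, savings_rate=0.30,
                          start_age=50, retire_age=67, real_return=0.03):
    """Accumulate annual savings with compound real returns until retirement."""
    balance = 0.0
    for _ in range(retire_age - start_age):
        balance = balance * (1 + real_return) + income * savings_rate
    return balance

if __name__ == "__main__":
    # 17 years of saving 30% of a $50k income at a 3% real return:
    print(f"${savings_at_retirement():,.0f}")  # roughly $330,000 under these assumptions
```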

Comment author: AlasdairGives 17 March 2017 07:25:11PM 1 point [-]

I've deleted the post because I would like to make one on this issue with greater subtlety and nuance, to do the complex topic of this saga better justice than my rather late-night post did. Thanks for your comment; I will take it into account.

Comment author: kbog  (EA Profile) 17 March 2017 05:21:55PM *  3 points [-]

What's ill-founded is that if you want to point out a problem where people affiliate with NU orgs that promote values which increase risk of terror,

But they do not increase the risk of terror. Have you studied terrorism? Do you know where it comes from and how to combat it? As someone who actually has (US military, international relations), I can tell you that this whole thing is beyond silly. Radicalization is a process, not a mere matter of reading philosophical papers, and it involves structural factors among disenfranchised people and communities as well as the use of explicitly radicalizing media. And it is used primarily as a tool for a broad variety of political ends, which could easily include the ends that all kinds of EAs espouse. Very rarely is destruction itself the objective of terrorism. Terrorism also generally happens when actors feel they lack access to legitimate channels for influencing policy. The way people have leapt into discussing this topic without considering these basic facts shows that they don't have the relevant expertise to draw conclusions on it.

Calling it "unnecessary" to treat that org is then a blatant non-sequitur, whether you call it an argument or an assertion is up to you.

But Austen did not say "Not supporting terrorism should be an EA value." He said that not causing harm should be an EA value.

Our ability to discern good arguments even when we don't like them is what sets us apart from the post-fact age we're increasingly surrounded by.

There are many distinctions between EA and whatever you mean by the (new?) "post-fact age", but responding seriously to what essentially amounts to trolling doesn't seem like a necessary one.

It's important to focus on these things when people are being tribal, because that's when it's hard.

That doesn't make any sense. Why should we focus more on things just because they're hard? Doesn't it make more sense to put effort somewhere where things are easier, so that we get more return on our efforts?

If you only engage with facts when it's easy, then you're going to end up mistaken about many of the most important issues.

But that seems wrong: one person's complaints about NU, for instance, aren't among the most important issues. At the same time, we have perfectly good discussions of very important facts about cause prioritization on this forum where people are much more mature and reasonable than, say, Austen is here. So there isn't a general relationship between how important a fact is and how disruptive commentators are when discussing it. At a minimum, one could start from a faux clean slate, opening a new discussion separate from the original instigator - something which takes no time at all and allows a bit of a psychological restart. That seems strictly, if only slightly, better than encouraging trolling.

Comment author: kbog  (EA Profile) 17 March 2017 05:17:11PM *  -1 points [-]

The problem is that some EAs would have the amount of life in the universe reduced to zero permanently. (And don't downvote this unless you personally know this to be false - it is unfortunately true)

It's a spurious standard. You seem to be drawing a line between mass termination of life and permanent mass termination of life just to make sure that FRI falls on the wrong side of the line. It seems like either could support 'terrorism'. Animal liberationists actually do have a track record of engaging in various acts of violence and disruption. The fact that their interests aren't as comprehensive as some NUs' doesn't change this.

"An issue"? Austen was referring to problems where an organization affiliates with particular organizations that cause terror risk, which you don't seem to have discussed anywhere.

I'm not sure why the fact that my comment didn't discuss terrorism implies that it fails to be a good example of raising a point without an example.

For this particular issue, FRI is an illustrative and irreplaceable example, although perhaps you could suggest an alternative way of raising this concern?

""Not causing harm" should be one of the EA values?" Though it probably falls perfectly well under commitment to others anyway.

Comment author: vipulnaik 17 March 2017 03:33:58PM 2 points [-]

Commenting here to avoid a misconception that some readers of this post might have. I wasn't trying to "spread effective altruism" to any community with these editing efforts, least of all the Wikipedia community (it's also worth noting that the Wikipedia community that participates in these debates is basically disjoint from the people who actually read those specific pages in practice -- many of the latter don't even have Wikipedia accounts).

Some of the editing activities were related to effective altruism in two ways: (1) the pages we edited, and the content we added, were disproportionately (though not exclusively) of interest to people in and around the EA-sphere, and (2) I selected some of the topics worked on based on EA-aligned interests (an example would be global health and disease timelines).

Comment author: Raemon 17 March 2017 03:04:41PM 1 point [-]

Very much agreed. I was pretty worried to see the initial responses saying 'saving for retirement isn't EA'.

Comment author: Andy_Schultz 17 March 2017 02:14:32PM 0 points [-]

Any word on the global health budget decisions?

Comment author: inconvenient 17 March 2017 11:39:28AM *  0 points [-]

The problem is that some EAs would have the amount of life in the universe reduced to zero permanently. (And don't downvote this unless you personally know this to be false - it is unfortunately true)

If not, then it is a necessary example, plain and simple.

But it is not necessary - as you can see elsewhere in this thread, I raised an issue without providing an example at all.

"An issue"? Austen was referring to problems where an organization affiliates with particular organizations that cause terror risk, which you don't seem to have discussed anywhere. For this particular issue, FRI is an illustrative and irreplaceable example, although perhaps you could suggest an alternative way of raising this concern?

Comment author: inconvenient 17 March 2017 11:34:48AM *  0 points [-]

He wrote a single sentence pointing out that the parent comment was giving FRI an unfair and unnecessary treatment. I don't see what's "ill founded" about that.

What's ill-founded is that if you want to point out a problem where people affiliate with NU orgs that promote values which increase risk of terror, then it's obviously necessary to name the orgs. Calling it "unnecessary" to treat that org is then a blatant non-sequitur, whether you call it an argument or an assertion is up to you.

Why is it more important now than in normal discourse? If someone decides to be deliberately obtuse and disrespectful, isn't that the best time to revert to tribalism and ignore what they have to say?

Our ability to discern good arguments even when we don't like them is what sets us apart from the post-fact age we're increasingly surrounded by. It's important to focus on these things when people are being tribal, because that's when it's hard. If you only engage with facts when it's easy, then you're going to end up mistaken about many of the most important issues.

Comment author: kbog  (EA Profile) 17 March 2017 07:55:01AM *  2 points [-]

If it was the case that FRI was accurately characterized here, then do we know of other EA orgs that would promote mass termination of life?

Sure. MFA, ACE and other animal charities plan to drastically reduce or even eliminate entirely the population of farm animals. And poverty reduction charities drastically reduce the number of wild animals.

If not, then it is a necessary example, plain and simple.

But it is not necessary - as you can see elsewhere in this thread, I raised an issue without providing an example at all.

In response to Open Thread #36
Comment author: John_Maxwell_IV 17 March 2017 07:41:29AM *  7 points [-]

In addition to retirement planning, if you're down with transhumanism, consider attempting to maximize your lifespan so you can personally enjoy the fruits of x-risk reduction (and get your selfish & altruistic selves on the same page). Here's a list of tips.

With regard to early retirement, an important question is how you'd spend your time if you were to retire early. I recently argued that more EAs should be working at relaxed jobs or saving up funds in order to work on "projects", to solve problems that are neither dollar-shaped nor career-shaped (note: this may be a self-serving argument since this is an idea that appeals to me personally).

I can't speak for other people, but I've been philosophically EA for something like 10 years now. I started from a position of extreme self-sacrifice and have basically been updating continuously away from that for the past 10 years. A handwavey argument for this: If we expect impact to have a Pareto distribution, a big concern should be maximizing the probability that you're able to have a 100x or more impact relative to baseline. In order to have that kind of impact, you will want to learn a way to operate at peak performance, probably for extended periods of time. Peak performance looks different for different people, but I'm skeptical of any lifestyle that feels like it's grinding you down rather than building you up. (This book has some interesting ideas.)
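(A minimal sketch of that handwavey Pareto intuition; the shape parameter and sample size below are arbitrary assumptions, not anything claimed in this comment. The point it illustrates: under a heavy-tailed impact distribution, a small fraction of draws carries a large share of the total, which is why raising your odds of landing in that tail can matter more than steady marginal effort.)

```python
# Illustrative only: under a Pareto (heavy-tailed) distribution, the top few
# percent of draws account for a large share of the total. alpha and n are
# arbitrary assumptions chosen for illustration.

import random

def pareto_tail_share(n=100_000, alpha=1.2, top_fraction=0.01, seed=0):
    """Fraction of total 'impact' contributed by the top `top_fraction` of draws."""
    rng = random.Random(seed)
    draws = sorted((rng.paretovariate(alpha) for _ in range(n)), reverse=True)
    top = draws[: int(n * top_fraction)]
    return sum(top) / sum(draws)

if __name__ == "__main__":
    print(f"Top 1% share of total: {pareto_tail_share():.0%}")  # typically ~40-50% with alpha=1.2
```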

In principle, I don't think there needs to be a big tradeoff between selfish and altruistic motives. Selfishly, it's nice to have a purpose that gives your life meaning, and EA does that much better than anything else I've found. Altruistically, being miserable is not great for productivity.

One form of self-sacrifice I do endorse is severely limiting "superstimuli" like video games, dessert, etc. I find that after allowing my "hedonic treadmill" to adjust for a few weeks, this doesn't actually represent much of a sacrifice. Here are some thoughts on getting this to work.

Comment author: kbog  (EA Profile) 17 March 2017 07:41:05AM 0 points [-]

Roman Yampolskiy's shortlist of potential agents who could bring about an end to the world (https://arxiv.org/ftp/arxiv/papers/1605/1605.02817.pdf) also includes Military, Government, Corporations, Villains, Black Hats, Doomsday Cults, Depressed, Psychopaths, Criminals, AI Risk Deniers, and AI Safety Researchers.

Comment author: kbog  (EA Profile) 17 March 2017 07:27:48AM *  2 points [-]

Soeren didn't give an argument. He wrote a single sentence pointing out that the parent comment was giving FRI an unfair and unnecessary treatment. I don't see what's "ill founded" about that.

It seems really important in situations like this that people vote on what they believe to be true based on reason and evidence, not based on uninformed guesses and motivated reasoning.

Why is it more important now than in normal discourse? If someone decides to be deliberately obtuse and disrespectful, isn't that the best time to revert to tribalism and ignore what they have to say?

Comment author: kbog  (EA Profile) 17 March 2017 07:24:00AM *  0 points [-]

Benatar is a nonconsequentialist. At least, the antinatalist argument he gives is nonconsequentialist - grounded in rules of consent.

Not sure why that matters though. It just underscores a long tradition of nonconsequentialists who have ideas which are similar to negative utilitarianism. Austen's restriction of the question to NU just excludes obviously relevant examples such as VHEMT.

Comment author: John_Maxwell_IV 17 March 2017 06:57:47AM *  1 point [-]

It looks like this is the link to the discussion of "Vipul's paid editing enterprise". Based on a quick skim,

this has fallen afoul of the wikipedia COI rules in spectacular fashion - with wikipedia administrators condemning the work as a pyramid scheme

strikes me as something of an overstatement. For example, one quote:

In general, I think Vipul's enterprise illustrates a need to change the policy on paid editors rather than evidence of misconduct.

Anyway, if it's true that Vipul's work on Wikipedia has ended up doing more harm than good, this doesn't make me optimistic about other EA projects.
