Comment author: kbog  (EA Profile) 24 March 2017 01:06:35AM 2 points

Gabriel argues that the effective altruism community should heed the issue of moral disagreement

Nobody told him that MacAskill has done some of the most serious recent work on this?

Typo at the bottom of page 10 (should be "two problems" not "two problem").

In response to Open Thread #36
Comment author: kbog  (EA Profile) 17 March 2017 09:54:31PM 1 point

It seems like the primary factor driving retirement planning for us is uncertainty over the course of civilization. We don't know when or if a longevity horizon will arise, what kinds of work we'll be able to do in our old age in the future, whether serious tech progress or a singularity will occur, whether humanity will survive, or what kinds of welfare policies we can expect. Generally speaking, welfare and safety nets are progressing in the West, and the economies of the US and other countries are expected to double within half a century IIRC. Personally, I think that if you have a few decades left before retirement would be necessary, then it's reasonable to donate all income; if there still seems to be a need to save for retirement in the future, then you can forgo donations entirely and save a solid 30% or so of your income, the same share you used to spend on donations.

Comment author: inconvenient 17 March 2017 11:34:48AM *  0 points

He wrote a single sentence pointing out that the parent comment was giving FRI an unfair and unnecessary treatment. I don't see what's "ill-founded" about that.

What's ill-founded is that if you want to point out a problem where people affiliate with NU orgs that promote values which increase risk of terror, then it's obviously necessary to name the orgs. Calling it "unnecessary" to treat that org is then a blatant non sequitur; whether you call it an argument or an assertion is up to you.

Why is it more important now than in normal discourse? If someone decides to be deliberately obtuse and disrespectful, isn't that the best time to revert to tribalism and ignore what they have to say?

Our ability to discern good arguments even when we don't like them is what sets us apart from the post-fact age we're increasingly surrounded by. It's important to focus on these things when people are being tribal, because that's when it's hard. If you only engage with facts when it's easy, then you're going to end up mistaken about many of the most important issues.

Comment author: kbog  (EA Profile) 17 March 2017 05:21:55PM *  3 points

What's ill-founded is that if you want to point out a problem where people affiliate with NU orgs that promote values which increase risk of terror,

But they do not increase the risk of terror. Have you studied terrorism? Do you know where it comes from and how to combat it? As someone who actually has (US military, international relations), I can tell you that this whole thing is beyond silly. Radicalization is a process, not a mere matter of reading philosophical papers, and it involves structural factors among disenfranchised people and communities as well as the use of explicitly radicalizing media. And it is used primarily as a tool for a broad variety of political ends, which could easily include the ends which all kinds of EAs espouse. Very rarely is destruction itself the objective of terrorism. Also, terrorism generally happens as a result of actors feeling that they lack access to legitimate channels of influencing policy. The way that people have leapt to discussing this topic without considering these basic facts shows that they don't have the relevant expertise to draw conclusions on it.

Calling it "unnecessary" to treat that org is then a blatant non-sequitur, whether you call it an argument or an assertion is up to you.

But Austen did not say "Not supporting terrorism should be an EA value." He said that not causing harm should be an EA value.

Our ability to discern good arguments even when we don't like them is what sets us apart from the post-fact age we're increasingly surrounded by.

There are many distinctions between EA and whatever you mean by the (new?) "post-fact age", but responding seriously to what essentially amounts to trolling doesn't seem like a necessary one.

It's important to focus on these things when people are being tribal, because that's when it's hard.

That doesn't make any sense. Why should we focus more on things just because they're hard? Doesn't it make more sense to put effort where things are easier, so that we get more return on our efforts?

If you only engage with facts when it's easy, then you're going to end up mistaken about many of the most important issues.

But that seems wrong: one person's complaints about NU, for instance, aren't one of the most important issues. At the same time, we have perfectly good discussions of very important facts about cause prioritization in this forum where people are much more mature and reasonable than, say, Austen here is. So it seems like there isn't a general relationship between how important a fact is and how disruptive commentators are when discussing it. At the very minimum, one might start from a faux clean slate, beginning a new discussion separate from the original instigator - something which takes no time at all and enables a bit of a psychological restart. That seems strictly better, if only slightly, than encouraging trolling.

Comment author: inconvenient 17 March 2017 11:39:28AM *  0 points

The problem is that some EAs would have the amount of life in the universe reduced to zero permanently. (And don't downvote this unless you personally know this to be false - it is unfortunately true)

If not, then it is a necessary example, plain and simple.

But it is not necessary - as you can see elsewhere in this thread, I raised an issue without providing an example at all.

"An issue"? Austen was referring to problems where an organization affiliates with particular organizations that cause terror risk, which you don't seem to have discussed anywhere. For this particular issue, FRI is an illustrative and irreplaceable example, although perhaps you could suggest an alternative way of raising this concern?

Comment author: kbog  (EA Profile) 17 March 2017 05:17:11PM *  -1 points

The problem is that some EAs would have the amount of life in the universe reduced to zero permanently. (And don't downvote this unless you personally know this to be false - it is unfortunately true)

It's a spurious standard. You seem to be drawing a line between mass termination of life and permanent mass termination of life just to make sure that FRI falls on the wrong side of the line. It seems like either could support 'terrorism'. Animal liberationists actually do have a track record of engaging in various acts of violence and disruption. The fact that their interests aren't as comprehensive as some NUs' doesn't change this.

"An issue"? Austen was referring to problems where an organization affiliates with particular organizations that cause terror risk, which you don't seem to have discussed anywhere.

I'm not sure why the fact that my comment didn't discuss terrorism implies that it fails to be a good example of raising a point without an example.

For this particular issue, FRI is an illustrative and irreplaceable example, although perhaps you could suggest an alternative way of raising this concern?

""Not causing harm" should be one of the EA values?" Though it probably falls perfectly well under commitment to others anyway.

Comment author: inconvenient 11 March 2017 10:43:22PM 1 point

If it was the case that FRI was accurately characterized here, then do we know of other EA orgs that would promote mass termination of life? If not, then it is a necessary example, plain and simple.

Comment author: kbog  (EA Profile) 17 March 2017 07:55:01AM *  2 points

If it was the case that FRI was accurately characterized here, then do we know of other EA orgs that would promote mass termination of life?

Sure. MFA, ACE, and other animal charities plan to drastically reduce or even entirely eliminate the population of farm animals. And poverty reduction charities drastically reduce the number of wild animals.

If not, then it is a necessary example, plain and simple.

But it is not necessary - as you can see elsewhere in this thread, I raised an issue without providing an example at all.

Comment author: concerned_ 15 March 2017 02:57:25AM *  4 points

However, atheists don't believe in any divine laws such as the sin of killing, and are thus not bound by any rules.

I think your gripe is with consequentialism, not atheism per se. And don't forget that there are plenty of theists who do horrible things, often in the name of their religion.

I think that the Future of Humanity Institute should add negative utilitarian atheism to their list of existential risks.

The X-Risks Institute, which is run by /u/philosophytorres, specializes in agential risks, and mentions NU as one such risk. I don't know whether FHI has ever worked on agential risks.

It just means that many EAs use EA as a means to promote atheism/atheists.

It is evident that the majority of EAs are atheist/irreligious, but I am not aware of any EA organizations actively promoting atheism or opposing theism. Who uses EA as a "means to promote atheism"?

Coincidentally, the closest example I can recall is Phil Torres's work on religious eschatological fanaticism as a possible agential x-risk.

Comment author: kbog  (EA Profile) 17 March 2017 07:41:05AM 0 points

Roman Yampolskiy's shortlist of potential agents who could bring about an end to the world (https://arxiv.org/ftp/arxiv/papers/1605/1605.02817.pdf) also includes Military, Government, Corporations, Villains, Black Hats, Doomsday Cults, Depressed, Psychopaths, Criminals, AI Risk Deniers, and AI Safety Researchers.

Comment author: inconvenient 15 March 2017 07:21:19AM -2 points

Exactly. Despite the upvotes, Soeren's argument is ill-founded. It seems really important in situations like this that people vote on what they believe to be true based on reason and evidence, not based on uninformed guesses and motivated reasoning.

Comment author: kbog  (EA Profile) 17 March 2017 07:27:48AM *  2 points

Soeren didn't give an argument. He wrote a single sentence pointing out that the parent comment was giving FRI an unfair and unnecessary treatment. I don't see what's "ill-founded" about that.

It seems really important in situations like this that people vote on what they believe to be true based on reason and evidence, not based on uninformed guesses and motivated reasoning.

Why is it more important now than in normal discourse? If someone decides to be deliberately obtuse and disrespectful, isn't that the best time to revert to tribalism and ignore what they have to say?

Comment author: concerned_ 15 March 2017 03:18:26AM 3 points

Many antinatalists who are unaffiliated with EA have similar beliefs (e.g., David Benatar, although I'm not sure whether he's even a consequentialist at all).

Comment author: kbog  (EA Profile) 17 March 2017 07:24:00AM *  0 points

Benatar is a nonconsequentialist. At least, the antinatalist argument he gives is nonconsequentialist - grounded in rules of consent.

Not sure why that matters though. It just underscores a long tradition of nonconsequentialists who have ideas which are similar to negative utilitarianism. Austen's restriction of the question to NU just excludes obviously relevant examples such as VHEMT.

Comment author: Julia_Wise 09 March 2017 02:23:45PM 7 points

We recognize that there are major areas of disagreement between people who are committed to the core ideas of EA, and we don't want emphasis on "unity" to sound like "you have to agree with the majority on specific topics, or you should leave."

Comment author: kbog  (EA Profile) 09 March 2017 02:43:16PM *  2 points

Ah, I see. Thanks for the response. I agree 100%.

I was framing it in more, well, tribalistic terms, almost to the opposite effect. Basically, if you're an EA trying to achieve your goals and have to deal with problems from outsiders, then regardless of whether we agree, I'm "on your team" so to speak.

Comment author: kbog  (EA Profile) 08 March 2017 07:07:23AM *  2 points

I like this, but I think the collaborative spirit should be augmented by remembering the value of unity and solidarity, which is rather different from mere collaboration and cooperation. Curious why it didn't get included.
