Comment author: Gleb_T  (EA Profile) 22 May 2017 02:22:18AM 0 points [-]

Great piece!

Comment author: Gleb_T  (EA Profile) 01 May 2017 04:07:25AM -5 points [-]

Excellent!

Comment author: Elizabeth 16 January 2017 05:53:25PM 5 points [-]

A list of ethical and practical concerns the EA movement has with Intentional Insights: http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/ .

Gleb Tsipursky has also repeatedly said he will leave the EA movement.

Comment author: Gleb_T  (EA Profile) 17 January 2017 04:03:59PM 0 points [-]

Gleb Tsipursky has also repeatedly said he will leave the EA movement.

This is simply false. See what I actually said here.

Comment author: JBeshir 13 January 2017 02:55:58PM *  2 points [-]

I at least would say that I care about doing the most good that I can, but I am also mindful of the fact that I run on corrupted hardware, which makes ends-justify-means arguments unreliable, per EY's classic argument (http://lesswrong.com/lw/uv/ends_dont_justify_means_among_humans/).

""The end does not justify the means" is just consequentialist reasoning at one meta-level up. If a human starts thinking on the object level that the end justifies the means, this has awful consequences given our untrustworthy brains; therefore a human shouldn't think this way. But it is all still ultimately consequentialism. It's just reflective consequentialism, for beings who know that their moment-by-moment decisions are made by untrusted hardware."

This doesn't mean I think there's never a circumstance where you need to breach a deontological rule; I agree with EY's remark that "I think the universe is sufficiently unkind that we can justly be forced to consider situations of this sort." This is why, under Sarah's definition of absolutely binding promises, I would simply never make such a promise. I might say that I would try my best, and that to the best of my knowledge nothing would prevent me from doing a thing, or something like that. But I think the universe can be amazingly inconvenient, and I don't want to pretend to principles I would not actually live up to in extremis.

The theory I tend to operate under I think of as "biased naive consequentialism": I do naive consequentialism, estimating out as far as I can easily see, then introduce a heavy bias against things which are likely to have untracked bad consequences, e.g. lying and theft. (I am kind of amused that all the adjectives in the description are negative ones.) But under a sufficiently massive difference, sure, I'd lie to an axe murderer. This means there is a "price", somewhere. This is probably most similar to the concept of "way utilitarianism", which I think is far better than either act or rule utilitarianism, and which is discussed as a sort of steelman of Mohist ideas (https://plato.stanford.edu/entries/mohism/).
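The decision procedure described above could be sketched as a toy scoring rule. Everything here is my own illustrative assumption (the penalty size, the names, the tapering term), not anything from the original comment; it just shows the shape of "naive estimate minus a heavy bias, with a price that a massive difference can still pay":

```python
# Toy sketch of "biased naive consequentialism".
# All names and numbers are illustrative assumptions.

DECEPTION_PENALTY = 100.0  # heavy bias against acts likely to have untracked bad consequences


def biased_utility(naive_utility, is_deceptive, typicality=1.0):
    """Naive expected utility minus a bias penalty.

    typicality in [0, 1]: how central this act is to the category the
    bias was built for; the penalty tapers smoothly for atypical cases,
    avoiding sharp edges near category borders.
    """
    penalty = DECEPTION_PENALTY * typicality if is_deceptive else 0.0
    return naive_utility - penalty


# An ordinary lie with modest upside scores negative...
assert biased_utility(10.0, is_deceptive=True) < 0
# ...but a sufficiently massive difference still pays the "price".
assert biased_utility(1000.0, is_deceptive=True) > 0
```

The `typicality` parameter corresponds to the non-central-fallacy point: the bias weakens smoothly for examples very atypical of what it was built for.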

One of the things I take from the thinking around the non-central fallacy, aka the worst argument in the world (http://lesswrong.com/lw/e95/the_noncentral_fallacy_the_worst_argument_in_the/), is that one should smoothly reduce the strength of such biases for examples which are very atypical of the circumstances the bias was intended for, so as not to have weird sharp edges near category borders.

All this is to say that in weird extreme edge cases, under conditions of perfect knowledge, I think what people do is not important. It's okay to have a price. But in the central cases, in actual life, I think they should have either a very strong bias against deception and for viewing deceptive behaviour poorly, or an outright deontological prohibition if they can't reliably maintain that.

If I were to name one big problem, it's that in practice some people's price seems only able to be infinite or zero, or even negative: a lot of people seem to get tempted by cool "ends justify means" arguments which don't even look, prima facie, like they'd actually have positive utility. Trading discourse and growth for money in a nascent movement, for instance: even naive utilitarianism can track far enough out to see the problems there; you have to have an intuitive preference for deception to favour it.

I disagree with you in that I think infinite works fine almost always, so it wouldn't be a big problem if everyone had that; I'd be very happy if all the people whose price to cheat is around zero moved it to infinite. But I agree with you that infinite isn't actually the correct answer for an ideal unbiased reasoner, just not that this should affect how humans behave under the normal circumstances that make up the work of the EA movement.

The alarming part for me is that I think in general these debates do affect how people behave, because people erroneously jump from "a hypothetical scenario with a hypothetical perfect reasoner would not behave deontologically" to sketchiness in practice.

Comment author: Gleb_T  (EA Profile) 14 January 2017 07:17:22PM -3 points [-]

Let me first clarify that I see the goal of doing the most good as my end goal, and YMMV - no judgment on anyone who cares more about truth than doing good. This is just my value set.

Within that value set, using "insufficient" means to get to EA ends is just as bad as using "excessive" means. In this case, being "too honest" is just as bad as "not being honest enough." The correct course of action is to calibrate one's level of honesty to maximize long-term positive impact toward doing the most good.

Now, the above refers to the ideal-type scenario. IRL, different people are calibrated differently. Some tend too much toward exaggerating, some too much toward being humble and understating the case, and either way it's a mistake. So one should learn where one's bias lies, and push against it.

Comment author: Gleb_T  (EA Profile) 13 January 2017 12:23:26PM -4 points [-]

Sarah's post highlights some of the essential tensions at the heart of Effective Altruism.

Do we care about "doing the most good that we can" or "being as transparent and honest as we can"? These are two different value sets. They will sometimes overlap, and in other cases will not.

And please don't say that "we do the most good that we can by being as transparent and honest as we can," or that "being as transparent and honest as we can" is best in the long term. Just don't. You're simply lying to yourself and to everyone else if you say that. If you can't imagine a scenario where "doing the most good that we can" and "being as transparent and honest as we can" are opposed, you've just suffered a failure mode: flinching away from the truth.

So when push comes to shove, which one do we prioritize? When we have to throw the switch and have the trolley crush either "doing the most good" or "being as transparent and honest as we can," which do we choose?

For a toy example, say you are talking to your billionaire uncle on his deathbed and trying to convince him to leave money to AMF instead of his current favorite charity, the local art museum. You know he would respond better if you exaggerate the impact of AMF. Would you do so, whether lying by omission or in any other way, in order to get much more money for AMF, given that no one else would find out about this situation? What about if you know that other family members are standing in the wings and ready to use all sorts of lies to advocate for their favorite charities?

If you do not lie, that's fine, but don't pretend that you care about doing the most good, please. Just don't. You care about being as transparent and honest as possible over doing the most good.

If you do lie to your uncle, then you do care about doing the most good. However, you should consider at what price point you would not lie: at that point, we're just haggling.

The people quoted in Sarah's post, including myself, all highlight how doing the most good sometimes involves not being as transparent and honest as we can. Different people have different price points, that's all. We're all willing to bite the bullet and sometimes send that trolley over transparency and honesty, whether by questioning the value of public criticism like Ben, appealing to emotions like Rob, or using intuition as evidence like Jacy, for the sake of what we believe is the most good.

As a movement, EA has a big problem with believing that ends never justify the means. Yes, sometimes ends do justify the means, at least if we care about doing the most good. We can debate whether we are mistaken about the ends justifying the means in a given case, but using insufficient means to accomplish the ends is just as bad as using excessive means. If we are truly serious about doing as much good as possible, we should let our end goal be the North Star and work backward from there, rather than hobbling ourselves with preconceived notions of "intellectual rigor" at the cost of doing the most good.

Comment author: kbog  (EA Profile) 09 January 2017 07:23:31PM 0 points [-]

I mean a core as in a fixed point of interest. E.g. a forum, a blog, a website, a college club, etc. Something to seed the initiative that can stand on its own without having thousands of active members. You can't gather interested people without having something valuable to attract them.

In response to comment by kbog  (EA Profile) on Rational Politics Project
Comment author: Gleb_T  (EA Profile) 09 January 2017 09:51:35PM -1 points [-]

We have a number of collaborative venues, such as a Facebook group, blog, email lists, etc. for people who get involved.

Comment author: kbog  (EA Profile) 09 January 2017 04:26:32AM *  1 point [-]

I don't know if social movements ever start from concerted efforts like this. EA, for instance, started because one or two organizations and philosophers got a lot of interest from a few people. Other social movements start spontaneously, when people are triggered into protest and action by major events. It seems good to have an identifiable 'core' to any kind of movement, like the idea I had: "a formal or semi-formal structure to aggregate and compare evidence from both sides." If you leverage swarm intelligence, prediction markets, argument mapping, or more basic online mechanisms, then you can start to make something impressive that stands on its own. Though such a system would be harder to make successful if you tried to make it relevant to the broad population rather than just EAs. It's just one example.

In response to comment by kbog  (EA Profile) on Rational Politics Project
Comment author: Gleb_T  (EA Profile) 09 January 2017 04:55:57AM 0 points [-]

Yup, we're focusing on a core of people who are upset about lies and deceptions in the US election and the Brexit campaign, and aiming to provide them with means to address these deceptions in an effective manner. That's the goal!

Comment author: kbog  (EA Profile) 08 January 2017 07:16:05PM *  0 points [-]

I don't really get what it will do. Is it supposed to be a broad social movement? A new organization? Or just a name over a bunch of articles?

We thus anticipate that RAP will draw some heat from conservatives, and do not want to risk any backlash on the EA movement as a whole.

You're probably going to get more heat from liberals when you advocate being rational about conservatives.

In response to comment by kbog  (EA Profile) on Rational Politics Project
Comment author: Gleb_T  (EA Profile) 08 January 2017 09:04:31PM 0 points [-]

Broad social movement. We're aiming to focus on social media organizing at first, and then spread to local grassroots organizing later. There will be a lot of marketing and PR associated with it as well.

Comment author: PeterMcCluskey 08 January 2017 07:32:58PM 6 points [-]

You claim this is non-partisan, yet you make highly partisan claims, such as "conservatives have relied much more on lies" (you cite Trump's lies, but treating Trump as a conservative is objectionable to many conservatives).

Comment author: Gleb_T  (EA Profile) 08 January 2017 09:03:43PM -4 points [-]

Well, ok, are you really going to make this semantic argument with me? Trump is widely accepted by the Republican party as its leader. I'll be happy to agree on using the term "Republican" instead of "conservative" to address your concerns.
