Comment author: Telofy  (EA Profile) 20 January 2017 11:14:58AM 0 points [-]

Agreed wrt. honesty. (I’m from Germany.)

The idea that weirdness is costly, though, is one I’ve often heard and have adopted myself, e.g., by asking friends how I can dress less weirdly and things like that. There’s also the typical progression, which I only heard challenged last year: you should first talk with people about poverty alleviation, and only once they understand basics like cost-effectiveness, triage, expected value, impartiality, etc., can you gradually lower your guard and start mentioning other animals and AIs.

Maybe Kathy doesn’t even contradict that, since the instances of weirdness that are beneficial may be a tiny fraction of all the weirdness that surrounds us, and finding out which tiny fraction it is (as well as employing it) will require that we first dial back every weirdness except one candidate weirdness at a time. I should just read that book.

Comment author: Kathy 20 January 2017 02:09:01PM *  0 points [-]

I agree that most people will not understand the strangest ideas until they understand the basic ones. Ensuring they understand the foundation is a good practice.

I definitely agree that the instances of weirdness that are beneficial are only a tiny fraction of the weirdness that is present.

Regarding weirdness:

There are effective and ineffective ways to be weird.

There are several apparently contradictory guidelines in art: "use design principles", "break the conventions", and "make sure everything looks intentional".

The effective ways manage all three guidelines.

Examples: Picasso, Björk, Lady Gaga

One of the major and most observable differences between these three artists and many weird people is that the behavior of the artists can be interpreted as communication about something specific, meaningful, and valuable. Art is a language. Everything strange we do speaks about us. If you haven't studied art, it might be rather hard to interpret the above three artists. The language of art is sometimes completely opaque to non-artists, and those who interpret art often find a variety of different meanings rather than a consistent one. (I guess that's one reason why they don't call it science.) In Picasso, I interpret an exploration of order and chaos. In Björk, I interpret an exploration of the strangeness of nature, the familiarity and necessity of nature, and the contradiction between the two. In Lady Gaga, I interpret an edgy exploration of identity.

These artists have the skill to say something of meaning as they follow principles and break conventions in a way that looks intentional. That is why art is a different experience from, say, looking at an odd-shaped mud splatter on the sidewalk, and why it can be a lot more special.

Ineffective weirdness is too similar to the odd-shaped mud splatter. There needs to be better communication. To interpret meaning, we need to see that combination of unbroken principles and broken conventions arranged in an intentional-looking pattern.

Comment author: DavidNash 20 January 2017 10:24:04AM 5 points [-]

This may be a community-based thing, but I haven't seen anyone advocating for lying in the UK, and I haven't heard of it much online either, apart from one person's experience in California.

I agree with all the examples you have and think everyone should learn more about honest persuasion, but I'm not sure the myths to be busted belong to the EA community itself rather than to some people's perception of the community.

Comment author: Kathy 20 January 2017 02:03:57PM 0 points [-]

I updated my post to mention some specific examples of the problems I've been seeing. Thank you, David.


3 Ways To Promote Causes Honestly and Effectively

You do not need to be dishonest to be effective at promotion, persuasion, or marketing. I've been seeing discussions about dishonest promotion recently, after what happened with Intentional Insights. I'm contributing the evidence I have that shows why we need to challenge the idea that dishonesty is needed. I also... Read More
Comment author: Kathy 02 November 2016 04:34:08AM *  3 points [-]

It would protect the movement to have a norm that organizations must supply good evidence of effectiveness to the group and only if the group accepts this evidence should they claim to be an effective altruism organization.

I think some similar norm should also extend to individual people who want to publish articles about what effective altruism is. Obviously, this cannot be required of critics, but we can easily demand it from our allies. I'm not sure what we should expect individual people to do before they go out and write articles about effective altruism on Huffington Post or whatever, but expecting something seems necessary.

To prevent startups from being utterly ostracized by this before they've got enough data / done enough experiments to show effectiveness, maybe they could be encouraged to use a different term that includes EA but modifies it in a clear way like "aspiring effective altruism organization".

Comment author: Elizabeth 30 October 2016 06:10:30PM 11 points [-]

One thing to consider is that too much charity for Gleb is actively harmful for people with ASDs in the community.

If I am at a party hosted by a trusted friend and know they've only invited people they trust, and someone hurts my feelings, I'm likely to ascribe it to a misunderstanding and talk it out with them. If I'm at a party where lots of people have been jerks to me before, and someone hurts my feelings, I'm likely to assume this person is a jerk too and withdraw.

By saying "I'm updating" and then making the same mistakes again, Gleb is devaluing the words. He is teaching people that it's not worth correcting others, because they won't change. This is most harmful to the people who most need direct feedback and the longest lead time to incorporate it.

Comment author: Kathy 02 November 2016 04:20:45AM 3 points [-]

Wow. More excellent arguments. More updates on my side. You're on fire. I almost never meet people who can change my mind this much. I would like to add you as a friend.

Comment author: Gregory_Lewis 30 October 2016 08:05:08AM 7 points [-]

Hello Kathy,

I have read your replies on various comment threads on this post. If you'll forgive the summary, your view is that Tsipursky's behaviour may arise from some non-malicious shortcomings he has, and that, with some help, these can be mitigated, thus leading InIn to behave better and do more good. In medicalese, I'm uncertain of the diagnosis, strongly doubt the efficacy of the proposed management plan, and anticipate a bleak prognosis. As I recommend generally, I think your time and laudable energy are better spent elsewhere.

A lot of the subsequent discussion has looked at whether Tsipursky's behaviour is malicious or not. I'd guess in large part it is not: deep incompetence, combined with being self-serving and biased towards one's own org succeeding, probably explains most of it. Regrettably, Tsipursky's response to this post (e.g. trumped-up accusations against Jeff and Michelle, pre-emptive threats if his replies are downvoted, veiled hints at 'wouldn't it be bad if someone in my position started railing against EA', etc.) seems to fit well with malice.

Yet this is fairly irrelevant. Tsipursky is multiply incompetent: at creating good content, at generating interest in his org (i.e. almost all of its social media reach is illusory), at understanding the appropriate ambit for promotional efforts, at not making misleading statements, and at changing bad behaviour. I am confident that any EA I know in a similar position would not have performed as badly. I highly doubt this can all be traced back to a single easy-to-fix flaw. Furthermore, I understand multiple people approached Tsipursky multiple times about these issues; the post documents problems occurring over a number of months. The outside view is not favourable to yet further efforts.

In any case, InIn's trajectory in the EA community is probably fairly set at this point. As I write this, InIn is banned from the FB group, CEA has officially disavowed it, InIn seems to have lost donors and prospective donations from EAs, and my barometer of 'EA public opinion' is that almost all EAs who know of InIn and Tsipursky have very adverse attitudes towards both. Given the understandable reticence of EAs towards corporate action like this, one can anticipate these decisions have considerable inertia. A nigh-Damascene conversion of Tsipursky and InIn would be required for these things to begin to move favourably to InIn again.

In light of all this, attempting to 'reform InIn' now seems almost as ill-starred as trying to reform a mismanaged version of homeopaths without borders: so thorough a transformation is required that it is surely better to start afresh. The opportunity cost is also substantial, as there are other better-performing EA outreach orgs (i.e. all of them), which promise far greater returns on the margin for basically any return one might be interested in. Please help them out instead.

Comment author: Kathy 30 October 2016 04:01:55PM 3 points [-]

I'm not completely sure what's going on with Gleb, but I feel a great deal of concern for people with Asperger's, and I think it made me overly sympathetic in this case. Thank you for this.

Comment author: ClaireZabel 28 October 2016 07:21:37AM 19 points [-]

I don't think incompetent and malicious are the only two options (I wouldn't bet on either as the primary driver of Gleb's behavior), and I don't think they're mutually exclusive or binary.

Also, the main job of the EA community is not to assess Gleb maximally accurately at all costs. Regardless of his motives, he seems less impactful and more destructive than the average EA, and he improves less per unit feedback than the average EA. Improving Gleb is low on tractability, low on neglectedness, and low on importance. Spending more of our resources on him unfairly privileges him and betrays the world and forsakes the good we can do in it.

Views my own, not my employer's.

Comment author: Kathy 30 October 2016 03:39:59PM 5 points [-]

That was a truly excellent argument. Thank you.

Comment author: Elizabeth 29 October 2016 06:04:44PM 6 points [-]

I have a bunch of different unorganized thoughts on this.

One, the absolute number is obviously the incorrect thing to use. Ratio is an improvement, but I feel it loses a lot of information. "Better wrong than vague" is a valuable community norm, and how people respond to criticism and new information is more important than whether they were initially correct. It also matters how public and formal the statement was: an article published in a mainstream publication is different from spitballing on tumblr.

I'm unsure what you mean by "ban". There is no governing body or defined EA group. There are people clustering around particular things. I think banning him from the FB group should be based on the expected quality of his contribution to the FB group, incorporating information from his writing elsewhere. Whether people give him money should depend on their judgement about how well the money will be used. Whether he attends or speaks at EAG should be based on his expected contribution. None of these are independent, but they can have different answers.

I don't think any hard and fast rule would work, even if there was a body to choose and enforce it, because anything can be gamed.

What I want is for people to feel free to make mistakes, and other people to feel free to express concerns, and for proportionate responses to occur if the concerns aren't addressed. I think the immune system is exactly the right metaphor. If a foreign particle enters your body, a lot of different immune molecules inspect it. Most will pass it by. Maybe one or two notice a concern. They attach to it and alert other immune molecules that they should maybe be concerned. This may go nowhere, or it may cause a cascading reaction targeting the specific foreign particle. If a lot of foreign particles show up, you may get an organ-wide reaction (runny nose) or a whole-body one (fever). The system coordinates without a coordinator.

Every time an individual talked to Gleb privately (which I'm told happened a lot), that was the first round of the immune response. When people complained publicly about specific things in specific posts here, on lesswrong, or on FB, that was the next step. I view the massive facebook thread and public letter as system-wide responses, necessary only because he did not adjust his behavior after the smaller steps. (Yes, he said he would, and yes, small things changed in the moment, but he kept making the same mistakes.) Even now, I don't think you should be "banned" from helping him, if you're making an informed choice. You're an individual and you get to decide where your energy goes.

I do want to see changes in our immune system going forward. There is something of a halo effect around the big organizations, and I would like to see them criticized more often, and be more responsive to that criticism. Ben Hoffman's series on GiveWell is exactly the kind of thing we need more of. I'd also like to see us be less rigorous in evaluating very new organizations, because it discourages people from trying new things. I've been guilty of this- I was pretty hard on Charity Science originally, and I still don't think their fundraising was particularly effective, but they grew into Charity Entrepreneurship, which looks incredible.

I don't think the consequences of Gleb's actions should wait until there is a formal rule and he has had sufficient time to shoot himself in the foot, for a lot of reasons. One, I don't think a formal rule and enforcement is possible. Two, I think the information he has been receiving for over a year should have been sufficient to produce acceptable behavior, so the chances he actually improves are quite small. Three, I think he is doing harm now, and I want to reduce that as quickly as possible.

I realize the lack of hard and fast rule is harder for some people than for others, e.g. people on the autism spectrum. That's sad and unfair and I wish it weren't true. But as a community we're objectively very welcoming to people on the spectrum, far more so than most, and in this particular case I think the costs of being more accommodating would outweigh the benefits.

Comment author: Kathy 30 October 2016 02:52:13AM *  2 points [-]

I'm glad we agree that the absolute number of mistakes is obviously an incorrect thing to use. :) I like your addition of "better wrong than vague" (though I'm not sure exactly how you would implement it as part of an assessment beyond "if they're always vague, be suspicious", which doesn't seem actionable).

Considering how people respond to criticism is important for at least two reasons. If you can communicate to the person, and they can change, this is far less frustrating and far less risky. A person you cannot figure out how to communicate with, or who does not know how to change the particular flaw, will not be able to reduce frustration or risk fast enough. People are going to lose their patience or total up the cost-benefit ratio and decide that it's too likely to be a net negative. This is totally understandable and totally reasonable.

I think the reason we don't seem to have the exact same thoughts on that is because of my main goal in life, understanding how people work. This has included tasks like challenging myself to figure out how to communicate with people when that is very hard, and challenging myself to figure out how to change things about myself even when that is very hard. By practicing on challenging communication tasks, and learning more about how human minds may work through my self-experiments, I have improved both my ability to communicate and also my ability to understand the nature of conflicts between people and other people-related problems.

I think a lot of people reading these comments do feel bad for Gleb or do acknowledge that some potential will be lost if EA rejects InIn despite the high risk that their reputation problems may result in a net negative impact.

Perhaps the real crux of our apparent disagreement is something more like differing levels of determination / ability to communicate about problems and persuade people like Gleb to make all the specific necessary changes.

The way some appear to be seeing this is: "The community is fed up with InIn. Therefore, let's take the opportunity to oust them.".

The way I appear to be seeing this is: "The community is fed up with InIn. Therefore, let's take the opportunity to persuade InIn to believe they need to do enough 2-way communication to understand how others think about reputation and promotion.".

Part of this is because I think Gleb's ignorance about reputation and marketing is so deep that he didn't see a need to spend a significant amount of time learning about them. Perhaps he is/was unaware of how much there is for him to learn. If someone could just convince him that there is a lot he needs to learn, he would be likely to make decisions comparable to: taking a break from promotion while he learns, granting someone knowledgeable veto power over all promotion efforts that aren't good enough, or hiring an expert and following all their advice.

(You presented a lot more worthwhile thoughts in your comment and I wish I could reply intelligibly to them all, but unfortunately, I don't have the time to do all of these thoughts justice right now.)

Comment author: Elizabeth 28 October 2016 04:34:37PM *  12 points [-]

I don't care if it is intentionally a con or not. Given that cons exist, the EA community needs an immune system that will reject them. The immune system has to respond to behavior, not intentions, because behavior is all we can see, and because good intentions are not protection from the effects of behavior.

I no longer believe things Gleb says. In the Facebook thread he made numerous statements that turned out to be fundamentally misleading. Maybe he wasn't intentionally lying; I don't know, I'm not psychic. But the immune system needs to reject people when the things they say turn out to be consistently misleading and a certain number of attempts to correct fail.

I don't think everyone needs to draw the line in the same place, I approve of people helping others after some people have given up on them as a category, even if I think it's not going to work in this case. But before you invest, I encourage you to write out what would make you give up. It can't be "he admits he's a scam artist", because scam artists won't do that, and because that may not be the problem. What amount of work, lack of improvement from him, and negative effects from his work and interactions would convince you helping was no longer worth your time?

Comment author: Kathy 29 October 2016 11:25:55AM *  2 points [-]

These are some really strong arguments, Elizabeth. This has a good chance to change my mind. I don't know whether I agree or disagree with you yet because I prefer to sleep on it when I might update about something important (certain processing tasks happen during sleep). I do know that you have made me think. It looks like the crux of a disagreement, if we have one, would be between one or both of the first two arguments vs. the third argument:

1.) EA needs a set of rules which cannot be gamed by con artists.

2.) EA needs a set of rules which prevent us from being seen as affiliated with con artists.


3.) Let's not ban people and organizations who have good intentions.

A possible compromise between people on different sides would be:

Previously, there had been no rule about this. (Correct me if I'm wrong about this!) Therefore, we cannot say InIn had broken any rule. Let's make a rule to limit dishonesty and misleading mistakes to a certain number in a certain time period / number of promotional pieces / volunteers / whatever. *

If InIn breaks the new rule after it is made, then we'll both agree they should be banned.

If you think they should be banned right now, whether there was an existing rule or not, please tell me why.

* Specifying a time period or whatever would prevent discrimination against the oldest, most prolific, or largest organizations simply because they made a greater total number of mistakes due to having a greater volume of output.

The ratio between mistakes and output seems really important to me. Thirty mistakes in ninety articles is really egregious because that's a third. Three mistakes in three hundred articles is only 1%, which is about as close to perfection as one can expect humans to get.

Comparing 1 / 3 vs. 1 / 100 is comparing apples to oranges.

I'm not sure what the best limit is, but I hope you can see why I think this is an important factor. Maybe this was obvious to everyone who may read this comment. If so, I apologize for over-explaining!
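The ratio comparison above can be sketched in a few lines. (The numbers are the illustrative ones from this comment, not real data about any organization, and the function name is mine.)

```python
def mistake_rate(mistakes: int, articles: int) -> float:
    """Fraction of published articles containing a mistake."""
    return mistakes / articles

# Illustrative numbers from the comment above:
small_org = mistake_rate(30, 90)   # 30 mistakes in 90 articles, about a third
large_org = mistake_rate(3, 300)   # 3 mistakes in 300 articles, i.e. 1%

# Ranking by absolute counts (30 vs. 3) and ranking by rate disagree,
# which is the point of normalizing by volume of output.
assert small_org > large_org
```

Any rule built on a raw mistake count would penalize the larger, older org here; normalizing by output reverses the verdict.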

Comment author: Kathy 28 October 2016 06:18:06AM *  3 points [-]

My stance is currently that Gleb most likely has a learning disorder (perhaps he is on the spectrum) and is also ignorant about marketing, resulting in a low skill level with promotion. Some people here are claiming things that make it seem like they believe Gleb intends to do something bad, like a con. It's also possible Gleb was following, to the letter, marketing instructions written by people less scrupulous than most EAs (perhaps because he thought following such instructions was necessary to be effective). I wouldn't be surprised if Gleb perceived what he was doing as "white lies" (thinking there would be a strong net positive impact). It's also possible that some of these were ordinary mistakes (though probably not all of them, because there are a lot).

I'd like to discover why people believe things like "this is a con" and see whether I change my mind or not. Anyone up for that?
