[minor] In the sentence, "While more pilot testing is necessary in order to make definitive judgements on SHIC as a whole, we feel that we have gathered enough data to guide strategic changes to this exceedingly novel project." "exceedingly novel" seems like a substantial exaggeration to me. There have been EA student groups, and LEAN, before (as you know), as well as inter-school groups for many different causes.
Apologies, I had it in my head that ACE was a CEA project.
Note though that ACE was originally a part of 80k Hours, which was a part of CEA. The organizations now feel quite separate, at least to me.
Additionally, I am not paid by ACE or CEA. Being on the ACE Board is a volunteer position, as is this.
Generally, I don't feel constrained in my ability to criticize CEA, outside a desire to maintain collegial relations, though it seems plausible to me that I'm in an echo chamber too similar to CEA's to help as much as I could if I were more on the outside. Trying to do as much good as possible is the motivation for how I spend most of the hours in my day. I desperately want EA to succeed, and increasing the chances that CEA makes sound decisions seems like a moderately important piece of that. That's what's been driving my thinking on this so far, and I expect it'll continue to do so.
That all said (or rambled about) here's a preview of a criticism I intend to make that's not related to my role on the advisory board panel: I don't think it's appropriate to encourage students and other very young people to take the GWWC pledge, or to encourage student groups to proselytize about it. I think the analogy to marriage is helpful here; it wouldn't be right to encourage young people who don't know much about themselves or their future life situations to get married (especially if you didn't know them or their situation well yourself) and I likewise think GWWC should not encourage them to take the pledge.
Views totally my own and not my employer's (the Open Philanthropy Project).
I found the formatting of this post difficult to read. I would recommend making it neater and clearer.
I would prefer if the title of this post was something like "My 5 favorite EA posts of 2016". When I see "best" I expect a more objective and comprehensive ranking system (and think "best" is an irritatingly nonspecific and subjective word), so I think the current wording is misleading.
For EAs that don't know, it might be helpful to provide some information about the journal, such as the size and general characteristics of the readership, as well as information about writing for it, such as what sort of background is likely helpful and how long the papers would probably be. Also hopes and expectations for the special issue, if you have any.
It seems like most people who are risk-hungry enough to try to start a new GiveWell charity would also be risk-hungry enough to consider one or another alternative cause area. So for those readers, it would seem useful to also give a counterfactual estimate for funding Open Phil-suggested charities. If moving to a different cause can get you an extra order of magnitude of cost-effectiveness, then giving will be more effective than trying to start a GiveWell charity.
This gets very tricky very fast. In general, the difference in EV between people's first and second choice plan is likely to be small in situations with many options, if only because their first and second choice plans are likely to have many of the same qualities (depending on how different a plan has to be to be considered a different plan). Subtracting the most plausible (or something) counterfactual from almost anyone's impact makes it seem very small.
Nice idea, Julia. Thanks for doing this!
That was a truly excellent argument. Thank you.
Why do people keep betting against Carl Shulman???
No shame if you lose, so much glory if you win
Shlevy, I think I might actually agree with everything you said here with the exception of the characterization of Intentional Insights as a "con".
I can see the behavior on the outside very clearly. On the outside Gleb has said a list full of incorrect things.
On the inside, the picture is not so clear. What's going on inside his head?
If this is a con, what in the world does he want? He can't seem to make money off of this. Con artists tend to do very, very quick things with very, very little effort, hoping to gain some disproportionate reward. Gleb is doing the opposite. He has invested an enormous amount of time (not to mention a permanent Intentional Insights tattoo!) and (as far as I know) has been concerned about finances the whole time. He's not making a disproportionate amount of money off of this... and spreading rationality doesn't even look like one of those things a con artist could quickly do for a disproportionate reward... so I am confused.
If I thought Intentional Insights was a con, I'd be right with you trying to make that more obvious to everyone... but I launched my con detector and that test was negative.
Maybe you use a different con detector. Maybe, to you, it is irrelevant whether Gleb is intentionally malicious or merely incompetent. Perhaps you would use the word "con" either way just as people use the word "troll" either way.
For the same reasons that we should face the fact that there's a major problem with the inaccuracies Intentional Insights outputs, I think we ought to label the problem we're seeing with Intentional Insights as accurately as possible.
Whether Gleb is incompetent or malicious is really important to me. If Gleb is doing this because of a learning disorder, I would really like to see more mercy. According to Wikipedia's page on psychological trauma, there are a lot of things about this post which Gleb may be experiencing as traumatic events. For instance: humiliation, rejection, and major loss. (https://en.wikipedia.org/wiki/Psychological_trauma)
As some kind of weird hybrid between a bleeding heart and a shrewd person, I can't justify anything but minimizing the brutality of a traumatic event for someone with a learning disorder, no matter how destructive it is. At the same time, I agree that ousting destructive people is a necessity if they won't or can't change, but I think in the case of an incompetent person, there are a lot of ways in which the community has been too brutal. In the event of a malicious con, we've been too charitable, and I'm guilty of this as well. If Gleb really is a con artist, we should be removing him as fast as possible. I just don't see strong evidence that the problem he has is intentional, nor does it even seem to be clearly differentiated from terrible social skills and general ignorance about marketing.
Our response is too brutal for someone with a learning disorder or other form of incompetence, and it's too charitable for a con artist. In order to move forward, I think perhaps we ought to stop and resolve this disagreement.
Here's what's at stake: currently, I intend to advocate for an intervention*. If you convince me that he is a con artist, I will abandon this intent and instead do what you are doing. I'll help people see the con.
* (By intervention, I mean: encouraging everyone to tell Gleb we require him to shape up or ship out, and negotiating things like what we mean by "shape up" and how we would like him to minimize risk while he is improving. If he has a learning disorder, a bit of extra support could go a long way if the specific problems are identified so the support can target them accurately. I suspect that Gleb needs to see a professional for a learning disorder assessment, especially for Asperger's.)
I'm open to being convinced that Intentional Insights actually does qualify as some type of con or intends net negative destructive behavior. I don't see it, but I'd like to synchronize perspectives, whether I "win" or "lose" the disagreement.
I don't think incompetent and malicious are the only two options (I wouldn't bet on either as the primary driver of Gleb's behavior), and I don't think they're mutually exclusive or binary.
Also, the main job of the EA community is not to assess Gleb maximally accurately at all costs. Regardless of his motives, he seems less impactful and more destructive than the average EA, and he improves less per unit feedback than the average EA. Improving Gleb is low on tractability, low on neglectedness, and low on importance. Spending more of our resources on him unfairly privileges him and betrays the world and forsakes the good we can do in it.
Views my own, not my employer's.