
Fluttershy

182 karma · Joined Nov 2014

Bio

LW is a rape cult

If you wouldn't bail out a bank then would you bail out EA?

Comments (47)

Yeah, this sort of thing is basically always in danger of becoming politics all the way down. One good heuristic is to keep in mind the goals you hope to satisfy by engaging: if you want to figure out whether to accept an article's central claim, is the answer to your question actually decisive for your decision? If you're trying to sway people, are you careful to keep it plausibly deniable that you're doing anything other than truthseeking? If you're engaging because you think it's impactful to do so, are you treating your engagement as a tool rather than an end?

As a guy who used to be female (I was AMAB), Kelly's post rings true to me. Fully endorsed. It would be particularly interesting to hear about AFAB transmen's experiences with respect to this.

The change in how you're treated is much more noticeable when making progress in the direction of becoming more guyish; I'm not sure if this is because this change tends to happen quickly (testosterone is powerful and quick) or because of the offsetting stigma re: people making transition progress towards being female. I could also see this stigma accounting for some of the positive effect that AMAB people feel on detransitioning, though it's mostly possible to disentangle the effect of the misogyny from that of the transmisogyny if you have good social sense.

In anticipation of being harassed (based on past experience with this community), I'll leave it at that. I'm not going to respond to any BS or bother with politics.

I like the article. The first table makes it viscerally clear that the value of information (VOI) from better estimating eta (or from finding a better model of utility as a function of consumption on the margins) could be high, if you're relatively more interested in global poverty-focused EA than in other causes within EA.
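To make concrete why better estimates of eta matter so much, here's a minimal sketch (my own illustration, not from the article) using the standard isoelastic utility model u(c) = c^(1-eta)/(1-eta); the consumption figures are hypothetical:

    # Under isoelastic utility, marginal utility at consumption level c is
    # proportional to c**(-eta), so the assumed eta drives the estimated
    # multiplier on transfers to people with very low consumption.

    def marginal_utility(c, eta):
        return c ** (-eta)

    def consumption_multiplier(c_rich, c_poor, eta):
        """How much more a marginal dollar is worth at c_poor than at c_rich."""
        return marginal_utility(c_poor, eta) / marginal_utility(c_rich, eta)

    # Hypothetical consumption levels in USD/year, chosen only for illustration.
    for eta in (1.0, 1.5, 2.0):
        print(eta, consumption_multiplier(30_000, 500, eta))
    # eta = 1.0 -> 60x, eta = 1.5 -> ~465x, eta = 2.0 -> 3600x

The spread across plausible values of eta is exactly what makes the VOI potentially large.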

I'm not aware of any better figures you could have used for GWWC/TLYCS/REG's leverage, and I'm not sure many of us take leverage estimates for meta-organizations literally, even relative to how literally we take normal EA cost-effectiveness estimates. I agree that combining the leverage estimates with the consumption multipliers would be the correct way to estimate impact if you managed to get accurate estimates of both that weren't interdependent, though!

To the extent that GWWC/TLYCS/REG count the donations they "caused"/influenced as leverage on their own fundraising, everyone whose donations were "caused"/influenced by GWWC/TLYCS/REG (at least according to GWWC/TLYCS/REG) should count their own donations as having proportionally less than 1.0x leverage. (Alternatively, GWWC/TLYCS/REG could claim less leverage, and thereby allow those they claim to have influenced to claim a greater fraction of the impact of their own donations.) This prevents double-counting of impact, and gives us a more accurate estimate of how much good donations to various organizations cause, which in turn lets us figure out how we can do the most good.
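Here's a minimal sketch of that accounting point, with hypothetical numbers of my own (not drawn from any organization's actual figures):

    # Hypothetical numbers, for illustration only.
    influenced_impact = 100.0  # actual good done by the donations the meta-org influenced

    # Double-counting: the meta-org counts all of it as leverage, AND the
    # influenced donors also count their own donations at 1.0x.
    double_counted_total = 1.0 * influenced_impact + 1.0 * influenced_impact
    print(double_counted_total)  # 200.0: twice the impact that actually occurred

    # Consistent accounting: the credit shares sum to 1, so if the meta-org
    # claims some leverage, influenced donors claim proportionally less than 1.0x.
    meta_share = 0.4
    donor_share = 1.0 - meta_share
    assert meta_share * influenced_impact + donor_share * influenced_impact == influenced_impact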

I strongly agree with both of the comments you've written in this thread so far, but the last paragraph here seems especially important. Regarding this bit, though:

I might be a bit of an outlier

This factor may push in the opposite direction from the one you'd expect, given the context. Specifically, if people who might have gotten into EA in the past ended up avoiding it because they were exposed to this example, then you'd expect the example to be more popular than it would be if everyone who once stood a reasonable chance of becoming an EA (or even a hardcore EA) had stuck around to give you their opinion on whether you should use that example. So, keep doing what you're doing! I like your approach.

The objection that it's ableist to promote funding for trachoma surgeries rather than guide dogs doesn't have to do with how many QALYs we'd save by providing someone with a guide dog or a trachoma surgery. Roughly, this objection is about how much respect we're showing to disabled people. I'm not sure how many of the people who have said that this example is ableist are utilitarians, but we can actually make a good case that using the example causes negative consequences precisely because it's ableist. (It's also possible that using the example as it's typically used causes negative consequences by affecting how intellectually rigorous EA is, but that's another topic.) A few points that might be used to support this argument:

  • On average, people get a lot of value out of having self-esteem; often, having more self-esteem on the margins enables them to do value-producing things they wouldn't have done otherwise (flow-through effects!). Sometimes, it just makes them a bit happier (probably a much smaller effect in utilitarian terms).
  • Roughly, raising or lowering the group-wise esteem of a group has an effect on the self-esteem of some of the group's members.
  • Refraining from lowering a group's esteem isn't very costly if doing so involves nothing more than using a different tone. (There are of course situations where making a certain claim will change a group's esteem by a large amount if one tone is used and by a lesser amount if another is used, even though the group's esteem moves in the same direction in either case.)
  • Decreases in a group's ability to do value-producing things or be happy, caused by someone lowering their esteem by acting in an ableist manner, do not cause others to experience a similarly sized boost to their ability to be happy or do value-producing things. (I.e. the truth value of claims that "status games are zero-sum" has little effect on the extent to which it's true that lowering a group's esteem through e.g. ableist remarks has negative utilitarian consequences.)

I've generally found it hard to make this sort of observation publicly in EA-inhabited spaces, since I typically get interpreted as primarily trying to say something political, rather than primarily trying to point out that certain actions have certain consequences. It's legitimately hard to figure out what the ideal utilitarian combination of tone and example would be for this case, but it's possible to iterate towards better combinations of the two as you have time to try different things according to your own best judgement, or just ask a critic what the most hurtful parts of an example are.

It just seems like the simplest explanation of your observed data is 'the community at large likes the funds, and my personal geographical locus of friends is weird'.

And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you're very different to me, we are both selecting the people we speak to in person such that they will tend to think much like us, and like each other; we live inside one of the many bubbles. So the fact that everyone I've spoken to in person about the EA funds thinks they're a good idea is particularly weak evidence that the community thinks they are good, and so is your opposing observation.

I'd say this is correct. The EA Forum itself has such a selection effect, though it's weaker than the ones in either of our friend groups. One idea would be to do a survey, as Peter suggests, though this makes me slightly uneasy, since a survey may weight the opinions of people who have considered the problem less, or who feel less strongly about it, equally with everyone else's. A relevant factor here is that it sometimes takes people a fair bit of reading or reflection to develop a sense for why integrity is particularly valuable from a consequentialist's perspective, and then to connect that with how letting EA Funds continue shows people that projects reported on and marketed with relatively low-integrity methods can succeed despite (or even because of) this.

I'd also agree that, at the time of Will's post, it would have been incorrect to say:

The community is probably net-neutral to net-negative on the EA funds, but Will's post introducing them is the 4th most upvoted post of all time

But what we likely care about is whether or not the community is positive on EA Funds at the moment, which may or may not be different from whether it was positive on EA Funds in the past.

My view is further that the community's response to this sort of thing is partly a function of how debates on honesty and integrity have been resolved in the past; if lack of integrity in EA has been an issue in the past, the sort of people who care about integrity are less likely to stick around in EA, such that the remaining population of EAs will have fewer people who care about integrity, which itself affects how the average EA feels about future incidents relating to integrity (such as this one), and so on. So, on some level I'm positing that the public response to EA Funds would be more negative if we hadn't filtered certain people out of EA by having an integrity problem in the first place.

A more detailed discussion of the considerations for and against concluding that EA Funds had been well received would have been helpful if the added detail had been spent examining people's concerns re: conflicts of interest and centralization of power, i.e. concerns which were commonly expressed but not resolved.

I'm concerned by the framing that you updated towards it being correct for EA Funds to persist past the three-month trial period. If there was support to start with, and you mostly didn't gather more support later than one would have expected, then your prior that EA Funds is well received should become stronger, but you shouldn't update in favor of it being well received based on the more recent data. This may sound like a nitpick, but it's actually a crucially important consideration if you've framed things as though you'll continue with the project only if you update towards having more public support than before.
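To illustrate the updating point with a toy model of my own (hypothetical numbers, not drawn from any actual survey): with a Beta prior over the fraction of the community that supports EA Funds, new data that matches what the prior already predicted narrows the distribution without shifting its mean upward; your confidence grows, but there's no update "in favor".

    def beta_mean_var(a, b):
        mean = a / (a + b)
        var = (a * b) / ((a + b) ** 2 * (a + b + 1))
        return mean, var

    # Prior: roughly 60% support expected.
    prior = (6, 4)
    # New observations: 12 supportive, 8 not, i.e. the same 60% rate already expected.
    posterior = (prior[0] + 12, prior[1] + 8)

    print(beta_mean_var(*prior))      # mean 0.60, larger variance
    print(beta_mean_var(*posterior))  # mean 0.60, smaller variance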

I also dislike that you emphasize that some people "expressed confusion at your endorsement of EA Funds". Some people may have felt that way, but your choice of wording both downplays the seriousness of some people's disagreements with EA Funds, while also implying that critics are in need of figuring something out that others have already settled (which itself socially implies they're less competent than others who aren't confused). This is a part of what some of us mean when we talk about a tax on criticism in EA.

In one view, the concept post had 43 upvotes, the launch post had 28, and this post currently has 14. I don't think this is problematic in itself, since this could just be an indication of hype dying down over time, rather than of support being retracted.

Part of what I'm tracking when I say that the EA community isn't supportive of EA Funds is that I've spoken to several people in person who have said as much; I think I covered all of the reasons they brought up in my post, but one recurring theme throughout those conversations was that writing up criticism of EA was tiring and unrewarding, and that they often didn't have the energy to do so (though one offered to proofread anything I wrote in that vein). So, a large part of my reason for feeling that there isn't a great deal of community support for EA Funds has to do with the ways in which I'd expect the data on how much support there actually is to be filtered. For example:

  • the method in which Kerry presented his survey data made it look like there was more support than there was
  • the fact that Kerry presented the data in this way suggests it's relatively more likely that Kerry will do so again in the future if given the chance
  • social desirability bias should also make it look like there's more support than there is
  • the fact that it's socially encouraged to praise projects on the EA Forum and that criticism is judged more harshly than praise should make it look like there's more support than there is. Contrast this norm with the one at LW, and notice how it affected how long it took us to get rid of Gleb.
  • we have a social norm of wording criticism in a very mild manner, which might make it seem like critics are less serious than they are.

It also doesn't help that most of the core objections people have brought up have been acknowledged but not addressed. But really, given all of those filters on data relating to how well-supported the EA Funds are, and the fact that the survey data doesn't show anything useful either way, I'm not comfortable with accepting the claim that EA Funds has been particularly well-received.

I appreciate that the post has been improved a couple times since the criticisms below were written.

A few of you were diligent enough to beat me to saying much of this, but:

Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

This seems false, based on these replies. The author of this post replied to the majority of those comments, which means he's aware that many people have in fact raised concerns about things other than communication and the EA Funds website. To his credit, a paragraph acknowledging that these concerns had been raised elsewhere was added to the pages for the EA community fund and the animal welfare fund. Unfortunately, though, these concerns were never mentioned in this post. There are a number of people who would like to hear about any progress made since the discussion on this thread regarding the problems of 1) how to address conflicts of interest, given how many of the fund managers are tied into e.g. OPP, and 2) how centralizing funding allocation (rather than making people who aren't OPP staff into Fund Managers) narrows the amount of new information the EA Funds' Fund Managers encounter about what effective opportunities exist.

I've spoken with a couple EAs in person who have mentioned that making the claim that "EA Funds are likely to be at least as good as OPP’s last dollar" is harmful. In this post, it's certainly worded in a way that implies very strong belief, which, given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds instead of whatever else they might donate to counterfactually. This is the same sort of effect people get from looking at this sort of advertising, but more subtle, since it's less obvious on a gut level that this slogan half-implies that the reader is morally bad for not donating. Using this slogan could be net negative even without considering that it might make EAs feel bad about themselves, if, say, individual EAs had information about giving opportunities that were more effective than EA Funds, but donated to EA Funds anyways out of a sense of pressure caused by the "at least as good as OPP" slogan.

More immediately, I have negative feelings about how this post used the Net Promoter Score (NPS) to evaluate the reception of EA Funds. First, it mentions that EA Funds "received an NPS of +56 (which is generally considered excellent according to the NPS Wikipedia page)." But the first sentence of the Wikipedia page for NPS, which the author linked to, states that NPS is "a management tool that can be used to gauge the loyalty of a firm's customer relationships" (emphasis mine). EA Funds isn't a firm. My view is that implicitly assuming that, as a nonprofit (or something socially equivalent), your score on a metric intended to judge how satisfied a for-profit company's customers are can be compared side by side with the scores received by for-profit firms (and then neglecting to mention that you've made this assumption) betrays a lack of intent to honestly inform EAs.
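For readers unfamiliar with the metric, here's a short sketch of how NPS is standardly computed (the definition is standard; the sample ratings below are hypothetical):

    # Respondents answer "how likely are you to recommend X?" on a 0-10 scale.
    # 9-10 are "promoters", 0-6 are "detractors", 7-8 are passives, and
    # NPS = (% promoters) - (% detractors), giving a score between -100 and +100.

    def net_promoter_score(ratings):
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return round(100 * (promoters - detractors) / len(ratings))

    # Hypothetical responses, for illustration only.
    print(net_promoter_score([10, 9, 9, 8, 7, 10, 9, 3, 10, 9]))  # 60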

This post has other problems, too; it uses the NPS scoring system to analyze donors' and others' responses to the question:

How likely is it that your donation to EA Funds will do more good in expectation than where you would have donated otherwise?

The NPS scoring system was never intended to evaluate responses to this question, so perhaps it's insignificant that an NPS of 0 on this question just misses the mark of what is "felt to be good" in industry. Worse, the post mentions that this result

could merely represent healthy skepticism of a new project or it could indicate that donors are enthusiastic about features other than the impact of donations to EA Funds.

It seems to me that including only positive (or strongly positive-sounding) interpretations of this result is incorrect and misleadingly optimistic. I'd agree that it's a good idea not to "take NPS too seriously", though in this case I wouldn't say that the benefit of using NPS in the first place outweighed the cost of the resulting incorrect suggestion that there was a respectable amount of quantitative support for the conclusions drawn in this post.

I'm disappointed that I was able to point out so many things I wish the author had done better in this document. If there had only been a couple errors, it would have been plausibly deniable that anything fishy was going on here. But with as many errors as I've pointed out, which all point in the direction of making EA Funds look better than it is, things don't look good. Things don't look good regarding how well this project has been received, but that's not the larger problem here. The larger problem is that things don't look good because this post decreases how much I am willing to trust communications made on the behalf of EA funds in particular, and communications made by CEA staff more generally.

Writing this made me cry, a little. It's late, and I should have gone to bed hours ago, but instead, here I am being filled with sad determination and horror that it feels like I can't trust anyone I haven't personally vetted to communicate honestly with me. In Effective Altruism, honesty used to mean something, consequentialism used to come with integrity, and we used to be able to work together to do the most good we could.

Some days, I like to quietly smile to myself and wonder if we might be able to take that back.

This is a problem, both for the reasons you give:

Why do I think intuition jousting is bad? Because it doesn’t achieve anything, it erodes community relations and it makes people much less inclined to share their views, which in turn reduces the quality of future discussions and the collective pursuit of knowledge. And frankly, it's rude to do and unpleasant to receive.

and through this mechanism, which you correctly point out:

The implication is nearly always that the target of the joust has the ‘wrong’ intuitions.

The above two considerations combine extremely poorly with the following:

I’ve noticed IJing happens much more among effective altruists than academic philosophers.

Another consequence of this tendency, when it emerges, is that communicating a felt sense of something becomes much harder and less rewarding when there's some level of social expectation that arguments from intuition will be attacked. Note that the felt senses of experts often contain information that isn't otherwise available, when those experts work in fields with short feedback loops. (This is true more broadly: norms of rudeness, verbal domination, microaggressions, and nitpicking impede communication in general, and your more specific concept of IJ does occur disproportionately often in EA.)

Note also that the development of a social expectation whereby people believe, on a gut level, that they'll receive about as much criticism, verbal aggression, and so on regardless of how correct or useful their statements are may be especially harmful (see especially the second paragraph of p. 2).
