
SydMartin
36 karma · Joined Feb 2015

Comments (14)

I would love to know if you decided to go forward with Movember this year. I would also be really torn about this. On one hand, making a visual statement about being philanthropic helps destigmatize talking about where you donate and why. This is good if your long-term goal is to create a large culture shift towards EA values. The downside is that it takes a long time and a lot of effort, and may mean people donate sub-optimally. I think you are right that this is very unlikely to pull any money from highly effective causes, since this strategy is aimed at signalling to groups and people outside of EA.

However, inside EA and as an organizer it can feel hypocritical; I totally understand and agree. It can feel like not 'walking the walk', so to speak. I think there are a number of possible solutions. One would be participating in Movember but fundraising for an EA charity: you get the benefits mentioned above but the money goes to an EA cause. Another would be to find the most effective men's health organization you can and donate to it instead. In both cases I think the key would be to talk a lot about why.

I appreciate your advice not to bring divergent opinions "to public discussion [to] see what everyone thought" - your reception may in fact convince me this is a good course of action in the future. I fear this may have fallen into the trap of making EA unwelcoming by coming across as presumptuous or even hostile. I would like to think this wasn't your intention and that you were hoping to have a conversation, so I'll address some of your concerns and assumptions.

A point of clarity: I'm not a strict consequentialist, so there are many things here where we may disagree, because I see inherent value in an action taken in good faith in order to test a theory or support a potentially larger good. From your comment, I think you would disagree with this.

I would also like to make clear, these donations do not make up a majority of my donation dollars; these instances are exceptions to the rule.

> If it's a reciprocation mechanism then I can imagine that working but only if you have an approximately 1:1 or better ratio of marginal donations reciprocated

I usually find that I get a 1:1 ratio or better. I also know that a $10 donation to AMF is doing much more good, to the point that it will likely more than offset any potential harm my donation did, so there was a net benefit.

> Or you could just donate based on solid principles and not deny your own ability to make decisions?

I consider operant conditioning a solid principle and a good way to work towards making increasingly good decisions. I don't deny my autonomy, but I am also aware that my mind often needs encouragement to function more effectively.

> you'd be much more efficient donating $0.01

You are correct. Usually I default to a few dollars because I assume the credit processing system won't accept anything less. Next time I will try this.

> Egoism and contractualism are two different approaches to morality, and neither is part of effective altruism.

I am stating that I feel a sense of contract myself in many of these instances. I also think it is OK to donate out of personal passion - say, to the arts. And I completely agree: neither of these is effective altruism. I would even say they only tangentially count as altruism. As I stated, I list these here because we traditionally define this allocation of resources as "donations to nonprofits", which are tax exempt. I also list them here because I believe there is space in EA to engage with people who want to continue to donate to things like arts programs or public radio, by encouraging them to realize that these donations aren't part of their charity work, and that if they want to make a difference they should also be giving to effective causes.

Re: cause neutrality. What I am addressing here is the tendency of donors to become emotionally attached to a cause they donate to. This means that if there is no more room for funding, or if a more effective cause arises, they are less likely to shift their donations. My theory is that I will be more willing to shift the larger portion of my giving, if necessary, since I am not completely invested in any single cause area.

Unexpected outcomes: yes, I agree that there is also a possibility that the charity is ineffective and even potentially doing harm. I would refer you to my comment about not being a strict consequentialist, as well as my statement about giving all charities a reputation check. This is just a personal judgment call.

> I don't see how this can be the case unless you artificially and arbitrarily emplace such a mechanism into your decisions.

I do! :) I budget very carefully, and my allocated charity donations get deducted monthly on a set schedule. The money that I give to secondary 'ineffective' causes comes out of my 'fun money' budget, so I know very clearly what other things I may have spent that money on.

Hopefully that lends some clarity to my post. I think it is important to be open and welcoming to people who are deeply interested in philanthropy but may not yet have heard of or bought into many of the ideas of EA. To this end I think it is a valuable exercise to think about the intersections of EA thinking and traditional giving, such as the things described above. Creating a bridge between more common giving habits and really effective giving is a useful way of helping people level up their thinking when it comes to philanthropy.

Yes, that is an accurate summation. I don't think many of these causes are the 'most' effective; I believe them to be potentially effective but lacking measurements. We don't talk very often about other potentially effective benefits of donating outside of core EA charities. I think there is benefit in discussing and exploring exceptions or other donation strategies people may have.

This sounds like a really great idea. I think as a community we tend to make loads of predictions; it seems likely we do this a lot more than other demographics. We do this for fun, as thought experiments, and often as a key area of focus, such as x-risk. It seems like a good idea to track our individual abilities at this sort of predicting, for many reasons: identifying who is particularly good at it, improving our own accuracy, and so on. It does make me concerned that we could become hyper-focused on predictions and neglect current causes; getting too caught up in planning and looking forward and forgetting to actually do the thing we say we prioritize.

I also wonder how well near-future prediction ability translates to far-future predictions. In order to test how well you are able to predict things, you predict near-future events or changes. You increase your accuracy at these and assume it translates to the far future. Lots of people then make decisions based on your far-future predictions, given your track record as an accurate predictor. Perhaps, however, your model of forecasting is actually wildly inaccurate when it comes to long-term predictions. I'm not sure how we could account for this. Thoughts?
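One concrete way to track individual prediction ability, as discussed above, is to score a personal log of probabilistic forecasts. The sketch below uses the Brier score (mean squared error between forecast probabilities and outcomes); the prediction data is purely illustrative, not drawn from any real forecasting record.

```python
# Sketch: scoring a personal prediction log with the Brier score.
# Lower is better; always guessing 50% scores 0.25.
# The entries below are invented examples, not real forecasts.

def brier_score(predictions):
    """Mean squared error between forecast probabilities and binary outcomes."""
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# Each entry: (forecast probability the event happens, 1 if it happened else 0)
near_term = [(0.9, 1), (0.7, 1), (0.3, 0), (0.6, 0)]
print(round(brier_score(near_term), 4))
```

One could compute this separately for near-term and long-term predictions as they resolve; the worry in the paragraph above is precisely that a good score on the first bucket may not transfer to the second.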

Thank you for sharing! I think A/B testing this seems like a really good idea - even just testing the way you are phrasing the question, as opposed to testing other questions. A static online survey would definitely cut down on the time investment, since it will collect all the data for you; however, it will also cut into your response rate (more clicks = more work).
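To make the A/B comparison concrete, here is a minimal sketch of how one might check whether two survey phrasings produce genuinely different response rates, using a standard two-proportion z test. The counts are hypothetical placeholders, not real survey data.

```python
# Sketch: comparing response rates between two survey variants (A/B test).
# All counts below are hypothetical.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z statistic for the difference between two response rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Variant A: 120 of 400 recipients respond; variant B: 90 of 400.
z = two_proportion_z(120, 400, 90, 400)
print(round(z, 2))  # |z| > 1.96 suggests a real difference at the 5% level
```

With samples this small per variant, only fairly large phrasing effects would be detectable, which is one argument for running the test across several EA Global events rather than a single one.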

It seems like continuing to gather this information over the course of all the EA Global meetings and the launch of Will's book would be valuable, given the likelihood of continued rapid growth. Past that, it would be more useful to focus on using the data as opposed to collecting it.

Right now, across both surveys, it looks like LW and word of mouth are the best recruiting tools. Continuing to enact marketing strategies across those two platforms seems like the best course of action, meaning we should probably be encouraging new members to tell their friends and invite them to meetings. It also seems like a great idea to keep messaging some people to reinforce the welcoming feeling, and because people who have been referred by word of mouth will likely appreciate and need one-on-one interaction to stay interested and motivated.

I'm inclined to agree with Holden for a number of reasons, first and foremost being that this isn't really what GiveWell does. They are very good at what they do, which is evaluating existing charities; while I see the tie-in with knowing how a good charity is run, that is a far cry from making organizational changes. Which is the other reason I agree with him: doing this is hard. Like really, really, substantially hard.

However, I think 'hard' and 'not worth doing' are very different things. I also agree that CEA or EA Ventures would be more appropriate venues to incubate a testable idea around this. When I spoke with Kerry at CEA about this, he agreed that while it is very exciting and something that would be great, no one yet seems to have a good answer for how to go about doing it. I think the next step is asking lots and lots of people how they would go about doing this, and what the very first change would, should, or could be.

Thank you for posting this! I think these are really great counter-arguments, as well as a succinct description of many criticisms of EA. As we rapidly gain press, we are also gaining critiques, and almost all of the ones I've seen follow exactly this rationale.

What I keep waiting for someone to say, but haven't seen quite yet, is the response 'That's OK. You don't have to work on X to still identify as an Effective Altruist.' For example, I know quite a few people in EA who care deeply about existential risk but aren't particularly moved by the global poor. I know people who have said they like the 'effective' more than the 'altruist' because they really, really like optimizing things. I myself am not motivated by AI risk at all - I simply don't find it interesting or engaging, and I'm not entirely convinced it's a good way to spend my energy - but I still have great respect for those who do, and I still strongly identify as an EA.

I wonder if this desire for all-or-nothing acceptance of base principles may be because many people within EA strive to wipe out cognitive dissonance, which my argument sort of feels like. However, I worry that in our avoidance of cognitive dissonance we fall into the trap of dualistic thinking. I found myself wandering back to Effective Altruism is a Question, the last paragraph being the most pertinent:

> I can imagine a hypothetical future in which I don’t agree with the set of people that identify with the 'EA movement'. But I can’t imagine a future where I’m not trying to figure out how to answer the question 'How can I do the most good?'

In other words, we as a community should be more open to the idea that not everyone has to buy into every idea or tenet within EA. We do all have to agree that we are trying to do the most good. Indeed, it is the continual debate about how to go about doing the most good that ultimately makes us most effective.

Which is why I love your response, which I would probably summarize as 'It is good that EA is flawed because we have things to strive for, come help us make it better!'

I think that governmental orgs would be a great way to do this!

I do worry that doing this as an individual has its drawbacks. I think getting to this sort of position requires ingraining yourself in a dysfunctional culture, and I worry about getting sucked into the dysfunction, or succumbing to the multiple pressures and constraints within such an organization. An independent organization could remain more objective and focused on effectiveness.

I agree that trying to branch out to, or add, an EA cause at a current charity is unlikely to succeed. My experience is that you are right: there are lots of services and advice out there for charities that want to improve implementation or strategy (mainly focused on cultivating donors).

I would be interested to know if there are many resources out there aimed at getting organizations to collect more data and assess their success rates more scientifically. It is also my understanding that the advice out there for creating more effective implementation is usually based around just getting better numbers, not whether those numbers actually make change in a given cause area.

What would you suggest is a good place to start for small scale experimentation? I think you are right, just doing some of this is the best way to gauge tractability.

Yes this is indeed my hypothesis; thank you for stating it so plainly. I think you've summed up my initial idea quite well.

My assumption is that trying to improve a very effective charity is potentially a lot of work and research, while trying to improve an ineffective but well-funded charity, even a little, could require less intense research and have a very large payoff - particularly given that there are very few highly effective charities but LOTS of semi-effective or ineffective ones, meaning there is a larger opportunity. Even if only 10% of non-EA charities agree to improve their programs by 1%, I believe the potential overall decrease in suffering is greater.
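The back-of-envelope comparison above can be made explicit. Every figure in this sketch is an invented placeholder chosen only to illustrate the shape of the argument (pool size, budgets, and improvement rates are not estimates from the post):

```python
# Sketch: total gain from small improvements at many charities vs. one.
# All numbers below are invented placeholders, not real estimates.

many_charities = 1000        # hypothetical pool of non-EA charities
uptake = 0.10                # fraction that agree to improve (10%)
budget_each = 1_000_000      # assumed annual program budget per charity ($)
improvement = 0.01           # each improves effectiveness by 1%

participating = round(many_charities * uptake)               # 100 charities
gain_from_many = participating * budget_each * improvement   # dollars of extra impact

# For comparison: squeezing a further 5% improvement out of one
# already-effective charity with the same assumed budget.
gain_from_one = budget_each * 0.05

print(f"many small improvements: ${gain_from_many:,.0f}")
print(f"one deep improvement:    ${gain_from_one:,.0f}")
```

Under these made-up numbers the "many shallow improvements" strategy wins by roughly an order of magnitude; the real question, which the sketch cannot answer, is whether uptake and per-charity improvement rates anywhere near these are achievable.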

There is also the added benefit of signalling. Having an organization that is working to improve effectiveness (despite funding problems [see Telofy's comment]) shows organizations that donors and community members really care about measuring and improving outcomes. It plants the idea that effectiveness and an EA framework are valuable and worth considering, even if they don't use the service initially.

My thought here is this is another way (possibly a very fast one) to spread EA values through the charity world. Creating a shift in nonprofit culture to value similar things seems very beneficial.
