Comment author: DonyChristie 06 August 2017 04:10:35PM 2 points

It's always great to see a new cause analysis! It would be cool to see some math on the utility/importance of this (as you may know, Guesstimate is a great tool for this :D), and I echo the desire to see tractable steps and an analysis of what effects the intervention(s) could have. I'm also curious where you think most of the disutility from electronic ballots comes from: the candidate that is elected, or the perception that the election was hacked? If the former, I'd guess that many people here, including me, apply an intuitive penalty to cause areas/interventions that involve partisan politics, due to its zero-sum, man-vs.-man, mindkilling nature. It seems likely that politics will naturally try to butt its head into our business and absorb more attention than it deserves.
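
As a minimal sketch of what a Guesstimate-style Monte Carlo estimate could look like, here is one in Python. Every input below is a placeholder distribution chosen purely for illustration, not a researched figure; the point is the structure (probability × harm × tractability), not the numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo samples

# Placeholder: chance that insecure electronic ballots decide a major
# election in a given cycle (log-uniform across wide uncertainty).
p_flip = 10 ** rng.uniform(-4, -1, N)

# Placeholder: social cost, in dollars, if an election is flipped or
# widely believed to have been hacked.
harm = 10 ** rng.uniform(9, 12, N)

# Placeholder: fraction of that harm an intervention (e.g. paper audit
# trails) would avert, times its chance of success.
tractability = rng.uniform(0.01, 0.3, N)

expected_value = p_flip * harm * tractability
print(f"mean EV:   ${expected_value.mean():,.0f}")
print(f"median EV: ${np.median(expected_value):,.0f}")
```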

Here is a site dedicated to this. Try contacting them if you desire to research the subject further!

Comment author: Austen_Forrester 06 August 2017 07:13:05AM 1 point

I think it may be useful to differentiate between EA and regular morals. I would put donating blood in the latter category. For instance, treating your family well isn't high-impact on the margin, but people should still do it out of basic morals; see what I mean? I don't think that practicing EA somehow excuses someone from practicing good general morals. I think EA should come in addition to general morality, not replace it.

Comment author: DonyChristie 06 August 2017 03:16:42PM * 0 points

I'm curious what exactly you mean by regular morals. I try (and fail) not to separate my life into distinct magisteria, but rather to see everything as making tradeoffs with everything else, and to point my life down the path that has the most global impact. I see EA as regular morality, extended with better tools that attempt to engage the full scope of the world. The kind of demarcation between incommensurable moral domains you appear to be arguing for seems like it can let a person defend any status quo in their altruism, rather than critically examining whether their actions are doing the most good they can. In the case of blood, perhaps you're talking about the fuzzies budget instead of the utilons? Or perhaps your position is something like: 'Regular morals are the set of actions that, if upheld by a majority of people, will keep society from collapsing, and due to anthropics I should not defect from my commitment to prevent society from collapsing. Blood donation is one of these actions', or this post?

Comment author: Sanjay 10 July 2017 04:13:49PM * 10 points

Thanks for this post. I used to work for a strategy consultancy that specialised in this sort of area, so I'm quite interested in this.

You state your value-add comes from (a) reducing fees to zero, (b) tax-efficiency (e.g. donations of appreciated securities), and (c) higher-performing investment strategies.

I'm interested to know whether Antigravity Investments is really needed when EAs have the option of using the existing investment advice that's out there. In particular:

-- (a) You also ask if people are willing to fund you. Does this mean that an alternative model for you would be to charge your clients and then allow your funders to donate to high-impact charities? If so, doesn't that mean that the zero-cost element of your model isn't actually a big advantage after all? (Not meaning to be critical; I just don't know enough about your funding model.)

-- (b) Is it fair to say that donating appreciated securities is a well-known technique in tax-efficient giving, and anyone getting any kind of half-decent advice would get this anyway?

-- (c) (I understand you provide no guarantees.) How many years of past performance do you have? Would you agree that, in general, if a fund manager of any non-passive sort (smart beta or outright active) has a strong first few years, it's much more likely to be luck than an underlying advantage? (A toy simulation below illustrates the base rate.)
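
To make the luck-versus-skill point in (c) concrete, here is a toy simulation in Python (a sketch under an assumed 6% tracking error, not a claim about any real fund): even when every manager has zero true edge, a sizeable fraction beats the benchmark several years running by chance alone.

```python
import numpy as np

rng = np.random.default_rng(1)
n_managers, n_years = 10_000, 3

# Every simulated manager has zero true edge: annual excess return over
# the benchmark is pure noise with an assumed 6% tracking error.
excess = rng.normal(loc=0.0, scale=0.06, size=(n_managers, n_years))

# Count managers who "look strong", i.e. beat the benchmark every year.
beat_every_year = (excess > 0).all(axis=1)
print(f"{beat_every_year.mean():.1%} of skill-free managers beat the "
      f"benchmark {n_years} years in a row")  # ~12.5% = (1/2)**3
```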

Sorry if the questions sound sceptical; I'm conscious that I don't understand all the details of how you work.

Comment author: DonyChristie 11 July 2017 08:12:32PM 1 point

"I'm interested to know whether Antigravity Investments is really needed when EAs have the option of using the existing investment advice that's out there."

Trivial inconveniences.

Comment author: KevinWatkinson 02 July 2017 10:13:54PM 0 points

I have some general doubts about the principle of mainstreaming. It seems to me that it utilises dominant ideologies 'strategically', thus reifying them. In the animal movement this is very much the case with One Step for Animals, Pro-Veg and The Vegan Strategist. All of these groups and organisations have adopted a mainstream 'pragmatic' approach which concurrently undermines social justice.

This is of course one approach, but I do not believe there is sufficient evidence to pursue it, or that it stands to reason. It would be far better for these mainstream groups to avoid social justice issues completely (which would include rights and veganism, i.e. the cessation of exploitation) rather than essentially undermining them to privilege their own approach.

For example, I think it is deeply unfortunate that Matt Ball recently said we need to utilise the idea that people hate vegans in order to appeal to non-vegans and 'help' animals. I would question the ethics of this, and also whether it is in fact true that 'people' hate vegans, or whether forming and perpetuating this idea would be a good thing anyway. This is one example, but in my view mainstreaming sets off a cascade against people who are trying to do good pro-intersectional social justice work. I also believe that groups involved in 'mainstreaming' have not sufficiently evaluated their approach, so it seems unworthwhile to support it, even whilst many EAs seem to do just that.

Comment author: DonyChristie 02 July 2017 11:22:45PM 1 point

What empirical tests could we run to measure which approach is more effective? What predictions can be made in advance of those tests?

In response to 2017 LEAN Statement
Comment author: DonyChristie 28 June 2017 10:31:36PM 2 points

"Local group seeding and activation."

Thanks for this link. I may have raised this in a private channel, but I want to take the time to point out that, based on anecdotal experience, I think LEAN & CEA shouldn't be seeding groups without taking the time to make sure the groups are mentored, managed by motivated individual(s), and grown to sustainability. I found my local group last year and it was essentially dilapidated. I felt a responsibility to run it, but was mostly unsuccessful in establishing contact to obtain help managing it until sometime in 2017. I predict these kinds of problems will diminish at least somewhat now that LEAN & Rethink have full-time staff, the Mentoring program, more group calls, etc. :)

Comment author: SteveGreidinger 22 June 2017 05:07:20AM 0 points

This is a good start on some of the issues, but it needs to be bulked up with information directly from neuroscientists.

For instance, some very senior people in the Stanford neuroscience community think that an essential difference between animals and people may be that astrocytes, the "helper cells," are so very different. Among many other things, astrocytes help to create and destroy synapses.

Neuroscientists also routinely run experiments on mice, and a few have very sophisticated answers to ethical questions about what they do.

There are a lot of topics in EA ethics that would benefit from a grounding in neuroscience and neuroethics. Both fields also contain many EA opportunities themselves. If money is being put down, then it's time to add some expert scientific opinion.

Comment author: DonyChristie 22 June 2017 04:36:57PM 0 points

Be the change you want to see in the world. You are clearly motivated and knowledgeable enough about the matter to try emailing some neuroscientists. :)

Comment author: DonyChristie 17 June 2017 01:24:45AM 2 points

How can we further coordinate our action on this? Reward people who tweet?

Comment author: DonyChristie 02 June 2017 07:51:46PM 0 points

"or we could add a long-term future fund that focused on areas other than AI-Safety."

+1 for differentiation. A Fund specifically for AI Safety would probably see demand; I'd donate. Other Funds for other specific GCRs could be created too, if there's enough demand.

A mild consideration against: there may be funding opportunities in the Long-Term Future area that benefit both AI Safety and the other GCRs, such as the cross-disciplinary Global Catastrophic Risk Institute, and splitting the fund might make these harder to fund.

Comment author: Cornelius 11 May 2017 12:43:46AM * 9 points

This is a great post and I thank you for taking the time to write it up.

I ran an EA club at my university and held a workshop where we covered all the philosophical objections to Effective Altruism. All the objections were fairly straightforward to address except one, which, in addressing it, seemed to upend how many participants viewed EA, given the image of EA they had formed so far. That objection is: Effective Altruism is not that effective.

There is a lot to be said for this objection, and I highly recommend anyone who calls themselves an EA to read up on it here and here. None of the other objections to EA seem to me to have nearly as much moral urgency as this one. If we call this thing we do EA and it is not E, I see a moral problem. If you donate to deworming charities and have never heard of the 'worm wars', I also recommend taking a look at this, which is a good-faith attempt to track the entire 'deworming-isn't-that-effective' controversy.

Disclaimer: I donate to SCI and rank it near the top of my priorities, currently just below AMF. I even donate to less certain charities like ACE's recommendations. So I certainly don't mean to dissuade anyone from donating with this comment. Reasoning under uncertainty is a thing, and you can see these two recent posts if you want insight into how an EA might go about it effectively.

The take-home from this, though, is the same as the three main points raised by the OP. If it had been made clear to us from the get-go what mechanisms are at play in determining how much impact an individual has with their donation to an EA-recommended charity, then this 'EA is not E' objection would have been as innocuous as the rest. Instead, after we addressed this concern and set straight how things actually work (I still don't completely understand it; it's complicated), participants felt their initial exposure to EA (such as through the guide-dog example and other over-simplified EA infographics that strongly imply it's as simple and obvious as "donation = lives saved") contained false advertising. The words 'slight disillusionment' come to mind, given that these were all dedicated EAs going into the workshop.

So yes, I bow down to the almighty points bestowed by OP:

  • many of us were overstating the point that money goes further in poor countries

  • many of us don’t do enough fact checking, especially before making public claims

  • many of us should communicate uncertainty better

Btw, the 'scope insensitive' link does not seem to work, I'm afraid. (Update: thanks for fixing!)

Comment author: DonyChristie 11 May 2017 02:44:34PM 1 point

Overstatement seems to be selected for when 1) evaluators like GiveWell are deferred to rather than questioned, and 2) you want to market that Faithful Deference to others.

Comment author: DonyChristie 11 May 2017 12:02:49AM 2 points

"One last thing that was emphasised multiple times was the value of asking. Most of us still don't do it enough. If one doesn't understand something, if we don't know where to start, if we want people to support us, there's one simple trick: just ask. We tend to feel like there is some social cost to asking, but simply asking can provide extremely valuable support and only has downsides if we do it too often. So far, most of us don't do it enough."

Great point! Relatedly, for the next EAG I want to have a small set of questions to ask people (last time it was "What do you think of insect suffering?"). I would highly recommend others come in with thought-provoking questions they want answered. Brainstorming:

  1. What is one trait/idea/action you wish the EA community had/understood/took? (or substitute "people" for "EA community")
  2. What is one book you would recommend and why?
  3. *Takes out $5* Where should I donate this?
  4. What do you need help with in your life?
  5. What are you most uncertain about?
  6. What would make you change your cause prioritization?
  7. Who do you most look up to?

Warning: at EAG Berkeley at least, not everyone you meet will be able to answer all of these questions, because many are newer to the movement; adapt accordingly.

Perhaps print off some business cards (even crude, informal ones) with whatever information you wish to impart to people, contact details or otherwise.

Try to write a summary of each day before you go to sleep (or just of the whole trip). I made a post on Facebook and tagged some of the people I met (you will have to friend them first for the tag to work).

Any other ideas? :-)
