Comment author: MichaelDello 14 March 2017 11:27:54AM 0 points

Thanks for sharing, I've saved the dates! I look forward to seeing how this model plays out. Do you have any thoughts on whether the UK/Europe community might feel 'left out'? Are there plans for other EAGx conferences in Europe?

Comment author: Gentzel 13 March 2017 10:37:50PM * 2 points

I think you may be misunderstanding what I mean by ethical reaction time, though I may change the post to reduce the confusion. Adding examples would also make the post a lot more valuable.

Basically, by ethical reaction time I just mean the ability to make ethical choices as circumstances unfold, rather than being caught unable to act, or acting in a bad way.

Here are a few examples, some hypothetical, some actual:

Policy:

  • One can imagine the Reach Every Mother and Child Act might have passed last year if a few congressmen had been more responsive in adjusting it to get past partisan objections (the left wing opposing funding for religious groups, the right wing opposing funding for contraception). That likely would have saved a few thousand lives, and possibly millions according to USAID. (http://effective-altruism.com/ea/pk/how_to_support_and_improve_the_reach_every_mother/) My model of the political constraints on the Reach Act may be wrong here, though.

  • Any regulation that starts occurring in response to a technology rapidly coming into existence: Uber executed its plans faster than it could be banned, which was probably good. We don’t necessarily want the same to be true for certain risks in AI and biology, which makes it important to quickly figure out the correct way to regulate such technologies.

  • Anything on the U.S. Federal Register that is going poorly: if you don’t notice a new rule come up that is relevant to your area of interest (easy to imagine with animal welfare and new tech) and respond within the comment period, your concerns aren’t going to inform the regulation. (If you submit relevant research and get ignored, you can sue the federal agency and win: this happened when the FDA initially failed to ban trans fat.) This is also the sort of situation that may require you to do research under a lot of time pressure.

Donor coordination:

  • If your organization is not prepared to accept donations or talk to donors, you will often lose gifts you’d otherwise receive. I think this is one of the reasons David Goldberg of Founders Pledge would carry a backpack containing everything needed for someone to legally commit some percentage of their venture cash-out value to effective charities.

Start-ups:

  • If you initially need partnerships or funding, and others are competing for the same partnerships or funding, then the OODA loop applies (though less so for funding, since there can be more sources).

Salary negotiations:

  • It makes a lot of sense to know ahead of time how you are going to negotiate, or to be very quick in thought. (http://haseebq.com/my-ten-rules-for-negotiating-a-job-offer/) Saying the wrong thing could cost you thousands of dollars, and if you would have donated that money to AMF, that’s likely costing lives.

Grants/research opportunities:

  • Grant opportunities are competitive, so the OODA loop applies. This is less true when the deadline is far away, however.

  • In 2015, several EA organizations had the opportunity to get free research from grad students at the University of Maryland’s School of Public Policy who were interested in effective altruism. To get the research, an organization would have had to submit a general, rough research proposal (under one page) enumerating some of the means by which a study or literature review would be undertaken to maintain rigor, as guidelines for the advisor monitoring the grad students. No EA organization was able to react within the month after solicitation, so other organizations got the free research for the grad student projects class instead. It does seem reasonable that EA orgs may have had better priorities, and that there is reason to be skeptical of the grad students’ work, but it would have been a good way to get a group of students going into policy more bought into EA, even if they didn’t produce work at a level we’d accept. This is also partly my fault, since I could have informed groups earlier, though not by much.

Handling the throughput vs. responsiveness trade off:

  • If you set up systems so that you can be reactive without draining as much of your attention, you can generally get more done. Dropping responsiveness and answering email once per day or less may make sense if you are a coder or researcher, but it doesn’t make sense if you are an information node between organizations that need to coordinate. Adopting simple algorithms, like writing down everything in my working memory before taking a call, has made me a lot more willing to take calls and has sped up my ability to get right back to work after an interruption (see the sketch below).
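A minimal sketch of that last habit, treating an incoming call like an interrupt whose handler first saves the state of the current task. Everything here (the WORKING_MEMORY list, the checkpoint context manager, the file name) is a hypothetical illustration, not anything from the original post:

```python
from contextlib import contextmanager

# Hypothetical scratchpad for whatever task is currently in flight.
WORKING_MEMORY = []

@contextmanager
def checkpoint(notes_file="working_memory.txt"):
    """Write down the current working state before an interruption,
    so the interrupted task can be resumed cheaply afterwards."""
    with open(notes_file, "a") as f:
        for note in WORKING_MEMORY:
            f.write(note + "\n")
    try:
        yield  # handle the call / interruption here
    finally:
        print(f"Notes saved to {notes_file}; resume from there.")

# Usage: jot down state, take the call, then pick the task back up.
WORKING_MEMORY.append("drafting section 3; next step: check USAID figures")
with checkpoint():
    pass  # the interruption itself
```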

Time sensitive opportunities:

  • If factory farming and/or malaria are going to be gone at some point in the next 20 years due to development and economics, then there won’t be the same opportunity to reduce suffering and save lives in the future that there is now. That being said, donations don’t require prompt reaction to opportunity the way policy openings in these areas do.

Comment author: MichaelDello 14 March 2017 11:25:59AM 2 points

For some of the examples, it seems unclear to me how they differ from just reacting quickly generally. In other words, what makes these examples of 'ethical' reactions and not just 'technical' reactions?

Comment author: MichaelDello 14 March 2017 11:18:38AM 0 points

Thanks for this John. I agree that even if you use some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co. don't consider this possibility when they argue for the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to X-risk, assuming those lives are all (or mostly) positive.

Comment author: Brian_Tomasik 10 March 2017 04:21:47AM * 5 points

Thanks for the post. I agree that those who embrace the asymmetry should be concerned about risks of future suffering.

I would guess that few EAs have a pure time preference for the short term. Rather, I suspect that most short-term-focused EAs are uncertain of the tractability of far-future work (due to long, complex, hard-to-predict causal chains), and some (such as a coalition within my own moral parliament) may be risk-averse. You're right that these considerations also apply to non-suffering-focused utilitarians.

You write: "It’s tempting to say that it implies that the expected value of a minuscule increase in existential risk to all sentient life is astronomical."

As you mention, there are complexities that need to be accounted for. For example, one should think about how catastrophic risks (almost all of which would not cause human extinction) would affect the trajectory of the far future.

It's much easier to get people behind not spreading astronomical amounts of suffering in the future than behind eliminating all current humans, so a more moderate approach is probably better. (Of course, it's also difficult to steer humanity's future trajectory in ways that ensure that suffering-averting measures are actually carried out.)

Comment author: MichaelDello 14 March 2017 11:15:39AM 2 points

Just to add to this, in my anecdotal experience, it seems like the most common argument amongst EAs for not focusing on X-risk or the far future is risk aversion.

Comment author: MichaelDello 27 February 2017 09:18:59AM 3 points

I have one concern about this which might reduce estimates of its impact. Perhaps I'm not really understanding it, and perhaps you can allay my concerns.

First, the idea that this is a good thing to do assumes you can be reasonably certain about which candidate or party is going to make the world a better place, which is pretty hard to do.

But if we grant that we did indeed pick the best candidate, there doesn't seem to be anything stopping the other side from doing the same thing. I wonder if reinforcing the norm of vote swapping just leads to a zero-sum game in which supporters of candidate A are vote swapping as much as supporters of candidate B. So on the margin, engaging in vote swapping seems obviously good, but at a system level, promoting vote swapping seems less obviously good.

Does this make any sense?
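A toy way to see the symmetry worry above, assuming (purely for illustration) that each swap converts one safe-state third-party vote into one swing-state vote for the swapper's preferred major candidate; the numbers and function name are invented:

```python
def net_swing_margin(swaps_for_a: int, swaps_for_b: int) -> int:
    """Net swing-state votes candidate A gains once both sides swap."""
    return swaps_for_a - swaps_for_b

# One-sided swapping shifts the swing-state margin...
print(net_swing_margin(10_000, 0))       # 10000
# ...but if the norm spreads and both sides swap equally, it cancels.
print(net_swing_margin(10_000, 10_000))  # 0
```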

Comment author: MichaelDello 19 February 2017 11:05:34PM 0 points

Thanks for writing this. One point you missed is that, once we develop the technology to easily move the orbits of asteroids, the asteroids themselves may be used as weapons. Put another way, if we can move an asteroid out of an Earth-intersecting orbit, we can move one into such an orbit, perhaps even in a way that targets a specific country or city. Arguably, this would be more likely to occur than a natural asteroid impact.

I read a good paper on this but unfortunately I don't have access to my drive currently and can't recall the name.

Comment author: MichaelDello 27 September 2016 11:04:09PM 4 points

I'd like to steelman a slightly more nuanced criticism of Effective Altruism. It's one that, as Effective Altruists, we might tend to dismiss (I do myself), but non-EAs see it as a valid criticism, and that matters.

Despite efforts, many still see Effective Altruism as missing the underlying causes of major problems, like poverty. Because EA has tended to focus on what many call 'working within the system', a lot of people assume that is what EA explicitly promotes. If I thought there was a movement which said something like 'you can solve all the world's problems by donating enough', I might have reservations too. Critics also worry that EA does not give enough credence to the value of building community and social ties.

Of course, articles like this one (https://80000hours.org/2015/07/effective-altruists-love-systemic-change/) have been written, but it seems this is still being overlooked. I'm not arguing we should necessarily spend more time trying to convince people that EAs love systemic change, but it's important to recognise that many people have what sound to them like totally rational criticisms.

Take this criticism (https://probonoaustralia.com.au/news/2015/07/why-peter-singer-is-wrong-about-effective-altruism/ - which I responded to here: https://probonoaustralia.com.au/news/2016/09/effective-altruism-changing-think-charity/). Even after I addressed the author's concerns about EA focusing entirely on donating, he still contacted me with concerns that EA is going to miss the unintended consequences of reducing community ties. I disagree with that claim, but it makes sense given his understanding of EA.

Comment author: MichaelDello 27 September 2016 10:41:30PM 4 points

Thanks for this Peter, you've increased my confidence that supporting SHIC was a good thing to do.

A note regarding other social movements targeting high schools (more a point for Tee, whom I'll let know I've mentioned this): I'm unsure how prevalent the United Nations Youth Association is in other countries, but in Australia it has a strong following. It has two types of members: facilitators (post-high school) and delegates (high school students). The facilitators run workshops on social justice and UN-related issues, as well as model UN debates.

The model is largely self-sustaining, and students always look forward to the next weekend conference, which is full of fun activities.

At this point I don't have an idea for how such a model might be applied to SHIC, but it could be worth keeping in mind for the future.

An alternative might be to approach UNYA to get a SHIC workshop into their curriculum. I don't know how open they would be to this, but I'm willing to try through my contacts with UNYA in Adelaide.

Comment author: DonyChristie 25 September 2016 06:45:20AM 0 points

Perhaps it would be easier to figure out what the worst possible ethical theory is? I don't recall ever seeing this question asked, and it seems like it would be easier to converge on.

Regardless of how negatively utilitarian someone is, almost everyone has an easier time intuiting the avoidance of suffering than the maximization of some positive principle, which ends up sounding ambiguous and somewhat non-urgent. I think suffering enters near mode more easily than happiness does. It may be easier for humans to agree on what the most anti-moral, badness-maximizing schema to adopt would be.

Comment author: MichaelDello 25 September 2016 10:45:56PM 1 point

This is a good point, Dony: perhaps avoiding the worst possible outcomes is better than seeking the best possible ones. I think the Foundational Research Institute has written something to this effect from the perspective of suffering and wellbeing in the far future, but the same might hold for promoting or discouraging ethical theories.

Any thoughts on the worst possible ethical theory?

Comment author: MichaelDello 21 September 2016 10:48:25AM 0 points

Thanks for this Kerry. I'm surprised that cold email didn't work, as I've had a lot of success cold contacting various organisations in Australia to encourage people outside of EA to attend EA events. Would you mind expanding a little on what exactly you did here, e.g. what kinds of organisations you contacted?

Depending on the event, I've had a lot of success with university clubs (e.g. philosophy clubs, groups for specific charities like Red Cross or Oxfam, general anti-poverty clubs, animal rights/welfare clubs) and the non-profit sector generally. EA Sydney also had a lot of success promoting an 80K event partly by cold contacting university faculty heads asking them to share the workshop with their students (though I note Peter Slattery would be much better to chat to about the relative success of different promotional methods for this last one).

Could you please expand on what you mean by "Identify one “superhero” EA"? What is the purpose of this?
