Comment author: rohinmshah  (EA Profile) 14 March 2017 05:14:55PM 1 point [-]

^ Yeah, I can certainly come up with examples where you need to react quickly, it's just that I couldn't come up with any where you had to make decisions based on ethics quickly. I think I misunderstood the post as "You should practice thinking about ethics and ethical conundrums so that when these come up in real life you'll be able to solve them quickly", whereas it sounds like the post is actually "You should consider optimizing around the ability to generally react faster as this leads to good outcomes overall, including for anything altruistic that you do". Am I understanding this correctly?

Comment author: Gentzel 18 March 2017 04:22:52PM 0 points [-]

That is the point.

The reason it is appropriate to call this ethical reaction time, rather than just reaction time, is that the focus of planning and optimization is on ethics and future goals. To react quickly to an opportunity that is hard to notice, you have to be looking for it.

Technical reaction time is a better name in some ways, but it implies too narrow a focus, while plain reaction time implies too broad a focus. There is probably a better name, though.

Comment author: MichaelPlant 13 March 2017 10:13:51PM 1 point [-]

I agree with Rohinmshah. I can see how reaction time could be important, but I don't think you demonstrated that this is actually the case.

One case I can think of where you'd have to give a time-pressured ethical view is debating, but I'm not sure how high stakes that really is.

Comment author: Gentzel 13 March 2017 10:56:51PM 3 points [-]

I just added some examples to make it a bit more concrete.

Comment author: rohinmshah  (EA Profile) 12 March 2017 04:05:33AM 4 points [-]

My main question when I read the title of this post was "Why do I expect that there are ethical issues that require a fast reaction time?" Having read the body, I still have the same question. The bystander effect counts, but are there any other cases? What should I learn from this besides "Try to eliminate bystander effect?"

"But other times you will find out about a threat or an opportunity just in time to do something about it: you can prevent some moral dilemmas if you act fast."

"Sometimes it’s only possible to do the right thing if you do it quickly; at other times the sooner you act, the better the consequences."

"Any time doing good takes place in an adversarial environment, this concept is likely to apply."

Examples? One example I came up with was negative publicity for advocacy of any sort, but you don't make any decisions about ethics in that scenario.

Comment author: Gentzel 13 March 2017 10:37:50PM *  2 points [-]

I think you may be misunderstanding what I mean by ethical reaction time, but I may change the post to reduce the confusion. I think adding examples would make the post a lot more valuable.

Basically, what I mean by ethical reaction time is being able to make ethical choices as circumstances unfold, rather than being caught unable to act, or acting in a bad way.

Here are a few examples, some hypothetical, some actual:


  • One can imagine the Reach Every Mother and Child Act might have passed last year if a few congressmen had been more responsive in adjusting it to get past partisan objections (the left wing opposing funding for religious groups, the right wing opposing funding for contraception). That likely would have saved a few thousand lives, and possibly millions according to USAID. (My model of the political constraints on the Reach Act may be wrong here, though.)

  • Any regulation that arises in response to a technology rapidly coming into existence: Uber executed its plans faster than it could be banned, which was probably good. We don’t necessarily want the same to be true for certain tech risks in AI and biology, which makes it important to quickly figure out the correct way to regulate things.

  • Anything on the U.S. Federal Register that is going poorly: if you don’t notice when a new rule relevant to your area of interest comes up (easy to imagine with animal welfare and new tech) and respond within the comment period, your concerns aren’t going to inform the regulation. (If you submit relevant research and get ignored, you can sue the federal agency and win: this happened when the FDA first failed to ban trans fat.) This is also the sort of situation that may actually require you to do research under a lot of time pressure.

Donor coordination:

  • If your organization is not prepared to accept donations or talk to donors, there are times you will lose a lot that you’d otherwise get. This, I think, is one of the reasons David Goldberg of Founder’s Pledge would carry a backpack containing everything needed for someone to legally commit some percentage of their venture cash-out value to effective charities.


  • If you initially need partnerships or funding and others are competing for them, then the OODA loop (observe, orient, decide, act) applies (though less so for funding, since there can be more sources).

Salary negotiations:

  • It makes a lot of sense to know ahead of time how you are going to negotiate, or to be very quick in thought. Saying the wrong thing could cost you thousands of dollars, and if you would have donated that money to AMF, that’s likely costing lives.

Grants/research opportunities:

  • Grant opportunities are competitive, so the OODA loop applies again. This is less true when the deadline is far away, however.

  • In 2015, several EA organizations had the opportunity to get free research from grad students at the University of Maryland’s School of Public Policy who were interested in effective altruism. To get the free research, an organization only had to submit a general, rough research proposal (under one page), enumerating some of the means by which a study or literature review would be undertaken, to maintain rigor and give guidelines for the advisor monitoring the grad students. No EA organization was able to react within the month after the solicitation, so other, non-EA organizations got the free research from the grad student projects class instead. It is reasonable to think the EA orgs may have had better priorities, and that there is reason to be skeptical of the grad students’ work, but it would have been a good way to get a group of students going into policy more bought into EA, even if they didn’t produce work at a level we’d accept. This is also partly my fault, since I could have informed the groups earlier, though not by a lot.

Handling the throughput vs. responsiveness trade off:

  • If you set up systems so that you can be reactive without draining as much of your attention, you can get more done generally. Dropping responsiveness and checking email once per day or less may make sense if you are a coder or researcher, but it doesn’t make sense if you are an information node between organizations that need to coordinate. Adopting simple algorithms, like writing down everything in my working memory before taking a call, has made me a lot more willing to take calls and has sped up my ability to get right back to work after an interruption.

Time sensitive opportunities:

  • If factory farming and/or malaria are going to be gone at some point in the next 20 years due to economic development, then there won’t be the same opportunity to reduce suffering and save lives in the future that there is now. That being said, donations don’t require the prompt reaction to opportunity that policy work on these issues does.
Comment author: HowieL 22 January 2016 03:19:07AM 2 points [-]

Would it be easy to send a link to the comment about the $7.7M program you mentioned?

Comment author: Gentzel 06 October 2016 08:34:42PM *  0 points [-]

Sorry for taking so long to respond. This is the comment:

Comment author: Gentzel 28 January 2016 08:19:54PM 0 points [-]

Great summary of why I hate it when people walk across the road instead of running, or when people space themselves out, so that no cars can get by, instead of clustering.

Comment author: Tom_Ash  (EA Profile) 28 October 2015 09:50:44PM 1 point [-]

Thanks for this detailed update! What's your plan for the future, depending on various outcomes?

Comment author: Gentzel 29 October 2015 01:35:26AM *  2 points [-]

This is my current heuristic, though if we learn unexpected things from feedback I could imagine updating in a different direction:

If positive feedback (successful comment) --> Try to restart project

If really good negative feedback --> Make a better lessons learned post and propose a different type of project

If ambiguous negative feedback --> Recommend people avoid experimenting with this type of policy action and focus on other policy interventions.

Comment author: John_Maxwell_IV 18 June 2015 08:20:57PM 2 points [-]

What does the "assessment" column mean?

Comment author: Gentzel 20 June 2015 10:58:09PM 1 point [-]

In an early version of the sheet we had multiple columns subjectively assessing things like the replaceability of comments, how high impact an influential comment could be, and our sense of how probable influence was. Each person on the team had their own column for ranking importance.

In the current sheet, these were merged into a single rough prioritization to remove clutter for those who help us. That being said, this prioritization did not take into account our team’s current ability to produce comments, or the fact that easier comments may be good for feedback. This is why we submitted a low-importance comment as a feedback test.

Comment author: Gentzel 16 June 2015 03:30:40PM 1 point [-]

For those who are interested, this is our current blog:

We will try to keep it fairly updated.

Comment author: egastfriend 14 June 2015 02:52:14AM 0 points [-]

This sounds awesome! Is the idea that policy comments would get regulators to: consider a new policy they weren't previously considering; change their mind about a proposed policy; help back them up politically for something they already want to do; or a combination of these? I'm not sure which type of impact policy comments are best for.

Comment author: Gentzel 15 June 2015 03:48:39PM 2 points [-]

I think it is most likely that we will be backing up good policies that some regulators already want. New policies are hard, and a lot of requests for comments come in a sort of binary form: "should we implement policy x.1 or x.2?"
