
By Matthew Gentzel and Ben Hoskin, cross-posted here.

People often dismiss the practical application of philosophical ideas such as the trolley problem. In the problem, an out-of-control trolley is racing down a track toward five workers who will not be able to get out of the way in time. If you flip a switch, the trolley will go onto another track, with only one worker who is unable to get out of the way. Flipping the switch sacrifices one to save five, which leads to a hard choice for some people.

Those who dismiss the usefulness of thought experiments like the trolley problem reason that such situations are unlikely to actually occur in their lives, and that they wouldn’t have enough time to think and make the right choice anyway, because there is too much to account for. This concern is valid: in the trolley problem you don’t even have enough time to warn the workers, let alone engage in philosophical speculation! If you’re not familiar with the situation, and there’s no time to think, how can you be sure you’ll make the right choice? To ensure you make the right decision, you need to do your moral thinking ahead of time and already understand the situation you are in by the time the threat becomes apparent. You need heuristics for how to act in a variety of situations; deliberation is too slow once an urgent problem arises. And if you can do all that deliberation ahead of time and establish fast habits for doing good, you might end up being able to prevent many trolley-problem-type dilemmas in the first place.

Cases as severe as the trolley problem rarely happen in day-to-day life. Less intense situations do arise often: there are many opportunities to influence decisions, prevent accidents, save dozens of people time, and develop relationships, where having good heuristics and habits for how to act ahead of time can be valuable. These situations often go unnoticed or unacted upon due to the bystander effect and diffusion of responsibility. But if we widen our horizons to a global scale, moral dilemmas as bad as the trolley problem happen all the time. Beyond our direct gaze, people are dying preventable deaths - preventable by us, in many cases - but the opportunity we have to intervene is easily overlooked because the deaths happen far away or in the future. Many times you only find out about a threat in hindsight, when it’s already too late to do anything about it. But other times you will find out about a threat or an opportunity just in time to do something about it: you can prevent some moral dilemmas if you act fast.

This is the main idea behind ethical reaction time: if your ethics involve outcomes and actually helping people, being competently reactive helps! Sometimes it’s only possible to do the right thing if you do it quickly; at other times the sooner you act, the better the consequences. This is similar to the idea behind the haste consideration: what you do with your time now is likely to be more important than what you do in the future, because it influences what you and others will actually be able to do in the future. This compounding effect means that, generally, acting sooner is better. Of course, the future is hard to predict, so the best course of action often only becomes apparent just before you have to make a decision - hence the need for quick reactions.
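To make the compounding claim a bit more concrete, here is a toy model (our illustration, not from the original post; the symbols V_0, r, and T are assumptions): suppose resources invested now in movement-building or career capital grow at some rate r per year before being converted into direct impact at a horizon T. Then delaying action by t years shrinks the eventual impact by a factor of (1+r)^t:

```latex
% Toy model (illustrative; V_0, r, and T are assumed symbols):
% value of acting at time t, with growth rate r and horizon T.
\[
  V(t) = V_0 (1 + r)^{T - t}
  \qquad \Longrightarrow \qquad
  \frac{V(0)}{V(t)} = (1 + r)^{t}.
\]
% Example: with r = 0.10, a 5-year delay costs a factor of
% (1.1)^5 \approx 1.61, i.e. the delayed action achieves roughly
% 38\% less under this toy model.
```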

The usefulness of reaction time is demonstrated well in real-time strategy games, where speed in the first minutes of the game matters a lot, often measured with metrics such as time to reach a certain technology and APM (actions per minute). Players with higher meaningful APM can beat players with better general strategy skills simply by being many steps ahead. This speed allows them to build a larger economy faster (more career capital), micromanage units to keep them alive (keep options open), and correct mistakes rapidly. In the military, the concept of the OODA loop (Observe, Orient, Decide, Act) is very similar:


[Figure: diagram of John Boyd’s OODA loop]
“An entity (whether an individual or an organization) that can process this cycle quickly, observing and reacting to unfolding events more rapidly than an opponent, can thereby ‘get inside’ the opponent’s decision cycle and gain the advantage.”

What this means is that the faster entity is changing course while the opponent is still deciding what to do about the entity’s prior state. Being “inside” an opponent’s decision cycle in this way means the opponent can’t catch up or use superior power to defeat you. Another useful thing about the OODA loop is that it encourages gathering information (observe) and processing it (orient) quickly, in a way that emphasizing reaction time alone does not. Just reacting to situations quickly might be a good thing, but it could also mean speeding toward irreversible mistakes.
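As a loose illustration of the structure (our sketch; the function names and the toy heuristic are invented for this example, not taken from Boyd or the post), the cycle can be written as a simple loop, where the advantage comes from completing iterations faster than the situation or an opponent changes:

```python
import time

# A minimal, illustrative sketch of an OODA loop. An agent that
# completes these iterations faster than its environment changes is
# effectively "inside" the opponent's decision cycle.

def observe(environment):
    """Gather raw information about the current situation."""
    return environment.copy()

def orient(observations, prior_model):
    """Interpret observations in light of existing models and heuristics."""
    model = dict(prior_model)
    model.update(observations)
    return model

def decide(model):
    """Pick an action under the current model (toy heuristic: act on
    whichever issue currently looks most urgent)."""
    return max(model, key=model.get) if model else None

def act(action):
    """Carry out the chosen action (here, just report it)."""
    print(f"acting on: {action}")

def ooda_loop(environment, prior_model, cycle_seconds=1.0, iterations=3):
    for _ in range(iterations):
        start = time.monotonic()
        obs = observe(environment)
        prior_model = orient(obs, prior_model)
        act(decide(prior_model))
        # The key variable is how short this cycle is relative to how
        # fast the situation (or an opponent) changes.
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, cycle_seconds - elapsed))
    return prior_model

if __name__ == "__main__":
    ooda_loop({"fire": 9, "flood": 4}, {}, cycle_seconds=0.1)
```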

Any time doing good takes place in an adversarial environment, this concept is likely to apply. For example, while many harmful memes have been quite thoroughly debunked, they are shared faster than their hosts can dismiss them, so they spread anyway and come to rest in minds where they will never be debunked. Likewise, more reactive political organizations can accumulate power disproportionately fast relative to their size and cost.

In summary, a person acting in accordance with the idea of ethical reaction time would do the following:

  • Pay attention to things expected to be relevant to ethical outcomes
  • Develop career capital toward positions where you can take action in hard situations
  • Develop general rules and heuristics for doing good when there is little time to think
  • Train reflexes for acting based on these rules, and work to eliminate the bystander effect
  • Continually update these rules as you see how they work in the real world


End Notes:

Initially, this post laid out the idea of ethical reaction time mostly in the abstract. With the following examples I hope to illustrate the point better:

Policy:

  • One can imagine the Reach Every Mother and Child Act might have passed last year if a few congressmen had been more responsive in adjusting it to get past partisan objections (the left wing opposing funding for religious groups, the right wing opposing funding for contraception). That likely would have saved a few thousand lives, and possibly millions according to USAID (http://effective-altruism.com/ea/pk/how_to_support_and_improve_the_reach_every_mother/). My model of the political constraints on the Reach Act may be wrong here, though.

  • Any regulation of technology that occurs in response to the technology rapidly coming into existence: Uber executed its plans faster than it could be banned, which was probably good. We don’t necessarily want the same to be true for certain tech risks in AI and biology, which makes it important to figure out the correct way to regulate things quickly.

  • Anything on the U.S. Federal Register that is going poorly: if you don’t notice when a new rule relevant to your area of interest comes up (easy to imagine with animal welfare and new tech) and respond within the comment period, your concerns aren’t going to inform the regulation. (If you put in relevant research and get ignored, you can sue the federal agency and win: this happened when the FDA first failed to ban trans fat.) This is also the sort of situation that may actually require you to do research under a lot of time pressure.

Donor coordination:

  • If your organization is not prepared to accept donations or talk to donors, there will often be times when you lose a lot that you’d otherwise get. This, I think, is one of the reasons David Goldberg of Founders Pledge would carry a backpack containing everything needed for someone to legally commit some percentage of their venture cash-out value to effective charities.

Start-ups:

  • If you initially need partnerships or funding and others are competing for them, then the OODA loop applies (though less so for funding, since there can be more sources).

Salary negotiations:

  • It makes a lot of sense to know how you are going to negotiate ahead of time, or to be very quick in thought (http://haseebq.com/my-ten-rules-for-negotiating-a-job-offer/). Saying the wrong thing could cost you thousands of dollars, and if you are donating that to AMF, that’s likely costing lives.

Grants/research opportunities:

  • Grant opportunities are competitive, so the OODA loop applies. This is less true when the deadline is far away, however.

  • In 2015, several EA organizations had the opportunity to get free research from grad students at the University of Maryland’s School of Public Policy who were interested in effective altruism. To get the free research, an organization only had to submit a rough research proposal (under one page) outlining some of the means by which a study or literature review would be undertaken, to maintain rigor and provide guidelines for the advisor monitoring the grad students. No EA organization was able to react within the month after solicitation, so other organizations got the free research from the grad student projects class instead. It is reasonable to think the EA orgs may have had better priorities, and that there is reason to be skeptical of the grad students’ work, but it would have been a good way to get a group of students going into policy more bought into EA, even if they didn’t produce work at a level we’d accept. This is also partly my fault, since I could have informed groups earlier, though not by a lot.

Handling the throughput vs. responsiveness trade-off:

  • If you set up systems so that you can be reactive without draining as much of your attention, you can get more done in general. Dropping responsiveness and answering emails once per day or less may make sense if you are a coder or researcher, but it doesn’t make sense if you are an information node between organizations that need to coordinate. Adopting simple habits, like writing down everything in my working memory before taking a call, has made me both a lot more willing to take calls and faster at getting back to work after an interruption.

Time sensitive opportunities:

  • If factory farming and/or malaria are going to be gone at some point in the next 20 years due to development and economics, then there won’t be the same opportunity to reduce suffering and save lives in the future that there is now. That said, donations don’t require prompt reaction to opportunity the way policy opportunities in these areas do.

Comments (7)

My main question when I read the title of this post was "Why do I expect that there are ethical issues that require a fast reaction time?" Having read the body, I still have the same question. The bystander effect counts, but are there any other cases? What should I learn from this besides "Try to eliminate the bystander effect"?

"But other times you will find out about a threat or an opportunity just in time to do something about it: you can prevent some moral dilemmas if you act fast."

Examples?

"Sometimes it’s only possible to do the right thing if you do it quickly; at other times the sooner you act, the better the consequences."

Examples?

"Any time doing good takes place in an adversarial environment, this concept is likely to apply."

Examples? One example I came up with was negative publicity for advocacy of any sort, but you don't make any decisions about ethics in that scenario.

I think you may be misunderstanding what I mean by ethical reaction time, but I may change the post to reduce the confusion. I think adding examples would make the post a lot more valuable.

Basically, what I mean by ethical reaction time is just being able to make ethical choices as circumstances unfold and not be caught unable to act, or acting in a bad way.

Here are a few examples, some hypothetical, some actual; I’ve added them to the end notes of the post above.

Policy:

  • One can imagine the Reach Every Mother and Child Act might have passed last year if a few congressmen were more responsive in adjusting it to get past partisan objections (left wing opposing funding religious groups, right wing opposing funding contraception). That likely would have saved a few thousand lives, and possibly millions according to USAID. (http://effective-altruism.com/ea/pk/how_to_support_and_improve_the_reach_every_mother/) My model of the political constraints on the Reach Act may be wrong here though.

  • Any regulation of technology that starts occurring in response to the technology rapidly coming into existence: Uber executed its plans faster than it could be banned, which was probably good. We don’t necessarily want the same to be true for certain tech risks in AI and biology, which makes it important to figure out quickly the correct way to regulate things.

  • Anything on the U.S. Federal Register that is going poorly: if you don’t notice a new rule come up relevant to your area of interest (easy to imagine this with animal welfare and new tech) and respond within the comment period, your concerns aren’t going to inform the regulation (if you put in relevant research, and get ignored, you can sue the federal agency and win: this happened when the FDA first failed to ban trans fat). This is also a sort of situation that may actually require you to do research under a lot of time pressure.

Donor coordination:

  • If your organization is not prepared to accept/talk to donors, there are often times you will lose a lot that you’d otherwise get. This I think is one of the reasons David Goldberg with Founder’s Pledge would carry a backpack with him containing everything needed for someone to legally commit some % of their venture cash out value to effective charities.

Start-ups:

  • If initially you need partnerships/funding and others are competing for those partnerships/funding then OODA loop applies (but less for funding since there can be more sources).

Salary negotiations:

  • It makes a lot of sense to know how you are going to negotiate ahead of time, or to be very quick in thought. (http://haseebq.com/my-ten-rules-for-negotiating-a-job-offer/) Saying the wrong thing could cost you thousands of dollars, and if you are donating that to AMF, that’s likely costing lives.

Grants/research opportunities:

  • Grant opportunities = competitive = OODA loop. Less true when there is a deadline that is far away however.

  • In 2015, there were several EA organizations that had the opportunity to get free research from grad students who were interested in Effective Altruism from the School of Public Policy at the University of Maryland. In order to get free research, an organization would have had to submit a general/rough research proposal (<1 page), in which they could enumerate some of the means by which a study or literature review would be undertaken to maintain rigor/ guidelines for the advisor monitoring grad students. No one was able to react within the month after solicitation, so other non-EA organizations got free research instead for the grad student projects class. It does seem reasonable that EA orgs may have had better priorities, and that there is reason to be skeptical of the grad students, but it would have been a good way to get a bunch of students going into policy more bought into EA even if they didn’t produce work at a level we’d accept. This is also partly my fault, since I could have informed groups earlier, though not by a lot.

Handling the throughput vs. responsiveness trade off:

  • If you set up systems so that you can be reactive without draining as much of your attention, you can get more things done generally. Dropping responsiveness and responding to emails once per day or less may make sense if you are a coder/researcher, but it doesn’t make sense if you are an information node between organizations that need to coordinate. Adopting simple algorithms like writing down everything in my working memory before taking a call has made me both a lot more willing to take calls, and sped up my ability to get right back to work after interruption.

Time sensitive opportunities:

  • If factory farming and or malaria are going to be gone at some point in the next 20 years due to development/economics, then there won’t be the same opportunity to reduce suffering/save lives in the future that there is now. That being said, donations don’t require prompt reaction to opportunity the way policy opportunities with respect to these do.

For some of the examples, it seems unclear to me how they differ from just reacting quickly generally. In other words, what makes these examples of 'ethical' reactions and not just 'technical' reactions?

^ Yeah, I can certainly come up with examples where you need to react quickly, it's just that I couldn't come up with any where you had to make decisions based on ethics quickly. I think I misunderstood the post as "You should practice thinking about ethics and ethical conundrums so that when these come up in real life you'll be able to solve them quickly", whereas it sounds like the post is actually "You should consider optimizing around the ability to generally react faster as this leads to good outcomes overall, including for anything altruistic that you do". Am I understanding this correctly?

That is the point.

The reason it is appropriate to call this ethical reaction time, rather than just reaction time, is that the focus of planning and optimization is around ethics and future goals. To react quickly to an opportunity that is hard to notice, you have to be looking for it.

Technical reaction time is a better name in some ways, but it implies too narrow a focus, while plain reaction time implies too wide a focus. There is probably a better name, though.

I agree with Rohinmshah. I can see how reaction time could be important, but I don’t think you demonstrated that this is actually the case.

One case I can think of where you’d have to give a time-pressured ethical view is debating, but I’m not sure how high-stakes that really is.

I just added some examples to make it a bit more concrete.