
 

Summary

If your main contribution to EA is time, how long should you spend trying to figure out the best thing to do before you switch to taking action? The EA community has spent a lot of time thinking about this question as it relates to money, but money and time differ in important ways: you can save money and give it at a later date; you cannot do the same with time. In this article I will present my current best guess at the answer.

Broadly speaking, you should switch from researching to acting once the expected value of marginal research equals the expected value of acting. The goal then is to figure out the values of these two parameters.

The value of taking action depends on:

  • Time left. The expected amount of time you’ll be able to capitalize on your research. This is affected by things like:

    • You changing (ie value drift)

    • You retiring

    • The world changing

  • Inspiring others. How many other people do you inspire to act based on the example you set?

The value of marginal research depends on:

  • Value increase. How much better does your best option become after researching the possibilities?

  • Delegability. If others are willing to enact your conclusions, and would do so as well as or better than you, you should research indefinitely, as long as you’re still making progress.

  • Research progress. What’s the shape of progress? Is it an S-curve, with tons of low-hanging fruit at the beginning and subsequent discoveries getting progressively harder to find? If so, what’s the slope of the curve during its most productive interval?

The conclusion I drew from these considerations was to invest heavily in up-front research, then do research at spaced intervals to account for considerations you missed, new ones others have thought of, and the world changing over time.

The initial up-front research time can be calculated by putting the above considerations into a formula based on your best estimates, then figuring out where the marginal value of research dips below the value of enacting the conclusions you came to. Our current best guess suggests we spend two to eight person-years researching.


What’s the existing literature on the topic?

There is a lot already written on doing good now or doing good later in the EA community, but mostly in regards to giving money. There is also much written about the topic in the wider decision theory community, where it’s commonly referred to as optimal stopping or explore/exploit trade-offs. While there are many interesting ideas in the area, their solutions cannot be straightforwardly applied to EA because they solve problems fundamentally different from those we face.

The secretary problem is probably the most famous example of an optimal stopping problem, but it is not a good fit for analyzing EA decisions. Briefly, the thought experiment sets out to figure out how many secretaries to interview before you hire one. Given the conditions of the scenario, there is a mathematical solution - interview the first 37% of the candidates without hiring any of them, and after the 37% mark, hire the first secretary who is as good as or better than the best found in that exploration phase.

The reason this cannot be applied to our question is that it assumes you can quickly tell which secretary is better than another, but in EA, problems are very difficult to compare. For example, is deworming or bed nets better? It’s very unclear, and that’s in a relatively well-studied area. Comparing animal rights research to preventing AI x-risk is even more fraught with ambiguity. Other limitations of the secretary problem are discussed in the comments of this post.
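As a quick illustration of the 37% rule described above, here is a small Python simulation. The setup is a simplifying assumption of mine (candidate quality drawn uniformly at random), not part of the original problem statement:

```python
import random

def hire_best(n=100, explore_frac=0.37):
    """One trial of the classic strategy: pass on the first explore_frac of
    candidates, then hire the first one who beats the best seen so far.
    Returns True if the hired candidate was the best overall."""
    scores = [random.random() for _ in range(n)]
    cutoff = int(n * explore_frac)
    benchmark = max(scores[:cutoff])
    for score in scores[cutoff:]:
        if score > benchmark:
            return score == max(scores)
    return False  # the best candidate was in the exploration phase

random.seed(0)
trials = 20000
success_rate = sum(hire_best() for _ in range(trials)) / trials
# success_rate lands near the classic 1/e ≈ 0.37 result
```

Notice that the strategy only needs to compare candidates pairwise and instantly, which is exactly the assumption that breaks down for EA causes.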

The multi-armed bandit problem is similarly limited. If you are at a casino with multiple slot machines (sometimes called one-armed bandits) and you don’t know each machine’s probability of payoff, which arms do you pull and in what order? There are multiple solutions to this problem and its variants; however, its applications to EA are limited.

For example, it assumes that when you pull an arm, you instantly know what your reward is. This assumption does not hold in EA. Even if we had perfect knowledge about the results of different actions, it would still be unclear how to value those results. Even if you are an ethical anti-realist, there could be considerations you hadn’t thought of before that change the size or even the sign of your expected effect (e.g., your stance on speciesism or population ethics).

Despite these and other unlisted problems, there are still some useful takeaways from the literature. Those I found most useful were the arm analogy, the Gittins index, and the observation that explore/exploit decisions depend largely on the time you have left.

 

What counts as “pulling an arm” in the multi-armed bandit?

In the multi-armed bandit problem, the Gittins index provides a solution. It says to pull arms you haven’t pulled before, because they could reveal an arm that beats your current best. However, if you’ve pulled an arm a certain number of times and it still hasn’t beaten your current best, you can move on.
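To make the "pull each arm a few times, then move on" idea concrete, here is a toy Python sketch. This is not the actual Gittins index computation (which is more subtle); it is a simplified explore-then-commit stand-in, and the machine probabilities, pull counts, and horizon are all invented for illustration:

```python
import random

def explore_then_commit(payoff_probs, pulls_per_arm=20, horizon=1000):
    """Sample every arm a fixed number of times, then commit the
    remaining pulls to whichever arm looked best so far."""
    total = 0
    averages = []
    for p in payoff_probs:
        wins = sum(random.random() < p for _ in range(pulls_per_arm))
        total += wins
        averages.append(wins / pulls_per_arm)
    best = averages.index(max(averages))  # move on from the other arms
    remaining = horizon - pulls_per_arm * len(payoff_probs)
    total += sum(random.random() < payoff_probs[best] for _ in range(remaining))
    return total

random.seed(1)
# Three slot machines; the policy usually finds and exploits the 0.6 arm,
# so the average reward sits well above the ~333 a uniform policy would get.
avg_reward = sum(explore_then_commit([0.2, 0.4, 0.6]) for _ in range(200)) / 200
```

The fixed `pulls_per_arm` budget is the crude analogue of "give an arm a certain number of tries before giving up on it".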

 

So what counts as pulling an arm when it comes to EA? Is it starting a charity or working for an organization? This doesn’t seem right. Even though I have never worked for Homeopaths Without Borders, I can know that its utility payoff won’t be good. I have already, in some sense, pulled the arm.

This leads to the idea that pulling an arm is analogous to activities that gather information about the option under consideration. Thus the most natural equivalent to pulling an arm is doing a unit of time of learning. The unit of time is relatively arbitrary and can be cut up into very large or small amounts, so I’ll just use a day for simplicity’s sake. Learning can be done through doing the option or researching through other methods.

This speaks to the question of how long to give a cause a chance before giving up. For example, if you are not convinced of a cause in the EA community, how long should you keep reading new articles about it, engaging in debates with its supporters, and so forth, before you stop and start researching other possible contenders? As a concrete example, if you already know a lot about NTDs but very little about international trade reform, it would be more valuable to research the latter.

 

Explore / exploit decisions depend on the time you have left

Another useful heuristic that the optimal stopping literature gave is that how long you spend exploring versus exploiting depends on how long you have left to exploit your best option. This fits with intuition fairly well. For example, if you are on your deathbed, you probably shouldn’t waste any time in trying to make new friends, but rather use your precious last hours with people who you already know and love. However, if you’re in college and have many decades left of your life, you should probably invest a lot of time making friends, because if you find a great new friend, you will be able to enjoy that relationship for decades to come.

This applies to charity in terms of how many working years you have left, though to be more precise, what matters is not how many working years you have left but how many years you’ll be able to capitalize on your research.

 

Value of doing

Time left

After brainstorming, we came up with 11 factors that affect how many years you should expect to be able to capitalize on your exploration phase. I expect these to vary enormously based on personality, history, choices, environment, etc., so one person’s answers cannot be generalized to other people. Nonetheless, you can apply the same process to yourself as we did and see what the results are.

The factors are:

  1. Expected retirement age. When do you think you will retire?
  2. Health degeneration. This is a minor factor in the developed world and affected our calculations negligibly. However, this could be different based on your own risk factors.
  3. World change. The world probably will change after you’ve done your research and switched to action. What odds do you place on the changes affecting your action to the point where you have to go back to the drawing board?
  4. Value drift. How likely do you think that you will stop being an EA due to changing your mind, losing motivation, etc.?
  5. Research obsoletion. What odds do you put on finding a new body of ideas / research that completely obsoletes your previous research? For example, many of my altruistic plans and research prior to EA got completely trashed upon learning about concepts such as counterfactuals and the base rate fallacy.
  6. Pinker Effect. Pinker’s book, The Better Angels of Our Nature, makes a great case that the world is getting better. Maybe if we spend too much time researching, the remaining interventions will be less effective than the ones we know about now.
  7. Inspiring others. You may be able to inspire people who will roughly, or very closely, enact your values, thus increasing the amount of person-hours that your research could capitalize on.
  8. Greater hiring ability. If you gain an ability to hire people, you can further increase your ability to affect the world.
  9. Close-mindedness freeze. It is a common conception that the elderly are more close-minded than the young. If you think your mind will effectively close early on, you should make sure to do all of your research before that, so you don’t lock in on a sub-optimal conclusion.
  10. Flake drift. This is the counterpart to close-mindedness freeze, and is the idea that even if you come to a great conclusion, maybe you, personality-wise, can only stay on any given project for a certain amount of time before you get bored and move on.
  11. Unknown unknowns. No matter how thoroughly you’ve thought through something, there are always things you haven’t thought of.

I will cover some of these factors in more detail.

Value Drift

Theoretical Approach

The most commonly cited concern about how long you will be able to capitalize on your research is value drift. Many things could cause this, such as burnout, having children who then take priority, getting distracted, etc.

An important thing to keep in mind with value drift is that there are degrees of it. Scaling back your involvement by 10% is not the same as giving up on EA altogether and becoming a surf bum in Hawaii. Burning out so that you need a two-week vacation is different from burning out so thoroughly that you never give another hour of your time.

The risk of value drift is a very personal one, so it cannot be generalized easily, but by the same token, it is very easy to be biased about oneself. People typically fall for the end-of-history illusion regarding their personality, consistently under-predicting how much they will change in the future. In fact, since I’ve joined the EA movement I’ve seen a substantial percentage of people be very enthusiastic and involved at the beginning, only to completely switch or lose motivation a few months or years later. The movement is still very young, so I suspect an even larger proportion will leave as time goes on.

 

Likelihood of value drift is influenced by:

  • Community. People are very influenced by their peers. The greater the percentage of the people you hang out with who share your values, the less likely your values are to shift.

  • Career capital. If you build up your career capital that’s only really useful in the charity field, you’re less likely to drift because switching sectors would set you back to square one.

  • How long you’ve been an EA. If you meet somebody who says they decided to go vegan that very day, what odds do you put on them being vegan 10 years from now? You probably put the odds very low, and that makes sense. If they’ve already been vegan for 30 years, you’d put the odds much higher. Likewise with EA. If you’ve been an EA for 10 years, you’ll likely stay that way. If you’re new, you should probably put high odds on your values drifting, even if you are really excited at this very moment.

 
Empirical Approach

Are there any empirical studies that shed light on the issue? Unfortunately, there is little data. There were some interesting studies on how many people who became social workers stayed in the field, but the literature was inconsistent and the measurement only a rough proxy. For example, if somebody leaves government work to run a nonprofit women’s shelter, does that count as leaving social work? Likewise, what’s the relevant reference class for EAs leaving the movement? Should I put myself in the category of those who donated $20 to AMF and then forgot about GiveWell? Or maybe it should be the people who’ve started charities in the area? That seems like reference class tennis to me, and I do not have a current solution to that issue. In the end, the empirical approach did not provide much new information.

In the end, considering all of these factors, I put a 15% chance on major value drift and a 45% chance on minor drift, with both most likely to happen earlier on and less likely as time goes on. This had a large effect on my final prediction of working years left, which is to be expected.
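To illustrate how front-loaded drift probabilities of this sort feed into expected working years, here is a toy survival model in Python. The hazard numbers are invented for illustration, not the figures used above:

```python
def expected_working_years(total_years=40, initial_hazard=0.04, decay=0.85):
    """Toy model: each year carries some chance of dropping out entirely,
    highest at the start and decaying thereafter (made-up parameters)."""
    staying = 1.0      # probability you are still engaged
    expected = 0.0     # expected number of engaged years
    hazard = initial_hazard
    for _ in range(total_years):
        staying *= 1 - hazard
        expected += staying
        hazard *= decay   # drift risk falls the longer you stay
    return expected

# With a front-loaded 4% annual drift risk decaying by 15% a year,
# 40 nominal years shrink to roughly 32 expected engaged years.
years = expected_working_years()
```

Because the hazard decays, most of the expected loss happens in the first decade, which is why drift risk matters most for newcomers.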

 

Pinker Effect

The world is getting better, which is fantastic, but could it be bad for our altruistic endeavors? Could we run out of all of the good options? I think the answer is no, unless you’re working exclusively in global health.

Let’s start with global poverty, especially global health. It is undeniably getting better, and at an incredibly fast rate. It would be unsurprising if, 60 years from now, it were considered strange that anybody went without bed nets or their recommended vaccines. If at the end of your research you decide that global health is your top cause, you should get started right away, as all of the good opportunities are indeed getting snatched up.

However, global health is not the only cause. Some problems, such as animal welfare or environmental degradation, are getting worse rather than better, so there might be more to be done to help in the future.

An additional benefit of the world getting better is that we’re getting better at helping too. There might actually be more effective interventions in the future because people have spent a longer time thinking about and testing different strategies. For example, medieval activists didn’t have the opportunity to provide vaccines because they didn’t exist yet.

In the end, the biggest factor for us is that we are becoming wiser and expanding our moral circle. There are likely other causes, such as bug suffering, that could be extremely valuable and neglected, and that society will ignore, much as it has factory farming, for decades or centuries to come.

So, the good news is that the world will still have problems after your research*, so don’t worry too much about it getting better. You should probably not have this dramatically affect how many working years you have left.  

*Just as a note, in case it wasn’t clear because tone can be lost in writing, I am 100% joking that it’s good news that there will still be problems when we’re older. It would obviously be great news if there were no more suffering.

 

Potential for a larger team / inspiring others

I have the fortune to be on a team of like-minded individuals, such that we can have a high level of coordination. This means that I can focus on research while others focus on the action we currently think is the highest value. The higher this alignment is, the larger my effective sphere of influence. If I can completely rely on one other person, and they on me, we can get twice as much done as a single person.

 

This is true at a very high level of alignment with a small number of people, but it could also be true with a large number of people at lower value and epistemic alignment. To illustrate the point: if you do the research and it inspires somebody to start the exact charity you would have wanted, action is highly delegable, and if your comparative advantage is research, you should continue researching, propagating your results, and advocating for others to act upon them. This is the general strategy of GiveWell when it comes to recommending where others give, and of 80,000 Hours in terms of its career research.

 

The question is - how delegable is action? Money is fairly straightforward. A dollar given to AMF by somebody who hates science does the same amount of good as a dollar given by a science geek. However, if a charity is run by one person instead of the other, many different choices will be made that will affect the effectiveness of the charity. For example, the science-disregarding person might hear anecdotes of bed nets being used as fishing nets and switch to a different intervention, whereas the science geek might read the literature and see that while this occasionally happens, it’s swamped by the positive effects.

There are some examples in the EA community of researchers inspiring charities, and founders stepping back from their roles as CEOs, which can provide a rough outside view. You can check how valuable the charity continued to be, according to the founder’s values and epistemics, compared to how it was or would have been had they continued to run the charity. Based on the examples I am familiar with, approximately 15% got better after handing off, 45% stayed the same, and 40% got worse. However, this changes if you take into account how much time the founder or researcher invested in the charity at the beginning, ranging from simply writing a blog post about the idea to spending multiple years setting up the organization. With heavy investment, 0% got worse, 85% stayed the same, and 15% got better.

There are other factors aside from founder time that affect delegability:

  • How good your ideas are. If your ideas are terrible to everybody but yourself, you will have to enact them yourself because you won’t be able to persuade anybody to follow your advice.

  • How persuasive you are. Even if your ideas are great, if you find it difficult to persuade people of simple things, persuading them to start a charity based on your advice will be impossible.

  • How palatable your ideas are. If your best option is starting a global poverty charity, most people think that is a worthy cause. If your best option is fighting factory farming, many people think this is not actually a problem, so there will be fewer people willing to organize against it.

Despite our relatively pessimistic views on delegability, this still represented a huge increase in our number of “effective working years” and thus the value of upfront research.  

 

Conclusion on working years remaining

To put together all of these considerations, I started off by assuming that I would retire at the normal age, due to factors pulling both for and against late retirement. This left me with 40 years. Then I added or subtracted expected years of work based on the probabilities I put on the different factors. Results will vary based on your personality, choices, and environment. For myself, after putting in hours and hours of work, thought, and calculations, I ended up, anti-climactically, with 40.2 years of expected work.

 

This was not what I was expecting, but it was still worth the effort. Initially I had simply thought about value drift and applied a steep discount to my work, but I had not taken into account any positives or thought about the whole picture. I recommend that others try this exercise as well, because it could affect your decisions.

Flow-through effects

It has been argued that the effects of doing good now compound: that if you inspire one person to earn to give, you will continue inspiring new people, and they will too, thus “earning interest on your interest”. I believe this is an oversimplification of the effects.

For one, say you start a direct poverty charity and it inspires approximately one new charity per year, and those charities have the same “inspiration rate”. This won’t go on forever until everybody in the world is starting direct poverty charities. It’s not an exponential curve but rather an s-shaped curve: initial low-hanging fruit, exponential growth for a period of time, then tapering. However, this isn’t the end of the story. After all of these charities start, they don’t last forever. People retire, charities shut down, problems are solved, etc. So after the tapering, there is probably a relatively linear comedown.

Additionally, compounding benefits apply to doing good later as well. It’s not as if nobody will care if you start a charity 10 years from now. However, there is still a penalty for starting later. If you spent 39 years researching and then 1 year doing, you’d have only 1 year of inspiring, so only one extra charity started because of you; whereas if you had spent 1 year researching and 39 years doing, you’d have inspired far more charities.

It is important to note that this model compares acting now with doing the same action years later. This means it does not take into account the increased value of your best option that you reap from more research.

Furthermore, this is probably an overly optimistic scenario. There are many more ways to deviate from doing good through doing than through giving. If you are earning to give and you inspire another person to earn to give, and they donate to the same charity or set of charities as you, it’s easy to see how much good they are doing by your standards. Starting a charity or working for one is much more complicated because of the diversity of options. Depending on how pluralistic your values and epistemics are, inspiring others is more or less good.

This reasoning is analogous to another consideration: doing builds more capacity and resources than researching. Research of this sort is relatively cheap to run, with salaries as the only real cost. However, running a direct charity requires far more employees and direct costs, such that one must build up a larger donor network to run it. For example, our research program costs $25,000 USD this year, whereas Charity Science Health will cost $250,000 for its first year and could well reach the multiple-million-dollar-per-year mark. On the other hand, if the cause you end up choosing is very different from your initial top option, then much of the donor network you built up for that first charity will not be interested in your next choice.

Nonetheless, this ends up not being too large of a consideration, because building up resources likely follows an s-shaped curve. This means that even if you start a few years later than your counterfactual self, you will eventually more or less catch up with them in terms of resources.
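The catch-up point can be sketched with a toy logistic ("s-shaped") curve. All parameters here are invented for illustration, not estimates of real charity growth:

```python
import math

def resources(t, start_delay=0.0, capacity=100.0, rate=0.5, midpoint=8.0):
    """Toy s-shaped growth of a charity's resources/donor network,
    shifted by start_delay years for a late starter."""
    if t < start_delay:
        return 0.0
    return capacity / (1 + math.exp(-rate * (t - start_delay - midpoint)))

# A three-year-late start leaves a big gap early on...
gap_at_10 = resources(10) - resources(10, start_delay=3)
# ...but both curves saturate, so the latecomer has almost
# fully caught up a couple of decades in.
gap_at_25 = resources(25) - resources(25, start_delay=3)
```

Because both curves flatten at the same capacity, delaying shifts the curve without shrinking where it ends up, which is the sense in which you "more or less catch up".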

 

Learning by Doing

There’s a great quote from Brian Tomasik: “There's a quote attributed to Abraham Lincoln (perhaps incorrectly): ‘Give me six hours to chop down a tree, and I will spend four hours sharpening the axe.’ This nicely illustrates the idea of front-loading learning, but I would modify the recommendation a little. First try taking a few whacks, to see how sharp the axe is. Get experience with chopping. Identify which parts of the process will be a bottleneck (axe sharpness, your stamina, etc.). Then do some axe sharpening or resting or whatever, come back, and try some more. Keep repeating the process, identifying along which variables you need most improvement, and then refine those. This agile approach avoids waiting to the last minute, only to discover that you've overlooked the most important limiting factor.”

 

This covers a rather important advantage of doing - that you’re not just doing. Learning never stops. How much should we take this into account?

 

I think that this is definitely important because it helps determine which options are realistic and helps calibrate your probabilities. Indeed, historically there have been many things that maybe I could have learned via research, but I probably wouldn’t have without getting my hands dirty.

On the other hand, learning by doing is learning by anecdote. Learning through reading is learning through thousands of anecdotes, otherwise known as science, or at least picking up the individual anecdotes of many others. Additionally, there are some things that you can simply never “learn by doing”, including many crucial considerations. For example, you can’t just work for a charity and naturally pick up whether frequentism or Bayesianism is better, or whether you should be speciesist or not. Those are things that need explicit reasoning and research.

Furthermore, learning by doing is very costly per amount learned compared to direct learning. Getting a job or starting a project in an arena is a huge investment which is hard to pull back from once you’ve started.

On the other hand, you risk losing touch with reality if you do not have some hands-on experience. Hands-on experience also lessens the gap between learning via research and implementing your top option.

Fortunately for me I share an office with a direct implementation organization, so I get the benefits of both worlds, and I have not felt the need to completely address this question. This may be hard to replicate, but some alternatives, like befriending those doing direct work, might confer similar benefits.

 

Value of Research

The value of research is, rather straightforwardly, the increased value of your best choice. A great example: when I started my altruistic career as a child, I saved kelp. My grandmother had told me that kelp were alive. I took this to mean they were sentient, and spent many hours in the summer saving kelp from drying out on the beach and suffering a painful, drawn-out death. In retrospect this was adorable, but 0% effective. I have since learned that kelp are not sentient, and given my increased understanding of the world, I am now helping people at a much larger scale. The value of my best option increased enormously.

The key question, then, is how much marginal research increases the value of your best option. This is impossible to answer precisely, because we’d need to know the end result, and if we knew that, we wouldn’t need to do the research. Fortunately we have a way to deal with uncertainty in this domain: the expected value of information. Peter Hurford has a great post on this, which is generally the method I followed. I just added the concept of remaining years left to figure out how research compared to doing.

Which brings me to the last concept: you should switch from researching to acting once the expected value of marginal research equals the expected value of acting. The expected value of research will go up for a while, then start going down as you’ve thought of most of the relevant considerations. It will also go down over your life as you have less and less time to capitalize on this knowledge, which will eventually nudge you into action.

To calculate the marginal value of one additional year spent researching, you can follow this formula:

[(Change in value of best option) x (Percentage of value added by an additional year of research) x (Working years left - Years spent researching)] - (Value achieved if you had researched one year less).

Simplified, this is the value of t+1 years of research minus the value of t years of research.

Calculate this for each year until the calculation gives a number less than 0, at which point switch to doing.

Of note, in this model I assume that each year of research adds a consistent percentage of the remaining value. This means you get closer and closer to 100%, but never reach it. So if I expect a value increase of 10 times if I researched forever, and to capture 50% of the remaining value with each additional year, I would expect to get 5x the value the first year, then 50% of the remaining 5, so 50% x 5 = 2.5 more the next year, and so on.

Here’s a worked example:

Expected change in value of best option = 5 times better than current option

Proportion of remaining potential value for each marginal year of research = 70%

Working years total = 40

[70% x 5 times better x (40 years - 1 year researching)] - [40 years at current value with no research] = 136.5 - 40 = 96.5. This is positive, so try the next year.

[((5-3.5) x 70% + 3.5) x (40 years - 2 years researching)] - [70% x 5 times better x (40 years - 1 year researching)] = 172.9-136.5= 36.4. This is positive, so try again.

[((5-4.55) x 70% +4.55) x (40 years - 3 years researching)] - [((5-3.5) x 70% + 3.5) x (40 years - 2 years researching)] = 180 - 172.9 = 7.1. This is positive but close to 0, so we’re getting close.

[((5-4.865) x 70% +4.865) x (40 years - 4 years researching)] - [((5-4.55) x 70% +4.55) x (40 years - 3 years researching)] = 178.5 - 180 = -1.5. This is negative, but just barely, so it indicates that you should spend a little under 4 years researching before moving on to acting.
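The worked example above can be reproduced with a short Python script, using the same assumptions (a 5x maximum improvement, 70% of the remaining improvement captured per year, 40 working years):

```python
def option_value(t, max_mult=5.0, capture=0.7):
    """Value multiplier of the best option after t years of research: each
    year captures 70% of the remaining distance to the 5x maximum."""
    if t == 0:
        # Matching the example's simplification: acting immediately is
        # valued at the current option (1x), not on the same curve.
        return 1.0
    return max_mult * (1 - (1 - capture) ** t)

def total_value(t, working_years=40, **kw):
    # value per year of action times the years left for action
    return option_value(t, **kw) * (working_years - t)

t = 1
while total_value(t) > total_value(t - 1):
    t += 1
# Marginal values come out 96.5, 36.4, ~7.1, then ~-1.5, matching the
# text: research pays through year 3 and turns (barely) negative in year 4.
years_to_research = t - 1
```

With different parameter guesses you can re-run the same loop, which is essentially how the optimistic/pessimistic scenarios below were produced.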

Of course there are many limitations to this calculation. The three main ones are:

  • Immense uncertainty. Each of the parameters is a best guess, with “guess” being the key term. How can you tell whether you’re at 70% of the value of research or at 10%? Likewise, how many working years do you have left? These are all highly uncertain, so the results cannot be taken literally, but they’re better than no estimate whatsoever.

  • Time consuming. This might be a consequence of my method, and I’m sure there are more elegant ways to calculate this. If you know one, please do send me an email or leave a comment with a better solution.

  • Simplifying assumptions. The working-years-left figure included delegability, but it didn’t take into account the nuances. It might complicate the formula quite a lot to account for the fact that some delegation can be done while researching, whereas other kinds require taking time off research to get rolling.

We ran these calculations with a variety of optimistic, pessimistic, and best-guess scenarios, and all of the results came out in the 2 to 8 person-year range. The next question is what to do with these numbers. Two to eight years is a wide range, and the numbers are uncertain and thus subject to wide fluctuations based on new information. Our conclusion has been to follow this general process:

  • List all possibilities. Make a list of all crucial considerations and possible causes to investigate.

  • Divide into chunks. Divide the average time from the calculations above into equal chunks for the crucial considerations and causes, based on previous such research. We came to about 1 month on each consideration/cause.

  • Divide chunks in two. Divide those chunks in two, in this case into 2-week halves. Do an initial run-through of all the crucial considerations, taking 2 weeks on each one.

  • Allocate remaining based on need. Based on the progress and new information from this initial pass, budget the remaining 2 weeks per consideration to the most promising or most in-need considerations.

  • Re-do calculations. At the end of this, re-assess and re-do the calculations based on all of the new information. Spend more or less time on crucial considerations based on the results.

  • Repeat. Do the same for cause comparison.

The advantages of this method compared to the others we considered are that it saves time, the deadlines make you work faster, and you get the benefit of seeing things with fresh eyes. It also makes sense because the calculations are only rough approximations, and so do not give enough precision to make day-to-day decisions in any case.

 

Spaced Research Throughout Rest of Life

This is half the puzzle. You cannot simply research once and then call it a day. The world changes and there will be new considerations. Thus part of the solution is to do spaced-out research phases throughout the rest of your life. So how should they be spaced? We’ve decided to postpone that decision until after the initial phase of research, but here are some contenders we thought of:

  • Sabbatical model, where once every few years you take a year or a few months off to incorporate new considerations.

  • Vacation model, where you take multiple one week long “vacations” per year to do research.

  • Project based, where you start a charity / project, stay with it until you can step back without hurting it, then do another round of research between projects.

  • Progressive investment in delegation, where you start with low-effort attempts to inspire others, say a single blog post on the topic, and intensify based on the interest you get. If somebody starts a project based on the blog post, go back to research. If nobody does, try advocating more actively. If that still doesn't work, keep investing more until you reach the point of starting the project yourself.

  • Needs basis, where you take research time off only when a concrete need for it arises.

  • Completely passive, where you simply do the research in your spare time after the initial up front investment.

  • Split your time, where you always devote a certain percentage of your working time to research.

  • Spaced repetition, where research phases start close together, then spread further and further apart as time goes on and you grow more confident in your beliefs.

  • “R&D” division, where your organization always has at least one person thinking full-time about what to do next, while others execute on the current best plan.
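Of these contenders, spaced repetition is concrete enough to sketch. Assuming, purely for illustration, a one-year first gap that grows 1.5x after each research phase over a 40-year working horizon:

```python
# Sketch of the spaced-repetition contender: research phases start
# close together, then each gap grows by a fixed multiplier as
# confidence in your conclusions increases. The one-year starting
# gap, 1.5x multiplier, and 40-year horizon are illustrative
# assumptions, not recommendations.

def research_schedule(first_gap=1.0, multiplier=1.5, horizon=40.0):
    """Return the years (counted from the end of the initial research
    phase) at which to run another research phase within the horizon."""
    times, t, gap = [], 0.0, first_gap
    while t + gap <= horizon:
        t += gap
        times.append(t)
        gap *= multiplier
    return times

schedule = research_schedule()
```

With these made-up parameters you would run about seven further research phases over a career, concentrated early on, when beliefs are changing fastest.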

Our current best guess is a combination of the progressive-investment and project-based models. This seems to capture the benefits of delegability without relying on it entirely.

 

Remaining Questions

These considerations are currently incomplete. Some of the weaknesses we plan to investigate further as separate crucial considerations; others we may come back to when checking whether we actually stopped at the optimal time. These gaps include:

  • Knightian uncertainty. What should we do when there is enormous uncertainty around an issue? Is it better to give your best guess at a number, or is it better to follow a cluster approach? Maybe both or neither?

  • Complexity of the equation. The current equation is highly simplified; for instance, it does not take into account different sorts of delegation. Is it worth building a more complicated formula to see how this affects the results?

  • Value percentage achieved per year. One key input into the formula is what percentage of the value of research you achieve per year. Currently we set this based on a soft subjective sense. Is there any way we could improve these estimates?

  • Learning by doing. The model we used does not take into account the benefits of learning by doing: acting on a cause itself generates information that research alone would not.

 

Ways to help

So there you have it: my current best thoughts on how long to spend researching versus doing. I would love your help. You can help by:

  • Pointing out ways I could improve my reasoning, especially ways that would change the conclusion. Something that would only shift my confidence by a percentage point or two is less useful.

  • Thinking of alternative approaches that beat my current one. This is generally a good way to help, because one's choice can only be as good as the best option one has thought of.

  • Pointing me toward optimal stopping scenarios in the academic literature that best apply to EA. I wasn't able to find any that corresponded well to the EA situation, but that may have been a failure to think of the right keywords on my part.

 

Footnotes

[1] While this isn’t technically about how long you can exploit your best choice, I thought it was relevant, so I included it anyway.

[2] For obvious reasons, I cannot publicly elaborate on what these numbers are based on.

Comments

Thank you for the post, it is certainly very interesting to read. I learned a lot.

Another vantage point for identifying where and when to direct one's research and energy:

Newly opened avenues: moments when a new technology, or a radical change in policy or action, has just occurred. Those moments of change, especially when something crosses the 'threshold of viability', are key times to focus your research and re-evaluation. The actions available at that early stage of viability also tend to have an outsized influence overall. Those shifts in viability briefly open a range of actions to us before behaviors settle back into a new equilibrium.

Very interesting! It's great that you did a sensitivity analysis, though it is a little surprising the range was so small. Did you run a scenario where you become convinced of the value of far-future computer consciousnesses, so that the effectiveness might be ~10^40 times as much?

Thanks. Good question. While researching this I did include a probability that I would be convinced of far future causes, but given the monster length of my post as is, decided not to include it. :P

My 95% confidence range for the increased value of the best option actually runs from 80% as good as my current one (i.e., making a poorer decision than the one I’d previously come to) to one million times better, because I put a greater than 2.5% chance on being convinced of a far future cause. However, the bulk of my probability lies closer to the smaller amounts, averaging out to the ~500x range.

However, I think this will change dramatically over the years. I am trying to prioritize considerations that rule out large swathes of the decision space. Near the beginning of the research, I expect to be able to make some calls that rule out the higher values, narrowing my confidence interval so it no longer includes the extremely high numbers. That would lower the value of a marginal year of research quite a lot. This is hard to include in the calculations, though, and it very well might not happen; the interval may even become wider or gain a higher upper bound.