[link] 'Crucial Considerations and Wise Philanthropy', by Nick Bostrom

On July 9th, 2014, Nick Bostrom gave a talk on Crucial Considerations and Wise Philanthropy at Good Done Right, a conference on effective altruism held at All Souls College, Oxford. I found the talk so valuable that I decided to transcribe it. You can find it here.

Comments (2)

Comment author: RomeoStevens 20 March 2017 07:53:10PM * 3 points

When I zoom out on what sort of thing is happening when an agent engages in deliberative ladders, it seems like they are struggling, as an agent optimized for additive search spaces, to deal with a multiplicative search space. Expanding on this: when I look at human cognition, the structure and limitations of working and associative memory, our innate risk tolerance and hyperbolic discounting, our pursuit of comparative advantage, as well as reasonable conjectures about the payoff distribution in the ancestral environment, I see an additive search space. That is to say, if you have a bunch of slot machines, each with a different payout, we find the best one (or more accurately, the minimal set that will satisfy our needs along the various dimensions of payout) and keep pulling. In contrast, we now find ourselves in a potentially multiplicative search space, i.e. the payout of any given slot machine can (via sign considerations) potentially affect the payout of all others.
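
A minimal sketch of the contrast, with invented payoff numbers: in the additive regime a bad arm costs you only its own payoff, while in the multiplicative regime a single sign flip propagates to everything.

```python
# Hypothetical per-pull payoffs for five "slot machines" (interventions).
# The numbers are made up for illustration; note arm 4's negative sign.
payoffs = [1.2, 0.8, 1.5, -0.5, 1.1]

# Additive regime: each pull contributes independently, so total value
# is just the sum. One bad arm costs you only its own payoff.
additive_total = sum(payoffs)

# Multiplicative regime: each arm scales the value of everything else,
# so total value is the product. A single sign flip inverts the whole
# outcome -- the "crucial consideration" failure mode.
multiplicative_total = 1.0
for p in payoffs:
    multiplicative_total *= p

print(f"additive total:       {additive_total:+.2f}")        # +4.10
print(f"multiplicative total: {multiplicative_total:+.2f}")  # -0.79
```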

This drastically changes the calculus of the exploration-exploitation tradeoff. We're not even sure the problem is tractable, because we don't know the size of the search space. But one thing it definitely prescribes is dramatically more investment in exploration over exploitation. The rate at which new crucial considerations are discovered from such efforts might give us some data: if your trajectory asymptotes, you have gained some knowledge about the search space, whereas if your trajectory remains spiky with large course corrections, you should suspect there is still a lot of value in further exploration.
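
As a toy illustration of that trajectory test, here is a crude heuristic; the windowing and threshold are arbitrary choices, and the correction magnitudes are invented:

```python
def exploration_signal(corrections, window=5):
    """Compare the average size of recent course corrections to earlier
    ones. Shrinking corrections suggest the discovery curve is
    asymptoting; persistently large ones suggest further exploration is
    still paying off. (Sketch only: assumes len(corrections) > window.)"""
    early = corrections[:-window]
    recent = corrections[-window:]
    ratio = (sum(recent) / len(recent)) / (sum(early) / len(early))
    return "asymptoting" if ratio < 0.5 else "still spiky"

# Invented magnitudes of yearly "course corrections" to a cause ranking.
smooth = [9, 7, 5, 3, 2, 1.5, 1, 0.8, 0.5, 0.4]
spiky = [9, 2, 8, 1, 7, 6, 9, 2, 8, 7]

print(exploration_signal(smooth))  # asymptoting
print(exploration_signal(spiky))   # still spiky
```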

What is the outside view of crucial consideration discovery? What sort of activities are people engaged in when they discover new candidate crucial considerations?

Another lens for looking at this is to say that qualitative model updates (where new distinctions are made) are drastically more important than quantitative model updates (where you change the weight or value of some existing distinction within the model). This implies that if you find yourself investing in quantitative model disputes, there is probably more value elsewhere.
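
A toy example of the difference, using an invented three-factor cost-effectiveness model: reweighting an existing factor shifts the estimate, while adding a previously unmodeled distinction can flip its sign.

```python
# Toy model: value = scale * tractability * neglectedness. All numbers
# are invented for illustration.
factors = {"scale": 100.0, "tractability": 0.3, "neglectedness": 0.5}

def value(f):
    v = 1.0
    for x in f.values():
        v *= x
    return v

baseline = value(factors)  # 15.0

# Quantitative update: revise an existing weight by 30%.
quant = dict(factors, tractability=0.39)
print(value(quant) / baseline)  # 1.3 -- same conclusion, new magnitude

# Qualitative update: a previously unmodeled distinction enters the
# model (say, a sign-flipping side effect). The conclusion can reverse.
qual = dict(factors, side_effect=-0.8)
print(value(qual) / baseline)   # -0.8 -- the sign flips
```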

I believe it is possible to push on this by encouraging model diffing, as in the recent thread on Alice and Bob's discussion, but with an added focus on comparing the distinctions the two people are making over comparing the values/weights of those distinctions. Eventually gathering together a more explicit picture of all the distinctions the different people are making could then be an input into structured analytic techniques useful for finding holes, such as taxonomization. Harvesting distinctions from the existing AI literature is one potentially useful input to this.
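
As a sketch of what such a model diff might look like mechanically (the distinction names and weights here are made up):

```python
# Each model maps the distinctions a person makes to the weight they
# assign. The interesting output is the set difference, not the weights.
alice = {"ai_timelines": 0.9, "value_lock_in": 0.6, "moral_uncertainty": 0.4}
bob = {"ai_timelines": 0.3, "value_lock_in": 0.6, "wild_animal_welfare": 0.7}

def model_diff(a, b):
    only_a = sorted(a.keys() - b.keys())
    only_b = sorted(b.keys() - a.keys())
    weight_disputes = {k: (a[k], b[k]) for k in a.keys() & b.keys()
                       if a[k] != b[k]}
    return only_a, only_b, weight_disputes

only_alice, only_bob, disputes = model_diff(alice, bob)
# Per the above, the first two outputs (missing distinctions) deserve
# more attention than the third (weight disagreements).
print("Alice-only distinctions:", only_alice)  # ['moral_uncertainty']
print("Bob-only distinctions:", only_bob)      # ['wild_animal_welfare']
print("Weight disputes:", disputes)            # {'ai_timelines': (0.9, 0.3)}
```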

I am looking for people who have had similar thoughts to discuss this with, as well as further discussion of search strategies.

Comment author: John_Maxwell_IV 23 March 2017 02:01:35AM * 0 points

Previously I had wondered whether effective altruism was a "prediction-complete" problem--that is, whether learning to predict things accurately should be considered a prerequisite for EA activity (if you're willing to grant that the far future is of tremendous importance). But the other day it occurred to me that it might be sufficient to simply be well calibrated. If you really are well calibrated--if, when you say something is 90% probable, it actually happens 90% of the time--then you don't need to know how to predict everything. It should be sufficient to look for areas where you currently assign a 90% probability to a given activity being a good thing, and focus your EA activities there.
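
A minimal sketch of the calibration check this relies on, run against an invented track record of (stated probability, outcome) pairs:

```python
from collections import defaultdict

def calibration_table(predictions):
    """Bucket (stated_probability, outcome) pairs and compare stated
    probability to observed frequency. Being well calibrated means the
    two roughly agree in every bucket."""
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[round(p, 1)].append(outcome)
    for p in sorted(buckets):
        hits = buckets[p]
        print(f"stated {p:.0%}: happened {sum(hits) / len(hits):.0%} "
              f"of {len(hits)} predictions")

# Invented track record for illustration.
calibration_table([(0.9, True), (0.9, True), (0.9, False),
                   (0.9, True), (0.9, True), (0.6, True),
                   (0.6, False), (0.6, True), (0.6, False)])
# stated 60%: happened 50% of 4 predictions
# stated 90%: happened 80% of 5 predictions
```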

(There's a flaw in this argument if calibration is domain-specific.)