
*Cross-posted from my blog*

Even if takeoff is not very fast, we still have to deal with potential superintelligence.

There are many important uncertainties around intelligence that affect our response; the community calls these crucial considerations. The interlocking (non-exhaustive) set of crucial considerations I tend to think about includes things like:

  • Will man-made intelligence be sufficiently like current machine learning systems that we can expect safety solutions for ML to be at all transferable to it?
  • Will man-made intelligence be neat or messy (and how does that impact the takeoff speed)?
  • Will a developing intelligence (whether a pure AI or a human/machine hybrid) be able to get a decisive strategic advantage? Can it do so without war?
  • Can we make a good world with intelligence augmentation or is co-ordination too hard?
  • Should we expect a singleton to be stable? 

These cannot be known ahead of time, at least not until we have developed such intelligence or got pretty far down that road. We can estimate the probabilities with our current knowledge, but new information is coming in all the time.

Answering all these questions is an exercise in forecasting, making educated guesses. The better we can reduce the uncertainty around these questions, the better we can allocate resources to making sure the future of intelligence is beneficial to all. 

It is worthwhile to look at how forecasting is best done (with the caveat that even the best forecasting isn't very good at looking more than five years out). The state of the art, as many people will be familiar with, is Superforecasting, the methodology developed by the Good Judgement Project. It is worth briefly reviewing what the Good Judgement Project was, so we can get an idea of why it might be worthwhile adopting its lessons and why they might not apply.

Good Judgement Project

What they did 

From Wikipedia:

The study employed several thousand people as volunteer forecasters. Using personality-trait tests, training methods and strategies the researchers at GJP were able to select forecasting participants with less cognitive bias than the average person; as the forecasting contest continued the researchers were able to further down select these individuals in groups of so-called superforecasters. The last season of the GJP enlisted a total of 260 superforecasters.

What they found to work

What they found to work was groups of people with diverse viewpoints on the world but a certain shared way of thinking. That way of thinking (taken from the end of the book) was:

  • Looking at both the statistical probabilities of events of that kind and the specifics of the event in question to create an updated estimate.
  • Looking at a problem from multiple different viewpoints and synthesising them
  • Updating well after getting new information
  • Breaking the problem down into sub-problems (Fermi-style) that can be estimated numerically and combined (see the sketch after this list)
  • Striving to distinguish as many degrees of doubt as possible - be as precise in your estimates as you can
  • Learning to forecast better by doing it
  • Working well in teams - understanding others' viewpoints
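
As a rough illustration of the first and fourth items, here is a minimal sketch in Python. Every number in it is a placeholder I have invented for the example; none of them come from the Good Judgement Project or from anyone discussed in this post.

```python
# Illustrative only: every probability below is an invented placeholder,
# not an estimate from the Good Judgement Project or anyone else.

def update_with_evidence(base_rate: float, likelihood_ratio: float) -> float:
    """Start from the outside-view base rate, then adjust with
    case-specific evidence expressed as a likelihood ratio."""
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

def fermi_combine(sub_estimates):
    """Fermi-style decomposition: multiply the sub-probabilities,
    which assumes the sub-questions are independent."""
    result = 1.0
    for p in sub_estimates:
        result *= p
    return result

# Base rate of 10%, with case specifics twice as likely if the event happens.
print(update_with_evidence(0.10, 2.0))   # ~0.18

# Break a question into three sub-questions and combine the estimates.
print(fermi_combine([0.5, 0.6, 0.4]))    # ~0.12
```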

How relevant is it?

It may not be relevant at all. The project focused on questions that were explicitly just about tractable: at a sweet spot between too hard and too easy. We are dealing with questions that are too hard and too far out. But we don't really have anything better. Even prediction markets cannot help us, as we can't resolve these questions beforehand and judge bettors' accuracy on them.

What lessons should we learn?

So how well is the community doing at following the lessons of Superforecasting? I shall focus on the Open Philanthropy Project and 80,000 Hours, because they seem indicative of the Effective Altruism community's response in general.

In part, I can't tell: I am led to believe that discussion has moved offline and into private forums. If outsiders can't see the thought processes behind the decisions, they cannot suggest improvements or alternative views to synthesize. What can we tell from their actions and public statements? I shall use three sources.

So what can we see from these documents?

  • A focus on the concrete problems and a lack of investment in the novel
  • A focus on AI rather than IA
  • A narrow range of relevant subjects suggested for study, especially on the technical side
  • A lack of a breakdown of the open questions

Let us take each in turn.

Focus on concrete problems

It seems that work is heavily weighted towards concrete technical problems now. $38 million of the $43 million donated has gone to teams looking at current AI techniques. The rest has gone to policy (FHI) or reasoning theory (MIRI). There is no acknowledgement anywhere that current AI techniques might not lead directly to future AI. This quote from the strategy document also goes against them integrating multiple viewpoints:

We believe that AI and machine learning researchers are the people best positioned to make many assessments that will be important to us, such as which technical problems seem tractable and high-potential and which researchers have impressive accomplishments.

This strategy is not getting a diverse set of viewpoints to help forecast what is likely to be useful. So this is quite worrying for how good we should expect their predictions about their subjects to be (considering that, in the Good Judgement Project, experts did worse at predicting events in their own fields than events outside them). It might be that the Open Philanthropy Project thinks that only AI projects based on current ML are currently technically tractable. But if that were the case, you would expect more weight on AI policy/research, as those people might be able to gather more perspectives on what intelligence is (and how it might be made); I would also expect more calls for novel technical approaches.

Counterpoint

Reading the grant for OpenAI:

In fact, much of our other work in this cause aims primarily to help build a general field and culture that can react to a wide variety of potential future situations, and prioritizes this goal above supporting any particular line of research.

So the grant to OpenAI seems to be mainly about culture building. The question, then, is: if current ML work does not lead to AI directly, e.g. if something like my resource-allocation paradigm, or something not yet thought of, is needed as the base layer for AGI, will OpenAI and the team there adopt it and lead the field? Does OpenAI provide the openness to multiple viewpoints and exploration of other possibilities that is missing from the grants themselves? From what I can tell as an outsider, it is only interested in deep learning. I would be pleased to learn otherwise, though!

A focus on AI with no IA

While Superintelligence is bearish on intelligence augmentation as a solution to the AI problem, that doesn't mean it should not be considered at all. We may gain more knowledge that makes it more promising. But if you only search in one field, you will never get knowledge about other fields with which to update your probabilities. One of the superforecasters explicitly chose to get his news from randomised sources in order not to be too biased. Funding lots of AI people and then asking them how their projects are going is the equivalent of getting information from one news source. There are IA projects like Neuralink and Kernel, so it is not as if there are no IA projects. But I don't think either of them is currently looking at the software required to make a useful brain prosthesis (or at whether we can make one that is not physically connected), so this seems like a neglected question.

A narrow range of relevant subjects suggested for study

Currently 80,000 Hours does not suggest learning any psychology or neuroscience for technical subjects. You can quite happily do machine learning without them, but if there is a chance that current machine learning is not sufficient, then the technical AI safety community needs something outside of machine learning to steer itself with. Kaj Sotala has suggested that psychology is neglected, so there has been some discussion. But the reply was that ML is more useful for technical AI safety work right now, which is true. This might be a tragedy-of-the-commons problem in career choice, which I think was highlighted in William MacAskill's speech (which I've not watched yet). There is a need for a diverse set of people working on AI safety in the long term, but the best way of getting ahead is to optimise for the short term, which could lead to everyone knowing the same things, and to groupthink.

To reduce this potential problem, I think we should suggest that technical people take a minor interest as well (be it cognitive science, psychology, sociology, neuroscience, philosophy or computer hardware) and try to integrate their knowledge of that with their normal ML or IA work. We can encourage this by trying to make sure teams have people with diverse backgrounds and sub-interests, so that they can make better predictions about where their work will go.

A lack of breakdown of the open questions and a current lack of updating

The only mention of probability in the strategy document is:

I believe there’s a nontrivial probability that transformative AI will be developed within the next 20 years, with enormous global consequences.

And it is not quantified or broken down. We don't have a Fermi-style breakdown (as recommended by Superforecasting) of which questions he thinks we need answered to increase or decrease our confidence. He mentions at least two other points where more details might be forthcoming.
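
To make the complaint concrete, a Fermi-style breakdown of a claim like "transformative AI within 20 years" might look something like the sketch below. The sub-questions and numbers are entirely invented by me to show the shape of the exercise; they are not Open Philanthropy's estimates, and multiplying them assumes the sub-questions are independent, which is itself a judgement call.

```python
# Purely illustrative: these sub-questions and numbers are placeholders
# I invented, not anyone's actual estimates.
sub_estimates = {
    "current techniques (or near successors) scale to transformative capability": 0.4,
    "enough compute and funding is available within 20 years": 0.7,
    "no major external disruption halts progress": 0.8,
}

combined = 1.0
for question, p in sub_estimates.items():
    print(f"{question}: {p}")
    combined *= p  # multiplying assumes the sub-questions are independent

print(f"Combined estimate: {combined:.2f}")  # ~0.22 with these numbers
```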

I’ve long worried that it’s simply too difficult to make meaningful statements (even probabilistic ones) about the future course of technology and its implications. However, I’ve gradually changed my view on this topic, partly due to reading I’ve done on personal time. It will be challenging to assemble and present the key data points, but I hope to do so at some point this year.

Looking through the blogs, I can't see the breakdown; I also cannot see the update mentioned below:

In about a year, we’ll formally review our progress and reconsider how senior staff time is allocated.

Both of these promised items would be very interesting to see, and might let outsiders see more of the internal workings and offer better criticisms.

Suggestions

In order to have more accurate forecasts, I would like to see the Open Philanthropy Project consult not just AI and ML viewpoints when trying to forecast which work and which organisations will be important.

They should look at allocating resources, finding advisers and perhaps building networks within the psychology and neuroscience communities, and possibly the nascent IA community, to get their viewpoints on what intelligence is. This will enable them to update on what they think is the more probable approach to solving the superintelligence problem.

80,000 Hours should encourage people studying technical AI subjects to have a broad background in other intelligence-related fields as well as ML, so that they have more information on how the field should shift, if it is going to.

I'd also encourage the Open Philanthropy Project to publish the data points and updates that they said they would.

Whatever we end up forecasting about what we should do about the future of intelligence, I want to try and navigate a way through that improves humans' freedom and autonomy, whilst not neglecting safety and existential risk reduction.

Comments

Without taking the time to reply to the post as a whole, a few things to be aware of…

Efforts to Improve the Accuracy of Our Judgments and Forecasts

Tetlock forecasting grants 1 and 2

What Do We Know about AI Timelines?

Some AI forecasting grants: 1, 2, 3.

Thanks for the links. It would have been nice to have got them when I emailed OPP a few days ago with a draft of this article.

I look forward to seeing the fruits of "Making Conversations Smarter, Faster"

I'm going to dig into the AI timeline stuff, but from what I have seen of similar things, there is an inferential step missing. The question is "Will HLMI (of any technology) happen with probability X by year Y?" and the action is then "we should invest most of the money in a community of machine learning people and people working on AI safety for machine learning". I think it is worth asking the question "Do you expect HLMI to come from X technology?" if you want to invest lots in that class of technology.

Rodney Brooks has an interesting blog about the future of robotics and AI. He is worth keeping an eye on as a dissenter, and might be an example of someone who has said we will have intelligent agents by 2050 but doesn't think they will come from current ML.

This post is a bait-and-switch: it starts off with a discussion of the Good Judgement Project and what lessons it teaches us about forecasting superintelligence. However, starting with the section "What lessons should we learn?", you switch from a general discussion of these techniques to making a narrow point about which areas of expertise forecasters should rely on, an opinion which I suspect the author arrived at through means not strongly motivated by the Good Judgement Project.

While I also suspect the Good Judgement Project could have valuable lessons on superintelligence forecasting, I think that taking verbal descriptions of how superforecasters make good predictions and citing them for arguments about loosely related specific policies is a poor way to do that. As a comparison, I don't think that giving a forecaster this list of suggestions and asking them to make predictions with those suggestions in mind would lead to performance similar to that of a superforecaster. In my opinion, the best way to draw lessons from the Good Judgement Project is to directly rely on existing forecasting teams, or new forecasting teams trained and tested in the same manner, to give us their predictions on potential superintelligence, and to give the appropriate weight to their expertise.

Moreover, among the list of suggestions in the section "What they found to work", you almost entirely focus on the second one, "Looking at a problem from multiple different viewpoints and synthesising them", to make your argument. You can also be said to be relying on the last suggestion to the extent that it says essentially the same thing: that we should rely on multiple points of view. The only exception is that you rely on the fifth suggestion, "Striving to distinguish as many degrees of doubt as possible - be as precise in your estimates as you can", when you argue their strategy documents should have more explicit probability estimates. In response to that, keep in mind that these forecasters are specifically tested on giving well-calibrated probabilistic predictions. Therefore I expect that this overestimates the importance of precise probability estimates in other contexts. My hunch is that giving numerically precise subjective probability estimates is useful in discussions among people already trained to have a good subjective impression of what these probabilities mean, but among people without such training the effect of using precise probabilities is neutral or harmful. However, I have no evidence for this hunch.

I disapprove of this bait-and-switch. I think it deceptively builds a case for diversity in intelligence forecasting, and adds confusion to both the topics it discusses.

Sorry if you felt I was being deceptive. The list of areas of expertise I mentioned in the 80,000 Hours section was relatively broad and not meant to be exhaustive; I could add physics and economics off the top of my head, and I'm sure there are many more. I was considering each AGI team as having to do small amounts of forecasting about the likely success and usefulness of their projects. I think building the superforecasting mindset into all levels of these endeavours could be valuable, without having to rely on explicit superforecasters for every decision.

In my opinion, the best way to draw lessons from the Good Judgement Project is to directly rely on existing forecasting teams, or new forecasting teams trained and tested in the same manner, to give us their predictions on potential superintelligence, and to give the appropriate weight to their expertise.

It would be great to have a full team of forecasters working on intelligence in general (so they would have something against which to correlate their answers on superintelligence). I was being moderate in my demands about how much the Open Philanthropy Project should change how they make forecasts about what is good to do. I just wanted them to be directionally correct.

As a comparison, I don't think that giving a forecaster this list of suggestions and asking them to make predictions with those suggestions in mind would lead to performance similar to that of a superforecaster

There was a simple thing people could do to improve their predictions.

From the book:

One result that particularly surprised me was the effect of a tutorial covering some basic concepts that we'll explore in this book and are summarized in the Ten Commandments appendix. It took only sixty minutes to read and yet it improved accuracy by roughly 10% through the entire tournament year.

The Ten Commandments appendix is where I got the list of things to do from. I figure that if I could get the Open Philanthropy Project to try and follow them, things would improve. But I agree that them getting good forecasters somehow would be a lot better.

Does that clear up where I was coming from?
