Comment author: WillPearson 17 September 2017 10:35:49AM 0 points [-]

I'm having trouble seeing how an individual benevolent person would hold ownership of a company (unless they were the only benevolent person). It would require the owner to think that they knew best how to distribute the fruits of the company.

This seems unlikely: since they are trying to help lots of people, they need lots of data about what people need to meet their interests. This data collection seems like it would best be done in a collective manner (getting more data from more people about what is needed should give a more accurate view of those needs, and the data should be shared for efficiency).

So why wouldn't the benevolent individual give their share of the company to whatever collective system determined the needs of the world? They could still be CEO, so that they could manage the company well (as they have good data about that). It seems like the capitalist system would morph into either socialism or charity-sector-owned means of production, if everyone were benevolent.

I do however agree that socialism is not inherently selfless (nor is the system where charities own the means of production).

There are lots of other potential systems apart from the charity-ism I described above. I'm interested in what happens when sufficiently advanced micro-manufacturing enables everyone to own the means of production (this is also not inherently selfless or selfish). You could also look at systems where people only rent the means of production from the state.

Comment author: Peter_Hurford  (EA Profile) 09 September 2017 09:35:14PM *  1 point [-]

What does "systemic change" actually refer to? I don't think I ever understood the term.

Comment author: WillPearson 12 September 2017 08:24:06AM 1 point [-]

My personal idea of it is a broad church. The systems that govern our lives, such as government and the economy, distribute resources in a certain way. These can have a huge impact on the world. They are neglected because changing them involves fighting an uphill struggle against vested interests.

Someone in a monarchy campaigning for democracy would be an example of someone who is aiming for systemic change. Someone who has an idea to strengthen the UN so that it could help co-ordinate regulation/taxes better between countries (so that companies don't just move to low tax, low worker protection, low environmental regulation areas) is aiming for systemic change.

Comment author: WillPearson 09 September 2017 07:20:34PM 1 point [-]

I'm not sure there are many EAs interested in it, because of its potentially low tractability. But I am interested in "systemic change" as a cause area.

Comment author: RobBensinger 06 September 2017 08:12:51AM 1 point [-]

"Existential risk" has the advantage over "long-term future" and "far future" that it sounds like a technical term, so people are more likely to Google it if they haven't encountered it (though admittedly this won't fully address people who think they know what it means without actually knowing). In contrast, someone might just assume they know what "long-term future" and "far future" means, and if they do Google those terms they'll have a harder time getting a relevant or consistent definition. Plus "long-term future" still has the problem that it suggests existential risk can't be a near-term issue, even though some people working on existential risk are focusing on nearer-term scenarios than, e.g., some people working on factory farming abolition.

I think "global catastrophic risk" or "technological risk" would work fine for this purpose, though, and avoids the main concerns raised for both categories. ("Technological risk" also strikes me as a more informative / relevant / joint-carving category than the others considered, since x-risk and far future can overlap more with environmentalism, animal welfare, etc.)

Comment author: WillPearson 09 September 2017 06:59:32PM *  0 points [-]

Just a heads up: "technological risks" ignores all the non-anthropogenic catastrophic risks. "Global catastrophic risks" seems good.

Comment author: WillPearson 01 September 2017 09:59:52PM 1 point [-]

I think this is part of the backdrop to my investigation into the normal computer control problem. People don't have control over their own computers. The bad actors that do get control could be criminals or a malicious state (or AIs).

Comment author: purplepeople 01 September 2017 08:24:30PM 5 points [-]

The last chapter of Global Catastrophic Risks (Bostrom and Ćirković) covers global totalitarianism. Among other things, they mention how improved lie-detection technology, anti-aging research (to mitigate risks of regime change), and drugs to increase docility in the population could plausibly make a totalitarian system permanent and stable. Obviously an unfriendly AGI could easily do this as well.

Comment author: WillPearson 01 September 2017 09:47:02PM *  0 points [-]

The increasing docility could be a stealth existential-risk increaser, in that people would be less willing to challenge other people's ideas, and so slow, or stop entirely, the technological progress we need to save ourselves from supervolcanoes and other environmental threats.

Comment author: lukeprog 30 August 2017 10:37:43PM 2 points [-]

Without taking the time to reply to the post as a whole, a few things to be aware of…

Efforts to Improve the Accuracy of Our Judgments and Forecasts

Tetlock forecasting grants 1 and 2

What Do We Know about AI Timelines?

Some AI forecasting grants: 1, 2, 3.

Comment author: WillPearson 31 August 2017 08:12:40AM 0 points [-]

Thanks for the links. It would have been nice to have got them when I emailed OPP a few days ago with a draft of this article.

I look forward to seeing the fruits of "Making Conversations Smarter, Faster".

I'm going to dig into the AI timeline stuff, but from what I have seen of similar things, there is an inferential step missing. The question asked is "Will HLMI (of any technology) happen with probability X by year Y?", and the action taken is then "we should invest most of the money in a community of machine learning people and people working on AI safety for machine learning". I think it's worth also asking the question "Do you expect HLMI to come from X technology?" if you want to invest a lot in that class of technology.

Rodney Brooks has an interesting blog about the future of robotics and AI. It's worth keeping an eye on as a dissenting view; he might be an example of someone who says we will have intelligent agents by 2050 but doesn't think they will come from current ML.

Comment author: itaibn 30 August 2017 12:01:38PM 2 points [-]

This post is a bait-and-switch: it starts off with a discussion of the Good Judgement Project and what lessons it teaches us about forecasting superintelligence. However, starting with the section "What lessons should we learn?", you switch from a general discussion of these techniques towards making a narrow point about which areas of expertise forecasters should rely on, an opinion which I suspect you arrived at through means not strongly motivated by the Good Judgement Project.

While I also suspect the Good Judgement Project could have valuable lessons on superintelligence forecasting, I think that taking verbal descriptions of how superforecasters make good predictions and citing them in arguments about loosely related specific policies is a poor way to do that. As a comparison, I don't think that giving a forecaster this list of suggestions and asking them to make predictions with those suggestions in mind would lead to performance similar to that of a superforecaster. In my opinion, the best way to draw lessons from the Good Judgement Project is to directly rely on existing forecasting teams, or new forecasting teams trained and tested in the same manner, to give us their predictions on potential superintelligence, and to give the appropriate weight to their expertise.

Moreover, among the list of suggestions in the section "What they found to work", you almost entirely focus on the second one, "Looking at a problem from multiple different view points and synthesising them?", to make your argument. You can also be said to be relying on the last suggestion to the extent that it says essentially the same thing, that we should rely on multiple points of view. The only exception is that you rely on the fifth suggestion, "Striving to distinguish as many degrees of doubt as possible - be as precise in your estimates as you can", when you argue that their strategy documents should have more explicit probability estimates. In response to that, keep in mind that these forecasters are specifically tested on giving well-calibrated probabilistic predictions. Therefore I expect that this overstates the importance of precise probability estimates in other contexts. My hunch is that giving numerically precise subjective probability estimates is useful in discussions among people already trained to have a good subjective impression of what these probabilities mean, but that among people without such training the effect of using precise probabilities is neutral or harmful. However, I have no evidence for this hunch.

I disapprove of this bait-and-switch. I think it deceptively builds a case for diversity in intelligence forecasting, and adds confusion to both the topics it discusses.

Comment author: WillPearson 30 August 2017 07:27:00PM *  0 points [-]

Sorry if you felt I was being deceptive. The list of areas of expertise I mentioned in the 80K Hours section was relatively broad and not meant to be exhaustive; I could add physics and economics off the top of my head, and I'm sure there are many more. I was considering each AGI team as having to do small amounts of forecasting about the likely success and usefulness of their projects. I think building in the superforecasting mindset at all levels of endeavour could be valuable, without having to rely on explicit superforecasters for every decision.

In my opinion, the best way to draw lessons from the Good Judgement Project is to directly rely on existing forecasting teams, or new forecasting teams trained and tested in the same manner, to give us their predictions on potential superintelligence, and to give the appropriate weight to their expertise.

It would be great to have a full team of forecasters working on intelligence in general (so they would have something to correlate their answers on superintelligence with). I was being moderate in my demands about how much the Open Philanthropy Project should change how they make forecasts about what is good to do. I just wanted it to be directionally correct.

As a comparison, I don't think that giving a forecaster this list of suggestions and asking them to make predictions with those suggestions in mind would lead to performance similar to that of a superforecaster

There was a simple thing people could do to improve their predictions.

From the book:

One result that particularly surprised me was the effect of a tutorial covering some basic concepts that we'll explore in this book and are summarized in the Ten Commandments appendix. It took only sixty minutes to read and yet it improved accuracy by roughly 10% through the entire tournament year.

The Ten Commandments appendix is where I got the list of things to do. I figure that if I managed to get the Open Philanthropy Project to try to follow them, things would improve. But I agree that their getting good forecasters somehow would be a lot better.

Does that clear up where I was coming from?


Looking at how Superforecasting might improve some EA projects response to Superintelligence

*Cross-posted from my blog.* Even if we don't take off very quickly, we still have to deal with potential Superintelligence. There are lots of important uncertainties around intelligence that impact our response; these are called by the community crucial considerations. The interlocking (non-exhaustive) list of crucial considerations... Read More
In response to Open Thread #38
Comment author: WillPearson 22 August 2017 01:04:31PM 3 points [-]

One last one.

I'm writing more on my blog about my approach to intelligence augmentation.

I'll be coding and thinking about how to judge its impact this week (a lot of it depends on things like hard vs soft takeoff, possibilities of singletons, and other crucial considerations). I'm also up for spending a few hours helping people with IA or autonomy-based EA work, if anyone needs it.

Comment author: WillPearson 26 August 2017 11:47:04AM 0 points [-]

I've written up my outline of the ITN argument for improving autonomy.

I'd like feedback please!
