In response to comment by Sanjay on Open Thread #38
Comment author: Halstead 24 August 2017 11:31:23PM 2 points [-]

I've also contacted them and they didn't reply. It's a bit unclear how they arrived at the rankings they did - there's not much explanation given.

In response to comment by Halstead on Open Thread #38
Comment author: WillPearson 25 August 2017 12:35:53PM 1 point [-]

Thanks, good to know, but a bit dispiriting.

In response to Open Thread #38
Comment author: rhys_lindmark 24 August 2017 04:19:19PM 1 point [-]

I'm interested in quantifying the impact of blockchain and cryptocurrency from an ITN perspective. My instinct is that the technology could be powerful from a "root cause incentive" perspective, from a "breaking game theory" perspective, and from a "change how money works" perspective. I'll have a fuller post about this soon, but here are some of my initial thoughts on the subject:

  1. https://medium.com/@RhysLindmark/creating-a-humanist-blockchain-future-2-effective-altruism-blockchain-833a260724ee
  2. https://medium.com/@RhysLindmark/10-doing-good-together-coordinating-the-effective-altruist-community-with-blockchain-188c4b7aa4b0

I'd be especially interested in hearing from people who think blockchain/crypto should NOT be a focus of the EA community! (e.g. It's clearly not neglected!)

Comment author: WillPearson 25 August 2017 09:42:10AM *  1 point [-]

Impact seems solid, and it is relatively neglected (at least with regard to charities). I think tractability is where blockchains might fall down. It seems easy to do an ICO, but less easy to get people interested in being part of the incentive system.

To estimate the tractability, I would back up a level. What blockchain (at least as you are using the phrase) is about is alternate incentive mechanisms to fiat money. There have been lots of alternative currencies in the past. There has probably been economic work on these, which might give you an outside view on how likely it is that a currency will gain traction in general. You can also look for research on whether there have been any charity-based currencies, and how successful they have been.

You can then update that estimate with the improvements that crypto brings (smart contracts/distributed ledgers), but also with the risks: having to hard fork the currency due to smart contract errors, and having to make sure the people you want participating in the system have enough crypto/computer-security knowledge to keep their wallets safe (which may or may not mean using exchanges). The risks currently make me think it is intractable, but that might just mean we should look for tractable ways of mitigating them.
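To make the outside-view-then-update idea concrete, here's a minimal sketch in Python. Every number in it is a made-up placeholder rather than a researched figure; it only illustrates the mechanics of combining a historical base rate with crypto-specific adjustments and a crude ITN-style score.

```python
# A minimal sketch of the outside-view estimate described above.
# All numbers are illustrative placeholders, not researched figures.

# Outside view: assumed base rate at which alternative currencies
# have historically gained meaningful traction.
base_rate_traction = 0.05

# Inside-view adjustments for crypto (assumed multipliers):
smart_contract_boost = 1.5      # programmable incentives, distributed ledgers
security_burden_penalty = 0.5   # wallet/key management, hard-fork risk

p_traction = base_rate_traction * smart_contract_boost * security_burden_penalty

# Crude ITN-style product: importance * tractability * neglectedness,
# each on a subjective 0-10 scale.
importance = 7
neglectedness = 8
tractability = p_traction * 10  # rescale the probability to a 0-10 score

print(f"P(traction) ~= {p_traction:.2f}")
print(f"ITN product ~= {importance * tractability * neglectedness:.1f}")
```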

I'm interested in reading your analysis!

Comment author: poppingtonic 23 August 2017 12:12:49PM 1 point [-]

I think that the Good Judgment Project (founded by Philip Tetlock, the author of Superforecasting) is trying to build this with their experiments.

Comment author: WillPearson 23 August 2017 02:03:00PM 0 points [-]

I'd not thought to look at it; I assumed it was (and stayed) an IARPA thing and so focused on world affairs. Thanks!

It looks like it has become a for-profit endeavour now with an open component.

From the looks of it there is no way to submit questions, and you can't see the models of the world used to make the predictions, so I'm not sure whether charities (or people investing in charities) can gain much value from it.

We would want questions of the form: if intervention Y occurs, what is the expected magnitude of outcome Z?

I'm not sure how best to tackle this.

Comment author: WillPearson 21 July 2017 06:11:49PM *  0 points [-]

But it's equally plausible that the larger system will be enforcing morally correct standards and a minority of individuals will want to do something wrong (like slavery or pollution).

Both of these would impinge on the vital sets of others though (slavery directly, pollution by disrupting the natural environment people rely on). So it would still be a bad outcome from the autonomy viewpoint if these things happened.

The autonomy viewpoint only argues that lots of actions should be physically possible for people; not all actions that are physically possible are necessarily morally allowed.

Which of these three scenarios is best?

  1. No one has guns, so no one gets shot.
  2. Everyone has guns, and people get shot because they have accidents.
  3. Everyone has guns, but no one gets shot because they are well trained and smart.

The autonomy viewpoint argues that the third is the best possible outcome and tries to work towards it. There are legitimate uses for guns.

I don't go into how these things should be regulated, as this is a very complicated subject. I'll just point out that to get the robust free society I want, you would need not to regulate the ability to do these things, but to make sure the incentive structures and education are correct.

I don't see a reason that we ought to intrinsically value autonomy. The reasons you gave only support autonomy instrumentally through other values.

I don't think I can argue for intrinsically valuing anything. I agree that you can't argue ought from is. So the best I can do is claim it as a value, which I do, and refer to other value systems that people might share. I suppose I could point to other people who value autonomy in itself. Would you find that convincing?

But by the original criteria, a single AI would (probably) be robust to catastrophe due to being extremely intelligent and having no local competitors.

Unless it is omniscient, I don't see how it will see all threats to itself. It may lose a gamble on the logical induction lottery and make an ill-advised change to itself.

Also, what happens if a decisive strategic advantage is not possible and this hypothetical single AI does not come into existence? What is the strategy for that chunk of probability space?

If it is a good friendly AI, then it will treat people as they deserve, not on the basis of thin economic need, and likewise it will always be morally correct. It won't be racist or oppressive.

I'm personally highly skeptical that this will happen.

I bet no one will want to leave its society, but if we think that that right is important then we can design an AI which allows for that right.

How would that be allowed if those people might create a competitor AI?

you reach some inappropriate conclusions because future scenarios are often different from present ones in ways that remove the importance of our present social contexts.

Like I said in the article, if I was convinced of decisive strategic advantage my views of the future would be very different. However as I am not, I have to think that the future will remain similar to the present in many ways.

Comment author: WillPearson 22 August 2017 05:38:24PM 0 points [-]

This post covers what I want to get out of my autonomy measure. I think I might have pitched it wrong to start with. It is more along the lines of trying to increase the world economy than a base value. It also covers some of my initial forays into the research on whether freedom is psychologically useful (I need to find out how reliable these studies are).

In response to Open Thread #38
Comment author: WillPearson 22 August 2017 01:04:31PM 3 points [-]

One last one.

I'm writing more on my blog about my approach to intelligence augmentation.

I'll be coding and thinking about how to judge its impact this week (a lot of it depends on things like hard vs soft takeoff, possibilities of singletons, and other crucial considerations). I'm also up for spending a few hours helping people with IA- or autonomy-based EA work, if anyone needs it.

In response to Open Thread #38
Comment author: WillPearson 22 August 2017 10:19:04AM 3 points [-]

I've been reading Superforecasting, and my takeaway is that to make good predictions about the world you need a multiplicity of viewpoints, and you need to quantify and break down the estimates Fermi-style.

So my question is: have there been any collective attempts at model building for prediction purposes? Try to get all the hedgehogs together with their big ideas and synthesize them to form a collective fox-y model?

I know there are prediction markets, but you don't know what information a price has synthesized, so it is hard to bet on them if you only have a small piece of information and do not think you know better than the market as a whole.

It would seem that if we could share a pool of predictive power between us we could make better decisions about how to intervene in the world.
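As a toy illustration of what pooling several hedgehog-style estimates might look like, here's a small Python sketch. The inputs and the aggregation rule (averaging log-odds) are assumptions for the sake of example, not a claim about how such a system should actually work.

```python
import math

# A toy sketch of the "many viewpoints, Fermi-style pooling" idea.
# All inputs are made-up placeholders, purely to show the mechanics.

# Each "hedgehog" gives a probability for the same question,
# derived from their own big-idea model of the world.
hedgehog_estimates = {
    "economist": 0.20,
    "technologist": 0.60,
    "historian": 0.35,
}

def to_log_odds(p):
    return math.log(p / (1 - p))

def from_log_odds(x):
    return 1 / (1 + math.exp(-x))

# A fox-like aggregate: average the log-odds rather than the raw
# probabilities, then convert back to a probability.
pooled = from_log_odds(
    sum(to_log_odds(p) for p in hedgehog_estimates.values())
    / len(hedgehog_estimates)
)

print(f"Pooled estimate: {pooled:.2f}")
```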

In response to Open Thread #38
Comment author: WillPearson 22 August 2017 10:04:51AM 2 points [-]

Drawdown, a book on possible climate change solutions, seems EA-relevant. It is interesting that it only allows peer-reviewed data/models and systematically surveys all the solutions the authors could find.


Open Thread #38

Use this thread to post things that are awesome, but not awesome enough to be full posts. This is also a great place to post if you don't have enough karma to post on the main forum.  
Comment author: kbog  (EA Profile) 20 August 2017 06:34:18PM *  0 points [-]

I never said it did; I said it means I can't argue that you should intrinsically value anything. What arguments could I give to a paper clipper to stop its paper-clipping ways?

Still, this is not right. There are plenty of arguments you can give to a paper clipper, such as Kant's argument for the categorical imperative, or Sidgwick's argument for utilitarianism, or many others.

1) They allow you to break ties. If there are two situations with equal well-being, then having a second intrinsic value would give you a way of picking between the two.

I don't see why we should want to break ties, since it presupposes that our preferred metric judges the different options to be equal. Moreover, your pluralist metric will end up with ties too.

2) It might be computationally complicated or informationally complicated to calculate your intrinsic value.

Sure, but that's not an argument for having pluralism over intrinsic values.

I am thinking about unknown internal threats. One possibility that I alluded to is that it modifies itself to improve itself, but does so on a shaky premise and destroys itself. Another possibility is that parts of it may degrade and/or get damaged and it gets the equivalent of cancers.

If a single ASI is unstable and liable to collapse, then basically every view would count that as a problem, because it implies destruction of civilization and so on. It doesn't have anything to do with autonomy in particular.

I was assuming a single non-morally perfect AI as that seems like the most likely outcome of the drive to a single AI to me.

AI being non-morally perfect doesn't imply that it would be racist, oppressive, or generally as bad or worse as existing or alternative institutions.

If they are not free of surveillance, then they have not left the society.

Why should we care about someone's desire to have a supercomputer which doesn't get checked for the presence of dangerous AGI...?

Comment author: WillPearson 21 August 2017 09:02:34AM *  0 points [-]

I think we want different things from our moral systems. I think my morality/values are complicated and best represented by different heuristics that guide how I think or what I aim for. It would take more time than I am willing to invest at the moment to try to explain my views fully.

Why should we care about someone's desire to have a supercomputer which doesn't get checked for the presence of dangerous AGI...?

Why should we care about someone's desire to have their thoughts not checked for the presence of malicious genius? They may use their thinking to create something equally dangerous that we have not yet thought of.

Why care about freedom at all?

If I upload and then want to take a spaceship somewhere hard to monitor, will I be allowed to take a supercomputer if I need it to do science?

What is in my pocket was once considered a dangerous supercomputer. The majority of the world is now trusted with one, or at least the benefits of having them outweigh the potential costs.

Comment author: kbog  (EA Profile) 16 August 2017 01:06:03PM *  0 points [-]

I don't think I can argue for intrinsically valuing anything. I agree that you can't argue ought from is.

The is-ought problem doesn't say that you can't intrinsically value anything. It just says that it's hard. There are lots of ways to argue for intrinsically valuing things, and I have a reason to intrinsically value well-being, so why should I divert attention to something else?

Unless it is omniscient, I don't see how it will see all threats to itself.

It will see most threats to itself in virtue of being very intelligent and having a lot of data, and will have a much easier time by not being in direct competition. Basically all known x-risks can be eliminated if you have zero coordination and competition problems.

Also, what happens if a decisive strategic advantage is not possible and this hypothetical single AI does not come into existence? What is the strategy for that chunk of probability space?

Democratic oversight, international cooperation, good values in AI, FDT to facilitate coordination, stuff like that.

I'm personally highly skeptical that this will happen.

Okay, but the question was "is a single AI a good thing," not "will a single AI happen".

How would that be allowed if those people might create a competitor AI?

It will be allowed by allowing them to exist without allowing them to create a competitor AI. What specific part of this do you think would be difficult? Do you think that everyone who is allowed to exist must have access to supercomputers free of surveillance?

Comment author: WillPearson 18 August 2017 06:13:11PM *  0 points [-]

The is-ought problem doesn't say that you can't intrinsically value anything

I never said it did; I said it means I can't argue that you should intrinsically value anything. What arguments could I give to a paper clipper to stop its paper-clipping ways?

That said I do think I can argue for a plurality of intrinsic values.

1) They allow you to break ties. If there are two situations with equal well-being, then having a second intrinsic value would give you a way of picking between the two.

2) It might be computationally complicated or informationally complicated to calculate your intrinsic value. Having another value that is not at odds with it and generally correlates with it allows you to optimise for that instead. For example, you could optimise for political freedom, which would probabilistically lead to more eudaemonia, even if it is not the case that more political freedom always leads to more eudaemonia, since you can't measure the eudaemonia of everyone.

It will see most threats to itself in virtue of being very intelligent and having a lot of data, and will have a much easier time by not being in direct competition. Basically all known x-risks can be eliminated if you have zero coordination and competition problems.

I am thinking about unknown internal threats. One possibility that I alluded to is that it modifies itself to improve itself, but does so on a shaky premise and destroys itself. Another possibility is that parts of it may degrade and/or get damaged and it gets the equivalent of cancers.

I'm personally highly skeptical that this will happen.

Okay, but the question was "is a single AI a good thing," not "will a single AI happen".

I was assuming a single non-morally perfect AI as that seems like the most likely outcome of the drive to a single AI to me.

It will be allowed by allowing them to exist without allowing them to create a competitor AI. What specific part of this do you think would be difficult? Do you think that everyone who is allowed to exist must have access to supercomputers free of surveillance?

If they are not free of surveillance, then they have not left the society. I think it would be a preferable world if we could allow everyone to have supercomputers because they are smart and wise enough to use them well.
