Comment author: LukeDing 29 October 2017 10:23:50AM 15 points [-]

I hate posting as I worry a lot about saying ill-considered or trivial things. But in the spirit of Eliezer’s post I will have a go.

This post reminds me of some of my experiences, and I really like the $20 note on the floor analogy.

I was a derivatives trader for over 20 years and was last at a large hedge fund. In the early days I was managing new types of currency options at a relatively sleepy British investment bank focused on servicing clients. After a while I thought some of these options were underpriced by the market due to the inadequacy of the models. I wanted to take proprietary positions by buying these options from other banks instead of selling them to clients, but management initially resisted, along the lines of: why did I think all the other banks were wrong, especially when some of them were much larger and supposedly much more sophisticated? But after a year or so I did manage to buy these options and made quite a lot of money, which helped set me on my career in trading.

What I noticed over the years is that these anomalies tend to appear and persist when:

  1. There is a new product (so there is less existing expertise to start with).

  2. Demand for the product is growing very quickly (so there is a rapid influx of less price-sensitive and less informed participants). This also generates complacency, as the product providers are making easy profits and vested interests can build up in not disturbing the system.

  3. Extra potency may arise if the product is important enough to affect the market or indeed the society it operates in, creating a feedback loop (what George Soros calls reflexivity). The development of credit derivatives and the subsequent bust could be a devastating example of this. And perhaps 'The Big Short' is a good illustration of Eliezer's points.

  4. Or you have an existing product but the market in which it operates is changing rapidly, e.g. when OPEC failed to hold the oil price above $80 in 2014 in the face of rapid declines in alternative energy costs.

I wonder if the above observations could be applied more generally in line with Eliezer's ideas. Perhaps there are more opportunities to 'find a real $20 note on the floor' when the above conditions are present. Cryptocurrencies and blockchain, for example? And other areas undergoing rapid change.

Comment author: ESRogs 30 October 2017 01:20:10AM 2 points [-]
  1. Extra potency may arise if the product is important enough to affect the market or indeed the society it operates in, creating a feedback loop (what George Soros calls reflexivity). The development of credit derivatives and the subsequent bust could be a devastating example of this. And perhaps 'The Big Short' is a good illustration of Eliezer's points.

Could you say more about this point? I don't think I understand it.

My best guess is that it means that when changes to the price of an asset result in changes out in the world, which in turn cause the asset price to change again in the same direction, then the asset price is likely to be wrong, and one can expect a correction. Is that it?
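If it helps to make the feedback loop concrete, here is a toy Python sketch of the reflexive case, where a price move feeds back into the fundamentals and pushes the price further in the same direction. All dynamics and parameters are invented purely for illustration; this is not a model of any real market.

    # Toy sketch of a reflexive feedback loop (all dynamics and parameters
    # are invented for illustration; this is not a model of a real market).
    import random

    def simulate(feedback, steps=50, seed=1):
        """Price tracks a fundamental value plus noise; the fundamental then
        responds to the latest price move with strength `feedback`."""
        random.seed(seed)
        fundamental = price = 100.0
        for _ in range(steps):
            prev_price = price
            price = fundamental + random.gauss(0, 1)         # price anchors to the fundamental
            fundamental += feedback * (price - prev_price)   # price move feeds back into the world
        return round(price, 1)

    print(simulate(feedback=0.0))   # no reflexivity: price stays near 100
    print(simulate(feedback=1.5))   # strong reflexivity: price drifts far from 100

With no feedback the price just hovers around the fundamental value; with strong feedback small shocks compound and the price runs away from its starting level, which is the kind of self-reinforcing move described above.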

In response to Lunar Colony
Comment author: kbog  (EA Profile) 20 December 2016 09:31:31PM *  13 points [-]

As far as I can tell there is zero serious basis for going to other planets in order to save humanity; it's an idea that stays alive merely because of science-fiction fantasies and publicity statements from Elon Musk and the like. I have yet to see a plausible catastrophic scenario in which a human space colony would be useful and which could not be protected against much more easily with infrastructure on Earth.

-Can it help prevent x-risk events? Nope, there's nothing it can do for us except tourism and moon rocks.

-Is it good for keeping people safe against x-risks? Nope. In what scenario does having a lunar colony efficiently make humanity more resilient? If there's an asteroid, go somewhere safe on Earth. If there's cascading global warming, move to the Yukon. If there's a nuclear war, go to a fallout shelter. If there's a pandemic, build a biosphere.

-Can it bring people back to Earth after an extended period of isolation? Nope. The Moon has almost none of the resources required for sustaining a spacefaring civilization, apart from sunlight and water, and whatever resources you have will degrade with inefficiencies and damage. Your only hope is to wait for however many years or millennia it takes for Earth to become habitable again and then jump back in a prepackaged spacecraft. But, as noted above, it's vastly easier to just do this in a shelter on Earth.

-It's physically impossible to terraform the Moon with conceivable technology, as it has month-long days, and far too little gravity to sustain an atmosphere.

-"But don't we need to leave the planet EVENTUALLY?" Maybe, but if we have multiple centuries or millennia then you should wait for better general technology and AI to be developed to make space travel easy, instead of funneling piles of money into it now.

I really fail to see the logic behind "Earth might become slightly less habitable in the future, so we need to go to an extremely isolated, totally barren wasteland that is absolutely inhospitable to all carbon-based life in order to survive." Whatever happens to Earth, it's still not going to have 200 degree temperature swings, a totally sterile geology, cancerous space radiation, unhealthy minimal gravity and a multibillion dollar week-long commute.

In response to comment by kbog  (EA Profile) on Lunar Colony
Comment author: ESRogs 22 December 2016 06:13:21PM 1 point [-]

Is it good for keeping people safe against x-risks? Nope. In what scenario does having a lunar colony efficiently make humanity more resilient? If there's an asteroid, go somewhere safe on Earth...

What if it's a big asteroid?

Comment author: ESRogs 02 December 2016 09:08:20AM 0 points [-]

Note that this is particularly an argument about money. I think that there are important reasons to skew work towards scenarios where AI comes particularly soon, but I think it’s easier to get leverage over that as a researcher choosing what to work on (for instance doing short-term safety work with longer-term implications firmly in view) than as a funder.

I didn't understand this part. Are you saying that funders can't choose whether to fund short-term or long-term work (either because they can't tell which is which, or there aren't enough options to choose from)?

Comment author: ESRogs 09 June 2015 06:05:34PM 2 points [-]

The project was successfully funded for $19,000. We found the fundraising process to take slightly longer and be slightly more difficult than we were expecting.

Hey Kerry, I'm listed as one of the funders on the eaventures.org front page, but I didn't hear anything about this fundraise. Should I have?

Comment author: ESRogs 12 May 2015 05:17:36AM 2 points [-]

The world in which Bostrom did not publish Superintelligence, and therefore Elon Musk, Bill Gates, and Paul Allen didn't turn to "our side" yet.

Has Paul Allen come round to advocating caution and AI safety? The sources I can find right now suggest Allen is not especially worried.

http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/

Comment author: Owen_Cotton-Barratt 12 December 2014 01:14:44PM *  1 point [-]

It looks like you've multiplied the second term (E[log(1+d/X)]) through by an X. Can you do that within an expectation, given that X isn't a constant?

You're multiplying by X inside the log. That amounts to adding log(X), and an expectation of a sum is just the sum of the expectation. But this does seem to change exactly what you're maximising.
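If it's useful to see that identity concretely, here's a quick numerical check (assuming NumPy, with arbitrary made-up distributions standing in for X and d) that multiplying by X inside the log just adds E[log X]:

    # Check that E[log(X*(1+d/X))] = E[log X] + E[log(1+d/X)]:
    # log of a product is a sum, and expectation is linear.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.lognormal(mean=3.0, sigma=0.5, size=1_000_000)  # stand-in for the world's wealth
    d = rng.lognormal(mean=0.0, sigma=0.5, size=1_000_000)  # stand-in for your wealth

    lhs = np.mean(np.log(X * (1 + d / X)))                  # = E[log(X + d)]
    rhs = np.mean(np.log(X)) + np.mean(np.log(1 + d / X))
    print(lhs, rhs)                                          # agree up to floating point

The identity holds exactly; the caveat above is that E[log X] itself depends on d, which is why this step changes what is being maximised.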

Even once you have E[log(d+X)] as a maximization target, I'd describe that as maximizing the log of the sum of your wealth and the world's. And it seems like a quite different goal from maximizing the log-wealth of the world. Is there another step I'm missing?

My derivation goes: Let Y denote the wealth of the world not controlled by you. By assumption Y is independent of your decisions (note: this assumption seems questionable, and without it the two conclusions definitely come apart, since the investor may have opportunities which increase the wealth of the rest of the world but not the wealth of the investor).

So X = Y + d.

So E(d/X)

= E(d/(Y+d))

~ E(log(1 + d/(Y+d)))

= E(log((Y + d + d)/Y))

= E(log(1+2d/Y))

~ E(2d/Y)

= 2 E(d/Y)

= 2 E(log(1+d/Y))

= 2 E(log(Y + d) - log(Y))

= 2 E(log(X) - log(Y))

= 2 E(log(X)) - 2 E(log(Y))

Now since Y is independent, E(log(Y)) is constant, so maximising this is equivalent to maximising E(log(X)).

Edit: how did you do the code? I had difficulty with formatting, hence the excess line breaks.

Comment author: ESRogs 12 December 2014 06:29:12PM 0 points [-]

You're multiplying by X inside the log.

Good catch, my bad.

Edit: how did you do the code? I had difficulty with formatting, hence the excess line breaks.

Add four spaces at the beginning of a line to make it appear as code.

~ E(log(1 + d/(Y+d)))

= E(log((Y + d + d)/Y))

How did you get from one of these steps to the other? Shouldn't the second be E[log((Y+d+d)/(Y+d))]?

Comment author: Paul_Christiano 24 October 2014 10:13:36PM 1 point [-]

log(X+d) ~ log(X) + d/X, for small d. So maximizing E[d/X] is equivalent to maximizing E[log(d+X)].
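For concreteness, here is a quick numerical check of that first-order approximation (assuming NumPy; the distributions and the scale of d are arbitrary, chosen only so that d is small relative to X):

    # Numerical check of log(X + d) ~ log(X) + d/X for d small relative to X
    # (arbitrary illustrative numbers).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.lognormal(mean=5.0, sigma=0.3, size=100_000)   # stand-in for world wealth
    d = 0.001 * X.mean() * rng.random(size=100_000)        # small additions relative to X

    exact = np.log(X + d)
    approx = np.log(X) + d / X
    print(np.max(np.abs(exact - approx)))                  # tiny, since d/X << 1

The error is second order, roughly (d/X)^2/2, so it is negligible whenever d is a small fraction of X.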

Comment author: ESRogs 12 December 2014 10:38:03AM 1 point [-]

Hmm, I've thought about this some more and I actually still don't understand it. I might just be being dense, but I feel like you've made a very interesting claim here that would be important if true, so I'd really like to understand it. Perhaps others can benefit as well.

Here's what I was able to work out for myself. Given that log(X+d) ~ log(X) + d/X, then:

d/X ~ log(X+d) - log(X)
d/X ~ log((X+d)/X)
d/X ~ log(1 + d/X)

So maximizing E[d/X] should be approximately equivalent to maximizing E[log(1 + d/X)]. This is looking closer to what you said, but there are two things I still don't understand.

  1. It looks like you've multiplied the second term (E[log(1+d/X)]) through by an X. Can you do that within an expectation, given that X isn't a constant?

  2. Even once you have E[log(d+X)] as a maximization target, I'd describe that as maximizing the log of the sum of your wealth and the world's. And it seems like a quite different goal from maximizing the log-wealth of the world. Is there another step I'm missing?

Comment author: ESRogs 24 October 2014 06:19:28PM 0 points [-]

A simple argument suggests that an investor concerned with maximizing their influence ought to maximize the expected fraction of world wealth they control. This means that the value of an extra dollar of investment returns should vary inversely with the total wealth of the world. This means that the investor should act as if they were maximizing the expected log-wealth of the world.

Could someone explain how the final sentence follows from the others?

If I understand correctly, the first sentence says an investor should maximize E(wealth-of-the-investor / wealth-of-the-world), while the final sentence says they should maximize E(log(wealth-of-the-world)). Is that right? How does that follow?

Comment author: Paul_Christiano 29 September 2014 04:55:20AM 4 points [-]

I basically believe the efficient market hypothesis, but the inference/implication here doesn't seem right to me.

The risk from investing in individual stocks rather than broad indices is pretty minor (and for philanthropic capital I think it is completely negligible), and if you accept the EMH the expected returns are the same. There are a number of other considerations (tax issues, small amounts of domain expertise or insight, hedging) that might lead one to purchase individual stocks instead. The amounts of money involved don't have to be too large before it starts to look worthwhile to be thinking about these issues.

Comment author: ESRogs 02 October 2014 04:21:20AM 0 points [-]

The risk from investing in individual stocks rather than broad indices is pretty minor

This depends a lot on how many stocks you're buying, right? Or would you still make this claim if someone were buying < 10 stocks? < 5?
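To put a rough number on that intuition, here is a toy simulation (invented parameters, with idiosyncratic risks treated as independent and equal across stocks) of how the idiosyncratic volatility of an equal-weighted portfolio shrinks with the number of holdings:

    # Toy illustration: with independent idiosyncratic risk, an equal-weighted
    # portfolio's idiosyncratic volatility falls roughly like 1/sqrt(N).
    # All parameters are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    idio_vol = 0.30        # assumed annual idiosyncratic volatility per stock
    n_sims = 100_000       # simulated "years"

    for n_stocks in (1, 5, 10, 50, 500):
        returns = rng.normal(0.0, idio_vol, size=(n_sims, n_stocks))
        portfolio = returns.mean(axis=1)   # equal-weighted portfolio return
        print(n_stocks, round(portfolio.std(), 4), round(idio_vol / n_stocks**0.5, 4))

On these assumptions a 5-stock portfolio carries roughly ten times the idiosyncratic volatility of a 500-stock one, which is why the number of holdings matters to the "pretty minor" claim.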