Comment author: Daniel_Eth 19 January 2018 11:48:01PM *  18 points [-]

Thanks for taking the time to write this post. I have a few comments - some supportive, and some in disagreement with what you wrote.

I find your worries about Peak Oil to be unsupported. In the last several years, the US has found huge amounts of accessible natural gas - perhaps a century's supply or more. On top of this, renewables are finally starting to prove their worth, with both wind and solar reaching new heights. Solar in particular has improved drastically - exponential decay in cost over decades (with cost finally reaching parity with fossil fuels in many parts of the world), exponential increase in installations, etc. If fossil fuels really were running out, that would arguably be a good thing - it would raise the price of fossil fuels and make the transition to solar even quicker (and we'd have a better chance of avoiding the worst effects of climate change). Unfortunately, the opposite seems more likely: as ice in the Arctic melts, more fossil fuels (currently under the ice) will become accessible.

I think "The Limits of Growth" is not a particularly useful guide to our situation. This report might have been a reasonable thing to worry about in 1972, but I think a lot has changed since then that we need to take into account. First off, yes, obviously exponential growth with finite resources will eventually hit a wall, and obviously the universe is finite. But the truth is that while there are limits - we're not even remotely close to these limits. There are several specific technological trends in that each seem likely to turn LTG type thinking about limits in the near term on their head, including clean energy, AI, nanotechnology, and biotechnology. We are so far from the limits of these technologies - yet even modest improvements will let us surpass the limits of the world today. Regarding the fact that the 1970-2000 data fits with the predictions of LTG - this point is just silly. LTG's prediction can be roughly summarized as "the status quo continues with things going good until around 2020 to 2030, and then stuff starts going terribly." The controversial claim isn't the first part about stuff continuing to go well for a while, but the second part about stuff then going terribly. The fact that we've continued to do well (as their model predicted!) doesn't mean that the second part of their model will go as predicted and things will follow by going terribly.

I have no idea how plausible a Malthusian disaster in Sub-Saharan Africa is. I know that climate change has the potential to cause massive famines and mass migrations - and I agree that this has the potential to strengthen right-wing extremism in Europe (and that this would all be terrible). I don't know what the projected timeframe for that is, though. I also hadn't heard of most of the other problems you listed in this section. Unfortunately, after reading your section on Peak Oil, which struck me as both unsubstantiated (I mean no offense by this - just being straightforward) and somewhat biased (for instance, I can sense some resentment of "elites" in your writing, among other things), I now don't know how much faith to have in your analysis of the Sub-Saharan African situation (which I feel much less qualified to judge than the other section).

I agree it is good for people to be thinking about these sorts of things, and I would encourage more research into the area. Also, I hadn't heard of the Transafrican Water Pipeline Project, and I agree it would make sense for EAs to evaluate whether it would be an effective use of charitable donations.

Comment author: RyanCarey 20 January 2018 09:56:31AM 6 points [-]

obviously the universe is finite

We can go only as far as to say that the accessible universe is finite according to currently prevailing theories.

Comment author: RyanCarey 14 January 2018 12:01:42PM *  2 points [-]

I haven't read the whole paper yet, so forgive me if I miss some of the major points by just commenting on this post.

The image seems to imply that non-aligned AI would only extinguish human life on Earth. How do you figure that? It seems that an AI could extinguish all the rest of life on Earth too, even including itself in the process. [edit: this has since been corrected in the blog post]

For example, you could have an AI system that has the objective of performing some task X, before time Y, without leaving Earth, and then harvests all locally available resources in order to perform that task, before eventually running out of energy and switching off. This would seem to extinguish all life on Earth by any definition.

We could also discuss whether AI might extinguish all civilizations in the visible universe. This also seems possible. One reason for this is that humans might be the only civilization in the universe - in which case wiping out life on Earth would wipe out all civilizations.

Comment author: RyanCarey 10 January 2018 08:57:39PM 0 points [-]

I revisited this question earlier today. Here's my analysis with rough made-up numbers.

I think each extra blood donation saves less than 0.02 expected lives.

Suppose half the benefits come from the red blood cells.

Each blood donation gives about half a unit of red blood cells (taking a unit to be ~300ml).

Each red blood cell transfusion uses 2-3 units on average, and saves a life <5% of the time.

So on average, every ~5 donations would save 0.1 lives (supposing the red blood cells are half of the impact).

But each marginal unit of blood is worth much less than the average because blood is kept in reserve for when it's most needed.

So it should be less than 0.02 lives saved per donation, and possibly much less. If saving a life via AMF costs a few thousand dollars, and most EAs should value their time at tens of dollars an hour or more, then pretty much all EAs should not donate blood, at least as far as altruistic reasons go.

I could be way wrong here, especially if the components other than red blood cells are providing a large fraction of the value.
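
Stringing those estimates together (a minimal Python sketch; every figure is one of the rough, made-up numbers above):

    # Back-of-envelope from the estimates in this comment (all figures are
    # rough and made-up, as stated above).
    rbc_units_per_donation = 0.5     # each donation ~ half a unit of red cells
    units_per_transfusion = 2.5      # a transfusion uses 2-3 units on average
    p_life_saved = 0.05              # upper bound: saves a life <5% of the time
    rbc_share_of_benefit = 0.5       # suppose red cells are half the benefit

    lives_per_donation = (rbc_units_per_donation / units_per_transfusion
                          * p_life_saved / rbc_share_of_benefit)
    print(lives_per_donation)        # ~0.02 -- an upper bound, before
                                     # discounting marginal units further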

Comment author: RyanCarey 31 December 2017 02:37:10PM 2 points [-]

Perhaps you could re-evaluate this question in light of Bostrom's findings in Astronomical Waste? The overriding impacts relate to the risk of extinction of all life (which alien contact could bring about, or perhaps avert), rather than to the opportunity costs of delayed technological development.

Comment author: FlorentBerthet 31 December 2017 10:17:24AM 0 points [-]

Any investment related to ET is probably not cost-effective, because there are probably no ET in our Universe (or at least not in our neighborhood).

Here's my take on why: https://www.quora.com/Is-the-Fermi-Paradox-and-the-concept-of-parallel-universes-related-in-any-way/answer/Florent-Berthet.

Also, watch this excellent talk by Stuart Armstrong on the Fermi Paradox: https://www.youtube.com/watch?v=zQTfuI-9jIo&index=663&list=FLxEpt5QlyYGAge0ot24tuug

Comment author: RyanCarey 31 December 2017 02:29:16PM 5 points [-]

Many cost-effective interventions probably won't work - you have to look at the probabilities (i.e. the expected value), not just the most likely outcome.

Comment author: RyanCarey 29 December 2017 12:20:20AM 12 points [-]

Re page 9, I think the talk of a civilization maintaining exponential growth is unconvincing. A civilization's resources should ultimately be bounded cubically in time (your civ grows outward like a sphere), whereas the risk of extinction compounds exponentially. Exponentials in general defeat polynomials, giving finite expected value in the limit of t, regardless of the parameters.
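
To spell that out (my notation, not the paper's): if resources grow at most like $c\,t^3$ and extinction arrives at a constant hazard rate $r > 0$, then survival to time $t$ has probability $e^{-rt}$, and

$$\mathbb{E}[V] \;\le\; \int_0^\infty c\,t^3 e^{-rt}\,dt \;=\; \frac{6c}{r^4} \;<\; \infty,$$

which is finite no matter how the parameters are set.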

Comment author: Simon_Jenkins 21 December 2017 04:15:17PM 2 points [-]

1) The winner of the last lottery, Tim, wrote several paragraphs explaining his choice of where to send the winnings. Is this required/expected of future winners? I can understand that a winner selecting a non-EA cause might end up having to convince CEA of their decision, but if I win and just want to give the money to a bona fide EA cause, do I have to say anything about my thought process?

2) Are there advocacy-related reasons for donating directly to charities instead of joining such a lottery? For example, if I'm trying to increase my impact by convincing others to join EA, and someone asks where I donate, there seems to be a cost associated with describing a complicated lottery scheme that may end up with my money going to a cause that I think is ineffective or possibly even bad. It seems likely that people would be confused by the scheme and put off, or even think that I was being swindled.

2b) Relatedly, while I personally trust that the complexities of the scheme arise from a desire to optimise it for fairness and other considerations, I worry that the explanations may be off-putting to some. I appreciate that they are in beta, so I will try to be constructive: I would like to see something like an interactive page, with colourful buttons and neat graphics, that explains how the scheme works. The boxes (A, B, C, G) are a great start, but I think the equations, for example, would be best kept behind an expanding box, or even on another page. The headers are good as they are (though they might be better framed as questions, like "How will the winner be chosen?"). My take-home point is that having all of the information on one page is intimidating. These suggestions are largely based on my personal experience of looking through the page.

Comment author: RyanCarey 21 December 2017 04:29:27PM 1 point [-]

2) Are there advocacy-related reasons for donating directly to charities instead of joining such a lottery? For example, if I'm trying to increase my impact by convincing others to join EA, and someone asks where I donate, there seems to be a cost associated with describing a complicated lottery scheme that may end up with my money going to a cause that I think is ineffective or possibly even bad

Yes, but on the other hand, the audacity of the scheme will surely get some attention, and those who are attracted by it will probably be intelligent, analytical types.

Comment author: Ben_Todd 21 December 2017 10:49:01AM 1 point [-]

Thanks!

Nvidia (who make GPUs used for ML) saw their share price approximately double, after quadrupling last year.

Do you have an impression of whether this is due to crypto mining or ML progress?

Comment author: RyanCarey 21 December 2017 11:49:38AM 1 point [-]

Intuitively, it's largely the ML - that's what they brand themselves on, and the revenue figures bear this out. Datacenter hardware (e.g. Tesla/Volta) is about 1/5th of their revenue currently, up 5x this year [1]. Crypto, by contrast, is only a few percent of their revenue [2], and it halved this quarter despite the stock price going up.

  1. http://s22.q4cdn.com/364334381/files/doc_financials/quarterly_reports/2018/Rev_by_Mkt_Qtrly_Trend_Q318.pdf
  2. Crypto is only ~3% of revenue: https://www.cnbc.com/2017/11/09/nvidia-crytpocurrency-sales-drop-by-more-than-half-from-prior-quarter.html

Comment author: RyanCarey 21 December 2017 02:40:33AM *  2 points [-]

"given that this paper assumes the humans choose the wrong action by accident less than 1% of the time, it seems that the AI should assign a very large amount of evidence to a shutdown command... instead the AI seems to simply ignore it?"

That's kind of the point, isn't it? A value learning system will only "learn" over certain variables, according to the size of the learning space and the prior it is given. The examples show how, if there is an error in the parameterized reward function (or equivalently in the prior), a bad outcome will ensue. Although I agree that the examples convey a lot that isn't also spelled out in the text. Anyway, it is also clear by this point that there is room for improvement in my presentation!
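
As a toy illustration of learning only over certain variables (my own sketch with made-up numbers, not the paper's formalism): if no hypothesis in the learner's space treats a shutdown command differently from any other, the command carries no evidence at all.

    # Toy Bayesian value learner (hypothetical numbers, not the paper's model).
    prior = {"reward_A": 0.6, "reward_B": 0.4}  # prior over parameterized rewards

    # Likelihood of observing a shutdown command under each hypothesis. The
    # "true" reward function (under which shutdown would be strong evidence of
    # misspecification) is missing from the space, so the likelihoods coincide.
    likelihood = {"reward_A": 0.01, "reward_B": 0.01}

    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnormalized.values())
    posterior = {h: p / z for h, p in unnormalized.items()}
    print(posterior)  # equals the prior (up to float rounding):
                      # the shutdown command is effectively ignored.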

Comment author: RyanCarey 21 December 2017 02:03:27AM *  14 points [-]

This is a great post, and I think it's a really valuable service that you're providing - last year's version is, at the time of writing, tied for the forum's most upvoted post of all time.

Also, I think we're pretty strongly in agreement. A year ago, I gave to GCRI. This year, I gave to MIRI, based on my positive experience working there, though GCRI has improved since. It would be good to see more funds going to both of these.
