Comment author: FlorentBerthet 31 December 2017 10:17:24AM 0 points [-]

Any investment related to ET is probably not cost-effective, because there are probably no ETs in our Universe (or at least none in our neighborhood).

Here's my take on why: https://www.quora.com/Is-the-Fermi-Paradox-and-the-concept-of-parallel-universes-related-in-any-way/answer/Florent-Berthet.

Also, watch this excellent talk by Stuart Armstrong on the Fermi Paradox: https://www.youtube.com/watch?v=zQTfuI-9jIo&index=663&list=FLxEpt5QlyYGAge0ot24tuug

Comment author: RyanCarey 31 December 2017 02:29:16PM 5 points [-]

Many cost-effective interventions probably don't work. Cost-effectiveness is about expected value, so you have to look at the probabilities.
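To spell that out, a generic expected-value sketch (the numbers are invented for illustration):

```latex
% A long-shot intervention: success probability p, payoff V if it works.
% Cost-effectiveness tracks the product, not p alone.
\[
  \mathbb{E}[\text{value}] = p \cdot V,
  \qquad \text{e.g. } p = 10^{-3},\ V = 10^{6}
  \ \Rightarrow\ \mathbb{E}[\text{value}] = 10^{3}.
\]
```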

Comment author: RyanCarey 29 December 2017 12:20:20AM 12 points [-]

Re page 9, I think the talk of a civilization maintaining exponential growth is unconvincing. The growth rate of a civilization should ultimately be bounded cubically (your civilization grows outward like a sphere), whereas the risk compounds exponentially. Exponentials in general defeat polynomials, giving a finite EV in the limit of t, regardless of the parameters.
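A minimal sketch of that limit argument, assuming cubic value growth c t^3 and a constant hazard rate lambda (both assumptions are mine, for illustration):

```latex
% Survival to time t under a constant hazard rate lambda has probability
% e^{-lambda t}; value grows like c t^3 (the light-cone volume).
\[
  \mathbb{E}[V] = \int_0^{\infty} c\,t^{3} e^{-\lambda t}\, dt
                = \frac{6c}{\lambda^{4}} < \infty
\]
% More generally \int_0^{\infty} t^{n} e^{-\lambda t}\, dt = n!/\lambda^{n+1},
% which is finite for any lambda > 0 and any polynomial degree n, so any
% positive hazard rate caps the EV no matter how fast the polynomial growth.
```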

Comment author: Simon_Jenkins 21 December 2017 04:15:17PM 2 points [-]

1) The winner of the last lottery, Tim, wrote several paragraphs explaining his choice of where to send the winnings. Is this required/expected of future winners? I can understand that a winner selecting a non-EA cause might end up having to convince CEA of their decision, but if I win and just want to give the money to a bona fide EA cause, do I have to say anything about my thought process?

2) Are there advocacy-related reasons for donating directly to charities instead of joining such a lottery? For example, if I'm trying to increase my impact by convincing others to join EA, and someone asks where I donate, there seems to be a cost associated with describing a complicated lottery scheme that may end up with my money going to a cause that I think is ineffective or possibly even bad. It seems likely that people would be confused by the scheme and put off, or even think that I was being swindled.

2b) Relatedly, while I personally trust that the complexities of the scheme arise from a desire to optimise it for fairness and other considerations, I worry that the explanations may be off-putting to some. I appreciate that the scheme is in beta, so I will try to be constructive: I would like to see something like an interactive page, with colourful buttons and neat graphics, that explains how the scheme works. The boxes (A, B, C, G) are a great start, but the equations, for example, would be best kept behind an expanding box, or even on another page. The headers are good as they are (though they might be better framed as questions, like "How will the winner be chosen?"). My take-home point is that having all of the information on one page is intimidating. These suggestions are largely based on my personal experience of looking through the page.

Comment author: RyanCarey 21 December 2017 04:29:27PM 1 point [-]

2) Are there advocacy-related reasons for donating directly to charities instead of joining such a lottery? For example, if I'm trying to increase my impact by convincing others to join EA, and someone asks where I donate, there seems to be a cost associated with describing a complicated lottery scheme that may end up with my money going to a cause that I think is ineffective or possibly even bad.

Yes, but on the other hand, the audacity of the scheme will surely get some attention, and those who are attracted by it will probably be intelligent, analytical types.

Comment author: Ben_Todd 21 December 2017 10:49:01AM 1 point [-]

Thanks!

Nvidia (who make GPUs used for ML) saw their share price approximately double, after quadrupling last year.

Do you have an impression of whether this is due to crypto mining or ML progress?

Comment author: RyanCarey 21 December 2017 11:49:38AM 1 point [-]

Intuitively, it's largely the ML - this is what they brand on, and the revenue figures bear it out. Datacenter hardware (e.g. Tesla/Volta) is about 1/5th of their revenue currently, up 5x this year [1]. Crypto, by contrast, is only a few percent of their revenue [2], and it halved this quarter despite the stock price going up.

  1. http://s22.q4cdn.com/364334381/files/doc_financials/quarterly_reports/2018/Rev_by_Mkt_Qtrly_Trend_Q318.pdf
  2. Crypto is only ~3% of revenue: https://www.cnbc.com/2017/11/09/nvidia-crytpocurrency-sales-drop-by-more-than-half-from-prior-quarter.html
Comment author: RyanCarey 21 December 2017 02:40:33AM *  2 points [-]

"given that this paper assumes the humans choose the wrong action by accident less than 1% of the time, it seems that the AI should assign a very large amount of evidence to a shutdown command... instead the AI seems to simply ignore it?"

That's kind of the point, isn't it? A value learning system will only "learn" over certain variables, according to the size of the learning space and the prior it is given. The examples show how, if there is an error in the parameterized reward function (or equivalently in the prior), a bad outcome will ensue. I agree, though, that the examples say much that is not also presented in the text. Anyway, it is also clear by this point that there is room for improvement in my presentation!
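As a toy illustration of that point about priors, here is a minimal Python sketch (entirely my own construction with invented numbers, not the paper's model): a learner whose prior puts zero mass on the true reward hypothesis assigns it zero posterior no matter how much evidence the shutdown command carries.

```python
# Hypothesis space the designer wrote down. Suppose the true hypothesis
# ("shutdown is valuable") was accidentally excluded, i.e. given prior 0.
prior = {
    "reward_ignores_shutdown": 0.99,
    "reward_mildly_penalizes_shutdown": 0.01,
    "reward_values_shutdown": 0.0,   # the true hypothesis, prior = 0
}

# Likelihood of observing a human shutdown command under each hypothesis.
likelihood = {
    "reward_ignores_shutdown": 0.01,
    "reward_mildly_penalizes_shutdown": 0.01,
    "reward_values_shutdown": 0.99,
}

def posterior(prior, likelihood):
    """Bayes' rule: reweight each hypothesis by its likelihood, then normalize."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

print(posterior(prior, likelihood))
# {'reward_ignores_shutdown': 0.99, ..., 'reward_values_shutdown': 0.0}
# Updating can only reweight hypotheses the prior supports, so an error in
# the parameterized reward function (or prior) persists through any evidence.
```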

Comment author: RyanCarey 21 December 2017 02:03:27AM *  14 points [-]

This is a great post, and I think it's a really valuable service that you're providing - last year's version is, at the time of writing, tied for the forum's most upvoted post of all time.

Also, I think we're pretty strongly in agreement. A year ago, I gave to GCRI. This year, I gave to MIRI, based on my positive experience working there, though GCRI has improved since. It would be good to see more funds going to both of these.

Comment author: casebash 20 December 2017 08:02:31AM 0 points [-]

Would love to see LW2.0 become the new code base, but it is still undergoing rapid changes at the moment and isn't completely stable.

Comment author: RyanCarey 20 December 2017 01:44:34PM 2 points [-]

Sure, although the tech team could presumably just wait six months while they work on other stuff.

Comment author: RyanCarey 19 December 2017 08:14:04PM *  9 points [-]

That is an excellent update. The strategic directions broadly make sense to me for all of the teams, and I, like many people, am really happy with the ways CEA has improved over the last year.

One item of feedback on the post: the description of mistakes is a bit long, boring, and over-the-top. Many of these things are not actually very important issues.

One suggestion re the EA Forum revamp: the lesserwrong.com site is looking pretty great these days. My main gripes - things like the font being slightly small for my preferences - could be easily fixed with some restyling. Some of their features, like sequences of archived material, could also be ideal for the EA Forum use case. I don't know whether the codebase is good, but recall that the EA Forum was originally created by restyling LessWrong 1.0, so the notion of stealing that code comes from a healthy tradition! This last part is probably a bit too crazy (and too much work), but one can imagine a setup where you post content (and accept comments) on both sites at once.

That aside, it's really appreciated that you guys have taken the forum over this year. And in general, it's great to see all of this progress, so here's to 2018!

Comment author: Khorton 17 December 2017 12:21:29PM 1 point [-]

What happens if a non-EA wins?

Comment author: RyanCarey 17 December 2017 02:32:48PM *  4 points [-]

Ideally, non-EAs can enter and win. As Carl said, on a first-cut analysis, what you're doing doesn't depend on what other people do: you're simply buying a 1/m chance of donating m times your contribution, and if other EAs or non-EAs want to do the same, then all power to them.
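A minimal simulation of that first-cut claim, in Python (the pot size and entry amount are invented for illustration): the expected donation equals the contribution itself, independent of m or of who else enters.

```python
import random

def simulate_lottery(contribution, pot, trials=200_000):
    """Average amount you end up directing to charity across simulated draws."""
    win_prob = contribution / pot  # the 1/m chance, with m = pot / contribution
    total = 0.0
    for _ in range(trials):
        if random.random() < win_prob:
            total += pot           # the winner directs the whole pot
    return total / trials

# A $1,000 entry into a $100,000 pot: a 1% chance of directing $100,000.
# Expected donation = 0.01 * 100,000 = $1,000, i.e. exactly the contribution.
print(simulate_lottery(1_000, 100_000))  # ~1000, up to sampling noise
```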

In practice, CEA technically gets to make the final donation decision. But I can't see them violating a donor's choice.

Comment author: RyanCarey 27 November 2017 02:18:29PM 1 point [-]

This points to another feature of the landscape model: entrepreneurs are locally situated by their existing background knowledge, and this is part of what lets them do what they do. Attempts to “move” them are likely to both meet with resistance and be ultimately counterproductive.... So what we need is much more granular cause prioritization, ideally right down to the size of a problem that can be worked on by an individual or team.

Or maybe we just need to work with people who are actually cause neutral?
