Comment author: Ben_Todd 21 December 2017 10:49:01AM 1 point [-]


Nvidia (who make GPUs used for ML) saw their share price approximately double this year, after quadrupling last year.

Do you have an impression of whether this is due to crypto mining or ML progress?

Comment author: RyanCarey 21 December 2017 11:49:38AM 1 point [-]

Intuitively, it's largely the ML - this is what they brand on, and revenue figures bear this out. Datacenter hardware (e.g. Tesla/Volta) is about 1/5th of their revenue currently, up 5x this year [1], whereas crypto is only a few percent of their revenue, and halved this quarter, despite the stock price going up.

Comment author: RyanCarey 21 December 2017 02:40:33AM *  2 points [-]

"given that this paper assumes the humans choose the wrong action by accident less than 1% of the time, it seems that the AI should assign a very large amount of evidence to a shutdown command... instead the AI seems to simply ignore it?"

That's kind of the point, isn't it? A value learning system will only "learn" over certain variables, according to the size of the learning space and the prior that it is given. The examples show how, if it has an error in the parameterized reward function (or equivalently in the prior), then a bad outcome will ensue. I agree, though, that the examples convey much that is not also presented in the text. In any case, it is also clear by this point that there is room for improvement in my presentation!

Comment author: RyanCarey 21 December 2017 02:03:27AM *  14 points [-]

This is a great post, and I think it's a really valuable service that you're providing - last year's version is, at the time of writing, tied for the forum's most upvoted post of all time.

Also, I think we're pretty strongly in agreement. A year ago, I gave to GCRI. This year, I gave to MIRI, based on my positive experience working there, though GCRI has improved since. It would be good to see more funds going to both of these.

Comment author: casebash 20 December 2017 08:02:31AM 0 points [-]

Would love to see LW2.0 become the new code base, but it is still undergoing rapid changes at the moment and isn't completely stable.

Comment author: RyanCarey 20 December 2017 01:44:34PM 2 points [-]

Sure, although the tech team could presumably just wait six months while they work on other stuff.

Comment author: RyanCarey 19 December 2017 08:14:04PM *  9 points [-]

That is an excellent update. The strategic directions broadly make sense to me for all of the teams, and I, like many people, am really happy with the ways CEA has improved over the last year.

One item of feedback on the post: the description of mistakes is a bit long, boring, and over-the-top. Many of these things are not actually very important issues.

One suggestion re the EA Forum revamp: LW2.0 is looking pretty great these days. My main gripes --- things like the font being slightly small for my preferences --- could be easily fixed with some restyling. Some of its features, like sequences of archived material, could also be ideal for the EA Forum use case. IDK whether the codebase is good, but recall that the EA Forum was originally created by restyling LessWrong1, so the notion of stealing that code comes from a healthy tradition! This last part is probably a bit too crazy (and too much work), but one can imagine a case where you post content (and accept comments) on both sites at once.

That aside, it's really appreciated that you guys have taken the forum over this year. And in general, it's great to see all of this progress, so here's to 2018!

Comment author: Khorton 17 December 2017 12:21:29PM 1 point [-]

What happens if a non-EA wins?

Comment author: RyanCarey 17 December 2017 02:32:48PM *  4 points [-]

Ideally, non-EAs can enter and win. As Carl said, on a first cut analysis, what you're doing doesn't depend on what other people do. You're simply buying a 1/m chance of donating m times your contribution, and if other EAs or non-EAs want to do the same, then all power to them.

In practice, CEA technically gets to make the final donation decision. But I can't see them violating a donor's choice.
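The expected-value claim above - that entering a donor lottery buys a 1/m chance of directing m times your contribution, leaving your expected donation unchanged - can be checked with a minimal Monte Carlo sketch. The setup below (m identical donors, one drawn uniformly to direct the whole pool) is a simplifying assumption for illustration, not a description of how CEA actually runs the lottery.

```python
import random

def simulate_lottery(contribution, n_donors, trials=100_000):
    """Estimate the expected amount one donor directs in a donor lottery.

    Assumes n_donors identical donors each contribute `contribution`,
    and one donor is drawn uniformly at random to direct the whole pool.
    """
    pool = contribution * n_donors
    directed = 0.0
    for _ in range(trials):
        # Our donor wins with probability 1/n_donors
        if random.randrange(n_donors) == 0:
            directed += pool
    return directed / trials

random.seed(0)
ev = simulate_lottery(contribution=100.0, n_donors=10)
# ev comes out close to 100.0: a 1/10 chance of directing 10x the
# contribution has the same expected value as donating directly.
```

The point of the simulation is just that the lottery is expectation-neutral for each entrant, which is why participation doesn't depend on what other entrants (EA or not) do.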

Comment author: RyanCarey 27 November 2017 02:18:29PM 1 point [-]

This points to another feature of the landscape model: entrepreneurs are locally situated by their existing background knowledge, and this is part of what lets them do what they do. Attempts to “move” them are likely to both meet with resistance and be ultimately counterproductive.... So what we need is much more granular cause prioritization, ideally right down to the size of a problem that can be worked on by an individual or team.

Or maybe we just need to work with people who are actually cause neutral?

Comment author: Benito 25 November 2017 05:34:38PM *  3 points [-]

I agree with Jess, I'd love to hear more about the decision making. I think that the EA Grants programme has been the highest impact thing CEA has done in the past 2-3 years, and think it could be orders of magnitude more impactful if they can reliably expect to get funding for good projects. That would require that (a) it is done regularly and (b) people can know the reasons CEA uses to decide on what projects to fund.

Responding to why building online tools for intellectual progress takes multiple people's full-time jobs: The original reddit codebase that LW 1.0 forked from was on the order of 4 years of 4 people's full-time work, so say at least 10 person-years of coding (we have had so far maybe 1 person-year of full-time coding work, and LW 2.0 has an entirely original codebase). While we're able to steal some of their insights (so we built a lot of the final product directly without having to fail and rebuild multiple times), LW 2.0 is building a lot of original features like an eigenkarma system, a sequences feature, and a bunch of other things that don't currently exist. We have not yet built even 50% of the features the site will have once we stop working on it.

Then also there's content curation and new epistemic and content norms to set up which takes time, and user interviews with writers in the community, and a ton of other things. The strategic overview points in the sorts of directions we'll likely build things.

Comment author: RyanCarey 26 November 2017 05:49:32AM 1 point [-]

I'd love to hear more about the decision making. I think that the EA Grants programme has been the highest impact thing CEA has done in the past 2-3 years, and think it could be orders of magnitude more impactful if they can reliably expect to get funding for good projects.

I agree with this. Although note that a lot of things would have to happen for EA grants to get more than 1 order of magnitude better. (They might have to make several improvements e.g. larger grants, more frequent grants, better recruitment of grantees, etc etc.)

Comment author: Jess_Whittlestone 25 November 2017 10:52:54AM 1 point [-]

Did SlateStarCodex even exist before 2009? I'm sceptical - the post archives only go back to 2013. Maybe not a big deal, but it does suggest at least some of your sample were just choosing options randomly/dishonestly.

Comment author: RyanCarey 25 November 2017 02:43:05PM 3 points [-]

They could also be referring to earlier writing by the same author at other addresses.

Comment author: Jess_Whittlestone 25 November 2017 11:02:23AM *  5 points [-]

This may be a bit late, but: I'd like to see a bit more explanation/justification of why the particular grants were chosen, and how you decided how much to fund - especially when some of the amounts are pretty big, and there's a lot of variation among the grants. e.g. £60,000 to revamp LessWrong sounds like a really large amount to me, and I'm struggling to imagine what that's being spent on.

Comment author: RyanCarey 25 November 2017 02:34:00PM *  2 points [-]

60k GBP doesn't sound like too much to revamp LessWrong at all:

  • probably years of time were spent on design/coding/content-curation for LW1, right?
  • LW has dozens of features that aren't available off the shelf
  • Starting the EA forum took a couple months of time. Remaking LessWrong will involve more content/moderator work, more design, and an order of magnitude more coding.

So it could easily take 1-2 person-years.
