
Reflections on Berkeley REACH

This post covers my findings so far in the experiment of running the Berkeley Rationality and Effective Altruism Community Hub (REACH). Crossposted on the EA Forum and LessWrong. tl;dr: REACH has been running since March 2018 (around three months); it's doing well; hundreds of people have enjoyed REACH; during the day, there...
Comment author: sdspikes 13 March 2018 11:33:42PM 0 points [-]

Publishing shorter and more digestible information more frequently, rather than publishing sprawling research less frequently. By taking the same amount of information and breaking it down into “minimal publishing units,” we make it easier for ourselves and others to understand and build upon, and get quicker feedback loops.

<3

Comment author: RyanCarey 19 December 2017 08:14:04PM *  9 points [-]

That is an excellent update. The strategic directions broadly make sense to me for all of the teams, and I, like many people, am really happy with the ways CEA has improved over the last year.

One item of feedback on the post: the description of mistakes is a bit long, boring, and over-the-top. Many of these things are not actually very important issues.

One suggestion re the EA Forum revamp: the lesserwrong.com site is looking pretty great these days. My main gripes (things like the font being slightly small for my preferences) could easily be fixed with some restyling. Some of their features, like including sequences of archived material, could also be ideal for the EA Forum use case. IDK whether the codebase is good, but recall that the EA Forum was originally created by restyling LessWrong 1, so the notion of stealing that code comes from a healthy tradition! Also, this last part is probably a bit too crazy (and too much work), but one can imagine a setup where you post content (and accept comments) on both sites at once.

That aside, it's really appreciated that you guys have taken the forum over this year. And in general, it's great to see all of this progress, so here's to 2018!

Comment author: sdspikes 19 December 2017 08:21:11PM 5 points [-]

Yeah, we have talked to the LW 2.0 team a bit about the possibility of using their codebase as a starting point or possibly doing some kind of actual integration, but we're still in the speculative phase at this point :)

Comment author: sdspikes 16 December 2017 05:04:09AM 2 points [-]

It looks like these all require relocating to Oxford; is that accurate?

Comment author: kbog  (EA Profile) 28 February 2017 08:51:49PM *  0 points [-]

It depends on the context. In many places there are people who really don't know what they're talking about and have easily corrected, false beliefs. Plus, most places on the Internet protect anonymity. If you are careful, it is very easy to avoid having an effect that is net negative on the whole, in my experience.

Comment author: sdspikes 01 March 2017 01:50:13AM 1 point [-]

As a Stanford CS (BS/MS '10) grad who took AI/machine learning courses in college from Andrew Ng, worked at Udacity with Sebastian Thrun, etc., I have mostly been unimpressed by non-technical folks trying to convince me that AI safety (for risks not caused by explicit human malfeasance) is a credible issue.

Maybe I have "easily corrected, false beliefs" but the people I've talked to at MIRI and CFAR have been pretty unconvincing to me, as was the book Superintelligence.

My perception is that MIRI has focused on an extremely specific kind of AI that seems to me unlikely to do much harm unless someone is recklessly playing with fire (or intentionally trying to set one). I'll grant that that's possible, but that's a human problem, not an AI problem, and it requires a human solution.

You don't try to prevent nuclear disaster by making friendly nuclear missiles; you try to keep them out of the hands of nefarious or careless agents, or provide disincentives for building them in the first place.

But maybe you do make friendly nuclear power plants? Not sure if this analogy worked out for me or not.

Comment author: sdspikes 13 August 2015 05:29:35PM 1 point [-]

There's already a Coursera course, but I don't know how good it is: https://www.coursera.org/learn/altruism