Comment author: SoerenMind  (EA Profile) 08 June 2017 09:00:41PM *  3 points [-]

I got linked here while browsing a pretty random deep learning blog; you're getting attention! (https://medium.com/intuitionmachine/seven-deadly-sins-and-ai-safety-5601ae6932c3)

Comment author: SoerenMind  (EA Profile) 03 May 2017 09:54:53AM 7 points [-]

Whether or not this turns out to be a high-impact cause area, I'd like to give some encouragement for doing and writing up such an exploratory cause analysis; I think this is high EV.

Comment author: Austen_Forrester 10 March 2017 05:44:00AM -1 points [-]

Those guiding principles are good. However, I wish you had included one against doing massive harm to the world. CEA endorses the “Foundational Research Institute,” a pseudo-think tank that promotes dangerous ideas of mass-termination of human and non-human life, not excluding extinction. By promoting this organization, CEA is promoting human, animal, and environmental terrorism on the grandest scale. Self-styled “effective altruists” try to pass themselves off as benevolent, but the reality is that they themselves are one of the biggest threats to the world by promoting terrorism and anti-spirituality under the cloak of altruism.

Comment author: SoerenMind  (EA Profile) 10 March 2017 09:57:05AM *  16 points [-]

Fair point about not doing harm, but I feel like you're giving the Foundational Research Institute a treatment that is both unfair and unnecessary to get your point across.

Comment author: SoerenMind  (EA Profile) 09 March 2017 11:16:36AM 2 points [-]

Congrats on making this; it seems like a modest but still powerful way forward.

Have you thought about making it possible for community members to officially endorse these principles?

Comment author: SoerenMind  (EA Profile) 13 February 2017 02:39:10PM 3 points [-]

If the funding for a problem with a known total funding need (e.g. creating drug X, which costs $1b) goes up 10x, its solvability goes up 10x too. How do you square that with the fact that this makes problems with low funding look very intractable? I guess the high neglectedness makes up for it, but this definition of solvability doesn't quite capture my intuition.
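
To make the cancellation concrete, here is a minimal sketch; the formulas for solvability and neglectedness below are my own toy assumptions for illustration, not the framework's actual definitions.

```python
# Toy illustration (my own assumptions, not an official framework definition):
# assume solvability ~ current_funding / total_cost and neglectedness ~ 1 / current_funding.

def score(importance, current_funding, total_cost):
    solvability = current_funding / total_cost    # toy definition
    neglectedness = 1.0 / current_funding
    return importance * solvability * neglectedness

total_cost = 1e9      # hypothetical: a drug costing $1b to develop
importance = 1.0

print(score(importance, 1e6, total_cost))   # low current funding
print(score(importance, 1e7, total_cost))   # 10x the funding: solvability rises 10x,
                                            # neglectedness falls 10x, so the score is unchanged
```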

Comment author: Ben_Todd 05 February 2017 05:31:16PM 8 points [-]

Thanks for the post. I broadly agree.

There are some more remarks on "gaps" in EA here: https://80000hours.org/2015/11/why-you-should-focus-more-on-talent-gaps-not-funding-gaps/

Two quick additions:

1) I'm not sure spending on RCTs is especially promising. Well-run RCTs with enough power to actually update you can easily cost tens of millions of dollars, so you'd need to be considering spending hundreds of millions for it to be worth it. We're only just getting to that scale. GiveWell has considered funding RCTs in the past and rejected the idea, I think for this reason (though I'm not sure).

2) It might be interesting for someone to think more about multi-arm bandit problems, since it seems like it could be a good analogy for cause selection. An approximate solution is to exploit your best opportunity 90% of the time, then randomly select another opportunity to explore 10% of the time. https://en.wikipedia.org/wiki/Multi-armed_bandit

Comment author: SoerenMind  (EA Profile) 06 February 2017 11:41:46PM *  5 points [-]

An approximate solution is to exploit your best opportunity 90% of the time, then randomly select another opportunity to explore 10% of the time.

This is the epsilon-greedy strategy with epsilon = 0.1, which is probably a good rule of thumb when one's prior for each cause has a thin-tailed distribution (e.g. Gaussian). The optimal value of epsilon increases with the variance of our prior for each cause. So if our confidence interval for a cause's cost-effectiveness spans more than an order of magnitude (high variance), a higher value of epsilon could be better. The point is that the rule of thumb doesn't really apply when you think some causes are much better than others and you have plenty of uncertainty.
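
For readers who haven't seen it, here is a minimal epsilon-greedy sketch; the cause names and payoff numbers are hypothetical, and real cause selection would of course involve noisy, delayed feedback rather than clean rewards.

```python
import random

# Minimal epsilon-greedy sketch with hypothetical causes and payoffs.
causes = ["A", "B", "C"]
true_payoffs = {"A": 1.0, "B": 3.0, "C": 10.0}   # unknown to the decision-maker
estimates = {c: 0.0 for c in causes}
counts = {c: 0 for c in causes}
epsilon = 0.1                                    # explore 10% of the time

for step in range(1000):
    if random.random() < epsilon:
        choice = random.choice(causes)                     # explore a random cause
    else:
        choice = max(causes, key=lambda c: estimates[c])   # exploit the current best
    reward = random.gauss(true_payoffs[choice], 1.0)       # noisy feedback
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]  # running mean

print(estimates)   # roughly recovers the true payoffs, with most pulls going to "C"
```

With heavy-tailed priors over effectiveness, you would raise epsilon or track uncertainty explicitly, which is the point above.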

That said, if you have realistic priors for the effectiveness of each cause, you can calculate an optimal solution using Gittins indices.

Comment author: SoerenMind  (EA Profile) 02 February 2017 04:24:04PM 1 point [-]

If I donated 10% of my income then this would mean that I have to work ~11% longer to get to that same target savings.

That doesn't seem right unless you have zero expenses. Roughly, you'd need to multiply the 11% by (monthly income / monthly savings).
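
A minimal sketch of that arithmetic, with hypothetical income and expense numbers, comparing the exact extra time against the rule of thumb above:

```python
# Hypothetical numbers, purely to illustrate the correction.
monthly_income = 5000.0
monthly_expenses = 2000.0
donation_rate = 0.10

savings_before = monthly_income - monthly_expenses                        # 3000
savings_after = monthly_income * (1 - donation_rate) - monthly_expenses   # 2500

# Time to reach a fixed savings target scales as 1 / (monthly savings).
exact_extra_time = savings_before / savings_after - 1                     # 20% longer

# Rule of thumb: ~11% * (monthly income / monthly savings).
rule_of_thumb = (donation_rate / (1 - donation_rate)) * (monthly_income / savings_before)

print(f"exact: {exact_extra_time:.1%}, rule of thumb: {rule_of_thumb:.1%}")
# exact: 20.0%, rule of thumb: 18.5% -- close as long as the donation is small relative to savings
```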

Comment author: RyanCarey 29 December 2016 01:50:09AM *  8 points [-]

Well, you might be getting toward the frontiers of where published AI Safety-focused books can take you. From here, you might want to look to AI Safety agendas and specific papers, and AI textbooks.

MIRI has a couple of technical agendas, covering both its more foundational and its more machine learning-based research on AI Safety. Dario Amodei of OpenAI and some other researchers have also put out a machine learning-focused agenda. These agendas cite and are cited by a bunch of useful work. There's also great unpublished work by Paul Christiano on AI Control.

In order to understand and contribute to current research, you will also want to do some background reading. Jan Leike (now of DeepMind) has put out a syllabus of relevant reading materials through 80,000 Hours that includes some good suggestions. Personally, for a math student like yourself wanting to start out with theoretical computer science, George Boolos' book Computability and Logic might be useful. Learning Python and TensorFlow is also great in general.

To increase your chances of getting into this career, you might want to look at the entry requirements for some specific grad schools. You might also want to go for internships at these groups (or at other groups that do similar work).

In academia, some labs analyzing safety problems are:

  • UC Berkeley (especially Russell)
  • Cambridge (Adrian Weller)
  • ANU (Hutter)
  • Montreal Institute for Learning Algorithms
  • Oxford
  • Louisville (Yampolskiy)

In industry, DeepMind and OpenAI both have safety-focused teams.

Grad school or an internship at any of these places (even though you won't necessarily end up in a safety-focused team) would be a sweet step toward working on AI Safety as a career.

Feel free to reach out by email at (my first name) at intelligence.org with further questions, or for more personalized suggestions. (The same offer goes to similarly interested readers.)

Comment author: SoerenMind  (EA Profile) 31 December 2016 02:54:29PM 0 points [-]

Any details on safety work in Montreal and Oxford (other than FHI, I assume)? I might use that for an application there.

Comment author: Evan_Gaensbauer 09 November 2016 12:06:53PM 3 points [-]

[epistemic status: this pattern-matches behaviour I've seen on LessWrong before, so I'm suspicious there may be a mass downvoter here. It could be a coincidence. Not above 40% confident at this point. Feel free to ignore.]

Someone keeps consistently downvoting Kerry's comments. I've been on LessWrong for a while, where that was an occasional nuisance for everyone, but a real bother for the few users who got the brunt of it. I imagine there's a future for the EA Forum where the almost universal upvoting stops and more downvoting begins. In all honesty, I think that would lead to healthier discourse. However, I'd like to denormalize mass-downvoting all of one user's comments. Whoever you are, even if you're really mad at Kerry right now, I think we can at least agree we don't want to set a precedent of downvoting comments without giving feedback on why we disagree. I'd like to set a precedent that we do.

Joey and Michael have both weighed in that they don't think it's worth a CEA team spending a lot of time on this rather than a little. Kerry agreed. Rest assured, then, that CEA staffers aren't wasting time and valuable donor money. Even if you think this whole thread is a stupid idea of Kerry's, and his suggestions are stupid too, please come out and say why, so that whatever problem you perceive can be resolved.

Comment author: SoerenMind  (EA Profile) 09 November 2016 05:10:58PM 1 point [-]

Can any mods see where the downvotes come from and whether there's a pattern?

Comment author: SoerenMind  (EA Profile) 15 August 2016 07:11:51AM 1 point [-]

My guess is that the common objections are hard enough to find that few people with objections will take the trouble to go to the page. Just a guess; you have the page-view stats...

I'm not quite sure how to fix it; you could have an FAQ tab at the top that leads there. A dropdown on the 'About' link (like on the 80k page) could be very effective.
