Comment author: Zeke_Sherman 28 March 2017 02:47:01AM 1 point [-]

This is odd. Personally, my reaction is that I want to get to a project before other people do. Does bad research really make it harder to find good research? This doesn't seem like a likely phenomenon to me.

Comment author: Raemon 29 March 2017 11:09:06PM 1 point [-]

How could bad research not make it harder to find good research? When you're looking for research, you have to sift through additional material before you find the good research, and ascertaining whether research is good is fairly costly in the first place.

Comment author: Raemon 25 March 2017 10:32:07PM *  7 points [-]

Thanks for doing this!

My sense is that what people are missing is a set of social incentives to get started. Looking at any one of these projects, they feel overwhelming; they feel like they require skills that I don't have. It feels like if I start working on it, then EITHER I'm blocking someone who's better qualified from working on it, OR someone who's better qualified will do it anyway and my efforts will be futile.

Or, in the case of research, my bad quality research will make it harder for people to find good quality research.

Or, in the case of something like "start one of the charities GiveWell wants people to start", it feels like... just, a LOT of work.

And... this is all true. Kind of. But it's also true that the way people get good at things is by doing them. And I think it's sort of necessary for people to throw themselves into projects they aren't prepared for, as long as they can get tight feedback loops that enable them to improve.

I have half-formed opinions about what's needed to resolve that, which can be summarized as "better-triaged mentorship." I'll try to write up more detailed thoughts soon.

Comment author: Raemon 19 March 2017 08:54:08PM 2 points [-]

Glad to see the plans laid out.

I think it'd have made more sense to do the "EA Funds" experiment in Quarter 4, when it would tie in more with people's annual giving habits.

I do think it may be valuable to try even if the donations are not counterfactual (for purposes of being able to coordinate donations better).

In response to Open Thread #36
Comment author: Evan_Gaensbauer 17 March 2017 03:26:28AM 3 points [-]

As people age, their lives become more difficult. Physically and mentally, they just aren't where they previously were. Most effective altruists are younger people, and they may not take into consideration how risky it can be to not have any savings cushion in case things change. We can't necessarily count on pension plans to cover us in our old age. We can't assume our health will always be what it is now. A lot of people will face harder times in the future, and being put in the mindset of assuming one won't face personal hardship, so one need not save money, is reckless.

It's one thing if someone aspires to be wealthy, retire at age 30 like Mr. Money Mustache, or live a luxurious retirement. But it's dangerous to create a culture in EA where people might be accused of hypocrisy for even saving enough for retirement to cover their own basic living expenses. It's also dangerous for us to presume that each of our lives will go so smoothly that we can work until we die, or that we won't get sick. While talking about these things in the abstract may be all well and fine, I want to register my conviction that using social influence, i.e., peer pressure, alone to normalize "don't/no need to save for retirement" as practical advice among effective altruists is potentially dangerous.

Comment author: Raemon 17 March 2017 03:04:41PM 1 point [-]

Very much agreed. I was pretty worried to see the initial responses saying 'saving for retirement isn't EA'.

In response to EA Funds Beta Launch
Comment author: Raemon 05 March 2017 08:53:26PM *  2 points [-]

I currently believe MIRI is the best technical choice for Far Future concerns, but that meta-ish, human-capital-building orgs like 80k or CFAR are plausibly the second-best choice.

Are those the sorts of things that would fall under "Far Future" or "Movement Building"?

Comment author: kbog  (EA Profile) 28 February 2017 08:51:49PM *  0 points [-]

It depends on the context. In many places there are people who really don't know what they're talking about and have easily corrected false beliefs. Plus, most places on the Internet protect anonymity. If you're careful, it's very easy to avoid having a net-negative effect on the whole, in my experience.

Comment author: Raemon 01 March 2017 05:29:32PM 5 points [-]

While I didn't elaborate on my thoughts in the OP, essentially I was aiming to say "if you'd like to play a role in advocating for AI safety, the first steps are to gain skills so you can persuade the right people effectively." I think some people jump from "become convinced that AI is an issue" to "immediately start arguing with people on the internet".

If you want to do that, I'd say it's important to:

a) gain a firm understanding of AI and AI safety,

b) gain an understanding of common objections and the modes of thought surrounding those objections, and

c) practice engaging with people in a way that actually has a positive impact (do this practice on lower-stakes issues, not AI).

My experience is that positive interactions involve a lot of work and emotional labor.

(I still argue occasionally about AI on the internet and I think I've regretted it basically every time)

I think it makes more sense to aim for high-impact influence, where you cultivate a lot of valuable skills that get you hired at actual AI research firms, where you can then shape the culture in a way that prioritizes safety.

Comment author: jsteinhardt 28 February 2017 06:48:15AM 10 points [-]

I already mention this in my response to kbog above, but I think EAs should approach this cautiously; AI safety is already an area with a lot of noise, with a reputation for being dominated by outsiders who don't understand much about AI. I think outreach by non-experts could end up being net-negative.

Comment author: Raemon 28 February 2017 08:47:21PM 0 points [-]

I agree with this concern, thanks. When I rewrite this post in a more finalized form, I'll include reasoning like this.

Comment author: AlexMennen 27 February 2017 04:59:23AM 3 points [-]

5) Look at the MIRI and 80k AI Safety syllabus, and see if how much of it looks like something you'd be excited to learn. If applicable to you, consider diving into that so you can contribute to the cutting edge of knowledge. This may make most sense if you do it through

...

Comment author: Raemon 27 February 2017 05:25:51AM 2 points [-]

Thanks, fixed. I had gotten partway through updating that to say something more comprehensive, decided I needed more time to think about it, and then accidentally saved it anyway.

Comment author: lifelonglearner 26 February 2017 04:42:09AM 2 points [-]

I'm sure what their respective funding constraints are.

Should there be a "not" in the middle here, or are you just saying that you have good info on their funding situation?

Comment author: Raemon 26 February 2017 06:14:27AM 2 points [-]

Heh, correct. Will update soon when I have a non-phone device to do it with.

Comment author: Paul_Crowley 25 February 2017 09:05:30PM 2 points [-]

Nitpick: "England" here probably wants to be something like "the south-east of England". There's not a lot you could do from Newcastle that you couldn't do from Stockholm; you need to be within travel distance of Oxford, Cambridge, or London.

Comment author: Raemon 25 February 2017 10:37:20PM 1 point [-]

Thanks, fixed.

Actually, is anyone other than DeepMind in London? (The section where I brought this up was on volunteering, which I assume is less relevant for DeepMind than for FHI.)
