Comment author: Elizabeth 24 June 2017 05:23:01PM -1 points

I think costly signaling is the wrong phrase here. Costly signaling is about gain for the signaler. This seems better modeled as people trying to indirectly purchase the good "rich people donate lots to charity." Similar to people who are unwilling to donate to the government (so they don't think the government is better at spending money than they are) but do advocate for higher taxes (meaning they think the government is better at spending money than other people are). They're trying to purchase the good "higher taxes for everyone".

Comment author: Raemon 24 June 2017 05:33:00PM 4 points

Maybe, but the thing I'm trying to get at here is "a bunch of people saying that rich people should donate to X" is a less credible signal than "a bunch of people saying X thing is important enough that they are willing to donate to it themselves."


Earning to Give as Costly Signalling

There's a background belief that informs a lot of my Effective Altruism thinking, and it might be a good time to challenge it: I think the value of most earning-to-give is primarily a sort of costly signaling to attract the attention of the extremely rich (who completely dwarf the funding capabilities...
Comment author: Raemon 18 June 2017 05:29:48PM 2 points

I responded here

Givewell's made it their mission to find the best nonprofits working in exactly the near-term, urgent but high-impact space.

Comment author: John_Maxwell_IV 17 June 2017 04:50:33AM 1 point
Comment author: Raemon 18 June 2017 05:29:23PM 0 points

I was about to excitedly list my own contribution here, and then actually clicked yours and... ah. I see. :P

(It is a great idea but not quite in the spirit of the thing I was about to share. lol)

Comment author: Zeke_Sherman 28 March 2017 02:47:01AM 1 point

This is odd. Personally, my reaction is that I want to get to a project before other people do. Does bad research really make it harder to find good research? This doesn't seem like a likely phenomenon to me.

Comment author: Raemon 29 March 2017 11:09:06PM 1 point

How could bad research not make it harder to find good research? When you're looking for the research, you have to look through additional things before you find the good research, and the quality of research is fairly costly to ascertain in the first place.

Comment author: Raemon 25 March 2017 10:32:07PM * 8 points

Thanks for doing this!

My sense is that what people are missing is a set of social incentives to get started. Looking at any one of these, they feel overwhelming; they feel like they require skills that I don't have. It feels like if I start working on it, then EITHER I'm blocking someone who's better qualified from working on it, OR someone who's better qualified will do it anyway and my efforts will be futile.

Or, in the case of research, my bad quality research will make it harder for people to find good quality research.

Or, in the case of something like "start one of the charities Givewell wants people to start", it feels like... just, a LOT of work.

And... this is all true. Kind of. But it's also true that the way people get good at things is by doing them. And I think it's sort of necessary for people to throw themselves into projects they aren't prepared for, as long as they can get tight feedback loops that enable them to improve.

I have half-formed opinions about what's needed to resolve that, which can be summarized as "better triaged mentorship." I'll try to write up more detailed thoughts soon.

Comment author: Raemon 19 March 2017 08:54:08PM 2 points

Glad to see the plans laid out.

I think it'd have made more sense to do the "EA Funds" experiment in Quarter 4, where it ties in more with people's annual giving habits.

I do think it may be valuable to try even if the donations are not counterfactual (for purposes of being able to coordinate donations better).

In response to Open Thread #36
Comment author: Evan_Gaensbauer 17 March 2017 03:26:28AM 3 points

As people age their lives become more difficult. Physically and mentally, they just aren't where they previously were. Most effective altruists are younger people, and they may not take into consideration how risky it can be to not have any savings cushion in the case things change. We can't necessarily count on pension plans to cover us in our old age. We can't assume our health will always be what it is now. A lot of people will face harder times in the future, and being put in the mindset of assuming one won't face personal hardship, so one need not save money, is reckless.

It's one thing if someone aspires to be wealthy, retire at age 30 like Mr. Money Mustache, or live a luxurious retirement. But it's dangerous to create a culture in EA where people might be accused of hypocrisy for even saving enough for retirement to cover their own basic living expenses. It's also dangerous for us to presume that each of our lives will go so easily that we can work until we die, or that we won't get sick. While talking about these things in the abstract may be all well and fine, I want to register my conviction that using social influence, i.e., peer pressure, alone to normalize "don't/no need to save for retirement" as practical advice among effective altruists is potentially dangerous.

Comment author: Raemon 17 March 2017 03:04:41PM 1 point

Very much agreed. I was pretty worried to see the initial responses saying 'saving for retirement isn't EA'.

In response to EA Funds Beta Launch
Comment author: Raemon 05 March 2017 08:53:26PM * 2 points

I currently believe MIRI is the best technical choice for Far Future concerns, but that meta-ish human-capital building orgs like 80k or CFAR are plausibly the second-best choice.

Are those the sorts of things that would fall under "Far Future" or "Movement Building?"

Comment author: kbog (EA Profile) 28 February 2017 08:51:49PM * 0 points

It depends on the context. In many places there are people who really don't know what they're talking about and have easily corrected, false beliefs. Plus, most places on the Internet protect anonymity. If you are careful it is very easy to avoid having an effect that is net negative on the whole, in my experience.

Comment author: Raemon 01 March 2017 05:29:32PM 5 points

While I didn't elaborate on my thoughts in the OP, essentially I was aiming to say "if you'd like to play a role in advocating for AI safety, the first steps are to gain skills so you can persuade the right people effectively." I think some people jump from "become convinced that AI is an issue" straight to "immediately start arguing with people on the internet".

If you want to do that, I'd say it's important to:

a) gain a firm understanding of AI and AI safety, b) gain an understanding of common objections and the modes of thought surrounding those objections, and c) practice engaging with people in a way that actually has a positive impact (do this practice on lower-stakes issues, not AI). My experience is that positive interactions involve a lot of work and emotional labor.

(I still argue occasionally about AI on the internet and I think I've regretted it basically every time)

I think it makes more sense to aim for high-impact influence, where you cultivate a lot of valuable skills that get you hired at actual AI research firms, where you can then shape the culture in a way that prioritizes safety.
