Comment author: jserv 01 December 2017 05:44:42PM 1 point

Thanks for sharing this. As a former volunteer, I understand where you're coming from completely. Unfortunately, the scene you described in the woman's house is one that occurs even in the United Kingdom. The conversation you had with the site visitor is quite moving; if you remember anything more specific about her answers, I'd be interested to read them.

I have been doing some research on volunteer programmes, especially those that send volunteers abroad, and on the 'voluntourism' industry. Like Liam, I'm wondering if there is scope for EA to compile a list of the more effective volunteer organisations.

From what I can tell, the key difference seems to be whether the charity is searching specifically for volunteers with skills that are not available locally.

I am considering taking a voluntary placement with VSO in 2018, one that I have selected for its emphasis on skills and anti-poverty goals. Any other recommendations or comments would be very welcome.

Comment author: Liam_Donovan 01 December 2017 08:23:37PM 1 point

Maybe JPAL-IPA field research qualifies in some sense?

Comment author: turchin 29 November 2017 03:52:30PM 0 points

There are types of arguments that don't depend on my motivation, such as "deals" and "questions".

For example, if I say "I will sell you 10 paperclips if you do not kill me", then my motivation is evidence that I will stick to my side of the deal.

Comment author: Liam_Donovan 01 December 2017 01:59:34PM 0 points

This doesn't make sense either: for example, your questions could be selected in a biased manner to manipulate the AI, and you could be disingenuous when making deals. Generally, it seems like good epistemic practice to discount arguments of any form, including questions, when the person making them is existentially biased towards one side of the discussion.

Comment author: Lila 27 November 2017 05:36:50PM 0 points

Is the AI supposed to read this explanation? Seems like it tips your hand.

Comment author: Liam_Donovan 01 December 2017 01:41:58PM 1 point

Wouldn't this be an issue with or without an explanation? It seems like an AI could reasonably infer, from the other actions that humans in general (or Alexey in particular) take, that they are highly motivated to argue against being exterminated. IDK if I'm missing something obvious -- I don't know much about AI safety.

Comment author: Liam_Donovan 25 November 2017 03:08:57PM 0 points

Are there, in fact, any such trips organized by EA charities?

Comment author: Zeke_Sherman 22 November 2017 01:20:00AM 0 points

It's nice to imagine things. But I'll wait for actual EAs to tell me about what does or doesn't upset them before drawing conclusions about what they think.

Comment author: Liam_Donovan 24 November 2017 11:29:22PM -1 points

Considering that most people would be unhappy to be told that they're more likely to be a rapist because of their race, we should have a strong prior that many Effective Altruists would feel the same way. What strong evidence do you have that, in fact, minorities in EA are just fine with being told their race makes them more likely to be rapists? Seems like a very strange assumption.

Apart from Lila's argument, the "non-white people are more likely to be rapists" claim is a terrible line of thinking because (IMO) it's likely to build racist modes of thought: assigning negative characteristics to minorities based on dubious evidence seems very likely to strengthen bad cognitive patterns and weaken good judgement around related issues.

If the evidence were incontrovertible, this might be acceptable, but it's nowhere near the required standard of proof to overcome the strong prior that humans are equally likely to commit crimes regardless of race (among other reasons, because race is largely a social construct). Additionally, the long history of using false statistics and "science" to bolster white supremacy should make one more skeptical of numbers like this.