Comment author: richard_ngo 15 August 2018 09:23:16PM *  2 points [-]

I think it's a mischaracterisation to think of virtue ethics in terms of choosing the most virtuous actions (in fact, one common objection to virtue ethics is that it doesn't help very much in choosing actions). I think virtue ethics is probably more about being the most virtuous person. There's a difference: e.g. you might not be virtuous if you choose normally-virtuous actions for the wrong reasons.

For similar reasons, I disagree with cole_haus that virtue ethicists choose actions to produce the most virtuous outcomes (although there is at least one school of virtue ethics, the eudaimonists, which seems vaguely consequentialist). Note, however, that I haven't actually looked into virtue ethics in much detail.

Edit: contractarianism is a fourth approach which doesn't fit neatly into either division

Comment author: MvdSteeg 13 August 2018 07:14:29AM 1 point [-]

It seems strange to me that only pharmaceutical companies would have to achieve said index. What is it about a Viagra company that makes them more responsible for solving global health issues than e.g. IKEA?

The only thing I can come up with on the fly is that they take up resources from the same pool of researchers. I'm not sure that's a satisfactory reason for disadvantaging one company over another, though.

What if nationwide corporate taxes were raised by a tiny margin and pharmaceutical companies could compete for DALY subsidies?

(I realize the chance of me having a better idea than the writers of the book is rather minuscule. Just looking for holes in my view.)

Comment author: richard_ngo 14 August 2018 01:30:35PM 0 points [-]

My default position would be that IKEA have an equal obligation, but that it's much more difficult and less efficient to try to make IKEA fulfil that obligation.

Comment author: richard_ngo 14 August 2018 01:15:21PM *  6 points [-]

A few doubts:

  1. It seems like MSR requires a multiverse large enough to have many well-correlated agents, but not large enough to run into the problems involved with infinite ethics. Most of my credence is on no multiverse or infinite multiverse, although I'm not particularly well-read on this issue.

  2. My broad intuition is something like "Insofar as we can know about the values of other civilisations, they're probably similar to our own. Insofar as we can't, MSR isn't relevant." There are probably exceptions, though (e.g. we could guess the direction in which an r-selected civilisation's values would vary from our own).

  3. I worry that MSR is susceptible to self-mugging of some sort. I don't have a particular example, but the general idea is that you're correlated with other agents even if you're being very irrational. And so you might end up doing things which seem arbitrarily irrational. But this is just a half-fledged thought, not a proper objection.

  4. And lastly, I would have much more confidence in FDT and superrationality in general if there were a sensible metric of similarity between agents, apart from correlation. (If you always cooperate in prisoner's dilemmas, then your choices are perfectly correlated with CooperateBot's, but intuitively it'd still be more rational to defect against CooperateBot, because your decision algorithm isn't similar to CooperateBot's in the same way that it's similar to your psychological twin's.) I guess this requires a solution to logical uncertainty, though.
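To make point 4 concrete, here's a toy sketch (mine, not from any FDT paper; the payoff values are the standard hypothetical prisoner's dilemma numbers) showing that perfect correlation of choices is not the same thing as algorithmic similarity:

```python
# Toy one-shot prisoner's dilemma. Payoffs (hypothetical standard values):
# mutual cooperation 3, mutual defection 1, exploiting a cooperator 5,
# being exploited 0.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def cooperate_bot(opponent):
    """CooperateBot: always cooperates, ignoring the opponent entirely."""
    return "C"

def unconditional_cooperator(opponent):
    """An agent whose output happens to match CooperateBot's on every input,
    even though we'd like to say its decision algorithm differs."""
    return "C"

def defector(opponent):
    """Always defects."""
    return "D"

def play(agent_a, agent_b):
    """Return the payoff pair for one round between two agents."""
    return PAYOFFS[(agent_a(agent_b), agent_b(agent_a))]

# The unconditional cooperator's choices are perfectly correlated with
# CooperateBot's...
assert play(unconditional_cooperator, cooperate_bot) == (3, 3)
# ...yet defecting against CooperateBot does strictly better, because
# CooperateBot's move doesn't depend on yours in any way.
assert play(defector, cooperate_bot) == (5, 0)
```

The sketch only restates the intuition: a similarity metric based purely on correlated outputs can't distinguish `unconditional_cooperator` from your psychological twin, which is exactly the gap the comment is pointing at.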

Happy to discuss this more with you in person. Also, I suggest you cross-post to Less Wrong.

Comment author: richard_ngo 12 June 2018 12:28:58AM 2 points [-]

As a followup to byanyothername's questions: Could you say a little about what distinguishes your coaching from something like a CFAR workshop?

Comment author: richard_ngo 05 June 2018 01:13:07PM 5 points [-]

Kudos for doing this. The main piece of advice which comes to mind is to make sure to push this via university EA groups. I don't think you explicitly identified students as a target demographic in your post, but current students and new grads have the three traits which make the hotel such an attractive proposition: they're unusually time-rich, cash-poor, and willing to relocate.