Comment author: CalebWithers  (EA Profile) 07 May 2018 12:33:14PM *  9 points [-]

Thanks for writing this - it seems worthwhile to be strategic about potential "value drift", and this list is definitely useful in that regard.

I have the tentative hypothesis that a framing with slightly more self-loyalty would be preferable.

In the vein of Denise_Melchin's comment on Joey's post, I believe most people whose values appear to have "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think. In the vein of the Replacing Guilt series, I don't think that attempting to override these other values is generally sustainable for long-term motivation.

This hypothesis would point away from pledges or 'locking in' (at least for the sake of avoiding value drift) and, I think, towards a slightly different framing of some suggestions: for example, rather than spending time with value-aligned people to "reduce the risk of value drift", we might instead recognize that spending time with value-aligned people is an opportunity both to meet our social needs and to cultivate our impact.

Comment author: mhpage 10 January 2018 12:04:57PM 13 points [-]

This comment is not directly related to your post: I don't think the long-run future should be viewed as a cause area. It's simply where most sentient beings live (or might live), and therefore it's a potential treasure trove of cause areas (or problems) that should be mined. Misaligned AI leading to an existential catastrophe is an example of a problem that impacts the long-run future, but there are so, so many more. Pandemic risk is a distinct problem. Indeed, there are many more problems even if you're just thinking about the possible impacts of AI.

Comment author: CalebWithers  (EA Profile) 14 January 2018 12:35:07AM 0 points [-]

In the same vein as this comment and its replies: I'm disposed to framing the three as expansions of the "moral circle". See, for example:

Comment author: CalebWithers  (EA Profile) 14 December 2017 02:17:53AM *  1 point [-]

I'm weakly confident that EA thought leaders who would consider seriously the implications of ideas like quantum immortality generally take a less mystical, more reductionist view of quantum mechanics, consciousness, and personal identity, along the lines of the following:

Comment author: Tee 02 September 2017 08:23:10PM 2 points [-]

09/02/17 Post Update: The previously truncated graphs "This cause is the top priority" and "This cause is the top or near top priority" have been adjusted to better present the data.

Comment author: CalebWithers  (EA Profile) 04 September 2017 02:31:12AM 3 points [-]

It seems that the numbers in the top-priority paragraph don't match up with the chart.

Comment author: CalebWithers  (EA Profile) 03 August 2017 01:32:45AM *  3 points [-]

I'll throw in Bostrom's 'Crucial Considerations and Wise Philanthropy', on "considerations that radically change the expected value of pursuing some high-level subgoal".

In response to EA Funds Beta Launch
Comment author: CalebWithers  (EA Profile) 17 March 2017 12:23:12AM 4 points [-]

A thought: EA funds could be well-suited for inclusion in wills, given that they're somewhat robust to changes in the charity effectiveness landscape

Comment author: CalebWithers  (EA Profile) 12 February 2017 03:43:35AM 1 point [-]

Second, we should generally focus safety research today on fast takeoff scenarios. Since there will be much less safety work in total in these scenarios, extra work is likely to have a much larger marginal effect.

Does this conclusion depend on how optimistic or pessimistic one is about our chances of achieving alignment in different takeoff scenarios, i.e. where on a curve something like this we expect to be for a given takeoff scenario?
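To illustrate the shape of the intuition: if the probability of achieving alignment is a roughly sigmoidal function of total safety work, then the marginal value of extra work depends heavily on where along the curve a scenario sits. The sketch below is a toy model; the logistic form and all numbers are purely illustrative assumptions, not anything from the quoted argument.

```python
import math

def p_success(work, midpoint=100.0, steepness=0.05):
    # Toy logistic curve: probability of achieving alignment as a
    # function of total safety work (hypothetical, unitless numbers).
    return 1.0 / (1.0 + math.exp(-steepness * (work - midpoint)))

def marginal_value(work, eps=1e-3):
    # Finite-difference estimate of the marginal effect of a little
    # extra safety work at a given point on the curve.
    return (p_success(work + eps) - p_success(work)) / eps

# Hypothetical positions: a scenario sitting near the curve's steep
# middle vs. one far along the flat upper tail.
mid_scenario, tail_scenario = 80.0, 180.0
print(marginal_value(mid_scenario) > marginal_value(tail_scenario))
```

On this toy curve, extra work matters most near the steep middle and little on the flat tails, so whether "much less total work" implies "much larger marginal effect" depends on where a given takeoff scenario is expected to land.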

Comment author: CalebWithers  (EA Profile) 06 January 2017 07:51:14AM *  0 points [-]

Thanks Paul and Carl for getting this off the ground!

I unfortunately haven't been able to arrange to contribute tax-deductibly in time (I am outside the US), but for anyone considering running future lotteries:

I think this is a great idea, and intend to contribute my annual donations - currently in the high 4-figures - through donation lotteries such as this if they are available in the future.

