Comment author: DavidNash 17 August 2017 02:26:52PM 0 points [-]

I think there's one happening in London in November that will discuss questions of this nature - it may be worth seeing if they will add it to the schedule if it's not already there.

Comment author: MichaelPlant 17 August 2017 01:53:57PM 0 points [-]

This is sort of a meta-comment, but there's loads of important stuff here, each of which could have its own thread. Could I suggest someone (else) organises a (small) conference to discuss some of these things?

I've got quite a few things to add on the ITN framework but nothing I can say in a few words. Relatedly, I've also been working on a method for 'cause search' - a way of finding all the big causes in a given domain - which is the step before cause prioritisation, but that's not something I can write out succinctly either (yet, anyway).

Comment author: John_Maxwell_IV 17 August 2017 06:03:22AM 1 point [-]

Fair points. I'm sorry.

Comment author: Peter_Hurford  (EA Profile) 17 August 2017 01:33:41AM 0 points [-]

I don't know, but I would guess that people would give up conservation under reflective equilibrium (assuming and insofar as conservation is, in fact, net negative).

Comment author: Roxanne_Heston  (EA Profile) 17 August 2017 01:29:53AM 0 points [-]

The next upcoming deadline is August 30th, with new application deadlines every quarter. You can find more details about this here: https://www.eaglobal.org/eagx-when/

In response to Introducing Enthea
Comment author: geoffreymiller  (EA Profile) 16 August 2017 11:54:36PM 1 point [-]

Psychedelics could bring many benefits, but the EA community needs to be careful not to become associated with flaky New Age beliefs. I think we can do this best by being very specific about how psychedelics could help with certain kinds of 'intention setting', e.g. 1) expanding the moral circle: promoting empathy, turning abstract recognition of other beings' sentience into a more gut-level connection to their suffering; 2) career re-sets: helping people step back from their daily routines and aspirations to consider alternative careers, lifestyles, and communities, e.g. 80,000 Hours applications; 3) far-future goal setting: getting more motivated to reduce X-risk by envisioning far-future possibilities more vividly, as in Bostrom's 'Letter from Utopia'; 4) recalibrating utility ceilings: becoming more familiar with states of extreme elation and contentment can remind EAs that we're fighting for trillions of future beings to be able to experience those states whenever they want.

Comment author: geoffreymiller  (EA Profile) 16 August 2017 10:25:58PM 3 points [-]

I would love to see some '40,000 hours' materials for mid-career people pivoting into EA work.

Our skills, needs, constraints, and opportunities are quite different from those of 20-somethings. For example, if one has financial commitments (child support, mortgage, debts, alimony), it's not realistic to go back to grad school or an unpaid internship to re-train. We also have geographical constraints -- partners, kids in school, dependent parents, established friendships, community commitments. And in mid-life, our 'crystallized intelligence' (stock of knowledge) is much higher than a 20-something's, but our 'fluid intelligence' (ability to solve abstract new problems quickly) is somewhat lower -- so it's easier to learn things that relate to our existing expertise, but harder to learn coding, data science, or finance from scratch.

On the upside, a '40k project' would allow EA to bring in a huge amount of talent -- people with credentials, domain knowledge, social experience, leadership skills, professional networks, prestige, and name recognition. Plus, incomes that would allow substantially larger donations than 20-somethings can manage.

Comment author: geoffreymiller  (EA Profile) 16 August 2017 09:52:08PM 1 point [-]

Excellent post; as a psych professor I agree that psych and cognitive science are relevant to AI safety, and it's surprising that our insights from studying animal and human minds for the last 150 years haven't been integrated into mainstream AI safety work.

The key problem, I think, is that AI safety seems to assume that there will be some super-powerful deep learning system attached to some general-purpose utility function connected to a general-purpose reward system, and we have to get the utility/reward system exactly aligned with our moral interests.

That's not the way any animal mind has ever emerged in evolutionary history. Instead, minds emerge as large numbers of domain-specific mental adaptations to solve certain problems, and they're coordinated by superordinate 'modes of operation' called emotions and motivations. These can be described as implementing utility functions, but that's not what they're for -- promoting reproductive success is. Some animals also evolve 'moral machinery' for nepotism, reciprocity, in-group cohesion, norm-policing, and virtue-signaling, but those mechanisms are likewise distinct and often at odds with one another.

Maybe we'll be able to design AGIs that deviate markedly from this standard 'massively modular' animal-brain architecture, but we have no proof-of-concept for thinking that will work. Until then, it seems useful to consider what psychology has learned about preferences, motivations, emotions, moral intuitions, and domain-specific forms of reinforcement learning.

Comment author: geoffreymiller  (EA Profile) 16 August 2017 09:36:00PM 2 points [-]

I agree that growing EA in China will be important, given China's increasing wealth, clout, confidence, and global influence. If EA fails to reach a critical mass in China, its global impact will be handicapped in 2 to 4 decades. But, as Austen Forrester mentioned in another comment, the charity sector may not be the best beachhead for a Chinese EA movement.

Some other options: First, I imagine China's government would be motivated to think hard about X-risks, particularly in AI and bioweapons -- and they'd have the decisiveness, centralized control, and resources to really make a difference. If they can build 20,000 miles of high-speed rail in just one decade, they could probably make substantial progress on any challenge that catches the Politburo's attention. Also, they tend to take a much longer-term perspective than Western 'democracies', planning fairly far into the mid to late 21st century. And of course, if they don't take AI X-risk seriously, all other AI safety work elsewhere may prove futile.

Second, China is very concerned about 'soft power' -- global influence through its perceived magnanimity. This is likely to happen through government do-gooding rather than from private charitable donations. But gov't do-gooding could be nudged into more utilitarian directions with some influence from EA insights -- e.g. China eliminating tropical diseases in areas of Africa where it's already a neocolonialist resource-extraction power, or reducing global poverty or improving governance in countries that could become thriving markets for its exports.

Third, lab meat & animal welfare: China's government knows that a big source of subjective well-being for people, and a contributor to 'social stability', is meat consumption. They consume more than half of all pork globally, and have a 'strategic pork reserve': https://www.cnbc.com/id/100795405. But they plan to reduce meat consumption by 50% for climate change reasons: https://www.theguardian.com/world/2016/jun/20/chinas-meat-consumption-climate-change. This probably creates a concern for the gov't: people love their pork, but if they're told to simply stop eating it in the service of reducing global warming, they will be unhappy. The solution could be lab-grown meat. If China invested heavily in that technology, they could get all the climate-change benefits of reduced livestock farming, but people wouldn't be resentful and unhappy about having to eat less meat. So getting the Chinese gov't interested in lab meat seems like a no-brainer.

Fourth, with rising affluence, young Chinese middle-class people are likely to have the kind of moral/existential/meaning-of-life crises that hit the US baby boomers in the 1960s. They may be looking for something genuinely meaningful to do with their lives beyond workaholism & consumerism. I think 80k hours could prove very effective in filling this gap, if it developed materials suited to the Chinese cultural, economic, and educational context.

Comment author: Ben_West  (EA Profile) 16 August 2017 09:27:41PM 0 points [-]

I'm curious if you think that the "reflective equilibrium" position of the average person is net negative?

E.g. many people who would describe themselves as "conservationists" probably also think that suffering is bad. If they moved into reflective equilibrium, would they give up the conservation or the anti-suffering principles (where these conflict)?

Comment author: Denkenberger 16 August 2017 09:20:37PM 0 points [-]

If expenditure on illicit drugs were greater than expenditure on grains, that would be amazing. But I agree, it's not that important to the argument.

Comment author: geoffreymiller  (EA Profile) 16 August 2017 09:07:15PM 1 point [-]

From my perspective as an evolutionary psychologist, I wouldn't expect us to have reliable or coherent intuitions about utility aggregation for any groups larger than about 150 people, for any time-spans beyond two generations, or for any non-human sentient beings.

This is why consequentialist thought experiments like this so often strike me as demanding the impossible of human moral intuitions -- like expecting us to be able to reconcile our 'intuitive physics' concept of 'impetus' with current models of quantum gravity.

Whenever we take our moral intuitions beyond their 'environment of evolutionary adaptedness' (EEA), there's no reason to expect they can be reconciled with serious consequentialist analysis. And even within the EEA, there's no reason to expect our moral intuitions will be utilitarian rather than selfish + nepotistic + in-groupish + a bit of virtue-signaling.

Comment author: Lee_Sharkey 16 August 2017 08:30:07PM 1 point [-]

Hey Denkenberger, thanks for your comment. I too tend to weight the future heavily, and I think there are some reasons to believe that DPR could have nontrivial benefits given this set of preferences. This is in fact why, as Michael mentions above:

"FWIW, I think the mental health impact of DPR is about 80% of its value, but when I asked Lee the same question (before telling him my view) I think he said it was about 30% (we were potentially using different moral philosophies)."

I gave that answer because I think DPR's effects on the far future could be the source of most of its expected value.

DPR sits at the juncture of international development & economic growth, global health & mental health, national & international crime, terrorism, conflict & security, and human rights. I think we should expect solving the world drug problem to improve some or all of these areas, as Michael argued in the series.

I think it could be easy to overlook the expected benefits, for fostering a stable, trusting global system, of significant reductions in the funding of and motivation for crime, corruption, terrorism, and conflict. My weak conjecture is that such reductions would bring an array of global benefits: reduced out-group fear (at the community and international levels), stronger institutions, and richer societies.

DPR might thus offer a step in the right direction towards solving issues of global coordination, which in turn may increase our expectations for solving the coordination problem for AI and, thence, the long-term future. I admit this is a fairly hand-wavy notion and that the causal chains are undesirably long and uncertain, relying on hard-to-predict factors (such as the timing of an intelligence takeoff compared with the length of time it would take to observe the international social benefits, for a start). My confidence intervals are therefore commensurately wide, but still I struggle to think of ways in which it could be net negative for global coordination. So almost all of my probability weight is positive. Multiplied by humanity's cosmic endowment, I weigh this relatively heavily. Of course, there may be other, more certain activities that we can do to improve the EV of humanity's future, and I think there are, but I don't think DPR is obviously a waste of time if that's what we care about.

Comment author: Lee_Sharkey 16 August 2017 05:46:19PM 1 point [-]

"(note: Lee wrote the pain section but we both did editing, so I'm unsure whether to use 'I' or 'we' here)"

I align myself with Michael's comment.

Comment author: kbog  (EA Profile) 16 August 2017 01:12:49PM *  0 points [-]

MIRI/FHI have never published anything which discusses any view of consciousness. There is a huge difference between inferring a view from things that people happen to write outside an organization and reading it in the research the organization actually publishes. In the latter case, the view is relevant to the research, whether or not it's an official position of the organization. In the former case, it's not obvious why it's relevant at all.

Luke affirmed elsewhere that Open Phil really heavily leans towards his view on consciousness and moral status.

Comment author: kbog  (EA Profile) 16 August 2017 01:06:03PM *  0 points [-]

I don't think I can argue for intrinsically valuing anything. I agree with not being able to argue ought from is.

The is-ought problem doesn't say that you can't argue for intrinsically valuing anything. It just says that it's hard. There are lots of ways to argue for intrinsically valuing things, and I have a reason to intrinsically value well-being, so why should I divert attention to something else?

Unless it is omniscient I don't see how it will see all threats to itself.

It will see most threats to itself in virtue of being very intelligent and having a lot of data, and will have a much easier time by not being in direct competition. Basically all known x-risks can be eliminated if you have zero coordination and competition problems.

Also what happens if a decisive strategic advantage is not possible and this hypothetical single AI does not come into existence. What is the strategy for that chunk of probability space?

Democratic oversight, international cooperation, good values in AI, FDT to facilitate coordination, stuff like that.

I'm personally highly skeptical that this will happen.

Okay, but the question was "is a single AI a good thing," not "will a single AI happen".

How would that be allowed if those people might create a competitor AI?

It will be allowed by allowing them to exist without allowing them to create a competitor AI. What specific part of this do you think would be difficult? Do you think that everyone who is allowed to exist must have access to supercomputers free of surveillance?

Comment author: MichaelPlant 16 August 2017 10:26:33AM *  1 point [-]

Thanks for the comment, although I largely feel you're accusing me/us of things I'm not guilty of. (note: Lee wrote the pain section but we both did editing, so I'm unsure whether to use 'I' or 'we' here)

What I see this series of posts as doing is suggesting DPR to the EA world as a cause worth taking seriously. I don't insist on particular policy suggestions. I haven't made my mind up and others are free to draw their own conclusions.

One issue we highlight is the lack of pain medication in part A of the world, whilst noting there is too much in part B, but that we won't talk about B. That doesn't seem unreasonable to do in an essay limited in scope, unless changing the situation in A would obviously lead to it becoming like B. It's not obvious (although we can argue about it), so we left it out. Indeed, given the use of psychedelics to treat addiction (see footnote 27), you might think that part of DPR is important precisely because you worry about the opiate crisis.

Further, as I claim in part 1, there are multiple arguments for different types of DPR, so it's not sufficient to claim that one part would backfire in order to say we shouldn't be interested in any of it. There are lots of ways we could do DPR, and you could change everything else whilst leaving opiates unchanged. By analogy, it seems that I'm saying something like "X will reduce crimes apart from murders" and you're replying "but you should think about stopping murders", which strikes me as irrelevant.

Here's the quote where I mentioned this in part 3:

Perhaps we should legalise all those drugs up to and including cannabis on the graph of harms I used earlier, but no further. This would mean legalising everything apart from amphetamines, cocaine and heroin (and presumably keeping tobacco and alcohol legal too) [note: graph now added; must have been lost in transmission]

I'm slightly unsure how to respond to your point about original analysis, which feels unhelpfully personal. In section 2.1 above I say why drugs have been made illegal, but I didn't want to get stuck into that because I took the real objective to be explaining why DPR might do good. I also suggest a range of policies (in part 3) and how they each solve different parts of the problem. I'm not claiming to be the first to write about DPR. What I thought was missing was an analysis that brings all the different arguments together, as I also discuss in part 3, and, further, that brings it to the attention of EA. If you already know lots about DPR, the argumentative pay-off only comes in part 4, where I explain why this might be more cost-effective than causes EAs already support. If I'd just written part 4 you (or others) would be justified in complaining I hadn't made the case!

Finally, FWIW, I think the largest amount of value from DPR would come from tackling mental health with new methods, and that doesn't have the obvious backfire worries. I'm not really sure how to think about the heroin epidemic, nor do I see it as necessary for me to provide an answer. If you happen to have a solution to the opiate crisis and can give me a cost-effectiveness model, then I can build that into what I do have. I'm not expecting you to have a solution, nor do I think I need one to be able to deal with other parts of the topic.

Comment author: John_Maxwell_IV 16 August 2017 08:16:44AM *  2 points [-]

it's a relevantly different problem from that of under-prescription in the developing world

Seems like it could potentially be pretty relevant if "optimal" levels of prescription tend to slide towards heroin epidemics, or something like that.

this is already a huge document

That's fair. I guess I mainly wanted to ensure that you spent some time thinking about this before actually working on DPR.

[Rant incoming]

I am generally frustrated with EAs for not brainstorming how their projects might backfire. In my view, the sign of a given intervention is much more important than its tractability or cost-effectiveness, and it seems like you devoted more space to the latter two. Sign uncertainty should be high by default.

I am also frustrated by the fact that I feel like in this particular case, the 'EA way' of thinking about things is actually worse than the way the average American voter thinks about them. Like, if I proposed to an average American voter that we should legalize all drugs, they would probably immediately say something like "well what about the heroin epidemic", and this seems like a completely valid point to bring up! I'm frustrated that EA has somehow caused us to focus on issues like tractability, cost-effectiveness, and neglectedness instead of addressing the issue of whether we should do the darn thing in the first place. And this is a mistake that the average American voter does not make.

This is also related to another thought pattern I see in EA where it seems like people consider EA to be some kind of magical fairy dust that creates effective interventions. Like, I'm sure many gallons of ink have been spent writing about the optimal drug policy and I don't see you making a serious attempt to either summarize the existing literature or contribute something new (e.g. "here is why drugs were made illegal, here's why the thinking is flawed"--cc Chesterton's Fence--"here's a new drug policy that gets us the benefits of the old policy without the costs"). And even if you were doing either of those things, that still doesn't necessarily constitute a basis for action. I might as well randomly choose one of the many memos that have been written over the years and implement the drug policy suggested by that memo. There's no magical fairy dust in the EA forum that makes your memo better than all the other memos that have been written.

That said, you should not take this objection personally because like I said, it is a beef I have with EA culture in general. This series is fine as a pointer to the topic, and you probably just meant to indicate "hey, EAs should be paying more attention to this", so my rant is probably unjustified.

In part 3 I note it's an open question as to whether decriminalisation, legalisation (or even the status quo) is the right response to heroin.

Could you point to the specific passage you're referring to?

As a final pragmatic note, I think if you actually wanted to work on DPR, solving the heroin epidemic could be a good first step to doing that, because that would create room to maneuver politically for legalization reforms.

Comment author: Denkenberger 15 August 2017 11:25:01PM *  0 points [-]

Thanks - very interesting. Could an EA pay for drug medical studies in Portugal? It seems like there are millions of people working on marijuana legalization, and many critics of the war on drugs. I know you are looking more holistically, but overall it doesn't seem that neglected.

For those who think something else is more important, I would be very grateful if you could produce some (very rough) estimates of how many times more cost effective money to their preferred cause is than DPR.

Some people would say ~10^40 times (computer consciousnesses and spreading intergalactically). Of course there are many reasons why this vision may not pan out, but it does seem like we should assign a non-negligible probability to the possibility that we are alone in the galaxy (or even the visible universe) and that we both can and will have the will to colonize the stars if we don't destroy ourselves. These qualifiers might only take a few orders of magnitude off. Then even if you do not believe in the tractability of AI safety work, there are many other concrete interventions that could reduce existential risk, like asteroid defense and alternate foods. So basically I do not believe that a prior based on the cost-effectiveness of global poverty interventions should be strong, and I don't think we should adjust these expected value calculations downward nearly as much as some proposed models have done.

Also, if one does not value the far future, there are other claims of cost effectiveness better than global poverty.

Comment author: MichaelPlant 15 August 2017 02:58:38PM 0 points [-]

Okay. So the source I found was probably wrong. I can't see how this has any bearing on the argument, so it would have been more useful to say "this isn't important for the argument, but just so you know ..."
