Taymon

408 karma · Joined Feb 2015

Posts: 1

Comments: 53

I've often thought that there should be separate "phatic" and "substantive" comment sections.

The Fun Theory Sequence (which is on a similar topic) had some things to say about the Culture.

In the last paragraph, did you mean to write "the uncertainty surrounding the expected value of each policy option is high"?

While that's true, I think most proposed EA policy projects are much too small in scope to move the needle on trust, and so need to take the existing level of trust as a given.

I agree that the word 'populism' is very prone to misunderstandings, but I think the term 'technocracy' is acceptably precise. While precision is important, I think we should balance this against the benefits of using more common words, which make it easier for the reader to make connections with other arguments in favour of or against a concept.

I should clarify: I think the misunderstandings are symptoms of a deeper problem, which is that "technocracy" rolls too many different things into one word. This isn't about jargon vs. non-jargon; substituting a more jargon-y word doesn't help. (I think this is part of why the word has taken on such negative connotations: people can easily roll anything they don't like into it. That's not itself a strong reason not to use it, but it's illustrative.)

"Technocracy" works okay-ish in contexts like this thread where we're all mostly speaking in vague generalities to begin with, but when discussing specific policies or even principles for thinking about policy, "I think this is too technocratic" just isn't helpful. More specific things like "I think this policy exposes the people executing it to too much moral hazard", or "I think this policy is too likely to have unknown-unknowns that some other group of people could have warned us about", are better. Indeed, those are very different concerns and I see no reason to believe that EA-in-general errs the same amount, or even in the same direction, for each of them. (If words like "moral hazard" are too jargon-y then you can just replace them with their plain-English definitions.)

I also think that EAs haven't sufficiently considered populism as a tool to deal with moral uncertainty.

I agree that there hasn't been much systematic study of this question (at least not that I'm aware of), and maybe there should be. That being said, I'm deeply skeptical that it's a good idea, and I think most other EAs who've considered it are too, which is why you don't hear it proposed very often.

Some reasons for this include:

  • The public routinely endorses policies or principles that are nonsensical or would obviously result in terrible outcomes. Examples include Philip Tetlock's research on taboo tradeoffs [PDF], and this poll from Reuters (h/t Matt Yglesias): "Nearly 70 percent of Americans, including a majority of Republicans, want the United States to take 'aggressive' action to combat climate change—but only a third would support an extra tax of $100 a year to help."
  • You kind of can't ask the public what they think about complicated questions; they're very diverse and there's a lot of inferential distance. You can do things like polls, but they're often only proxies for what you really want to know, and pollster degrees-of-freedom can cause the results to be biased.
  • When EAs look back on history and ask ourselves what we would/should have done if we'd been around then—particularly on questions (like whether slavery is good or bad) whose morally correct answers are no longer disputed—it looks like we would/should have sided with technocrats over populists much more often than the reverse. A commonly cited example is William Wilberforce, who was largely responsible for the abolition of slavery in the British Empire. Admittedly, I'd like to see some attempt to check how representative this is (though I don't expect that question to be answerable comprehensively).

I am not convinced that there is much thinking amongst EAs about experts misusing technocracy by focusing on their own interests.

In at least one particular case (AI safety), a somewhat deliberate decision was made to deemphasize this concern, because of a belief not only that it's not the most important concern, but that focus on it is actively harmful to concerns that are more important.

For example, Eliezer (who pioneered the argument for worrying about accident risk from advanced AI) contends that the founding of OpenAI was an instance of this. In his telling, DeepMind had previously had a quasi-monopoly on capacity to make progress towards transformative AI, because no other well-resourced actors were working seriously on the problem. This allowed them to have a careful culture about safety and to serve as a coordination point, so that all safety-conscious AI researchers around the world could work towards the common goal of not deploying something dangerous. Elon Musk was dissatisfied with the amount of moral hazard that this exposed DeepMind CEO Demis Hassabis to, so he founded a competing organization with the explicit goal of eliminating moral hazard from advanced AI by giving control of it to everyone (as is reflected in their name, though they later pivoted away from this around the time Musk stopped being involved). This forced both organizations to put more emphasis on development speed, lest the other one build transformative AI first and do something bad with it, and encouraged other actors to do likewise by destroying the coordination point. The result is a race to the precipice [PDF], where everyone has to compromise on safety and therefore accident risk is dramatically more likely.

More generally, politics is fun to argue about and people like to look for villains, so there's a risk that emphasis on person-vs.-person conflicts sucks up all the oxygen and accident risk doesn't get addressed. This is applicable more broadly than just AI safety, and is at least an argument for being careful about certain flavors of discourse.

One prominent dissenter from this consensus is Andrew Critch from CHAI; you can read the comments on his post for some thoughtful argument among EAs working on AI safety about this question.

I'm not sure what to think about other kinds of policies that EA cares about; I can't think of very many off the top of my head that have large amounts of the kind of moral hazard that advanced AI has. This seems to me like another kind of question that has to be answered on a case-by-case basis.

I don't think there has been much thinking about whether equally distributed political power should or should not be an end in itself.

On the current margin, that's not really the question; the question is whether it's an end-in-itself whose weight in the consequentialist calculus should be high enough to overcome other considerations. I don't feel any qualms about adopting "no" as a working answer to that question. I do think I value this to some extent, and I think it's right and good for that to affect my views on rich-country policies where the stakes are relatively low, but in the presence of (actual or expected future) mass death or torture, as is the case in the cause areas EA prioritizes, I think these considerations have to give way. It's not impossible that something could change my mind about this, but I don't think it's likely enough that I want to wait for further evidence before going out and doing things.

Of course, there are a bunch of ways that unequally distributed political power could cause problems big enough that EAs ought to worry about them, but now you're no longer talking about it as an end-in-itself, but rather as a means to some other outcome.
