
TL;DR: 

I think EAs are advocating for more technocratic decision-making without having formally thought about this. 

This makes it an urgent priority for EAs to answer the following questions: 

  • What balance should policymakers strike between technocracy and populism generally?
  • How, if at all, should this balance change when there is a large difference between public and expert opinion?
  • How, if at all, should this balance change when moral and/or empirical uncertainty is extremely high or extremely low?
  • How, if at all, should this balance change when the difference in expected value between plausible policy options is extremely high or extremely low?

 

Definitions:

In this post, by 'populism' I mean 'policymakers making decisions based on public opinion' and don't intend to attach negative connotations to 'populism'. 

(I'd like to find a better word for this because 'populism' is a loosely defined word and has negative connotations. I considered using the word 'democracy' here, but I think the positive connotations of 'democracy' are too strong, and 'populism' seems relatively more neutral (i.e., it seems more socially acceptable to argue for more populism than for less democracy). I also think 'democracy' doesn't convey the extreme end of decisions being made on public opinion. If you can think of a better word, please suggest it.)

By 'technocracy', I mean 'policymakers making decisions based on expert opinion'.

 

Post:

We can think of decisions by policymakers as existing on a spectrum from extremely technocratic to extremely populist.

My view of EA has long been that it clearly advocates for more technocratic decision-making.

The clearest example of this is an investigation into reducing populism and promoting evidence-based policy.

Apart from this, here are some subjective impressions which contribute to my view:

  • many EAs think governments should use more expert surveys in decision-making
  • many EAs actively seek to use their expertise to lobby policymakers, but few are pursuing 'grassroots' approaches to policy change (Note: While aiming to be the person available when policymakers want expert opinion does not favour more technocratic decision-making, actively seeking to influence policymakers does favour more technocratic decision-making)

 

Off the top of my head, these are the three main, broad downsides to more technocratic decision-making:

  1. More populist decision-making reduces the risks involved in acting under moral and empirical uncertainty, by incorporating a wider range of moral theories and opinions into decision-making. So, more technocratic decision-making increases these risks.
  2. More populist decision-making reduces the risks from experts intentionally or unintentionally advocating for unethical policies (under a wide range of moral theories) out of self-interest. So, more technocratic decision-making increases this risk. (This risk is greater than the risk of >50% of the public advocating for unethical policies out of self-interest because, in expectation, unethical policies in the self-interest of ">50% of the public" would be good for more people than unethical policies in the self-interest of experts.)
  3. Some people (including a past version of me) consider 'more democratic' decision-making to be inherently good, regardless of outcomes, because they see everyone having more equal political power as an end in itself. So more technocratic decision-making goes against this.

 

And this is the main, broad downside to more populist decision-making:

  1. Public opinion on moral theories and empirical evidence is less likely to be correct and rational than expert opinion, so decisions based on public opinion should on average have lower expected value, under certain moral theories. So, more populist decision-making on average reduces expected value.

 

Having searched the EA Forum for the terms 'technocracy' and 'technocratic', I find that they come up less often than I think they should for a movement that advocates a move in this direction.

I also think some of the arguments in the 'decentralising risk' paper, and the responses to it on the EA Forum, are debates about the extent to which decision-making should be more 'technocratic' or more 'populist', but neither the term 'technocratic' nor the idea of a technocracy-populism spectrum appears in the paper or in the responses.

 

Two quotes from the paper:

"Representativeness itself says nothing about whether its philosophical pillars
are wrong or right, but it is risky to rely exclusively on one unrepresentative approach given moral, political and empirical uncertainty."

"Tying the study of a topic that fundamentally affects the whole of humanity to a niche belief system championed mainly by an unrepresentative, powerful minority of the world is undemocratic and philosophically tenuous."

 

In my opinion, one idea (amongst others) that these sentences get across is:

"There is unusually high moral and empirical uncertainty in decision-making for mitigating existential risk, so the risks of a highly technocratic, and less populist approach are unusually high."

 

The point of this post is:

  1. "Technocracy" and "populism" are being debated in the context of the 'decentralising risk' paper without being named. Naming these approaches, and viewing them as existing on a spectrum, will help clarify arguments.
  2. The fact that 'technocracy' gets named so infrequently by EAs may be a sign that many are advocating for more technocracy without realising it or without realising that the term exists, along with pre-existing criticism of the idea. Apparently someone called Rob Reich brought up the idea that EAs might prefer technocracy to democracy a long time ago.
  3. I think it is an urgent priority for EAs to investigate 4 important questions with regards to institutional decision-making:
  • What balance should policymakers strike between 'technocracy' and 'populism' generally?
  • How, if at all, should this balance change when there is an extremely large difference between public and expert opinion?
  • How, if at all, should this balance change when moral and/or empirical uncertainty is extremely high or extremely low?
  • How, if at all, should this balance change when the difference in expected value between plausible policy options is extremely high or extremely low?

 

This is an urgent priority because many EAs are already taking action on the assumption that the best answer for all 4 questions is "more technocracy than the status quo", without having carefully considered the questions using evidence and reasoning.


First of all, thanks for this post. The previous post on this topic (full disclosure: I haven't yet managed to read the paper in detail) poisoned the discourse pretty badly by being largely concerned with meta-debate and by throwing out associations between the authors' dispreferred policy views and various unsavory-sounding concepts. I was worried that this meant nobody would try to address these questions in a constructive manner, and I'm glad someone has.

I also agree that there's been a bit of unreflectiveness in the adoption of a technocratic-by-default baseline assumption in EA. I was mostly a populist pre-EA, gradually became a technocrat because the people around me who shared my values were technocrats, and I don't think this was attributable to anyone convincing me that my previous viewpoint was wrong, for the most part. (By contrast, while social effects/frog-boiling were probably important in eroding my resistance to adopting EA views on AI safety, the reason I was thinking about adopting such views in the first place was because I read arguments for them that I couldn't refute.) I'm guessing this has happened to other people too. This is probably worrying and I don't think it's necessarily applicable to just this issue.

That said, I didn't know what to actually do about any of this, and after reading this post, I still don't. I think my biggest disagreement is that I don't think the concept of "technocracy" is actually very helpful, even if it's pointing at a real cluster of things.

I'm reading you as advocating that your four key questions be treated as crucial considerations for EA. I don't think this is going to work, because these questions do not actually have general answers. Reality is underpowered. Social science is nowhere near being capable of providing fully-general answers to questions this huge. I don't think it's even capable of providing good heuristics, because this kind of question is what's left after all known-good heuristics have already been taken into account; that's why it keeps coming up again and again. There is just no avoiding addressing these questions on a case-by-case basis for each individual policy that comes up.

One might argue that the concept of "technocracy" is nevertheless useful for reminding people that they need to actually consider this vague cluster of potential risks and downsides when formulating or making the case for a policy, instead of just forgetting about them. My objection here is that, as far as I can tell, EAs already do this. (To give just one example, Eliezer Yudkowsky has explicitly written about moral hazard in AGI development.) If this doesn't change our minds, it's because we think all the alternatives are worse even after accounting for these risks. You can make an argument that we got the assessment wrong, but again, I think it has to be grounded in specifics.

If we don't routinely use the word "technocracy", then maybe that's just because the word tends to mean a lot of different things to a lot of different people; you've adopted a particular convention in this post, but it's far from universal. Even if the meanings are related, they're not precise, and EAs value precision in writing. Routinely describing proposed policies as "populist" or "technocratic" seems likely to result in frequent misunderstandings.

Finally, since it sounds like there are concerns about lack of existing writing in the EAsphere about these questions, I'd like to link some good ones:

  • Scott Alexander's back-and-forth with Glen Weyl (part 1, part 2; don't miss Scott's response in the comments, and I think Weyl said further things on Twitter although I don't have links). Uses the word "technocracy", and is probably the most widely-read explicit discussion of technocracy-vs.-populism in the EAsphere. I think that Scott, at least, cannot reasonably be accused of never having thought about this.
  • Scott's review of Rob Reich's book Just Giving. Doesn't use the word "technocracy", but gets into similar issues, and presumably Reich's perspective in the book comes from many of the same concerns that drove this piece, which I think is what Peter Singer was responding to in the EA Handbook post that you linked. Builds on the earlier post "Against Against Billionaire Philanthropy" (see also highlights from the comments).
  • "Against Multilateralism", by Sarah Constantin. Maybe the EAsphere post that most explicitly lays out the case for something-like-populism (though ultimately not siding with it). Argues with Weyl again, though it actually predates his engagement with Scott and EA. Ends with some promising directions that, if further explored, could maybe be our best hope currently available of making general progress on this class of questions (though I still don't think they rise to the level of crucial considerations).

Thanks for your well thought-out comment.

 

"I was mostly a populist pre-EA, gradually became a technocrat because the people around me who shared my values were technocrats.” Same!

You're correct in reading my post as "technocracy vs populism is a crucial consideration".


I think social science is unlikely to offer us a good, general answer to technocracy vs populism, but I think it can offer us a better answer than we currently have, because I feel that we have mostly skipped attempting to take a scientific approach to the question, but nonetheless have accepted 'more technocracy than the status quo' as the answer.

Also, I am confident that social science can offer us useful heuristics for when we look at specific cases.

For example, Scott’s article (thank you for linking it) looks at some positive examples of historical policy changes that (he claims) were mostly technocratic. 

I think someone should research policy changes in democratic countries which counterfactually led to the world getting a lot better or worse (under a range of different moral theories, and under public opinion), and the extent to which these changes were technocratic or populist. This would be useful to establish the track records of technocracy and populism, giving us a better reason to generally lean one way or the other. 

We could also look specifically at how public opinion and expert opinion may have differed at the time as policymakers approached these decisions, to work out if more technocracy or more populism has a better track record under the conditions of a large disagreement between public and expert opinion. 

 

Also, based on the pros and cons of technocracy and populism that I outlined, it seems fairly clear to me that more populism is preferable under higher uncertainty, and more technocracy is preferable when plausible policy options have a greater range of expected values. 

I think part of what makes existential risk studies so difficult is that these heuristics don’t help, because existential risk studies involves both extremely high uncertainty and plausible policy options with an extremely large range of expected values. 

Possibly, these situations most suit a 'third' approach, where experts lobby the public rather than policymakers directly. If this is successful, public and expert opinion could become very similar, and the more similar they are, the more similar technocratic and populist approaches become, meaning that striking the right balance between them matters considerably less. (This would mean that Nick Bostrom and Toby Ord were way ahead of me by publishing Superintelligence and The Precipice).

 

I like that EA actively thinks about the risks associated with moral uncertainty, but I am not convinced that there is much thinking amongst EAs about experts misusing technocracy by focusing on their own interests, and I don't think there has been much thinking about whether equally distributed political power should or should not be an end in itself. I also think that EAs haven't sufficiently considered populism as a tool to deal with moral uncertainty. (I think the focus of moral uncertainty has generally been on experts themselves trying to account for various moral theories when forming opinions).

 

Also, to clarify, I am not arguing against more technocracy. I think it's entirely reasonable for EAs to conclude that more technocracy is better than the status quo even after considering the risks, but I think it's important for this conclusion to be reached in a "scientific / rational / systematic / evidence and careful reasoning" way. Currently, I don't think this is generally the case even if EAs do think about moral uncertainty, for the reasons that I outlined in the paragraph above this one.


I agree that the word 'populism' is very prone to misunderstandings but I think the term 'technocracy' is acceptably precise. While precision is important, I think we should balance this against the benefits of using more common words, which make it easier for the reader to make connections with other arguments in favour of or against a concept.

 

Finally, thanks for all the links!

I also think that EAs haven't sufficiently considered populism as a tool to deal with moral uncertainty.

I agree that there hasn't been much systematic study of this question (at least not that I'm aware of), and maybe there should be. That being said, I'm deeply skeptical that it's a good idea, and I think most other EAs who've considered it are too, which is why you don't hear it proposed very often.

Some reasons for this include:

  • The public routinely endorses policies or principles that are nonsensical or would obviously result in terrible outcomes. Examples include Philip Tetlock's research on taboo tradeoffs [PDF], and this poll from Reuters (h/t Matt Yglesias): "Nearly 70 percent of Americans, including a majority of Republicans, want the United States to take 'aggressive' action to combat climate change—but only a third would support an extra tax of $100 a year to help."
  • You kind of can't ask the public what they think about complicated questions; they're very diverse and there's a lot of inferential distance. You can do things like polls, but they're often only proxies for what you really want to know, and pollster degrees-of-freedom can cause the results to be biased.
  • When EAs look back on history, and ask ourselves what we would/should have done if we'd been around then—particularly on questions (like whether slavery is good or bad) whose morally correct answers are no longer disputed—it seems to look like we would/should have sided with technocrats over populists, much more often than the reverse. A commonly-cited example is William Wilberforce, largely responsible for the abolition of slavery in the British Empire. Admittedly, I'd like to see some attempt to check how representative this is (though I don't expect that question to be answerable comprehensively).

I agree that populism as a tool for dealing with moral uncertainty has obvious weaknesses (thank you for explaining some of these in detail), but in my view the weaknesses are not so large that a systematic exploration of this question wouldn't be worth the time.

I also agree that other EAs viewing these weaknesses as too severe would be a good explanation for why this hasn't been done yet.

I agree that the word 'populism' is very prone to misunderstandings but I think the term 'technocracy' is acceptably precise. While precision is important, I think we should balance this against the benefits of using more common words, which make it easier for the reader to make connections with other arguments in favour of or against a concept.

I should clarify: I think the misunderstandings are symptoms of a deeper problem, which is that the concept of "technocracy" is too many different things rolled into one word. This isn't about jargon vs. non-jargon; substituting a more jargon-y word doesn't help. (I think this is part of why it's taken on such negative connotations, because people can easily roll anything they don't like into it; that's not itself a strong reason not to use it, but it's illustrative.)

"Technocracy" works okay-ish in contexts like this thread where we're all mostly speaking in vague generalities to begin with, but when discussing specific policies or even principles for thinking about policy, "I think this is too technocratic" just isn't helpful. More specific things like "I think this policy exposes the people executing it to too much moral hazard", or "I think this policy is too likely to have unknown-unknowns that some other group of people could have warned us about", are better. Indeed, those are very different concerns and I see no reason to believe that EA-in-general errs the same amount, or even in the same direction, for each of them. (If words like "moral hazard" are too jargon-y then you can just replace them with their plain-English definitions.)

Thanks for the clarification. I agree that this would be a good explanation for why the term 'technocracy' doesn't come up that often in EA.

I think someone should research policy changes in democratic countries which counterfactually led to the world getting a lot better or worse (under a range of different moral theories, and under public opinion), and the extent to which these changes were technocratic or populist. This would be useful to establish the track records of technocracy and populism, giving us a better reason to generally lean one way or the other.

This is exactly the kind of thing that I think won't work, because reality is underpowered.

I forgot to link this earlier, but it turns out that some such research already exists (minus the stipulation that it has to be in democratic countries, but I don't think this is necessarily a fatal problem; there are key similarities with politics in non-democratic countries). In 2009, Daron Acemoglu (a highly-respected-including-by-EAs academic who studies governance) and some other people wrote a paper [PDF] arguing that the First French Empire created a natural experiment, and examining the results. Scott reviewed it in a follow-up post to his earlier exchange with Weyl. The authors' conclusion (spoilered because Scott's post encourages readers to try to predict the results in advance) is that

 technocratic-ish policies got better results.

I consider this moderately strong evidence against heuristics in the opposite direction, but very weak evidence in favor of heuristics in the same direction. There are quite a lot of caveats, some of which Scott gets into in the post. One of these is that the broader technocracy-vs.-populism question subsumes a number of other heuristics, which, in real life, we can apply independently of that single-axis variable. (His specific example might be controversial, but I can think of others that are harder to argue with, such as (on the technocratic side) "policies have to be incentive-compatible", or (on the populist side) "don't ignore large groups of people when they tell you you've missed something".) Once we do that, the value of a general catch-all heuristic in one direction or the other will presumably be much diminished.

Also, there are really quite a lot of researcher degrees-of-freedom in a project like this, which makes it very hard to have any confidence that the conclusions were caused by the underlying ground truth and not by the authors' biases. And just on a statistical level, sample sizes are always going to be tiny compared to the size of highly multi-dimensional policyspace.

So that's why I'm pessimistic about this research program, and think we should just try to figure stuff out on a case-by-case basis instead, without waiting for generally-applicable results to come in.

Since you mentioned it, I should clarify that I have no strong opinion on whether EA should be more technocratic or more populist on the current margin. (Though it's probably fair to say that I'm basically in favor of the status quo, because arguments against it mostly consist of claims that EA has missed something important and obvious, and I tend to find these unpersuasive. I suppose one could argue this makes me pro-technocracy, if one thought the status quo was highly technocratic.) In any case, my contention is that it's not a crucial consideration.

Thank you for explaining all of this.

I think we are disagreeing in a general sense about the usefulness of imprecise and unreliable, but systematically obtained answers to big questions, when trying to answer smaller sub-questions. If we think these answers are less useful, we are less likely to decide that 'technocracy vs populism in general' is a crucial consideration. If we think these answers are more useful, we are more likely to decide that 'technocracy vs populism in general' is a crucial consideration.

I do agree the conclusion of Acemoglu's paper (admittedly, it is too long for me to read) is only weak evidence in favour of more technocracy, but if other papers were able to identify more natural experiments and came to similar conclusions, in theory I think that could generate enough evidence for 'more technocracy' (or 'more populism') to be a sufficiently strong prior / heuristic to be useful when looking at individual cases, which is why I still think 'technocracy vs populism' is a crucial consideration.

Update: Having read another comment, it seems likely that expert opinion mostly replaces other expert opinion in the context of policymaking. That changes my mind on whether technocracy vs populism is a crucial consideration, since it is only relevant to 'promoting evidence-based policy', a very minor EA cause area.

I am not convinced that there is much thinking amongst EAs about experts misusing technocracy by focusing on their own interests

In at least one particular case (AI safety), a somewhat deliberate decision was made to deemphasize this concern, because of a belief not only that it's not the most important concern, but that focus on it is actively harmful to concerns that are more important.

For example, Eliezer (who pioneered the argument for worrying about accident risk from advanced AI) contends that the founding of OpenAI was an instance of this. In his telling, DeepMind had previously had a quasi-monopoly on capacity to make progress towards transformative AI, because no other well-resourced actors were working seriously on the problem. This allowed them to have a careful culture about safety and to serve as a coordination point, so that all safety-conscious AI researchers around the world could work towards the common goal of not deploying something dangerous. Elon Musk was dissatisfied with the amount of moral hazard that this exposed DeepMind CEO Demis Hassabis to, so he founded a competing organization with the explicit goal of eliminating moral hazard from advanced AI by giving control of it to everyone (as is reflected in their name, though they later pivoted away from this around the time Musk stopped being involved). This forced both organizations to put more emphasis on development speed, lest the other one build transformative AI first and do something bad with it, and encouraged other actors to do likewise by destroying the coordination point. The result is a race to the precipice [PDF], where everyone has to compromise on safety and therefore accident risk is dramatically more likely.

More generally, politics is fun to argue about and people like to look for villains, so there's a risk that emphasis on person-vs.-person conflicts sucks up all the oxygen and accident risk doesn't get addressed. This is applicable more broadly than just AI safety, and is at least an argument for being careful about certain flavors of discourse.

One prominent dissenter from this consensus is Andrew Critch from CHAI; you can read the comments on his post for some thoughtful argument among EAs working on AI safety about this question.

I'm not sure what to think about other kinds of policies that EA cares about; I can't think of very many off the top of my head that have large amounts of the kind of moral hazard that advanced AI has. This seems to me like another kind of question that has to be answered on a case-by-case basis.

I don't think there has been much thinking about whether equally distributed political power should or should not be an end in itself.

On the current margin, that's not really the question; the question is whether it's an end-in-itself whose weight in the consequentialist calculus should be high enough to overcome other considerations. I don't feel any qualms about adopting "no" as a working assumption to that question. I do think I value this to some extent, and I think it's right and good for that to affect my views on rich-country policies where the stakes are relatively low, but in the presence of (actual or expected future) mass death or torture, as is the case in the cause areas EA prioritizes, I think these considerations have to give way. It's not impossible that something could change my mind about this, but I don't think it's likely enough that I want to wait for further evidence before going out and doing things.

Of course, there are a bunch of ways that unequally distributed political power could cause problems big enough that EAs ought to worry about them, but now you're no longer talking about it as an end-in-itself, but rather as a means to some other outcome.

it seems fairly clear to me that more populism is preferable under higher uncertainty, and more technocracy is preferable when plausible policy options have a greater range of expected values. 

I'm sorry, I don't understand what the difference is between those things.

I think examples and better wording might help:

With overseas aid budgets, the set of plausible policy options, such as decreasing and increasing the budget by different amounts, has a large range of expected values, and the uncertainty surrounding the expected value of each policy option is low. For this, I think more technocratic approaches are preferable.

With income tax rates, the set of plausible policy options, such as decreasing and increasing income tax rates by different amounts, has a smaller range of expected values, and the uncertainty surrounding the expected value of each policy option is high. For this, I think more populist approaches are preferable.

In the last paragraph, did you mean to write "the uncertainty surrounding the expected value of each policy option is high"?

Yes I did, apologies, just corrected it.
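To put rough numbers on this distinction, here is a toy sketch (every figure is hypothetical, invented purely to illustrate the two properties):

```python
# Toy numbers (entirely hypothetical) separating two properties of a set of
# policy options: the *range* of expected values across the options, and the
# *uncertainty* (standard deviation) around each option's expected value.

# Overseas aid: options differ a lot in expected value; each estimate is tight.
aid_options = {"cut budget": (-50, 5), "keep budget": (0, 5), "raise budget": (40, 5)}

# Income tax: options differ little in expected value; each estimate is noisy.
tax_options = {"cut rates": (-2, 30), "keep rates": (0, 30), "raise rates": (3, 30)}

def summarise(options):
    """Return (range of expected values, average per-option uncertainty)."""
    evs = [ev for ev, _sd in options.values()]
    sds = [sd for _ev, sd in options.values()]
    return max(evs) - min(evs), sum(sds) / len(sds)

for name, opts in [("overseas aid", aid_options), ("income tax", tax_options)]:
    ev_range, avg_sd = summarise(opts)
    print(f"{name}: EV range = {ev_range}, per-option uncertainty = {avg_sd}")

# overseas aid: EV range = 90, per-option uncertainty = 5.0  -> lean technocratic
# income tax:   EV range = 5,  per-option uncertainty = 30.0 -> lean populist
```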

Strong upvoted. You took a relevant debate in EA (including what was being debated in the comments of the other post) and clearly defined and explained both sides in a way that shows why members of the community have different intuitions.

Two points, but I want to start with praise. You noticed something important and provided a very useful writeup. I agree that this is an important issue to take seriously.

While aiming to be the person available when policymakers want expert opinion does not favour more technocratic decision-making, actively seeking to influence policymakers does favour more technocratic decision-making

I don't think that this is an accurate representation of how policymakers operate, either for elected officials or bureaucrats. My view comes from a gestalt of years of talking with congressional aides, bureaucrats in and around DC, and working at a think tank that does policy research. Simply put, there are so many people trying to make their point in any rich democracy that being "available" is largely equivalent to being ignored.

There are exceptions, particularly academics who publish extensively on a topic and gain publicity for it, but most people who don't actively attempt to participate in governance simply won't. Nobody has enough spare time, and nobody has enough spare energy, to actively seek out points of view and ideas reliably.

More importantly, I think that marginal expert influence mostly crowds out other expert influence, and does not crowd out populist impulses. Here I am more speculative, but my sense is that elected officials get a sense of what the expert/academic view is, as one input in a decision making process that also includes stakeholders, public opinion (based on polling, voting, and focus groups), and party attitudes (activists, other elected officials, aligned media, etc). Hence an EA org that attempts to change views mostly displaces others occupying a similar social / epistemic / political role, not any sense of public opinion.

 

On the bureaucracy side, expert input, lawmaker input, and stakeholder input are typically the primary influences when considering policy change. Occasionally the public will take notice of something, but the Federal Register is very boring, and as the punctuated equilibrium model of politics suggests, most of the time the public isn't paying attention. And bureaucrats usually don't have extra time and energy to go out and find people whose work might be relevant if no one is actively presenting it. Add that most exciting claims are false, so decisionmakers would really have to read through entire literatures to be confident in a claim, and the influence ceded by experts goes primarily not to populist impulses but to existing stakeholders.

Thanks for your insight. 

Assuming you're right and experts who seek to influence policy do mostly just replace other expert opinion, then the "let's use our expertise to influence policymakers" aspect of EA does not meaningfully make decision-making more technocratic. That would make the technocracy-vs-populism debate relevant to EA only in the context of 'promoting evidence-based policy', but not to the major EA cause areas. That changes my mind and makes me think the technocracy-vs-populism debate is not a crucial consideration for EA, since it is only important for a minor EA cause area.

If anyone else reading this has also worked in government and has an opinion on whether experts seeking to influence policymakers mostly replace the opinion of other experts, I'd be interested to hear it!

One downside of considering this question in the abstract is that it downplays the crucial issue of trust. The greater the trust between the population and the experts, the more willing the population will be to accept more technocracy.

While true, I think most proposed EA policy projects are much too small in scope to be able to move the needle on trust, and so need to take the currently-existing level of trust as a given.

Thanks for your comment.

I think the issue of trust is very interesting when thinking about technocracy vs populism in the longer term. 

However, I think the risk of the population rejecting decisions is only significant if decisions are extremely technocratic, and this would only be a concern if we conclude that extremely technocratic decisions are ideal. I think we are unlikely to conclude this.

But if we do conclude that extremely technocratic decisions are ideal, I think the ideal approach would be to seek to increase population trust in experts, and aim for a gradual increase in technocracy corresponding to increasing population trust. But it is certainly possible that population trust can't be increased enough to accept an extreme level of technocracy.

(This risk is greater than the risk of >50% of the public advocating for unethical policies out of self-interest because, in expectation, unethical policies in the self-interest of ">50% of the public" would be good for more people than unethical policies in the self-interest of experts.)

This seems to have a bunch of hidden assumptions, both about the relative capabilities of experts vs. the public to assess the effects of policies, and about the distribution of potential policies: While constitutions are not really a technocratic constraint on public opinion, one of their functions appears to be to protect minorities from a majority blatantly using policies to suppress them; in a world where the argument fully went through, this function would not be necessary.

The fact that 'technocracy' gets named so infrequently by EAs may be a sign that many are advocating for more technocracy without realising it or without realising that the term exists, along with pre-existing criticism of the idea.

While this might certainly be true, the negative connotations of the term "technocracy" might play an important role here as well: someone who is aware of the concept and its criticisms might nevertheless be prompted not to use the term in order to avoid knee-jerk reactions, similar to how someone arguing for more "populist" positions might not use that term, depending on the audience.

While I am not sure I agree about the strong language regarding urgent priorities, and would also like to find more neutral terms for both sides, I agree that a better understanding of the balance between expert-driven policy and public opinion would be quite useful; I could imagine that which one is better can strongly depend on specific details of a particular policy problem, and that there might be ways of integrating parts of both sides productively: While I do think that Futarchy is unlikely to work, some form of "voting on values" and relying on expertise for predicting how policies would affect values still appears appealing, especially if experts' incentives can be designed to clearly favor prediction accuracy, while avoiding issues with self-fulfilling prophecies. 
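As a crude illustration of that last idea, here is a minimal "vote on values, rely on experts for predictions" sketch (all names, numbers, and the aggregation rule are hypothetical; a real mechanism would need to handle incentives, strategic voting, and prediction accuracy):

```python
# Crude "vote on values, predict on outcomes" sketch (all numbers hypothetical).
# The public votes on how much each value dimension matters; experts predict
# each policy's effect on those dimensions; the policy with the highest
# publicly-weighted predicted value wins.

public_value_weights = {"health": 0.5, "wealth": 0.2, "equality": 0.3}

expert_predictions = {
    "policy_a": {"health": 2.0, "wealth": 1.0, "equality": -0.5},
    "policy_b": {"health": 0.5, "wealth": 3.0, "equality": 0.0},
}

def score(policy: str) -> float:
    """Publicly-weighted sum of expert-predicted effects for one policy."""
    effects = expert_predictions[policy]
    return sum(public_value_weights[v] * effects[v] for v in public_value_weights)

best = max(expert_predictions, key=score)
print({p: round(score(p), 2) for p in expert_predictions}, "->", best)
# {'policy_a': 1.05, 'policy_b': 0.85} -> policy_a
```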

Thanks for pointing out the assumptions. I was aware of them but thought that my statement was true despite them, and didn't want to lengthen the post too much. In the future I will add assumptions like these as a footnote, so that people who disagree on the assumptions can think about how that should affect their views on the post as a whole.

I agree that the negative connotations of 'technocracy' are probably a good explanation of why proponents of expert-opinion-based policy don't use the word that often.

This is an important topic that needs more discussion, but I'm not sure that there are many cases where technocracy and popular opinion actually conflict, because there rarely is a well-defined public opinion on an issue. In polls, just changing the way a question is asked can flip the results entirely, and the responses are likely to be driven by what respondents believe their social/political group believes rather than by any careful consideration. Furthermore, even if a stable public opinion exists, it is no guarantee that the direction of policy won't be decided by elite opinion/technocrats/interest groups. That would require a significant number of voters to feel strongly enough about the majority view to take action (protest/change their vote).

Therefore, I think this discussion could benefit from concrete examples where EA activities are likely to come into large, direct conflict with public opinion, because I can't think of much EA is currently doing that could lead to such issues. Many of the ways EA currently interacts with the political process (e.g. Clean Air Task Force-style lobbying for clean energy tax credits, or organisations opposing gain-of-function research) appear to me as the minutiae of funding decisions and regulations that receive too little media attention for there to be a strong and stable public opinion on them. I would expect that to also be the case if EA orgs attempt to influence any other issue that is not a political hot button.

If you'd like to read more about why we might not be able to define a stable public opinion on most issues, I'd recommend the book Democracy for Realists.

Thanks for your comment. 

I have updated away from considering technocracy vs populism to be a crucial consideration, based on arguments that EAs using expertise to influence policymakers are mostly replacing other expert opinion, not public opinion.

I think the best example of EA activity coming into conflict with public opinion would be the campaign against the decrease in the UK's foreign aid budget.

And here's a public poll on this question, where 66% supported the decrease.

To clarify, I'm not criticising the campaign, I'm quite strongly in favour of more technocratic decision-making and of more foreign aid.

To me, prediction markets/voting systems are the answer to this problem. It seems reasonable to me to have a fund:

  • which gives based on votes weighted by forecasting record
  • or which allows users to upvote/downvote different causes

This shouldn't be how all money is allocated, but I'd like to see EA experimenting with forms of democratisation.
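As a rough sketch of what the first option could look like (the weighting function, names, and numbers are all hypothetical illustrations, not a worked-out mechanism):

```python
# Toy sketch: allocate a fund across causes using votes weighted by each
# voter's forecasting record (here a Brier score: 0 is perfect, 0.25 is
# chance). The weighting function and all numbers are hypothetical.

def forecasting_weight(brier_score: float) -> float:
    """Map a Brier score to a vote weight; better forecasters weigh more."""
    return max(0.0, 1.0 - brier_score / 0.25)

def allocate(fund, votes, brier):
    """Each voter splits 1.0 of voting power across causes; weights scale it."""
    totals = {}
    for voter, split in votes.items():
        w = forecasting_weight(brier[voter])
        for cause, share in split.items():
            totals[cause] = totals.get(cause, 0.0) + w * share
    grand_total = sum(totals.values())
    return {cause: fund * t / grand_total for cause, t in totals.items()}

votes = {
    "alice": {"global_health": 0.7, "ai_safety": 0.3},
    "bob": {"ai_safety": 1.0},
}
brier = {"alice": 0.10, "bob": 0.20}  # alice has the better track record

print(allocate(100_000, votes, brier))
# {'global_health': 52500.0, 'ai_safety': 47500.0}
```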
