
In my view, there is a rot in the EA community so consequential that it inclines me to discourage effective altruists from putting much, if any, trust in EA community members, EA "leaders", the EA Forum, or LessWrong. But I think it can be fixed, and that if it were, the EA movement would be very good.

In my view, this rot comes from incorrect answers to certain practical sociological questions, like:

  1. How important for success is having experience or having been apprenticed to someone experienced?
  2. Is the EA Forum a good tool for collaborative truth-seeking?
  3. How helpful is peer review for collaborative truth-seeking?

...

Meta-1. Is "Defer to a consensus among EA community members" a good strategy for answering practical sociological questions?

Meta-2. How accurate are conventional answers to practical sociological questions that many people want to get right?

I'll spend a few sentences attempting to persuade EA readers that my position is not easily explained away by certain things they might call mistakes. Most of my recent friends are in the EA community. (I don't think EAs are cringe). I assign >10% probability to AI killing everyone, so I'm doing technical AI Safety research as a PhD student at FHI. (I don't think longtermism or sci-fi has corrupted the EA community). I've read the sequences, and I thought they were mostly good. (I'm not "inferentially distant"). I think quite highly, for the most part, of the philosophical and economic reasoning of Toby Ord, Will MacAskill, Nick Bostrom, Rob Wiblin, Holden Karnofsky, and Eliezer Yudkowsky. (I'm "value-aligned", although I object to this term).

Let me begin with an observation about Amazon's organizational structure. From what I've heard, Team A at Amazon does not have to use the tool that Team B made for them. Team A is encouraged to look for alternatives elsewhere. And Team B is encouraged to make the tool into something that they can sell to other organizations. This is apparently how Amazon Web Services became a product. The lesson I want to draw from this is that wherever possible, Amazon outsources quality control to the market (external people) rather than having internal "value-aligned" people attempt to assess quality and issue a pass/fail verdict. This is an instance of the principle: "if there is a large group of people trying to answer a question correctly (like 'Is Amazon's tool X the best option available?'), and they are trying (almost) as hard as you to answer it correctly, defer to their answer."

That is my claim; now let me defend it, not just by pointing at Amazon and claiming that they agree with me.

High-Level Claims

Claim 1: If there is a large group of people trying to answer a question correctly, and they are trying (almost) as hard as you to answer it correctly, any consensus of theirs is more likely to be correct than your own answer.

There is extensive evidence (Surowiecki, 2004) that aggregating the estimates of many people produces a more accurate estimate as the number of people grows. It may matter in many cases that people are actually trying rather than just professing to try. If you have extensive and unique technical expertise, you might be able to say no one is trying as hard as you, because properly trying to answer the question correctly involves seeking to understand the implications of certain technical arguments, which only you have bothered to do. There is potentially plenty of gray area here, but hopefully, all of my applications of Claim 1 steer well clear of it.
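
As a quick illustration of the aggregation effect (my own sketch, not anything from Surowiecki): if individual estimates are independent and unbiased, the error of the crowd's average falls roughly with the square root of the crowd's size. The script below is a minimal simulation under those stated assumptions; the true value, noise level, and names are arbitrary.

```python
import random
import statistics

# Minimal illustration (not from the post): how the error of an aggregated
# estimate shrinks as the crowd grows, assuming each person's estimate is an
# independent, unbiased draw around the true value. Both assumptions do real
# work; correlated or biased crowds gain much less from size.

random.seed(0)
TRUE_VALUE = 100.0
NOISE_SD = 30.0  # how far off any individual tends to be

def crowd_error(n_people: int, n_trials: int = 2000) -> float:
    """Average absolute error of the crowd's mean estimate over many trials."""
    errors = []
    for _ in range(n_trials):
        estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n_people)]
        errors.append(abs(statistics.mean(estimates) - TRUE_VALUE))
    return statistics.mean(errors)

for n in (1, 10, 100, 1000):
    print(f"crowd of {n:4d}: mean abs error ~ {crowd_error(n):.2f}")
```

The caveat in the comment matters for what follows: a crowd whose errors are correlated (an echo chamber) aggregates far less well, which is part of why Claim 1 is restricted to people who are genuinely trying.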

Let's now turn to Meta-2 from above.

Claim 2: For practical sociological questions that many people want to get right, if there is a conventional answer, you should go with the conventional answer.

Claim 2 can be justified from Claim 1. Note that the two are similar but not identical: Claim 2 is weakened by its restriction to sociological questions, and strengthened by moving from "people trying hard to get it right, using their expertise as much as you" to merely "people wanting to get it right".

The question "when and to what extent should I trust conventional wisdom?" is one that a large group of people are trying very hard to answer correctly. The consensus of most people is that conventional wisdom is pretty good when it comes to designing institutions; at least compared to what a first-principles-reasoner could come up with. Conventional wisdom has incorporated the intuitions, trials, and errors of millions of people. Practical sociological questions are empirical questions, where logical aptitude can only take you so far, and they are one of the domains within which our intuitions are most thoroughly honed, so the ability to productively correct a consensus is likely to be fairly evenly distributed over a big population.

EA Trust-Networks

Claim 3: "Defer to a consensus among EA community members/EA leadership" is a bad strategy for answering practical sociological questions.

This follows from Claim 2. This is a practical sociological question where there exists a conventional answer, so Claim 2 applies. The conventional thing to do when a small group of people proposes new norms for relating to others, new norms for structuring organizations, and new norms for aggregating individuals' beliefs, is to reject those proposals without even bothering to refute them. In general, logic will not get you far when trying to address practical sociological questions, so knockdown refutations are as rare as knockdown defenses.

I really do not want to rest my case on an object-level argument about the virtues of various EA norms. I can make my case, and others can respond with why they don't see it like that, and the status quo will prevail. My contention is that in the likely event that no one can offer an airtight first-principles argument about the utility of certain norms, we should defer to the conventional wisdom of our society as a whole.

Peer Review

Claim 4: High-prestige peer review is the best institution ever invented for collaborative truth-seeking.

This is the conventional answer to a practical sociological question that many people want to get right.

But Einstein didn't get his papers peer-reviewed! I mean "peer review" to include ordinary peer review (at high-prestige venues) plus an "alternate track" in which the entire community of people with the technical expertise to understand what you are writing agrees that it is correct. This is not me awkwardly redrawing the borders of concepts to make my claim go through; the physics community's acceptance of general relativity obviously qualifies as high-prestige peer review.

But the pervasiveness of peer review isn't even a century old! Yes, before that, scientists were not subjected to peer review; rather they presented their work to their ... peers ... in Royal Societies who ... reviewed ... their work.

There is much ink spilled on good science that was never peer reviewed, and bad science that passed peer review, and indeed peer review is not a perfect institution, but we also need to consider how much alchemy (and other flawed attempts at truth-seeking) it has spared us from having to attend to. A key challenge of collaborative truth-seeking is that decision-makers do not have the time, or often the ability, to gain expertise in every technical area and correctly evaluate all technical work themselves.

But my main argument for Claim 4 is that many communities have wanted to design the best institution they could for collaborative truth-seeking. And peer review is the institution that has spread over (I think) every academic discipline. It is the institution that governments have chosen as the basis of their decisions, whenever possible.

I'm not saying that everyone involved is perfect at selecting institutions that promote healthy collaborative truth-seeking; far from it. I'm saying they're about as good as you are, and they've been at it for longer.

Claim 5: The comment sections and upvote tallies on LessWrong and the Alignment Forum and the EA Forum do not qualify as high-prestige peer review.

The conventional answer is loud and clear on this topic. I can offer some words of explanation on how this could be, but even if I couldn't, you should believe it anyway, because we can be absolutely sure what the consensus would be: the consensus among laypeople would be to defer to the consensus of academics, and the consensus among academics would be, "What? No, of course not." Some words of explanation as to how this could be: the commenters are not experts; the commenters don't have the ability to prevent publication, so authors don't take their concerns as seriously; if the author gets into an unresolved debate with a commenter 6 levels deep, the default outcome is for the post to stand and nobody to read the debate, and everyone knows this; the whole community is an echo chamber that takes for granted various propositions which have never been rigorously defended to the satisfaction of outsiders. I don't know which of these differences between these online forums and standard academic peer review are important, and neither do you.

At least in AI existential safety, the classic reasons not to submit work to high-status peer-reviewed venues are that it's too much work, it's too slow, the normie reviewers won't understand because there's too much "inferential distance", and peer review is just a status-chasing game anyway. If you are a funder, or a policymaker, or someone reporting on the consensus of a research community, conventional wisdom would suggest that you ignore these excuses and focus on peer-reviewed work. I do apologize for any offense there, but I think it needs to be said. I do not think people who make these claims are necessarily arriving at false conclusions in their research, but conventional wisdom would say that demanding better receipts, in the form of peer-reviewed publications, is a better strategy than giving them the benefit of the doubt because they belong to your philosophical movement and because you can't find any knockdown counterarguments. If you are an expert researcher, mine the work of people who consider peer review to be not worth their time but who seem to be making good points, and see what you can turn into published, peer-reviewed work.

One quick comment on peer review being just about status-chasing, because I have too nice a comparison. There was some Yudkowsky tweet I can't find that refutes the idea that arguments are just for convincing people to do what you want, not for pursuing the truth. He noted that not only did we evolve to produce convincing arguments; we evolved to believe them. The latter only makes sense if they have some value for pursuing truth. Similarly, human collaborative-truth-seeking norms have evolved not only to produce people who seek status through peer review, but also people who assign status to peer-reviewed work. This only makes sense if peer reviewers are succeeding at (partially) separating high-quality work from low-quality work.

It is hard for a group to form a consensus around correct claims rather than incorrect ones; the costs involved are real and cannot be wished away. If you acutely feel the inconvenience of getting your work peer reviewed, and feel that this is evidence of an inefficient system, what better system were you expecting? 1) You publish your work, 2) ..., 3) its correctness becomes common knowledge. I'm sure anti-peer-review people can come up with creative proposals for step 2. Maybe they will distill the correct claims from the incorrect ones more cost-effectively than peer review does, and maybe they won't. My point is that no one should ever have expected the "consensus building" part of collaborative truth-seeking to be costless, so your experience of bearing that cost should not be considered much evidence of inefficiency.

Complaints about the slowness of peer review and the lack of uniformly high-quality reviewers, which are often supposed to strike the reader as a clear indictment of the system, remind me of complaints about high drug prices as an indictment of capitalism. The costs of scrutiny and aggregation for consensus building are real and must be borne somewhere, much like the capital costs of developing drugs. Coming face to face with those costs is just unsurprising. The other social phenomenon I am reminded of is the tendency of revolutionaries to inveigh against the status quo without feeling the need to compare it to an alternative proposal.

I have one more comment on the complaint about supposed "inferential distance" and dumb "normie" reviewers. My entire threat model of AI existential risk, including the contention that our extinction is likely under certain circumstances, has passed peer review at a venue whose brand is backed by AAAI. I suspect that someone worried about AI existential risk whose inclinations are anti-peer review would have been surprised or astonished by that.

How does Rationalist Community Attention/Consensus compare? I'd like to mention a paper of mine published at the top AI theory conference which proves that when a certain parameter of a certain agent is set sufficiently high, the agent will not aim to kill everyone, while still achieving at least human-level intelligence. This follows from Corollary 14 and Corollary 6. I am quite sure most AI safety researchers would have confidently predicted no such theorems ever appearing in the academic literature. And yet there are no traces of any minds being blown. The associated Alignment Forum post only has 22 upvotes and one comment, and I bet you've never heard any of your EA friends discuss it. It hasn't appeared, to my knowledge, in any AI safety syllabuses. People don't seem to bother investigating or discussing whether their concerns with the proposal are surmountable. I'm reluctant to bring up this example since it has the air of a personal grievance, but I think the disinterest from the Rationality Community is erroneous enough that it calls for an autopsy. (To be clear, I'm not saying everyone should be hailing this as an answer to AI existential risk, only that it should definitely be of significant interest.) [EDIT: the most upvoted comment disputes some of what is in this paragraph. Please read my response to it, and my response to his response to that. The errors in the comment are so verifiable that I'm mystified by the upvoting behavior. It's like the upvoters are looking for reasons to write this off, which is kind of exactly my point here.]

But again, if my object-level defenses of peer review do not satisfy you (which I only add because I expect some people would be interested in an attempted explanation), defer to the conventional wisdom anyway! For potential venues for AI existential safety research, I would say that AAAI, IJCAI, Synthese, and AI Magazine are especially likely to source competent and receptive reviewers, but see the publication records of AI safety researchers for more ideas. (Personal websites often have better publication info than Google Scholar).

Conventional Experience

Claim 6: Having experience or having been apprenticed to someone experienced is critically important for success, with rare exceptions.

Conventional wisdom is very unified here. It is a question that lots of people and companies care about getting right. Companies with bad hiring criteria in competitive markets would be outcompeted by companies with better ones. So look around at companies in competitive markets and note how much the companies that have survived until today value experience when they do their hiring, especially for important roles. If you want to be sure that a job is done well, don't hire an EA fresh out of college. Hire someone with a strong track record that a conventional HR department would judge as demonstrably competent. Companies hire people all the time who are not "aligned" with them, i.e., not philosophically motivated to maximize the company's profit, and it works out fine.

An EA friend of mine was looking to hire a personal assistant for his boss, and it didn't even occur to him to look for someone with twenty years of experience and a track record of success! I think it shouldn't have occurred to him to not do that. He had been more focused on finding someone who shared his boss's philosophical commitments. 

If "grown-ups" had been involved at the officer level at FTX, I claim fraud probably would not have occurred. I can't say I predicted FTX's collapse, but I didn't know it was being run by people with no experience. Once I learned the collective experience-level of the management of the Carrick Flynn campaign, I did predict that he would get very few votes. Maybe that was overdetermined, but I was much more pessimistic than my EA community friends.

Many small organizations are founded by EAs who would not succeed at being hired to run an analogous non-EA organization of similar scope and size. I (tentatively) think that these organizations, which are sometimes given the outrageously unsubstantiated denomination "effective organization", are mostly ineffective. Regardless of the frequency with which they end up ineffective, I am more confident that the process by which such organizations are deemed "effective" by the EA community is fairly disconnected from reality. It seems to me to be basically predictable from whether the organization in question is run by friends of friends of the speaker. (It is possible that certain individuals use the term more sparingly and more reliably, but the EA community as a whole seems to use it quite liberally). But I do not only object to the term, of course. Conventional wisdom does not think especially highly of the likelihood that these organizations will be of much importance to society.

Objections

Objection: conventional wisdom is that effective altruism and longtermism are problematic, and AI X-risk is non-existent, but the conventional wisdom is wrong. Conventional wisdom was even more dismissive of them several years ago. But all three have been EA-community-conventional-wisdom for years, and to the extent they have become more mainstream, it is because of people disregarding conventional wisdom while taking EA-community-conventional wisdom seriously.

Yes, these are philosophical positions, not sociological ones, so it is not so outrageous to have a group of philosophers and philosophically-minded college students outperform conventional wisdom by doing first-principles reasoning. This does not contradict my position, which regards sociological questions.

Objection: but if everyone only ever accepted peer-reviewed perspectives on AI X-risk, the field never would have gotten off the ground; initially, everyone would have dismissed it as unsubstantiated. So a conventional answer to the sociological question "Should I trust this non-peer-reviewed first-principles reasoning?" would have interrupted open-minded engagement with AI X-risk.

There is a difference between finding it worthwhile to investigate a claim ("big if true") and accepting a claim. If the president had come across LessWrong before Superintelligence was published, and he decided he shouldn't base policy off of such immature research, he would have been making an unfortunate mistake. But he wouldn't be consigning the field of AI Existential Safety to abandonment, and he would have been following a principle that would usually serve him well. So I claim it is totally acceptable and good for actual decision-makers to only consider peer-reviewed work (or the consensus of experienced academics who take peer review seriously). Meanwhile, researchers should absolutely attend to non-peer-reviewed claims that are "big if true", and aspiring researchers should seek the technical expertise they need to join their ranks. Researchers can turn these "big if true" claims into papers that can be evaluated by ordinary, high-quality peer reviewers. Research funders can aim to fund such efforts. And if you find you can't get a certain claim or idea past peer review after having tried (either as a researcher or funder), then be very reluctant to build a world-view around it.

See a recent tweet of mine on this topic within a short conversation https://twitter.com/Michael05156007/status/1595755696854810624. "Claim: if you take the claims first articulated on LessWrong that have since been defended in peer-reviewed articles and academic books, you get a set of important claims worth taking seriously. Not as true if you skip the second step."

Objection: but there are lots of true claims that appear only in non-peer-reviewed, non-academic-book AI safety research. And the AI safety community agrees.

And how many false claims that are equally well-respected? There may be some claims that are on their way to being defended in a peer-reviewed paper, or that could be if someone put in the effort. I would be interested to see prediction markets on whether particular claims will go on to be defended in a peer-reviewed article at a high-quality venue.

Currently, it seems to me that many AI safety researchers read "intuition-building" blog posts that support a claim, and consider it to be "worth looking into". And then it seems to everyone that there is lots of talk throughout the AI safety community that this claim is "worth looking into", so people think it's probably pretty plausible. And then people start doing work predicated on the claim being true, and now it's AI-Safety-community-mainstream. This story is not an argument that all intuition-building blog posts introducing a claim are false! I only aim to illustrate how easily the same process can be seeded on a false claim, since you can always discover an intuition that supports a claim.

So I claim the AI safety community does not demand sufficient rigor before internally-generated claims gain the status of commonly-accepted-by-the-community, and I claim it has let false ones through. Much of the AI safety community has largely abandoned the institution of peer review, in which the reviewers are technically experienced and live outside its echo chamber, especially when it comes to defending its core beliefs. It is just not so surprising to have many matching incorrect views among a community that engages in such insular epistemic practices in violation of conventional approaches to collaborative truth-seeking. I have not argued here that lots of un-peer-reviewed work in AI safety is in fact incorrect, but the reader should not be so confident that the field is thriving that its apparent health counts as much evidence when peer review is on trial.

I'll give an example of a specific claim whose acceptance among AI safety researchers has, in my opinion, significantly outpaced the rigor of any defense. There is peer-reviewed work arguing that sufficiently advanced reinforcement learners would under certain conditions very likely kill everyone (Cohen, 2022). Academic books (Superintelligence and Human Compatible) have made more modest but similar claims. Where is the peer-reviewed work (or academic book) arguing that sufficiently advanced supervised learners trained to imitate human behavior would probably kill everyone? I have an explanation for why it doesn't exist: imitation learners trained to imitate humans probably won't kill everyone, and the AI safety community has never demanded sufficient rigor on this point.

Objection: But what if conventional wisdom on a practical sociological question is wrong? Under this view, we (a small community of plucky and smart individuals) could never do better.

Essentially, yes. On what grounds could we have expected to? That said, I do not think conventional wisdom is fixed. I imagine that on the question of peer review, for example, economics journals would be interested in an empirical evaluation of a proposed alternative to peer review, and depending on the results, this could lead to a change in conventional wisdom in academic institutions. So if you intuit that we can do better than peer review, I would recommend getting a PhD in economics under a highly respected supervisor and learning how to investigate institutions like peer review (against proposed alternatives!) with a level of rigor that satisfies high-prestige economists. And I would recommend not putting much credence on that intuition in the meantime. Other pieces of conventional wisdom could evolve in other ways.

Objection: You say [X] about [peer review/importance of conventional experience], but actually [not X] because of [Y].

Recall that I am only offering potential explanations of certain pieces of conventional wisdom on these topics, since I think some readers would be interested in them. But I can't be sure that I am correctly diagnosing why pieces of conventional wisdom are true. I may be totally missing the point. I am engaging in rationalization of conventional wisdom, not open-minded, rational inquiry. My key claim is that we should trust conventional wisdom on practical sociological questions, regardless of the result of an open-minded, rational, gears-level inquiry, so attempting to work it out for ourselves is inappropriate here. (I understand rationalization is often bad, but not here).

Consequences

I mentioned that I believe the consequences of these problems are so bad that I would discourage effective altruists from taking advice from EA community members or taking seriously the contents of the EA Forum or the Alignment Forum. It looks to me like lots of effective altruists (not all of them) are being encouraged to work for so-called effective organizations, many of which just aim to grow the EA community, or have inexperienced management so that the blind are leading the blind, or aim to contribute to the development of EA-community consensus through the publication of unrigorous blog posts to the EA Forum or Alignment Forum, most of which would probably be judged as low-quality by external reviewers. It is not uncommon for poor execution to destroy approximately all the value of a well-considered goal.

It looks to me like the AI safety researchers in the EA community are mostly not producing useful research (in the sense of being helpful for avoiding extinction), which is a topic I'll have to spend more time on elsewhere. This depends on certain object-level claims that might be controversial in the AI safety community, but I think someone who formed their object-level beliefs only on the basis of peer-reviewed AI existential safety literature would come to the same conclusion as me.

Gratitude to EA

I think the body of written work that constitutes the philosophy and application of Effective Altruism is excellent. Likewise for longtermism. I think the body of written work on AI existential risk and safety has some excellent and life-or-death-critical chapters. And I am extremely grateful to everyone who made this possible. But my gratitude to these authors and to the colleagues who helped them hone their ideas does not compel me to trust their intuitions and deliberations on practical sociological questions. And it certainly doesn't compel me to trust the intuitions of other readers who find these authors compelling (i.e., the communities that have formed around these ideas).

Alternatives

Here is my alternative advice for people who find the philosophical ideas of effective altruism compelling.

If there are topics where you have conventionally well-respected experience at picking apart validity from nonsense, try your best. Otherwise, do what ordinary people would do: on most topics, defer to conventional wisdom. On technical topics, defer to the people conventionally deferred to (those with conventional markers of expertise), naturally excluding people who produce claims or arguments you can detect as nonsense.

Then, using that worldview, try to figure out how to do the most good as a professor, politician, civil servant/bureaucrat/diplomat, lobbyist for a think tank, or journalist--the conventional professions for changing the world with your ideas. (If there is a pressing engineering problem, maybe you could be an entrepreneur). Of course there will be exceptions here, especially if you try and fail to get one of those jobs. 80,000 Hours will have some good ideas, even if there are some suggestions mixed in that I would disagree with.

Close contact with some sort of "EA community" probably won't be very helpful in any of these pursuits, although of course it's nice to make friends. Communities are good for networking, so if you're looking for a job on Capitol Hill, by all means ask around to see if anyone has any leads. But this is a pretty low-contact use case for a community.

This is an awkward position for most EA community members to espouse. They might worry about offending a friend or a romantic partner. And most effective altruists, inasmuch as they agree with me, will spend less time conversing with EA community members (in person or online). So the question of whether this position is correct is absolutely one for which the strategy "Defer to a consensus among active EA community members" is inadequate, since there is selection pressure favoring people who find my position unintuitive.

Comments

How does Rationalist Community Attention/Consensus compare? I'd like to mention a paper of mine published at the top AI theory conference which proves that when a certain parameter of a certain agent is set sufficiently high, the agent will not aim to kill everyone, while still achieving at least human-level intelligence. This follows from Corollary 14 and Corollary 6. I am quite sure most AI safety researchers would have confidently predicted no such theorems ever appearing in the academic literature. And yet there are no traces of any minds being blown. The associated Alignment Forum post only has 22 upvotes and one comment, and I bet you've never heard any of your EA friends discuss it. It hasn't appeared, to my knowledge, in any AI safety syllabuses. People don't seem to bother investigating or discussing whether their concerns with the proposal are surmountable. I'm reluctant to bring up this example since it has the air of a personal grievance, but I think the disinterest from the Rationality Community is erroneous enough that it calls for an autopsy. (To be clear, I'm not saying everyone should be hailing this as an answer to AI existential risk, only that it should definitely be of significant interest.)

 

I'm someone who has read your work (this paper and FGOIL, the latter of which I have included in a syllabus), and who would like to see more work in similar vein, as well as more formalism in AI safety. I say this to establish my bona fides, the way you established your AI safety bona fides.

I don't think this paper is mind-blowing, and I would call it representative of one of the ways in which tailoring theoretical work for the peer-review process can go wrong. In particular, you don't show that "when a certain parameter of a certain agent is set sufficiently high, the agent will not aim to kill everyone", you show something more like "when you can design and implement an agent that acts and updates its beliefs in a certain way and can restrict the initial beliefs to a set containing the desired ones and incorporate a human into the process who has access to the ground truth of the universe, then you can set a parameter high enough that the agent will not aim to kill everyone" [edit: Michael disputes this last point, see his comment below and my response], which is not at all the same thing. The standard academic failure mode is to make a number of assumptions for tractability that severely lower the relevance of the results (and the more pernicious failure mode is to hide those assumptions).

You'd be right if you said that most AI safety people did not read the paper and come to that conclusion themselves, and even if you said that most weren't even aware of it. Very little of the community has the relevant background for it (and I would like to see a shift in that direction), especially the newcomers that are the targets of syllabi. All that said, I'm confident that you got enough qualified eyes on it that if you had shown what you said in your summary, it would have had an impact similar in scale to what you think is appropriate.

This comment is somewhat of a digression from the main post, but I am concerned that if someone took your comments about the paper at face value, they would come away with an overly negative perception of how the AI safety community engages with academic work.

The standard academic failure mode is to make a number of assumptions for tractability that severely lower the relevance of the results (and the more pernicious failure mode is to hide those assumptions).

Perhaps, but at least these assumptions are stated. Most work leans on similarly strong assumptions (for tractability, brevity, or lack of rigour meaning you don't even realise you are doing it) but doesn't state them.

I'm someone who has read your work (this paper and FGOIL, the latter of which I have included in a syllabus), and who would like to see more work in similar vein, as well as more formalism in AI safety. I say this to establish my bona fides, the way you established your AI safety bona fides.

Thanks! I should have clarified it has received some interest from some people.

you don't show that "when a certain parameter of a certain agent is set sufficiently high, the agent will not aim to kill everyone", you show something more like "when you can design and implement an agent that acts and updates its beliefs in a certain way and can restrict the initial beliefs to a set containing the desired ones and  incorporate a human into the process who has access to the ground truth of the universe, then you can set a parameter high enough that the agent will not aim to kill everyone"

"When you can design and implement an agent that acts and updates its beliefs in a certain way and can restrict the initial beliefs to a set containing the desired ones". That is the "certain agent" I am talking about.  "Restrict" is an odd word choice, since the set can be as large as you like as long as it contains the truth. "and incorporate a human into the process who has access to the ground truth of the universe." This is incorrect; can I ask you to edit your comment? Absolutely nothing is assumed about the human mentor, certainly no access to the ground truth of the universe; it could be a two-year-old or a corpse!  That would just make the Mentor-Level Performance Corollary less impressive.

I don't deny that certain choices about the agent's design make it intractable. This is why my main criticism was "People don't seem to bother investigating or discussing whether their concerns with the proposal are surmountable." Algorithm design for improved tractability is the bread and butter of computer science.

I'll edit the comment to note that you dispute it, but I stand by the comment. The AI system trained is only as safe as the mentor, so the system is only safe if the mentor knows what is safe. By "restrict", I meant for performance reasons, so that it's feasible to train and deploy in new environments.

Again, I like your work and would like to see more similar work from you and others. I am just disputing the way you summarized it in this post, because I think that portrayal makes its lack of splash in the alignment community a much stronger point against the community's epistemics than it deserves.

Thank you for the edit, and thank you again for your interest. I'm still not sure what you mean by a person "having access to the ground truth of the universe". There's just no sense I can think of where it is true that this a requirement for the mentor.

"The system is only safe if the mentor knows what is safe." It's true that if the mentor kills everyone, then the combined mentor-agent system would kill everyone, but surely that fact doesn't weight against this proposal at all. In any case, more importantly a) the agent will not aim to kill everyone regardless of whether the mentor would (Corollary 14), which I think refutes your comment. And b) for no theorem in the paper does the mentor need to know what is safe; for Theorem 11 to be interesting, he just needs to act safely (an important difference for a concept so tricky to articulate!). But I decided these details were beside the point for this post, which is why I only cited Corollary 14 in the OP, not Theorem 11.

Do you have a minute to react to this? Are you satisfied with my response?

I think that most Forum users would agree with most of what you've written here, and I don't see much in the post that I consider controversial (maybe the peer review claim?).

I gave the post an upvote, and I'm glad someone wrote down all these sensible points in one place, but I found the conventional wisdom and the vibe of "this will surprise you" to be an odd mix. (Though I might be more normie relative to the EA community than I realize?)

*****

As is the nature of comments, these will mostly be pushback/nitpicks, but the main thrust of the post reads well to me. You generally shouldn't take Forum posts as seriously as peer-reviewed papers in top journals, and you shouldn't expect your philosophy friends to take the place of a strong professional network unless there are unusual circumstances in play.

(That said, I love my philosophy friends, and I'm glad I have a tight-knit set of interesting people to hang out with whenever I'm in the right cities; that's not an easy thing to find these days. I also think the people I've met in EA are unusually good at many conventional virtues — honesty, empathy, and curiosity come to mind.)

If you want to be sure that a job is done well, don't hire an EA fresh out of college. Hire someone with a strong track record that a conventional HR department would judge as demonstrably competent. Companies hire people all the time who are not "aligned" with them, i.e., not philosophically motivated to maximize the company's profit, and it works out fine.

I've seen this debate play out many times online, but empirically, it seems to me like EA-ish orgs with a lot of hiring power (large budgets, strong brands) are more likely than other EA-ish orgs to hire people with strong track records and relevant experience.[1]

My guess is that other EA-ish orgs would also like to hire such people in many cases, but find them to be in short supply!

People with a lot of experience and competence command high salaries and can be very selective about what they do. Naively, it seems hard to convince such people to take jobs at small charities with low budgets and no brand recognition.

When I put up a job application for a copywriting contractor at CEA, my applicants were something like:

  • 60% people with weak-to-no track records and no EA experience/alignment
  • 30% people with weak-to-no track records and some EA experience/alignment
  • 10% people with good track records (some had EA experience, others didn't)

Fortunately, I had a bunch of things going for me:

  • CEA was a relatively established org with a secure budget; we gave a reliable impression and I could afford to pay competitive rates.
  • Writing and editing are common abilities that are easy to test.

This means I had plenty of people to choose from (10% was 20 applicants), and it was easy to see whether they had skills to go with their track records. I eventually picked an experienced professional from outside the EA community, and she did excellent work.

A lot of jobs aren't like this. If you're trying to find someone for "general operations", that's really hard to measure. If you're trying to find a fish welfare researcher, the talent pool is thin. I'd guess that a lot of orgs just don't have many applicants a conventional HR department would love, so they fall back on young people with good grades and gumption who seem devoted to their missions (in other words, "weak-to-no track records and some EA experience/alignment").

*****

I also think that (some) EA orgs do more to filter for competence than most companies; I've applied to many conventional jobs, and been accepted to some of those, but none of them put me through work testing anywhere near as rigorous and realistic as CEA's.

*****

If "grown-ups" had been involved at the officer level at FTX, I claim fraud probably would not have occurred. I can't say I predicted FTX's collapse, but I didn't know it was being run by people with no experience.

I'm not sure about the "probably" here: there are many counterexamples. Ken Lay and Jeff Skilling are the first who come to mind — lots of experience, lots of fraud. 

If you think that FTX was fraud borne of the need to cover up dumb mistakes, that could point more toward "experience would have prevented the dumb mistakes, and thus the fraud". And lack of experience is surely correlated with bad business outcomes...

...but also, a lot of experienced businesspeople drove their own crypto companies to ruin — I grabbed the first non-FTX crypto fraud I found for comparison, and found Alex Mashinsky at the helm. (Not an experienced financial services guy by any means, but he had decades of business experience and was the CEO of a company with nine-figure revenue well before founding Celsius.)

Many small organizations are founded by EAs who would not succeed at being hired to run an analogous non-EA organization of similar scope and size. I (tentatively) think that these organizations, which are sometimes given the outrageously unsubstantiated denomination "effective organization", are mostly ineffective. 

As people I've edited can attest, I've been fighting against the "effective" label (for orgs, cause areas, etc.) for a long time. I'm glad to have others alongside me in the fight!

Better alternatives: "Promising", "working in a promising area", "potentially highly impactful", "high-potential"... you can't just assume success before you've started.

On the "would not succeed" point: I think that this is true of EA orgs, and also all other types of orgs. Most new businesses fail, and I'm certain the same is true of new charities. This implies that most founders are bad at running organizations, relative to the standard required for success. 

(It could also imply that EA should have fewer and larger orgs, but that's a question too complicated for this comment to cover.)

  1. ^

    Open Philanthropy once ran a research hiring round with something like a thousand applicants; by contrast, when I applied to be Stuart Russell's personal assistant at the relatively new CHAI in 2018, I think I was one of three applicants.

"I've seen this debate play out many times online, but empirically, it seems to me like EA-ish orgs with a lot of hiring power (large budgets, strong brands) are more likely than other EA-ish orgs to hire people with strong track records and relevant experience."

Based on speaking to people at EA orgs (and looking at the orgs' staff lists), I disagree with this. When I have spoken to employees at CEA and Open Phil, the people I've spoken to have either (a) expressed frustration about how focused their org is on hiring EA people for roles that seem to not need it or (b) defended hiring EAs for roles that seem to not need it. (I'm talking about roles in ops, personal assistants, events, finance, etc.)

Maybe I agree with your claim that large EA orgs hire more "diversely" than small EA orgs, but what I read as your implication (large EA orgs do not prioritize value-alignment over experience), I disagree with. I read this as your implication since the point you're responding to isn't focusing on large vs. small orgs.

I could point to specific teams/roles at these orgs which are held by EAs even though they don't seem like they obviously need to be held by EAs. But that feels a little mean and targeted, like I'm implying those people are not good for their jobs or something (which is not my intent for any specific person). And I think there are cases for wanting value-alignment in non-obvious roles, but the question is whether the tradeoff in experience is worth it.

Upvoted this.

You generally shouldn't take Forum posts as seriously as peer-reviewed papers in top journals

I suspect I would advise taking them less seriously than you would advise, but I'm not sure.

It could also imply that EA should have fewer and larger orgs, but that's a question too complicated for this comment to cover

I think there might be a weak conventional consensus in that direction, yes. By looking at the conventional wisdom on this point, we don't have to deal with the complicatedness of the question--that's kind of my whole point. But even more importantly, perhaps fewer EA orgs that are not any larger; perhaps only two EA orgs (I'm thinking of 80k and OpenPhil; I'm not counting CHAI as an EA org). There is not some fixed quantity of people that need to be employed in EA orgs! Conventional wisdom would suggest, I think, that EAs should mostly be working at normal, high-quality organizations/universities, getting experience under the mentorship of highly qualified (probably non-EA) people.

I suspect I would advise taking them less seriously than you would advise, but I'm not sure.

The range of quality in Forum posts is... wide, so it's hard to say anything about them as a group. I thought for a while about how to phrase that sentence and could only come up with the mealy-mouthed version you read.

But even more importantly, perhaps fewer EA orgs that are not any larger.

Maybe? I'd be happy to see a huge number of additional charities at the "median GiveWell grantee" level, and someone has to start those charities. Doesn't have to be people in EA — maybe the talent pool is simply too thin right now — but there's plenty of room for people to create organizations focused on important causes.

(But maybe you're talking about meta orgs only, in which case I'd need a lot more community data to know how I feel.)

Conventional wisdom would suggest, I think, that EAs should mostly be working at normal, high-quality organizations/universities, getting experience under the mentorship of highly qualified (probably non-EA) people.

I agree, and I also think this is what EA people are mostly doing. 

When I open Swapcard for the most recent EA Global, and look at the first 20 attendees alphabetically (with jobs listed), I see:

  • Seven people in academia (students or professors); one is at the Global Priorities Institute, but it still seems like "studying econ at Oxford" would be a good conventional-wisdom thing to do (I'd be happy to yield on this, though)
  • Six people working in conventional jobs (this includes one each from Wave and Momentum, but despite being linked to EA, both are normal tech companies, and Wave at least has done very well by conventional standards)
  • One person in policy
  • Six people at nonprofit orgs focused on EA things

Glancing through the rest of the list, I'd say it leans toward more "EA jobs" than not, but this is a group that is vastly skewed in favor of "doing EA stuff" compared to the broad EA community as a whole, and it's still not obvious that the people with EA jobs/headed for EA jobs are a majority. 

(The data gets even messier if you're willing to count, say, an Open Philanthropy researcher as someone doing a conventionally wise thing, since you seem to think OP should keep existing.)

Overall, I'd guess that most people trying to maximize their impact with EA in mind are doing so via policy work, earning-to-give,[1] or other conventional-looking strategies; this just gets hidden by the greater visibility of people in EA roles.

I'd love to hear counterarguments to this — I've held this belief for a long time, it feels uncommon, and there's a good chance I'm just wrong.

  1. ^

    This isn't a conventional way to use money, but the part where you earn money is probably very conventional (get professional skill, use professional skill in expected way, climb the ladder of your discipline).

This is very high-quality. No disputes, just clarifications.

I don’t just mean meta-orgs.

I think working for a well-financed grantmaking organization is not outrageously unconventional, although I suspect most lean on part-time work from well-respected academics more than OpenPhil does.

And I think 80k may just be an exception (a minor one, to some extent), borne out of an unusually clear gap in the market. I think some of their work should be done in academia instead (basically whatever work it’s possible to do), but some of the very specific stuff like the jobs board wouldn’t fit there.

Also, if we imagine an Area Dad from an Onion Local News article, I don’t think his skepticism would be quite as pronounced for 80k as for other orgs like, e.g., an AI Safety camp.

Yeah, I'm not sure that people prioritizing the Forum over journal articles is a majority view, but it is definitely something that happens, and there are currents in EA that encourage this sort of thinking.

I'm not saying we should not be somewhat skeptical of journal articles. There are huge problems in the peer-review world. But forum/blog posts, or what your friends say, are not more reliable. And it is concerning that some elements of EA culture encourage you to think that they are.

Evidence for my claim, based on replies to some of Ineffective Altruism's tweets (who makes a similar critique).

1: https://twitter.com/IneffectiveAlt4/status/1630853478053560321?s=20 Look at replies in this thread

2: https://twitter.com/NathanpmYoung/status/1630637375205576704?s=20 Look at all the various replies in this thread

(If it is inappropriate for me to link to people's Twitter replies in a critical way, let me know. I feel a little uncomfortable doing this, because my point is not to name and shame any particular person. But I'm doing it because it seems worth pushing back against the claim that "this doesn't happen here." I do not want to post a name-blurred screenshot because I think all replies in the thread are valuable information, not just the replies I share, so I want to enable people to click through.)

I also think that (some) EA orgs do more to filter for competence than most companies; I've applied to many conventional jobs, and been accepted to some of those, but none of them put me through work testing anywhere near as rigorous and realistic as CEA's.


I want to push back here based on recent experience. I recently applied for a job at CEA and was essentially told that my background and experience were a perfect fit and that I aced the application, but that I was not 'tightly value aligned' and thus would not be getting the role.

CEA certainly had an extensive, rigorous application process. They just proceeded to not value the results of that application process. I expect they will hire someone with demonstrably less competence and experience for the role I applied for, but who is more 'value aligned'.

I would normally hesitate to air this sort of thing in public, but I feel this point needs to be pushed back against. As far as I can tell this sort of thing is fairly endemic within EA orgs - there seems to be a strong, strong preference for valuing ideological purity as opposed to competence. I've heard similar stories from others. My small sample size is not just that this happens, but that it happens *openly*. 

To relate this to the OP's main thesis - this is a problem other areas (for instance, politics) have already seen and confronted and we know how it plays out. It's fairly easy to spot a political campaign staffed with 'true believers' as opposed to one with seasoned, hardened campaign veterans. True Believer campaigns crash and burn at a much higher rate, controlling for other factors, because they don't value basic competence. A common feature of long time pols who win over and over is that they don't staff for agreement, they staff for experience and ability to win. 

EA's going to learn the same lesson at some point, it's just a matter of how painful that learning is going to be. There are things EA as a movement is simply not good at, and they'd be far better off bringing in non-aligned outsiders with extensive experience than hiring internally and trying to re-invent the wheel.

Hi Jeremiah. I was the hiring manager here and I think there's been something of a misunderstanding here: I don't think this is an accurate summary of why we made the decision we did. It feels weird to discuss this in public, but I consent to you publishing the full rejection email we sent, if you would like.

I don't particularly feel it would be a valuable use of anyone's time to get into a drawn out public back-and-forth debate where we both nitpick the implications of various word choices. I'll just say that if your intention was to communicate something other than "We prefer a candidate who is tightly value aligned" then there was a significant failure of communication and you shouldn't have specifically used the phrase "tightly aligned" in the same sentence as the rejection.

If the issue is that CEA communicated poorly or you misunderstood the rejection, I agree that's not necessarily worth getting into. But you've made a strong claim about how CEA makes decisions based on the contents of a message, whose author is willing to make public. It looks to me like you essentially have two choices:

  • Agree to make the message public, or

  • Onlookers interpret this as an admission that your claim was exaggerated.

I'm strongly downvoting the parent comment for now, since I don't think it should be particularly visible. I'll reverse the downvote if you release the rejection letter and it is as you've represented. 

I'm sorry you had such a frustrating experience. The work of yours I've seen has been excellent, and I hope you find a place to use your skills within EA (or keep crushing it in other places where you have the chance to make an impact).

Some hiring processes definitely revolve around value alignment, but I also know a lot of people hired at major orgs who didn't have any particular connection to EA, and who seem to just be really good at what they do. "Hire good people, alignment is secondary" still feels like the most common approach based on the processes I've seen and been involved with, but my data is anecdata (and may be less applicable to meta-focused positions, I suppose).


The consensus of most people is that conventional wisdom is pretty good when it comes to designing institutions; at least compared to what a first-principles-reasoner could come up with.

I think your characterization of conventional answers to practical sociological questions is much too charitable, and your conclusion ("there is a rot in the EA community that is so consequential that it inclines me to discourage effective altruists from putting much, if any, trust in EA community members, EA "leaders", the EA Forum, or LessWrong") is correspondingly much too strong.

Indeed, EA grew out of a recognition that many conventional answers to practical sociological questions are bad. Many of us were socialized to think our major life goals should include getting a lucrative job, buying a nice house in the suburbs, and leaving as much money as possible to our kids. Peter Singer very reasonably contested this by pointing out that the conventional wisdom is probably wrong here: the world is deeply unjust, most people and animals live very hard lives, and many of us can improve their lives while making minimal sacrifices ourselves. (And this is to say nothing about the really bad answers societies have developed for certain practical sociological questions: e.g., slavery; patrilineal inheritance; mass incarceration; factory farming.)

More generally, our social institutions are not designed to make sentient creatures' lives go well; they are designed to, for instance, maximize profits for corporations. Amazon is not usually cited as an example of an actor using evidence and reason to do as much good as possible, and company solutions are not developed with this aim in mind. (Think of another practical solution Amazon workers came up with: peeing in bottles because they risked missing their performance targets if they used the bathroom.)

I agree with many of the critiques you make of EA here, and I agree that EA could be improved by adopting conventional wisdom on some of the issues you cite. But I would suggest that your examples are cherrypicked, and that "the rot" you refer to is at least as prevalent in the broader world as it is within EA. Just because EA errs on certain practical sociological questions (e.g., peer review; undervaluing experience) does not mean that conventional answers are systematically better. 

Yeah, I strongly agree and endorse Michael's post, but this line you're drawing out is also where I struggle. Michael has made better progress on teasing out the boundaries of this line than I have, but I'm still unclear. Clearly there are cases where conventional wisdom is wrong -- EA is predicated on these cases existing.

Michael is saying that on questions of philosophy, we should not accept conventional wisdom, but that on questions of sociology, we should. I agree with you that the distinction between the sociological and the philosophical is not quite clear. I think your example of "what should you do with your life" is a good example of where the boundaries blur.

Maybe "sociological" is not quite the right framing; something along the lines of "good governance" fits better. The peer review point Michael brings up doesn't fit into that dynamic. Even though I agree with him, I think "how much should I trust peer review?" is an epistemic question, and epistemics does fall into the category where Michael thinks EAs might have an edge over conventional wisdom. That being said, even if I thought there was reason to distrust conventional wisdom on this point, I would still trust professional epistemologists over the average EA here, and I would find it hard to believe that professional epistemologists think forums/blogs are more reliable than peer-reviewed journals.

What "major life goals should include (emphasis added)" is not a sociological question. It is not a topic that a sociology department would study. See my comment that  I agree "conventional wisdom is wrong" in dismissing the philosophy of effective altruism (including the work of Peter Singer). And my remark immediately thereafter: "Yes, these are philosophical positions, not sociological ones, so it is not so outrageous to have a group of philosophers and philosophically-minded college students outperform conventional wisdom by doing first-principles reasoning".

I am not citing Amazon as an example of an actor using evidence and reason to do as much good as possible. I am citing it as an example of an organization that is effective at what it aims to do.

Maybe I'm just missing something, but I don't get why EAs have enough standing in philosophy to dispute the experts, but not in sociology. I'm not sure I could reliably predict which other fields you think conventional wisdom is or isn't adequate in.

In fields where it's possible to make progress with first-principles arguments/armchair reasoning, I think smart non-experts stand a chance of outperforming. I don't want to make strong claims about the likelihood of success here; I just want to say that it's a live possibility. I am much more comfortable saying that outperforming conventional wisdom is extremely unlikely on topics where first-principles arguments/armchair reasoning are insufficient.

(As it happens, EAs aren't really disputing the experts in philosophy, but that's beside the point...)

So basically, just philosophy, math, and some very simple applied math (like, say, the exponential growth of an epidemic), but already that last example is quite shaky.
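
As a quick gesture at why even that example is shaky: a first-principles exponential projection is extremely sensitive to the estimated growth rate. A toy calculation, with all numbers made up for illustration:

```python
import math

# Toy illustration (made-up numbers): an exponential projection
# N(t) = N0 * exp(r * t) swings wildly with the estimated growth rate r.
N0 = 1000   # current case count
days = 30

for r in (0.10, 0.15, 0.20):  # plausible-looking daily growth-rate estimates
    doubling_time = math.log(2) / r
    projected = N0 * math.exp(r * days)
    print(f"r = {r:.2f}/day: doubling every {doubling_time:.1f} days, "
          f"~{projected:,.0f} cases after {days} days")
```

A modest-looking difference in the estimated rate moves the 30-day projection by more than a factor of twenty, which is part of why even "simple applied math" only sometimes gives armchair reasoners an edge.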

I think the crux of the disagreement is this: you can't disentangle the practical sociological questions from the normative questions this easily. E.g., the practical solution to "how do we feed everyone" is "torture lots of animals" because our society cares too much about having cheap, tasty food and too little about animals' suffering. The practical solution to "what do we do about crime" is "throw people in prison for absolutely trivial stuff" because our society cares too much about retribution and too little about the suffering of disadvantaged populations. And so on. Practical sociological solutions are always accompanied by normative baggage, and much of this normative baggage is bad. 

EA wouldn't be effective if it just made normative critiques ("the world is extremely unjust") but didn't generate its own practical solutions ("donate to GiveWell"). EA has more impact than most philosophy departments because it criticizes many conventional philosophical positions while also generating its own practical sociological solutions. This doesn't mean all of those solutions are right—I agree that many aren't—but EA wouldn't be EA if it didn't challenge conventional sociological wisdom.

(Separately, I'd contest that this is not a topic of interest to sociologists. Most sociology PhD curricula devote substantial time to social theory, and a large portion of sociologists are critical theorists; i.e., they believe that "social problems stem more from social structures and cultural assumptions than from individuals... [social theory] argues that ideology is the principal obstacle to human liberation.")

I think an exaggerated version of this is a big part of what went wrong with Leverage.

I wasn’t really a fan of framing this as a “rot”. I worry that this tilts people towards engaging with this topic more emotionally rather than rationally.

I thought you made some good points, however: Regarding peer review, I expect that one of the major cruxes here is timelines and whether engaging more with the peer-review system would slow down work too much. Regarding hiring value-aligned people, I thought that you didn’t engage much with the reasons why people tend to think this is important (ability to set people ill-defined tasks which you can’t easily evaluate + worry about mission drift over longer periods of time).

One thing I struggle with in discourse is expressing agreement. Agreeing seems less generative, since I often don't have much more to say than "I agree with this and think you explain it well." I strongly agree with this post, and am very happy you made it. I have some questions/minor points of disagreement, but want to focus on what I agree with before I get to that, since I overwhelmingly agree and don't want to detract from your point.

The sentiment "we are smarter than everyone and therefore we distrust non-EA sources" seems pervasive in EA. I love a lot about EA, I am a highly engaged member. But that sentiment is one of the worst parts about EA (if not the worst). I believe it is  highly destructive to our ability to achieve our aims of doing good effectively. 

Some sub-communities within EA seem to do better at this than others. That being said, I think every element of EA engages in this kind of thinking to some extent. I don't know if I've ever met any EA who didn't think it on some level. I definitely have a stream of this within me. 

But, there is a much softer, more reasonable version of that sentiment. Something like "EA has an edge in some domains, but other groups also have worthwhile contributions." And I've met plenty of EAs who operate much more on this more reasonable line than the excessively superior sentiment described above. Still, it's easy to slip into the excessively superior sentiment and I think we should be vigilant to avoid it.

------

Onto some more critical questions/thoughts.

My epistemics used to center around "expert consensus." The COVID-19 pandemic changed that. Expert consensus seemed to frequently be wrong, and I ended up relying much more on individuals with a proven track record, like Zeynep Tufekci (not that I trust her on everything). I'm still not sure what my epistemics are, but I've moved towards a forecasting-based model, where I most trust people with a proven track record of getting things right, rather than experts. But it's hard to find people with this proven track record, so I almost always still default to trusting experts. I certainly don't think forum/blog posts fit into the "proven track record" category, unless it's the blog of someone with a proven track record. But "proven track record" is still a very high standard; Zeynep is literally the only person I know who fits the bill. My worry with people using a "forecaster > expert" model is that they won't have a high enough standard for what qualifies someone as a trustworthy forecaster. I'm wondering what your thoughts are on a forecaster model.
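
To make "proven track record" slightly more concrete, one standard way to operationalize it is to score someone's past probabilistic forecasts with a proper scoring rule such as the Brier score. Here is a minimal sketch; the forecaster labels, probabilities, and outcomes are all invented purely for illustration.

```python
# Minimal sketch: ranking sources by Brier score (lower is better).
# All forecasters, probabilities, and outcomes below are invented for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Probabilities each source assigned to the same five past yes/no questions
track_records = {
    "well-calibrated forecaster": [0.9, 0.2, 0.7, 0.1, 0.8],
    "overconfident pundit":       [1.0, 0.0, 1.0, 0.0, 1.0],
}
outcomes = [1, 0, 1, 0, 0]  # what actually happened

for name, forecasts in track_records.items():
    print(f"{name}: Brier score = {brier_score(forecasts, outcomes):.3f}")
```

In this toy example the pundit calls four of the five questions exactly, but the single confident miss costs more than the forecaster's accumulated small errors, which is the sense in which proper scoring rewards calibration rather than boldness.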

Another question I have concerns the slowness of peer review, which does strike me as a legitimate issue. But I am not in the AI field at all, so I have very little knowledge here. I would still like to see AI researchers make more of an effort to get their work peer-reviewed, but I wonder if there might be some dual system: less time-sensitive reports get peer reviewed and are treated with a high level of trust, while more time-sensitive reports don't go through as rigorous a process but are still shared, albeit with a lower level of trust. I'm really not sure, but some sort of dual system seems necessary to me. Surely we can't totally disregard all non-peer-reviewed work?

I upvoted this post and am strongly in favour of more institutional expertise in the community.

However, to take the example of peer review specifically, my sense is that:

  • academics themselves have criticized the peer review system a great deal for various reasons, including predatory journals, incentive problems, publication bias, Why Most Published Research Findings Are False, etc
  • people outside academia, e.g. practitioners in industry, are often pretty sharply skeptical of the usefulness of academic work, or unaware of it entirely,
  • the peer review system didn't obviously arise from a really intentional and thoughtful design effort (though maybe this is just my historical ignorance?), and there are institutional-inertia reasons why it would be hard to replace with something better even once we had a good proposal,
  • at the time the peer review system was developed, an enormous amount of our modern tools for communication and information search, processing, dissemination etc. didn't exist, so it really arose in an environment quite different from the one it's now in.

It feels to me both like the consensus view isn't as strongly in favour of peer review as you suggest, and that there are some structural reasons to think that the dominance of peer review in academic contexts isn't so strong an indicator of its fitness.

I recognise my above criticisms have holes in them, but they seemed worth airing anyway, to at least gesture at why people might end up where they are on this topic.

(Also, obviously I've done nothing to demonstrate that forum posts aren't worse on every axis. I just think that if we're really in the situation of "we should use this moderately terrible system instead of this extremely terrible system", we need to acknowledge that if we're going to get people who can see the terribleness on board.)

academics themselves have criticized the peer review system a great deal for various reasons, including predatory journals, incentive problems, publication bias, Why Most Published Research Findings Are False, etc

I think we could quibble over the scale and importance of all of these points, but I'm not prepared to confidently deny any of them. The important point I want to make is: compared to what alternatives? The problem is hard, and even the best solution can be expected to have many visible imperfections. How persuaded would you be by a revolutionary enumerating the many weaknesses of democratic government without comparing them to a proposed (often imagined) alternative?

people outside academia, e.g. practitioners in industry, are often pretty sharply skeptical of the usefulness of academic work, or unaware of it entirely,

Again, compared to what alternative? I'm guessing they would say that at a company, the "idea people" get their ideas tested by the free market, and this grounds them and makes their work practical. I am willing to believe that free-market testing is a more reliable institution than peer review for evaluating certain propositions, and I am willing to buy that this is conventionally well understood. But for many propositions, I would think there is no way to "productize" them such that the profitability of the product is logically equivalent to the truth of the proposition. (This may not be the alternative you had in mind at all, if you had an alternative in mind.)

the peer review system didn't obviously arise from a really intentional and thoughtful design effort

This is definitely not a precondition for a successful social institution.

at the time the peer review system was developed, an enormous amount of our modern tools for communication and information search, processing, dissemination etc. didn't exist, so it really arose in an environment quite different from the one it's now in.

But conventional wisdom persists in endorsing peer-review anyway!

This is definitely not a precondition for a successful social institution.

I want to differentiate two kinds of success for a social institution:

  1. "reproductive" success, by analogy with evolution: how well the institution establishes and maintains itself as dominant,
  2. success at stated goals: for peer review, success at finding the truth, producing high quality research, etc.

Your argument seems to be (at least in part) that because peer review has achieved success 1, that is strong evidence that it's better than its alternatives at success 2. My argument (in part) is that this is only true if the two kinds of success have some mechanism tying them together. Some example mechanisms could be:

  • the institution achieved reproductive success by means of being pushed really hard by people motivated by the desire to build and maintain a really high quality system,
  • the institution is easy to replace with better systems, and better systems are easy to try, so the fact that it hasn't been replaced must mean better systems are hard to find.

I don't think either of these things are true of peer review. (The second is true of AWS, for example.) So what's the mechanism that established peer review as the consensus system that relates to it being a high quality system?

(I'm not saying I have alternatives, just that "consensus means a thing is good" is only sometimes a good argument.)

For practical sociological questions that many people want to get right, if there is a conventional answer, you should go with the conventional answer

I'm not confident that this is right. I'm thinking of conventional answers to common questions (although I suppose it depends on what you count as sociological) that tend not to be great. Ideas such as:

  • If you want to find a romantic partner, conventional wisdom says to make yourself as appealing to as wide an audience as possible: remove quirks, be bland, don't be weird. Whereas I think it would be far better to filter out the people who wouldn't be a good match.
  • If you want to be happy, conventional wisdom says that you should get a well-paid and respectable job. Whereas much of the literature (both psychological research and Buddhist psychology) suggests other methods, such as this paper, or Happiness: Lessons from a New Science.

I'm sure there are others, and I don't want to claim that conventional answers are never right (for example, for finding a romantic partner, having good conversational skills and grooming yourself are conventional wisdom that I agree with). But I would be wary of the claim that "if many people want to get a question right and there is a conventional answer, you should go with the conventional answer."

Hi Michael,

Nice post!

Claim 1: If there is a large group of people trying to answer a question correctly, and they are trying (almost) as hard as you to answer it correctly, any consensus of theirs is more likely to be correct than you.

In general, I think the consensus of a group of experts is more accurate than that of the most knowledgeable person or the whole crowd. However, the optimal size of the group of experts depends on the specific question, and it may often be much smaller than one thinks. From Mannes 2014:

Through both simulation and an analysis of 90 archival data sets, we show that select crowds of 5 knowledgeable judges yield very accurate judgments across a wide range of possible settings—the strategy is both accurate and robust.
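
As a rough illustration of that result (a toy simulation with made-up noise levels, not a reproduction of the paper's archival analysis): when expertise is very unequal, averaging a small select crowd can beat both the single best judge and the full crowd.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 100.0
N_TRIALS = 5000

# Hypothetical skill profile: 5 genuine experts (low error) among 45 much
# noisier judges. The numbers are made up purely for illustration.
error_sds = [5.0] * 5 + [40.0] * 45

def trial():
    estimates = [random.gauss(TRUE_VALUE, sd) for sd in error_sds]
    whole_crowd = statistics.mean(estimates)       # average of all 50 judges
    select_crowd = statistics.mean(estimates[:5])  # average of the 5 experts
    best_single = estimates[0]                     # the single most skilled judge
    return (abs(whole_crowd - TRUE_VALUE),
            abs(select_crowd - TRUE_VALUE),
            abs(best_single - TRUE_VALUE))

results = [trial() for _ in range(N_TRIALS)]
labels = ["whole crowd (50)", "select crowd (5)", "best single judge"]
for i, label in enumerate(labels):
    print(f"{label}: mean abs error = {statistics.mean(r[i] for r in results):.2f}")
```

With these assumed numbers the five-expert average wins clearly; flatten the skill distribution and the full crowd catches up, which is one way of seeing why the optimal group size depends on the question.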

So if you intuit that we can do better than peer review, I would recommend getting a PhD in economics under a highly-respected supervisor, and learning how to investigate institutions like peer review (against proposed alternatives!) with a level of rigor that satisfies high-prestige economists.

Wow, that's really specific. Are you trying to evoke Robin Hanson? Before following him, ask him if it's a good idea. I think he regrets his path.

Robin Hanson didn't occur to me when I wrote it or any of the times I read it! I was just trying to channel what I thought conventional advice would be.
