Geoffrey Miller

Psychology Professor @ University of New Mexico
8331 karma · Joined Jan 2017 · Working (15+ years) · Albuquerque, NM, USA
www.primalpoly.com/

Bio


Evolutionary psychology professor, author of 'The Mating Mind', 'Spent', 'Mate', & 'Virtue Signaling'. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.

How others can help me

Looking to collaborate on (1) empirical psychology research related to EA issues, especially attitudes towards longtermism, X risks and GCRs, and sentience; (2) insights for AI alignment & AI safety from evolutionary psychology, evolutionary game theory, and evolutionary reinforcement learning; (3) mate choice, relationships, families, pronatalism, and population ethics as cause areas.

How I can help others

I have 30+ years of experience in behavioral sciences research, and have mentored 10+ PhD students and dozens of undergrad research assistants. I'm also experienced with popular science outreach, book publishing, public speaking, social media, market research, and consulting.

Comments (675)

Yarrow - I'm curious which bits of what I wrote you found 'psychologically implausible'?

Beautiful and inspiring. Thanks for sharing this.

I hope more EAs think about turning abstract longtermist ideas into more emotionally compelling media!

mikbp: good question. 

Finding meaningful roles for ordinary folks ('mediocrities') is a big challenge for almost every human organization, movement, and subculture. It's not unique to EA -- although EA does tend to be quite elitist (which is reasonable, given that many of its core insights and values require a very high level of intelligence and openness to understand).

The usual strategy for finding roles for ordinary people in organizations is to create hierarchical structures in which the ordinary people are bossed around/influenced/deployed by more capable leaders. This requires a willingness to accept hierarchies as ethically and pragmatically legitimate -- which tends to be more of a politically conservative thing, and might conflict with EA's tendency to attract anti-hierarchical liberals.

Of course, such hierarchies don't need to involve full-time paid employment. Every social club, parent-teacher association, neighborhood association, amateur sports team, activist group, etc. involves hierarchies of part-time volunteers. They don't expect full-time commitments. So they're often pretty good at including people who are average both in terms of their traits and abilities, and in terms of the time they have available for doing stuff beyond their paid jobs, child care, and other duties.

Counterpoints:

  1. Humans are about as good and virtuous as we could reasonably expect from a social primate that has evolved through natural selection, sexual selection, and social selection (I've written extensively on this in my 5 books).
  2. Human life has been getting better, consistently, for hundreds of years. See, e.g. Steven Pinker (2018) 'Enlightenment Now'.
  3. Factory farming would be ludicrously inefficient for the first several decades, at least, of any Moon or Mars colonies, so it would simply not happen.

My more general worry is that this kind of narrative -- 'humans are horrible, we mustn't colonize space and spread our horribleness elsewhere' -- feeds the 'effective accelerationist' (e/acc) cult that thinks we'd be better replaced by AIs.

A brief meta-comment on critics of EAs, and how to react to them:

We're so used to interacting with each other in good faith, rationally and empirically, constructively and sympathetically, according to high ethical and epistemic standards, that we EAs have real trouble remembering some crucial facts of life:

  • Some people, including many prominent academics, are bad actors, vicious ideologues, and/or Machiavellian activists who do not share our world-view, and never will
  • Many people engaged in the public sphere are playing games of persuasion, influence, and manipulation, rather than trying to understand or improve the world
  • EA is emotionally and ideologically threatening to many people and institutions, because insofar as they understand our logic of focusing on tractable, neglected, big-scope problems, they realize that they've wasted large chunks of their lives on intractable, overly popular, smaller-scope problems; and this makes them sad and embarrassed, which they resent
  • Most critics of EA will never be persuaded that EA is good and righteous. When we argue with such critics, we must remember that we are trying to attract and influence onlookers, not trying to change the critics' minds (which are typically unchangeable).

I think there's a huge difference in potential reach between a major TV series and a LessWrong post.

According to this summary from the Financial Times, as of March 27, '3 Body Problem' had received about 82 million view-hours, equivalent to about 10 million people worldwide watching the whole 8-part series. It was a top-10 Netflix series in over 90 countries.

Whereas a good LessWrong post might get 100 likes. 

We should be more scope-sensitive about public impact!

PS: Fun fact: after my coauthor Peter Todd (Indiana U.) and I read the '3 Body Problem' novel in 2015, we were invited to a conference on 'active Messaging to Extraterrestrial Intelligence' ('active METI') at the Arecibo radio telescope in Puerto Rico. Inspired by Liu Cixin's book, we gave a talk about the extreme risks of active METI, which we then wrote up as this journal paper, published in 2017:

PDF here

Journal link here

Title: The Evolutionary Psychology of Extraterrestrial Intelligence: Are There Universal Adaptations in Search, Aversion, and Signaling?

Abstract
To understand the possible forms of extraterrestrial intelligence (ETI), we need not only astrobiology theories about how life evolves given habitable planets, but also evolutionary psychology theories about how intelligence emerges given life. Wherever intelligent organisms evolve, they are likely to face similar behavioral challenges in their physical and social worlds. The cognitive mechanisms that arise to meet these challenges may then be copied, repurposed, and shaped by further evolutionary selection to deal with more abstract, higher-level cognitive tasks such as conceptual reasoning, symbolic communication, and technological innovation, while retaining traces of the earlier adaptations for solving physical and social problems. These traces of evolutionary pathways may be leveraged to gain insight into the likely cognitive processes of ETIs. We demonstrate such analysis in the domain of search strategies and show its application in the domains of emotional aversions and social/sexual signaling. Knowing the likely evolutionary pathways to intelligence will help us to better search for and process any alien signals from the search for ETIs (SETI) and to assess the likely benefits, costs, and risks of humans actively messaging ETIs (METI).

'3 Body Problem' is a new 8-episode Netflix TV series that's extremely popular, highly rated (7.8/10 on IMDB), and based on the bestselling 2008 science fiction book by Chinese author Liu Cixin. 

It raises a lot of EA themes, e.g. extinction risk (for both humans & the San-Ti aliens), longtermism (planning 400 years ahead against alien invasion), utilitarianism (e.g. sacrificing a few innocents to save many), cross-species empathy (e.g. between humans & aliens), global governance to coordinate against threats (e.g. Thomas Wade, the UN, the Wallfacers), etc.

Curious what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students?

Well, Leif Wenar seems to have written a hatchet job that's deliberately misleading about EA values, priorities, and culture.

The usual anti-EA ideologues are celebrating about Wired magazine taking such a negative view of EA.

For example, the leader of the 'effective accelerationist' movement, 'Beff Jezos' (aka Guillaume Verdon), wrote this post on X, linking to the Wenar piece and saying simply 'It's over. We won'. This is presumably a reference to EA people working on AI safety being a bunch of Luddite 'decels' who want to stop the glorious progress towards ASI replacing all of humanity, and to the Wenar piece permanently discrediting all attempts to slow AI or advocate for AI safety.

So, apart from nitpicking everything that Wenar gets wrong, we should pay attention to the broader cultural context, in which he's seen as a pro-AI e/acc hero for dissing all attempts at promoting AI safety and responsible longtermism.

David - this is a helpful and reasonable comment.

I suspect that many EAs tactically and temporarily suppressed their use of EA language after the FTX debacle, when they knew that EA had suffered a (hopefully transient) setback.

This may actually be quite analogous to the cyclical patterns of outreach and enthusiasm that we see in crypto investing itself. The post-FTX 2022-2023 bear market in crypto was reflected in a lot of 'crypto influencers' just not talking very much about crypto for a year or two, when investor sentiment was very low. Then, as the price action picked up in the last half of 2023 through now, optimism returned, and the Bitcoin ETFs got approved by the SEC, people started talking about crypto again. So it has gone, with every 4-year cycle in crypto.

The thing to note here is that in the dark depths of the 'crypto winter' (esp. early 2023), it seemed like confidence and optimism might never return. (Which is, of course, why token prices were so low.) But things did improve, as the short-term sting of the FTX scandal faded.

So, hopefully, things might go with EA itself, as we emerge from this low point in our collective sentiment.
