Geoffrey Miller

Psychology Professor @ University of New Mexico
8267 karma · Joined Jan 2017 · Working (15+ years) · Albuquerque, NM, USA
www.primalpoly.com/

Bio


Evolutionary psychology professor, author of 'The Mating Mind', 'Spent', 'Mate', & 'Virtue Signaling'. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.

How others can help me

Looking to collaborate on (1) empirical psychology research related to EA issues, especially attitudes towards longtermism, X risks and GCRs, and sentience; (2) insights for AI alignment & AI safety from evolutionary psychology, evolutionary game theory, and evolutionary reinforcement learning; (3) mate choice, relationships, families, pronatalism, and population ethics as cause areas.

How I can help others

I have 30+ years' experience in behavioral sciences research and have mentored 10+ PhD students and dozens of undergrad research assistants. I'm also experienced with popular science outreach, book publishing, public speaking, social media, market research, and consulting.

Comments
665

Nicholas - thanks for posting this helpful summary of these empirical studies.

I do find it somewhat sad and alarming that so many EAs seem to be delaying or avoiding having kids, out of fear that this will 'impair productivity'. 

Productivity-maxxing can be a false god - and this is something that's hard to understand until one becomes a parent.

Just as money sent to charities can vary 100x in terms of actual effectiveness, 'productivity' can vary hugely in terms of actual impact in the world. 

Lots of academic parents I know (including me) realized, after having kids, that they had been spending huge amounts of time doing stuff that seemed 'productive' or 'fun' at the time, but that wasn't actually aligned with their genuine long-term goals and values. Some of this time was spent on self-indulgent status-seeking, credentialism, careerism, workaholism, networking, etc. Some of it was spent on habit-forming but unfulfilling forms of leisure (TV, video games, light reading). Much of it was mating effort to find and retain sexual partners. And some of it was spent feeling depressed, anxious, etc., wondering about the meaning of life -- concerns that tend to evaporate when you start spending more time enjoying the company of your kids, when the 'meaning of life' becomes bittersweetly apparent.

Jason - fair point. 

Except that all psychological traits are heritable, so offspring of smart, conscientious, virtuous EAs are likely to be somewhat smarter, more conscientious, and more virtuous than average offspring.
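A rough way to quantify 'somewhat', as a back-of-the-envelope sketch (the heritability and parental values below are purely illustrative assumptions, not estimates for any specific trait or population):

expected offspring value ≈ population mean + h^2 × (midparent value − population mean)

So with an assumed narrow-sense heritability of h^2 = 0.5 and parents averaging one standard deviation above the mean on a trait, the expected offspring value would be about 0.5 SD above the mean -- elevated, but regressed partway back toward the population average.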

I think it's important for EA to avoid partisan political fights like this - they're not neglected cause areas, and they're often not tractable.

It's easy for the Left to portray the 'far right' as a 'threat to democracy' in the form of 'fascist authoritarians'.

It's also easy for the Right to portray the 'far left' as a 'threat to democracy' in the form of 'socialist authoritarians'.

The issue of immigration (e.g. as raised by the AfD) is especially tricky and controversial, in terms of whether increased immigration of people with anti-democratic values (e.g. fundamentalist religious values) into Western democracies would be a good or a bad thing.

So many political groups are already fighting over these issues. It would dilute EA's focus, and undermine our non-partisan credibility, to get involved in these things.

Kyle - I just completed the survey yesterday. I did find it very long and grueling. I worry that you might get lower-quality data in the last half of the survey, due to participant fatigue and frustration.

My suggestion -- speaking as a psych professor who's run many surveys over the last three decades -- is to develop a shorter survey (no more than 25 minutes) that focuses on your key empirical questions, and try to get a good large sample for that. 

I just reposted your X/Twitter recruitment message, FWIW:

https://twitter.com/law_fiore/status/1706806416931987758 

Good luck! I might suggest doing a shorter follow-up survey in due course -- 90 minutes is a big time commitment for $15 payment!

Johanna - thanks very much for sharing this fascinating, important, and useful research! Hope lots of EAs pay attention to it.

Hayven - there's a huge, huge middle ground between reckless e/acc ASI accelerationism on the one hand, and stagnation on the other hand.

I can imagine a moratorium on further AGI research that still allows awesome progress on all kinds of wonderful technologies such as longevity, (local) space colonization, geoengineering, etc -- none of which require AGI. 

Isaac -- good, persuasive post. 

I agree that p(doom) is rhetorically ineffective -- to normal people, it just looks weird, off-putting, pretentious, and depressing. Most folks out there have never taken a probability and statistics course, and don't know what p(X) means in general, much less p(doom). 

I also agree that p(doom) is way too ambiguous, in all the ways you mentioned, plus another crucial way: it isn't conditioned on anything we actually do about AI risk. Our p(doom) given an effective global AI regulation regime might be a lot lower than p(doom) if we do nothing. And the fact that p(doom) isn't conditioned on our response to p(doom) creates a sense of fatalistic futility, as if p(doom) is a quantitative fact of nature, like the Planck constant or the Coulomb constant, rather than a variable that reflects our collective response to AI risks, and that could go up or down quite dramatically given human behavior.
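To make the conditioning point concrete, here's a toy decomposition via the law of total probability (the numbers are purely illustrative assumptions, not forecasts):

P(doom) = P(doom | strong global regulation) × P(strong global regulation) + P(doom | business as usual) × P(business as usual)

e.g. 0.05 × 0.4 + 0.30 × 0.6 = 0.02 + 0.18 = 0.20

A single headline p(doom) bundles together the conditional risk estimates and a forecast of our own collective response, which is exactly what makes it feel like a fixed constant of nature rather than something our choices can move.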

Caleb - thanks for this helpful introduction to Zach's talents, qualifications, and background -- very useful for those of us who don't know him!

I agree that EA organizations should try very hard to avoid entanglements with AI companies such as Anthropic - however well-intentioned they seem. We need to be able to raise genuine concerns about AI risks without feeling beholden to AI corporate interests.

Malo - bravo on this pivot in MIRI's strategy and priorities. Honestly it's what I've hoped MIRI would do for a while. It seems rational, timely, humble, and very useful! I'm excited about this.

I agree that we're very unlikely to solve 'technical alignment' challenges fast enough to keep AI safe, given the breakneck rate of progress in AI capabilities. If we can't speed up alignment work, we have to slow down capabilities work. 

I guess the big organizational challenge for MIRI will be whether its current staff, who may have been recruited largely for their technical AI knowledge, general rationality, and optimism about solving alignment, can pivot towards this more policy-focused and outreach-focused agenda -- which may require quite different skill sets. 

Let me know if there's anything I can do to help, and best of luck with this new strategy!
