Comment author: selfactualizer 09 March 2018 07:04:30PM 3 points

Great content. I just pored through it looking for feedback to give, but the content is really great. My only note: if this is going to be done as a presentation in June, I think it could be a lot more engaging with less written text on the slides.

Comment author: geoffreymiller (EA Profile) 09 March 2018 09:49:02PM 2 points

Yes, I always put too much text on slides the first few times I present on a new topic, and then gradually strip it away as I remember better what my points are. Thanks!

12

Cognitive and emotional barriers to EA's growth

This morning I gave a colloquium to my Psychology Department here at University of New Mexico. Most of the 30+ audience members had never heard of EA, although a few had a vague idea about it. I analyzed 10 cognitive and emotional barriers that people face in accepting EA approaches... Read More
8

Global catastrophic financial risks?

Have there been any good analyses of possible global catastrophic financial risks? I'm thinking of issues such as: 1) narrow AI traders cornering global capital markets through more efficient predictions, trades, and arbitrage, so ordinary folks are left with near-zero equity and pensions; 2) blockchain and cryptocurrency technologies disrupting fiat-currency-denominated... Read More
Comment author: Jon_Behar 26 January 2018 07:09:42PM 2 points

I run The Life You Can Save’s “Giving Game” project: we give students (among others) real money to donate in a structured decision-making process (e.g. choosing between pre-selected charities representing major EA causes). Let me know if you’d be interested in incorporating a GG into this class or future iterations. I’d be happy to discuss ways to tailor the model to your needs, explain what other teachers have done, etc. Background here: https://www.thelifeyoucansave.org/giving-games

Comment author: geoffreymiller (EA Profile) 29 January 2018 12:02:25AM 0 points

Thank you! I'll check it out.

17

New Effective Altruism course syllabus

I've developed a new course called 'The Psychology of Effective Altruism' (Psych450) that I'm teaching this spring term here at the University of New Mexico. The syllabus (including an extensive list of required and optional readings and videos) is here: https://www.primalpoly.com/s/syllabus-draft-jan24d.docx  Feel free to borrow any of this material if you teach... Read More
9

Ideological engineering and social control: A neglected topic in AI safety research?

Will enhanced government control of populations' behaviors and ideologies become one of AI's biggest medium-term safety risks? For example, China seems determined to gain a decisive lead in AI research by 2030, according to the new plan released this summer by its State Council: https://www.newamerica.org/documents/1959/translation-fulltext-8.1.17.pdf One of China's... Read More
In response to Open Thread #38
Comment author: William_S 23 August 2017 05:21:43PM 3 points

Any thoughts on individual-level political de-polarization in the United States as a cause area? It seems important, because a functional US government helps with a lot of things, including x-risk. I don't know whether there are tractable/neglected approaches in the space. It seems possible that interventions on individuals that are intended to reduce polarization and promote understanding of other perspectives, as opposed to pushing a particular viewpoint or trying to lobby politicians, could be neglected. http://web.stanford.edu/~dbroock/published%20paper%20PDFs/broockman_kalla_transphobia_canvassing_experiment.pdf seems like a useful study in this area (it seems possible that this approach could be used for issues on the other side of the political spectrum)

In response to comment by William_S on Open Thread #38
Comment author: geoffreymiller (EA Profile) 28 August 2017 11:30:09PM 0 points

Heterodox Academy also has this new online training for reducing polarization and increasing mutual understanding across the political spectrum: https://heterodoxacademy.org/resources/viewpoint-diversity-experience/

Comment author: JamesDrain 25 August 2017 03:29:13AM 5 points

I have a fully-formed EA board game that I debuted at EA Global in San Francisco a couple of weeks ago. EAs seem to really like it! You can see over one hundred of the game's cards here: https://drive.google.com/open?id=0Byv0L8a24QNJeDhfNFo5d1FhWHc

The way the game works is that every player has a random private morality that they want to satisfy (e.g. preference utilitarianism, hedonism, sadism, nihilism), and all players also want to collaboratively achieve normative good (accumulating 1,000 human QALYs, 10,000 animal QALYs, and 10 x-risk points). Players get QALYs and x-risk points by donating to charities and answering trivia questions.

The coolest part of the game is the reincarnation mechanic: every player has a randomly chosen income taken from the real-world global distribution of wealth. Players also unlock animal reincarnation mode after stumbling upon the bad giant pit of suffering (the modal outcome of unlocking animal reincarnation is to be stuck as a chicken until the pit of suffering is destroyed, or until a friendly human acquires a V(eg*n) card).

I'm also thinking about turning the game into an app or computer game, but I'll probably need an experienced coder to help me with that.

In response to comment by JamesDrain on Open Thread #38
Comment author: geoffreymiller (EA Profile) 28 August 2017 11:27:50PM 0 points

Cool idea, although I think domain-specific board games might be more intuitive and vivid for most people -- e.g. a set on X-risks (one on CRISPR-engineered pandemics, one on an AGI arms race), one on deworming, one on charity evaluation with strategic conflict between evaluators, charities, and donors, a modified 'Game of Life' based on 80,000 Hours principles, etc.

Comment author: geoffreymiller (EA Profile) 28 August 2017 11:22:19PM 8 points

Fascinating post. I agree that we shouldn't compare LAWs to (a) hypothetical, perfectly consequentialist, ethically coherent, well-trained philosopher-soldiers, but rather to (b) soldiers as the order-following, rules-of-engagement-implementing, semi-roboticized agents they're actually trained to become.

A key issue is the legitimacy of the LAWs' chain of command, and how that legitimacy is secured.

Mencius Moldbug had some interesting suggestions in Patchwork about how a 'cryptographic chain of command' over LAWs could actually increase the legitimacy and flexibility of governance over lethal force. https://www.amazon.com/dp/B06XG2WNF1

Suppose a state has an armada/horde/flock of formidable LAWs that can potentially destroy or pacify the civilian populace -- an 'invincible robot army'. Who is permitted to issue orders? If the current political leader is voted out of office, but they don't want to leave, and they still have the LAWs' 'launch codes', what keeps them from using LAWs to subvert democracy? In a standard human-soldier/secret-service-agent scenario, the soldiers and agents have been socialized to respect the outcomes of democratic elections, and would balk at defending the would-be dictator. They would literally escort him/her out of the White House. In the LAWs scenario, the soldiers/agents would be helpless against the local LAWs under the head of state's control. The robot army would escort the secret service agents out of the White House until they accept the new dictator.

In other words, I'm not as worried about interstate war or intrastate protests; I'm worried about LAWs radically changing the incentives and opportunities for outright dictatorship. Under the Second Amendment, the standard countervailing force against dictatorship is supposed to be civilian ownership of near-equivalent tech that poses a credible threat against dictatorial imposition of force. But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy LAWs too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against gov't LAWs).

I guess this is just another example of an alignment problem - in this case between the LAWs and the citizens, with the citizens somehow able to collectively overrule a dictator's 'launch codes'. Maybe every citizen has their own crypto key, and they use some kind of blockchain voting system to decide what the LAWs do and whom they obey. This then opens the way to majoritarian mob rule, with LAWs forcibly displacing or genociding targeted minorities -- unless the LAWs embody some 'human/constitutional rights interrupts' that prevent such bullying.
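To make that last idea concrete, here's a deliberately toy sketch of a majority-of-keyholders authorization rule (the names and the bare-majority threshold are my own illustrative assumptions, not a design from Moldbug or anyone else):

# Toy sketch: an order is authorized only if a majority of citizen keyholders sign it.
from dataclasses import dataclass

@dataclass
class Citizen:
    key_id: str       # stand-in for a per-citizen cryptographic key
    approves: bool    # whether this citizen signs the proposed order

def authorize_order(citizens, quorum=0.5):
    """Authorize an order only if more than `quorum` of keyholders sign it."""
    if not citizens:
        return False
    approvals = sum(c.approves for c in citizens)
    return approvals / len(citizens) > quorum

# Example: 51 of 101 keyholders approve, so the order just clears a bare majority.
electorate = [Citizen(f"key-{i}", approves=(i % 2 == 0)) for i in range(101)]
print(authorize_order(electorate))  # True

Of course, the bare-majority rule is exactly what makes the mob-rule failure mode possible; any real design would need supermajorities, rights-based interrupts, or both.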

Any suggestions on how to solve this 'chain of command' problem?

Comment author: geoffreymiller (EA Profile) 21 August 2017 12:36:06AM 7 points

In academic research, government and foundation grants are often awarded using criteria similar to ITN, except:

1) 'importance' is usually taken as short-term importance to the research field, and/or to one country's current human inhabitants (especially registered voters),

2) 'tractability' is interpreted as potential to yield several journal publications, rather than potential to solve real-world problems,

3) 'neglectedness' is interpreted as addressing a problem that's already been considered in only 5-20 previous journal papers, rather than one that's totally off the radar.

I would love to see academia in general adopt a more EA perspective on how to allocate scarce resources -- not just when addressing problems of human & animal welfare and X-risk, but in addressing any problem.
