Arran McCutcheon

Comments

That's right, he said 'It’s just like, if we’re building a road and an anthill just happens to be in the way, we don’t hate ants, we’re just building a road, and so, goodbye anthill.'

Adaptation: assuming that advanced AI would preserve humanity is like an ant colony assuming that real estate developers would preserve its nest. Those developers don't hate ants; they just want to use that patch of ground for something else (I may have seen this ant analogy somewhere else but can't remember where).

If the capabilities of nuclear technology and biotechnology advance faster than their respective safety protocols, the world faces an elevated risk from those technologies. Likewise, increases in AI capabilities must be accompanied by an increased focus on ensuring the safety of AI systems.

Human history can be summarised as a series of events in which we slowly and painfully learned from our mistakes (and in many cases we're still learning). We rarely get things right first time. The alignment problem may not afford us the opportunity to learn from our mistakes. If we develop misaligned AGI, we will go extinct, or at the very least cede control of our destiny and miss out on the type of future that most people want to see.

GiveWell for AI alignment

Artificial intelligence

When choosing where to donate to have the largest positive impact on AI alignment, the current best resource appears to be Larks' annual literature review and charity comparison on the EA/LW forums. Those posts are very high quality, but they're only published once a year and ultimately reflect the views of one person. A frequently updated donation recommendation resource, contributed to by various experts, would improve the volume and coordination of donations to AI alignment organisations and projects.

This is probably not the first time this idea has been suggested, but I haven't seen it explicitly mentioned within the current project ideas or commented suggestions. Refinement of idea #29.

Website for coordinating independent donors and applicants for funding

Empowering exceptional people, effective altruism

At EAG London 2021, many attendees indicated in their profiles that they were looking for donation opportunities. Donation autonomy is important to many prospective donors, and increasing the range of potential funding sources is important to those applying for funding. A curated website that allows applicants to post requests for funding, and potential donors to browse those requests and offer full or partial funding, seems like an effective solution.

Research scholarships / funding for self-study 

Empowering exceptional people

The value of a full-time researcher in some of the most impactful cause areas has been estimated at between several hundred thousand and several million dollars per year, and research progress is now seen by most as the largest bottleneck to improving the odds of good outcomes in these areas. Widespread provision of scholarships / funding for self-study could enable far more potential researchers to gain the experience, knowledge, skills and qualifications needed to make important contributions. Depending on the average amount granted to scholarship / funding applicants, even a hit rate of 5-10% (in terms of creating full-time researchers in high-impact cause areas) could make this a good use of funds.
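As a rough back-of-envelope illustration (the $25,000 average grant size here is a hypothetical assumption, not a figure from any actual programme):

$$\text{cost per researcher created} \approx \frac{\text{average grant}}{\text{hit rate}} = \frac{\$25{,}000}{0.05} = \$500{,}000$$

Even at the lower end of the researcher-value estimates above (several hundred thousand dollars per year), a programme with those parameters would roughly break even within its first year or two, and any higher hit rate or researcher value is upside.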

EA Funds and other orgs already do this to some extent; I'm envisaging a much wider programme.

Thanks Khorton for the feedback and additional thoughts.

I think the impact of cold emails is normally neutral; it would have to be a really poorly written or antagonising email to make the reader actively go and do the opposite of what the email suggests! I guess neutral also qualifies as 'not good'.

But it seems like people with better avenues of contact to DC have been considering contacting him anyway, through cold means or otherwise, so that’s great.

Exactly: he has written posts about those topics, and about 'effective action', predictions and so on. And there is this article from 2016 which claims 'he is an advocate of effective altruism', although it then says 'his argument is mothball the department (DFID)', which I'm fairly sure most EAs would disagree with.

But since he's also written about a huge number of other things, day-to-day distractions are apparently the rule rather than the exception in policy roles, and value drift is always possible, it would be good to have someone on his team, or with good communication channels to them, who can re-emphasise these issues (without publicly associating EA with Cummings or any other political figure or party).

Although the blog post is seeking applications for various roles, the email address to send applications to is ‘ideas for number 10 at gmail dot com’.

If someone took that address literally and sent an email outlining some relatively non-controversial EA-aligned ideas (e.g. collaboration with other governments on near-term AI-induced cyber security threats, marginal reduction of risks from AI arms races, pandemics and nuclear weapons, enhanced post-Brexit animal welfare laws, and maintenance of the UK's foreign aid commitment and/or increased effectiveness of foreign aid spending), would the expected value of that email be positive (a higher chance of the above policies being adopted), negative (a lower chance of adoption), or basically neutral (highly likely to be ignored or unread, or irrelevant even if the policies are adopted, given uncertainty over long-term impact)?

I’m inclined to have a go unless the consensus is that it would be negative in expectation.
