There are lots of important project ideas in EA that people could work on, and I’d like to encourage people to explore more of them. When I was looking for projects to work on, I had difficulty thinking of what needed doing apart from obvious projects like raising money for GiveWell-recommended charities. I even had a sense that all the organisations that needed to exist already existed, which is obviously not correct.
Fortunately many people have put together project ideas in important cause areas:
- Health
- Charity Entrepreneurship want to find founders to work on these, and will offer extensive guidance
- Poverty
- Poverty, health
- Successful applicants to these challenges receive seed funding
- Health, food and agriculture, human rights, education, water, gender equality, digital inclusion, climate change resilience, and electricity
- Poverty, health
- Lee Sharkey: Increasing Access to Pain Relief in Developing Countries
- Poverty, health
- Poverty
- Poverty
- Deep Science Ventures: Antibiotic resistance: what can you do?
- Health, catastrophic risk
- Health, catastrophic risk
- Climate change
- Bret Victor: What can a technologist do about climate change?
- Climate change
- AI safety
- MIRI: Agent Foundations for Aligning Superintelligence with Human Interests
- AI safety
- AI safety
- Research Priorities for Robust and Beneficial Artificial Intelligence
- AI safety
- AI safety
- World Resources Institute: Creating a sustainable food future
- Catastrophic risk
- Feeding everyone no matter what: managing food security after global catastrophe
- Catastrophic risk
- Mixed
- Mixed
- Mixed
This is far from exhaustive, but it’s a start.
However, it’s not clear whether lack of ideas is actually what’s stopping people from working on new projects. So I’d be interested to know:
- What’s blocking you from working on an altruistic project?
- Are there resources which the community could provide that would help?
- Do you have any more project ideas or lists of project ideas to add? I'll keep this list updated with what I find.
[This came out of this thread on why things don’t get done in the EA community. Thanks to John Maxwell for being a commitment device.]
Here are some more AI safety problem lists which don't appear in the main post (there is probably a lot of redundancy between them):

- This list of research problems was recently posted.
- The appendix on this page has a list of topics.
- This paper by Francesca Rossi (talk on this page; more talks from the same event here).
- https://ai-alignment.com/ is a good site; here is a topic overview post (there might be others).
- The Future of Life Institute created this topic map (I think this might be an older version?). They also have this research priorities document and this list of research topics they are interested in funding.
- Here is a literature review of recent AGI safety work (discussion).
- MIRI created this wiki-like thing, which can serve as an overview of problems they consider important.
- These posts have info about highly reliable agent design.
- This blog about ML security provides another perspective on the friendliness problem.
- The AI Safety Gridworlds paper offers 8 relatively concrete problems.
- Aligning Superintelligence with Human Interests: An Annotated Bibliography.
- Another research overview.
- The Learning-Theoretic AI Alignment Research Agenda came out recently.
- Here is a new AI governance research agenda.
I agree with Jessica Taylor that one should additionally aim to acquire one's own perspective about how to solve the alignment problem.