Setting Community Norms and Values: A response to the InIn Open Letter

I’m writing this in response to the recent post about Intentional Insights documenting the many ways in which Gleb and the organisation he runs have acted contrary to EA values. Please take this post as representative of the views of the Centre for Effective Altruism (CEA) on the... Read More

Taking Systemic Change Seriously

This is meant to be a rough response to the attitude that systemic change is too difficult or intractable, as well as a response, by demonstration, to the claim that EAs don't think about systemic change. Note: by systemic change I'm referring to many possible changes in the fundamental structure of economic, political and international... Read More

Concerns with Intentional Insights

A recent Facebook post by Jeff Kaufman raised concerns about the behavior of Intentional Insights (InIn), an EA-aligned organization headed by Gleb Tsipursky. In the discussion that followed, a number of further concerns were raised. This post summarizes the concerns found with InIn. It also notes some concerns which were... Read More

The Map of Shelters and Refuges from Global Risks (Plan B of X-risks Prevention)

This map is part of the larger map “Plan of action to prevent human extinction risks”. It zooms in on Plan B of x-risk prevention. The main idea of the map: there are many ways to create an x-risk shelter, but they have only marginal utility... Read More

CEA Update: September 2016

Here's our update for September 2016. It was a great month for growth at CEA. In September, we got 106 new Giving What We Can members and 1,748 new EA Newsletter signups; in both cases that’s nearly double the previous three-month average. If anyone has questions, comment below and... Read More

CEA Updates + August 2016 Update

To give the community more insight into the ongoing activities of CEA, we're going to publish brief monthly reports on the organisation as a whole. (These reports were already going out to donors and other supporters.) If you'd like to get these as e-mails... Read More

Ask MIRI Anything (AMA)

Hi, all! The Machine Intelligence Research Institute (MIRI) is answering questions here tomorrow, October 12 at 10am PDT. You can post questions below in the interim. MIRI is a Berkeley-based research nonprofit that does basic research on key technical questions related to smarter-than-human artificial intelligence systems. Our research is largely aimed at developing... Read More

How I missed my pledge and how I'm fixing it

I failed to meet my pledge this year due to poor budgeting. In this post, I explain what I think happened and how I plan to avoid similar problems in the future. I wrote it because I felt I owed the community something for missing my pledge, and I thought... Read More

MIRI Update and Fundraising Case

The Machine Intelligence Research Institute is running its annual fundraiser, and we're using the opportunity to explain why we think MIRI's work is useful from an EA perspective. To that end, we'll be answering questions on a new "Ask MIRI Anything" EA Forum thread on Wednesday, October 12th. MIRI is... Read More

.impact updates (2 of 3): LEAN and SHIC

From Georgie Mallett: .impact have successfully supported and developed multiple projects: the EA Forum, the EA Hub, the EA Newsletter, LEAN, and so on. Our collaborative values have allowed these projects to flourish. Having observed the impressive progress of Students for High-Impact Charity (SHIC) since its beginnings in... Read More
