Comment author: weeatquince  (EA Profile) 07 September 2017 11:33:52AM *  1 point [-]

Hi, In case helpful for considering the additional Facebook information, I have a bunch of data on EA social media presence to help me compare growth in London to other locations, including a lot of downloaded Sociograph data from 2016.

For example the EA Facebook group size over the last year:

03/06/2016 _ 10,263

13/01/2017 _ 12,070

10/06/2017 _ 12,953

Obviously you'd expect these things to grow even if the movement were shrinking, since people join and then rarely leave (though they might ignore the group).
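As a rough illustrative sketch (not part of the original comment; the dates and figures are those quoted above), the implied annualised growth rate between these snapshots can be computed like this:

```python
from datetime import date

# Group size on the three dates quoted above (DD/MM/YYYY in the comment)
snapshots = [
    (date(2016, 6, 3), 10263),
    (date(2017, 1, 13), 12070),
    (date(2017, 6, 10), 12953),
]

# Annualised growth rate between consecutive snapshots:
# (n1/n0)^(1/years) - 1, using the elapsed days between the dates
for (d0, n0), (d1, n1) in zip(snapshots, snapshots[1:]):
    years = (d1 - d0).days / 365.25
    rate = (n1 / n0) ** (1 / years) - 1
    print(f"{d0} -> {d1}: {rate:.1%} per year")
```

On these figures, annualised growth slowed from roughly 30% over the first interval to roughly 19% over the second, which is consistent with the caveat that raw group size can keep rising even as momentum falls.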

Comment author: Ajeya 17 July 2017 04:11:39AM 8 points [-]

Views my own, not my employer's.

Thanks for writing this up! I agree that it could be a big win if general EA ideas besides cause prioritization (or the idea of scope-limited cause prioritization) spread to the point of being as widely accepted as environmentalism. Some alternatives to this proposal though:

  1. It might be better to spread rationality and numeracy concepts like expected value, opportunity costs, comparative advantage, cognitive biases, etc., completely unconnected to altruism, than to try to explicitly spread narrow or cause-specific EA. People on average care much more about being productive, making money, having good relationships, finding meaning, etc. than about their preferred altruistic causes. And it really would be a big win if these efforts succeeded -- less ambiguously so than with narrow EA, I think (see Carl's comment below). The biggest objection to this is probably crowdedness/lack of obvious low-hanging fruit.
  2. Another alternative might be to focus on spreading the prerequisites/correlates of cause-neutral, intense EA: e.g. math education, high levels of caring/empathy, cosmopolitanism, motivation to think systematically about ethics, etc. I'm unsure how difficult this would be.

Both of these alternatives seem to have what is (to me) an advantage: they don't involve the brand and terminology of EA. I think it would be easier to push on the frontiers of cause-neutral/broad EA if the label were a good signal of a large set of pretty unusual beliefs and attitudes, so that people can build high-trust collaboration relatively quickly.

FWIW, I think I would be much more excited to evangelize broad low-level EA memes if there were some strong alternative channel to distinguish cause-neutral, super intense/obsessive EAs. Science has a very explicit distinction between science fans and scientists, and a very explicit funnel from one to the other (several years of formal education). EA doesn't have that yet, and may never. My instinct is that we should work on building a really really great "product", then build high and publicly-recognized walls around "practitioners" and "consumers" (a practical division of labor rather than a moral high ground thing), and then market the product hard to consumers.

Comment author: weeatquince  (EA Profile) 06 September 2017 08:10:40AM *  0 points [-]

I want to suggest a more general version of Ajeya's views which is:

If someone did want to put time and effort into creating the resources to promote something akin to "broad effective altruism" they could focus their effort in two ways:

  1. on research and advocacy that does not add to (and possibly detracts attention from) the "narrow effective altruism" movement.

  2. on research and advocacy that benefits the effective altruism movement.


  1. Eg. researching what the best arts charity in the UK is. Not useful, as it is very unlikely that anyone who takes a cause-neutral approach to charity would want to give to a UK arts charity. There is also a risk of misleading people, for example if you google effective altruism and a bunch of materials on UK arts comes up first.

  2. Eg. researching general principles of how to evaluate charities. Researching climate change solutions. Researching systemic change charities. These would all expand the scope of EA research and writings, might produce plausible candidates for the best charity/cause, and at the same time act to attract more people into the movement. Consider climate change: it is a problem that humanity has to solve at some point this century (unlike UK arts), and it is also a cause many non-EAs care about strongly.


So if at least some effort were put into any "broad effective altruism" expansion, I would strongly recommend starting by finding ways to expand the movement that are simultaneously useful areas for us to be considering in more detail.

(That said, FWIW, I am very wary of attempts to expand into a "broad effective altruism", for some of the reasons mentioned by others.)

Comment author: weeatquince  (EA Profile) 02 August 2017 10:42:39AM 1 point [-]

Can you say something on the risk of lots of EAs putting their funds in the same place with the same investment manager? Should the community not diversify?

In response to EAGx Relaunch
Comment author: LawrenceC 23 July 2017 06:27:17AM *  1 point [-]

Awesome! Glad to hear that EAGx is still happening. I think it makes a lot of sense to pivot away from having many EAGx conferences of variable quality to a few high quality ones.

While we continue to think that this is an important function, CEA believes that, at least at the moment, our efforts to improve the world are bottlenecked by our ability to help promising people become fully engaged, rather than attracting new interest.

I'm curious what prompted this change - did organizers encounter a lot of difficulty converting new conference attendees into more engaged EAs?

I'm also curious about what sort of support CEA will be providing to smaller, less-established local groups, given that fewer groups will receive support for EAGx.

In response to comment by LawrenceC on EAGx Relaunch
Comment author: weeatquince  (EA Profile) 23 July 2017 01:14:53PM *  1 point [-]

did organizers encounter a lot of difficulty converting new conference attendees into more engaged EAs?

I am curious about this too.

In particular, it feels to me that post-event follow-up at the EAGxs I have been to was weak. Does CEA think this is true of EAGxs to date? If so, is there a plan to improve on this? If there is not a plan, can I offer my help to CEA to develop one?

Comment author: Maxdalton 13 June 2017 11:14:21AM 1 point [-]

Yes, this project is fully funded, by donations from a large donor given for this purpose.

Comment author: weeatquince  (EA Profile) 19 June 2017 09:06:00AM 10 points [-]

On this topic...

I would be interested in a write-up of EA Ventures: why it did not seem to work (did it fail?) and what can be learned from it. I think there is significant value for the EA community in writing up projects like this, even if they went wrong.

Similarly, I would be interested in seeing a write-up of the Pareto Fellowship - another program that possibly (it is unclear) was not the success that was hoped for.

If it is the case (as I hope) that CEA has internal write-ups of these projects but not publishable ones, I can try to find a trustworthy London-based volunteer who could rewrite them for you. Or it might be a good project for a summer intern.


Understanding Charity Evaluation

In this short post I try to set out a model for understanding charity evaluation (and cause prioritisation) research: how and when such research is useful to people, and how it can be done better.   A MODEL OF DO-GOODERS There are many people who want to make the... Read More
Comment author: weeatquince  (EA Profile) 30 March 2017 09:19:03AM 1 point [-]

This is a good paper and well done to the authors.

I think section 3 is very weak. I am not flagging this as a flaw in the argument, just the area where I see the most room for improvement in the paper and/or the most need for follow-up research. The authors do say that more research is needed, which is good.

Some examples of what I mean by the argument being weak:

  • The paper says it is "reasonable to believe that AMF does very well on prioritarian, egalitarian, and sufficientarian criteria". "Reasonable to believe" is not a strong claim. No one has made any concerted effort to map the values of people who are not utilitarians, to come up with metrics that may represent what such people care about, and to evaluate charities on those metrics. This could be done but is not happening.
  • The paper says Iason "fail[s] to show that effective altruist recommendations actually do rely on utilitarianism", but the paper also fails to show that effective altruist recommendations actually do not rely on utilitarianism.
  • Etc.

Why I think more research would be useful here: when the strongest case you can make for EA to people with equality as a moral intuition begins "it is reasonable to believe . . .", it is hard to make EA useful to such people. For example, when I meet people new to EA who care a lot about equality, saying 'if you care about minimising suffering this "AMF" thing comes up top, and it is reasonable to assume that if you care about equality it could also be at the top, because it is effective and helps the poorest' carries a lot less weight than saying: 'hey, we funded a bunch of people who care foremost about equality, like you do, to map out their values and rank charities, and this came top.'

Note: cross-posting a summarised comment on this paper from a discussion on Facebook.

Comment author: ColinB 24 November 2016 06:19:49PM 3 points [-]

I’m fairly new to the EA community and have been surprised at the lack of attention to political systems in the EA portfolio. I believe effective political systems are critical to human thriving in both the immediate and longer term, and to managing developments such as AI and biotechnology for benefit rather than harm. However, developing and implementing political systems fit for the 21st century would seem to raise major challenges – to give a few obvious examples:

• Many of the issues EAs are focussing on (AI, biorisk, global warming) can only be addressed well through effective global governance. At present our institutions for global and supranational governance are struggling and nation states are looking inwards.

• There is good evidence (eg as cited in the book ‘The Spirit Level’) that happy, thriving citizens are correlated with high levels of trust in government and public institutions. Recently many nations have experienced a serious decline of public trust in governments; this needs attention.

• As a result of rising inequality and static living standards for the middle classes over the last 30 years, serious questions are being asked about the future of capitalist liberal democracies - there is growing discussion of postcapitalism, how it might be transitioned to, etc.

• There are also questions about whether democratic systems in which governments are subject to election every 4/5 years are able to effectively manage the impact of paradigm shifting development such as AI, biorisk and climate change

• Experts are suggesting AI is likely to replace vast numbers of jobs, raising big questions about who benefits from these technologies, about the future of work, and more philosophical questions about the meaning of human existence without work. Ideas such as universal basic income are being proposed as possible responses. There is huge potential for avoidable human suffering if this is managed 'badly'.

So I’m suggesting EAs should give more attention to political systems, as the scope is huge and the area is somewhat neglected (particularly in connecting academic work with practical politics) and probably underfunded. Tractability can probably be improved, particularly by seeking to raise public awareness and understanding of medium-term challenges.

I would be interested in getting involved in this work, and potentially donating

Comment author: weeatquince  (EA Profile) 04 January 2017 05:19:00PM 0 points [-]

I would be interested in getting involved in this work, and potentially donating

Hi ColinB, I have been thinking about next steps. Any chance you could get in touch (email

Comment author: RyanCarey 06 December 2016 03:08:49AM *  6 points [-]

Hi Owen,

Thanks for producing all of this content. I agree that it is a highly leveraged activity to make important ideas known to more effective altruists, and that making an online repository of such materials to link to ought to be a scalable solution to this problem. Thanks also for launching the site in an early stage of development, and without promotion, in order to allow criticism!

I’ll pitch in on three issues: i) the strategy of EA Concepts ii) its user interface, and iii) possible alternative approaches. I discuss the user interface here because it relates to my overall thinking.


The main challenges, to paraphrase, are 1) to provide a reference, 2) to convey basic information, and 3) to convey connections and relationships within EA. It's hard for a simple reference (1) to also compellingly convey knowledge (2, 3). Conveying knowledge is in large part a process of figuring out what to leave out. An obvious way to improve the pedagogical value would be to leave out the abstract decision-making topics whose research isn't shaping altruistic decisions much. Another major part of conveying knowledge (or getting people to read the content at all) is communicating a clear answer to the overarching question: “Why do I care?”.

First, the issue of leaving things out. Where research rarely shapes EA activities, such as in the ‘idealized decision-making’ section, such topics should probably be budded off into a glossary. Then, what is left would be a well-organized discussion of why effective altruists find certain activities compelling. One could even add info about how effective altruists are in fact organizing and spending their time. Then, one would have a shareable repository of strategic thinking. The question of why the reader might care would then answer itself.

The need for readable strategy content is clear. When one runs an EA chapter, one of the commonest questions from promising participants is what EAs are supposed to do, other than attending meetings and reading canonical texts. (These days, it surely is not just donating, either.) Useful answers would discuss what similar attendees are doing, and why, and what person-sized holes still exist in EA efforts. Such online material would convince people that EA is executing on some (combination of) overall plan(s). The EA plan(s) of course have arisen partly from historical circumstance.

This brings me to a final reason for including more general strategic thinking. If the EA community were started again today, we would - from the outset - include people from industry and government, rather than just from the academic sector. There would be researchers from a range of fields, such as tech policy, synthetic biology, machine learning, and productivity enhancement, rather than just from philosophy and decision theory. So we have an awesome opportunity to re-center the map of EA concepts, one that I think has so far been missed.

To summarize, I think the map would be better if it selected and emphasized action-related topics, re-centered the EA community on useful concrete domains of knowledge (not just abstract ones), and conveyed the connections between action and theory, in order to make the material readable and learnable.

User interface

Currently, the site lacks usability. There are lots and lots of issues, so I wonder why some existing technical solution, like Workflowy or Medium, was not used. Obviously it is a prototype, but that is all the more reason to start with existing software.

To begin with:

  • The site’s content should be visible on the front page with no clicks. It shouldn’t rely on anyone having the patience to click anything before closing the window. This means that the tree should be visible, and that the content from some or all pages should also be visible with minimal effort. One option, not necessarily a good one, would be to have the “+” button expand a node on mouse hover; regardless, some solution is required.
  • The numbers in the tree that indicate how many daughters each node has are uninteresting, and should be removed. (Once one figures out how to have nodes effortlessly expanded and collapsed this will be even more obvious.)
  • It should be possible to access content without going through to a separate page. Although it should be possible to link to pages, by default they must appear on the homepage, and the only obvious way I see to do this is to put them in the tree. This would make it effortless to return from content to the main tree, which is also necessary. (Currently you have to scroll to the bottom of the page to click the back button!)
  • Clicking the text of a node must have the same effect as clicking the plus sign next to it. Plus signs are too small to expect anyone to click. The only thing that it would make sense to happen, then, is for clicking a link to cause its daughters to expand. On this approach, if you wanted to further characterize the non-leaf nodes, you would need some solution other than pages.
  • When I hover over a page link in an article, nothing happens. It would be nice if a summary was provided, as in Arbital.
  • The text in the tree and in the articles is too light and somewhat too small.
  • One should be able to view pages alphabetically or to carry out a search.

So there is a ton of UI improvement to be done, which would seem to be a high priority if one is to continue piloting the site.


So how else might a project like EA Concepts be built? I have already said that (1) might best be achieved by moving some of the drier topics into a glossary. (2, 3) would be best achieved by whatever modality will reach a moderately large expected audience who will engage deeply with the content. Given the current page's poor user interface, this could instead be done with blog posts, a book, an improved version of the current site, or the explanations platform Arbital. With Arbital, a group of three ex-Google EAs has already made a crisp and usable site, and in order to attract more users they are pivoting toward discussion of a range of consequential topics, rather than just mathematics. On the face of it, their mission is your mission, and it would be worth looking hard for synergies.

Overall, I think there's something useful to be done in this space, but I'm fairly unconvinced that the site is currently on track to capture (much of) that value.

Comment author: weeatquince  (EA Profile) 06 December 2016 12:41:20PM 3 points [-]

I am not an expert in design but from a personal point of view I like the user-interface.

I found the concept tree with the numbers in brackets and links that are different from the "+" etc, to be intuitive and easy to navigate.


Cause: Better political systems and policy making.

Should EAs be fighting for better political systems and better policy making? For governance where the decision makers, at minimum, are incentivised to act in the best long-term interest of the population?   At first glance this could be super important. If you think policy makers should be putting in... Read More
