In response to EAGx Relaunch
Comment author: LawrenceC 23 July 2017 06:27:17AM * 1 point

Awesome! Glad to hear that EAGx is still happening. I think it makes a lot of sense to pivot away from having many EAGx conferences of variable quality to a few high-quality ones.

While we continue to think that this is an important function, CEA believes that, at least at the moment, our efforts to improve the world are bottlenecked by our ability to help promising people become fully engaged, rather than attracting new interest.

I'm curious what prompted this change - did organizers encounter a lot of difficulty converting new conference attendees into more engaged EAs?

I'm also curious about what sort of support CEA will be providing to smaller, less-established local groups, given that fewer groups will receive support for EAGx.

In response to comment by LawrenceC on EAGx Relaunch
Comment author: weeatquince (EA Profile) 23 July 2017 01:14:53PM * 0 points

did organizers encounter a lot of difficulty converting new conference attendees into more engaged EAs?

I am curious about this too.

In particular, it feels to me that post-event follow-up at the EAGxs I have been to was weak. Does CEA think this is true of EAGxs to date? If so, is there a plan to improve on this? If there is not a plan, can I offer my help to CEA to develop one?

Comment author: Maxdalton 13 June 2017 11:14:21AM 1 point

Yes, this project is fully funded by donations from a large donor, given for this purpose.

Comment author: weeatquince (EA Profile) 19 June 2017 09:06:00AM 10 points

On this topic...

I would be interested in a write-up of EA Ventures: why it did not seem to work (did it fail?) and what can be learned from it. I think there is significant learning value for the EA community in writing up projects like this, even when they go wrong.

Similarly, I would be interested in seeing a write-up of the Pareto Fellowship - another program that possibly (it is unclear) was not the success that was hoped for.

If it is the case (I hope it is) that CEA has an internal write-up of these projects but not a publishable one, I can try to find a trustworthy London-based volunteer who could rewrite it for publication. Or it might be a good project for a summer intern.

3

Understanding Charity Evaluation

In this short post I try to set out a model for understanding charity evaluation (and cause prioritisation) research: how and when such research is useful to people, and how it can be done better. A MODEL OF DO-GOODERS There are many people who want to make the... Read More
Comment author: weeatquince (EA Profile) 30 March 2017 09:19:03AM 1 point

This is a good paper and well done to the authors.

I think section 3 is very weak. I am not flagging this as a flaw in the argument, just as the area where I see the most room for improvement in the paper and/or the most need for follow-up research. The authors do say that more research is needed, which is good.

Some examples of what I mean when I say the argument is weak:

• The paper says it is "reasonable to believe that AMF does very well on prioritarian, egalitarian, and sufficientarian criteria". "Reasonable to believe" is not a strong claim. No one has made any concerted effort to map the values of people who are not utilitarians, to come up with metrics that represent what such people care about, and to evaluate charities on those metrics. This could be done, but it is not happening.

• The paper says Iason "fail[s] to show that effective altruist recommendations actually do rely on utilitarianism", but the paper also fails to show that effective altruist recommendations actually do not rely on utilitarianism.

• Etc.

Why I think more research is useful here: when the strongest case you can make for EA to people with equality as a moral intuition begins with "it is reasonable to believe . . .", it is very hard to make EA useful to such people. For example, when I meet people new to EA who care a lot about equality, making the case that 'if you care about minimising suffering this "AMF" thing comes up top, and it is reasonable to assume that if you care about equality it could also be at the top, because it is effective and helps the poorest' carries a lot less weight than perhaps saying: 'hey, we funded a bunch of people who, like you, care foremost about equality to map out their values and rank charities, and this came top.'

Note: this is a cross-post of a summarised comment on this paper from a discussion on Facebook: https://www.facebook.com/groups/798404410293244/permalink/1021820764618273/?comment_id=1022125664587783

Comment author: ColinB 24 November 2016 06:19:49PM 3 points

I’m fairly new to the EA community and have been surprised at the lack of attention to political systems in the EA portfolio. I believe effective political systems are critical to human thriving in both the immediate and longer term, and to managing developments such as AI and biotechnology for benefit rather than harm. However, developing and implementing political systems fit for the 21st century would seem to raise major challenges – to give a few obvious examples:

• Many of the issues EAs are focussing on (AI, biorisk, global warming) can only be addressed well through effective global governance. At present our institutions for global and supranational governance are struggling and nation states are looking inwards.

• There is good evidence (e.g. as cited in the book ‘The Spirit Level’) that happy, thriving citizens are correlated with high levels of trust in government and public institutions. Recently, many nations have experienced a serious decline in public trust in government; this needs attention.

• As a result of rising inequality and static living standards for the middle classes over the last 30 years, serious questions are being asked about the future of capitalist liberal democracies - there is growing discussion of post-capitalism and how it might be transitioned to, etc.

• There are also questions about whether democratic systems, in which governments are subject to election every four or five years, are able to effectively manage the impact of paradigm-shifting developments such as AI, biorisk and climate change.

• Experts are suggesting AI is likely to replace vast numbers of jobs, raising big questions about who benefits from these technologies and the future of work, and more philosophical questions about the meaning of human existence without work. Ideas such as universal basic income are being proposed as possible responses. There is huge potential for avoidable human suffering if this is managed 'badly'.

So I’m suggesting EAs should give more attention to political systems: the scope is huge, and the area is somewhat neglected (particularly in connecting academic work with practical politics) and probably underfunded. Tractability can probably be improved, particularly by seeking to raise public awareness and understanding of medium-term challenges.

I would be interested in getting involved in this work, and potentially donating.

Comment author: weeatquince (EA Profile) 04 January 2017 05:19:00PM 0 points

I would be interested in getting involved in this work, and potentially donating.

Hi Colin B, I have been thinking about next steps. Any chance you could get in touch? (Email policy@ealondon.com.)

Comment author: RyanCarey 06 December 2016 03:08:49AM * 6 points

Hi Owen,

Thanks for producing all of this content. I agree that it is a highly leveraged activity to make important ideas known to more effective altruists, and that making an online repository of such materials to link to ought to be a scalable solution to this problem. Thanks also for launching the site in an early stage of development, and without promotion, in order to allow criticism!

I’ll pitch in on three issues: i) the strategy of EA Concepts, ii) its user interface, and iii) possible alternative approaches. I discuss the user interface here because it relates to my overall thinking.

Strategy

The main challenges, to paraphrase, are 1) to provide a reference, 2) to convey basic information, and 3) to convey connections and relationships within EA. It's hard for a simple reference (1) to also compellingly convey knowledge (2, 3). Conveying knowledge is in large part a process of figuring out what to leave out. An obvious way to improve the pedagogical value would be to leave out the abstract decision-making topics whose research isn't shaping altruistic decisions much. Another major part of conveying knowledge (or getting people to read the content at all) is communicating some clear answer to the overarching question: “Why do I care?”.

First, the issue of leaving things out. Topics where research rarely shapes EA activities, such as those in the ‘idealized decision-making’ section, should probably be budded off into a glossary. What is left would then be a well-organized discussion of why effective altruists find certain activities compelling. One could even add information about how effective altruists are in fact organizing and spending their time. One would then have a shareable repository of strategic thinking, and the question of why the reader might care would answer itself.

The need for readable strategy content is clear. When one runs an EA chapter, one of the commonest questions from promising participants is what EAs are supposed to do, other than attending meetings and reading canonical texts. (These days, it surely is not just donating, either.) Useful answers would discuss what similar attendees are doing, and why, and what person-sized holes still exist in EA efforts. Such online material would convince people that EA is executing on some (combination of) overall plan(s). The EA plan(s) of course have arisen partly from historical circumstance.

This brings me to a final reason for including more general strategic thinking. If the EA community were started again today, we would - from the outset - include people from industry and government, rather than just from the academic sector. There would be researchers from a range of fields, such as tech policy, synthetic biology, machine learning, and productivity enhancement, rather than just from philosophy and decision theory. So we have an awesome opportunity to re-center the map of EA concepts, one that I think has so far been missed.

To summarize, I think the map would be better if it: selected and emphasized action-related topics; re-centered the EA community on useful concrete domains of knowledge, not just abstract ones; and conveyed the connections between action and theory, in order to make the material readable and learnable.

User interface

Currently, the site lacks usability. There are lots and lots of issues, which makes me wonder why some existing technical solution, like Workflowy or Medium, was not used. Obviously it is a prototype, but that is all the more reason to start with existing software.

To begin with:

  • The site’s content should be visible on the front page with no clicks. It shouldn’t rely on anyone having the patience to click anything before closing the window. This means that the tree should be visible, and that the content from some or all pages should also be visible with minimal effort. One option, not necessarily a good one, would be to have the “+” button expand a node on mouse hover, but regardless, some solution is required.
  • The numbers in the tree that indicate how many daughters each node has are uninteresting, and should be removed. (Once one figures out how to have nodes effortlessly expanded and collapsed, this will be even more obvious.)
  • It should be possible to access content without going through to a separate page. Although it should be possible to link to pages, by default they must appear on the homepage, and the only obvious way I see to do this is to put them in the tree. This would make it effortless to return from content to the main tree, which is also necessary. (Currently you have to scroll to the bottom of the page to click the back button!)
  • Clicking the text of a node must have the same effect as clicking the plus sign next to it. Plus signs are too small to expect anyone to click. The only thing it would make sense to have happen, then, is for clicking a node's text to cause its daughters to expand (see the sketch after this list). On this approach, if you wanted to further characterize the non-leaf nodes, you would need some solution other than pages.
  • When I hover over a page link in an article, nothing happens. It would be nice if a summary were provided, as in Arbital.
  • The text in the tree and in the articles is too light and somewhat too small.
  • One should be able to view pages alphabetically or to carry out a search.
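
To make the expand-and-collapse suggestions above concrete, here is a minimal sketch, in TypeScript, of a tree node whose whole label toggles its daughters on click. This is purely illustrative: the ConceptNode shape and renderNode function are hypothetical names of my own, not the site's actual code, and a real implementation would also need styling and keyboard accessibility.

    // Hypothetical shape for a node in the concept tree.
    interface ConceptNode {
      title: string;
      children: ConceptNode[];
    }

    // Render a node as a list item whose entire label toggles its daughters,
    // so the click target is not limited to a small "+" sign.
    function renderNode(node: ConceptNode): HTMLLIElement {
      const li = document.createElement("li");
      const label = document.createElement("span");
      label.textContent = (node.children.length > 0 ? "+ " : "") + node.title;

      const childList = document.createElement("ul");
      childList.hidden = true; // daughters start collapsed
      for (const child of node.children) {
        childList.appendChild(renderNode(child));
      }

      // Clicking anywhere on the label expands or collapses the daughters.
      label.addEventListener("click", () => {
        childList.hidden = !childList.hidden;
      });

      li.appendChild(label);
      li.appendChild(childList);
      return li;
    }

Mounting the rendered root (e.g. document.body.appendChild(renderNode(rootConcept)), where rootConcept is whatever the top of the concept tree is) would then put the whole tree on the front page, with no clicks needed just to see it.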

So there is a ton of UI improvement to be done, which would seem to be a high priority if one is to continue piloting the site.

Alternatives

So how else might a project like EA Concepts be built? I have already said that (1) might best be achieved by moving some of the drier topics into a glossary. (2, 3) would be best achieved by whatever modality will reach a moderately large expected audience who will engage deeply with the content. Given that the current page has a poor user interface, this could instead be done with blog posts, a book, an improved version of the current site, or the explanations platform Arbital. With Arbital, a group of three ex-Google EAs has already made a crisp and usable site, and in order to attract more users they are pivoting toward discussion of a range of consequential topics, rather than just mathematics. On the face of it, their mission is your mission, and it would be worth looking hard for synergies.

Overall, I think there's something useful to be done in this space, but I'm fairly unconvinced that the site is currently on track to capture (much of) that value.

Comment author: weeatquince (EA Profile) 06 December 2016 12:41:20PM 3 points

I am not an expert in design, but from a personal point of view I like the user interface.

I found the concept tree, with the numbers in brackets, the links that are distinct from the "+", etc., to be intuitive and easy to navigate.

11

Cause: Better political systems and policy making.

Should EAs be fighting for better political systems and better policy making? For governance where the decision makers, at minimum, are incentivised to act in the best long-term interest of the population? At first glance this could be super important. If you think policy makers should be putting in... Read More
3

Thinking about how we respond to criticisms of EA

There are thin and thick versions of EA[1]. The thinnest version is that EA is doing good effectively. The thickest versions look at the beliefs held by the majority of the community and say that these beliefs are what EA is. Often critics of EA take a thick version: they... Read More
Comment author: weeatquince (EA Profile) 02 August 2016 10:53:31AM * 1 point

Hi, I notice the contents section of the PDF does not quite match up to the actual subheadings in the document. In particular, the contents list a section on 'What benefits do people get out of being in a local group?' which is not in the doc. Just wanted to check that nothing has been missed out of the write-up.

Otherwise - AWESOME job guys - keep up the good work - and feel free to ask if you need funding to make this happen for future years.

Comment author: William_MacAskill 13 July 2016 08:07:54PM 16 points

As a ‘well-known’ EA, I would say that you can reasonably say that EA has one of two goals: a) to ‘do the most good’ (leaving what ‘goodness’ is undefined); b) to promote the wellbeing of all (accepting that EA is about altruism in that it’s always ultimately about the lives of sentient creatures, but not coming down on a specific view of what wellbeing consists in). I prefer the latter definition (for various reasons; I think it’s a more honest representation of how EAs behave and what they believe), though think that as the term is currently used either is reasonable. Although reducing suffering is an important component of EA under either framing, under neither is the goal simply to minimize suffering, and I don’t think that Peter Singer, Toby Ord or Holden Karnofsky (etc) would object to me saying that they don’t think of this as the only goal either.

Comment author: weeatquince (EA Profile) 25 July 2016 12:40:24PM 0 points

Hi Will. I would be very interested to hear the various reasons you have for preferring the latter definition. I prefer the first of the two definitions that you give, primarily because it makes fewer assumptions about what it means to do good, and I have a strong intuition that EA benefits from being open to all forms of doing good.
