Posted on behalf of the research team at the Centre for Effective Altruism

The effective altruism community has surfaced a number of important ideas, identified existing research which is relevant to decisions, and in some cases pursued its own valuable research. Although there is a vast amount yet to learn, we’ve come a long way from a position of ignorance about how to help the world. At the same time, as the body of knowledge grows, it poses a number of challenges:

  • There isn’t always an easy reference for some important concepts or ideas.
  • It’s not obvious to someone coming into the area what to start reading, or where to find information on a given topic.
  • It can be obscure how the different branches of research are supposed to fit into the same overarching intellectual project.

Over the last month or two, the research team at the Centre for Effective Altruism has been working on a resource which attempts to address these challenges. The current version is somewhere between a reading list, an encyclopedia, and a textbook.

  • It is like a reading list in that we started with some of the highest-quality external material we know of, and wanted to provide readers a guide for this material.
  • It is like an encyclopedia in that it has separate short articles for different ideas, so users can dip into a part of it and browse.
  • It is like a textbook in that we provide a conceptual map of the space, which may help people orient their idea of how different concepts or pieces of work relate to others. This also gives people a natural place to start reading.

We think it’s important for the success of a project that it be useful from the outset, so we’ve put work into making sure that we have reasonable content across the entire space. However, we regard this as very much a starting point. We’re interested in finding out whether and how people use it. We’re interested in continuing to develop and improve the content. And we’re interested in whether we are missing important features, and how such a tool should work and accept contributions going forwards.

We've tried to do a good job of presenting a balanced view of important topics. We are confident some errors (of commission and omission) remain. The fault for these is all ours, but if you spot them please let us know. For broad discussion of the project, please use the comment thread here on the forum. For specific suggestions or feedback, or if you want to make a private comment, please see the feedback page.

We hope you find it interesting!

Effective Altruism Concepts

Comments

Hi Owen,

Thanks for producing all of this content. I agree that it is a highly leveraged activity to make important ideas known to more effective altruists, and that making an online repository of such materials to link to ought to be a scalable solution to this problem. Thanks also for launching the site in an early stage of development, and without promotion, in order to allow criticism!

I’ll pitch in on three issues: i) the strategy of EA Concepts ii) its user interface, and iii) possible alternative approaches. I discuss the user interface here because it relates to my overall thinking.

### Strategy

The main challenges, to paraphrase, are 1) to provide a reference, 2) to convey basic information, and 3) to convey the connections and relationships within EA. It's hard for a simple reference (1) to also compellingly convey knowledge (2, 3). Conveying knowledge is in large part a process of figuring out what to leave out. An obvious way to improve the pedagogical value would be to leave out the abstract decision-making topics whose research isn't shaping altruistic decisions much. Another major part of conveying knowledge (or getting people to read the content at all) is communicating some clear answer to the overarching question: “Why do I care?”.

First, the issue of leaving things out. Where research rarely shapes EA activities, such as in the ‘idealized decision-making’ section, such topics should probably be budded off into a glossary. Then, what is left would be a well-organized discussion of why effective altruists find certain activities compelling. One could even add info about how effective altruists are in fact organizing and spending their time. Then, one would have a shareable repository of strategic thinking. The question of why the reader might care would then answer itself.

The need for readable strategy content is clear. When one runs an EA chapter, one of the commonest questions from promising participants is what EAs are supposed to do, other than attending meetings and reading canonical texts. (These days, it surely is not just donating, either.) Useful answers would discuss what similar attendees are doing, and why, and what person-sized holes still exist in EA efforts. Such online material would convince people that EA is executing on some (combination of) overall plan(s). The EA plan(s) of course have arisen partly from historical circumstance.

This brings me to a final reason for including more general strategic thinking. If the EA community were started again today, we would - from the outset - include people from industry and government, rather than just from the academic sector. There would be researchers from a range of fields, such as tech policy, synthetic biology, machine learning, and productivity enhancement, rather than just from philosophy and decision theory. So we have an awesome opportunity to re-center the map of EA concepts, one that I think has so far been missed.

To summarize, I think the map would be better if it: selected and emphasized action-related topics, re-centered the EA community on useful concrete domains of knowledge, not just abstract ones, and conveyed the connections between action and theory, in order to make the material readable and learnable.

### User interface

Currently, the site has serious usability problems. There are lots and lots of issues, so I wonder why some existing technical solution was not used, like Workflowy or Medium. Obviously, it is a prototype, but this gives all the more reason to start with existing software.

To begin with:

  • The site’s content should be visible on the front page with no clicks. It shouldn’t rely on anyone having the patience to click anything before closing the window. This means that the tree should be visible, and that the content from some or all pages should also be visible with minimal effort. One option, not necessarily a good one, would be to have the “+” button expand a node on mouse hover; but regardless, some solution is required.
  • The numbers in the tree that indicate how many daughters each node has are uninteresting, and should be removed. (Once one figures out how to have nodes effortlessly expanded and collapsed, this will be even more obvious.)
  • It should be possible to access content without going through to a separate page. Although it should be possible to link to pages, by default they must appear on the homepage, and the only obvious way I see to do this is to put them in the tree. This would make it effortless to return from content to the main tree, which is also necessary. (Currently you have to scroll to the bottom of the page to click the back button!)
  • Clicking the text of a node must have the same effect as clicking the plus sign next to it. Plus signs are too small to expect anyone to click. The only thing that it would make sense to happen, then, is for clicking a link to cause its daughters to expand. On this approach, if you wanted to further characterize the non-leaf nodes, you would need some solution other than pages.
  • When I hover over a page link in an article, nothing happens. It would be nice if a summary was provided, as in Arbital.
  • The text in the tree and in the articles is too light and somewhat too small.
  • One should be able to view pages alphabetically or to carry out a search.

So there is a ton of UI improvement to be done, that would seem to be a high priority, if one is to continue piloting the site.

### Alternatives

So how else might a project like EA Concepts be built? I have already said that (1) might best be achieved by moving some of the drier topics into a glossary. (2) and (3) would be best achieved by whatever modality will reach a moderately large expected audience who will engage deeply with the content. Given that the current page has a poor user interface, this could instead be done with blog posts, a book, an improved version of the current site, or the explanations platform Arbital. At Arbital, a group of three ex-Google EAs has already made a crisp and usable site, and in order to attract more users, they are pivoting toward discussing a range of consequential topics, rather than just mathematics. On the face of it, their mission is your mission, and it would be worth looking hard for synergies.

Overall, I think there's something useful to be done in this space, but I'm fairly unconvinced that the site is currently on track to capture (much of) that value.

UI is not really my area, so I'll leave that to others except to say:

  • Thanks for all the comments! I think that more work into the UI is going to be important, and critical voices are helpful for this.
  • In development a lot of this lived in workflowy, and it was noticeably worse to use than now. (But perhaps there was a different way of setting it up which would have worked better.)

On strategy, the general idea is not that everyone reads the whole thing, but that people can explore local areas they're interested in. This should avoid the need to cut anything off into a glossary (although the guidance for how to start engaging could improve; I agree that idealized ethical decision making content is irrelevant for most users so should probably be less prominent). This should let people engage with and become experts on aspects of EA-relevant research and have a rough idea of how it fits in with other areas, without needing to be expert on those other areas. One of the important reasons for laying it out in an approximately-logical tree was that we think this could help people to spot where there are gaps in the research that haven't been noticed.

I agree that idealized ethical decision making content is irrelevant for most users so should probably be less prominent

I feel like one of the key advantages of the tree structure is that it's already not too prominent. I can see the motivations for demoting it even further, but it does feel like it's in the right place with respect to the overall structure of the concepts, and it's hard to see how to de-emphasise it without losing that.

On strategy, the general idea is not that everyone reads the whole thing, but that people can explore local areas they're interested in. This should avoid the need to cut anything off into a glossary (although the guidance for how to start engaging could improve; I agree that idealized ethical decision making content is irrelevant for most users so should probably be less prominent).

Despite the treelike structure, omitting boring or esoteric topics still seems key for keeping the reader's trust and attention.

Wanting to lay things out logically also shouldn't prevent focusing more on areas that are more important.

I am not an expert in design but from a personal point of view I like the user-interface.

I found the concept tree with the numbers in brackets and links that are different from the "+" etc, to be intuitive and easy to navigate.

To provide another perspective on UI issues (in descending order of importance in my eyes):

  • I agree that the content pages need a better way to return to their location in the main tree, although I'm not exactly sure what that would look like. Having content appear within the tree itself has downsides, like wasting page space on tree structure illustration (roughly speaking I imagine navigating the tree and reading content as separate activities, and I don't want them to interfere with each other). It's not inconceivable that you could make the content available within the tree and on separate pages, so that users could choose how/where to read it.
  • I think having "+" expand on mouse hover is a very bad idea. I should be able to move my mouse around on the page without causing radical structural changes to what is displayed. (Moreover, mouse-hover stuff doesn't tend to work so well with mobile).
  • The numbers serve some value to my eyes, but I'm not sure how much. I'd also consider having the numbers reflect the total number of children under each node, rather than just the number of immediate children. That gives you an idea of how much depth a particular subsection is covered in, and how much of an undertaking it would be to read all of it, for example.
  • I agree that search is also important. You can do this the "dumb" way by just strapping a custom Google search to the page, or you can do something smarter that e.g. highlights which parts of the tree contain your search results (perhaps how many times, with totals at the parent nodes). This smarter search seems like a low priority, but once I came up with it I thought it was too cute not to share.
  • I disagree that clicking on + nodes is too hard, although I agree that it's intuitive to expect clicking on the text of the parents to have the same effect. A simple solution would be to have the first child of every parent be a summary of that parent, but I'm not convinced any solution is necessary.
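The descendant-count suggestion from the list above can be sketched in a few lines. This is a minimal illustration, not the site's actual code: the dict-based node structure and the example concept names are made up for the sketch.

```python
# Hypothetical node structure: each node is a dict with a "name" and a
# list of "children". The suggestion is to label each node with its
# total number of descendants, not just its immediate children.

def total_descendants(node):
    """Count all nodes below `node`, at any depth."""
    return sum(1 + total_descendants(child) for child in node["children"])

# Tiny example tree: a root concept with two subtopics, one of which
# has a further leaf below it.
tree = {
    "name": "Cause prioritisation",
    "children": [
        {"name": "Global health", "children": []},
        {"name": "Existential risk", "children": [
            {"name": "Biosecurity", "children": []},
        ]},
    ],
}

print(total_descendants(tree))  # 3: two subtopics plus one leaf below them
```

With total counts at each node, a reader can see at a glance how deep a subsection goes before deciding to expand it.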

It's not inconceivable that you could make the content available within the tree and on separate pages, so that users could choose how/where to read it.

That's what I already secretly had in mind.

I disagree that clicking on + nodes is too hard, although I agree that it's intuitive to expect clicking on the text of the parents to have the same effect.

I feel like even once clicking on the text and the '+' next to it have been made to do the same thing, there will still be some work to do to bring the user 'closer' to the content. Basically, good and popular websites tend to give the user a payoff within one-ish click. Currently, I doubt that the EA Concepts site would be used, except when people are linked to specific ideas. But I recognize that this is at least a somewhat subtle point.

Your bulleted list is not formatted correctly, which makes it really hard to read; can you fix it by putting two newlines before it?

What are the concrete use cases you guys have in mind? I can think of two:

  • Someone who's new to EA wants to get up to speed on EA thinking. They start at the top and either read systematically or click on whatever seems interesting.

  • People are having a discussion about EA online, and someone wants to explain an EA concept, so they link to the relevant page.

There are other hypothetical use cases for a tool such as this. If the pages were much more comprehensive, they could be useful to veteran EAs who wanted to get up to speed with regard to the latest thinking on a particular topic. But a tool like this would be different--it would probably have a wiki structure, and the content would be more speculative. It might be a sort of hybrid wiki and discussion forum, similar to how the original wiki was used. (I think this is plausibly a superior structure relative to a straight discussion forum like effective-altruism.com, because it increases the odds that discussions will be useful long after the discussion is finished.)

Both of the use cases with bullet points seem primarily targeted at people who are new to EA. EA forum users tend to be veteran EAs, so it might be worthwhile to usability test new EAs separately. For the first use case, you could simply present them with the top-level page and time how long it takes them to lose interest. You'd probably want to concentrate on people who are dissatisfied with existing resources like Will MacAskill's book. For the second use case, maybe you could have a conversation with a new EA, but try to use the tool to explain topics whenever possible? You could observe both how the person you were talking to responded and also how easy it was for you to find pages relevant to your conversation. It might also be useful to survey new EAs about their frustrations re: getting up to speed with EA, in case that gives you ideas for new features or approaches.

Your two suggestions are both close to things we had in mind (on the first one we were thinking less someone who's very new, as someone who's somewhat engaged already, and may be up-to-speed on some areas but want to learn more about others).

Another use case is helping people who are considering doing research or strategy work to orient themselves with respect to the whole space of current thinking. This can help people to understand how different parts of research translate into better decisions, which in turn can help them to pick more crucial questions to work on. The hierarchical structure can also make it more apparent if there's a topic which should be worked on but hasn't been: rather than just explore out from existing streetlights we can spot where there are big patches of darkness. This might be strengthened by your suggestion of a hybrid wiki/forum (we talked about something in this direction, and our feeling was "could be cool, revisit later").

Yeah, I see potential for this to be useful even if no-one uses it who isn't already familiar with the content: just structuring and categorising the information allows us to be clearer about which questions we can and can't answer, and be more aware of our conceptual gaps or weak points. I see that as a really useful and underrated clarifying tool, and I'm excited to see it develop further.

If the structuring and organizing of the content is a big part of its added value, that can be hard to preserve in a wiki or forum, which are often chaotic by nature. There's probably a trade-off between

  1. curation of content, particularly ensuring that content meets overarching goals and broad organizational principles, avoids duplication, self-contradiction, etc.
  2. quantity and depth of content, responsiveness to changes and developments, representation of a range of perspectives, and some sense of community-wide legitimacy

Broadly speaking, I'd guess that getting more people involved hurts (1) and helps (2). We already have a forum and a wiki, so maybe (2) is better served by existing resources, and your comparative advantage is (1). But I'm open-minded about the possibility that you can find a way to manage the tradeoff and maintain the structure despite an open contribution model.

I like the idea of this. I think it's great you've put this together and it would be much improved by changing the UX/UI stuff. I'm currently not sure where to go once I open the site, so my instinct was just to wander away.

I wonder if it would help to have a 'start here' bit, maybe with beginner, intermediate, and advanced sections. Because I'm unfamiliar with it, I don't know which ones would be most useful to me.

It might also be nice if the concept map were actually a map, so you could see how things relate to each other, rather than a split-out list where you need to click on things to find out what's there.

But good work!

Some of the articles seem to emphasize weird things. The first example I noticed was that the page on consuming animal products has three links to fairly specific points related to eating animals, but no links to articles that present an actual case for veg*anism, and the article itself does not contain a case. This post is the sort of thing I'm talking about.

Fixed. At least with respect to adding and referencing the Hurford post (more might also be needed). Please keep such suggestions forthcoming.

The article on expected value theory incorrectly cites the VNM theorem as a defense of maximizing expected value. The VNM theorem says that for a rational agent, there must exist some measure of value for which the rational agent maximizes its expectation, but the theorem does not say anything about the structure of that measure of value. In particular, it does not say that value must be linear with respect to anything, so it does not give a reason not to be risk averse. There are good reasons for altruists to have very low risk aversion, but the VNM theorem is not a sufficient such reason.
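The point above can be made concrete with a small numerical sketch. The numbers and the `expected` helper are made up for illustration; the idea is just that a VNM-rational agent maximizes the expectation of *some* utility function, which need not be linear in the outcome, so VNM alone does not rule out economic risk aversion.

```python
import math

def expected(values, probs):
    """Expected value of a gamble given outcomes and their probabilities."""
    return sum(v * p for v, p in zip(values, probs))

# Hypothetical gambles:
#   A: save 100 lives for sure.
#   B: 50% chance of saving 250 lives, 50% chance of saving none.
sure = ([100], [1.0])
risky = ([250, 0], [0.5, 0.5])

# With utility linear in lives saved (risk-neutral), B wins: 125 > 100.
print(expected(*sure), expected(*risky))

# With the concave utility u(x) = sqrt(x), the agent is still VNM-rational
# (it maximizes expected utility), yet prefers the sure outcome: 10 > ~7.9.
u = math.sqrt
print(expected([u(v) for v in sure[0]], sure[1]),
      expected([u(v) for v in risky[0]], risky[1]))
```

Both agents satisfy the VNM axioms; they simply have different utility functions, which is why the theorem by itself cannot settle the question of economic risk aversion.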

Edit: I see the article on risk aversion clarifies that "risk aversion" means in the psychological sense, but without that context, it looks like the expected value article is saying that many EAs think altruists should have low risk aversion in the economic sense, which is true, an important point, and not supported by the VNM theorem. Also, the economics version of risk aversion is also an important concept for EAs, so I don't think it's a good idea to establish that "risk aversion" only refers to the psychological notion by default, rather than clarifying it every time.

Edit 2: Since this stuff is kind of a pet peeve of mine, I'd actually be willing to attempt to rewrite those articles myself, and if you're interested, I would let you use and modify whatever I write however you want.

Hi Alex, thanks for the comment, great to pick up issues like this.

I wrote the article, and I agree with and was aware of your original point. Your edit is also correct in that we are using risk aversion in the psychological/pure sense, and so the VNM theorem does imply that this form of risk aversion is irrational. However, I think you're right that, given that people are more likely to have heard of the concept of economic risk aversion, the expected value article is likely to be misleading. I have edited it to emphasise the way that we're using risk aversion in these articles, and to clarify that VNM alone does not imply risk neutrality in an economic sense. I've also added a bit more discussion of economic risk aversion. Further feedback welcome!

Even though the last paragraph of the expected value maximization article now says that it's talking about the VNM notion of expected value, the rest of the article still seems to be talking about the naive notion of expected value that is linear with respect to things of value (in the examples given, years of fulfilled life). This makes the last paragraph seem pretty out of place in the article.

Nitpicks on the risk aversion article: "However, it seems like there are fewer reasons for altruists to be risk-neutral in the economic sense" is a confusing way of starting a paragraph about how it probably makes sense for altruists to be close to economically risk-neutral as well. And I'm not sure what "unless some version of pure risk-aversion is true" is supposed to mean.

Thanks, I've made some further changes, which I hope will clear things up. Re your first worry, I think that's a valid point, but it's also important to cover both concepts. I've tried to make the distinction clearer. If that doesn't address your worry, feel free to drop me a message or suggest changes via the feedback tab, and we can discuss further.

CEA is now considering where to take this project next: how much effort we should put into expanding it, and what new features/content we should focus on. We'd welcome feedback from anyone, regardless of whether you've used the site before, via this Google form.
