The September edition of the newsletter is here, with exciting news about grants, EA courses at universities, and a spotlight this month on EA Global, including some fantastic talks available online. This is an open thread, meaning you can comment about whatever you like - not just discussion of the newsletter.


September 2016 Edition

Articles and Community Posts
 
Be sure to check out this new course on EA at the University of St Andrews – the course outline is a treasure trove of material. LSE also recently introduced a course on effective altruism for this coming academic year.

In this podcast Sam Harris talks with Will MacAskill about EA, moral illusions, existential risks and more. The other podcast of this kind that Will did, a two-hour conversation with Tim Ferriss, proved very popular.

In this exciting announcement, Charity Science introduces a new initiative: Charity Science Health. Read all about how this charity was designed from the ground up to be evidence-based, cost-effective and flexible.


Ever wondered what it would be like to work at an EA-aligned organisation? Milan Griffes takes to the 80,000 Hours blog to discuss his experiences of working at GiveWell.

What makes animal welfare an important focus area for EA? This article, written by those in the effective animal advocacy field, explains why helping animals could be the best use of your time and money.

Have a look at this updated chapter of the 80,000 Hours career guide on how exactly to build career capital early on in your career.
The New "Spotlight" Section

In this section we’ll be shining a spotlight on different topics, concepts and considerations that are central to EA. Submit ideas for this section through our feedback form.

Having introduced this section in the last edition, this time we’re focusing on EA Global 2016. A total of 1,052 people attended the conference in Berkeley a month ago, making it the largest gathering of effective altruists to date.

25 video recordings of talks and panels are now online – have a look at the list!

For example, there’s a panel on sharing and aggregating knowledge, a talk on whether effective altruists should do policy and of course the opening keynote that sketches a grand overview of the history and possible future of effective altruism. More videos will be added there in the coming weeks.

Interested in the next EA conference? Here’s a list of upcoming EAGx events around the world. The next one will be EAGx Berlin on October 8, for which you can sign up now.
Updates from EA Organizations
 
Animal Charity Evaluators

ACE released several blog posts, including a “Charities We’d Like to See” post that several people in their audience had requested. They also published an Online Ads Intervention Report, concluding that, while online ads likely spare large numbers of animals from suffering on farms, in most cases marginal resources are probably better spent on activities like corporate outreach and undercover investigations.

Centre for the Study of Existential Risk

CSER has made its first hire in catastrophic environmental risk, Tatsuya Amano. CSER’s team is now up to eight interdisciplinary researchers, working on classification frameworks for global catastrophic risks, horizon-scanning, population ethics, biosecurity, disaster law and technology, and more.

GiveWell

GiveWell published write-ups on grants Good Ventures made to New Incentives, Results for Development, and IDinsight as part of GiveWell's work to support the creation of future top charities. GiveWell also discussed whether other organizations would have funded the Against Malaria Foundation (AMF)'s bed net distributions if AMF hadn't.

Local Effective Altruism Network

The Local Effective Altruism Network launched a new website and is making it easier than ever to start a local group in your area or get support for an existing one.

Open Philanthropy Project

The Open Philanthropy Project awarded a $5,555,550 grant to support the launch of the UC Berkeley Center for Human-Compatible AI as part of its work on potential risks from advanced artificial intelligence. The Open Philanthropy Project also described its rationale for a grant to the Foundation for the National Institutes of Health (FNIH); the grant will allow FNIH to form a working group to recommend a consensus path for field testing gene drives to fight malaria.

Sentience Politics

The deadline for the Sentience Politics Essay Prize has been extended. You can submit your ideas for effective strategies to reduce the suffering of all sentient beings until Sept. 30.
Jobs
You can stay up to date with job offers through these groups on Facebook and LinkedIn.
 
Timeless Classics

The perspectives on effective altruism we don't always hear – Jess Whittlestone on how to take on board feedback about the EA movement.
 
Go forth and do the most good!

Let us know how you liked this edition and how we can improve further. As usual, thank you very much for your feedback!

See you again on Oct. 6!

Georgie, Michał, Pascal and Sören
– The Effective Altruism Newsletter Team

The Effective Altruism Newsletter is a joint project between the Centre for Effective Altruism, the Effective Altruism Hub and .impact
 
A community project of the Centre for Effective Altruism, a registered charity in England and Wales (Charity Number 1149828) – Centre for Effective Altruism, Littlegate House, St Ebbes Street, Oxford OX1 1PT, United Kingdom


Comments

The main EA question for WHO is how to improve biosecurity, with a focus on tail risks from pandemics, including synthetic ones.

Hello all,

I'm new to the forum, and not sure if this is an abuse of the open thread, so please tell me if so. (P.S. I really enjoyed the Sam Harris podcast.)

Can anyone help point me in the direction of academic papers using economic models for cause prioritization or other EA related pursuits?

Quick background: 80000hours.org inspired me to study economics, not because I know much about what it's like to be an economist, but mainly because I like math, I like Freakonomics Radio, and I want to maximize my beneficial impact.

My situation: I'm beginning my senior research project (which I hope will lead me into graduate work), and my advisors don't seem to think that EA or cause prioritization research is economic in nature. Setting aside the likely possibility that I have simply failed to adequately explain EA to them, I think what they mean is that they don't see how it could make use of economic models.

Solicitation of advice: The reason I'm reaching out is that I don't actually know what economic EA research looks like. One idea I had for my project (based in pure ignorance, I should remind you) was to do a sensitivity analysis of a cause prioritization rubric to changes in moral frameworks. In other words, if you have different moral views (which is reasonable), how different will your cause priorities be? What do you think of this research question? Surely any organization doing cause prioritization research would have already done this analysis, right? Why can't I find any published literature?

Do cause prioritization researchers use models? My advisors seem to think that it's more likely to be economic pontification than modeling that dictates prioritization. Please defend my honor. :-P

Thank you so much for your time!

Hello ChemaCB,

I had a look around and couldn't find too many full peer-reviewed models. (Yet: it's a young endeavour.) This is probably partially a principled reaction to the hard limits of solely quantitative approaches. Most researchers in the area are explicitly calling their work "shallow investigation": i.e. exploratory and pre-theoretical. To date, the empirical FHI papers tend to be piecemeal estimates and early methodological innovation, rather than full models. OpenPhil tends towards prior solicitation from experts and does causes one at a time so far. GiveWell's evaluations are all QALY-based and piecemeal, though there's non-core formal stuff on there too.

There's hope: what modelling has been done is always done with economic methods. Michael Dickens has built a model which strikes me as an excellent start, but it's not likely to win over sceptical institutional markers, because it is ex nihilo and doesn't cite anyone. (C++ code here, including weights.) Peter Hurford lists many individual empirical models in footnote 4 here. Here's Gordon Irlam's less formal one, with a wider perspective. Here's a more formal one just for public policy.

To win them over, you could frame it as "social choice theory" rather than cause prioritisation. So for the goal of getting academic approval, Sen, Binmore and Broome are your touchstones here, rather than Cotton-Barratt, Beckstead, and Christiano.

Your particular project proposal seems like an empirical successor to MacAskill's PhD thesis; I'd suggest looking for leads directly in the bibliography there.

I hope you see the above as evidence for the importance of your proposed research, rather than a disincentive to doing it.

Also, welcome!

Seems like Sam Harris would be a pretty good speaker for future EA Globals.


That may be a bad idea. The political left doesn't like him because he has criticized Islam, and that could reflect poorly on EA if he becomes associated with it. Not to mention the fact that his metaethical arguments suck and he has a bad reputation in philosophy (this isn't as big a problem though).

Agreed it may be a bad idea, because there’s certainly a faction on the left which dislikes him. Though there could be reasons for not caring much about what they think.

He is a public intellectual, one of the very few who think there’s a legitimate risk of extinction from AI in the near future, is quite utilitarian, and thinks non-human animals have significant moral value. It could be argued that the good which comes from him promoting those ideas would outweigh the negative consequences.

I don’t feel strongly about him being a speaker, but thought it was an interesting point to discuss.

Hi all,

Another new member here; while I'm familiar with how Reddit works, I find this whole forum a bit confusing. Thus, in an effort to make it a tad less confusing for myself and to perhaps point out some things that might result in potentially interested participants abandoning the site and the movement, I have some questions to ask:

1) Is this the right place to ask about things related to the whole EA movement, and what they are doing? Not just questions regarding individual research topics or concerns, but rather to provoke discussion and meaningful debate on how to improve the way of thinking and the means available to us for making a difference? Is there a collection of such discussions somewhere?

2) What kinds of things should be made into an article, rather than a comment on the open thread?

3) Why was a fork of LessWrong chosen over a traditional message board with well-defined categories that can help one focus on discussing the things they find relevant? With new articles posted only every few days, this might not be a problem, but surely it makes finding coherent posts on one topic a nightmare? I haven't explored the search function in detail yet, but assume that it's not helpful unless you know exactly what you are looking for.

4) Are there any other places for discussing effective altruism? What kind of proportions of unique users do the different EA-related websites have?

Apart from these, I have a huge number of questions related to all sorts of aspects of the movement, and quite frankly find the current content available somewhat lacking beyond the very basics. Sure, the movement covers a large number of people with possibly wildly varying opinions on all sorts of things, but I think that within those potential disagreements there might be an untapped opportunity to learn more about both the situation of the world and ourselves.

While GiveWell in my opinion provides excellent reasoning behind the many things they do, and thorough, detailed and well-argued thoughts on how they plan to approach things, I was not able to find much similar content or discussion elsewhere. If someone can point me in the right direction, that would be very much appreciated!

Best regards to all of you, and thank you for the time.

1) The EA open thread seems pretty inactive and I'm not sure why that is. On LessWrong that's where most of the discussion happens. It would be the obvious place to post those sorts of things if it were more highly trafficked. I'd still submit articles here and ask questions in the comments section.

2) A simple heuristic might be to put straightforward questions here, or alternatively on Facebook/Reddit, as they seem more highly trafficked. If you have an idea you think others might share or want to discuss, put it into an article.

3) I don't know the answer to this. I've always been able to find what I was looking for, but I do see how it is more difficult.

4) Other places for EA discussion are the Facebook group, the EA subreddit and the Complice room (never visited this one). If you discover any others, let me know, as I find myself confused as to why this site isn't more active and wondering if there are other spaces out there I'm not seeing. It's also worth considering going to an EA meetup, as you'll get a lot more quick back-and-forth that way.

If you want to go beyond the basics into discussion of specific problems (animal welfare, AI control, entrepreneurship, rationality, etc.), there are groups around with more focused discussion. This is a non-comprehensive list, along with all the blogs out there. This site is a catch-all and also has meta discussions (cause prioritisation, outreach, etc.).

Welcome, and good luck in your search.

RainbowSpacedancer has it about right.

1) Sure. Feel free to link them on the Facebook page if you want more responses.

2) Anything that's more than a couple of well thought-through paragraphs. If it's longer than other highly upvoted posts, then people will clearly enjoy reading it!

3) Because we have upvotes, downvotes, easy ways to find top posts, automatic hiding of downvoted posts, a good system for nested comments, a useful sidebar for finding recent content, and technical support that is provided voluntarily by Trike Apps. If we had 100x more posts, then I would probably ask someone to implement subfora.

4) The other site is https://www.facebook.com/groups/effective.altruists/, which has higher volume, and lower average quality of analysis.

The Global Challenges Foundation has recently launched an essay competition with a $5 million prize (!) awarded to the best ideas for new forms of global decision-making. The foundation has previously funded work on global catastrophic risks, so I guess they would not be opposed to an EA mindset.

More information is available at their homepage.
