Comment author: Benito 05 April 2018 12:52:10AM 4 points

Note: EA is totally a trust network - I don't think the funds are trying to be anything like GiveWell, whom you're supposed to trust based on the publicly verifiable rigour of their research. EA Funds is much further toward the "have you personally seen CEA make good decisions in this area" or "do you specifically trust one of the re-granters" end of the spectrum. Which is fine; trust is how tightly knit teams and communities often get built. But if you gave to it thinking "this will look like giving to Oxfam, and will have the same accountability structure", then you'll rightly be surprised to find that it works significantly via personal connections.

Just as you'd only fund a startup if you knew the founders and how they worked, you should probably only give to EA Funds for similar reasons - and if a startup wrote its business plan so that anyone would have reason to fund it, the business plan probably wouldn't be very good. I think EA should continue to be a trust-based network, so on the margin I'd rather people gave less to EA Funds than that EA Funds made grants that are more defensible.

Comment author: Michelle_Hutchinson 08 April 2018 10:14:15PM 3 points

This strikes me as a false dichotomy between 'trust the grant-making because lots of information about its decisions is made public' and 'trust the grant-making because you personally know the re-granter (or know someone who knows someone, etc.)'. I would expect this to work instead in the way a lot of for-profit funds presumably work: you trust your money to a particular fund manager because they have a strong history of their funds making money. You don't need to know Elie personally (or know how he works and makes decisions) to know his track record of setting up GiveWell and thereby finding excellent giving opportunities.

Comment author: Michelle_Hutchinson 05 February 2018 02:48:52PM 2 points

[Note: It is difficult to compare the cost-effectiveness of developed-country and developing-country anti-smoking mass media campaigns (MMCs) because the systematic review cited above did not uncover any studies of an actual developing-country anti-smoking MMC. The one developing-country study it found was of a hypothetical anti-smoking MMC. That study, Higashi et al. 2011, estimated that an anti-smoking MMC in Vietnam would result in one discounted life-year gained (DLYG; discount rate = 3%) for every 78,300 VND (about 4 USD). Additionally, the Giving What We Can report that shows tobacco control in developing countries being highly cost-effective is based on the cost-effectiveness of tobacco taxes, not of anti-smoking MMCs, and the estimated cost-effectiveness of tobacco taxes is based on the cost to the government, not the cost to the organization lobbying for the tax.]
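To make the units concrete, here is a minimal sketch of the two calculations behind a figure like "one DLYG per 78,300 VND (about 4 USD)": discounting a stream of life-years at 3%, and converting the cost per DLYG into USD. The ten-year life-year stream is an illustrative assumption, not a figure from the study, and the exchange rate is simply the one implied by the quoted numbers.

```python
# A minimal sketch, with assumed inputs, of the two calculations behind a
# figure like "one DLYG per 78,300 VND (about 4 USD)": discounting a stream
# of life-years at 3%, and converting cost per DLYG into USD.

def discounted_life_years(years_gained: int, rate: float = 0.03) -> float:
    """Present value of one life-year per year for `years_gained` years."""
    return sum(1 / (1 + rate) ** t for t in range(1, years_gained + 1))

VND_PER_DLYG = 78_300        # cost per discounted life-year, Higashi et al. 2011
VND_PER_USD = 78_300 / 4     # exchange rate implied by the quoted ~4 USD figure

# E.g. a hypothetical ten-year gain in life expectancy:
print(f"10 undiscounted life-years = {discounted_life_years(10):.2f} DLYG at 3%")
print(f"Cost per DLYG: {VND_PER_DLYG / VND_PER_USD:.2f} USD")
```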

This report briefly discusses MMCs as well as tax increases. It suggests that MMCs in developing countries are likely to be much more effective than those in the UK, owing to far lower awareness of the harms of smoking and far higher smoking prevalence in developing countries. I wonder if we could learn more about the potential efficacy of such campaigns by comparing them to campaigns to reduce road traffic injuries? My impression is that in the latter case a bit more study has been done specifically in developing-world contexts.

Comment author: Michelle_Hutchinson 16 January 2018 05:24:31AM 4 points

Thank you, this is a really useful write-up of what sounds like a great project.

Comment author: SiebeRozendal 18 December 2017 03:46:08PM 2 points

Very exciting to read about this, especially the research agenda! I will definitely consult it when deciding on a topic for my master's thesis in philosophy.

I have a few questions about the strategy (not sure if this is the best medium for these questions, but I didn't know where else to ask):

  • a) Are you planning to be the central hub of EA-relevant academics?
  • b) What do you think about the Santa Fe Institute's model of a core group of resident academics, and a larger group of affiliated researchers who regularly visit?
  • c) Are you planning on incorporating more fields in the future, such as behavioural economics or complexity theory, and how do you decide where to expand?
  • d) Where can I find more information about GPI's strategy, and are you planning on publishing it to the EA Forum?

Btw, on p. 26 of the agenda there's an unfinished sentence: "How important is the distinction between ‘sequence’ thinking and ‘cluster’ thinking? What’s "

Comment author: Michelle_Hutchinson 21 December 2017 05:02:42PM 2 points

Glad to hear you're finding it useful!

a) Yes, that's the plan.

b) We haven't decided on our model yet. Right now we have a number of full-time academics, a number of research associates who attend seminars and collaborate with the full-time crew, and research visitors coming periodically. Having researchers visit from other institutions seems useful for bringing in new ideas, collaborating more closely than one could online, and having the visitors take elements of our work back to their home institutions. I would guess that in future it would make sense to have at least some researchers who visit periodically, as well as people coming just as a one-off, but I couldn't say for sure at the moment.

c) Yes, we are. Behavioural economics is already something we've thought a little about. Our reason for not expanding into more subjects at the moment is the difficulty of building thoroughly interdisciplinary groups within academia. As a small example, GPI is based in the Philosophy Department at Oxford, which isn't ideal for hiring economists, who would prefer to be based in the Economics Department. Given that, and the close ties in the past between EA and philosophy, we see a genuine risk of GPI/EA being thought of as 'philosophy plus' rather than as truly multi- and interdisciplinary. For that reason, we're starting with just one other discipline and trying to build strong roots there. At the same time, we're trying to remain cognisant of other disciplines likely to be relevant and of the work going on there. (As an example from psychology, Lucius Caviola has been publishing interesting work both on speciesism and on developing a better scale for measuring moral traits EAs might be interested in.)

d) The best source of information is our website. I do plan on posting occasional updates to the EA Forum, but as our output will largely be academic papers, we're unlikely to publish them here.

Thanks for the heads up!

Comment author: sdspikes 16 December 2017 05:04:09AM 2 points

It looks like these all require relocating to Oxford, is that accurate?

Comment author: Michelle_Hutchinson 16 December 2017 09:12:41PM 1 point

Yes, that's right. For the researcher roles, you would at least need to be in Oxford during term time. For the operations role, it would be important to be there for essentially the whole period.

Comment author: MichaelPlant 14 December 2017 11:49:40PM 2 points

This is all very exciting. Just FYI, the application links for the research fellow and senior research fellow roles that you mention in your last paragraph are broken.

Comment author: Michelle_Hutchinson 15 December 2017 12:15:53PM 0 points

Thanks for the heads up! I think this is a browser issue with the university website. The links actually work for me on Chrome and Edge, but others have found they don't work on Chrome yet do work on Safari. Would you mind trying a different browser and seeing whether that works?

Comment author: WillPearson 12 September 2017 08:24:06AM 1 point

My personal idea of it is a broad church. The systems that govern our lives, government and the economy, distribute resources in a certain way, and those systems can have a huge impact on the world. Changing them is neglected because it involves fighting an uphill struggle against vested interests.

Someone in a monarchy campaigning for democracy would be an example of someone aiming for systemic change. So would someone with an idea for strengthening the UN so that it could better coordinate regulation and taxes between countries (so that companies don't simply move to areas with low taxes, weak worker protections, and little environmental regulation).

Comment author: Michelle_Hutchinson 12 September 2017 02:44:28PM 2 points

Will, you might be interested in these conversation notes between GiveWell and the Tax Justice Network: http://files.givewell.org/files/conversations/Alex_Cobham_07-14-17_(public).pdf (you may have to copy and paste the link).

Comment author: Tee 05 September 2017 12:40:16PM 1 point

Hey Michelle, I authored that particular part, and I think what you've said is a fair point. As you said, the point was to identify the Bay as an outlier in terms of the amount of support for AI, not to declare AI an outlier as a cause area.

You wrote: "The article in general seems to put quite a bit of emphasis on the fact that poverty came out as the most favoured cause."

I don't know that this is necessarily true beyond reporting what is actually there. When poverty is favored by more than double the number of people who favor the next most popular cause area (graph #1), favored by more people than a handful of other causes combined, and disliked the least, those facts need to be put into perspective.

If anything, I'd say we put a fair amount of emphasis on how EAs are coming around on AI, and how resistance to putting resources toward AI has dropped significantly.

We could speculate about how future-oriented certain cause areas may be, and how to aggregate or disaggregate them in future surveys. We've made a note to consider that for 2018.

Comment author: Michelle_Hutchinson 05 September 2017 03:24:53PM 3 points

Thanks Tee.

You wrote: "I don't know that this is necessarily true beyond reporting what is actually there. When poverty is favored by more than double the number of people who favor the next most popular cause area (graph #1), favored by more people than a handful of other causes combined, and disliked the least, those facts need to be put into perspective."

I agree - my comment was in the context of the incorrect graph; given the corrected one, the emphasis on poverty seems warranted.

Comment author: Peter_Hurford 05 September 2017 02:35:42AM 0 points

You wrote: "I'm having trouble interpreting the first graph. It looks like 600 people put poverty as the top cause, which you state is 41% of respondents, and that 500 people put cause prioritisation, which you state is 19% of respondents."

I can understand why you're having trouble interpreting the first graph, because it is wrong. It looks like, in my haste to correct the truncated-margin problem, I accidentally put up the graph for "near top priority" instead of "top priority". I will get this fixed as soon as possible. Sorry. :(

We will have to re-explore the aggregation and disaggregation with an updated graph. With 237 people saying AI is the top priority and 150 people saying non-AI far future is the top priority, versus 601 saying global poverty is the top priority, global poverty still wins even if the two far-future categories are aggregated (237 + 150 = 387 < 601). Sorry again for the confusion.

-

You wrote: "The term 'outlier' seems false according to the stats you cite."

The term "outlier" here is meant in the sense of a statistically significant outlier, as in it is statistically significantly more in favor of AI than all other areas. 62% of people in the Bay think AI is the top priority or near the top priorities compared to 44% of people elsewhere (p < 0.00001), so it is a difference of a majority versus non-majority as well. I think this framing makes more sense when the above graph issue is corrected -- sorry.

Looking at it another way, the Bay contains 3.7% of all EAs in this survey, but 9.6% of all EAs in the survey who think AI is the top priority.
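For anyone who wants to sanity-check a significance claim like this, below is a minimal sketch of the standard two-proportion z-test. The thread doesn't give the subgroup sample sizes, so the counts in the example are placeholders chosen only to match the quoted percentages (62% vs 44%); the p-value you get depends on the real ns.

```python
# A minimal sketch of the two-proportion z-test presumably behind the reported
# p < 0.00001. The counts below are placeholders matching the quoted 62% vs
# 44%, not the survey's actual subgroup sizes.
from math import sqrt

from scipy.stats import norm

def two_proportion_ztest(k1: int, n1: int, k2: int, n2: int) -> tuple[float, float]:
    """Return (z, two-sided p) for H0: the two underlying proportions are equal."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)                        # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error of p1 - p2
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))                         # two-sided p-value

# Hypothetical counts: 62 of 100 Bay respondents vs 616 of 1,400 others.
z, p = two_proportion_ztest(62, 100, 616, 1400)
print(f"z = {z:.2f}, two-sided p = {p:.5f}")
```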

Comment author: Michelle_Hutchinson 05 September 2017 09:00:44AM 3 points

Thanks for clarifying.

The claim you're defending is that the Bay is an outlier in terms of the percentage of people who think AI is the top priority. But the paragraph I quoted says 'favoring a cause area outlier', so there 'outlier' picks out AI itself among the causes people think are important. Saying that the Bay favours AI, and that AI is an outlier among the causes people favour, is a stronger claim than saying that the Bay is an outlier in how much it favours AI. The data seem to support the latter but not the former.

Comment author: Michelle_Hutchinson 04 September 2017 02:40:19PM 7 points

I'm having trouble interpreting the first graph. It looks like 600 people put poverty as the top cause, which you state is 41% of respondents, and that 500 people put cause prioritisation, which you state is 19% of respondents.

The article in general seems to put quite a bit of emphasis on the fact that poverty came out as the most favoured cause. Yet while 600 people said it was the top cause, according to the graph around 800 people said the long-run future was the top cause (AI plus non-AI far future). It seems plausible to disaggregate AI and non-AI long-run future, but at least as plausible to aggregate them (given the aggregation of health, education, and economic interventions under poverty), and to conclude that most EAs think the top cause is improving the long-run future. Or perhaps you allowed people to pick multiple answers, and found that most people who picked poverty picked only that, while most who picked AI or non-AI far future picked both?

The following statement strikes me as rather loaded: "For years, the San Francisco Bay area has been known anecdotally as a hotbed of support for artificial intelligence as a cause area. Interesting to note would be the concentration of EA-aligned organizations in the area, and the potential ramifications of these organizations being located in a locale heavily favoring a cause area outlier." The term 'outlier' seems false according to the stats you cite (over 40% of respondents outside the Bay think AI is a top or near-top cause), and particularly misleading given the differences made here by choices of aggregation (i.e. you could frame it as 'most EAs in general think long-run-future causes are most important; this effect is a bit stronger in the Bay').

Writing on my own behalf, not my employer's.
