
CFAR's end-of-year Impact Report and Fundraiser

End-of-year updates for those interested: CFAR made a larger effort to track our programs' impact on existential risk over the last year; you can find a partial account of our findings on our blog.  (Also, while some of the details of our tracking aren't currently published due to privacy concerns, let me know if... Read More
Comment author: MichaelDickens  (EA Profile) 07 December 2016 05:10:38AM 1 point [-]

I don't believe organizations should post fundraising documents to the EA Forum. As a quick heuristic, if all EA orgs did this, the forum would be flooded with posts like this one and it would pretty much kill the value of the forum.

It's already the case that a significant fraction of recent content is CEA or CEA-associated organizations talking about their own activities, which I don't particularly want to see on the EA Forum. I'm sure some other people will disagree but I wanted to contribute my opinion so you're aware that some people dislike these sorts of posts.

Comment author: AnnaSalamon 07 December 2016 05:51:41AM *  17 points [-]

I feel that 1-2 such posts per organization per year is appropriate and useful, especially since organizations often have year-end reviews or other orienting documents timed near their annual fundraiser, and reading these allows me to get oriented about what the organizations are up to.

Comment author: lukeprog 01 December 2016 06:49:50AM 17 points [-]

I donated to MIRI this year, too, and it is striking — given that you and I come at the question from different backgrounds (i.e. with me as a past MIRI executive) — how similar my reasons (this year) are to yours, including my reaction to Open Phil's write-up, my reservations, my perception of how field dynamics have changed, etc.

(Note: I work at Open Phil but wasn't involved in thinking through or deciding Open Phil's grant to MIRI. My opinions in this comment are, obviously, my own.)

Comment author: AnnaSalamon 03 December 2016 05:26:39AM *  8 points [-]

Seeing this comment from you makes me feel good about Open Phil's internal culture; it seems like evidence that folks who work there feel free to think independently and to voice their thoughts even when they disagree. I hope we manage to build a culture that makes this sort of thing possible at CFAR and in general.

Comment author: Owen_Cotton-Barratt 01 December 2016 07:22:14PM 4 points [-]

I agree with all this. I read your original "attempts to be clear" as Motion A (which I was taking a stance in favour of), and your original "attempts to be explainable" as Motion B (which I wasn't sure about).

Comment author: AnnaSalamon 01 December 2016 07:58:25PM *  5 points [-]

Gotcha. Your phrasing distinction makes sense; I'll adopt it. I agree now that I shouldn't have included "clarity" in my sentence about "attempts to be clear/explainable/respectable".

The thing that confused me is that it is hard to incentivize clarity but not explainability; the easiest observable is just "does the person's research make sense to me?", which one can then choose how to interpret, and how to incentivize.

It's easy enough to invest in clarity / Motion A without investing in explainability / Motion B, though. My random personal guess is that MIRI invests about half of their total research effort into clarity (from what I see people doing around the office), but I'm not sure (and I could ask the researchers easily enough). Do you have a suspicion about whether MIRI over- or under-invests in Motion A?

Comment author: [deleted] 30 November 2016 04:19:04PM *  5 points [-]

The amount of money employees at EA organisations can give is fairly small

Agreed. Is there any evidence employee donation is a significant problem, or that it will become one in the near future? If not, and given there is no obvious solution, I suggest focusing on higher priorities (e.g. VIP outreach).

Thanks to Max Dalton, Sam Deere, Will MacAskill, Michael Page, Stefan Schubert, Carl Shulman, Pablo Stafforini, Rob Wiblin, and Julia Wise for comments and contributions to the conversation.

I think too many (brain power x hours) have been expended here.

Sorry to be a downer, just trying to help optimize.

Comment author: AnnaSalamon 01 December 2016 07:51:54PM *  0 points [-]

I feel as though building a good culture is really quite important, and that this sort of specific proposal & discussion is how, bit by bit, one does that. It seems to me that the default for large groups of would-be collaborators is to waste almost all of the available resources, basically due to an "insufficiently ethical/principled social fabric".

(My thoughts here are perhaps redundant with Owen's reply to your comment, but it seems important enough that I wanted to add a separate voice and take.)

Re: how much this matters (or how much is wasted without this), I like the examples in Eliezer's article on Lost Purposes or in Scott Alexander's review of House of God.

The larger EA gets, the easier it is for standard failure modes (whereby effort becomes untethered from real progress), or some homegrown analog of them, to eat almost all our impact as well. And so the more necessary it is that we really seriously try to figure out what principles can keep our collective epistemology truth-tracking.

Comment author: Owen_Cotton-Barratt 01 December 2016 12:43:51PM 9 points [-]

I generally agree with both of these comments. I think they're valuable points which express more clearly than I did some of what I was getting at with wanting a variety of approaches and thinking I should have some epistemic humility.

One point where I think I disagree:

attempts to be clear/explainable/respectable are less likely to pull in good directions.

I don't want to defend pulls towards being respectable, and I'm not sure about pulls towards being explainable, but I think that attempts to be clear are extremely valuable and likely to improve work. I think that clarity is a useful thing to achieve, as it helps others to recognise the value in what you're doing and build on the ideas where appropriate (I imagine that you agree with this part).

I also think that putting a decent fraction of total effort into aiming for clarity is likely to improve research directions. This is based on research experience -- I think that putting work into trying to explain things very clearly is hard and often a bit aversive (because it can take you from an internal sense of "I understand all of this" to a realisation that actually you don't). But I also think it's useful for making progress purely internally, and that getting a crisper idea of the foundations can allow for better work building on this (or a realisation that this set of foundations isn't quite going to work).

Comment author: AnnaSalamon 01 December 2016 07:13:20PM *  5 points [-]

Not sure how much this is a response to you, but:

In considering whether incentives toward clarity (e.g., via being able to explain one’s work to potential funders) are likely to pull in good or bad directions, I think it’s important to distinguish between two different motions that might be used as a researcher (or research institution) responds to those incentives.

  • Motion A: Taking the research they were already doing, and putting a decent fraction of effort into figuring out how to explain it, figuring out how to get it onto firm foundations, etc.

  • Motion B: Choosing which research to do by thinking about which things will be easy to explain clearly afterward.

It seems to me that “attempts to be clear” in the sense of Motion A are indeed likely to be helpful, and are worth putting a significant fraction of one’s effort into. I agree also that they can be aversive and that this aversiveness (all else equal) may tend to cause underinvestment in them.

Motion B, however, strikes me as more of a mixed bag. There is merit in choosing which research to do by thinking about what will be explainable to other researchers, such that other researchers can build on it. But there is also merit to sometimes attempting research on the things that feel most valuable/tractable/central to a given researcher, without too much shame if it then takes years to get their research direction to be “clear”.

As a loose analogy, one might ask whether “incentives to not fail” have a good or bad effect on achievement. And it seems like a mixed bag. The good part (analogous to Motion A) is that, once one has chosen to devote hours/etc. to a project, it is good to try to get that project to succeed. The more mixed part (analogous to Motion B) is that “incentives to not fail” sometimes cause people to refrain from attempting ambitious projects at all. (Of course, it sometimes is worth not trying a particular project because its success-odds are too low — Motion B is not always wrong.)

Comment author: AnnaSalamon 01 December 2016 08:38:56AM *  15 points [-]

I suspect it’s worth forming an explicit model of how much work “should” be understandable by what kinds of parties at what stage in scientific research.

To summarize my own take:

It seems to me that research moves down a pathway from (1) "totally inarticulate glimmer in the mind of a single researcher" to (2) "half-verbal intuition one can share with a few officemates, or others with very similar prejudices" to (3) "thingy that many in a field bother to read, and most find somewhat interesting, but that there's still no agreement about the value of" to (4) "clear, explicitly statable work whose value is universally recognized within its field". (At each stage, a good chunk of work falls away as a mirage.)

In "The Structure of Scientific Revolutions", Thomas Kuhn argues that fields begin in a "preparadigm" state in which nobody's work gets past (3). (He gives a bunch of historical examples that seem to meet this pattern.)

Kuhn’s claim seems right to me, and AI Safety work seems to me to be in a "preparadigm" state in that there is no work past stage (3) now. (Paul's work is perhaps closest, but there are still important unknowns / disagreements about foundations, whether it'll work out, etc.)

It seems to me one needs epistemic humility more in a preparadigm state because, in such states, the correct perspective is in an important sense just not discovered yet. One has guesses, but the guesses cannot yet be established as common knowledge.

It also seems to me that the work of getting from (3) to (4) (or from 1 or 2 to 3, for that matter) is hard, that moving along this spectrum requires technical research (it basically is a core research activity), and one shouldn't be surprised if it sometimes takes years -- even in cases where the research is good. (This seems to me to also be true in e.g. math departments, but to be extra hard in preparadigm fields.)

(Disclaimer: I'm on the MIRI board, and I worked at MIRI from 2008-2012, but I'm speaking only for myself here.)

Comment author: AnnaSalamon 01 December 2016 09:10:51AM *  11 points [-]

Relatedly, it seems to me that in general, preparadigm fields probably develop faster if:

  1. Different research approaches can compete freely for researchers (e.g., if researchers have secure, institution-independent funding, and can work on whatever approach pleases them). (The reason: there is a strong relationship between what problems can grab a researcher’s interest, and what problems may go somewhere. Also, researchers are exactly the people who have leisure to form a detailed view of the field and what may work. cf also the role of play in research progress.)

  2. The researchers themselves feel secure, and do not need to try to optimize their work for “what others will evaluate as useful enough to keep paying me”. (Such evaluations are unreliable in preparadigm fields, and one wants to maximize the odds that the right approach is tried. This security may well increase the amount of non-productivity in the median case, but it should also increase the usefulness of the tails. And the tails are where most of the value is.)

  3. Different research approaches somehow do not need to compete for funding, PR, etc., except via researchers’ choices as to where to engage. There are no organized attempts to use social pressure or similar to override researchers’ intuitions as to where it may be fruitful to engage (nor to override research institutions’ choices of what programs to enable, except via the researchers’ interests). (Funders’ intuitions seem less likely to be detailed than are the intuitions of the researcher-on-that-specific-problem; attempts to be clear/explainable/respectable are less likely to pull in good directions.)

  4. The pool of researchers includes varied good folks with intuitions formed in multiple fields (e.g., folks trained in physics; other folks trained in math; other folks trained in AI; some unusually bright folks just out of undergrad with less-developed disciplinary prejudices), to reduce the odds of monoculture.

(Disclaimer: I'm on the MIRI board, and I worked at MIRI from 2008-2012, but I'm speaking only for myself here.)

Comment author: Vidur_Kapur  (EA Profile) 10 April 2016 04:03:20PM 3 points [-]

I have a probably silly question about the EuroSPARC program: what if you're in the no man's land between high school and university, i.e. you've just left high school before the program starts?

I know of a couple of mathematically talented people who might be interested (and who would still be in high school), so I'll certainly try and contact them!

Comment author: AnnaSalamon 10 April 2016 07:04:39PM 3 points [-]

Folks who haven't started college yet and who are no more than 19 years old are eligible for EuroSPARC; so, yes, your person (you?) should apply :)


Four free CFAR programs on applied rationality and AI safety

CFAR will be running four free programs this summer that are in various ways intended to help with EA/xrisk, all of which are currently accepting applications: EuroSPARC (July 19-26 in Oxford, UK). A program on applied rationality and cognition for mathematically talented high schoolers from anywhere in the... Read More
