Comment author: Dunja 01 March 2018 10:31:27AM 2 points

Sure :) I saw that one on their website as well. But a few papers over the course of 2-3 years isn't very representative of an effective research group, is it? If you look at groups led by scholars who do get (way smaller) grants in the field of AI, their output is way more effective. But even if we don't count publications and instead speak in terms of the effectiveness of those few publications, I am not seeing anything. If you are, maybe you can explain it to me?

Comment author: Gregory_Lewis 04 March 2018 12:20:40PM 0 points

I regret I don't have much insight to offer on the general point. When I was looking into the bibliometrics myself, a very broad comparison to (e.g.) Norwegian computer scientists gave figures like '~0.5 to 1 paper per person-year', with which MIRI's track record seemed about on par if we look at peer-reviewed technical work. I wouldn't be surprised to find better-performing research groups (in terms of papers/highly cited papers), but I would be slightly more surprised if these groups were doing AI safety work.

Comment author: Dunja 01 March 2018 09:17:16AM * 2 points

Oh, but you are confusing conference presentations with conference publications. Check the links you've just sent me: they discuss the latter, not the former. You cannot cite a conference presentation (or at least that's not what's usually understood by "citations", and definitely not in the links from your post), only a publication. Conference publications in the field of AI are indeed usually peer-reviewed and yes, indeed, they are often even more relevant than journal publications, at least if published in prestigious conference proceedings (as I stated above).

Now, on MIRI's publication page there are no conference publications in 2017, and for 2016 there are mainly technical reports, which is fine, but should again not be confused with regular (conference) publications, at least according to the information provided by the publisher. Note that this doesn't mean technical reports are of no value! To the contrary. I am just making an overall analysis of the current state of MIRI's publications, trying to figure out what they've published and how this compares with the publication records of similarly sized research groups in a similar domain. If I am wrong on any of these points, I'll be happy to revise my opinion!

Comment author: Gregory_Lewis 01 March 2018 10:18:30AM * 1 point

This paper was in 2016, and is included in the proceedings of the UAI conference that year. Does this not count?

Comment author: Dunja 28 February 2018 11:17:05AM * 1 point

Thanks for the comment, Gregory! I must say though that I don't agree with you that conference presentations are significantly more important than journal publications in the field of AI (or humanities for that matter). We could discuss this in terms of personal experiences, but I'd go for a more objective criterion: effectiveness in terms of citations. Only once your research results are published in a peer-reviewed journal (including peer-reviewed conference proceedings) can other scholars in the field take them as a (minimally) reliable source for further research that builds on them. By the way, many prestigious AI conferences actually come with peer-reviewed proceedings (take e.g. AAAI or IJCAI), so you can't even present at the conference without submitting a paper.

Again, MIRI might be doing excellent work. All I am asking is: in view of which criteria can we judge this to be the case? What are the criteria of assessment, which the EA community finds extremely important when it comes to the assessment of charities, and which I think we should find just as important when it comes to the funding of scientific research?

Comment author: Gregory_Lewis 01 March 2018 05:54:42AM 3 points

I must say though that I don't agree with you that conference presentations are significantly more important than journal publications in the field of AI (or humanities for that matter). We could discuss this in terms of personal experiences, but I'd go for a more objective criterion: effectiveness in terms of citations.

Technical research on AI generally (although not exclusively) falls under the heading of computer science. In this field, it is the prevailing (but not universal) view of practitioners not only that conference presentations are academically 'better' (here, here, etc.), but also that they tend to have similar citation counts.

Comment author: Gregory_Lewis 28 February 2018 06:06:00AM * 4 points

Disclosure: I'm both a direct and indirect beneficiary of Open Phil funding. I am also a donor to MIRI, albeit an unorthodox one.

[I]f you check MIRI's publications you find not a single journal article since 2015 (or an article published in prestigious AI conference proceedings, for that matter).

I have a 2-year out-of-date rough draft on bibliometrics re. MIRI, which likely won't get updated due to being superseded by Lark's excellent work and other constraints on my time. That said:

My impression of computer science academia was that (unlike most other fields) conference presentations are significantly more important than journal publications. Further, when one looks at work on MIRI's page from 2016-2018, I see 2 papers at Uncertainty in AI (UAI), which this site suggests is a 'top-tier' conference. (Granted, for one of these neither of the authors has a MIRI institutional affiliation, although 'many people at MIRI' are acknowledged.)

Comment author: DavidMoss 30 October 2017 03:14:14AM 6 points

I don't have much to contribute to the normative social epistemology questions raised here, since this is a huge debate within philosophy. People interested in a general summary might read the Philosophy Compass review or the SEP article.

But I did want to question the claim made here about the descriptive social epistemology of the EA movement, i.e. that:

What occurs instead is agreement approaching fawning obeisance to a small set of people the community anoints as ‘thought leaders’, and so centralizing on one particular eccentric and overconfident view.

I'm not sure this is useful as a general characterisation of the EA community, though certainly at times people are too confident, too deferential, etc. What beliefs might be the beneficiaries of this fawning obeisance? There doesn't seem to me to be sufficient uncontroversial agreement about much (even utilitarianism has a number of prominent 'thought leaders' pushing against it, saying that we ought to be opening ourselves up to alternatives).

The general characterisation seems in tension with the common idea that EA is highly combative and confrontational (it would be strange, though not impossible, if we had constant disagreement and attempted argumentative one-upmanship combined with excessive deference to certain thought leaders). Instead what I see is occasional excessive deference to people respected within certain cliques, by members of those circles, but not 'centralization' on any one particular view. Perhaps all Greg has in mind is these kinds of cases, where people defer too much to people they shouldn't (perhaps due to a lack of actual experts in EA rather than due to their own vice). But then it's not clear to me what the typical EA-rationalist, who has not made and probably shouldn't make a deep study of many-worlds, free will, or meta-ethics, should do to avoid this problem.

Comment author: Gregory_Lewis 27 February 2018 02:16:14AM * 0 points

Apropos of which, SEP published an article on disagreement last week, which provides an (even more) up to date survey of philosophical discussion in this area.

Comment author: Gregory_Lewis 21 February 2018 10:24:11PM 16 points

Thank you for writing this post. An evergreen difficulty that applies to discussing topics of such a broad scope is the large number of matters that are relevant, difficult to judge, and where one's judgement (whatever it may be) can be reasonably challenged. I hope to offer a crisper summary of why I am not persuaded.

I understand from this that the primary motivation of MCE is avoiding AI-based dystopias, with the implied causal chain being along the lines of: "If we ensure the humans generating the AI have a broader circle of moral concern, the resulting post-human civilization is less likely to include dystopic scenarios involving great multitudes of suffering sentiences."

There are two considerations that speak against this being a greater priority than AI alignment research: 1) Back-chaining from AI dystopias leaves relatively few occasions where MCE would make a crucial difference. 2) The current portfolio of ‘EA-based’ MCE is poorly addressed to averting AI-based dystopias.

Re. 1): MCE may prove neither necessary nor sufficient for ensuring AI goes well. On one hand, AI designers, even if speciesist themselves, might nonetheless provide the right apparatus for value learning such that the resulting AI will not propagate the moral mistakes of its creators. On the other, even if the AI designers have the desired broad moral circle, they may have other crucial moral faults (maybe parochial in other respects, maybe selfish, maybe insufficiently reflective, maybe holding some mistaken particular moral judgements, maybe naive in their approaches to cooperation or population ethics, and so on). Even if they do not, there are manifold ways, in the wider environment (e.g. arms races) or in terms of technical implementation, that may incur disaster.

It seems clear to me that, pro tanto, the less speciesist the AI-designer, the better the AI. Yet for this issue to be of such fundamental importance as to be comparable to AI safety research generally, the implication is an implausible doctrine of 'AI immaculate conception': only by ensuring we ourselves are free from sin can we conceive an AI which will not err in a morally important way.

Re. 2): As Plant notes, MCE does not arise from animal causes alone: work on global poverty and climate change also acts to extend moral circles, as well as propagating other valuable moral norms. Looking at things the other way, one should expect the animal causes found most valuable from the perspective of avoiding AI-based dystopia to diverge considerably from those picked on face-value animal welfare. Companion animal causes are far inferior from the latter perspective, but unclear on the former, if this is a good way of fostering concern for animals; and if the crucial thing is for AI-creators, rather than the general population, not to be speciesist, then targeted interventions like 'Start a petting zoo at Deepmind' look better than broader ones, like the abolition of factory farming.

The upshot is that, even if there are some particularly high yield interventions in animal welfare from the far future perspective, this should be fairly far removed from typical EAA activity directed towards having the greatest near-term impact on animals. If this post heralds a pivot of Sentience Institute to directions pretty orthogonal to the principal component of effective animal advocacy, this would be welcome indeed.

Notwithstanding the above, the approach outlined here has a role to play in some ideal 'far future portfolio', and it may be reasonable for some people to prioritise work in this area, if only for reasons of comparative advantage. Yet I aver it should remain a fairly junior member of this portfolio compared to AI safety work.

Comment author: Halstead 02 February 2018 06:37:52PM * 2 points

Genetics might be a constraint on ultra-fragility. If all of the most practically important traits are highly heritable, then one wouldn't expect the contingency of conception to produce as much variation in outcomes as in the state of affairs in which the contingency of conception has a very large effect on average individual traits. While it is true that the individual born is a highly contingent matter, the traits of the individual produced might not be. If my parents had an argument on the blessed night of my conception but overcame their disagreement the next day, then there would be some reason to think that a one day older version of me would be writing this comment.

Chaos also doesn't seem inimical to attempts to rationally steer the future. Notwithstanding the fact that the climate system is chaotic, pumping lots of greenhouse gases into it looks like a bad idea in expectation.

Comment author: Gregory_Lewis 02 February 2018 06:48:12PM 2 points

That seems surprising to me, given the natural model for the counterpart in the case you describe would be a sibling, and observed behaviour between sibs is pretty divergent. I grant your counterfactual sibling would be more likely than a random member of the population to be writing something similar to the parent comment, but the absolute likelihood remains very low.

The fairly intermediate heritabilities of things like intelligence, personality traits etc. also look pretty variable. Not least, there's about a 0.5 chance your counterpart would be the opposite sex to you.

I agree that even if history is chaotic in some respects, it is not chaotic with respect to everything, and there can be forcing interventions (one can grab a double pendulum, etc.), yet less overwhelming interventions may be pretty hard to fathom in the chaotic case ("it's too early to say whether the French Revolution was good or bad", etc.).

Comment author: Jan_Kulveit 26 December 2017 11:27:43PM * 3 points

Obviously the toy model is wrong as a description of reality: it's one end of the possible spectrum, where you have complete randomness. On the other end you have another toy model: results in a field neatly ordered by cognitive difficulty, with the best person at any given time picking all the available fruit. My actual claims roughly are:

  • reality is somewhere in between

  • it is field-dependent

  • even in fields more toward the random end, there actually would be differences like different speeds of travel among prospectors

It is quite unclear to me where on this scale the relevant fields are.

I believe your conclusion, that the power law distribution is all due to the properties of people's cognitive processes and not to the randomness of the field, is not supported by the scientometric data for many research fields.

Thanks for a good preemptive answer :) Yes, if you are good enough at identifying the "golden" cognitive processes. While it is clear you would be better than random chance, it is very unclear to me how good you would be.*

I think it's worth digging into an example in detail: if you look at early Einstein, you actually see someone with unusually developed geometric thinking and the very lucky heuristic of interpreting what the equations say as the actual reality. Famously, the special relativity transformations were first written down by Poincaré. "All" that needed to be done was to take them seriously. General relativity is a different story, but at that point Einstein was already famous and possibly one of the few brave enough to attack the problem.

Continuing with the same example, I would be extremely doubtful that Einstein would have been picked by a selection process similar to what CEA or 80,000 Hours will probably be running, before he became famous. 2nd grade patent clerk? Unimpressive. Well connected? No. Unusual geometric imagination? I'm not aware of any LessWrong sequence which would lead to picking this as that important :) Lucky heuristic? Pure gold, in hindsight.

(*) In the end you can take this as an optimization problem, depending on how good your superior-cognitive-process selection ability is. Let's have a practical example: you have 1000 applicants. If your selection ability is great enough, you should take 20 for individual support. But maybe it's just good, and then you may get better expected utility if you are able to reach 100 potentially great people in workshops. Maybe you are much better than chance, but not really good... then maybe you should create an online course taking in 400 participants.
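
For what it's worth, here is a minimal sketch of the optimization problem described in that footnote, with made-up numbers: the cohort sizes, the per-person "boost" each format gives to a genuinely great person, and the noise level standing in for selection ability are all illustrative assumptions, not anything taken from the comment.

```python
import numpy as np

# Toy model: 1000 applicants, 20 of whom are "potentially great". The selector
# ranks applicants by a noisy signal of talent; more intensive programs give a
# bigger per-person boost but admit fewer people.
rng = np.random.default_rng(0)

N_APPLICANTS = 1000
N_GREAT = 20

def expected_impact(k, boost, selector_noise, trials=2000):
    """Average (boost x number of great applicants admitted) over many trials."""
    total = 0.0
    for _ in range(trials):
        talent = np.zeros(N_APPLICANTS)
        talent[:N_GREAT] = 1.0                       # the truly great applicants
        score = talent + rng.normal(0, selector_noise, N_APPLICANTS)
        top_k = np.argsort(score)[-k:]               # admit the k highest-scoring
        total += boost * talent[top_k].sum()
    return total / trials

# (cohort size, assumed per-person boost for a great person)
programs = {
    "1:1 support (take 20)":    (20, 10.0),
    "workshops (take 100)":     (100, 3.0),
    "online course (take 400)": (400, 1.0),
}

for noise in (0.3, 1.0, 3.0):                        # good, mediocre, weak selection
    results = {name: expected_impact(k, b, noise) for name, (k, b) in programs.items()}
    best = max(results, key=results.get)
    summary = ", ".join(f"{n}: {v:.1f}" for n, v in results.items())
    print(f"selector noise {noise}: {summary}  -> best format: {best}")
```

With these (arbitrary) numbers, the intensive option wins when the selector's noise is small, and the broad online course wins when selection is barely better than chance, which is the trade-off the footnote gestures at.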

Comment author: Gregory_Lewis 27 December 2017 02:27:20AM 4 points

I share your caution on the difficulty of 'picking high impact people well': besides the risk of over-fitting on anecdata we happen to latch on to, the past may simply prove underpowered for forward prediction. I'm not sure any system could reliably 'pick up' Einstein or Ramanujan, and I wonder how much 'thinking tools' etc. are just epiphenomena of IQ.

That said, fairly boring metrics are fairly predictive. People who do exceptionally well at school tend to do well at university, those who excel at university have a better chance of exceptional professional success, and so on and so forth. SPARC (a program aimed at extraordinarily mathematically able youth) seems a neat example. I accept none of these supply an easy model for 'talent scouting' intra-EA, but they suggest one can do much better than chance.

Optimal selectivity also depends on the size of the boost you give to people, even if they are imperfectly selected. It's plausible this relationship could be convex over the 'one-to-one mentoring to webpage' range, and so you might have to gamble on something intensive even in the expectation of failing to identify most or nearly all of the potentially great people.

(Aside: although it is tricky to put human ability on a cardinal scale, normal-distribution properties for things like working memory suggest cognitive ability (however cashed out) isn't power-law distributed. One explanation of how this could drive power-law distributions in some fields would be a Matthew effect: being marginally better than competing scientists lets one take the majority of the great new discoveries. This may suggest that more neglected areas, or those where the crucial consideration is whether/when something is discovered rather than who discovers it (compare a malaria vaccine to an AGI), are those where the premium on really exceptional talent is lower.)
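
A quick toy simulation of the Matthew-effect explanation in that aside (entirely an illustrative construction, with arbitrary numbers): abilities are drawn from a normal distribution, each "discovery" goes to whoever performs marginally best that day, and output ends up far more concentrated than ability.

```python
import numpy as np

# Normally distributed ability plus winner-take-most contests can yield a very
# skewed distribution of output, even though ability itself is not power-law.
rng = np.random.default_rng(1)

n_scientists = 500
n_discoveries = 5000

ability = rng.normal(100, 15, n_scientists)      # IQ-like: normal, not power-law
wins = np.zeros(n_scientists, dtype=int)

for _ in range(n_discoveries):
    # Each discovery is a race: day-to-day luck is added to ability, and the
    # marginally best competitor takes the whole prize (the Matthew effect).
    performance = ability + rng.normal(0, 5, n_scientists)
    wins[np.argmax(performance)] += 1

top_decile_share = np.sort(wins)[-n_scientists // 10:].sum() / n_discoveries
print(f"share of discoveries made by the top 10% of scientists: {top_decile_share:.0%}")
print(f"ability ratio, best vs. median scientist: {ability.max() / np.median(ability):.2f}x")
```

The printout shows a modest ability ratio between the best and median scientist alongside a heavily concentrated share of discoveries, which is the pattern the aside appeals to.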

Comment author: RyanCarey 19 December 2017 08:14:04PM * 9 points

That is an excellent update. The strategic directions broadly make sense to me for all of the teams, and I, like many people, am really happy with the ways CEA has improved over the last year.

One item of feedback on the post: the description of mistakes is a bit long, boring, and over-the-top. Many of these things are not actually very important issues.

One suggestion re the EA Forum revamp: the site is looking pretty great these days. My main gripes --- things like the font being slightly small for my preferences --- could be easily fixed with some restyling. Some of their features, like including sequences of archived material, could also be ideal for the EA Forum use case. IDK whether the codebase is good, but recall that the EA Forum was originally created by restyling LessWrong1, so the notion of stealing that code comes from a healthy tradition! Also, this last part is probably a bit too crazy (and too much work), but one can imagine a case where you post content (and accept comments) from both sites at once.

That aside, it's really appreciated that you guys have taken the forum over this year. And in general, it's great to see all of this progress, so here's to 2018!

Comment author: Gregory_Lewis 20 December 2017 01:03:22AM 3 points

I agree both with Ryan's overall evaluation (this is excellent) and that the 'mistakes' section, although laudable in intent, errs slightly too far in the 'self-flagellatory' direction. Some of the mistakes listed either seem appropriate decisions (e.g. "We prioritized X over Y, so we didn't do as much Y as we'd like"), or are the result of reasonable decisions or calculations ex ante which didn't work out.

I think the main value of publicly recording mistakes is to allow others to learn from them or (if egregious) be the context for a public mea culpa. The line between, "We made our best guess, it turned out wrong, but we're confident we made the right call ex ante" and "Actually, on reflection, we should have acted differently given what we knew at the time" is blurry, as not all decisions can (or should) be taken with laborious care.

Perhaps crudely categorising mistakes into 'major' and 'minor' given their magnitude, how plausibly they could have been averted, etc., and putting the former in updates like these but linking to the latter in an appendix, might be a good way forward.

Comment author: Peter_Hurford (EA Profile) 18 December 2017 10:01:34PM 3 points

My excuses in order of importance:

1.) While I do think AI as a cause area could be plausibly better than global poverty or animal welfare, I don't think it's so plausibly better that the expected value given my uncertainty dwarfs my current recommendations.

2a.) I think I'm basically okay with the streetlight effect. I think there's a lot of benefit in donating now to support groups that might not be able to expand at all without my donation, which is what the criteria I outlined here accomplish. Given the entire EA community collaborating as a whole, I think there's less need for me to focus tons of time on making sure my donations are as cost-effective as possible, and more just a need to clear a bar of being "better than average". I think my recommendations here accomplish that.

2b.) Insofar as my reasoning in (2a) is some "streetlight effect" bias, I think you could accuse nearly anyone of this, since very few have thoroughly explored every cause area and no one could fully rule out being wrong about a cause area.

3.) There is still more I could donate later. This money is being saved mainly as a hedge to large financial uncertainty in my immediate future, but could also be used as savings to donate later when I learn more.

Comment author: Gregory_Lewis 19 December 2017 12:38:18AM 3 points

[Note: I work on existential risk reduction]

Although I laud posts like the OP, I'm not sure I understand this approach to uncertainty.

I think a lot turns on what you mean by the AI cause area being "plausibly better" than global poverty or animal welfare on EV. The Gretchenfrage seems to be this conditional forecast: "If I spent (let's say) 6 months looking at the AI cause area, would I expect to identify better uses of marginal funding in this cause area than those I find in animal welfare and global poverty?"

If the answer is "plausibly so, but probably not" (either due to a lower 'prima facie' central estimate, or after pricing in regression to the mean etc.), then I understand the work uncertainty is doing here (modulo the usual points about VoI): one can't carefully look at everything, and one has to make some judgments on what cause areas look most promising to investigate on current margins.

Yet if the answer is "Probably, yes", then offering these recommendations simpliciter (i.e. "EA should fully fund this") seems premature to me. The evaluation is valuable, but should be presented with caveats like: "Conditional on thinking global poverty is the best cause area, fund X; conditional on thinking animal welfare is the best cause area, fund Y (but, FWIW, I believe AI is the best cause area, though I don't know what to fund within it)." It would also lean against making one's own donations to X, Y, etc., rather than spending time thinking about it/following the recommendations of someone one trusts to make good picks in the AI cause area.
