Comment author: DavidMoss 30 October 2017 03:14:14AM 6 points

I don't have much to contribute to the normative social epistemology questions raised here, since this is a huge debate within philosophy. People interested in a general summary might read the Philosophy Compass review or the SEP article.

But I did want to question the claim made here about the descriptive social epistemology of the EA movement, i.e. that:

What occurs instead is agreement approaching fawning obeisance to a small set of people the community anoints as ‘thought leaders’, and so centralizing on one particular eccentric and overconfident view.

I'm not sure this is useful as a general characterisation of the EA community, though certainly at times people are too confident, too deferential, and so on. What beliefs might be the beneficiaries of this fawning obeisance? There doesn't seem to me to be sufficient uncontroversial agreement about much (even utilitarianism has a number of prominent 'thought leaders' pushing against it, saying that we ought to open ourselves up to alternatives).

The general characterisation seems in tension with the common idea that EA is highly combative and confrontational (it would be strange, though not impossible, to have constant disagreement and attempted argumentative one-upmanship combined with excessive deference to certain thought leaders). Instead what I see is occasional excessive deference to people respected within certain cliques, by members of those circles, but not 'centralization' on any one particular view. Perhaps all Greg has in mind is these kinds of cases where people defer too much to people they shouldn't (perhaps due to a lack of actual experts in EA rather than due to their own vice). But then it's not clear to me what the typical EA-rationalist, who has not made and probably shouldn't make a deep study of many-worlds, free will, or meta-ethics, should do to avoid this problem.

Comment author: Gregory_Lewis 27 February 2018 02:16:14AM * 0 points

Apropos of which, SEP published an article on disagreement last week, which provides an (even more) up-to-date survey of philosophical discussion in this area.

Comment author: Gregory_Lewis 21 February 2018 10:24:11PM 17 points

Thank you for writing this post. An evergreen difficulty in discussing topics of such broad scope is the large number of matters that are relevant, difficult to judge, and on which one's judgement (whatever it may be) can reasonably be challenged. I hope to offer a crisper summary of why I am not persuaded.

I understand from this that the primary motivation for MCE (moral circle expansion) is avoiding AI-based dystopias, with the implied causal chain being along the lines of: “If we ensure the humans generating the AI have a broader circle of moral concern, the resulting post-human civilization is less likely to include dystopic scenarios involving great multitudes of suffering sentient beings.”

There are two considerations that speak against this being a greater priority than AI alignment research: 1) back-chaining from AI dystopias leaves relatively few occasions where MCE would make a crucial difference; and 2) the current portfolio of ‘EA-based’ MCE is poorly targeted at averting AI-based dystopias.

Re. 1): MCE may prove neither necessary nor sufficient for ensuring AI goes well. On one hand, AI designers, even if speciesist themselves, might nonetheless provide the right apparatus for value learning, such that the resulting AI will not propagate the moral mistakes of its creators. On the other, even if the AI designers have the desired broad moral circle, they may have other crucial moral faults (perhaps parochial in other respects, perhaps selfish, insufficiently reflective, mistaken on particular moral judgements, naive about cooperation or population ethics, and so on). Even if they do not, there are manifold ways, in the wider environment (e.g. arms races) or in technical implementation, in which disaster may still arise.

It seems clear to me that, pro tanto, the less speciesist the AI designer, the better the AI. Yet for this issue to be of such fundamental importance as to be comparable to AI safety research generally, the implication is an implausible doctrine of ‘AI immaculate conception’: only by ensuring we ourselves are free from sin can we conceive an AI which will not err in a morally important way.

Re. 2): As Plant notes, MCE does not arise from animal causes alone: global poverty and climate change work also extend moral circles, as well as propagating other valuable moral norms. Looking at things the other way, one should expect the animal causes found most valuable from the perspective of avoiding AI-based dystopia to diverge considerably from those picked for face-value animal welfare impact. Companion animal causes are far inferior from the latter perspective, but unclear on the former if this is a good way of fostering concern for animals; and if the crucial thing is for AI creators, rather than the general population, not to be speciesist, targeted interventions like ‘start a petting zoo at DeepMind’ look better than broader ones like the abolition of factory farming.

The upshot is that, even if there are some particularly high-yield interventions in animal welfare from the far-future perspective, these are likely to be fairly far removed from typical EAA activity directed towards having the greatest near-term impact on animals. If this post heralds a pivot of Sentience Institute towards directions fairly orthogonal to the principal component of effective animal advocacy, that would be welcome indeed.

Notwithstanding the above, the approach outlined here has a role to play in some ideal ‘far future portfolio’, and it may be reasonable for some people to prioritise work in this area, if only for reasons of comparative advantage. Yet I aver it should remain a fairly junior member of that portfolio compared to AI safety work.

Comment author: Halstead 02 February 2018 06:37:52PM * 2 points

Genetics might be a constraint on ultra-fragility. If all of the most practically important traits are highly heritable, then one wouldn't expect the contingency of conception to produce as much variation in outcomes as it would if conception had a very large effect on average individual traits. While it is true that which individual is born is a highly contingent matter, the traits of the individual produced might not be. If my parents had argued on the blessed night of my conception but overcome their disagreement the next day, there would still be some reason to think that a version of me conceived a day later would be writing this comment.

Chaos also doesn't seem inimical to attempts to rationally steer the future. Notwithstanding the fact that the climate system is chaotic, pumping lots of greenhouse gases into it looks like a bad idea in expectation.

Comment author: Gregory_Lewis 02 February 2018 06:48:12PM 2 points

That seems surprising to me, given the natural model for the counterpart in the case you describe would be a sibling, and observed behaviour between siblings is pretty divergent. I grant your counterfactual sibling would be more likely than a random member of the population to be writing something similar to the parent comment, but the absolute likelihood remains very low.

The fairly intermediate heritabilities of things like intelligence and personality traits also suggest the counterpart would look pretty different from you. Not least, there's about a 0.5 chance your counterpart would be the opposite sex to you.
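(A rough back-of-envelope illustration, assuming a purely additive genetic model rather than anything stated in the original exchange: full siblings share on average half their additive genetic variance, so a trait with heritability of around 0.5 implies an expected sibling correlation of only about 0.5 × 0.5 ≈ 0.25, leaving most of the variation between you and a counterfactual sibling unexplained.)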

I agree that even if history is chaotic in some respects, it is not chaotic with respect to everything, and there can be forcing interventions (one can grab a double pendulum, etc.). Yet less overwhelming interventions may be pretty hard to evaluate in the chaotic case ("it's too early to say whether the French Revolution was good or bad", etc.).

How fragile was history? (10 points)

Elsewhere (and better): 1, 2. If one could go back in time and make a small difference in the past, would one expect it to effect dramatic changes to the future? Questions like these are fertile soil for fiction writers (generally writing under speculative or alternative history) but... Read More
Comment author: Jan_Kulveit 26 December 2017 11:27:43PM * 3 points

Obviously the toy model is wrong as a description of reality: it's one end of the possible spectrum, where you have complete randomness. At the other end you have another toy model: results in a field neatly ordered by cognitive difficulty, where the best person at any given time picks all the available fruit. My actual claims are roughly:

  • reality is somewhere in between

  • it is field-dependent

  • even in fields more toward the random end, there would still be differences, like different speeds of travel among prospectors

It is quite unclear to me where on this scale the relevant fields are.

I believe your conclusion, that the power-law distribution is all due to the properties of people's cognitive processes and not to the randomness of the field, is not supported by the scientometric data for many research fields.

Thanks for a good preemptive answer :) Yes, if you are good enough at identifying the "golden" cognitive processes. While it is clear you would do better than random chance, it is very unclear to me how good you would be. (*)

I think it's worth digging into an example in detail: if you look at the early Einstein, you actually see someone with unusually developed geometric thinking and the very lucky heuristic of interpreting what the equations say as the actual reality. Famously, the special relativity transformations were first written down by Poincaré; "all" that needed to be done was to take them seriously. General relativity is a different story, but at that point Einstein was already famous and possibly one of the few brave enough to attack the problem.

Continuing with the same example, I am extremely doubtful Einstein would have been picked, before he became famous, by a selection process similar to the one CEA or 80,000 Hours will probably be running. A second-grade patent clerk? Unimpressive. Well connected? No. Unusual geometric imagination? I'm not aware of any LessWrong sequence which would lead to picking that out as important :) Lucky heuristic? Pure gold, in hindsight.

(*) In the end you can take this as an optimization problem, depending on how good your superior-cognitive-process selection ability is. A practical example: you have 1000 applicants. If your selection ability is great enough, you should take 20 for individual support. But maybe it's merely good, and then you may get better expected utility if you are able to reach 100 potentially great people in workshops. Maybe you are much better than chance, but not really good... then perhaps you should create an online course taking in 400 participants.
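A minimal simulation sketch of this trade-off (my own illustration, not part of the original comment; the applicant pool, programme sizes, per-person boosts, and noise levels are all invented for the example):

```python
# A rough sketch (made-up numbers) of the footnote's trade-off: how the best
# programme size depends on how well you can rank applicants.
import numpy as np

rng = np.random.default_rng(0)
n_applicants = 1000
n_great = 50  # latent "great" applicants; the selector cannot observe this directly

# Hypothetical programme formats: (number of slots, per-person impact boost)
programmes = {
    "1:1 mentoring": (20, 1.0),
    "workshops": (100, 0.3),
    "online course": (400, 0.1),
}

def expected_impact(noise_sd, n_trials=2000):
    """Average impact of each programme when applicants are ranked by a noisy signal."""
    totals = {name: 0.0 for name in programmes}
    for _ in range(n_trials):
        quality = np.zeros(n_applicants)
        quality[:n_great] = 1.0                              # 1 = great, 0 = not
        signal = quality + rng.normal(0.0, noise_sd, n_applicants)
        ranking = np.argsort(-signal)                        # best signal first
        for name, (slots, boost) in programmes.items():
            totals[name] += quality[ranking[:slots]].sum() * boost / n_trials
    return totals

for noise_sd in (0.2, 1.0, 3.0):  # strong, mediocre, weak selection ability
    print(noise_sd, expected_impact(noise_sd))
```

Under these made-up numbers, the small intensive programme wins when the selection signal is strong, and the broader, lower-touch formats catch up and eventually overtake it as the signal degrades.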

Comment author: Gregory_Lewis 27 December 2017 02:27:20AM 4 points

I share your caution on the difficulty of 'picking high-impact people well': besides the risk of over-fitting on whatever anecdata we happen to latch on to, the past may simply prove underpowered for forward prediction. I'm not sure any system could reliably 'pick up' Einstein or Ramanujan, and I wonder how much 'thinking tools' etc. are just epiphenomena of IQ.

That said, fairly boring metrics are fairly predictive. People who do exceptionally well at school tend to do well at university, those who excel at university have a better chance of exceptional professional success, and so on and so forth. SPARC (a program aimed at extraordinarily mathematically able youth) seems a neat example. I accept none of these supply an easy model for 'talent scouting' intra-EA, but they suggest one can do much better than chance.

Optimal selectivity also depends on the size of the boost you give to people, even if they are imperfectly selected. It's plausible this relationship is convex over the 'one-to-one mentoring to webpage' range, and so you might have to gamble on something intensive even while expecting to fail to identify most or nearly all of the potentially great people.

(Aside: although it is tricky to put human ability on a cardinal scale, normal-distribution properties for things like working memory suggest cognitive ability (however cashed out) isn't power-law distributed. One explanation of how roughly normal ability could nonetheless drive power-law distributions of output in some fields would be a Matthew effect: being marginally better than competing scientists lets one take the majority of the great new discoveries. This may suggest that more neglected areas, or those where the crucial consideration is whether/when something is discovered rather than who discovers it (compare a malaria vaccine to an AGI), are those where the premium on really exceptional talent is smaller.)
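A toy simulation of that Matthew effect (again my own sketch rather than anything from the original comment; the population size, the IQ-like ability scale, and the 20-contender "race" per discovery are all invented for illustration):

```python
# Ability is drawn from a normal distribution, but each discovery is claimed by the
# most able of a small set of contenders, so output per scientist ends up heavy-tailed.
import numpy as np

rng = np.random.default_rng(1)
n_scientists, n_discoveries = 1000, 10_000

ability = rng.normal(100, 15, n_scientists)   # roughly IQ-like, definitely not power-law
counts = np.zeros(n_scientists, dtype=int)

for _ in range(n_discoveries):
    contenders = rng.choice(n_scientists, size=20, replace=False)  # who is "in the race"
    winner = contenders[np.argmax(ability[contenders])]            # marginally best wins
    counts[winner] += 1

shares = np.sort(counts)[::-1] / n_discoveries
print(f"Top 1% of scientists: {shares[:10].sum():.0%} of discoveries")
print(f"Top 10% of scientists: {shares[:100].sum():.0%} of discoveries")
```

Even though ability here is normally distributed, the winner-take-most competition hands the marginally most able few far more than their per-capita share of discoveries.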

Comment author: RyanCarey 19 December 2017 08:14:04PM * 9 points

That is an excellent update. The strategic directions broadly make sense to me for all of the teams, and I, like many people, am really happy with the ways CEA has improved over the last year.

One item of feedback on the post: the description of mistakes is a bit long, boring, and over-the-top. Many of these things are not actually very important issues.

One suggestion re the EA Forum revamp: the lesserwrong.com site is looking pretty great these days. My main gripes (things like the font being slightly small for my preferences) could easily be fixed with some restyling. Some of their features, like sequences of archived material, could also be ideal for the EA Forum use case. I don't know whether the codebase is good, but recall that the EA Forum was originally created by restyling LessWrong 1.0, so the notion of stealing that code comes from a healthy tradition! This last part is probably a bit too crazy (and too much work), but one can also imagine posting content (and accepting comments) on both sites at once.

That aside, it's really appreciated that you guys have taken the forum over this year. And in general, it's great to see all of this progress, so here's to 2018!

Comment author: Gregory_Lewis 20 December 2017 01:03:22AM 3 points

I agree both with Ryan's overall evaluation (this is excellent) and that the 'mistakes' section, although laudable in intent, errs slightly too far in the 'self-flagellatory' direction. Some of the mistakes listed either seem to be appropriate decisions (e.g. "We prioritized X over Y, so we didn't do as much Y as we'd like"), or are the result of reasonable ex ante decisions or calculations which didn't work out.

I think the main value of publicly recording mistakes is to allow others to learn from them or (if egregious) be the context for a public mea culpa. The line between, "We made our best guess, it turned out wrong, but we're confident we made the right call ex ante" and "Actually, on reflection, we should have acted differently given what we knew at the time" is blurry, as not all decisions can (or should) be taken with laborious care.

Perhaps crudely categorising mistakes into 'major' and 'minor' (given their magnitude, how plausibly they could have been averted, and so on), then putting the former in updates like these and linking the latter in an appendix, might be a good way forward.

Comment author: Peter_Hurford 18 December 2017 10:01:34PM 4 points

My excuses in order of importance:

1.) While I do think AI as a cause area could be plausibly better than global poverty or animal welfare, I don't think it's so plausibly better that the expected value given my uncertainty dwarfs my current recommendations.

2a.) I think I'm basically okay with the streetlight effect. I think there's a lot of benefit in donating now to support groups that might not be able to expand at all without my donation, which is what the criteria I outlined here accomplish. Given the entire EA community collaborating as a whole, I think there's less need for me to focus tons of time on making sure my donations are as cost-effective as possible, and more just a need to clear a bar of being "better than average". I think my recommendations here accomplish that.

2b.) Insofar as my reasoning in (2a) is some "streetlight effect" bias, I think you could accuse nearly anyone of this, since very few have thoroughly explored every cause area and no one could fully rule out being wrong about a cause area.

3.) There is still more I could donate later. This money is being saved mainly as a hedge to large financial uncertainty in my immediate future, but could also be used as savings to donate later when I learn more.

Comment author: Gregory_Lewis 19 December 2017 12:38:18AM 3 points

[Note: I work on existential risk reduction]

Although I laud posts like the OP, I'm not sure I understand this approach to uncertainty.

I think a lot turns on what you mean by the AI cause area being "plausibly better" than global poverty or animal welfare on expected value. The Gretchenfrage seems to be this conditional forecast: "If I spent (let's say) six months looking at the AI cause area, would I expect to identify better uses of marginal funding in this cause area than those I find in animal welfare and global poverty?"

If the answer is "plausibly so, but probably not" (either due to a lower 'prima facie' central estimate, or after pricing in regression to the mean etc.), then I understand the work uncertainty is doing here (modulo the usual points about VoI): one can't carefully look at everything, and one has to make some judgments on what cause areas look most promising to investigate on current margins.

Yet if the answer is "probably, yes", then offering these recommendations simpliciter (i.e. "EA should fully fund this") seems premature to me. The evaluation is valuable, but should be presented with caveats like: "Conditional on thinking global poverty is the best cause area, fund X; conditional on thinking animal welfare is the best cause area, fund Y (but, FWIW, I believe AI is the best cause area, and I don't know what to fund within it)." It would also lean against making one's own donations to X, Y, etc., rather than spending time thinking about it or following the recommendations of someone one trusts to make good picks in the AI cause area.

Comment author: [deleted] 02 December 2017 06:53:29PM * -2 points

I left most EA Facebook groups and concluded that EA will be an ineffective movement as a whole because I found basically NONE of the above being done in your organization. Ever. "Being intellectually fair can help people to resolve disagreements, so we have norms against overconfidence and fallacious reasoning." No, you have a norm of extreme overconfidence and fallacious reasoning, in the form of DEMANDS for "arguments by authority" that are the consistent response I encountered. More than half a dozen EA people "explained" to me that they would pay no attention to my claims or work until I went back to university and got a PhD, and others who had only a Bachelors in computer programming wanted to "review" my work in unrelated areas before even accepting an unpaid article for their blog from me. Others responded as if EA was some sort of popularity contest and not an effort to help others altruistically.

As constituted, EA is a practice in the wildly overblown egos of privileged young white males (mostly) who will accomplish very very little. The norm is that they believe they know literally everything and have no interest in hearing ideas that are new to them, at all.

I joined because I have knowledge to share. The "moderators" of the FB group consistently felt my knowledge was of no value and refused to permit my posts to be seen. I have shared my knowledge at three international academic conferences, but it was not deemed worthy of a single FB post on EA. The message was abundantly clear, EA does not want new ideas or knowledge, does not want to see any of their current ideas and assumptions questioned at all. My advice to anyone who wants to "Share knowledge. If you know a lot about an area, help others to learn by writing up what you’ve found" is to find a group where people might have even a slight interest, your efforts to do so at "Effective" altruism will be entirely ineffective.

It is a damned shame, the concept of EA is a good one.

Comment author: Gregory_Lewis 02 December 2017 10:27:14PM * 2 points

For the benefit of readers: The individual who wrote this is almost certainly Carmi Turchick, an (his words) "autodidact independent scholar". He reports he presented works relating to his blog at the Symposium on the Psychology of War and the Association for Politics and the Life Sciences, and presented a poster at the Human Evolution and Behaviour Society.

I take this academic record to be pretty modest for someone who claims to have novel understanding about how to 'solve war', so it doesn't seem unreasonable for people to screen out claims like this on this heuristic; nor does it imply they take themselves to know literally everything or to have no interest in new ideas. Just that the likelihood of good new ideas arising from this reference class is too low for it to be worth indulging them with scarce attention.

Of course, such a screening heuristic means one won't see diamonds in the rough. I can reassure others this is unlikely to be the case here. For my sins I had a look at the Altruism and War work. It is very long, not very well written, and falls into the standard autodidact's trap of taking as startlingly original insights that have already been made elsewhere: in this case, the idea that intra-group altruism can drive inter-group conflict was first ventured by Darwin in the Origin of Species, and there has been considerable research since, usually under the heading of 'parochial altruism'.

When I made these suggestions to Turchick (alongside a recommendation he would be better served trying to work in academia) he offered in reply a vituperative parting shot suggesting I was demonstrably incompetent in the subject of my PhD, that I failed to review his second paper because I plan to steal ideas from it for my own academic career, that I'm an 'egotistical little punk running my mouth', and so on and so forth ad nauseam.

I hope the wider EA movement does not mourn the loss of his contributions too heavily, and I beg forgiveness to whatever extent my interaction with him provoked this state of affairs, which I, of course, gravely and bitterly lament. I hope others take some solace that, as Achilles was spurred by guilt over his role in the death of his friend Patroclus to redouble his efforts against the Trojans, so I will redouble my meagre, egotistical, punk-like efforts to compensate in some small part for what Turchick would have provided. I also take further solace that Turchick is not wholly lost to us, and that the shrewd and penetrating criticism he offers may provide some glimmer of hope for our movement to avoid his prognostications, though I fear they are Cassandra-esque in their accuracy.

[I am a moderator for the EA FB group, but moderation decisions regarding any of Turchick's posts were 'before my time'.]

Comment author: Gregory_Lewis 09 November 2017 01:22:42AM * 9 points

I am wiser, albeit poorer: the bet resolved in Carl's favour. I will edit this comment with the donation destination he selects, with further lamentations from me in due course.

Comment author: Gregory_Lewis 22 November 2017 08:10:02PM 7 points

Carl has gotten back to me with where he would like to donate his gains, ill-gotten through picking on epistemic inferiors - akin to crocodiles in the Serengeti river picking off particularly frail or inept wildebeest on their crossing. The $1000 will go to MIRI.

With cognitive function mildly superior to the median geriatric wildebeest, I can take some solace that these circumstances imply this sum is better donated by him than I, and that MIRI is doing better on a crucial problem for the far future than I had supposed.

Comment author: kbog 10 November 2017 12:30:08AM * 4 points

Eliezer's solution wasn't dietary treatment; it was to use imported Nizoral.

Comment author: Gregory_Lewis 11 November 2017 12:29:59AM 1 point

Although it isn't clear in the story, doctors often use empirical treatment: i.e. treat the most likely culprit, then rely on the patient to come back if it doesn't work.

So I don't take this as a huge strike against the medical profession - after all, knowing a treatment hasn't worked is a large informational edge.
