Comment author: Jan_Kulveit 28 December 2017 11:58:17AM 1 point [-]

For scientific publishing, I looked into the latest available paper [1], and apparently the data are best fitted by a model where the impact of a scientific paper is predicted by Q·p, where p is the "intrinsic value" of the project and Q is a parameter capturing the cognitive ability of the researcher. Notably, Q is independent of the total number of papers written by the scientist, and Q and p are also independent of each other. Translating into the language of digging for gold: the prospectors differ in their speed and their ability to extract gold from the deposits (Q), while the gold in the deposits actually is randomly distributed. To extract exceptional value, you have to have high Q and also be very lucky. What is encouraging for talent selection is that Q seems to stay relatively stable across a career and can be usefully estimated after ~20 publications. I would guess you can predict from even less data, but the correct "formula" would try to disentangle the interestingness of the problems the person works on from the interestingness of the results.
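A minimal simulation sketch of this Q·p model (the distributions, their parameters, and all sizes below are my own illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n_scientists = 10_000
papers_each = 50

# Q-model: the impact of a paper is Q * p, where p ("project luck") is
# drawn independently per paper, and Q is a stable per-scientist factor.
Q = rng.lognormal(mean=0.0, sigma=0.5, size=n_scientists)
p = rng.lognormal(mean=0.0, sigma=1.0, size=(n_scientists, papers_each))

impact = Q[:, None] * p  # impact of every paper

# The highest-impact paper requires both high Q and a lucky draw of p,
# so a scientist's best paper only partly tracks their ability.
best_paper = impact.max(axis=1)
print("corr(Q, best paper):", np.corrcoef(Q, best_paper)[0, 1])

# Q is estimable from a publication record: log(impact) = log(Q) + log(p),
# so the mean log-impact over a scientist's papers estimates log(Q).
Q_hat = np.exp(np.log(impact).mean(axis=1))
print("corr(Q, estimated Q):", np.corrcoef(Q, Q_hat)[0, 1])
```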

(As a side note, I was wrong in guessing this is strongly field-dependent, as the model seems stable across several disciplines, time periods, and many other parameters.)

Interesting heuristics about people :)

I agree the problem is somewhat different in less established/institutionalized areas, where you don't have clear dimensions of competition, or where the easily measurable dimensions are not well aligned with what is important. Looks like another understudied area.

[1] Sinatra et al., "Quantifying the evolution of individual scientific impact", Science, 2016. http://www.sciencesuccess.org/uploads/1/5/5/4/15543620/science_quantifying_aaf5239_sinatra.pdf

Comment author: Benito 31 December 2017 12:26:05AM 0 points [-]

I copied this exchange to my blog, and there were a bunch of additional interesting comments there.

Comment author: Jan_Kulveit 26 December 2017 11:27:43PM *  3 points [-]

Obviously the toy model is wrong as a description of reality: it's one end of the possible spectrum, where you have complete randomness. On the other end you have another toy model: results in a field neatly ordered by cognitive difficulty, where the best person at any time picks all the available fruit. My actual claims roughly are:

  • reality is somewhere in between

  • it is field-dependent

  • even in fields more toward the random end, there would still be differences among prospectors, like different speeds of travel

It is quite unclear to me where on this scale the relevant fields are.

I believe your conclusion, that the power-law distribution is due entirely to the properties of people's cognitive processes and not to the randomness of the field, is not supported by the scientometric data for many research fields.

Thanks for a good preemptive answer :) Yes, if you are good enough at identifying the "golden" cognitive processes. While it is clear you would do better than random chance, it is very unclear to me how much better you would be. (*)

I think it's worth digging into an example in detail: if you look at early Einstein, you actually see someone with unusually developed geometric thinking and the very lucky heuristic of interpreting what the equations say as actual reality. Famously, the special relativity transformations were first written down by Poincaré. "All" that needed to be done was to take them seriously. General relativity is a different story, but at that point Einstein was already famous, and possibly one of the few brave enough to attack the problem.

Continuing with the same example, I am extremely doubtful that Einstein would have been picked, before he became famous, by a selection process similar to what CEA or 80,000 Hours will probably be running. A second-grade patent clerk? Unimpressive. Well connected? No. Unusual geometric imagination? I'm not aware of any LessWrong sequence that would lead to picking this as important :) Lucky heuristic? Pure gold, in hindsight.

(*) In the end, you can treat this as an optimization problem whose solution depends on how good your superior-cognitive-process selection ability is. A practical example: you have 1000 applicants. If your selection ability is great, you should take 20 for individual support. But maybe it's merely good, and then you may get better expected utility by reaching 100 potentially great people in workshops. Maybe you are much better than chance, but not really good... then perhaps you should create an online course taking in 400 participants.
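A sketch of this trade-off as a simulation (the noise model, program capacities, and per-person multipliers are all made-up assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

n_applicants = 1000
true_value = rng.lognormal(mean=0.0, sigma=1.0, size=n_applicants)

# Hypothetical programs: (capacity, value multiplier per selected person).
# The multipliers encode the assumption that individual support helps each
# person more than a workshop, which helps more than an online course.
programs = {"individual support (20)": (20, 10.0),
            "workshops (100)": (100, 3.0),
            "online course (400)": (400, 1.0)}

def expected_utility(noise_sd, capacity, multiplier, n_trials=200):
    total = 0.0
    for _ in range(n_trials):
        # Selection ability = how noisily we observe log(true value).
        signal = np.log(true_value) + rng.normal(0.0, noise_sd, n_applicants)
        chosen = np.argsort(signal)[-capacity:]
        total += multiplier * true_value[chosen].sum()
    return total / n_trials

for noise_sd in [0.1, 1.0, 3.0]:  # great / good / barely better than chance
    best = max(programs, key=lambda k: expected_utility(noise_sd, *programs[k]))
    print(f"selection noise {noise_sd}: best option = {best}")
```

Under these assumptions, the noisier the selection, the more the optimum shifts from intensive support of a few people toward broad, shallow programs.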

Comment author: Benito 27 December 2017 12:27:32AM *  2 points [-]

Examples are totally worth digging into! Yeah, I actually find myself surprised and slightly confused by the situation with Einstein, and do make the active prediction that he had some strong connections in physics (e.g. at some point had a really great physics teacher who'd done some research). In general I think Ramanujan-like stories of geniuses appearing from nowhere are not the typical example of great thinkers / people who significantly change the world. If I'm right, I should be able to tell such stories about the others, and in general I do think that great people tend to get networked together, and that the thinking patterns of the greatest people are noticed by other good people before they do their seminal work: cf. Bell Labs (Shannon/Feynman/Turing etc.), the Paypal Mafia (Thiel/Musk/Hoffman/Nosek etc.), SL4 (Hanson/Bostrom/Yudkowsky/Legg etc.), and maybe the Republic of Letters during the Enlightenment? But I do want to spend more time digging into some of those.

To approach from the other end: what heuristics might I use to find people who in the future will create massive amounts of value that others miss? One example heuristic that Y Combinator uses to determine in advance who is likely to find novel, deep mines of value that others have missed is whether the individuals regularly build things to fix problems in their own lives (e.g. Zuckerberg built lots of simple online tools to help his fellow students study while at college).

Some heuristics I use to tell whether I think people are good at figuring out what's true, and at making plans based on it, include:

  • Does the person, in conversation, regularly take long silent pauses to organise their thoughts, find good analogies, analyse your argument, etc.? Many people I talk to treat silence as a significant cost, due to social awkwardness, and do not make the trade-off toward figuring out what's true. I always trust people more when they make these small trade-offs toward truth over social cost.
  • Does the person have a history of executing long-term plans that weren't incentivised by their local environment? Did they decide a personal project (not, like, getting a degree) was worth putting 2 years into, and then put 2 years into it?
  • When I ask about a non-standard belief they hold, can they give me a straightforward model, with a few variables and simple relations, that they use to understand the topic we're discussing? In general, how transparent are their models to themselves, and are the models generally simple and backed by lots of little pieces of concrete evidence?
  • Are they good at finding genuine insights in the thinking of people who they believe are totally wrong?

My general thought is that there isn't actually a lot of optimisation pressure put into this, especially in areas that don't have institutions built exactly around them. For example, academia will probably notice you if you're very skilled in one discipline and compete directly in it, but it's very hard to be noticed if you're interdisciplinary (e.g. Robin Hanson's book sitting between neuroscience and economics) or if you're not competing along even one or two of the dimensions it optimises for (e.g. MIRI researchers barely optimise for publishing at all, so when they make big breakthroughs in decision theory and logical induction it doesn't get them much notice from standard academia). So even our best institutions for noticing great thinkers with genuine and valuable insights seem to fail at some of the examples that matter most. I think there is a lot of low-hanging fruit I can pick up in terms of figuring out who thinks well and will be able to find and mine deep sources of value.


Edit: Removed Bostrom as an example at the end, because I can't figure out whether his success in academia, achieved while going through something of a non-standard path, is evidence for or against academia's ability to figure out whose cognitive processes are best at figuring out what's surprising+true+useful. I have the sense that he had to push against the standard incentive gradients a lot, but I might just be wrong, and Bostrom may be one of academia's success stories this generation. He doesn't look like he simply rose to the top of a well-defined field, though; it looks like he kept having to pick which topics were important and then find some route to publishing on them, rather than the other way round.

Comment author: Jan_Kulveit 26 December 2017 03:27:41PM 2 points [-]

I would be worried too. Homophily is one of the best predictors of links in social networks, and factors like being a member of the same social group, having similar education, similar opinions, etc. are known to bias selection processes further toward selecting similar people. This risks making the core of the movement even more self-encapsulated than it already is, which is a shift in a bad direction.

I would also be worried, with 80,000 Hours shifting more toward individual coaching, that there is now a bit of an overemphasis on the "individual" approach and too little on "creating systems".

Also, it seems a lot of this would benefit from knowledge from the "science of success", general scientometrics, network science, etc. E.g., when I read concepts like "the next Peter Singer", or a lot of thinking along the lines of "most of the value is created by just a few people", I'm worried. While such thinking is intuitively appealing, it can be quite superficial. Consider a toy model: imagine a landscape with gold scattered in power-law-sized deposits, and prospectors walking randomly, discovering deposits at random. What you observe is that the value of gold collected by prospectors is also power-law distributed. But obviously attempts to emulate "the best", or to find the "next best", would be futile. It seems an open question (worth studying) how much any specific knowledge landscape resembles this model, i.e. how big a part of success is attributable to luck.
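The toy model is easy to simulate (the power-law exponent and all sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

n_prospectors = 5_000
finds_each = 30  # every prospector stumbles on the same number of deposits

# Deposit sizes are power-law (Pareto) distributed; the prospectors are
# identical, so who strikes a huge deposit is pure luck.
deposit_sizes = 1.0 + rng.pareto(1.5, size=(n_prospectors, finds_each))

wealth = deposit_sizes.sum(axis=1)

# Collected wealth is itself heavy-tailed: the top 1% of these identical
# prospectors end up holding a disproportionate share of all gold found.
top_1pct = np.sort(wealth)[-n_prospectors // 100:].sum()
print("share of gold held by top 1% of prospectors:",
      round(top_1pct / wealth.sum(), 3))
```

Skill plays no role in this model, yet the outcome distribution alone looks exactly like what one might naively attribute to a few exceptional individuals.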

Comment author: Benito 26 December 2017 07:36:07PM 3 points [-]

That’s a nice toy model, thanks for being so clear :-)

But it’s definitely wrong. If you look at Bostrom on AI or Einstein on relativity or Feynman on quantum mechanics, you don’t see people who were roughly as competent as their peers, just lucky in which part of the research space was divvied up and given to them. You tend to see people with rare and useful thinking processes having multiple important insights about their field in succession, getting many things right that their peers didn’t, not just one as your model would predict (if being right were random luck). Bostrom looked into half a dozen sci-fi-looking areas that others overlooked, to figure out which were important, before concluding with x-risk and AI, and he looked into areas and asked questions that were on nobody’s radar. Feynman made breakthroughs in many different subfields, and his success looked like being very good at fundamentals, like being concrete and noticing his confusion. I know less about Einstein, but as I understand it, getting to relativity required a long chain of reasoning that was unclear to his contemporaries. “How would I design the universe if I were god” was probably not a standard tool handed out to many physicists to try.

You may respond: “sure, these people came up with lots of good ideas that their contemporaries wouldn’t have, but this was probably due to them using the right heuristics, which you can think of as having been handed out randomly in grad school to all the different researchers; so it’s still random, just at the level of cognitive processes”.

To this I’d say: you’re right that looking at people’s general cognitive processes is really important, but I think I can do much better than random chance at predicting which cognitive processes will produce valuable insights. I’ll point to Superforecasting and Rationality: From AI to Zombies as books with many insights into which cognitive processes are more likely than others to find novel and important truths.

In sum: I think the impact of the people who’ve had the most positive effect on history is power-law distributed because of their rare and valuable cognitive processes, not just random luck; those processes can be learned from, and that can guide my search for people who will (in future) have massive impact.

Comment author: Benito 22 December 2017 09:56:55PM 0 points [-]

I think it is quite plausible that £2m is too low for the year. Not having enough funding increases the costs to applicants (time spent applying) and to you (time spent assessing) relative to the benefits (funding moved), especially if there are applicants above the bar for funding whom you cannot afford to fund. Also, I had this thought before reading that one of your noted mistakes was having "underestimated the number of applications"; it feels like you might still be making this mistake.

I haven’t thought about this much, but a natural strategy is to start with a budget sufficiently large that you know you’ll definitely be able to fund all the good projects, and then binary-search down to the amount that funds only the good projects.
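A sketch of that search, where the oracle below stands in for actually running (or carefully forecasting) a funding round at a given budget; all numbers are hypothetical:

```python
def minimal_budget(round_is_sufficient, lo=0.0, hi=5_000_000.0, tol=10_000.0):
    """Binary search for the smallest budget that funds all good projects.

    round_is_sufficient(budget) should report whether a round run with
    that budget left no above-the-bar project unfunded. It must be
    monotone: a larger budget never turns a sufficient round insufficient.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if round_is_sufficient(mid):
            hi = mid  # everything good got funded; try a smaller budget
        else:
            lo = mid  # something good went unfunded; need more
    return hi

# Hypothetical example: the good projects happen to cost £1.7m in total.
print(round(minimal_budget(lambda budget: budget >= 1_700_000)))
```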

Comment author: Benito 22 December 2017 10:00:31PM *  2 points [-]

I’d also be interested to find out what happens if CEA announces they’re budgeting, say, £5m for this, to see whether any good projects appear when that much money is potentially available in the community. Naturally CEA needn’t give it all away.

(But right now I’d expect most of the best projects to be just 1-3 people’s full-time salaries for a small team working together, so each grant would be under £200k.)

Added: On the margin, I’d expect the most useful thing EA Grants could do would be to offer multi-year grants, so people in the community can consider major career changes based on what’s most effective rather than what’s most stable.

Comment author: weeatquince  (EA Profile) 22 December 2017 03:19:17PM *  1 point [-]

This is fantastic. Thank you for writing it up. Whilst reading I jotted down a number of thoughts, comments, questions and concerns.

.

ON EA GRANTS

I am very excited about this and very glad that CEA is doing more of this. How to best move funding to the projects that need it most within the EA community is a really important question that we have yet to solve. I saw a lot of people with some amazing ideas looking to apply for these grants.

1

"with an anticipated budget of around £2m"

I think it is quite plausible that £2m is too low for the year. Not having enough funding increases the costs to applicants (time spent applying) and to you (time spent assessing) relative to the benefits (funding moved), especially if there are applicants above the bar for funding whom you cannot afford to fund. Also, I had this thought before reading that one of your noted mistakes was having "underestimated the number of applications"; it feels like you might still be making this mistake.

2

"mostly evaluating the merits of the applicants themselves rather than their specific plans"

Interesting decision. Seems reasonable. However, I think it risks reducing diversity, and I would be concerned that applicants would be judged on their ability to philosophise in an academic Oxford manner, etc.

Best of luck with it

.

OTHER THOUGHTS

3

"encouraging more people to use Try Giving,"

Could CEA comment or provide advice to local group leaders on whether they would want local groups to promote the GWWC Pledge or the Try Giving pledge, and on when one might be better than the other? To date the advice seems to have been to push the Pledge as much as possible rather than Try Giving.

4

"... is likely to be the best way to help others."

I do not like the implication that there is a single answer to this question regardless of individuals' moral frameworks (utilitarian / non-utilitarian / religious / etc.) or skills and backgrounds. Where the mission is to have an impact as "a global community of people...", the research should focus on supporting those people to do whatever has the biggest impact given their positions.

5 Positives

"Self-sorting: People tend to interact with others who they perceive are similar to themselves"

This is a good thing to have picked up on.

"Community Health"

I am glad this is a team.

"CEA’s Mistakes"

I think it is good to have this written up.

6

"Impact review"

It would have been interesting to see estimates of the costs (time/money) as well as the outputs of each team.

.

WELL DONE FOR 2017. GOOD LUCK FOR 2018!


Comment author: RyanCarey 25 November 2017 02:34:00PM *  2 points [-]

£60k to revamp LessWrong doesn't sound like too much to me at all:

  • probably years of time were spent on design/coding/content-curation for LW1, right?
  • LW has dozens of features that aren't available off the shelf
  • Starting the EA forum took a couple months of time. Remaking LessWrong will involve more content/moderator work, more design, and an order of magnitude more coding.

So it could easily take 1-2 person-years.

Comment author: Benito 25 November 2017 05:34:38PM *  3 points [-]

I agree with Jess; I'd love to hear more about the decision-making. I think that the EA Grants programme has been the highest-impact thing CEA has done in the past 2-3 years, and I think it could be orders of magnitude more impactful if people could reliably expect to get funding for good projects. That would require that (a) it is done regularly, and (b) people can know the reasons CEA uses to decide which projects to fund.

Responding to why building online tools for intellectual progress takes multiple people's full-time jobs: the original reddit codebase that LW 1.0 forked from represented on the order of 4 years of 4 people's full-time work, so say at least 10 person-years of coding (we have so far had maybe 1 person-year of full-time coding work, and LW 2.0 has an entirely original codebase). While we were able to steal some of their insights (so we built a lot of the final product directly, without having to fail and rebuild multiple times), LW 2.0 is building a lot of original features, like an eigenkarma system, a sequences feature, and a bunch of other things that don't currently exist. We still have not built 50% of the features the site will have once we stop working on it.
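(For a sense of what an eigenkarma system involves: I haven't seen LW 2.0's actual design, but the name suggests the standard PageRank-style idea where karma granted by high-karma users counts for more. A minimal sketch under that assumption:)

```python
import numpy as np

def eigenkarma(votes, damping=0.85, iters=100):
    """Karma as the stationary weight of a random walk over the vote graph.

    votes[i][j] = number of upvotes user i gave user j. Because weight
    flows along upvotes, votes from high-karma users are worth more.
    This is a PageRank-style sketch, not LessWrong 2.0's actual algorithm.
    """
    M = np.array(votes, dtype=float)
    n = M.shape[0]
    row_sums = M.sum(axis=1, keepdims=True)
    # Normalise each user's outgoing votes; users who vote for nobody
    # spread their weight uniformly.
    M = np.divide(M, row_sums, out=np.full_like(M, 1.0 / n),
                  where=row_sums > 0)
    karma = np.full(n, 1.0 / n)
    for _ in range(iters):
        karma = (1 - damping) / n + damping * (M.T @ karma)
    return karma / karma.sum()

# Toy example: users 0 and 1 upvote user 2; user 2 upvotes user 3.
print(eigenkarma([[0, 0, 5, 0],
                  [0, 0, 3, 0],
                  [0, 0, 0, 2],
                  [0, 0, 0, 0]]))
```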

Then there's also content curation, new epistemic and content norms to set up (which takes time), user interviews with writers in the community, and a ton of other things. The strategic overview points to the sorts of directions in which we'll likely build things.

Comment author: Denkenberger 18 November 2017 06:42:38PM 0 points [-]

Wow - is there a paper to this effect? I would be surprised if it is that high for the technical fields.

Comment author: Benito 18 November 2017 07:32:27PM 1 point [-]

I haven't read Caplan's book, but I can imagine >50% of the math learned in a math course not being used in a technical career outside of research, and furthermore that the heuristics picked up in those courses don't generalise (e.g. geometry heuristics not applying to differential equations).

Comment author: Benito 01 November 2017 05:38:22AM 5 points [-]

For my own benefit I thought I'd write down examples of markets that I can see are inadequate yet inexploitable. I'm not sure all of these are actually true; some just fit the pattern.

  • I notice that most charities aren’t cost-effective, but if I decide to do better by making a super cost-effective charity, I shouldn’t expect to be more successful than the other charities.
  • I notice that most people at university aren’t trying to learn but to get good signals for their careers; I can’t easily do better in the job market by giving up on signalling and just learning better.
  • I notice most parenting-technique books aren't helpful (because genetics), but I probably can’t make money by selling a shorter book that covers only the parenting techniques that do matter.
  • If I notice that politicians aren’t trying to improve the country very much, I can’t get elected over them by just optimising for improving the country more (because they're optimising for being elected).
  • If most classical musicians spend a lot of money on high-status instruments and spend time with high-status teachers, where status doesn’t correlate with quality, you can’t be more successful by just picking high-quality instruments and teachers.
  • If most rocket companies are optimising for getting the most money out of government, you probably can’t win government contracts by just making a better rocket company. (?)
  • If I notice that nobody seems to be doing research on the survival of the human species, I probably can’t make it as an academic by making that my focus.
  • If I notice that most music recommendation sites review popular music highly (so that they get advance copies), I can’t have a more successful review site/magazine by just being honest about the music.

Correspondingly, if these models are true, here are groups/individuals about whom it would be a mistake to infer much from the fact that they're not doing well in these markets:

  • Just because a charity has a funding gap doesn't mean it's not very cost-effective
  • Just because someone has bad grades at university doesn't mean they are bad at learning their field
  • Just because a parenting book isn't selling well doesn't mean it isn't more useful than others
  • Just because a politician didn't get elected doesn't mean they wouldn't have made better decisions
  • Just because a rocket company doesn't get a government contract doesn't mean it isn't better at building safe and cheap rockets than other companies
  • Just because an academic is low status / outside academia doesn't mean their views aren't true
  • Just because a band isn't highly reviewed in major publications doesn't mean it isn't innovative/great

Some of these seem stronger to me than others. I tend to think that academic fields are more adequate at finding truth and useful knowledge than music critics are at figuring out which bands are good.

Comment author: Pablo_Stafforini 31 October 2017 08:39:17PM *  0 points [-]

why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth?

Maybe because these people have been surprisingly accurate? In addition, it's not that Eliezer disputes that general presumption: he routinely relies on results in the natural and social sciences without feeling the need to justify in each case why we should trust e.g. computer scientists, economists, neuroscientists, game theorists, and so on.

Comment author: Benito 31 October 2017 09:18:06PM 0 points [-]

Yeah, that’s the sort of discussion that seems to me most relevant.

Comment author: Pablo_Stafforini 31 October 2017 02:46:42PM *  2 points [-]

A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I'm here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn't itself take a position on the issue by relying primarily on the inside view.

Let’s consider a concrete example. When challenged to justify his extremely high confidence in MWI, despite the absence of a strong consensus among physicists, Eliezer tells people to “read the QM sequence”. But suppose I read the sequence and become persuaded. So what? Physicists are just as divided now as they were before I raised the challenge. By hypothesis, Eliezer was unjustified in being so confident in MWI despite the fact that it seemed to him that this interpretation was correct, because the relevant experts did not share that subjective impression. If upon reading the sequence I come to agree with Eliezer, that just puts me in the same epistemic predicament as Eliezer was in originally: just like him, I too need to justify the decision to rely on my own impressions instead of deferring to expert opinion.

To persuade me, Greg, and other skeptics, what Eliezer needs to do is persuade the physicists. Short of that, he could persuade a small random sample of members of this expert class. If, upon being exposed to the relevant sequence, a representative group of quantum physicists changed their views significantly in Eliezer’s direction, this would be good evidence that the larger population of physicists would update similarly after reading those writings. Has Eliezer tried to do this?

ETA: I just realized that the kind of challenge I'm raising here has been carried out, in the form of a "natural experiment", for Eliezer's views on decision theory. Years ago, David Chalmers spontaneously sent half a dozen leading decision theorists copies of Eliezer's TDT paper. If memory serves, Chalmers reported that none of these experts had been impressed (let alone persuaded).

Comment author: Benito 31 October 2017 07:37:44PM 3 points [-]

A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I'm here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn't itself take a position on the issue by relying primarily on the inside view.

This seems correct. I just noticed you could phrase this the other way around: why in general should we presume that groups of people with academic qualifications have their strongest incentives towards truth? I agree that this disagreement will come down to building detailed models of incentives in human organisations more than to building inside views of each field (which is why I didn't find Greg's post particularly persuasive: this isn't a matter of discussing rational Bayesian agents, but of discussing the empirical incentive landscape we are in).
