Comment author: Richenda  (EA Profile) 15 March 2018 07:30:47PM 4 points [-]

Thanks for sharing this. I'm looking forward to the second part!

Reflections like this are amazingly valuable for the movement building community. I'm especially interested in how you factored in the local context in order to choose the best strategy for EA in the Czech Republic. I also totally agree that it's great to hear perspectives that come from outside of the Oxbridge/Silicon Valley bubble - and even the anglophone bubble.

A lot of people are grappling with the issue of how to adapt EA for non-English communities. I'll be sharing this report with those who approach LEAN with these challenges.

Comment author: Jan_Kulveit 16 March 2018 01:03:50AM *  0 points [-]

Thanks! I think the algorithm can be explained something like this:

1] take the most up-to-date reasoning and conclusions from central places ... like the prioritization outputs of CEA, FHI, CSER, GiveWell, OpenPhil, etc.

2] try to think about what the local differences are, and which of them would most influence the outcomes.

3] think in terms of comparative advantages and disadvantages (e.g., if you are in Geneva, you have access to all those world organizations and the people working in them ... so you have an advantage in policy)

4] think in terms of the movement optimizing as a unified organization, like "if there were an EA Inc., what units would it create here" (sometimes it's possible to learn from actual multinationals)

Possible caveat... doing this may be much less work than creating the original models, but it may require a good understanding of those models, and a good grasp of concepts like parameter sensitivity.

We're starting a prioritization project attempting to quantitatively model this (factoring in local considerations). It will also probably try to model the important question of "stay or move".

Comment author: Ben_Todd 14 March 2018 09:07:09PM 1 point [-]

Hi Jan,

It's a useful case study. However, two quick responses:

1) To some extent you were following the suggested approach, because you only pushed ahead having already built a core of native speakers who had been involved in the past with English-language materials (e.g. Ales Flidr was head of EA Harvard; a core of LW people helped to start it).

You also mention how doing things like meeting CFAR and attending EAGxOxford were very useful in building the group. This suggests to me that doing even more to build expertise and connections with the core English-speaking EA community before pushing ahead with Czech outreach might have led to even better results.

I also guess that most of the group can read English language materials? If so, that makes the situation much easier. As I say, the less the distance, the weaker the arguments for waiting.

2) You don't directly address my main concern. I'm suggesting that trying to spread EA in new languages and cultures without laying the groundwork could lead to a suboptimal version of EA being locked in for the new audience. However, in your report, you don't directly respond to this concern.

You do give some evidence of short-term impact, which suggests the benefits outweighed the opportunity costs. But I'd also want to look at questions like: (i) how accurately was EA portrayed in your press coverage? (ii) how well do people getting involved in the group understand EA? (iii) might you have put off important groups in ways that could have been avoided?

Comment author: Jan_Kulveit 16 March 2018 12:39:43AM *  1 point [-]

Hi Ben,

1) I understand your concerns.

On the other hand, I'm not sure you are taking the difficulties into account:

  • e.g. going to EAG could require something like "heroic effort". If my academic job were my only source of income, going to EAGxOxford would have meant spending more than my monthly disposable income on it (even though the EAGxOxford organizers were great in offering me a big reduction in fees)

  • I'm not sure you are correctly modelling the barriers to building connections to the core of the community(*). Here is my guess at some of the problems people from different countries or cultures may run into when trying to make connections, e.g. by applying for internships, help, conferences, etc.

1) People unconsciously take language sophistication as a proxy for intelligence. By not being proficient in English, you lose points.

2) People are evaluated on proxies like "attending a prestigious university". Universities outside of the US or UK are generally discounted as regional.

3) People are often in fact evaluated based on social network distance; this leads to "rich-get-richer" dynamics.

4) People are evaluated based on previous "EA achievements", which are easier to achieve in places where EA is already present.

(*) You may object that e.g. Ales Flidr is a good counterexample, but Harvard alumni are typically relatively "delocalized" and in demand everywhere, and may perceive greater value in working in the core than in spreading ideas from the core to emerging local groups. (Prestige and other incentives also point that way.)

One risk I see in your article is that it may influence the people who would be best at mitigating the risk of "wrong local EA movements" being founded to not work on this at all.

I don't think "the barriers" should be zero, as such barriers in a way help select for motivated people. It's just my impression that they may be higher than they appear from the inside. Asking people to first build connections, while building such connections is not systematically supported, may make the barriers higher than optimal.

Btw, yes, the core of the group can read English materials. It could also do research in machine learning, found a startup, work in quantitative finance, in part get elected to parliament, move out of the country, and more. What I want to point out is that if you imagine the kind of people you would want working on the "translation" of EA into a new culture, they are likely paying huge opportunity costs in doing so. It may be hard to keep them in some state of waiting and building connections.

In our case, it seems counterfactually plausible that "waiting even more" could also have led to the group not working, or to a worse organization being created, as the core people would lose motivation or pursue other opportunities.

2) In counting the long-term impact and the lock-in effect, you should consider the chance that movements in new languages and cultures develop, in some respects, better versions of effective altruism, and that beneficial mutations can then spread, even back to the core. More countries may mean more experiments running and faster evolution of better versions of EA. It's unclear to me how these terms (lock-in, more optimization power) add up, but both should be counted. One possible resolution may be to prioritize founding movements in smaller countries, where you gain experience but the damage is limited.

To your questions

i] With one exception, reasonably well, from the viewpoint of strategic communication (communicating easily communicable concepts, e.g. impact and effectiveness). I don't think the damage is great, and part of the misconception is unavoidable given the name "effective altruism".

ii] It has a distribution... IMO understanding at the moment is highly correlated with engagement in the group, which may be more important than criteria like "average understanding" or "public understanding".

iii] Yes, it's complicated, depends on some misalignments, and I don't want to discuss it publicly.

Comment author: Jan_Kulveit 13 March 2018 08:15:29AM *  4 points [-]

I just published a short history of creating the effective altruism movement in the Czech Republic, and I think it is highly relevant to this discussion.

Compared to Ben's conclusions, I would use it as a data point showing:

  • it can be done

  • it may not be worth delaying

  • there are intermediate forms of communication in between "mass outreach" and "person-to-person outreach"

  • you should consider a more complex model of communication than just personal vs. mass media: specifically, a viable model in a new country could be something like "a very short message in mass media, a few articles translated into the national language to lower the barrier and point in the right direction, and a much larger amount transmitted via conferences and similar channels"

Putting too much weight on "person-to-person" interaction runs into the problem that you are less likely to find the right persons (consider how such connections get created).

Btw, it seems to me the way e.g. 80,000 Hours and CEA work is inadequate for creating the required personal connections in new countries, so it's questionable whether it makes sense to focus on that.

(I completely agree China is extremely difficult, but I don't think China should be considered a typical example - considering mentality, it's possibly one of the most remote countries from a European POV.)

Comment author: Dunja 28 February 2018 02:04:25PM *  3 points [-]

Hi Jan, I am aware of the fact that the "publish or perish" environment may be problematic (and that MIRI isn't very fond of it), but we should distinguish between publishing as many papers as possible and publishing at least some papers in high-impact journals.

Now, if we don't want to base our assessment of effectiveness and efficiency on any publications, then we need something else. So what would be these different criteria you mention? How do we assess whether a research project is effective? And how do we assess whether the project has proven effective over the course of time?

Comment author: Jan_Kulveit 28 February 2018 02:54:25PM 1 point [-]

What I would do when evaluating a potentially high-impact, high-uncertainty "moonshot-type" research project would be to ask some trusted, highly knowledgeable researcher to assess it. I would not evaluate publication output, but whether the effort looks sensible, whether the people working on it are good, and whether some progress is being made (even if in discovering things which do not work).

Comment author: Dunja 28 February 2018 11:17:05AM *  1 point [-]

Thanks for the comment, Gregory! I must say though that I don't agree with you that conference presentations are significantly more important than journal publications in the field of AI (or the humanities, for that matter). We could discuss this in terms of personal experiences, but I'd go for a more objective criterion: effectiveness in terms of citations. Only once your research results are published in a peer-reviewed journal (including peer-reviewed conference proceedings) can other scholars in the field take them as a (minimally) reliable source for further research that builds on them. By the way, many prestigious AI conferences actually come with peer-reviewed proceedings (take e.g. AAAI or IJCAI), so you can't even present at the conference without submitting a paper.

Again, MIRI might be doing excellent work. All I am asking is: by which criteria can we judge this to be the case? What are the criteria of assessment - which the EA community finds extremely important when it comes to assessing charities, and which I think we should find just as important when it comes to the funding of scientific research?

Comment author: Jan_Kulveit 28 February 2018 01:11:40PM 1 point [-]

I think what is lacking in your understanding is part of MIRI's intellectual DNA.

In it you can find a lot of Eliezer Yudkowsky's thought - I would recommend reading his latest book, "Inadequate Equilibria", where he explains some of the reasons why the normal research environment may be inadequate in some respects.

MIRI is explicitly founded on the premise of freeing some people from the "publish or perish" pressure, which severely limits what people in normal academia work on and care about. If you give enough probability to this approach being worth taking, it does make sense to base decisions on funding MIRI on different criteria.

Comment author: Jan_Kulveit 24 February 2018 01:07:19AM *  2 points [-]

Thanks for writing it.

Here are my reasons for the belief that the wild animal/small minds/... suffering agenda is based mostly on errors and uncertainties. Some of the uncertainties should warrant research effort, but I do not believe the current state of knowledge justifies prioritization of any kind of advocacy or value spreading.

1] The endeavour seems to be based on extrapolating intuitive models far outside the scope for which we have data. The whole suffering calculus rests on extrapolating the concept of suffering far away from the domain for which we have data from human experience.

2] A big part of it seems arbitrary. When expanding the moral circle toward small computational processes and simple systems, why not expand it toward large computational processes and complex systems? E.g. we can think about DNA-based evolution as a large computational/optimization process - suddenly "wild animal suffering" has a purpose, and traditional environment and biodiversity protection efforts make sense.

(Similarly, we could argue that much of "human utility" is in the larger system structure above individual humans.)

3] We do not know how to measure and aggregate the utility of mind states. Like, we really don't know. E.g. it seems to me completely plausible that the utility of 10 people reaching some highly joyful mind states is the dominant contribution across all human and animal minds.

4] Part of the reasoning usually seems contradictory. If human cognitive processes are in the privileged position of creating meaning in this universe ... well, then they are in the privileged position, and there is a categorical difference between humans and other minds. If they are not in the privileged position, how come humans should impose their ideas about meaning on other agents?

5] MCE efforts directed toward AI researchers with the intent of influencing the values of some powerful AI may increase x-risk. E.g. if the AI is not "speciesist" and gives the same weight to satisfying the preferences of all humans and all chickens, the chickens would outnumber the humans.

Comment author: Jan_Kulveit 30 January 2018 02:55:26PM 0 points [-]

As many others have expressed, the concern with systemic changes is that you are often dealing with complex, poorly understood systems.

Let's take the example of the EU's Common Agricultural Policy. It is most likely evil toward the world's poor, but it's not clear, for example, whether it works toward or against EU unity. It is plausible that it is somehow important for EU unity, either by being a form of fiscal transfer or a way to corrupt an important voter bloc... So we should include the possible political consequences in the utility calculation, and the problem becomes really tricky.

On the other hand, I agree systemic interventions are worth considering, and e.g. a change in drug policy seems to be an excellent candidate, as the action has been tested and we understand most of the consequences.

Comment author: Carl_Shulman 03 January 2018 03:17:15AM *  1 point [-]

Here is 80k's mea culpa on replaceability.

Comment author: Jan_Kulveit 03 January 2018 09:54:41AM 1 point [-]

Sure: first 80k thought your counterfactual impact is "often negligible" due to replaceability, then they changed their position toward replaceability being "very uncertain" in general. I don't think you can just remove it from the model completely.

I also don't think that in the particular case of central EA organizations hiring, the uncertainty is as big as it is in general. I'm uncertain about this, but my vague impression was that there is usually a selection of good candidates to choose from when they are hiring.

Comment author: Jan_Kulveit 01 January 2018 10:43:37PM *  2 points [-]

After thinking about it for a while, I'm still a bit puzzled by the rated-100 or rated-1000 plan changes, and their expressed value in donor dollars. What exactly is the counterfactual here? As I read it, it seems based just on comparing against "the person not changing their career path". However, with some of the examples of the most valued changes leading to people landing in EA organizations, it seems the counterfactual state "of the world" would be "someone else doing similar work in a central EA organization". As AFAIK the recruitment process for positions at places like central EA organizations is competitive, why not count as the real impact just the marginal improvement of the 80,000 Hours-influenced candidate over the next-best candidate?
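To make the replaceability point concrete, here is a minimal numeric sketch. All function names and numbers are hypothetical, invented purely for illustration:

```python
# Hypothetical illustration of the replaceability argument: if a
# competitive role would have been filled anyway, the counterfactual
# impact of a plan change is the *marginal* value over the next-best
# candidate, not the full value of the role.

def naive_impact(role_value):
    """Counterfactual: the role would have gone unfilled."""
    return role_value

def replaceability_adjusted_impact(candidate_value, next_best_value):
    """Counterfactual: the next-best applicant is hired instead."""
    return candidate_value - next_best_value

role_value = 100.0      # donor-dollar value of the candidate in the role (made up)
next_best_value = 85.0  # value if the runner-up had been hired (made up)

print(naive_impact(role_value))                                     # 100.0
print(replaceability_adjusted_impact(role_value, next_best_value))  # 15.0
```

The gap between the two numbers is exactly what the uncertainty about replaceability leaves open.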

Another question is: how do you estimate your uncertainty when rating something rated-n?

Comment author: Benito 27 December 2017 12:27:32AM *  2 points [-]

Examples are totally worth digging into! Yeah, I actually find myself surprised and slightly confused by the situation with Einstein, and do make the active prediction that he had some strong connections in physics (e.g. at some point he had a really great physics teacher who'd done some research). In general I think Ramanujan-like stories of geniuses appearing from nowhere are not the typical example of great thinkers / people who significantly change the world. If I'm right, I should be able to tell such stories about the others, and in general I do think that great people tend to get networked together, and that the thinking patterns of the greatest people are noticed by other good people before they do their seminal work, cf. Bell Labs (Shannon/Feynman/Turing etc.), Paypal Mafia (Thiel/Musk/Hoffman/Nosek etc.), SL4 (Hanson/Bostrom/Yudkowsky/Legg etc.), and maybe the Republic of Letters during the Enlightenment? But I do want to spend more time digging into some of those.

To approach from the other end, what heuristics might I use to find people who in the future will create massive amounts of value that others miss? One example heuristic that Y Combinator uses to determine in advance who is likely to find novel, deep mines of value that others have missed is whether the individuals regularly build things to fix problems in their lives (e.g. Zuckerberg built lots of simple online tools to help his fellow students study while at college).

Some heuristics I use to tell whether I think people are good at figuring out what's true, and make plans for it, include:

  • Does the person, in conversation, regularly take long silent pauses to organise their thoughts, find good analogies, analyse your argument, etc.? Many people I talk to treat silence as a significant cost, due to social awkwardness, and do not make the trade-off toward figuring out what's true. I trust more the people I talk to who make these small trade-offs toward truth over social cost.
  • Does the person have a history of executing long-term plans that weren't incentivised by their local environment? Did they decide a personal-project (not, like, getting a degree) was worth putting 2 years into, and then put 2 years into it?
  • When I ask about a non-standard belief they have, can they give me a straightforward model with a few variables and simple relations that they use to understand the topic we're discussing? In general, how transparent are their models to themselves, and are the models generally simple and backed by lots of little pieces of concrete evidence?
  • Are they good at finding genuine insights in the thinking of people who they believe are totally wrong?

My general thought is that there isn't actually a lot of optimisation process put into this, especially in areas that don't have institutions built around them exactly. For example academia will probably notice you if you're very skilled in one discipline and compete directly in it, but it's very hard to be noticed if you're interdisciplinary (e.g. Robin Hanson's book sitting between neuroscience and economics) or if you're not competing along even just one or two of the dimensions it optimises for (e.g. MIRI researchers don't optimise for publishing basically at all, so when they make big breakthroughs in decision theory and logical induction it doesn't get them much notice from standard academia). So even our best institutions at noticing great thinkers with genuine and valuable insights seem to fail at some of the examples that seem most important. I think there is lots of low hanging fruit I can pick up in terms of figuring out who thinks well and will be able to find and mine deep sources of value.

Edit: Removed Bostrom as an example at the end, because I can't figure out whether his success in academia, while nonetheless going through something of a non-standard path, is evidence for or against academia's ability to figure out whose cognitive processes are best at figuring out what's surprising+true+useful. I have the sense that he had to push against the standard incentive gradients a lot, but I might just be wrong and Bostrom is one of academia's success stories this generation. He doesn't look like he just rose to the top of a well-defined field, though; it looks like he kept having to pick which topics were important and then find some route to publishing on them, as opposed to the other way round.

Comment author: Jan_Kulveit 28 December 2017 11:58:17AM 1 point [-]

For scientific publishing, I looked into the latest available paper[1], and apparently the data are best fitted by a model where the impact of a scientific paper is predicted by Q·p, where p is the "intrinsic value" of the project and Q is a parameter capturing the cognitive ability of the researcher. Notably, Q is independent of the total number of papers written by the scientist, and Q and p are also independent of each other. Translating into the language of digging for gold: the prospectors differ in their speed and ability to extract gold from the deposits (Q), while the gold in the deposits is actually randomly distributed. To extract exceptional value, you have to both have a high Q and be very lucky. What is encouraging for selecting talent is that Q seems relatively stable over a career and can be usefully estimated after ~20 publications. I would guess you can predict even with less data, but the correct "formula" would try to disentangle the interestingness of the problems the person is working on from the interestingness of the results.

(As a side note, I was wrong in guessing this is strongly field-dependent, as the model seems stable across several disciplines, time periods, and many other parameters.)
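The Q-model described above can be sketched as a small simulation. The distributions and parameter values below are illustrative assumptions, not fitted values from the paper:

```python
import random
import statistics

random.seed(0)

# Sketch of the Q-model: paper impact = Q * p, where Q is a stable
# per-scientist factor and p is per-project "luck", drawn independently
# of Q and of the number of papers written.

def career(q, n_papers=20):
    """Simulated impacts of one scientist's papers under the Q-model."""
    return [q * random.lognormvariate(0, 1) for _ in range(n_papers)]

papers_low = career(q=1.0)   # ordinary prospector
papers_high = career(q=3.0)  # skilled prospector, same number of papers

# Because log(impact) = log Q + log p, the geometric mean of a
# scientist's paper impacts estimates Q - which is why ~20
# publications already give a usable estimate.
def estimate_q(impacts):
    return statistics.geometric_mean(impacts)

print(estimate_q(papers_low))   # roughly recovers q=1.0 (up to luck)
print(estimate_q(papers_high))  # roughly recovers q=3.0 (up to luck)
```

The simulation also shows the "high Q and very lucky" point: the single highest-impact paper almost always belongs to the high-Q scientist, but which of their papers it is remains random.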

Interesting heuristics about people :)

I agree the problem is somewhat different in areas that are not that established/institutionalized, where you don't have clear dimensions of competition, or where the well-measurable dimensions are not well aligned with what is important. Looks like another understudied area.

[1] R. Sinatra et al., "Quantifying the evolution of individual scientific impact", Science (2016)
