Optimal level of hierarchy for effective altruism

Hierarchical modular network structure is an organizing principle found in many complex systems, ranging from ecosystems to the human brain, and of course it is commonly seen in social networks. By "modularity" we mean, technically speaking, that there are parts of a network that are more densely connected than... Read More
Comment author: MichaelPlant 20 March 2018 09:50:23PM *  1 point [-]

The thing I find confusing about what Will says is

effective altruism is the project of using evidence and reason to figure out how to benefit others

I draw attention to 'benefit others'. Two of EA's main causes are farm animal welfare and reducing risks of human extinction. The former is about causing happy animals to exist rather than miserable ones, and the latter is about ensuring future humans exist (and trying to improve their welfare). But it doesn't really make sense to say that you can benefit someone by causing them to exist. It's certainly bizarre to say it's better for someone to exist than not to exist, because if the person doesn't exist there's no object to attach any predicates to. There's been a recent move by some philosophers, such as McMahan and Parfit, to say it can be good (without being better) for someone to exist, but that just seems like philosophical sleight of hand.

A great many EA philosophers, including I think Singer, MacAskill, Greaves and Ord, either are totalists or are very sympathetic to totalism. Totalism is the view that the best outcome is the one with the largest sum of lifetime well-being of all people - past, present and future - and it's known as an impersonal view in population ethics. Outcomes are not deemed good, on impersonal views, because they are good for anyone, or because they benefit anyone; they are good because there is more of the thing which is valuable, namely welfare.

So there's something fishy about saying EA is trying to benefit others when many EA activities, as mentioned, don't benefit anyone, and many EAs think we shouldn't, strictly speaking, be trying to benefit people so much as realising more impersonal value. It would make more sense to replace 'benefit others as much as possible' with 'do as much good as possible'.

Comment author: Jan_Kulveit 20 March 2018 11:29:09PM 0 points [-]

I agree it may seem to point toward some "person-affecting views" which many EAs consider to be wrong.

Possibly the aim was to convey that the motivation is altruistic?

The disadvantage of 'do as much good as possible' may be that it would associate EA with utilitarianism even more than it already is.

I think of EA as a movement trying to answer the question "how to change the world for the better most effectively with limited resources" in a rational way, and to act on the answer. This seems to me a tiny bit more open than 'do as much good as possible', as it requires just some sort of comparison of world-states, while 'as much good as possible' seems to depend on more complex structure.

Comment author: Richenda  (EA Profile) 15 March 2018 07:30:47PM 4 points [-]

Thanks for sharing this. I'm looking forward to the second part!

Reflections like this are amazingly valuable for the movement building community. I'm especially interested in how you factored in the local context in order to choose the best strategy for EA in the Czech Republic. I also totally agree that it's great to hear perspectives that come from outside of the Oxbridge/Silicon Valley bubble - and even the anglophone bubble.

A lot of people are grappling with the issue of how to adapt EA for non-English communities. I'll be sharing this report with everyone who approaches LEAN with these challenges.

Comment author: Jan_Kulveit 16 March 2018 01:03:50AM *  2 points [-]

Thanks! I think the algorithm may be explained something like this:

1] take the most up-to-date reasoning and conclusions from central places ... like the prioritization outputs of CEA, FHI, CSER, GiveWell, OpenPhil, etc.

2] try to think about what the local differences are, and which of them would most influence the outcomes.

3] think in terms of comparative advantages and disadvantages (e.g., if you are in Geneva, you have access to all the international organizations and the people working in them ... so you have an advantage in policy)

4] think in terms of the movement optimizing as a unified organization, like "if there were an EA Inc., what units would it create here?" (sometimes it's possible to learn from actual multinationals)

Possible caveat... doing this may be much less work than creating the original models, but it may require a good understanding of the models and a good grasp of concepts like parameter sensitivity.

We're starting a prioritization project attempting to model this quantitatively (factoring in local considerations). It will probably also try to model the important question of "stay or move".

Comment author: Ben_Todd 14 March 2018 09:07:09PM 2 points [-]

Hi Jan,

It's a useful case study. However, two quick responses:

1) To some extent you were following the suggested approach, because you only pushed ahead after having already built a core of native speakers who had previously been involved with English-language materials (e.g. Ales Flidr was head of EA Harvard; a core of LW people helped to start it).

You also mention how doing things like meeting CFAR and attending EAGxOxford were very useful in building the group. This suggests to me that doing even more to build expertise and connections with the core English-speaking EA community before pushing ahead with Czech outreach might have led to even better results.

I also guess that most of the group can read English language materials? If so, that makes the situation much easier. As I say, the less the distance, the weaker the arguments for waiting.

2) You don't directly address my main concern: that trying to spread EA in new languages and cultures without laying the groundwork could lead to a suboptimal version of EA being locked in for the new audience.

You do give some evidence of short-term impact, which is evidence that benefits outweighed the opportunity costs. But I'd also want to look at questions like: (i) how accurately was EA portrayed in your press coverage? (ii) how well do people getting involved in the group understand EA? (iii) might you have put off important groups in ways that could have been avoided?

Comment author: Jan_Kulveit 16 March 2018 12:39:43AM *  2 points [-]

Hi Ben,

1) I understand your concerns.

On the other hand, I'm not sure you are taking the difficulties into account:

  • e.g. going to EAG could require something like "heroic effort". If my academic job were my only source of income, going to EAGxOxford would have meant spending more than my monthly disposable income on it (even though the EAGxOxford organizers were great in offering me a big reduction in fees)

  • I'm not sure you are correctly modelling the barriers to building connections with the core of the community(*). Here is my guess at some problems people from different countries or cultures may run into when trying to make connections, e.g. by applying for internships, help, conferences, etc.

1) People unconsciously take language sophistication as a proxy for intelligence. By not being proficient in English you lose points.

2) People are evaluated on proxies like "attending a prestigious university". Universities outside of the US or UK are generally discounted as regional.

3) People are often in fact evaluated based on social network distance; this leads to "rich get richer" dynamics.

4) People are evaluated based on previous "EA achievements", which are easier to achieve in places where EA is already present.

(*) You may object that e.g. Ales Flidr is a good counterexample, but Harvard alumni are typically relatively "delocalized", in demand everywhere, and may perceive greater value in working in the core than in spreading ideas from the core to emerging local groups. (Prestige and other incentives also point that way.)

One risk I see in your article is that it may influence the people who would be best at mitigating the risk of "wrong local EA movements" being founded to not work on this at all.

I don't think "the barriers" should be zero, as such barriers help select for motivated people. It's just my impression that they may be higher than they appear from the inside. Asking people to first build connections, while the building of such connections is not systematically supported, may make the barriers higher than optimal.

Btw, yes, the core of the group can read English materials. It could also do research in machine learning, found a startup, work in quantitative finance, in part get elected to parliament, move out of the country, and more. What I want to point at: if you imagine the members of a group working on the kind of "translation" of EA into a new culture that you would like, they are likely paying huge opportunity costs in doing so. It may be hard to keep them in some state of waiting and building connections.

In our case, it seems counterfactually plausible that "waiting even more" could also have led to the group not working, or to a worse organization being created, as the core people would lose motivation or pursue other opportunities.

2) In counting the long-term impact and the lock-in effect, you should consider the chance that movements in new languages and cultures develop, in some respects, better versions of effective altruism, and that beneficial mutations can then spread, even back to the core. More countries may mean more experiments running and faster evolution of better versions of EA. It's unclear to me how these terms (lock-in, more optimization power) add up, but both should be counted. One possible resolution may be to prioritize founding movements in smaller countries, where you gain experience but the damage is limited.

To your questions:

i] With one exception, reasonably well from the viewpoint of strategic communication (communicating easily communicable concepts, e.g. impact and effectiveness). I don't think the damage is great, and part of the misconception is unavoidable given the name "effective altruism".

ii] It has a distribution... IMO understanding is at the moment highly correlated with engagement in the group, which may be more important than criteria like "average understanding" or "public understanding".

iii] Yes, it's complicated, depends on some misalignments, and I don't want to discuss it publicly.

Comment author: Jan_Kulveit 13 March 2018 08:15:29AM *  7 points [-]

I just published a short history of creating the effective altruism movement in the Czech Republic http://effective-altruism.com/ea/1ls/introducing_czech_association_for_effective/ and I think it is highly relevant to this discussion.

Compared to Ben's conclusions, I would use it as a data-point showing:

  • it can be done

  • it may not be worth delaying

  • there are intermediate forms of communication between "mass outreach" and "person-to-person outreach"

  • you should consider a more complex model of communication than just "personal vs. mass media": specifically, a viable model in a new country could be something like "a very short message in mass media, a few articles translated into the national language to lower the barrier and point in the right direction, and a much larger amount transmitted via conferences and similar channels"

Putting too much weight on "person to person" interaction runs into the problem that you are less likely to find the right people (consider how such connections may be created).

Btw, it seems to me that the way e.g. 80,000 Hours and CEA work is inadequate for creating the required personal connections in new countries, so it's questionable whether it makes sense to focus on that.

(I completely agree China is extremely difficult, but I don't think China should be considered a typical example - in terms of mentality it's possibly one of the most distant countries from a European POV.)


Introducing Czech Association for Effective Altruism - history

This is the first part of a two-part post about effective altruism in the Czech Republic, describing its history and reflections on what worked. I think it can be useful - to share experiences with building the EA community in a country where it did not exist (even if it... Read More
Comment author: Dunja 28 February 2018 02:04:25PM *  3 points [-]

Hi Jan, I am aware that the "publish or perish" environment may be problematic (and that MIRI isn't very fond of it), but we should distinguish between publishing as many papers as possible and publishing at least some papers in high-impact journals.

Now, if we don't want to base our assessment of effectiveness and efficiency on any publications, then we need something else. So what would these different criteria you mention be? How do we assess a research project as effective? And how do we assess that the project has proven effective over the course of time?

Comment author: Jan_Kulveit 28 February 2018 02:54:25PM 1 point [-]

When evaluating a potentially high-impact, high-uncertainty "moonshot type" research project, I would ask some trusted, highly knowledgeable researcher to assess the thing. I would not evaluate publication output, but whether the effort looks sensible, the people working on it are good, and some progress is being made (even if in discovering things which do not work).

Comment author: Dunja 28 February 2018 11:17:05AM *  1 point [-]

Thanks for the comment, Gregory! I must say, though, that I don't agree with you that conference presentations are significantly more important than journal publications in the field of AI (or the humanities, for that matter). We could discuss this in terms of personal experiences, but I'd go for a more objective criterion: effectiveness in terms of citations. Only once your research results are published in a peer-reviewed journal (including peer-reviewed conference proceedings) can other scholars in the field take them as a (minimally) reliable source for further research that would build on them. By the way, many prestigious AI conferences actually come with peer-reviewed proceedings (take e.g. AAAI or IJCAI), so you can't even present at the conference without submitting a paper.

Again, MIRI might be doing excellent work. All I am asking is: by which criteria can we judge this to be the case? What are the criteria of assessment, which the EA community finds extremely important when it comes to assessing charities, and which I think we should find just as important when it comes to funding scientific research?

Comment author: Jan_Kulveit 28 February 2018 01:11:40PM 1 point [-]

I think the part lacking in your understanding is MIRI's intellectual DNA.

In it you can find a lot of Eliezer Yudkowsky's thought - I would recommend reading his latest book, "Inadequate Equilibria", where he explains some of the reasons why the normal research environment may be inadequate in some respects.

MIRI is explicitly founded on the premise of freeing some people from the "publish or perish" pressure, which severely limits what people in normal academia work on and care about. If you assign enough probability to this approach being worth taking, it does make sense to base decisions on funding MIRI on different criteria.

Comment author: Jan_Kulveit 24 February 2018 01:07:19AM *  2 points [-]

Thanks for writing it.

Here are my reasons for believing the wild animal/small minds/... suffering agenda is based mostly on errors and uncertainties. Some of the uncertainties should warrant research effort, but I do not believe the current state of knowledge justifies prioritizing any kind of advocacy or value spreading.

1] The endeavour seems to be based on extrapolating intuitive models far outside the scope for which we have data. The whole suffering calculus extrapolates the concept of suffering far away from the domain of human experience for which we have data.

2] A big part of it seems arbitrary. When expanding the moral circle toward small computational processes and simple systems, why not expand it toward large computational processes and complex systems? E.g. we can think of DNA-based evolution as a large computational/optimization process - suddenly "wild animal suffering" has a purpose, and traditional environment and biodiversity protection efforts make sense.

(Similarly, we could argue that much "human utility" resides in the larger system structure above individual humans.)

3] We do not know how to measure and aggregate the utility of mind states. Like, we really don't know. E.g. it seems to me completely plausible that the utility of 10 people reaching some highly joyful mind-states is the dominant contribution over all human and animal minds.

4] Part of the reasoning usually seems contradictory. If human cognitive processes are in the privileged position of creating meaning in this universe... well, then they are in the privileged position, and there is a categorical difference between humans and other minds. If they are not in the privileged position, how come humans should impose their ideas about meaning on other agents?

5] MCE efforts directed toward AI researchers with the intent of influencing the values of some powerful AI may increase x-risk. E.g. if the AI is not "speciesist" and gives the same weight to satisfying the preferences of all humans and all chickens, the chickens would outnumber humans.

Comment author: Jan_Kulveit 30 January 2018 02:55:26PM 0 points [-]

As many others have expressed, the concern with systemic changes is that you are often dealing with complex, poorly understood systems.

Let's take the example of the EU's Common Agricultural Policy. It is most likely evil toward the world's poor, but it's not clear, for example, whether it works toward or against EU unity. It is plausible that it is somehow important for EU unity, either as a form of fiscal transfer or as a way of buying off an important voter bloc... So we should include possible political consequences in the utility calculation, and the problem becomes really tricky.

On the other hand, I agree systemic interventions are worth considering, and e.g. a change in drug policy seems an excellent candidate, as the action has been tested and we understand most of the consequences.
