Comment author: Sebastian_Oehm 13 April 2018 01:06:00PM *  7 points [-]

You could try to model this by estimating how (i) the talent needs and (ii) the talent availability will be distributed if the community scales further.

(i) If you assume that the EA community grows, you may think that the mix of skillsets we need in the community will change. E.g. you might believe that if the community grows by a factor of 10, we don't need 10x as many people thinking about movement building strategy (the size of that problem does not grow linearly with the number of people) or entrepreneurial skills (as the average org will be larger and more established); an increase by a factor of, say, 2-5 might be sufficient. On the other hand, you'd quite likely need ~10x as many ops people.

(ii) For the talent distribution, one could model this using one of the following assumptions (a toy numerical sketch follows the list below):

1) Linearly scale the current talent distribution (i.e. assume that the distribution of skillsets in the future community would be the same as today).

2) Assume that the future talent distribution will become more similar to a relevant reference class (e.g. talent distribution for graduates from top unis)
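To make this concrete, here is a minimal sketch of the kind of comparison I have in mind; all the specific numbers and skillset labels are placeholders I made up for illustration, not estimates:

```python
# Toy numerical sketch (all specific numbers are made up for illustration).
growth = 10  # suppose the community grows 10x

# (i) How the *need* for each skillset might scale with that growth.
need_scaling = {
    "movement building strategy / entrepreneurship": 3,  # sub-linear, say 2-5x
    "operations": 10,                                     # roughly linear, ~10x
}

# (ii) How the *supply* might scale under the two assumptions.
supply_scaling = {
    # Assumption 1: the future distribution of skillsets mirrors today's,
    # so every skillset's supply grows with the community.
    "assumption 1": {skill: growth for skill in need_scaling},
    # Assumption 2: the distribution drifts towards a reference class
    # (e.g. top-uni graduates); these factors are pure guesses.
    "assumption 2": {
        "movement building strategy / entrepreneurship": 5,
        "operations": 12,
    },
}

for label, supply in supply_scaling.items():
    print(f"--- {label} ---")
    for skill, need in need_scaling.items():
        trend = ("relative shortage grows" if supply[skill] < need
                 else "relative shortage shrinks or holds")
        print(f"{skill}: need x{need}, supply x{supply[skill]} -> {trend}")
```

The interesting outputs are the skillsets where the need grows faster than the supply under a given assumption.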

A few conclusions I'd draw from this, for example:

  • a weak point against building skills in start-ups: if you're great at this, start stuff now

  • a weak point in favour of building management skills, especially under assumption 1), less so under assumption 2)

  • a weak point against specialising in areas where EA would really benefit from having just 2-3 experts but is unlikely to need many more (e.g. history, psychology, institutional decision making, nanotech, geoengineering) if you're also a good fit for something else, as we might well find those few experts along the way

  • especially under 2), a weak point against working on biorisk (or investing substantially in skills building in bio) if you might be an equal fit for technical AI safety: the maths/computer science : biology graduate ratio at most universities is more like 1:1 (see https://www.hesa.ac.uk/news/11-01-2018/sfr247-higher-education-student-statistics/subjects), but we probably want 5-10x as many people working on AI as on biorisk (see the rough calculation below). [The naive view using the current talent distribution might suggest that you should work on bio rather than AI if you're an equal fit, as the current AI : bio talent ratio seems to be > 10:1.]
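A rough back-of-the-envelope version of that last bullet, using only the approximate ratios mentioned above (the 7.5 is just the midpoint of the 5-10x range):

```python
# Rough numbers from the comment above (approximate / illustrative).
graduate_supply_ratio = 1.0     # maths+CS : biology graduates at most unis, ~1:1
desired_work_ratio = 7.5        # want ~5-10x as many people on AI as on biorisk
current_ea_talent_ratio = 10.0  # current EA AI : bio talent ratio, seemingly > 10:1

# Under assumption 2 (future supply mirrors the graduate pool), AI-capable
# people are scarce relative to the desired ratio, so a flexible person
# arguably adds more by leaning towards technical AI safety.
print("desired AI:bio ratio vs. graduate supply ratio:",
      desired_work_ratio, "vs.", graduate_supply_ratio)

# The naive view (assumption 1, scaling today's EA talent pool) points the
# other way, since AI already looks over-represented relative to the target.
print("current EA AI:bio ratio vs. desired ratio:",
      current_ea_talent_ratio, "vs.", desired_work_ratio)
```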

All of this is less relevant if you apply a high discount rate to work done in 5-10 years relative to work done now.

Comment author: JanBrauner 14 April 2018 12:19:40PM 0 points [-]

I really like that idea. It might also be useful to check whether this model would have predicted past changes in career recommendations.

Comment author: JanBrauner 26 March 2018 05:07:37PM *  2 points [-]

Hey Holden, thanks for doing this. Suppose I applied for the research analyst position and didn't get it. Which of the following would then be more likely to eventually land me a job at OPP, and how much more likely (assuming I would perform well in both)?

a) becoming research analyst at GiveWell

b) doing research in one of OPP's focus areas (biosecurity/AI safety).

Comment author: JanBrauner 13 March 2018 09:02:02AM 5 points [-]

You think aggregating welfare between individuals is a flawed approach, such that you are indifferent between alleviating an equal amount of suffering for one person or for each of a million people.

You conclude that these values recommend giving to charities that directly address the sources of the most intense individual suffering, and that between such charities one should choose not by cost-effectiveness but randomly. One should not give to, say, GiveDirectly, which does not directly tackle the most intense suffering.

This conclusion seems correct only for clear-cut textbook examples. In the real world, I think, your values fail to recommend anything. You can never know for certain how many people you are going to help; everything is probabilities and expected value:

Say, for the sake of the argument, you think that severe depression is the cause of the most intense individual suffering. You could give your $10,000 to a mental health charity, and in expectation they will prevent 100 people (made-up number) from developing severe depression.

However, if you give $10,000 to GiveDirectly, that will certainly affect the recipients strongly, and maybe in expectation prevent 0.1 cases of severe depression.

Actually, if you take your $10,000 and buy that sweet, sweet Rolex with it, there is a tiny chance that this will prevent the jewelry store owner from going bankrupt, being dumped by their partner and, well, developing severe depression. The $10,000 to the jeweller prevents an expected 0.0001 cases of severe depression.

So, given your values, you should be indifferent between those.
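To put the (made-up) numbers side by side:

```python
# Expected cases of severe depression prevented per $10,000, using the
# made-up numbers from the comment.
expected_cases_prevented = {
    "mental health charity": 100,
    "GiveDirectly": 0.1,
    "buying the Rolex": 0.0001,
}

# Ordinary expected-value reasoning separates these by orders of magnitude.
# Strict non-aggregation ("indifferent between helping 1 or a million") does
# not: each option gives *someone* a non-zero chance of avoiding the most
# intense suffering, so the view cannot rank them.
for option, cases in sorted(expected_cases_prevented.items(), key=lambda kv: -kv[1]):
    print(f"{option}: ~{cases} expected cases prevented")
```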

Even worse, all three actions also harbour tiny chances of causing severe depression. Even the mental health charity, for every 100 people it prevents from developing depression, will maybe cause depression in 1 person (because interventions sometimes have adverse effects, ...). So if you decide between burning the money and giving it to the mental health charity, you are choosing between preventing 100 episodes of depression and causing 1. A decision that, given your stated values, you are indifferent about.

Further arguments for why approaches that try to avoid interpersonal welfare aggregation fail in the real world can be found here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1781092

Comment author: JanBrauner 03 January 2018 04:50:24PM 5 points [-]

You write: "In this discussion, there are two considerations that might at first have appeared to be crucial, but turn out to look less important. The first such consideration is whether existence is in general good or bad, à la Benatar (2008). If existence really should turn out to be a harm, sufficiently unbiased descendants would plausibly be able to end it. This is the option value argument. In turn, option value itself might appear to be a decisive argument against doing something so irreversible as ending humanity: we should temporise, and delegate this decision to our descendants. But not everyone enjoys option value, and those who suffer are relatively less likely to do so. If our descendants are selfish, and find it advantageous to allow the suffering of powerless beings, we may not wish to give them option value. If our descendants are altruistic, we do want civilisation to continue, but for reasons that are more general than option value."

Since the option value argument is not very strong, it seems to be a very important consideration "whether existence in general is good or bad" - or, less dichotomously, where the threshold for a life worth living lies. Space colonization means more (sentient) beings. If our descendants are altruistic (or have values that we, upon reflection, would endorse), everything is fine anyway. If our descendants are selfish and the threshold for a life worth living is fairly low, then not much harm will be done (as long as they don't actively value causing harm, which seems unlikely). If they are selfish and the threshold is fairly high - i.e. a lot of things in a life have to go right to make it worth living - then most powerless beings will probably have bad lives, possibly rendering overall utility negative.

Comment author: vollmer 13 November 2017 09:38:31PM 0 points [-]

Thanks for sharing!

Do you have recommendations for tools to manage reading lists? Especially doing the things that you describe in your flowchart (list types/categories/tags, dragging items around and reordering them, etc.). Mobile apps would be a plus. I've experimented with several tools (e.g. Pocket / Instapaper) but will probably stick with Google Docs / Evernote.

Comment author: JanBrauner 14 November 2017 04:34:25PM 1 point [-]

Sorry, I use plain old Google Docs as well :|

Comment author: JanBrauner 03 November 2017 11:51:00AM 3 points [-]

This was really interesting, and probably as clear as such a topic can possibly be presented.

Disclaimer: I don't know how to deal with infinities mathematically. What I am about to say is probably very wrong.

For every conceivable value system, there is an exactly opposing value system, so that there is no room for gains from trade between the systems (e.g. suffering maximizers vs suffering minimizers).

In an infinite multiverse, there are infinitely many agents with decision algorithms sufficiently similar to mine to allow for MSR. Among them, there are infinitely many agents holding any given value system. So whenever I cooperate with one value system, I defect on infinitely many agents that hold the exactly opposing values. So infinity seems to make cooperation impossible??
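One minimal way to write down this worry, assuming we naively sum (expected) gains and losses across agents; the notation here is just my own shorthand, not something from the post:

```latex
% Naive aggregation of the effects of cooperating with value system V,
% given an infinite number (or measure) of agents holding V and of
% agents holding the exactly opposed system -V:
\text{net effect of cooperating with } V
  \;=\; \sum_{i \in \text{$V$-agents}} g_i
  \;-\; \sum_{j \in \text{$(-V)$-agents}} \ell_j
  \;=\; \infty - \infty
```

which is undefined unless one adds some regularisation or a measure over agents, so naive summation gives no guidance on whom to cooperate with.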

Sidenote: if you assume decision algorithms and values to be orthogonal, why do you suggest that we "adjust [the values to cooperate with] by the degree their proponents are receptive to MSR ideas"?

Best, Jan

Comment author: JanBrauner 13 October 2017 09:34:38AM 4 points [-]

Just wanted to say that I found this article really helpful and have already sent it to many people who asked me how they should make a decision. Please never take it down :D

In response to Introducing Enthea
Comment author: JanBrauner 09 August 2017 09:09:13AM 1 point [-]

Seems interesting, how can one stay updated?

Comment author: JanBrauner 02 August 2017 09:02:48AM 1 point [-]

http://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking/

This could be construed as arguing for an approach that takes all the perspectives one can think of into account and then discounts each by its uncertainty.

Comment author: JanBrauner 21 July 2017 04:54:38PM *  3 points [-]

Here is another argument for why the future with humanity is likely better than the future without it. Possibly, there are many things of moral weight that are independent of humanity's survival. If you think that humanity would care about moral outcomes more than zero, then it might be better to have humanity around.

For example, in many scenarios of human extinction, wild animals would continue existing. In your post you assigned farmed animals enough moral weight to determine the moral value of the future, and wild animals should probably carry even more moral weight: there are 10x more wild birds than farmed birds and 100-1,000x more wild mammals than farmed animals (and of course many, many more fish, let alone invertebrates). I am not convinced that wild animals' lives are on average not worth living (i.e. that they contain more suffering than happiness), but even without that assumption, there is surely a huge amount of suffering. If you believe that humanity will have the potential to prevent or alleviate that suffering some time in the future, that seems pretty important.

The same goes for unknown unknowns. I think we know extremely little about what is morally good or bad, and maybe our views will change fundamentally in the (far) future. Maybe there are suffering non-intelligent extraterrestrials, maybe bacteria suffer, maybe there is moral weight in places where we would not have expected it (http://reducing-suffering.org/is-there-suffering-in-fundamental-physics/), maybe something completely different.

Let's see what the future brings, but it might be better to have an intelligent and at least slightly utility-concerned species around, as compared to no intelligent species.
