Comment author: turchin 04 March 2018 09:24:44PM 1 point

Basically, there are two constraints on the timing of a new civilization, which are explored in detail in the article:

1) Our closest relatives are chimps, separated from us by about 7 million years of genetic divergence, so human extinction means that for at least 7 million years there will be no other civilization, and likely longer, as most causes of human extinction would kill the great apes too.

2) Life on Earth will remain possible for approximately the next 600 million years, based on models of the Earth and Sun.

Thus the timing of the next civilization is between 7 and 600 million years from now, but the probability peaks closer to 100 million years, as that is roughly the time needed for primates to evolve "again" from "rodent-like" ancestors; it then declines as conditions on the planet deteriorate.
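Purely as a toy sketch of those constraints (this is not from the article; the triangular shape and the exact mode are arbitrary assumptions on my part), one could encode the window and the peak like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model only: a triangular distribution over the time (in millions of
# years) until the next civilization, with hypothetical bounds and mode
# taken from the constraints sketched above.
LOWER_MYR = 7     # chimp/human divergence: no sooner than this
MODE_MYR = 100    # rough time for higher primates to re-evolve
UPPER_MYR = 600   # Earth remains habitable roughly this long

samples = rng.triangular(LOWER_MYR, MODE_MYR, UPPER_MYR, size=100_000)
print(f"median ~{np.median(samples):.0f} Myr, "
      f"90% interval ~{np.quantile(samples, 0.05):.0f}-{np.quantile(samples, 0.95):.0f} Myr")
```

The only real content here is the hard window of 7–600 million years and a mode near 100 million years; the shape in between is guesswork.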

We explored the difference between human extinction risks and l-risks (that is, life-extinction risks) in another article:

In it, we show that life extinction is worse than human extinction, and that universe destruction is worse still; this should be taken into account when prioritising risk prevention.

Comment author: SiebeRozendal 15 March 2018 12:23:58PM * 1 point

This is a fascinating question! However, I think you are making a mistake in estimating the lower bound: taking the 7 million years of evolution separating us from chimps (Wikipedia says 4–13 million) as that bound rests on the assumptions that:

  • Chimpanzees needed these 7 million years to evolve to their current level of intelligence. Instead, their evolution could have contained multiple intervals of random length with no changes to intelligence. This implies that chimpanzees could have evolved from our common ancestor to their current level of intelligence much faster or much slower than 7 million years.

  • The time since our divergence from chimpanzees is indicative of how long it takes to go from their level of intelligence to ours. I am not quite sure what to think of this. I assume your reasoning is "it took us 7 million years to evolve to our current level of intelligence from the common ancestor, and chimpanzees probably did not lose intelligence in those 7 million years, so the starting conditions are at least as favorable as they were 7 million years ago." This might be right. On the other hand, evolutionary paths are difficult to understand, and maybe chimps developed in some way that makes them unlikely to evolve into a technologically advanced society. This doesn't seem to be the case, though, because they do show traits beneficial to the evolution of higher intelligence, e.g. tool use, social structure, and eating meat. All in all, thinking about this I keep coming back to the question: how contingent, rather than directional, is evolution when it comes to intellectual and social capability? There seems to be disagreement on this in the field of evolutionary biology, even though there are many different evolutionary branches in which intelligence evolved and increased.

Also, you have given the time period in which a next civilisation might arise if it arises at all, but how likely do you think it is to arise?

Comment author: SiebeRozendal 15 March 2018 11:51:09AM 2 points

I would therefore say that large-scale catastrophes related to biorisk or nuclear war are quite likely (~80–90%) to merely delay space colonization in expectation.[17] (With more uncertainty being not on the likelihood of recovery, but on whether some outlier-type catastrophes might directly lead to extinction.)

You seem to be highly certain that humans will recover from near-extinction. Is this based solely on the arguments in the text and footnote, or is there more? It seems to rest on the assumption that population growth/size is the only bottleneck, and that key technologies and infrastructure will be developed anyway.

Comment author: SiebeRozendal 05 March 2018 04:41:10PM 2 points

Regarding Doing Good Better, is there any follow-up in the pipeline that is more up-to-date?

I find the book a great introduction to EA, but I have had multiple instances where I needed to point out to new members who'd just read the book that on some points "that's not actually what's thought anymore".

Comment author: remmelt 02 March 2018 03:29:17PM * 0 points

To clarify: by implying that, for example, a social entrepreneur should learn about population ethics from an Oxford professor to increase impact (and that the professor can learn more about organisational processes and personal effectiveness), I don't mean to say that they should both become generalists.

Rather, I mean to convey that the EA network enables people here to divide labour at particular decision levels and then pass on tasks and learned information to each other through collaborations, reciprocal favours and payments.

In a similar vein, I think it makes sense for CEA's Community Team to specialise in engaging existing community members on high-level EA concepts at weekend events, and for the Local Effective Altruism Network (LEAN) to help local groups get active and provide them with ICT support.

However, I can think of six past instances where it seems that either CEA or LEAN could have avoided making a mistake by incorporating the other party's thinking at decision levels where that party was stronger.

Comment author: SiebeRozendal 02 March 2018 03:56:43PM * 1 point

I think it would be better to include this in the OP.

Comment author: SiebeRozendal 02 March 2018 03:53:54PM * 2 points

Could you be a little more specific about the levels/traits you name? I'm interpreting them roughly as follows:

  • Values: "how close are they to the moral truth or our current understanding of it" (replace moral truth with whatever you want values to approximate).
  • Epistemology: how well do people respond to new and relevant information?
  • Causes: how effective are the causes in comparison to other causes?
  • Strategies: how well are strategies chosen within those causes?
  • Systems: how well are the actors embedded in a supportive and complementary system?
  • Actions: how well are the strategies executed?

I think a rough categorisation of these 6 traits would be Prioritisation (Values, Epistemology, Causes) & Execution (Strategies, Systems, Actions), and I suppose you'd expect a stronger correlation within these two branches than between them?

Comment author: Milan_Griffes 19 February 2018 04:46:33PM 0 points

Wisdom and predictive power seem not conceptually distinct.

I'm using "predictive power" as something like "ability to see what's coming down the pipe" and "wisdom" as something like "ability to assess whether what's coming down the pipe is good or bad, according to one's value system."

On your broader point, I agree that these attributes are all tangled up in each other. I don't think there's a useful way to draw clean distinctions here.

I was a bit confused that you write about things to prioritise, but don't refer back to the 5 attributes of the steering capacity.

This is a good point, I'll think about this more & get back to you.

quite similar to my own experience in that I wrote a philosophy essay about cluelessness

I'd like to read this. Could you link to it here, or (if private) send it to the email address on this page?

Comment author: SiebeRozendal 20 February 2018 10:32:37AM 1 point

Sure! Here it is.

Comment author: SiebeRozendal 19 February 2018 04:31:52PM 0 points

What role would attractor states have in this thinking? Some thoughts:

If an attractor is strong/large, then many different starting points have the same end points. But if it is small, or if we are in between two (or more) attractors, our decisions could make all the difference in the world.

The technological completion conjecture posits an attractor that we end up in if we are not caught by x-risk attractors first.

Can we somehow affect the fragility of history so that we bring it into the center of the goldilocks zone?

Comment author: SiebeRozendal 19 February 2018 02:38:28PM * 3 points

I like this post, Milan; I think it's the best of your series. You rightly picked a very important topic to write about (cluelessness) that should receive more attention than it currently does. I do have some comments:

Although I admire new ways to think about prioritisation, I have two worries. The first is conceptual distinction: wisdom and predictive power seem not conceptually distinct. Both are about our ability to identify and predict the probability of good and bad outcomes. Intent also seems a little tangled up with wisdom, although I can see why we would want to separate those. Furthermore, intent influences coordination capability: the more the intentions of a population differ, the more difficult coordination becomes.

This leads to the second worry: the model adds only one dimension (Intent) to Bostrom's three-dimensional model of Technology [Capacity], Insight [Wisdom], and Coordination. Do you think this increases the usefulness of the model enough? The advantage of Bostrom's model is that it allows for differential progress (wisdom > coordination > capacity), while you don't specify the interplay of the attributes. Are they supposed to be multiplied, are some combinations better than others, or do we want differential progress?

I was a bit confused that you write about things to prioritise, but don't refer back to the 5 attributes of the steering capacity. Some relate more strongly to specific attributes, and some attributes are not discussed much (coordination) or at all (capability).

Further our understanding of what matters

This seems to be Intent in your framework. I totally agree that this is valuable. I would call this moral (or, more precisely, axiological) uncertainty, and people work on this outside of EA as well. By the way, besides resolving uncertainty, another pathway is to improve our methods for dealing with moral uncertainty, as MacAskill argues for.

Improve governance

I am not sure which concept this relates to, though I suppose it is Coordination. I find the discussion a bit shallow here, as it discusses only institutions and not the coordination of individuals in, e.g., the EA community, or the coordination between nation states.

Improve prediction-making & foresight

This seems to be the attribute predictive power. I agree with you that this is very important. To a large extent, this is also what science in general is aiming to do: improving our understanding so that we can better predict and alter the future. However, straight-up forecasting seems more neglected. I think this could also just be called "reducing empirical uncertainty"? If we call it that, we can also consider other approaches, such as researching effects in complex systems.

Reduce existential risk

I'm not sure this was intended to relate to a specific attribute. Guess not.

Increase the number of well-intentioned, highly capable people

This seems to relate mostly to "Intent" as well. I wanted to remark that this can be done either by increasing the capability and knowledge of well-intentioned people, or by improving the intentions of capable (and knowledgeable) people. My observation is that, so far, the focus has been on the latter in terms of growth and outreach, and only some effort has been expended on developing the skills of effective altruists. (Although this is noted as a comparative advantage for EA Groups.)

Lastly, I wanted to remark that hits-based giving does not imply a portfolio approach, in my opinion. It just implies being more or less risk-neutral in altruistic efforts. What drives the diversification in OPP's grants seems to be worldview diversification, option value, and the possibility that high-value opportunities are spread over cause areas rather than concentrated in one cause area. I think what would support the conclusion that we need to diversify could be that we need to hit a certain value on each of the attributes, otherwise the project fails (a bit like how power laws arise when success requires A×B×C instead of A+B+C).
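A toy numerical illustration of that parenthetical (my own sketch, not anything from the post; the uniform factor distributions are arbitrary): multiplying independent success factors gives a much heavier-tailed outcome distribution than adding them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Three independent "success factors" (think values / strategy / execution),
# each drawn uniformly between 0 and 1 purely for illustration.
a, b, c = rng.uniform(0, 1, (3, n))

additive = a + b + c        # success as A + B + C
multiplicative = a * b * c  # success as A x B x C

for name, x in [("additive", additive), ("multiplicative", multiplicative)]:
    top = np.quantile(x, 0.99)
    print(f"{name:15s} median={np.median(x):.3f}  "
          f"99th pct={top:.3f}  ratio={top / np.median(x):.1f}")
```

The additive total clusters around its mean, while the multiplicative one is dominated by the rare runs where all three factors are high, which is the intuition behind "you need to hit a certain value on each attribute or the project fails".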

All in all, an important project, but I'm not sure how much novel insight it has brought (yet). This is quite similar to my own experience in that I wrote a philosophy essay about cluelessness and arrived at a not-so-novel conclusion. Let me know if you'd like to read the essay :)

Comment author: SiebeRozendal 02 January 2018 03:59:10PM * 1 point

This is an interesting project! I am wondering how valuable you have found it, and whether there are any plans for further development. I can imagine that it would be valuable to

  • Increase complexity to increase robustness of the model, but then find some balance between robustness and user-friendliness, perhaps by allowing users to view the model on different 'levels' of complexity.
  • Use some form of crowd-sourcing to get much more reliable estimates, ideally weighted by expertise or forecasting ability.
  • Incorporate some insights from the moral uncertainty literature, so that low probabilities of something being very bad (e.g. wild animal suffering, or insect suffering) are given appropriate weight.

However, I have no idea how feasible this is, and I imagine it would require many valuable resources (lots of time, money, and capable researchers). Do you already have thoughts on this?

P.S. The link is missing for part IV

Comment author: Milan_Griffes 29 November 2017 04:08:29AM * 1 point

Thanks for the thoughtful comment :-)

This seems like a case of what Greaves calls simple cluelessness.

I'm fuzzy on Greaves' distinction between simple & complex cluelessness. Greaves uses the notion of "systematic tendency" to draw out complex cluelessness from simple, but "This talk of ‘having some reasons’ and ‘systematic tendencies’ is not as precise as one would like;" (from p. 9 of Greaves 2016).

Perhaps it comes down to symmetry. When we notice that for every imagined consequence, there is an equal & opposite consequence that feels about as likely, we can consider our cluelessness "simple." But when we can't do this, our cluelessness is complex.

This criterion is unsatisfyingly subjective, though, because it relies on our assessing the equal-and-opposite consequence as "about as likely," and on whether we are able to imagine such a consequence at all.

Comment author: SiebeRozendal 02 January 2018 01:48:26PM 1 point

I take Greaves' distinction between simple and complex cluelessness to lie in the symmetry (just as you seem to do). However, I believe that this symmetry consists in our evaluating the same consequences following either from an act A or from refraining from A. For every story of long-term consequences resulting from performing act A, there is a parallel story of the same consequences C resulting from refraining from A. We can therefore invoke a specific Principle of Indifference, taking the probabilities of the two options to be equal, reflecting our ignorance. That is, P(C|A) = P(C|~A), where C is a story of some long-term consequences of either performing or refraining from A.

In complex cases, this symmetry does not exist, because we're trying to compare different consequences (C1, C2, ..., Cn) resulting from the same act.
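One way to write this out (my own rough gloss, not Greaves' own formalism): for a value function V and consequence stories C_i,

\[ E[V \mid A] - E[V \mid \neg A] \;=\; \sum_i \bigl( P(C_i \mid A) - P(C_i \mid \neg A) \bigr)\, V(C_i). \]

Under simple cluelessness the indifference step sets each bracketed difference to zero, so the unforeseeable long-term stories cancel out of the comparison; under complex cluelessness the stories differ between the options and we have no principled grounds for setting those differences to zero.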
