Comment author: HoldenKarnofsky 26 March 2018 04:56:23PM *  5 points [-]

We could certainly imagine ramping up grantmaking without a much better answer. As an institution we're often happy to go with a "hacky" approach that is suboptimal, but captures most of the value available under multiple different assumptions.

If someone at Open Phil has an idea for how to make useful progress on this kind of question in a reasonable amount of time, we'll very likely find that worthwhile and go forward. But there are lots of other things for Research Analysts to work on even if we don't put much more time into researching or reflecting on moral uncertainty.

Also note that we may pursue an improved understanding via grantmaking rather than via researching the question ourselves.

Comment author: SiebeRozendal 26 March 2018 05:16:06PM 1 point [-]

I'm very curious about how that improved understanding would come about via grantmaking. Any write-up you have about this? I can see how you'd learn about tractability, and maybe about neglectedness, but I wonder how you incorporate this in your decision-making.

Anyway, this might go a little too off-topic so I'd understand if you replied to other questions first :)

Comment author: SiebeRozendal 26 March 2018 05:11:20PM 5 points [-]

What are the working hours like for a position like Research Analyst? Strict/flexible? 40 hours/week or other? What is the overtime like on average, and what is it like at peak times?

Comment author: HoldenKarnofsky 26 March 2018 04:52:37PM 3 points [-]

All else equal, we consider applicants stronger when they have degrees in challenging fields from strong institutions. It’s not the only thing we’re looking at, even at that early stage. And the early stage is for filtering; ultimately, things like work trial assignments will be far more important to hiring decisions.

Comment author: SiebeRozendal 26 March 2018 05:02:43PM 1 point [-]

Not sure if I'm interpreting Khorton correctly, but I'm interested anyway: why focus on undergrad and not on postgrad (or the highest level achieved/pursued)?

Comment author: SiebeRozendal 26 March 2018 04:50:33PM 5 points [-]

It seems that OpenPhil wants a more satisfactory answer to moral uncertainty than just worldview diversification before ramping up the number of grants per year. Is this part of why you are hiring new Research Analysts, and if so, how much will they work on this problem? (It seems like a very interesting but hard problem.)

Comment author: SiebeRozendal 26 March 2018 04:42:45PM 1 point [-]

Hi Holden, nice initiative.

I have a question about the Research Analyst role. How generalist will they be? I can imagine them concentrating on one or two focus areas besides more general issues such as how to implement moral uncertainty in practice.

Comment author: turchin 04 March 2018 09:24:44PM 1 point [-]

Basically, there are two constraints on the timing of the next civilization, which are explored in detail in the article:

1) As our closest relatives are chimps, separated from us by about 7 million years of evolutionary divergence, human extinction means there will be no other civilization for at least 7 million years, and likely longer, since most causes of human extinction would kill the great apes too.

2) Life on Earth will remain possible for approximately the next 600 million years, based on models of the Earth and the Sun.

Thus the timing of the next civilization falls between 7 and 600 million years from now, but the probability peaks closer to 100 million years, as that is roughly the time needed for primates to evolve "again" from the "rodents", and it declines after that as conditions on the planet deteriorate.
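As a rough illustration of that shape (my own toy sketch with assumed parameters, not numbers taken from the article), one could model the arrival time as a distribution truncated to the 7-600 million year window with a mode near 100 million years:

    import numpy as np

    # Toy sketch only: treat the arrival time of the next civilization (in
    # millions of years from now) as a log-normal distribution truncated to
    # the bounds above.  The mode and spread are illustrative assumptions.
    rng = np.random.default_rng(0)

    LOWER, UPPER = 7.0, 600.0         # bounds in millions of years
    sigma = 0.5                       # assumed spread (log-space)
    mu = np.log(100.0) + sigma**2     # chosen so the log-normal's mode is ~100 Myr

    samples = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)
    samples = samples[(samples >= LOWER) & (samples <= UPPER)]  # truncate to the window

    print(f"median: {np.median(samples):.0f} Myr, mean: {samples.mean():.0f} Myr")

The particular distribution is arbitrary; the point is only that the estimate is bounded below by the chimp divergence time and above by the habitability limit, with most of the mass well before the upper bound.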

We explored the difference between human extinction risks and l-risks, that is, life extinction risks, in another article: http://effective-altruism.com/ea/1jm/paper_global_catastrophic_and_existential_risks/

In it, we show that life extinction is worse than human extinction, that universe destruction is worse still, and that this should be taken into account when prioritising risk prevention.

Comment author: SiebeRozendal 15 March 2018 12:23:58PM *  2 points [-]

This is a fascinating question! However, I think you are making a mistake in estimating the lower bound: the inference from chimps being removed from us by 7 million years of evolution (Wikipedia says 4-13 million) rests on the assumptions that:

  • Chimpanzees needed these 7 million years to evolve to their current level of intelligence. In fact, their evolution could have contained multiple intervals of random length with no change in intelligence, which implies that chimpanzees could have evolved from our common ancestor to their current level of intelligence much faster or much slower than in 7 million years.

  • The time since our divergence from chimpanzees is indicative of how long it takes to get from their level of intelligence to ours. I am not quite sure what to think of this. I assume your reasoning is: "it took us 7 million years to evolve to our current level of intelligence from the common ancestor, and chimpanzees probably did not lose intelligence in those 7 million years, so the starting conditions are at least as favorable as they were 7 million years ago." This might be right. On the other hand, evolutionary paths are difficult to understand, and perhaps chimps have developed in some way that makes it unlikely for them to evolve into a technologically advanced society. This doesn't seem to be the case, though, because they do show traits conducive to the evolution of higher intelligence, e.g. tool use, social structure, and eating meat. All in all, I keep coming back to the question: how contingent, rather than directional, is evolution with respect to intellectual and social capability? There seems to be disagreement about this within evolutionary biology, even though intelligence has evolved and increased along many different evolutionary branches.

Also, you have given the time periods in which a next civilisation might arise if it arises at all, but how likely do you think it is to arise?

Comment author: SiebeRozendal 15 March 2018 11:51:09AM 2 points [-]

I would therefore say that large-scale catastrophes related to biorisk or nuclear war are quite likely (~80–90%) to merely delay space colonization in expectation.[17] (With more uncertainty being not on the likelihood of recovery, but on whether some outlier-type catastrophes might directly lead to extinction.)

You seem to be highly certain that humans will recover from near-extinction. Is this based solely on the arguments in the text and footnote, or is there more? It seems to rest on the assumption that population growth/size is the only bottleneck, and that key technologies and infrastructure will be redeveloped anyway.

Comment author: SiebeRozendal 05 March 2018 04:41:10PM 3 points [-]

Regarding Doing Good Better, is there any follow-up in the pipeline that is more up-to-date?

I find the book a great introduction to EA, but on multiple occasions I have had to point out to new members who'd just read the book that, on some points, "that's not actually what's thought anymore".

Comment author: remmelt  (EA Profile) 02 March 2018 03:29:17PM *  0 points [-]

To clarify: by implying that, for example, a social entrepreneur should learn about population ethics from an Oxford professor to increase impact (and that the professor can learn more about organisational processes and personal effectiveness), I don't mean to say that they should both become generalists.

Rather, I mean to convey that the EA network enables people here to divide labour at particular decision levels and then pass on tasks and learned information to each other through collaborations, reciprocal favours and payments.

In a similar vein, I think it makes sense for CEA's Community Team to specialise in engaging existing community members on high-level EA concepts at weekend events, and for the Local Effective Altruism Network (LEAN) to help local groups get active and provide them with ICT support.

However, I can think of 6 past instances where it seems that either CEA or LEAN could have potentially avoided making a mistake by incorporating the thinking of the other party at decision levels where it was stronger.

Comment author: SiebeRozendal 02 March 2018 03:56:43PM *  1 point [-]

I think it would be better to include this in the OP.

Comment author: SiebeRozendal 02 March 2018 03:53:54PM *  2 points [-]

Could you be a little more specific about the levels/traits you name? I'm interpreting them roughly as follows:

  • Values: "how close are they to the moral truth or our current understanding of it" (replace moral truth with whatever you want values to approximate).
  • Epistemology: how well do people respond to new and relevant information?
  • Causes: how effective are the causes in comparison to other causes?
  • Strategies: how well are strategies chosen within those causes?
  • Systems: how well are the actors embedded in a supportive and complementary system?
  • Actions: how well are the strategies executed?

I think a rough categorisation of these 6 traits would be Prioritisation (Values, Epistemology, Causes) & Execution (Strategies, Systems, Actions), and I suppose you'd expect a stronger correlation within these two branches than between them?
