
Crossposted at LW and AI Alignment Forum.

This is a quick response to Evolution Provides No Evidence For the Sharp Left Turn, prompted by it winning first prize in the Open Philanthropy Worldviews contest. I think part of the post is misleading enough about evolutionary history, and the first prize gives it enough visibility, that a post-long response makes sense.

The central evolutionary-biology claim of the original post is this:

  • The animals of the generation learn throughout their lifetimes, collectively performing many billions of steps of learning.
  • The generation dies, and all of the accumulated products of within lifetime learning are lost.
  • Differential reproductive success slightly changes the balance of traits across the species.



The only way to transmit information from one generation to the next is through evolution changing genomic traits, because death wipes out the within lifetime learning of each generation.



However, this sharp left turn does not occur because the inner learning processes suddenly become much better / more foomy / more general in a handful of outer optimization steps. It happens because you devoted billions of times more optimization power to the inner learning processes, but then deleted each inner learner shortly thereafter. Once the inner learning processes become able to pass non-trivial amounts of knowledge along to their successors, you get what looks like a sharp left turn. But that sharp left turn only happens because the inner learners have found a kludgy workaround past the crippling flaw where they all get deleted shortly after initialization.

In my view, this interpretation of evolutionary history is something between "speculative" and "wrong".

Transmitting some of the data gathered during an animal's lifetime to the next generation by means other than the genome is so obviously useful that it is highly convergent. Non-genetic communication channels to the next generation include epigenetics, parental teaching / imitation learning, vertical transmission of symbionts, parameters of the prenatal environment, hormonal and chemical signaling, bio-electric signals, and transmission of environmental resources or modifications created by previous generations, which can shape the conditions experienced by future generations (e.g. beaver dams).

Given that overcoming the genetic bottleneck is so highly convergent, it would be a bit surprising if there were a large free lunch on the table in exactly this direction, as Quintin assumes:

Evolution's sharp left turn happened because evolution spent compute in a shockingly inefficient manner for increasing capabilities, leaving vast amounts of free energy on the table for any self-improving process that could work around the evolutionary bottleneck. Once you condition on this specific failure mode of evolution, you can easily predict that humans would undergo a sharp left turn at the point where we could pass significant knowledge across generations. I don't think there's anything else to explain here, and no reason to suppose some general tendency towards extreme sharpness in inner capability gains.


It's probably worth going into a bit of technical detail here. Evolution did manage to discover innovations like mirror neurons:

A mirror neuron is a neuron that fires both when an organism acts and when the organism observes the same action performed by another. Thus, the neuron "mirrors" the behavior of the other, as though the observer were itself acting. ... Further experiments confirmed that about 10% of neurons in the monkey inferior frontal and inferior parietal cortex have "mirror" properties and give similar responses to performed hand actions and observed actions.[1]

Clearly, mirror neurons are the type of innovation which allows high-throughput behavioural cloning / imitation learning. "10% of neurons in the monkey inferior frontal and inferior parietal cortex" is a massive amount of compute. Neurons imitating your parent's motor policy based on visual information about your parent's behaviour constitute a high-throughput channel. (I recommend doing a Fermi estimate of this channel's capacity; a rough sketch follows.)
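To make that concrete, here is one way such a Fermi estimate might go. All of the numbers below are my own illustrative assumptions, not measurements from any study:

```python
# Rough Fermi estimate of the parent-to-offspring imitation channel.
# All numbers are illustrative assumptions, not measurements.

hours_observing_per_day = 2          # assume ~2 h/day watching a parent act
days_per_year = 365
years_of_juvenile_learning = 5       # assume ~5 years of intensive imitation

# Assume the observer extracts ~10 bits/s of usable motor-policy
# information from the visual stream (a tiny fraction of the raw
# retinal bandwidth, which is orders of magnitude higher).
usable_bits_per_second = 10

seconds = hours_observing_per_day * 3600 * days_per_year * years_of_juvenile_learning
bits_per_generation = usable_bits_per_second * seconds

print(f"~{bits_per_generation:.1e} bits/generation")  # ~1.3e8 bits, i.e. ~16 MB
```

Even with these deliberately conservative numbers, the channel carries on the order of a hundred million bits per generation; on some information-theoretic accounts, selection itself fixes only on the order of a few bits of genomic information per generation. The channel was not trivially narrow.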

A situation where you clearly have a system fully able to eat the free lunch on the table, and yet supposedly the lunch is still there, makes me suspicious.

At the same time: yes, clearly, human culture nowadays is a lot of data, and humans learn more than monkeys.

Different stories

What are some evolutionarily plausible alternatives to Quintin's story?

Alternative stories would usually suggest that ancestral humans had access to channels for overcoming the genetic bottleneck, and were using them to the extent it was marginally effective. Then some other major change happened, the marginal fitness advantage of learning more grew, and humans evolved to transmit more bits; hence modern humans transmit more.

An example of such a major change could be the advent of culture. If you look at the timeline from a replicator-dynamics perspective, the next most interesting event after the beginning of life is cultural replicators running on human brains crossing R>1 and starting the second vast evolutionary search: cultural evolution.
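As a minimal illustration of the R>1 threshold (a toy branching process of my own, not a model from either post): a cultural variant whose carriers each transmit it to on average R new carriers per generation typically dies out for R<1 and can grow open-endedly for R>1.

```python
# Toy branching process for a cultural replicator: each carrier
# transmits the variant to Poisson(R) new carriers per generation.
# Illustrative only; R is the variant's reproduction number.
import numpy as np

def carriers_over_time(R, generations=30, seed_carriers=10):
    rng = np.random.default_rng(0)
    n = seed_carriers
    history = [n]
    for _ in range(generations):
        n = int(rng.poisson(R, size=n).sum()) if n > 0 else 0
        history.append(n)
    return history

print(carriers_over_time(0.9)[-1])  # R < 1: the variant typically goes extinct
print(carriers_over_time(1.1)[-1])  # R > 1: the variant typically keeps growing
```

The point of the analogy: once cultural variants crossed that threshold on human brains, a second open-ended search got going, with its own dynamics.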

How is the story "cultural evolution is the pivotal event" different? Roughly speaking, culture is a multi-brain parallel immortal evolutionary search computation. Running at higher speed and a layer of abstraction away from physical reality (compared to genes), it was able to discover many pools of advantage, like fire, versatile symbolic communication, or specialise-and-trade superagent organisation.

In this view, there is a type difference between 'culture' and 'increased channel capacity'. 

You can interpret this in multiple ways, but if you want to cast it as a story of a discontinuity, where biological evolution randomly stumbled upon starting a different powerful open-ended misaligned search, it makes sense. The fact that such a search finds caches of fitness and negentropy is not very surprising.[2]

Was the "increased capacity to transfer what's learned in brain's lifetime to the next generation" at least the most important or notably large direction what to exploit? I'm not a specialist on human evolution, but seems hard to say with confidence: note that 'fire' is also a big deal, as it allows you do spend way less on digestion, and cheaper ability to coordinate is a big deal, as illustrated by ants, and symbolic communication is a big deal, as it is digital, robust and and effective compression.

Unfortunately for attempts to figure out the precise marginal costs and fitness benefits for ancestral humans, my impression is that ~ten thousand generations of genetic evolution in a fitness landscape shaped by cultural evolution screens off a lot of evidence. In particular, from the fact that modern humans are outliers in some phenotypic characteristic, you cannot infer that it was the cause of the change to humans. For example, an argument like "human kids have an unusual capacity to absorb knowledge across generations compared to chimps; ergo, the likely cause of humans' explosive development is ancestral humans having more of this capacity than other species" has very little weight. Modern wolves are also notably different from modern chihuahuas, but the correct causal story is not "ancestral chihuahuas had an overhang of loyalty and harmlessness".

Does this partially invalidate the original post's argument about the implications for AI? In my view, yes. If, following Quintin, we translate the actual situation into quantities and narratives that drive AI progress rates:

- the "specific failure mode" of not transmitting what brains learn to the next generation is not there
- the marginal fitness advantage of transmitting more bits to the next generation brains is unclear, similarly to an unclear marginal advantage of e.g. spending more on LLMs curating data for the next gen LLM training
- because we don't really understand what happened, the metaphorical map to AI progress mostly maps this lack of understanding to lack of clear insights for AI
- it seems likely culture is somehow big deal, but it is not clear how you would translate what happened to AI domain; if such thing can happen with AIs, if anything, it seems pushing more toward the discontinuity side, as the cultural search uncovered relatively fast multiple to many caches of negentropy
(- yes, obviously, given culture, it is important that you can transmit it to next generation, but it seems quite possible that for transferring seed culture  the capacity channel you have via mirror neurons is more than enough)
 

Not even approximately true

In case you believe the original post is still somehow approximately true, and the implications for AI progress still approximately hold, I think it's important to basically un-learn that update. Quoting the original post:

This last paragraph makes an extremely important claim that I want to ensure I convey fully:

- IF we understand the mechanism behind humanity's sharp left turn with respect to evolution

- AND that mechanism is inapplicable to AI development

- THEN, there's no reason to reference evolution at all when forecasting AI development rates, not as evidence for a sharp left turn, not as an "illustrative example" of some mechanism / intuition which might supposedly lead to a sharp left turn in AI development, not for anything.
 

The conjunctive IF is a crux, and because we don't understand well enough what happened with culture, the rest of the implication does not hold.

Consider a toy counterfactual story: in a fantasy world, exactly repeating the 128 bits of the first cultural replicator gives a human ancestor the power to cast a spell and gain a +50% fitness advantage. Notice that this is a different story from "overcoming the channel-to-offspring capacity limit": you may be in a situation where you have plenty of capacity but don't have the 128 bits, and that situation is much more prone to discontinuities.
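A minimal sketch of why the second story is more discontinuity-prone; the curves below just restate the toy model, nothing empirical:

```python
# Contrast the two toy stories as fitness curves over "progress".
# "Capacity" story: fitness grows smoothly with transmitted bits.
# "Magic string" story: fitness is flat until the exact 128-bit
# replicator is assembled, then jumps by +50%.
import numpy as np

bits = np.arange(0, 257)
smooth_fitness = 1.0 + 0.5 * bits / 256            # every marginal bit pays off
step_fitness = np.where(bits >= 128, 1.5, 1.0)     # nothing... then everything

for b in (0, 64, 127, 128, 256):
    print(b, round(float(smooth_fitness[b]), 3), float(step_fitness[b]))
```

From the outside, the first landscape looks like steady progress; the second looks like a sharp left turn, even though both searches accumulate bits at the same rate.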

Because it is not clear whether reality was more like stumbling upon a specific string, a piece of code, an evolutionary ratchet, or something else, we don't know enough to rule out metaphors suggesting discontinuities.

Conclusion

Where I do agree with Quintin is in scepticism toward some other stories attempting to draw strong conclusions from human evolution, including strong conclusions about discontinuities.

I do think there is a reasonably good metaphor in genetic evolution : brains ~ base optimiser : mesa-optimiser, but notice that evolution was able to keep brains mostly aligned for all species except humans. The relation human brain : cultural evolution is very unlike base optimiser : mesa-optimiser.

(Note on AI)

While I mostly wanted to focus on the evolutionary part of the OP, I'm sceptical about the AI claims too. (Paraphrasing: While the current process of AI training is not perfectly efficient, I don't think it has comparably sized overhangs which can be exploited easily.)

In contrast, to me it seems the current way AIs learn is very obviously inefficient compared to what's possible. For example, explain something new to GPT-4, or make it derive something new. Then open a new chat window and probe whether it now knows it. Compare with a human. (A sketch of this probe follows.)
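A minimal sketch of the probe I have in mind, using the OpenAI Python client; the model name, the invented term, and the prompts are placeholders, and the point is only that nothing taught in the first session persists into the second:

```python
# Sketch of the "teach, then probe in a fresh context" test described above.
# Uses the OpenAI Python SDK (openai>=1.0); the model name, the invented
# term, and the prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
lesson = "From now on, 'glorp' means a queue that also supports O(1) random access."

# Session 1: teach the model something new; it can use it while it stays in context.
taught = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": lesson + " What does 'glorp' mean?"}],
)
print(taught.choices[0].message.content)

# Session 2: a fresh context window. The weights were never updated, so
# nothing from session 1 carries over, unlike a human who was taught.
probed = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What does 'glorp' mean?"}],
)
print(probed.choices[0].message.content)
```

A human taught the same thing yesterday would still know it today; the pretrained model only "knows" it while it sits in the prompt.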
 

  1. ^

    Wikipedia, "Mirror neuron" (the quoted passage is the article's summary).
  2. ^

    This does not imply the genetic evolutionary search is a particularly bad optimiser - instead, the landscape is such that there are many sources of negentropy available.


Comments

Caveat: I haven't read the original post.

Re: transgenerational data transfer, the mechanisms you are pointing out are far, far weaker than complex speech:

1. Epigenetics can only affect levels of protein expression. It's not really transferring information, only adaptation. 
2. Beavers building dams for their offspring isn't a form of passing information, it's just shaping the environment, and isn't really going to persist for many generations. If you had evidence that beavers learn to reproduce the dams they grew up in, I'd be more convinced.
3. Mirror neurons only exist in a tiny fraction of species. They also carry no information about WHY something is done or how it works, only how to imitate it. I'd argue that the why and how it works is the more important part of fostering future innovation.

It's a little bit like this meme

Me: can we stop and get food?
Mom: we have food at home
Food at home: @wilfordbrimly

Your objection to the below thus strikes me as a little bit of a nitpick:

"The generation dies, and all of the accumulated products of within lifetime learning are lost."

If it was rephrased to be:

"The generation dies, and iterative improvement to the accumulated products of its within lifetime learning doesn't take place"

Then the argument would still seem to hold.

Executive summary: The author challenges the view that the 'sharp left turn' in evolution happened due to the inability to pass learned information to the next generation, arguing instead that this interpretation is speculative and possibly incorrect, and that alternatives such as the advent of culture should be considered.

Key points:

  1. The post's claim that overcoming intergenerational information loss was pivotal in human evolution is speculative. Many biological channels emerged to transmit information between generations.
  2. Alternative stories posit cultural evolution itself was the pivotal development, enabling a powerful new evolutionary process. The emergence of cultural replicators and their interactions may represent a qualitative change.
  3. We lack enough understanding of what happened with human culture to validate the post's core claim about the evolutionary mechanism. This undermines using it to forecast AI progress.
  4. The metaphor between genetic and cultural evolution is limited. Relations between human brains and cultural evolution are unlike those between base and mesa optimizers.
  5. Current AI learning processes are obviously inefficient compared to human learning. We should be cautious about underestimating inefficiency and potential for rapid capability gains.
  6. The evolutionary analysis does not provide clear implications for forecasting AI progress. We need more rigorous

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
