
When I was first introduced to AI Safety, coming from a background studying psychology, I kept getting frustrated about the way people defined and used the word "intelligence". They weren't able to address my questions about cultural intelligence, social evolution, and general intelligence in a way I found rigorous enough to be convincing. I felt like professionals couldn't answer what I considered to be basic and relevant questions about a general intelligence, which meant that I took a lot longer to take AI Safety seriously than I otherwise would have. It feels possible to me that other people have run into AI Safety pitches and been turned off by something similar -- a communication failure that arose because both parties approached the conversation with very different background information. I'd love to try to minimize these occurrences, so if anything similar has happened to you, could you please share:

What is something that people pitching AI Safety usually don't seem to understand about your field/background? What's a common place where you feel you've gotten stuck in a conversation with someone pitching AI Safety? What question or piece of information made the conversation stop progressing and start circling?


10 Answers

From an economics perspective, I think claims of double-digit GDP growth are dubious and undermine the credibility of the short AI timelines crowd. Here is a good summary of why it seems so implausible to me. To be clear, I think AI risk is a serious problem and I'm open to short timelines. But we shouldn't be forecasting GDP growth, we should be forecasting the thing we actually care about: the possibility of catastrophic risk. 
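
For scale, here is a quick doubling-time calculation (a rough sketch; the growth rates are illustrative, not taken from the linked summary):

```python
import math

# Doubling time for steady annual growth g: t = ln(2) / ln(1 + g)
for g in (0.02, 0.03, 0.10, 0.30):
    years = math.log(2) / math.log(1 + g)
    print(f"{g:.0%} growth per year -> output doubles every {years:.1f} years")
```

At the 2-3% rates rich economies have actually sustained, doubling takes decades; double-digit rates would have the world economy doubling every seven years or faster, which has no precedent at the global level.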

(This is a point of active disagreement where I'd expect e.g. some people at OpenPhil to believe double-digit GDP growth is plausible. So it's more of a disagreement than a communication problem, but one that I think will particularly push away people with backgrounds in economics.)

I join you in strongly disagreeing with people who say that we should expect unprecedented GDP growth from AI which is very much like AI today but better. OTOH, at some point we'll have AI that is like a new intelligent species arriving on our planet, and then I think all bets are off.

Psychology/anthropology:

The misleading human-chimp analogy: AI will stand in relation to us the same way we stand in relation to chimps. I think this analogy basically ignores how humans have actually developed knowledge and power--not by rapid individual brain changes, but by slow, cumulative cultural changes. In turn, the analogy may lead us to make incorrect predictions about AI scenarios.

Well, human brains are about three times the mass of chimp brains, diverged from our most recent common ancestor with chimps about 6 million years ago, and have evolved a lot of distinctive new adaptations such as language, pedagogy, virtue signaling, art, music, humor, etc. So we might not want to put too much emphasis on cumulative cultural change as the key explanation for human/chimp differences.

Oh totally (and you probably know much more about this than me). I guess the key thing I'm challenging is the idea that there was something like a very fast transfer of power resulting just from upgraded computing power moving from chimp-ancestor brain -> human brain (a natural FOOM), which the discussion sometimes suggests. My understanding is that it's more like the new adaptations allowed for cumulative cultural change, which allowed for more power.

Aris -- great question. 

I'm also in psychology research, and I echo your frustrations about a lot of AI research having a very vague, misguided, and outdated notion of what human intelligence is. 

Specifically, psychologists use 'intelligence' in at least two ways: (1) it can refer (e.g. in cognitive psychology or evolutionary psychology) to universal cognitive abilities shared across humans, but (2) it can also refer (in IQ research and psychometrics) to individual differences in cognitive abilities. Notably 'general intelligence' (aka the g factor, as indexed by IQ scores) is a psychometric concept, not a description of a cognitive ability. 
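
As a toy illustration of meaning (2), here is a sketch with made-up numbers (not real test data): the g factor is just a statistical summary of the positive correlations among many test scores, for example the first principal component of their correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 6

# Simulated subtest scores that share one common source of variance
# plus test-specific noise; the loadings are arbitrary illustrations.
common = rng.normal(size=n_people)
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5, 0.6])
scores = np.outer(common, loadings) + rng.normal(scale=0.6, size=(n_people, n_tests))

# "g" here is nothing more than the dominant component of the inter-test
# correlation matrix: a summary of the positive manifold, not a faculty.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)
print(f"First component explains {eigvals[-1] / eigvals.sum():.0%} of the variance")
```

Nothing in that toy model contains a 'general intelligence' module; the factor is a property of the correlations among scores, which is exactly why it doesn't translate directly into claims about a distinct, domain-general cognitive ability.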

The idea that humans have a 'general intelligence' as a distinctive mental faculty is a serious misunderstanding of the last 120 years of intelligence research, and makes it pretty confusing when AI researchers talk about 'Artificial General Intelligence'.

(I've written about these issues in my books 'The Mating Mind' and 'Mating Intelligence', and in lots of papers available here, under the headings 'Cognitive evolution' and 'Intelligence'.)

Seems like the problem is that the field of AI uses a different definition of intelligence? Chapter 4 of Human Compatible:

Before we can understand how to create intelligence, it helps to understand what it is. The answer is not to be found in IQ tests or even in Turing tests, but in a simple relationship between what we perceive, what we want, and what we do. Roughly speaking, an entity is intelligent to the extent that what it does is likely to achieve what it wants, given what it has perceived.

To me, this definition seems much broader than g factor. As a...

Geoffrey Miller
Yes, I think we're in agreement -- the Stuart Russell definition is much closer to my meaning (1) for 'intelligence' (i.e. a universal cognitive ability shared across individuals) than to my meaning (2) for 'intelligence' (i.e. the psychometric g factor). The trouble comes mostly when the two are conflated, e.g. when we imagine that 'superintelligence' will basically be like an IQ 900 person (whatever that would mean), or when we confuse 'general intelligence' as indexed by the g factor with truly 'domain-general intelligence' that could help an agent do whatever it wants to achieve, in any domain, given any possible perceptual input. There's a lot more to say about this issue; I should write a longer form post about it soon.

Philosophy: Agency

While agency is often invoked as a crucial step in an AI or AGI becoming dangerous, I often find pitches for AI safety oscillate between a very deflationary sense of agency that does not ground worries well (e.g. "Able to represent some model of the world, plan and execute plans") and more substantive accounts of agency (e.g. "Able to act upon a wide variety of objects, including other agents, in a way that can be flexibly adjusted as it unfolds based on goal-representations").

I'm generally unsure whether agency is a useful term for the debate, at least when engaging with philosophers, as it comes with a lot of baggage that is not relevant to AI safety.

Exponential growth does not come easily, and real-life exponentials crap out. You cannot extrapolate growth carelessly.

People's time and money are required to deliver each 1.5x improvement in hardware, yet this is treated as if it comes from some law of nature. In 40 years, I have seen transistor line widths go from 1 micron to 5 nanometers, a factor of 200. Each time line widths have shrunk by a factor of 1.4, it took a great deal of money and effort to make it happen. In 40 years, armies of talented engineers have shrunk the line width by 2.3 orders of magnitude.
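
To make that arithmetic explicit (a quick sketch using the same numbers):

```python
import math

start_nm, end_nm = 1000.0, 5.0            # 1 micron down to 5 nanometers
total_shrink = start_nm / end_nm          # 200x, about 2.3 orders of magnitude
steps = math.log(total_shrink) / math.log(1.4)   # number of ~1.4x reductions

print(f"{total_shrink:.0f}x total shrink = {math.log10(total_shrink):.1f} orders of magnitude")
print(f"about {steps:.0f} successive 1.4x shrinks over ~40 years, "
      f"roughly one new node every two to three years")
```

Each of those ~16 steps corresponds roughly to one process node, and each was a major engineering program in its own right.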

Exponential growth occurs when a process has feedback and fuel to spare.  It always stops.   The fuel required to feed exponential growth also grows exponentially.  Either the feedback mechanism is interrupted, or the fuel source is overwhelmed.  

Dennard scaling quit 16 years ago. It was the engine of Moore's law. We now design a patchwork of workarounds to keep increasing transistor counts and preserve the trend observed by Gordon Moore.

People point to exponential growth and extrapolate it as if it were a law of nature. Exponential growth cannot be projected into the future without serious consideration of both the mechanism driving growth and the impediments that will curb it. Graphs that rely on extrapolating an exponential trend by orders of magnitude are optimistic at best.
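
To make the point concrete, here is an illustrative sketch with made-up parameters (not a model of any particular technology): a resource-limited process versus a naive extrapolation of its early exponential phase.

```python
import numpy as np

t = np.arange(0, 61)                  # years
r, capacity, x0 = 0.5, 1e6, 1.0       # made-up growth rate, resource limit, starting value

# A resource-limited (logistic) process looks exponential early on,
# then flattens as the fuel driving the feedback loop runs out.
limited = capacity / (1 + (capacity / x0 - 1) * np.exp(-r * t))

# Naive extrapolation of the early exponential trend, with no limit.
naive = x0 * np.exp(r * t)

for year in (10, 20, 30, 40, 50):
    print(f"year {year}: naive extrapolation overshoots by {naive[year] / limited[year]:,.1f}x")
```

The two curves are nearly indistinguishable for the first couple of decades; the divergence only appears once the limit starts to bind, which is precisely the part a trend line fitted to past data cannot see.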

From an engineering perspective: the way AIS folk talk about AI is based on philosophical argument and armchair reasoning. This is not how engineers think. Physical things in the world are not built by whoever has the best argument and can write the best blog post, but by making lots of tradeoffs between different constraints. I think this has two major effects. The first is that people with lived experience of building things in the physical world, especially at scale, will simply not engage with a lot of the material produced by AIS folk. The second is that AIS folk hand-wave away a lot of details that are actually very important from an engineering perspective and only engage with the most exciting high-level abstract ideas. Usually it is the small, boring details that are not very exciting to think about that determine how well things work in the physical world.

My work (for a startup called Kebotix) aims to use and refine existing ML methods to accelerate scientific and technological progress, focused specifically on discovery of new chemicals and materials.

Most descriptions of TAI in AIS pitches route through essentially the same approach, claiming that smarter AI will be dramatically more successful than our current efforts, bringing about rapid economic growth and societal transformation, usually en route to claiming that the incentives will be astronomical to deploy quickly and unsafely.

However, this step often gets very little detailed attention in that story. Little thought is given to explicating how that would actually work in practice, and, crucially, whether intelligence is even the limiting factor in scientific and technological progress. My personal, limited experience is that better algorithms are rarely the bottleneck.

whether intelligence is even the limiting factor in scientific and technological progress. 

My personal, limited experience is that better algorithms are rarely the bottleneck.

 

Yeah, in some sense everything else you said might be true or correct.

But I suspect that by "better algorithms" you're thinking along the lines of "What's going to work as a classifier, is this gradient booster with these parameters going to work robustly for this dataset?", "More layers to reduce false negatives has huge diminishing returns, we need better coverage and id...

Sam Elder
What I mean by "better algorithms" is indeed in the narrow sense of better processes of taking an existing data set and generating predictions. You could indeed also define "better algorithms" much more broadly to encompass everything that everyone in a company does from the laboratory chemist tweaking a faulty instrument to the business development team pondering an acquisition to the C-suite deciding how to navigate the macroeconomic environment. And in that sense, yes, better algorithms would always be the bottleneck, but that would also be a meaningless statement.

What were/are your basic and relevant questions? What were AIS folks missing?

It's been a while since then, but from what I remember, my questions were generally in the same range as the framing highlighted by user seanrson above!
I've also heard objections from people who've felt that predictions about AGI from biological anchors don't understand the biology of a brain well enough to be making calculations. Ajeya herself even caveats: "Technical advisor Paul Christiano originally proposed this way of thinking about brain computation; neither he nor I have a background in neuroscience and I have not attempted to talk to neuros...

Thanks for writing this post; it's a very succinct way to put it (I've struggled to formulate and raise this question with the AI community).

My personal opinion is that AI research, like many other fields, relies on "the simplest definition" of concepts that it can get away with for notions that lie outside the field. This is not a problem in itself, as we can't all be PhDs in every field (not that this would solve the problem). However, my view is that there are many instances where AI research and findings rely on axioms, or yield results, that presuppose specific interpretations of concepts (intelligence, agency, human psychology, neuroscience, etc.), interpretations that are speculative or at least far from the average view in those fields. This is not helped by the fact that many of these terms do not have consensus definitions in their own fields. I think that when presenting AI ideas and pitches, many people overlook the nuance required to formulate and explain AI research given such assumptions. This is especially important for AI work that is no longer theoretical or purely about solving algorithmic or learning problems, but extends to other scientific fields and broader society (e.g. AI safety).

From politics: there is an absolutely pivotal issue which EA and STEM types tend to be oblivious to. This is the role of values and ideology in defining what is possible.

For example, it's obvious in a vacuum that the miracles of automation could allow all humans to live free of poverty, and even free of the need to work. ...until conservative ideology enters the picture, that is.

When my conservative mother talks about AI, she doesn't express excitement that machine-generated wealth could rapidly end poverty, disease, and such. She expresses fear that AI could leave everyone to starve without jobs.

Why? Because granting everyone rights to the machine-generated wealth would be anti-capitalist. Because solving the problem would, by definition, be anti-capitalist. It would deny capitalists the returns on their investments which conservatives regard as the source of all prosperity.

To a conservative, redistribution is anathema because prosperity comes from those who own wealth, and the wealthiest people have proven that they should be trusted to control that wealth because they've demonstrated the competence necessary to hoard so much wealth so effectively.

Meanwhile, redistribution would transfer power from the worthy to the unworthy, and thereby violate the hierarchy from which conservatives believe all prosperity springs.

It's circular logic: the rich deserve wealth and other forms of power because they control wealth, and the poor don't deserve wealth because they don't own wealth. This isn't a conclusion reached through logic - it's a conclusion reached through a combination of 1) rationalized greed and 2) bombardment with conservative media.

To a hardened conservative, this belief that existing hierarchies are inherently just and valuable is the core belief which all other beliefs are formed in service of. All those other beliefs retcon reality into perceived alignment with this central delusion.

This is also why conservatives think so highly of charity (as opposed to mandatory redistribution): charity redistributes wealth only at the voluntary discretion of the wealthy, and grants the power to allocate wealth in proportion to how much wealth a person owns. To a conservative, this is obviously the best possible outcome, because wealth will be allocated according to the sharp business sense of the individuals who have proven most worthy of the responsibility.

Of course, in practice, power serves itself, and the powerful routinely exploit their wealth to manufacture mass cultural delusions in service of their greed. See climate denial, crypto hype, trickle-down economics, the marketing of fossil gas as a "clean" "transition fuel", the tobacco industry's war on truth, the promotion of electric cars over public transport that could actually reduce energy consumption, the myth of conservative "fiscal responsibility" after the debt was blown up by both Reagan and Trump (and Mulroney here in Canada), the framing of conservative policy as "pro-growth" as if postwar high-tax policy didn't bring about rapid economic expansion and public prosperity, the Great Barrington Declaration and other anti-science pandemic propaganda efforts, and the endless stream of money poured into "free-market" "think tank" corporate propaganda outlets. Of course, there are countless other examples, but I'll stop there.

If we solve alignment but leave conservatives with the power to command AI to do whatever they want, then AI won't be used for the benefit of all. Instead, it will be exploited by those who own the legal rights to the tech's output. And all our alignment work will be for nothing, or next to nothing.

Obviously, then, we must redesign our political systems to value human (or sapient beings') rights over property rights - a project as inherently progressive and anti-conservative as EA itself. The alternative is corporate totalitarianism.
