
Jan_Kulveit

4275 karma · Joined Dec 2017

Bio

Studying behaviour and interactions of boundedly rational agents, AI alignment and complex systems.

Research fellow at Future of Humanity Institute, Oxford. Other projects: European Summer Program on Rationality. Human-aligned AI Summer School. Epistea Lab.

Sequences (1)

Learning from crisis

Comments (213)

FWIW ... in my opinion, retaining the property might have been a more beneficial decision. 

Also, I think some people working in the space should not make an update against plans like "have a permanent venue", but plausibly should make some updates about the "major donors". My guess is this almost certainly means Open Philanthropy, and it also seems likely they had most of the actual power in this decision.

Before delving further, it's important to outline some potential conflicts of interest and biases:
- I co-organized or participated in multiple events at Wytham. For example, in 2023, ACS organized a private research retreat aimed at increasing the surface area between the Active Inference and AI Alignment communities. The event succeeded in attracting some of the best people from both sides and was pretty valuable for the direction of alignment research I care about, and the Oxford location was very useful for that. I regret that running events like that will be more difficult in the future.
- I have friends in all the orgs and on all the sides involved - the Wytham project, Open Phil, EV, EAs who disapproved of the purchase, ...
- I lead an org funded by Open Philanthropy 
- I also lead an org which was fiscally sponsoring a different venue-purchase project, funded by an FTX regrant (I won't comment on that for legal reasons)

Also, without more details being published, my current opinion is personal speculation, partially based on my reading of the vibes.

My impression from a distance is that part of the decision was driven by a factor which I think should not be given undue weight, and by a factor where I likely disagree.

The factor where I possibly disagree is aesthetics. As far as I can tell, the currently preferred EA aesthetic is something closer to how the recent EAG Bay looked. At EAG Bay, my impression of the venue's vibes was... quite dystopian - the main space was a giant hall with unpleasant artificial light, no natural light, no colours, and endless rows of identical black tables utilized by people having endless rows of 1:1s. In some vague aesthetic space, the nearby vibe vectors are faceless bureaucracies, borgs, and sci-fi portrayals of heartless technocratic baddies. Also something about naive utilitarianism and the army.

Wytham seemed to stand in stark contrast to this aesthetic: the building was old and full of quirks. The vibes were more like an old Oxford college.

The factor which I would guess was part of the decision, and which I suspect had weight, was PR concerns. Wytham definitely got some negative coverage in traditional media, on social media, and on this forum.

What I dislike about this is that these concerns often seemed to operate mostly at Simulacra levels 3 and 4, detached from the reality of running events in Oxford or from actual concern about costs. (Why do I think so? Because of the approximately zero amount of negative PR, forum criticism, etc. that anyone or anything in the ecosystem gets for renting properties, even when they are more expensive per day or per person.)

To be clear 
- I don't think these were the only or main(?) factors. 
- I would expect there also exists somewhere a spreadsheet with estimates of the "value" of events at Wytham. If that's the case, I probably also disagree with some of the generative opinions about what's valuable.

Still, given the amount of speculative criticism the purchase of Wytham generated on the forum, it seems good for transparency to also express a critical view of the sale.

In my view the basic problem with this analysis is that you probably can't lump all the camps together and evaluate them as one entity. Format, structure, leadership, and participants seem to have been very different.

This is based on public criticisms of their work, and also on reading some documents about a case where we were deciding whether to admit someone to some event (and they forwarded their communication with CH). It's limited evidence, but still some evidence.

 

This is a bit tangential/meta, but looking at the comment counter makes me want to express gratitude to the Community Health Team at CEA. 

I think here we see a 'practical demonstration' of the counterfactuals of their work:
- an insane amount of attention sucked up by this
- the court of public opinion on forums seems basically strictly worse on all relevant dimensions, like fairness, respect for privacy, or compassion toward the people involved

As 'something like this' would quite often be the counterfactual to CH trying to deal with stuff, it makes clear how much value they are creating by dealing with these problems, even if their process is imperfect.

Sorry for the delay in response.

Here I look at it from a purely memetic perspective - you can imagine the thinking as a self-interested memeplex. Note I'm not claiming this is the main useful perspective, or that this should be the main perspective to take.

Basically, from this perspective

* the more people think about the AI race, the easier it is to imagine AI doom. Also, the specific artifacts produced by the AI race make people more worried - ChatGPT and GPT-4 likely did more for normalizing and spreading worries about AI doom than all previous AI safety outreach combined.

The more the AI race is a clear reality people agree on, the more attention and brainpower you will get.

* but also from the opposite direction: one of the central claims of the doom memeplex is that AI systems will be incredibly powerful in our lifetimes - powerful enough to commit omnicide, take over the world, etc. - and that their construction is highly convergent. If you buy into this, and you are a certain type of person, you are pulled toward "being in this game". Subjectively, it's much better if you - the risk-aware, pro-humanity player - are at the front. Elon Musk's safety concerns leading to the founding of OpenAI likely did more to advance AGI than all the advocacy of Kurzweil-type accelerationists up to that point...

Empirically, the more people buy into "single powerful AI systems are incredibly dangerous", the more attention goes toward work on such systems.

Both memeplexes share a decent amount of maps, which tend to work as blueprints or self-fulfilling prophecies for what to aim for.


 

Personally, I think the 1:1 meme is deeply confused.

A helpful analogy (thanks to Ollie Base) is with nutrition. Imagine someone hearing that "chia seeds are the most nutritionally valuable food, top-rated in surveys" ... and subsequently deciding to eat just chia seeds, and nothing else!

In my view, sort of obviously, an intellectual conference diet consisting of just 1:1s is poor and unhealthy for almost everyone.

In my view this is a bad decision. 

As I wrote on LW 

Sorry, but my rough impression from the post is that you seem to be at least as confused about where the difficulties are as the average alignment researcher you think is not on the ball - and the style of somewhat strawmanning everyone & strong words is a bit irritating.

In particular, I don't appreciate the epistemics of these moves taken together:

1. Appeal to seeing things from close proximity: "Then I got to see things more up close. And here's the thing: nobody's actually on the friggin' ball on this one!"
2. Strawmanning and weakmanning what almost everyone else thinks and is doing.
3. Use of emotionally compelling words like 'real science' for vaguely defined subjects where the content may be the opposite of what people imagine. Is the empirical, alchemy-style ML type of research what's advocated for as the real science?
4. An overall tone that sounds more like the aim is to persuade, rather than to explain.

I think curating this signals that this type of bad epistemics is fine, as long as you strawman and misrepresent others in a legible way and your writing is persuasive. Also that there is no need to actually engage with existing arguments; you can just claim to have seen things more up close.

Also, to what extent are moderator decisions influenced by status and centrality in the community? If someone new and non-central to the community came up with this brilliant set of ideas for how to solve AI safety:
1. Everyone working on it is not on the ball. Why? They are all working on the wrong things!
2. What's promising is to do something very close to how empirical ML capabilities research works.
3. This is a type of problem where you can just throw money at it and attract better ML talent.
...I doubt it would have a high chance of becoming curated.

Copy-pasting here from LW.

Sorry, but my rough impression from the post is that you seem to be at least as confused about where the difficulties are as the average alignment researcher you think is not on the ball - and the style of somewhat strawmanning everyone & strong words is a bit irritating.

Maybe I'm getting it wrong, but it seems the model you have for why everyone is not on the ball is something like "people are approaching it too much from a theory perspective, and the promising approach is very close to how empirical ML capabilities research works" & "this is a type of problem where you can just throw money at it and attract better ML talent".

I don't think these two insights are promising.

Also, again, maybe I'm getting it wrong, but I'm confused about how similar you imagine the current systems to be to the dangerous systems. Either the superhuman-level problems (e.g. not lying in a way no human can recognize) are somewhat continuous with current problems (e.g. not lying), in which case it is possible to study them empirically, or they are not. But different parts of the post seem to point in different directions. (Personally I think the problem is somewhat continuous, but many of the human-in-the-loop solutions are not, and just break down.)

Also, given what you find promising, I'm confused about what you think the 'real science' to aim for is - on one hand, it seems you think the closer something is to how ML is done in practice, the more of a real science it is; on the other hand, in your view all deep learning progress has been empirical, often via dumb hacks and intuitions (which isn't true, imo).

(crossposted from Alignment Forum)

While the claim - that the task 'predict the next token on the internet' absolutely does not imply learning caps at human-level intelligence - is true, some parts of the post, and some of the reasoning leading to the claims at its end, are confused or wrong.

Let’s start from the end and try to figure out what goes wrong.

GPT-4 is still not as smart as a human in many ways, but it's naked mathematical truth that the task GPTs are being trained on is harder than being an actual human.

And since the task that GPTs are being trained on is different from and harder than the task of being a human, it would be surprising - even leaving aside all the ways that gradient descent differs from natural selection - if GPTs ended up thinking the way humans do, in order to solve that problem.

From a high-level perspective, it is clear that this is just wrong. Part of what human brains are doing is minimising prediction error with regard to sensory inputs. The unbounded version of that task is basically of the same generality and difficulty as what GPT is doing, and is roughly equivalent to understanding everything that is understandable in the observable universe. For example: a friend of mine worked on analysing data from the LHC, leading to the Higgs detection paper. Doing this type of work basically requires a human brain to have a predictive model of aggregates of the outputs of a very large number of high-energy particle collisions, processed by a complex configuration of computers and detectors.


Where GPT and humans differ is not in some general mathematical fact about the task, but in what sensory data a human and GPT are each trying to predict, and in their cognitive architectures and the ways the systems are bounded. The different landscape of boundedness and architecture can lead both to convergent cognition (thinking as the human would) and to the opposite: predicting what the human would output in a highly non-human way.

Boundedness is overall a central concept here. Neither humans nor GPTs are attempting to solve 'how to predict stuff with unlimited resources'; both face a problem of cognitive economy - how to allocate limited computational resources to minimise prediction error.
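To make the objective side of this concrete, here is a minimal sketch (mine, not from the original post) of what 'minimising prediction error' means for a GPT-style model: the training signal is just cross-entropy on the next token, i.e. how much probability mass the model's bounded computation managed to place on what actually came next. The function name and the toy numbers are illustrative, not any real implementation.

```python
import numpy as np

def next_token_loss(logits: np.ndarray, true_token: int) -> float:
    """Cross-entropy loss for a single next-token prediction.

    logits: unnormalised scores over the vocabulary produced by the model.
    true_token: index of the token that actually followed in the training text.
    """
    # softmax: turn scores into a probability distribution over the vocabulary
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # loss is low only if the model concentrated probability on the true token
    return float(-np.log(probs[true_token]))

# Toy usage: a 5-token vocabulary, with the largest share of mass on token 2
print(next_token_loss(np.array([0.1, 0.2, 1.5, -0.3, 0.0]), true_token=2))
```

The point relevant to the argument: the loss only rewards allocating probability well under whatever computation the model can afford, which is exactly a cognitive-economy problem rather than an unbounded one.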
 

Or maybe simplest:
 Imagine somebody telling you to make up random words, and you say, "Morvelkainen bloombla ringa mongo."

 Imagine a mind of a level - where, to be clear, I'm not saying GPTs are at this level yet -

 Imagine a Mind of a level where it can hear you say 'morvelkainen blaambla ringa', and maybe also read your entire social media history, and then manage to assign 20% probability that your next utterance is 'mongo'.

The fact that this Mind could double as a really good actor playing your character, does not mean They are only exactly as smart as you.

 When you're trying to be human-equivalent at writing text, you can just make up whatever output, and it's now a human output because you're human and you chose to output that.

 GPT-4 is being asked to predict all that stuff you're making up. It doesn't get to make up whatever. It is being asked to model what you were thinking - the thoughts in your mind whose shadow is your text output - so as to assign as much probability as possible to your true next word.

 

If I try to imagine a mind which is able to predict my next word when asked to make up random words, and to succeed at assigning 20% probability to my true output, I'm firmly in the realm of weird and incomprehensible Gods. If the Mind is imaginably bounded and smart, it seems likely it would not devote much cognitive capacity to modelling in detail strings prefaced by a context like 'this is a list of random numbers', particularly if inverting the process generating the numbers seems really costly. Being this good at this task would require so much data and cheap computation that it is way beyond superintelligence, in the realm of philosophical thought experiments.

Overall I think it is a really unfortunate way to think about the problem, where a system which is moderately hard to comprehend (like GPT) is replaced by something much more incomprehensible. It also seems a bit of a reverse intuition pump - I'm pretty confident most people's intuitive thinking about this 'simplest' thing will be utterly confused.

How did we get here?

 

 A human can write a rap battle in an hour.  A GPT loss function would like the GPT to be intelligent enough to predict it on the fly.

 

Apart from the fact that humans are also able to rap battle or improvise on the fly, notice that 'what the loss function would like the system to do' in principle tells you very little about what the system will do. For example, the human 'loss function' makes some people attempt to predict winning lottery numbers. This is an impossible task for humans, and you can't say much about the human based on it. Or you can speculate about minds which would be able to succeed at this task, but you soon get into the realm of Gods and outside of physics.
 

Consider that sometimes human beings, in the course of talking, make errors.

GPTs are not being trained to imitate human error. They're being trained to *predict* human error.

Consider the asymmetry between you, who makes an error, and an outside mind that knows you well enough and in enough detail to predict *which* errors you'll make.


Again, from the cognitive economy perspective, predicting my errors would often be wasteful. With some simplification, you can imagine I make two types of errors - systematic and random. Often the simplest way to predict a systematic error is to emulate the process which led to the error. Random errors are ... random, and a mind which knows me in enough detail to predict which random errors I'll make seems a bit like the mind predicting the lottery numbers.
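A toy illustration of this split (my own, not from the comment): with purely random errors, even a predictor that knows my error rate exactly cannot, on average, do better than the entropy of the noise; only systematic errors can be predicted by emulating their cause. The error rate below is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
error_rate = 0.1                      # assumed probability of a random slip
errors = rng.random(100_000) < error_rate

# Best a bounded predictor can do for pure noise: always output the true rate.
p = error_rate
optimal_loss = -np.mean(np.where(errors, np.log(p), np.log(1 - p)))

# Entropy of the noise: the floor no predictor can beat on average.
entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))

print(f"average log-loss of the rate-knowing predictor: {optimal_loss:.4f}")
print(f"entropy of the random error process:            {entropy:.4f}")
```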

Consider that somewhere on the internet is probably a list of thruples: <product of 2 prime numbers, first prime, second prime>.

GPT obviously isn't going to predict that successfully for significantly-sized primes, but it illustrates the basic point:

There is no law saying that a predictor only needs to be as intelligent as the generator, in order to predict the generator's next token.
 

 The general claim that some predictions are really hard and you need superhuman powers to be good at them is true, but notice that this does not inform us about what GPT-x will learn. 
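As a small illustration of that asymmetry (my own toy sketch, not from the post): producing such a triple is trivial for the generator, while predicting the continuation from the product alone requires factoring, which is believed to be hard for large semiprimes. The helper names and the bit size are made up for the example.

```python
from sympy import factorint, randprime

def generate_triple(bits: int = 32) -> str:
    """Cheap for the generator: pick two primes and multiply them."""
    p = randprime(2 ** (bits - 1), 2 ** bits)
    q = randprime(2 ** (bits - 1), 2 ** bits)
    return f"<{p * q}, {p}, {q}>"

def predict_continuation(n: int) -> str:
    """Expensive for the predictor: recover the primes from the product."""
    p, q = sorted(factorint(n))  # assumes the two primes are distinct
    return f"{p}, {q}>"

line = generate_triple()
print("generated:", line)
product = int(line[1:].split(",")[0])
print("predicted:", f"<{product}, " + predict_continuation(product))
```

At 32 bits this runs instantly; at cryptographic sizes the generator's side stays cheap while the predictor's side becomes intractable, which is the asymmetry the quoted passage points at.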
 

Imagine yourself in a box, trying to predict the next word - assign as much probability mass to the next token as possible - for all the text on the Internet.

Koan:  Is this a task whose difficulty caps out as human intelligence, or at the intelligence level of the smartest human who wrote any Internet text?  What factors make that task easier, or harder?  


Yes this is clearly true: in the limit the task is of unlimited difficulty.  

 

You are correct in some of the criticism, but as a side note, completeness is actually crazy.

All real agents are bounded and pay non-zero costs for bits, and as a consequence don't have complete preferences. Complete agents do not exist in the real world. If they existed, the correct intuitive model of them wouldn't be 'rational players' but 'utterly scary god, much bigger than the universe it lives in'.
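For reference, a sketch of the completeness axiom being pointed at (the standard decision-theory formulation; the notation here is mine):

```latex
% Completeness: the agent can already rank every pair of options.
\forall x, y \in X:\quad x \succeq y \;\;\text{or}\;\; y \succeq x
```

A bounded agent that has never compared, or spent any bits computing anything about, some pair x, y satisfies neither disjunct, which is the sense in which completeness is a wildly strong assumption rather than a mild rationality axiom.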
