tl;dr: Reports leaked of an OpenAI breakthrough called Q*, a model acing grade-school math. It is hypothesized to be a combination of Q-learning and A*; OpenAI subsequently disputed the reporting. DeepMind is working on something similar with Gemini: AlphaGo-style Monte Carlo Tree Search. Scaling these techniques might be the crux of planning for increasingly abstract goals and of agentic behavior. The academic community has been circling around these ideas for a while.

https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/ 

https://twitter.com/MichaelTrazzi/status/1727473723597353386

"Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity

Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star), precipitated the board's actions.

Given vast computing resources, the new model was able to solve certain mathematical problems. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success."

https://twitter.com/SilasAlberti/status/1727486985336660347

"What could OpenAI’s breakthrough Q* be about?

It sounds like it’s related to Q-learning. (For example, Q* denotes the optimal solution of the Bellman equation.) Alternatively, referring to a combination of the A* algorithm and Q learning.

One natural guess is that it is AlphaGo-style Monte Carlo Tree Search of the token trajectory. 🔎 It seems like a natural next step: Previously, papers like AlphaCode showed that even very naive brute force sampling in an LLM can get you huge improvements in competitive programming. The next logical step is to search the token tree in a more principled way. This particularly makes sense in settings like coding and math where there is an easy way to determine correctness. -> Indeed, Q* seems to be about solving Math problems 🧮"
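For reference, the Q* notation in the tweet above is standard reinforcement-learning textbook material, not anything OpenAI-specific: Q* denotes the optimal action-value function, the fixed point of the Bellman optimality equation, which tabular Q-learning approximates by iterated updates:

```latex
% Bellman optimality equation; Q* is its fixed point
Q^*(s,a) = \mathbb{E}_{s'}\left[ r(s,a) + \gamma \max_{a'} Q^*(s',a') \right]

% Tabular Q-learning update rule, which converges toward Q*
Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]
```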

https://twitter.com/mark_riedl/status/1727476666329411975

"Anyone want to speculate on OpenAI’s secret Q* project? 

- Something similar to tree-of-thought with intermediate evaluation (like A*)? 

- Monte-Carlo Tree Search like forward roll-outs with LLM decoder and q-learning (like AlphaGo)?

- Maybe they meant Q-Bert, which combines LLMs and deep Q-learning

Before we get too excited, the academic community has been circling around these ideas for a while. There are a ton of papers in the last 6 months that could be said to combine some sort of tree-of-thought and graph search. Also some work on state-space RL and LLMs."
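To make the first of those guesses concrete, here is a minimal sketch of what "tree-of-thought with intermediate evaluation (like A*)" could look like: best-first search over partial chains of thought, ordered by cost-so-far plus a heuristic estimate. The `expand` and `heuristic` callables are hypothetical stand-ins for an LLM sampler and a learned evaluator; nothing here is a confirmed detail of Q*.

```python
import heapq
import itertools

def best_first_search(start, expand, heuristic, is_solved, max_nodes=1000):
    """A*-style best-first search over partial chains of thought.

    expand(state)    -> iterable of (next_state, step_cost), e.g. LLM-sampled next steps
    heuristic(state) -> estimated cost-to-go, e.g. a learned verifier/value model
    is_solved(state) -> True once the chain reaches a checkable correct answer
    """
    tie = itertools.count()  # tiebreaker so states are never compared directly
    frontier = [(heuristic(start), next(tie), 0.0, start)]
    expanded = 0
    while frontier and expanded < max_nodes:
        _, _, g, state = heapq.heappop(frontier)
        expanded += 1
        if is_solved(state):
            return state
        for nxt, step_cost in expand(state):
            g_next = g + step_cost
            heapq.heappush(frontier, (g_next + heuristic(nxt), next(tie), g_next, nxt))
    return None  # budget exhausted without a verified solution
```

The point of the A*-style priority is that weak partial chains get deprioritized before any further generation compute is spent on them.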

https://www.theverge.com/2023/11/22/23973354/a-recent-openai-breakthrough-on-the-path-to-agi-has-caused-a-stir 

OpenAI spokesperson Lindsey Held Bolton refuted it:

"refuted that notion in a statement shared with The Verge: “Mira told employees what the media reports were about but she did not comment on the accuracy of the information.”"

https://www.wired.com/story/google-deepmind-demis-hassabis-chatgpt/ 

Google DeepMind's Gemini, currently GPT-4's biggest rival (its launch was delayed to the start of 2024), is also trying similar things: AlphaZero-based MCTS through chains of thought, according to Hassabis.

Demis Hassabis: "At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models. We also have some new innovations that are going to be pretty interesting."

https://twitter.com/abacaj/status/1727494917356703829

This aligns with DeepMind's chief AGI scientist Shane Legg saying: "To do really creative problem solving you need to start searching."

https://twitter.com/iamgingertrash/status/1727482695356494132

"With Q*, OpenAI have likely solved planning/agentic behavior for small models. Scale this up to a very large model and you can start planning for increasingly abstract goals. It is a fundamental breakthrough that is the crux of agentic behavior. To solve problems effectively next token prediction is not enough. You need an internal monologue of sorts where you traverse a tree of possibilities using less compute before using compute to actually venture down a branch. Planning in this case refers to generating the tree and predicting the quickest path to solution"

My thoughts:

If this is true, and really a breakthrough, it might be what caused the whole chaos: for true superintelligence you need both flexibility and systematicity. Combining the machinery of general and narrow intelligence (I like DeepMind's taxonomy of AGI: https://arxiv.org/pdf/2311.02462.pdf ) might be the path to both general and narrow superintelligence.

Comments:

I don't think it would be very surprising if a giant pile of linear algebra calculations figured out how to do linear algebra calculations.

I guess you're sort of joking, but it should be really surprising (from an outside perspective) that biological brains have figured out how to understand neural networks (and it's taken billions of years of evolution).

Whatever the nature of Q*, there is not much evidence that it could have prompted the Altman firing. It's not clear why very early, preliminary results from a Q* as described would prompt a firing, nor why the firing would be so abrupt, or why it happened when it did if the research happened months ago (and Altman was alluding to it publicly weeks ago), while Sutskever's involvement & the exact timing of the firing appear to be adequately explained by other issues.

As there is still nothing leaking or confirming Q*, I'm increasingly skeptical of its relevance - for something supposedly being demoed and discussed company-wide, if it was so cosmically important or so safety-relevant, you'd think there'd be more meat on the rumor bones by now, and there wouldn't be issues like denying that Murati confirmed the rumors as opposed to merely describing the rumors (which is the sort of garbling that is in line with past leaks like the initial misdescriptions of Sutskever refusing to give examples of Altman's deceptive behavior). This increasingly sounds like a real system (maybe) which has been seized on by hype and yoked to a scandal it's mostly uninvolved with. (Maybe related to issues of candor, but not 'the' reason.)

Discussing Q* seems like a big distraction and waste of time, IMO, when there are better-reported things to discuss (like Altman trying to fire Helen Toner from the OA board).

I'm confused why people are broadcasting such a tiny morsel of news so much, and it's getting so much play on places like Marginal Revolution, while other parts of the recent OA drama don't seem to rate a mention. Nobody seemed to care even 1% as much about the prior OA research like GPT-f or process evaluation...

Thoughts on this? It supposedly shows the leaked letter to the board. But it seems pretty far out, and if true, it's basically game over (AES-192 encryption broken by the AI with new unintelligible maths; the AI proposing a new, more efficient and flexible architecture for itself). Really hope the letter is just a troll!
