Baba Is Eval
It has the same problem with playing chess. But I’m not sure if there is a datatype it could work with for this kinda game. Currently it seems more like LLMs can’t really work on spatial problems. But this should actually be something that can be fixed (pretty sure I saw an article about it on HN recently).
LLMs might be used to translate requests into keywords, but I didn’t think LLMs themselves did any of the image generation.
Am I wrong here?
I’ve created MCP servers that can scrape websites but that doesn’t mean the LLM itself can make HTTP calls.
The reason I make this distinction is because someone claimed that LLMs can read images. But they don’t. They act as an agent for another model that reads images and creates metadata from it. LLMs then turn that metadata into natural language.
The LLM itself doesn’t see any pixels. It sees textual information that another model has provided.
Edit: reading more about this online, it seems LLMs can work with pixel level data. I had no idea that was possible.
My apologies.
E: I found the paper: https://arxiv.org/pdf/2010.11929
We use standard learnable 1D position embeddings, since we have not observed significant performance gains from using more advanced 2D-aware position embeddings (Appendix D.4).
Although it looks like that was just ImageNet so maybe this isn't that surprising.
For LLMs we only have one axis of position and, more importantly, the vast majority of training data is oriented only along that one axis.
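For concreteness, here's a rough sketch (PyTorch, toy sizes I made up, not the paper's) of the two options the ViT authors compare: a single learned table over flattened patch positions versus a 2D-aware variant built from separate row and column tables:

    import torch
    import torch.nn as nn

    # Toy sizes for illustration only: a 6x6 patch grid, embedding dim 64.
    rows, cols, dim = 6, 6, 64

    # 1D: one learned vector per flattened patch index (what ViT ends up using).
    pos_1d = nn.Parameter(torch.randn(rows * cols, dim) * 0.02)

    # A 2D-aware variant: learn per-row and per-column tables of size dim/2
    # and concatenate them, so each patch's embedding encodes its (row, col).
    row_emb = nn.Parameter(torch.randn(rows, dim // 2) * 0.02)
    col_emb = nn.Parameter(torch.randn(cols, dim // 2) * 0.02)

    def pos_2d(r, c):
        return torch.cat([row_emb[r], col_emb[c]], dim=-1)

    # Either way, the result is one vector per patch that gets added to the
    # patch embedding; the paper reports no significant gain from the 2D-aware
    # version (Appendix D.4).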
In some ways, this reminds me of the history of AI Go (board game). But the resolution there was MCTS, which wasn't at all what we wanted (insofar as MCTS is not generalizable to most things).
However, most levels can be expressed as a few intermediate goals
I think generally the whole thing with puzzle games is that you have to determine the “right” intermediate goals. In fact, the naive intermediate goals are often entirely wrong!
A canonical sokoban-like inversion might be where you have to push two blocks into goal areas. You might think “ok, push one block into its goal area and then push the other block into its own.”
But many of these games have mechanisms meaning you would first want to push one block into its goal, then undo that for some reason (it might activate some extra functionality), push the other block, and then finally go back and redo the first one.
There are always weird tricks that mean you’re going to walk backwards before walking forwards. I don’t think it’s impossible for these things to stumble into them, though. They just might spin a lot of cycles to get there (humans do too, I guess).
To me, those discoveries are the fun part of most puzzle games. When you unlock the "trick" for each level and the dopamine flies, heh.
The approach works in big part because these kinds of puzzles intentionally have very few degrees of freedom. Every element of the puzzle has a specific role to play, so if you look at a piece at random and notice something interesting about it, it's almost certain that this aspect is a part of the solution.
This is very similar to math and physics problems in school as well: they're intentionally structured to contain exactly the minimum amount of data that's necessary for a single solution; take anything away, and the problem is unsolvable. Solving these, students are taught to check if they used all data provided - if they didn't, it means their solution is wrong. A less obvious realization is that this also lets you make educated guesses at the solution - since every piece of information has, by design, a specific role to play, you can start guessing what those roles are; if those guesses start to connect into a structure, it's highly likely you've just identified the middle part of the solution.
Non-puzzle games, as well as real-world scenarios are, unfortunately, almost always underconstrained and full of irrelevant information; most things you see are there for random reasons, entirely unrelated to the problem you're solving. However, they may still be useful, so it's worth looking around anyway.
But the resolution there was MCTS
MCTS wasn't _really_ the solution to go. MCTS-based AIs existed for years and they weren't _that_ good. They weren't superhuman for sure, and the moves/games they played were kind of boring.
The key to doing go well was doing something that vaguely looks like MCTS, but the real guts are a network that can answer "who's winning?" and "what are good moves to try here?", using that to guide the search. Additionally essential was realizing that computation (run search for a while) with a bad model could be effectively+efficiently used to generate better training data to train a better model.
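Roughly, the selection step in that kind of network-guided search looks something like this (a sketch only; the Node fields here are illustrative, not any particular implementation's):

    import math
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        prior: float              # policy net: "is this a good move to try here?"
        visit_count: int = 0
        value_sum: float = 0.0    # backed-up value estimates: "who's winning?"
        children: list = field(default_factory=list)

    def select_child(node, c_puct=1.5):
        # PUCT-style rule: exploit a high average value (Q), but give the policy
        # prior a large say for rarely visited moves (U).
        total = sum(ch.visit_count for ch in node.children)
        def score(ch):
            q = ch.value_sum / ch.visit_count if ch.visit_count else 0.0
            u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visit_count)
            return q + u
        return max(node.children, key=score)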
Additionally essential was realizing that computation (run search for a while) with a bad model could be effectively+efficiently used to generate better training data to train a better model.
That has been known since at least the 1990s with TD-Gammon beating the world champions in Backgammon. See eg http://incompleteideas.net/book/ebook/node108.html or https://en.wikipedia.org/wiki/TD-Gammon
In a sense, classic chess engines do that, too: alpha-beta-search uses a very weak model (eg just checking for checkmate, otherwise counting material, or what have you) and search to generate a much stronger player. You can use that to generate data for training a better model.
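Something like the following, as a sketch; the game interface (legal_moves, apply, is_terminal, material) is hypothetical, and the static evaluation is deliberately dumb:

    def negamax(state, depth, alpha=float("-inf"), beta=float("inf")):
        # The "very weak model": a static material count at the leaves.
        if depth == 0 or state.is_terminal():
            return state.material()
        best = float("-inf")
        for move in state.legal_moves():
            value = -negamax(state.apply(move), depth - 1, -beta, -alpha)
            best = max(best, value)
            alpha = max(alpha, value)
            if alpha >= beta:   # the opponent won't allow this line; prune it
                break
        return best

Search on top of that weak leaf evaluation plays far stronger than the evaluation alone, which is the sense in which it could be used to generate better training data.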
That has been known since at least the 1990s with TD-Gammon beating the world champions in Backgammon.
Yeah, I didn't mean to imply that reinforcement learning (or applying it in this way) is novel. It was just important to work out how to apply that to go specifically.
In a sense, classic chess engines do that, too: alpha-beta-search uses a very weak model (eg just checking for checkmate, otherwise counting material, or what have you) and search to generate a much stronger player. You can use that to generate data for training a better model.
I would say that classic chess AIs specifically don't do the important part. They aren't able to use a worse model to, with computation, train a better model. They can generate training data, but then they have no way to incorporate it back into the AI.
I've seen AI struggle with ASCII, but when the same information is presented as other data structures, it performs better.
edit:
e.g. JSON with structured coordinates, graph-based JSON, or a semantic representation with the coordinates
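For instance, something like this (a toy sketch; the symbols and schema are made up, not taken from the papers below):

    import json

    # A tiny ASCII level (# wall, B movable block, . goal, @ player) re-encoded
    # as JSON with explicit coordinates instead of relying on character alignment.
    ascii_level = [
        "#####",
        "#@B.#",
        "#####",
    ]

    symbols = {"#": "wall", "B": "block", ".": "goal", "@": "player"}
    objects = [
        {"type": symbols[ch], "x": x, "y": y}
        for y, row in enumerate(ascii_level)
        for x, ch in enumerate(row)
        if ch in symbols
    ]

    print(json.dumps({"width": 5, "height": 3, "objects": objects}, indent=2))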
To the extent that the current generation of AI isn't general, yeah, papering over some of its weaknesses may allow you to expose other parts of it, both strengths and other weaknesses.
I wonder if the author would be willing to try with another representation.
[1] Does Prompt Formatting Have Any Impact on LLM Performance? https://arxiv.org/html/2411.10541v1
[2] Large Language Models(LLMs) on Tabular Data: Prediction, Generation, and Understanding - A Survey https://arxiv.org/html/2402.17944v2
(Shameless plug: I am one of the developers of Thinky.gg (https://thinky.gg), a thinky puzzle game site with a shortest-path-style game [Pathology] and a Sokoban variant [Sokoath].)
These games are typically NP-hard, so the techniques solvers have employed for Sokoban (or Pathology) are brute-force search with various heuristics (like BFS, deadlock detection, and Zobrist hashing). However, once levels get beyond a certain size with enough movable blocks, you end up exhausting memory pretty quickly.
These types of games are still "AI proof" so far, in that LLMs are absolutely awful at solving them while humans are very good (so it seems reasonable to consider them for ARC-AGI benchmarks). Whenever a new reasoning model gets released I typically try it on some basic Pathology levels (like 'One at a Time' https://pathology.thinky.gg/level/ybbun/one-at-a-time) and they fail miserably.
Simple level code for the above level (1 is a wall, 2 is a movable block, 4 is the starting block, 3 is the exit):
000
020
023
041
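For what it's worth, the brute-force approach on a level this small is a short BFS over (player, blocks) states; the push mechanics in this sketch are assumed Sokoban-like and may not match Pathology's exact rules:

    from collections import deque

    LEVEL = ["000",
             "020",
             "023",
             "041"]

    def parse(level):
        blocks, start, exit_ = set(), None, None
        for r, row in enumerate(level):
            for c, ch in enumerate(row):
                if ch == "2": blocks.add((r, c))
                elif ch == "4": start = (r, c)
                elif ch == "3": exit_ = (r, c)
        return start, frozenset(blocks), exit_

    def solve(level):
        rows, cols = len(level), len(level[0])
        start, blocks, exit_ = parse(level)

        def free(r, c):  # inside the grid and not a wall
            return 0 <= r < rows and 0 <= c < cols and level[r][c] != "1"

        queue, seen = deque([(start, blocks, "")]), {(start, blocks)}
        while queue:
            (r, c), blks, path = queue.popleft()
            if (r, c) == exit_:
                return path
            for dr, dc, mv in ((-1, 0, "U"), (1, 0, "D"), (0, -1, "L"), (0, 1, "R")):
                nr, nc = r + dr, c + dc
                if not free(nr, nc):
                    continue
                nblks = blks
                if (nr, nc) in blks:  # walking into a block pushes it, if possible
                    br, bc = nr + dr, nc + dc
                    if not free(br, bc) or (br, bc) in blks:
                        continue
                    nblks = (blks - {(nr, nc)}) | {(br, bc)}
                if ((nr, nc), nblks) not in seen:
                    seen.add(((nr, nc), nblks))
                    queue.append(((nr, nc), nblks, path + mv))
        return None

    print(solve(LEVEL))  # a shortest move string, or None if unsolvable

The memory blowup described above is visible right in the `seen` set: the state space is (player positions) x (block placements), which explodes combinatorially as levels grow.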
Similar to OP, I've found Claude couldn’t manage rule dynamics, blocked paths, or game objectives well, and spat out random results.
SMT/SAT solvers or integer linear programming can get you pretty far. Many classic puzzle games like Minesweeper are NP hard, and you can solve any instance that a human would be able to solve in their lifetime fairly quickly on a computer.
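As a concrete toy example of the SAT/SMT route, here is a single Minesweeper clue encoded with Z3's Python bindings (the cell names are made up for illustration):

    from z3 import Bools, Solver, Sum, If, sat

    # One revealed "1" whose three unrevealed neighbours are (0,1), (1,0), (1,1):
    # exactly one of them is a mine.
    m01, m10, m11 = Bools("m_0_1 m_1_0 m_1_1")
    s = Solver()
    s.add(Sum([If(m, 1, 0) for m in (m01, m10, m11)]) == 1)

    # Is (1,1) provably safe? Assume it is a mine and check for a contradiction.
    s.push()
    s.add(m11)
    print("(1,1) could be a mine" if s.check() == sat else "(1,1) is provably safe")
    s.pop()

A full board is just many such constraints over the same variables, and deductions a human could make fall out of repeated satisfiability checks like the one above.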
This is why the video of Claude solving level 1 at the top was actually (dramatic musical cue) staged, and only possible via a move-for-move tutorial that Claude nicely rationalized post hoc.
One of the things this arc of history has taught me is that post-hoc rationalization is depressingly easy. Especially if it doesn't have to make sense, but even passing basic logical checks isn't too difficult. Ripping the rationalization apart often requires identifying novel, non-obvious logical checks.
I thought I had learned that time and time again from human politics, but AI somehow made it even clearer than I thought possible. Perhaps simply because of knowing that a machine is doing it.
Edit: after watching the video more carefully:
"This forms WALL IS WIN horizontally. But I need "FLAG IS WIN" instead. Let me check if walls now have the WIN property. If they do, I just need to touch a wall to win. Let me try moving to a wall:
There's something extremely uncanny-valley about this. A human player absolutely would accidentally win like this, and have similar reasoning (not expressed so formally) about how the win was achieved after the fact. (Winning depends on the walls having WIN and also not having STOP; many players get stuck on later levels, even after having supposedly learned the lesson of this one, by trying to make something WIN and walk onto it while it is still STOP.)
But the WIN block was not originally in line with the WALL IS text, so a human player would never accidentally form the rule, but would only do it with the expectation of being able to win that way. Especially since there was already an obvious, clear path to FLAG — a level like this has no Sokoban puzzle element to it; it's purely about learning that the walls only block the player because they are STOP.
Nor would (from my experience watching streamers at least) a human spontaneously notice that the rule "WALL IS WIN" had been formed and treat that as a cue to reconsider the entire strategy. The natural human response to unintentionally forming a useful rule is to keep pushing in the same direction.
On the other hand, an actually dedicated AI system (in the way that AlphaGo was dedicated to Go) could, I'm sure, figure out a game like Baba Is You pretty easily. It would lack the human instinct to treat the walls as if they were implicitly always STOP; so it would never struggle with overriding it.
Still, it's interesting to see the challenges with dynamic rules (like "Key is Stop") that change where you are able to move, etc.
But I am fairly sure all of the Baba Is You solutions are present in the training data for modern LLMs, so it won’t make for a good eval.
One key difference from ARC in its current iteration is that there is a defined and learnable game physics.
ARC requires generalization from a few examples for problems that are not well defined per se.
Hence ARC currently requires the models that work on it to possess biases that are comparable to the ones that humans possess.
LLMs will probably continue to scale on such benchmarks, as they have been, without needing real ingenuity or intelligence.
Obviously I don't know the answer but I think it's the same root problem as why neural networks will never lead to intelligence. We're building and testing idiot savants.