Meaning Machine – Visualize how LLMs break down and simulate meaning
It also feels like motivated reasoning to make them seem dumb because in reality we mostly have no clue what algorithms are running inside LLMs.
When you or I say "dog", we might recall the feeling of fur, the sound of barking [..] But when a model sees "dog", it sees a vector of numbers.
when o3 or Gemini sees "dog", it might recall the feeling of fur, the sound of barking [..] But when a human says "dog", all that's really there is electrical impulses in neurons.
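For what it's worth, those "vectors of numbers" do carry usable structure: related words end up with nearby vectors. A toy sketch (the 3-d vectors below are invented for illustration; real embeddings have hundreds or thousands of learned dimensions):

```python
import math

# Invented toy "embeddings" purely for illustration.
emb = {
    "dog":   [0.90, 0.10, 0.05],
    "puppy": [0.85, 0.15, 0.10],
    "car":   [0.05, 0.95, 0.20],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine(emb["dog"], emb["puppy"]))  # high: related words sit close together
print(cosine(emb["dog"], emb["car"]))    # low: unrelated words sit far apart
```

The point being that geometry in embedding space encodes relationships, which is at least a candidate for something meaning-like rather than arbitrary noise.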
The stochastic parrot argument has been had a million times over, and this doesn't feel like a substantial contribution. If you think vectors of numbers can never be true meaning, then that means either (a) no amount of silicon can ever make a perfect simulation of a human brain, or (b) a perfectly simulated brain would not actually think or feel. Both seem very unlikely to me.
There are much better resources out there if you want to learn our best ideas about what algorithms run inside LLMs[2][3]. It's a whole field called mechanistic interpretability, and it's way, way, way more complicated than tagging parts of speech.
[1] Maybe attention learns something like this, but it's doing a whole lot more than just that.
[2] https://transformer-circuits.pub/2025/attribution-graphs/bio...
[3] https://transformer-circuits.pub/2022/toy_model/index.html
P.S. The explainer has em dashes aplenty. I strongly prefer to see disclaimers (even if it's a losing battle) when LLMs are used heavily for writing, especially for more technical topics like this.
It walks through the core stages — tokenization, POS tagging, dependency parsing, embeddings — and visualizes how meaning gets fragmented and simulated along the way.
Built with Streamlit, spaCy, BERT, and Plotly. It’s fast, interactive, and aimed at anyone curious about how LLMs turn your sentence into structured data.
Would love thoughts and feedback from the HN crowd — especially devs, linguists, or anyone working with or thinking about NLP systems.
GitHub: https://github.com/jdspiral/meaning-machine
Live Demo: https://meaning-machine.streamlit.app
Subject–Verb–Object triples, POS tagging, and dependency structures are not used by LLMs. One of the fundamental differences between modern LLMs and traditional NLP is that heuristics like those are never explicitly defined.
And assuming that those specific heuristics are the ones which LLMs would converge on after training is incorrect.
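To make that concrete: the only thing a transformer actually receives is a sequence of integer token IDs, each looked up in a learned embedding table. No POS tags or dependency arcs are ever supplied. Everything below is invented toy data for illustration:

```python
# Toy illustration: an LLM's input is just token IDs mapped to vectors.
# The vocabulary and "embedding table" values here are made up.
vocab = {"the": 0, "dog": 1, "chased": 2, "cat": 3}

embeddings = [
    [0.12, -0.40, 0.33],   # row 0: "the"
    [0.91, 0.08, -0.22],   # row 1: "dog"
    [-0.15, 0.77, 0.05],   # row 2: "chased"
    [0.88, 0.10, -0.19],   # row 3: "cat"
]

sentence = "the dog chased the cat"
token_ids = [vocab[w] for w in sentence.split()]
model_input = [embeddings[i] for i in token_ids]

print(token_ids)  # this integer sequence is all the model ever sees
```

Whatever grammar-like structure the model ends up using, it is learned implicitly in the weights during training, and interpretability research suggests it rarely matches the tidy categories of classical parsers.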
I actually worked on a similar tree viewer as part of an NLP project back in 2005, in college, but that was for rule-based machine translation systems. Chapter 4 in the final report: https://www.researchgate.net/profile/Declan-Groves/publicati...