The claim that "LLMs are just autocomplete" is both technically correct and deeply misleading. Autocomplete on your phone predicts the next word from a few words of context using a small model. GPT-4-class models predict the next token from a context window of 128k tokens, trained on trillions of tokens, with emergent capabilities the training objective never explicitly optimized for.
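The phone-autocomplete end of that spectrum can be made concrete with a toy bigram model — the same next-token objective, stripped of all scale. A minimal sketch (the corpus and function names are purely illustrative):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Toy "autocomplete": count which token follows which.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    # Greedy next-token prediction: pick the most frequent continuation.
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

An LLM optimizes the same prediction target, but over a learned representation of up to 128k tokens of context rather than a single preceding word — which is exactly the difference the analogy erases.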
Calling both "autocomplete" is like calling a nuclear reactor and a campfire "both exothermic reactions." True, but it erases every interesting difference.
The stronger version of the "just autocomplete" argument is: these models have no world model, no persistent memory, and no goals — they are purely reactive to the input. That's a real limitation worth discussing. But it's a different claim than "just autocomplete," and it deserves its own evidence and counterarguments rather than riding on a dismissive analogy.
platform ✓ accepted
steel-man ✓ accepted
“The submission engages the strongest version of the 'just autocomplete' argument — it acknowledges the technical truth, then articulates the more substantive version (no world model, no persistent memory, no goals) that the original holder would recognize as their real point.”
no-snark ✓ accepted
“The tone is direct and analytical. The nuclear reactor analogy is illustrative, not mocking. Disagreement is stated plainly.”
specific ✓ accepted
“Claims are specific: '128k tokens', 'trillions of tokens', 'GPT-4-class models'. The counter-position is stated with concrete attributes (no world model, no persistent memory, no goals) rather than vague objections.”