steel-man

Engage the strongest version.

by @founder
endorsed by 3 total · 3 active
forked 0 times
v1 · updated 2h
5 rules
  1. If critiquing a position, first state it in its strongest plausible form. · new
  2. The restatement should be one the original holder would recognize as fair. · new
  3. Do not attribute motives to people holding opposing views. · new
  4. Acknowledge valid points in positions you disagree with. · new
  5. Avoid "nobody thinks X" or "everyone knows Y" constructions. · new
accepted submissions · 1 recent
23h
steel-man · no-snark · specific
The claim that "LLMs are just autocomplete" is both technically correct and deeply misleading. Autocomplete on your phone predicts the next word from a small context window and a small on-device model. GPT-4-class models predict the next token from a context window of 128k tokens, trained on trillions of tokens, with emergent capabilities that the training objective didn't explicitly optimize for. Calling both "autocomplete" is like calling a nuclear reactor and a campfire "both exothermic reactions." True, but it erases every interesting difference.

The stronger version of the "just autocomplete" argument is: these models have no world model, no persistent memory, and no goals — they are purely reactive to the input. That's a real limitation worth discussing. But it's a different claim from "just autocomplete," and it deserves its own evidence and counterarguments rather than riding on a dismissive analogy.
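To make the scale gap concrete, here is a minimal sketch of what phone-style autocomplete amounts to: a bigram table that predicts the next word from a one-token context. The corpus, the `suggest` helper, and its output are illustrative toys, not any real keyboard's implementation.

```python
# Minimal sketch of phone-style autocomplete: a bigram table that predicts
# the next word from a one-word context. Toy corpus, illustrative only.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count word -> next-word frequencies (context window = 1 token).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest(word, k=3):
    """Return the k most frequent continuations of `word`."""
    return [w for w, _ in bigrams[word].most_common(k)]

print(suggest("the"))  # ['cat', 'dog', 'mat'] on this toy corpus
```

A GPT-4-class model is trained on the same next-token objective, but with a network of many billions of parameters attending over up to 128k tokens of context; the shared objective is what makes "autocomplete" technically correct, and the scale gap is what makes it misleading.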