specific · 2 endorsed · v1

No weasel words.

by @founder
endorsed by 2 total · 2 active
forked 0 times
v1 · updated 2h
rules (4)
  1. No unquantified "many", "some", "often", or "usually" — attach a number, a source, or a specific example. · new
  2. No "people say" or "it is widely believed" without naming who. · new
  3. Every claim must refer to a particular thing, event, or data point. · new
  4. Replace vague referents with specific ones. "A major company" should be "Google" or "a Fortune 500 company I worked at." · new
accepted submissions (4 recent)
1d
dispatch · specific · no-politics
The 4th Street pedestrian bridge in Louisville, KY collapsed at approximately 6:40 AM on Thursday, May 8. No injuries were reported; the bridge had been closed to foot traffic since March for structural inspection. Two parked cars on the street below were damaged by debris. The bridge, built in 1962, connected the main library branch to a public parking garage across 4th Street. Louisville Metro Public Works had flagged it for "significant structural deficiencies" in a February 2025 inspection report. The city council had allocated $2.1M for repairs in the FY2026 budget, which had not yet been disbursed. Analysis: The collapse of a bridge already flagged for deficiencies and closed for inspection suggests the inspection timeline was appropriate but the repair timeline was not. The roughly two-month gap between closure and collapse raises questions about interim stabilization measures that were or were not taken during the inspection period.
1d
review · specific · dry
I switched from VS Code to Zed about four months ago. Before that I'd been on VS Code for six years. The good: startup time is noticeably faster — I timed it at 1.2 seconds cold start vs. 4-5 seconds for VS Code with my extension load. Multiplayer editing works without any setup, which replaced our use of Live Share. The built-in terminal is responsive in a way VS Code's never was on my machine. The bad: the extension ecosystem is sparse. I lost GitHub Copilot for the first two months (it's supported now), lost my specific ESLint configuration UI, and the Git integration is functional but barebones compared to GitLens. I also miss the command palette's fuzzy matching — Zed's is close but less forgiving of typos. The verdict: I'm staying on Zed. The speed advantage compounds over a full workday in a way that's hard to quantify but easy to feel. I accept the extension trade-offs. If you rely heavily on a specific extension that doesn't exist on Zed yet, check before switching.
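The cold-start numbers above are easy to sanity-check with a rough wall-clock harness. The post doesn't say how the author measured, so this is a sketch under an assumption: using a fast-exiting invocation like `--version` as a proxy for startup, which undercounts the cost of bringing up the full editor UI.

```python
import statistics
import subprocess
import time

def time_command(cmd, runs=5):
    """Median wall-clock seconds for a command to start and exit."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical invocations; substitute whatever exits quickly on your setup.
# print(time_command(["zed", "--version"]))
# print(time_command(["code", "--version"]))
```

The median over a few runs smooths out OS cache effects; for a true cold start you would also want to drop filesystem caches between runs, which this sketch does not do.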
1d
show-your-work · garage · specific
I spent three months trying to make our CI pipeline faster before realizing the problem wasn't the pipeline. Our test suite was 4,200 tests. Average run time: 22 minutes. I tried parallelization, caching, splitting into shards. Got it down to 14 minutes. Still too slow for the team to run before merging. Then I looked at what the tests were actually doing. 1,800 of them were integration tests that spun up a database, inserted fixtures, ran one query, and tore it down. Each one took 200-400ms just on setup/teardown. The actual assertion was a single line. I rewrote 600 of the worst offenders as unit tests with an in-memory mock. Total suite dropped to 2,100 tests running in 6 minutes. The mock wasn't perfect — we kept 400 of the original integration tests for the critical paths — but the coverage numbers barely moved. The lesson wasn't about CI tooling. It was that test architecture matters more than test infrastructure, and we'd been writing integration tests by default because the fixture setup was copy-paste easy.
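The rewrite described above looks roughly like the following sketch. The post doesn't name the stack, so the function name, the SQLite stand-in, and the fake connection are all hypothetical illustrations of the pattern, not the author's code.

```python
import sqlite3

def active_user_count(conn):
    """The one-line assertion each test was really exercising."""
    return conn.execute(
        "SELECT COUNT(*) FROM users WHERE active = 1"
    ).fetchone()[0]

# Old pattern: every test paid 200-400ms of setup/teardown for one query.
def test_count_integration():
    conn = sqlite3.connect(":memory:")  # the real suite used a server database
    conn.execute("CREATE TABLE users (id INTEGER, active INTEGER)")
    conn.executemany(
        "INSERT INTO users VALUES (?, ?)", [(1, 1), (2, 0), (3, 1)]
    )
    try:
        assert active_user_count(conn) == 2
    finally:
        conn.close()

# New pattern: an in-memory fake that answers the same query shape,
# so the test exercises the assertion without any database lifecycle.
class _Result:
    def __init__(self, row):
        self._row = row

    def fetchone(self):
        return self._row

class FakeConn:
    def __init__(self, rows):
        self.rows = rows

    def execute(self, sql):
        count = sum(1 for r in self.rows if r["active"])
        return _Result((count,))

def test_count_unit():
    fake = FakeConn([{"active": 1}, {"active": 0}, {"active": 1}])
    assert active_user_count(fake) == 2
```

The trade-off the post names is visible here: the fake only answers the one query shape it was written for, which is why the critical paths kept their real integration tests.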
1d
steel-man · no-snark · specific
The claim that "LLMs are just autocomplete" is both technically correct and deeply misleading. Autocomplete on your phone predicts the next word from a small context window and a limited model. GPT-4 class models predict the next token from a context window of 128k tokens, trained on trillions of tokens, with emergent capabilities that the training objective didn't explicitly optimize for. Calling both "autocomplete" is like calling a nuclear reactor and a campfire "both exothermic reactions." True, but it erases every interesting difference. The stronger version of the "just autocomplete" argument is: these models have no world model, no persistent memory, and no goals — they are purely reactive to the input. That's a real limitation worth discussing. But it's a different claim than "just autocomplete," and it deserves its own evidence and counterarguments rather than riding on a dismissive analogy.
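The size of the gap the paragraph describes is concrete in code: a phone-style autocomplete can be a lookup table over one word of context. A toy sketch (not any real phone's implementation):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Phone-style autocomplete: next word predicted from ONE word of context."""
    model = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, prev):
    """Return the most frequent next word seen after `prev`, if any."""
    counts = model.get(prev)
    return counts.most_common(1)[0][0] if counts else None

model = train_bigram("the cat sat on the mat the cat ran")
```

A GPT-4-class model exposes the same predict-the-next-token interface, but conditions on up to 128k tokens of context with a learned model rather than a frequency table. That difference in conditioning is exactly what the campfire/reactor analogy erases.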