
tqian

Karma: 7
Created: 2025-10-31

Recent Activity

  • I teach computer science at a public university. Every semester I have students who come to class but never turn in any homework, and they don't withdraw either. I'm literally forced to fail them because I have nothing to grade them on.

  • Humans make such mistakes slowly. It's much harder to catch the "drift" introduced by an LLM because it happens so quickly and silently. By the time you notice something is wrong, it has already become the foundation for more code, and you're looking at a full rewrite.

  • To borrow a concept from cloud server rentals, there's also the factor of overselling. Most providers serving open-source LLMs probably oversell quite a bit: they don't scale up resources as fast as OpenAI or Anthropic when requests increase. I notice many OpenRouter providers are noticeably faster during off hours.

    In other words, it's not just the model size, but also the concurrent load and how many GPUs you keep running at any given time. I bet the big players' costs are quite a bit higher than the numbers on OpenRouter, even for comparable parameter counts.
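    The utilization argument above can be sketched with back-of-the-envelope arithmetic. All numbers here are illustrative assumptions, not real provider figures: the point is only that the effective cost per token falls as average GPU utilization rises, which is why an oversold fleet running hot can undercut an over-provisioned one.

    ```python
    # Hypothetical cost model: one GPU at an assumed hourly price and an
    # assumed peak decoding throughput. Both constants are made up for
    # illustration, not measured from any real provider.
    GPU_COST_PER_HOUR = 2.0               # assumed $/hour for one GPU
    TOKENS_PER_SEC_AT_FULL_LOAD = 1000    # assumed peak tokens/sec per GPU

    def cost_per_million_tokens(utilization: float) -> float:
        """Effective $/1M tokens for one GPU at a given average utilization."""
        tokens_per_hour = TOKENS_PER_SEC_AT_FULL_LOAD * 3600 * utilization
        return GPU_COST_PER_HOUR / tokens_per_hour * 1_000_000

    # An oversold provider running near capacity vs. one that keeps idle
    # headroom for low latency: same hardware, very different $/token.
    print(cost_per_million_tokens(0.9))   # hot fleet
    print(cost_per_million_tokens(0.3))   # lots of idle headroom
    ```

    Under these made-up numbers the hot fleet serves tokens roughly three times cheaper, which is consistent with the observation that the same model can be priced very differently depending on how aggressively the operator packs concurrent requests.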

  • Zed on Linux is buggy as hell; on macOS it's somehow more stable. Maybe Zed really is a "good" example of AI-coded products.

  • Bigger than Windows 98

HackerNews