
jychang

Karma: 1837

Created: 2012-10-23

Recent Activity

  • Yeah, that's completely missing the point. A bayonet multiplies a person's power by 1.1x; a nuke multiplies it by more than 1,000,000x. Trying to be cute by lumping them together as "every technology is a force multiplier" is peak literalism: 1.1x and 1,000,000x are both technically multipliers, even though they're clearly on different scales.

  • Well, duh. Same thing applies for "Technology X can be used for war". But anyone with a brain can see nukes are on a different level than bayonets.

    Claiming AI isn't unique in being a tool for evil isn't interesting; the point is how large a force multiplier it is.

  • Uh, "we"?

    From someone who uses quotes „like this”?

    ... https://i.imgur.com/swpYbpv.png

  • The nontrivial way to do it is to dynamically scan the Python 3.12 namespace and generate these warnings.

    Is there any big downside to doing it the boring way: hardcode a list and compare the import error against it?
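    The boring way is straightforward to sketch. A minimal illustration, assuming a hardcoded mapping from removed module names to migration hints (the subset and the hint wording here are illustrative, not exhaustive):

    ```python
    # Hardcoded list of stdlib modules removed in Python 3.12, mapped to
    # migration hints. Illustrative subset; a real tool would carry the
    # full list.
    REMOVED_IN_312 = {
        "asynchat": "use the 'asyncio' module instead",
        "asyncore": "use the 'asyncio' module instead",
        "distutils": "use 'setuptools' or 'packaging' instead",
        "imp": "use 'importlib' instead",
        "smtpd": "use the third-party 'aiosmtpd' package instead",
    }

    def explain_missing_module(exc: ModuleNotFoundError) -> str:
        """Compare a failed import against the hardcoded list."""
        hint = REMOVED_IN_312.get(exc.name or "")
        if hint:
            return f"'{exc.name}' was removed in Python 3.12; {hint}."
        return str(exc)

    # Usage: wrap an import and translate the error.
    err = ModuleNotFoundError("No module named 'smtpd'", name="smtpd")
    print(explain_missing_module(err))
    # -> 'smtpd' was removed in Python 3.12; use the third-party 'aiosmtpd' package instead.
    ```

    The obvious downside of hardcoding is staleness across Python versions, but a dict lookup on `exc.name` is trivially fast and easy to audit, whereas namespace scanning has to run against a specific interpreter build.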

  • 16GB on a Mac with unified memory is too small for good coding models. Anything on that machine is severely compromised. Maybe in ~1 year we will see better models that fit in ~8GB of VRAM, but not yet.

    Right now, for a coding LLM on a Mac, the standard is Qwen 3 32b, which runs great on any M1 Mac with 32GB of memory or better. Qwen 3 235b is better, but fewer people have 128GB of memory.

    Anything smaller than 32b, you start seeing a big drop-off in quality. Qwen 3 14b Q4_K_M is probably your best option at 16GB of memory, but it's significantly worse in quality than 32b.
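    The memory math behind these size cutoffs can be sketched with a rough rule of thumb: weight memory is roughly parameter count times bits per weight divided by 8, with Q4_K_M averaging around 4.85 bits per weight (an approximate llama.cpp figure; KV cache and OS overhead come on top and are ignored here):

    ```python
    def quantized_size_gb(params_billions: float, bits_per_weight: float) -> float:
        """Rough weight-only memory footprint of a quantized model, in GB."""
        return params_billions * bits_per_weight / 8

    # Assumed average bits per weight for Q4_K_M quantization (approximate).
    Q4_K_M = 4.85

    for params in (14, 32):
        gb = quantized_size_gb(params, Q4_K_M)
        print(f"~{params}b @ Q4_K_M needs roughly {gb:.1f} GB for weights alone")
    ```

    Under these assumptions, a 14b model at Q4_K_M needs roughly 8.5GB, which is why it is about the ceiling for a 16GB machine once the OS and KV cache take their share, while a 32b model at around 19GB only fits comfortably with 32GB of unified memory.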

HackerNews