Software Engineer · QA Engineer · Teacher for Life. I enjoy teaching, mentoring, and learning new things.
Fi’s customer service has long since turned to shit, but the things keeping me on it are the data SIMs, simple international roaming, and international calling. That trifecta is pretty hard to find a match for. Especially the data SIMs. But if you don’t need those, I probably wouldn’t recommend Fi. My wife had endless trouble with multiple bad SIM cards, and the customer service experience was just as dreadful as every other carrier’s.
(Don’t take this as advice. Just writing my own experience with this.)
This is the reason why I take the time to summarize all the “why” decisions and implementation tradeoffs in my (too lengthy) PR descriptions, with links, etc. I’ve gotten into the habit of using `<details>` to collapse everything because I’ve gotten feedback multiple times that no one reads my walls of text. However, I still write it all (with short `<summary>`s now) because I’ve lost track of the number of times I’ve been able to search my PRs and quickly answer my own or someone else’s “why” question. I do it mostly for me, because I find it invaluable: I prefer writing shit down instead of relying on my flaky memory. People are forgetful and people come and go. What doesn’t disappear is documentation tied to code commits (well… unless you nuke your repo).
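For anyone unfamiliar with the pattern, here’s a minimal sketch of what that looks like in a GitHub-flavored Markdown PR description (the summary text and body are just illustrative placeholders, not from any real PR):

```html
<details>
<summary>Why we went with approach X over Y</summary>

Longer explanation of the tradeoffs, links to design docs,
benchmark numbers, rejected alternatives, etc. Everything
between the tags stays collapsed until the reader expands it.

</details>
```

GitHub renders the `<summary>` line as a clickable disclosure triangle, so the one-line “why” stays scannable while the wall of text is a click away.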
IMO, the spirit of the idea is to put higher-information-density fields first, and let that smooth out the UX for the remaining fields as you go downward. Yes, there will be exceptions, but that only matters if you’re trying to absolve the user of all work in 100% of situations. Trying to do that is a fool’s errand. Invert the order and use the information gathered to make inputting the rest simpler for 80% of users. Then make it easy for the other 20% to course correct (e.g., don’t disable autofilled fields, highlight all text when tabbing to the next field, etc). I think this pattern is a good one to keep in mind, but not blindly follow, when designing the UX of a UI.
While I commend Ars and the author for taking responsibility, I am a bit put off by the wording used for the retraction on the original article: https://arstechnica.com/ai/2026/02/after-a-routine-code-reje...
> Following additional review, Ars has determined that the story “After a routine code rejection, an AI agent published a hit piece on someone by name,” did not meet our standards. Ars Technica has retracted this article. Originally published on Feb 13, 2026 at 2:40PM EST and removed on Feb 13, 2026 at 4:22PM EST.
Rather than say “did not meet our standards,” I’d much prefer if they stated what was false: that they published false, AI-generated quotes. Anyone who previously read the article (realistically, the only people who would return to it) and might want to go back to it as a reference isn’t going to have the falsehoods they read corrected.
Another fascinating thing that the Reddit thread discussing the original PR pointed out is that whoever owns that AI account opened another PR (same commits) and later posted this comment: https://github.com/matplotlib/matplotlib/pull/31138#issuecom...
> Original PR from #31132 but now with 100% more meat. Do you need me to upload a birth certificate to prove that I'm human?
It’s a bit wild to me that people are siding with the AI agent / whoever is commanding it. Combined with the LLM-hallucinated reporting and all the discussion this has spawned, I think this is shaping up to be a great case study on the social impact of LLM tooling.
This project is an enhanced reader for Y Combinator’s Hacker News: https://news.ycombinator.com/.
The interface also lets you comment, post, and interact with the original HN platform. Credentials are stored locally and are never sent to any server; you can check the source code here: https://github.com/GabrielePicco/hacker-news-rich.
For suggestions and feature requests, you can write to me here: gabrielepicco.github.io