probably not
Nah. Python gets it right; all high-level languages should operate this way. Division by zero is a bug 90% of the time. Errors should never pass silently. Special cases aren't special enough to break the rules.
IEEE floats should be a base on which more reasonable math semantics are built. Saying that Python should return NaN or inf instead of throwing an error is like saying that Python should return a random value from memory or segfault when reading an out-of-bounds list index.
There are no non-confusing options, but some of those are still clearly worse than others.
What should sorted([3, nan, 2, 4, 1]) give you in Python?
A) [1, 2, 3, 4, nan] is a good option
B) [nan, 1, 2, 3, 4] is a good option
C) An error is a good option
D) [3, nan, 1, 2, 4] is a silly, bad option. It's definitely not what you want, and it's quiet enough to slip by unnoticed. This is what you get when NaN != NaN.
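You can see option D happen in CPython today (the exact output depends on the sort implementation, so treat this as an illustration of current behavior, not a guarantee):

```python
import math

nan = float("nan")
data = [3, nan, 2, 4, 1]

# Every comparison against nan returns False, so Timsort's ordering
# decisions become inconsistent and nan lands wherever it happened to sit.
result = sorted(data)
print(result)  # [3, nan, 1, 2, 4] on current CPython
```

No error is raised, and the list silently comes back unsorted.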
NaN == NaN is wrong. NaN != NaN is wrong, unintuitive, and breaks the rest of your code. If you want to signal that an operation is invalid, then throw an error. The silently nonsensical semantics of NaN are the worst possible response.
Respectfully, I disagree.
If NaNs were meant to represent unknown quantities, then they would return false for all comparisons. But NaN != NaN is true. Assuming that two unknowns are always different is just as incorrect as assuming that they're always the same.
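Concretely, NaN is not "unknown, so all comparisons fail": inequality is singled out to return True, which is what makes the semantics feel arbitrary:

```python
nan = float("nan")

print(nan == nan)  # False
print(nan != nan)  # True  -- the one comparison that succeeds
print(nan < nan)   # False
print(nan > nan)   # False
print(nan <= nan)  # False
```

If NaN really modeled an unknown quantity, `!=` would be just as unanswerable as `==` and `<`.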
I'd also push back on the idea that this behavior makes sense. In my experience it's a consistent source of confusion for anyone learning to program. It's one of the clearest violations of the principle of least astonishment in programming language design.
As others have noted, it makes conscientious languages like Rust do all sorts of gymnastics to accommodate. It's a weird edge case, and imo a design mistake. "Special cases aren't special enough to break the rules."
Also, I think high level languages should avoid exposing programmers to NaN whenever possible. Python gets this right: 0/0 should be an error, not a NaN.
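This is observable directly: Python raises for both integer and float division by zero, rather than quietly producing a NaN or infinity (though NaN can still arise from other float operations, e.g. `inf - inf`):

```python
# Integer division by zero raises immediately.
try:
    0 / 0
except ZeroDivisionError as exc:
    int_msg = str(exc)
    print(int_msg)  # division by zero

# Float division raises too, instead of returning nan as raw IEEE 754 would.
try:
    0.0 / 0.0
except ZeroDivisionError as exc:
    float_msg = str(exc)
    print(float_msg)  # float division by zero
```

The bug surfaces at the line that caused it, instead of a NaN propagating through later computations.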
This reminds me of conversations around plagiarism that come up when working with students: the question of "this other person expressed this idea better than I can, so why can't I just use their writing?"
Because I want to know what you think, because putting our thoughts into words and sharing them is an important part of thinking, because we'll lose these skills if we don't use them, because in thinking for yourself you might come up with something interesting that nobody has ever thought before.
Of course, writers are allowed to reference and use other people's writing: with proper attribution. I don't have a problem with people sharing quality AI-generated content when it's labelled as such. The issue is that most people writing AI comments don't do this, which is itself probably the strongest indictment of the practice.
This project is an enhanced reader for Ycombinator Hacker News: https://news.ycombinator.com/.
The interface also lets you comment, post, and interact with the original HN platform. Credentials are stored locally and are never sent to any server; you can check the source code here: https://github.com/GabrielePicco/hacker-news-rich.
For suggestions and feature requests, you can write to me here: gabrielepicco.github.io