As a maintainer of a medium-sized OSS project, I agree. We've been running the project for over a decade, and a few years back Google came out with a competitor that pretty much sucked the air out of our field. It didn't matter that our product was better; we didn't have the resources to compete with a Google hobby project.
As a result, our work on the project got reduced to maintenance until coding agents got better. Over the past year I've rewritten a spectacular amount of the code using AI agents. More importantly, I was able to build enterprise-level testing, a herculean task I just couldn't take on by myself.
The way I see it, AI brought back my OSS project that was heading to purgatory.
EDIT: Also, about OP's post: it's really f*ing bug bounties that are the problem. These things are horrible and should die in a fire...
Predicting the future is futile, but I'd guess the outcome will be exactly the opposite. LLMs make it remarkably easy to generate a lot of code, so they can easily generate a lot of Rust code that looks good. It probably wouldn't be good, and when something goes wrong, the code would be unreadable to us. We would end up in LLM debugging hell.
The solution is to use a higher-level, safer, strict language (e.g. Java) that is easy for us to debug and deeply familiar to all LLMs. Yes, we will generate more code, but if you spend LLM time nitpicking performance rather than on productivity, you end up with the same problem you have with humans. LLMs have capacity limits, and the engineers who operate them have capacity limits; neither is going away.
Well... 64KB isn't exactly enormous for the type of functionality it offered. It did support copy and paste; you just had to enter editing mode. The underlying APIs didn't offer direct access to copy and paste.
Having said that, it doesn't really matter if you didn't like it. It was a pretty big part of the J2ME ecosystem at the time, and it's a huge omission.
Cloud companies will charge you if you use their features without limits: news at 11...
Observability solutions of this type are for the big companies that can typically absorb the bill shock. These companies can also afford the routine auditing process that makes sure the level of observability is sufficient and cost-effective. Smaller companies can just log into a few servers or glance at a dashboard to get a sense of what's going on. They don't need something at this scale.
Pretty much every solution listed lets you fine-tune how much data you observe and retain, down to a very granular level. I'm personally more versed in OneAgent than OTel, but either way you can control data ingestion very precisely.
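To make that concrete, here's a minimal sketch of what this looks like on the OTel side: an OpenTelemetry Collector config that samples traces and drops low-severity logs before they reach a paid backend. The `probabilistic_sampler` and `filter` processors are real collector-contrib components, but the receiver/exporter wiring and the exact filter condition here are illustrative, not taken from any setup in the thread.

```yaml
# Sketch: cut ingestion volume at the collector, before the paid backend.
processors:
  probabilistic_sampler:
    sampling_percentage: 10      # keep roughly 1 in 10 traces
  filter/drop-debug:
    logs:
      log_record:
        # Drop anything below INFO (OTTL condition; adjust to taste)
        - 'severity_number < SEVERITY_NUMBER_INFO'

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [filter/drop-debug]
      exporters: [otlp]
```

Commercial agents like OneAgent expose the equivalent knobs through their management UI rather than a config file, but the principle is the same: you decide what gets ingested, so the bill is largely under your control.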