Unfortunately, cyber attacks are an application that AI models should excel at. Mistakes that would be major problems in normal software just waste some resources here, and it's often not hard to verify directly whether an attack actually succeeded.
Meanwhile, AI coding seems likely to lead to more security bugs being introduced into systems.
Maybe there's some story where everyone finds the security bugs with AI tools before the bad guys do, but I'm not very optimistic about how this will work out...
Poor quality, user-hostile experiences are a very common consequence of entrenched monopolies.
So many companies buy Microsoft regardless of the quality of the actual products. Given that context, why would directors invest in an expensive, invisible effort like quality control when they could spend those resources on product launches that are naturally legible to upper management?
I'm not sure about Android. Chrome's store has a history of legitimate free apps with millions of users but little revenue being purchased by threat actors, who then add malware to the app.
But I've seen fewer stories of that sort with Android apps. Maybe the app store review process is able to catch it? Just as likely, to me, is that it's simply harder to discover that a mobile app has started maliciously sending data somewhere.