> This is a strange comment. We have a well known example in openclaw, which is notoriously vibe coded, which again if you follow the thread, I’m not defending. whereas I know senior and staff engineers at most FAANG companies and every single one uses AI to code, so many many products you know are being written with AI.
Oh, it's a product? What does it do? Leak data and delete inboxes? I would not call that a "product", at least not in the commercial sense.
> I don’t wanna dox myself but last year my company developed a greenfield product with a pretty large headcount of eng (multiple teams)that was built with an AI first development workflow
Yeah, you sure are not "doxxing" yourself with this generic statement. I am sure you built something with the "AI first" workflow. The point being: based on what the AI CEOs and AI boosters are saying, this should have been a project with one person organising a "fleet of agents". Why wasn't it? If it still requires a large engineering headcount, what is the point of using the AI?
Of course you can use them for whatever you want. It's also not disputable that some people will be more careful than others. The issue, however, is that the idiots who pushed for widespread usage of AI in companies, i.e. clueless MBAs, have also pushed it onto exactly the types you are mentioning: the ones who will screw things up because they are incompetent, or don't care, or, most likely, both. So it's not a criticism of people who are careful in their usage of LLMs in critical scenarios; it's a criticism of the morons who bought into the AI hype and really believe an LLM will produce Terraform code as good as what was previously written by 10 engineers, at 1% of the cost.
> “Folks, as you likely know, the availability of the site and related infrastructure has not been good recently,” Dave Treadwell, a senior vice-president at the group, told employees in an email, also seen by the FT.
Also some SVP over there: "Folks, we'll measure your performance and bonus based on how much you use Gen AI :)"
> I’m just saying the productivity gains are real, even in serious production level and life critical systems.
Again, neither serious studies (see the METR study on developer productivity) nor the ever-increasing rate of major incidents caused by AI supports your statement. Not to mention the near-total absence of well-known AI-produced products.
> If you are only able to think in binaries, no-AI or phd-AI, that’s a you problem.
No, you see, if I were the CEO of a public company and I lied through my teeth to investors and the general public about the capabilities of my product, I would normally go to jail. The CEOs of major AI companies are making claims that do not seem to be confirmed in reality. They have burned several hundred billion dollars so far in pursuit of "god-level intelligence". What came out instead is "your prompting sucks" or a similar level of nonsense.
I am only holding them to the standards they have repeatedly, boldly, and insistently set for themselves. You should be too.