
I freelanced for a funded startup and slowly realized no one actually built anything — they just pasted prompts into OpenAI.
It sounded like every other freelance gig.
A founder messaged me asking if I had time for a short-term contract. His startup had just closed a $500K pre-seed round. They had traction (supposedly), early customers (maybe), and an MVP that “just needed to be cleaned up and scaled.” I was between clients, so I took the call.
The founder was charismatic. Slick pitch. Buzzwords well-deployed. Said they had a “lean technical team” and needed “a senior engineer to add structure and velocity.”
I asked what stack they were using.
There was a pause. Then he said: “OpenAI… mostly.”
I asked about the team.
There was no CTO. No technical cofounder. Just the founder and two generalists — one doing ops, the other doing “growth.” They had a Figma mockup, a Notion roadmap, and a Python backend… kind of.
It turned out the MVP was built in less than two weeks using ChatGPT. Literally. They pasted prompts into GPT-4, copied the output, and deployed it to Replit. Somehow, it worked well…
I see a lot of money in the future for competent engineers who can unfuck what ChatGPT has fucked.
> I see a lot of money in the future for competent engineers who can unfuck what ChatGPT has fucked.
I do not want to do that work. Cleaning up junior code is easy, because juniors mess up in predictable ways.
LLM-generated code can be extremely complex and nonsensical at the same time. It has a line of genius followed by code that does nothing. It can mix many different libraries in inhuman ways.
Better to let those companies shut down and do something better elsewhere.
People have patterns for messing things up. That’s the reason they’re called anti-patterns: they learned the wrong thing and then apply it consistently. And you’ll still have human limitations to help you. They’ll want the thing to be working at least superficially, so a lot of the glue code will be correct. And the amount of code will have an upper limit.
No such thing with AI.
Yup. When digging into vibe-coded apps I get brainfuck. It’s not organized for a human brain to process, and it’s full of weird things.
It helps that my background is reverse engineering, so I am used to code that makes no sense or has been purposefully obfuscated.
I have been exclusively doing this for the past year, selling my services as “hardening vibe-coded prototypes for production” or “helping early-stage startups scale”.
In the best cases, they were able to reach funding or paying users. Architecture debt is one of the worst kinds of tech debt, so if you set it up right, it’s really hard to mess up.
In the worst case, after my contract ended, the CEO fired the whole US engineering team and replaced them with offshore resources. This was an example of messing up despite the architectural and procedural safeguards we built.
After the fall (the literal one and the season) there will be no more money.
Billions were poured into AI companies (and yes, nowadays, if you are writing a stateful loop on top of an LLM API you are considered an AI company).
It will take some time for new money to arrive in the cycle.
This CTO was simply, naively trying to fulfill the promise of Sam Altman (and the like): “the future of AI”, “90% of our code is written by AI”, and so on.
This is on the borderline of a scam, and sure enough it is misleading the public.
Amen to this. Everyone is worried about losing their job. I’m pretty sure it’s cementing those jobs at the same time.
That future is not that far off. Without a truly senior engineer, whatever AI writes is slop in most cases.
The key part, as usual, will be figuring out how to sell the service to people who thought ChatGPT would be enough.
Ironically perhaps, this article has some very tell-tale AI-authored language to it e.g. "This founder didn’t fake it — he outsourced it". The cadence and writing style are redolent of ChatGPT.
There are no details I could find in there that would give me the impression this is a true story.
I think it's a hoax.
A fake story generated with ChatGPT about a company with a fake technical stack that was generated with ChatGPT...
Think about it: the hoax could be published by someone to bolster their LinkedIn profile to sell code they themselves will outsource to LLMs; but the joke is on them, as LinkedIn engagement is all automated through bots anyway, so there is no audience; and here we are on HN where you, hoppp, are the only human on this thread and we’re all bots prompted by the author to generate engagement.
Is the solipsism hitting yet?