
We are in the middle of the biggest red ocean I have ever seen in software development.
(Originally from iamcharliegraham.com)
Thanks to AI coding, it has never been easier to design, develop, and distribute software. A process that once took months - designing in Figma, having developers write and test code, and deploying to AWS - can now take days with tools like Claude, other vibe-based coding assistants, and quick and easy deployment sites.
Yes, non-developers may hit a “vibe wall,” and yes, the code may run into huge technical debt quickly, but developers using AI coding tools can build new software from scratch probably 5x faster than before.
The result is a Cambrian explosion of software launches.
Where a great idea in a space once had 5-10 competitors, hundreds now appear - all competing for attention. Big companies used to move slowly, but now a ragtag team of two developers at a large firm can whip up something that looks top-of-market to the untrained eye in a matter of weeks.
Your company can scream to anyone who listens that all the competition is AI slop, but when hundreds of companies are pitching the same solution, your one voice will get lost.
In the past, the best practice to win in a competitive market was to differentiate yourself - “Think different,” as Steve Jobs’s Apple put it.
But product differentiation is no longer effective in this new world.
Differentiate on an amazing UX? You used to rely on your awesome UX team for a sustainable advantage. Now, dozens of competitors can screenshot (or soon video) your flow and give it to an AI to reproduce quickly.
Differentiate by excelling at one feature? You might get a temporary lead, but it’s now pretty trivial for competitors to get close to your functionality.
Differentiate on business model? If it starts working, dozens of your recently started competitors will vibe-code a switch over.
Differentiate on “proprietary data”? This isn’t the key differentiator it was expected to be, as we are finding data can be simulated or companies can find similar-enough data to get 80% of the way there.
Instead we live in a red ocean where features are copied in days or weeks and everyone is fighting with similar products for the same scraps.
So what does work?
Proprietary & Large Distribution - In a red ocean, distribution is king. The companies that have existing distribution channels (their own communities, customer lists, celebrity CEOs) will get market share. Big companies have the edge here, but startups can compete if they have a pre-existing distribution network.
Going into a Complex and Unknown Niche - The best way to avoid the red ocean is to build for an obscure and complex niche. Think: automating claims paperwork for agricultural veterinarians. Very few builders know enough about the space, let alone have distribution, and there are likely enough regulatory requirements to make a niche solution essential. Unfortunately (or fortunately) most of these niche markets will be too small to be VC-viable.
“Hard” or Expensive Integrations - Builders gravitate toward the easiest solutions - it’s why AI wrappers are so prevalent. The annoying-to-build products will still have barriers. This includes software requiring difficult, per-company integrations or hundreds of integration points before becoming viable. Most builders will shy away from it for lower-hanging fruit. Companies that require expensive data sets to work also fit here, as most won’t spend hundreds of thousands of dollars before going to market.
Network Effects Businesses - Products that have true, large network effects will still rule. The product needs to get noticeably better the more people use it. Social networks and marketplaces fit this model. Small, on-the-margin network effects or those that reach a limit (like optimizing based on usage) will not result in a sustainable advantage.
Compounding Data Lock-in - Products can thrive if they become the system of record for operational data that your company constantly references. Consider CRM systems with years of customer interactions or content management systems with thousands of interconnected files. With every new entry, migration becomes more painful - not because of the raw data, but because of the platform-specific relationships and context that are hard to export. Your business becomes so dependent on that history that leaving is prohibitively complex and risky.
Regulatory Barriers - If your product requires lots of regulatory permissions (FDA or SEC approval), this can prevent others from entering. Of course, these are also harder to get off the ground.
Bundling By Big Companies - For many products, the red ocean won’t be won. It will be absorbed. We’re going to see a wave of big companies building 80% good-enough solutions and simply bundling them into their platforms. Most of today’s standalone AI products are destined to become mere “features” in a larger solution. We’re already seeing this with Box, Notion, and Google. Expect a lot more of it.
This is the best of times and the worst of times for entrepreneurs. Knowing how and when to compete is the difference between having a good chance of building a sustainable, successful company and just picking a lottery ticket and hoping.
If you are experienced in a complex niche and want to build something there, let me know!
Can't say I agree with this article at all. This has not been my experience.
I don't quite know how to articulate this well, but there's something that I'd call a "complexity cliff" in the software business: if you want to compete in certain spaces, you need to build very complex software (even if the software, to the user, is easy to use). And while AI tools can assist you in the construction of this software, it cannot be "vibe coded" or copied whole-cloth - complexity, scale, and reliability requirements are far too great and your potential customer base will not tolerate you fumbling around.
You eventually reach a point where there are no blog posts or stackoverflow questions that walk you through step-by-step how to make this stuff happen. It's the kind of stuff that your company and maybe a few dozen others are trying to build - and of those few dozen, less than 10 are seeing actual success.
> there's something that I'd call a "complexity cliff" in the software business: if you want to compete in certain spaces, you need to build very complex software (even if the software, to the user, is easy to use)
I recognized something similar when I first started interviewing candidates.
I try to interview promising resumes even if they don't have the perfect experience match. Something that becomes obvious when doing this is that many developers have only operated on relatively simple projects. They would repeat things like "Everything is just a CRUD app" or not understand that going from Python or JavaScript to C++ for embedded systems was more complicated than learning different syntax for your if blocks and for loops.
The new variant of this is the software developer who has only worked on projects where getting to production is a matter of prompting an LLM continuously for a few months. Do this once and it feels like any problem can be solved the same way. These people are in for a shock when they stray from the common path and enter territory that isn't represented in the training data.
I'm in that boat; everything is just a crud app. I've worked on some fairly complex apps, but at their core they were crud apps, and most of their complexity was caused by bad developers overcomplicating and fumbling things.
That's not to say something like Figma isn't on an entirely different level, but most apps aren't Figma and don't need to be. Most apps are simple crud apps and if they aren't it's usually because the devs are bad.
It's also worth noting that a crud app can be quite complex too. There can be a lot of complexity even if the core is simple.
I also think that those of us who can recognize simple apps for what they are and design them simply are also the people best equipped to tackle more complex apps. Those guys who can make a simple crud app into an incomprehensible buggy monster certainly can't be trusted with that kind of complexity.
> Most apps are simple crud apps and if they aren't it's usually because the devs are bad.
I heard this a lot from candidates who had only worked on software that could be described as an app. They bounced from company to company adjusting a mobile app here, fitting into a React framework there, and changing some REST endpoints.
There is a large world of software out there and not all of it is user-facing apps.
>I heard this a lot from candidates who had only worked on software that could be described as an app.
Similar to that thinking, I made a previous comment about how many developers in the "L.O.B. Line-Of-Business / CRUD" group are not familiar with "algorithms engineering" types of programming: https://news.ycombinator.com/item?id=12078147
Vibe coding is easiest for CRUD apps. However, it's useless for developing new scientific/engineering code for new system architectures that require combining algorithms & data structures in novel ways that Claude Code has no examples for.
I can attest to that. I was using Gemini to help with some spherical geometry that I just couldn't figure out myself. This was for an engineering tool to define and avoid attitude deadzones for a system that can rotate arbitrarily.
About 75% of the time the code snippets it provided did what it said they did. But the other 25% was killer. Luckily I made a visualization system and was able to see when it made mistakes, but I think if I had tried to vibe code this months ago I'd still be trying.
(These were things like "how can I detect if an arbitrary small circle arc on a unit sphere intersects a circle of arbitrary size projected onto the surface of the unit sphere". With the right MATLAB setup this was easy to visualize and check; but I'm quite convinced it would have taken me a lot longer to understand the geometry and come up with the equations myself than it actually took me to complete the tool)
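A brute-force sketch of that kind of check (my own illustration, not the commenter's MATLAB tool, and simplified to a great-circle arc): a circle of angular radius theta around an axis on the unit sphere is the set of points p with axis·p = cos(theta), so sampling along the arc and looking for a sign change of axis·p - cos(theta) detects a boundary crossing.

```python
import numpy as np

def slerp(a, b, t):
    # Spherical linear interpolation between unit vectors a and b.
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if omega < 1e-12:
        return a
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def arc_crosses_circle(a, b, axis, theta, samples=1000):
    """Does the great-circle arc a->b cross the boundary of the spherical
    cap {p : axis.p = cos(theta)}? Brute force via dense sampling."""
    c = np.cos(theta)
    vals = [np.dot(axis, slerp(a, b, t)) - c for t in np.linspace(0.0, 1.0, samples)]
    # A sign change between consecutive samples means the arc crossed the circle.
    return any(v1 * v2 <= 0 for v1, v2 in zip(vals, vals[1:]))
```

A robust closed-form test exists (intersect the arc's great-circle plane with the cap's plane), but the sampled version is exactly the kind of thing a visualization harness makes easy to sanity-check.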
Aye.
One my standard coding tests for LLM is a spherical geometry problem, a near-triangle with all three corners being 90 degrees.
Until GPT-5, no model got it right, they only operated in the space of a euclidian projection; perhaps notably, while GPT-5 did get it right, it did so by writing and running a python script that imported a suitable library, not with its own world model.
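For reference, that 90-90-90 "near-triangle" can be verified with plain vector math rather than a library - a sketch using the octant triangle whose vertices sit on the coordinate axes, measuring each corner angle between the tangent directions of the two edges meeting there:

```python
import numpy as np

def vertex_angle(a, b, c):
    """Angle at vertex a of the spherical triangle abc (unit vectors)."""
    t_ab = b - np.dot(a, b) * a   # tangent at a, pointing toward b
    t_ac = c - np.dot(a, c) * a   # tangent at a, pointing toward c
    t_ab /= np.linalg.norm(t_ab)
    t_ac /= np.linalg.norm(t_ac)
    return np.arccos(np.clip(np.dot(t_ab, t_ac), -1.0, 1.0))

a, b, c = np.eye(3)  # one vertex on each coordinate axis
angles = [vertex_angle(a, b, c), vertex_angle(b, c, a), vertex_angle(c, a, b)]
# All three angles are 90 degrees; the sum exceeds pi by the spherical
# excess (pi/2), which is impossible in a Euclidean projection.
```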
Do you have any advice for entering group 2? I graduated university expecting to at least see jobs that needed those skills at least some of the time, but the hardest problem I've worked on was a variant of the knapsack problem and it happened in my first year out.
Take a look at the data storage industry, e.g. EBS and EFS teams at AWS, pure storage, net app, etc. The people there who work on the filesystems and block data path are doing legit applied computer science. I did it earlier in my career and it felt like being at Bell Labs in the 70s and 80s.
Is Claude Code actually useless, or is it a prompt engineering/context issue?
I haven't done anything with a UI for a decade and a half. Backend integrations, data transformations, piping, transfer protocols and so on. JavaScript hell avoided so far. No thank you.
Just to interject one bit... I actually really like JS/TS for basic data transformations, piping and ETL type work in general. If you understand how type coercion works with JS it can be really powerful as a tool for these types of workloads.
Meanwhile, for actual low-level work data is bytes not Javascript objects in memory, and Javascript is a miserable tool for transforming/processing bytes.
Agree
I work on a vision based traffic monitoring system, and modelling the idea of traffic and events goes into so much complexity without a word of code written
These people are working on problems that have tutorials online and don't know that someone had to make all that.
I agree 100%. IMHO this is the software that is vibe-codeable, by the way.
If you're going to deny people just because they haven't had a chance to work on more exciting stuff yet you will skip a lot of good candidates.
> It's also worth noting that a crud app can be quite complex too. There can be a lot of complexity even if the core is simple.
I suppose technically a database is just a CRUD app
Yeah, that's essentially what I mean when I say crud app. It's basically a web api written in something like C# or whatever you prefer, which receives HTTP requests and translates them into DB operations. CRUD and views basically.
For this type of development you want the DB to handle basically all the heavy lifting, the trick is to design the DB schema well and have the web API send the right SQL to get exactly the data you need for any given request and then your app will generally be nice and snappy. 90-99% of the logic is just SQL.
For the C# example you'd typically use Entity Framework so the entirety of the DB schema and the DB interaction etc is defined within the C# code.
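A toy sketch of that shape of app in Python with sqlite (illustrative names; the commenter's stack is C#/EF): the "app" is little more than functions that translate requests into the four SQL verbs, with the database doing the heavy lifting.

```python
import sqlite3

# In-memory DB standing in for the real one; schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT, done INTEGER DEFAULT 0)")

def create(title):
    cur = conn.execute("INSERT INTO tasks (title) VALUES (?)", (title,))
    conn.commit()
    return cur.lastrowid

def read(task_id):
    return conn.execute(
        "SELECT id, title, done FROM tasks WHERE id = ?", (task_id,)
    ).fetchone()

def update(task_id, done):
    conn.execute("UPDATE tasks SET done = ? WHERE id = ?", (int(done), task_id))
    conn.commit()

def delete(task_id):
    conn.execute("DELETE FROM tasks WHERE id = ?", (task_id,))
    conn.commit()
```

Swap the function calls for HTTP handlers and you have the archetypal crud app being discussed.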
I was actually going for the opposite point - databases generally meet the definition of CRUD app. You create rows, read them, update them and delete them (literally SQL verbs INSERT, SELECT, UPDATE, DELETE). But they are highly complex pieces of software. People who program them are generally hard core.
I think databases are more of the development environment for such apps, rather than the apps themselves. C.f. how Electron apps are "just web pages shipped with their own browser", and yet a browser (and its JavaScript runtime) are significantly more complex than almost any app built in the manner of an Electron app.
I prefer to just use Dapper for most DB interactions with C#... EF (and ORM tooling in general) can come with a lot of gotchas that simply understanding the SQL you are generating can go a long way to avoid.
Dapper is nice, but what you don't get as far as I know is migrations. With EF the app just spins up the whole DB from scratch which is great for onboarding or when you just needed a new laptop etc. Also EF is fine as long as you know what you're doing, or at least pay attention to what you're doing.
I use grate for migrations. I prefer a more manual, hands on approach to that as well.
I think you're missing the point of the comment you've replied to. That comment is talking about implementing the DB. When you're implementing a DB, you can't just forward reads and writes to another DB. Someone has to actually implement that DB on top of a filesystem, raw disk, or similar storage.
By the post's logic, if you work on a DB you likely have an SQL engine that is CRUD on top of a storage engine. And the storage engine is CRUD on top of some disk storage, which is CRUD over syscalls on files. Think MySQL using MyRocks using RocksDB. And keep applying until there's no sense left.
If you are referring to my post, it's about web applications. I'm not in any way claiming that postgres is a crud app. I'm describing how to design a good web application that mostly revolves around a database. Which is what people mean when they say crud app. It's just any app that's mostly crud. Where the majority of the logic can be handled by the database like I described.
A lot of apps are just about putting data into a DB and then providing access to that data. Maybe using the data to draw some nice graphs and stuff. That's a crud app.
> I'm in that boat, everything is just a crud app. I've worked on some fairly complex apps but at their core they were crud apps and most of their complexity were caused by bad developers overcomplicating and fumbling things.
Far too much of my recent work has been CRUD apps, several with wildly and needlessly overengineered plumbing.
But not all apps are CRUD. Simulations, games, and document editors where the sensible file format isn't a database, are also codebases I've worked on.
I think several of those codebases would also be vulnerable to vibe coding*; but this is where I'd focus my attention, as this kind of innovation seems to be the comparative advantage of humans over AI, and also is the space of innovative functions that can be patented.
* but not all; I'm currently converting an old C++ game to vanilla JS and the web, and I have to carefully vet everything the AI does because it tends to make un-performant choices.
Sometimes complexity exists because what you're doing is complex and there is a minimum to how simply it can be abstracted.
Yeah, but based on my own experience, most of the time complexity exists because devs suck. I know because I've simplified lots of code written by others, because rewriting it is simpler than maintaining their huge mess.
Or because they used the project as an excuse to learn a complicated but (in this case) unnecessary framework or technology stack as a resume enhancer.
Why wouldn't figma be considered a crud app? It’s still basically adding and updating things in a DB no? With some highly complex things like rendering, collab and stuff. (Fair question btw)
It's very, very far from a CRUD app or "just updating a DB". GUI-heavy apps are notoriously hard to get right. Any kind of "editable canvas" makes it 10x harder. Online collaboration is hard, so that's another 10x—there are known solutions, but it's an entire sidequest you have to pour a massive amount of effort into.
Custom text editing and rendering is really hard to do well.
Making everything smooth and performant to the point it's best-in-class while still adding new features is... remarkable.
(Speaking as someone who's writing a spreadsheet and slideshow editor myself...among other things)
It is a CRUD app, though, which is why that classification isn’t generally meaningful. CRUD basically just means the app has persistent storage.
"Having CRUD operations" and "Just a CRUD app" are very different things.
The custom text rendering bit alone should have been a good cue for this distinction.
I'm trying to imagine a scenario with a non-trivial app that is missing a create, read, update or delete operation. I'm coming up with so few examples that I have to imagine the colloquial use of CRUD app means just CRUD operations.
When people say "a CRUD app" they mean "an app that is mostly just CRUD"
This might have been the response trcf22 needed.
But sure, I’m being too pedantic here I suppose.
You should look into Figma. It's one of the few marvels of software engineering made in recent times.
If you want to know how tough realtime editing is, try making a simple collaborative drawing tool or something. Or an online 2 player text adventure game
There's a reason tutorials for those aren't common.
What makes Figma more complex?
Check out their old blog post on how they got real-time collaborative editing without conflicts to work: https://www.figma.com/blog/how-figmas-multiplayer-technology...
Collaboration and the whole editor UI in general is a much more complex task than your average glorified PDF with some dynamically rendered data.
It's not the pinnacle of complexity, just more complex than your average app.
I'd say the multi-user aspects and level of scale add quite a lot of complexity in to the mix.
Saying everything is a CRUD app is a reflection on the level of abstraction a developer usually works in.
Someone who worked more in embedded systems may say something like “everything is ‘just’ a state machine.”
It's also IMO valid. CRUD isn't derogatory. It's also not particularly illuminating though. Almost everything is a CRUD app. If you get the fundamental data structures, access patterns, and control flow right for those CRUD operations, you have the foundations for what can be a successful app. And then you enhance it further - games add nice graphics, collaborative workspaces add good conflict resolution, social media sites add addictive recommendation systems. The core is CRUD but that doesn't mean the work stops there.
Not actually critiquing the comment, just somewhat for my own memory and ref, there's several other "verbs" attached to a lot of those systems.
B / L / S - Browse / List / Summarize
M / T - Move / Transfer
C / R - Copy / Replicate
A / E - Append / Expand
T / S - Trim / Subtract
P - Process
possibly V / G / D - Visualize / Graph / Display
There's probably others that vary from just a Create (POST, PUT), Read (GET), Update (PATCH), Delete (DELETE) the way they're interpreted in something like REST APIs.
Embedded systems using memory-mapped I/O are just dozens of CRUD apps, each "register" in the memory map. You don't even need to worry about the C & D parts, just read & update. We can structure each peripheral's access via a microservice…
Everything is a CRUD app if you're high on buzzwords.
What’s the endpoint for the interrupt service? Does it use OAuth?
> I try to interview promising resumes even if they don't have the perfect experience match. Something that becomes obvious when doing this is that many developers have only operated on relatively simple projects. They would repeat things like "Everything is just a CRUD app" or not understand that going from Python or JavaScript to C++ for embedded systems was more complicated than learning different syntax for your if blocks and for loops.
I agree and disagree here. IMO the sign of experience is when you understand which details really matter, which details are more or less the same across the different stacks, and when you don't yet know enough to tell the difference and need to ask someone else.
I'm going to give a very concrete example of this so people can understand.
I built a fitness product eons ago where there were a million rules determining what should happen when prescribing exercises to athletes (college/pro teams).
If you gave this to an agent today, you will get a tangled mess of if statements that are impossible to debug or extend. This is primarily because LLMs are still bad at picking the right abstraction for a task. The right solution was to build a rules engine, use a constraint solver, and use some combinatorics.
LLMs just don't have the taste to make these decisions without guidance. They also lack the problem solving skills for things they've never seen.*
Was 95% of the app CRUD? Sure. But last I checked, CRUD was never a moat.
*I suspect this is part of why senior developers are in extremely high demand despite LLMs.
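A minimal sketch of the rules-engine idea (all rule names and athlete fields here are hypothetical, and a real system would layer the constraint solver on top): each rule is a predicate plus an action over the plan, so new rules are data rather than another branch in a tangle of if statements.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    applies: Callable[[dict], bool]   # predicate over the athlete profile
    action: Callable[[list], list]    # transformation of the exercise plan

# Illustrative rules, not from the actual product.
rules = [
    Rule("no-overhead-press-with-shoulder-injury",
         lambda athlete: "shoulder" in athlete.get("injuries", []),
         lambda plan: [e for e in plan if e != "overhead press"]),
    Rule("add-sprints-in-season",
         lambda athlete: athlete.get("phase") == "in-season",
         lambda plan: plan + ["sprints"]),
]

def prescribe(athlete, base_plan):
    """Apply every matching rule to the base plan, in order."""
    plan = list(base_plan)
    for rule in rules:
        if rule.applies(athlete):
            plan = rule.action(plan)
    return plan
```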
---
Another example: for many probability problems, Claude loves to code up simulations rather than explore closed form solutions. Asking Claude to make it faster often drives it to make coding optimizations instead of addressing the math. You have to guide Claude to do the right thing, and that means you have to know the right thing to do.
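A tiny illustration of that contrast, using two dice summing to 7: the closed form is one line and exact, while the simulation is approximate, and asking to "make it faster" naturally invites code tweaks rather than the math.

```python
import random
from fractions import Fraction

# Closed form: of the 36 equally likely outcomes of two fair dice, 6 sum to 7.
closed = Fraction(6, 36)  # exactly 1/6, computed in O(1)

# The Monte Carlo version an LLM often reaches for first.
def simulate(trials=100_000, seed=0):
    rng = random.Random(seed)
    hits = sum(rng.randint(1, 6) + rng.randint(1, 6) == 7 for _ in range(trials))
    return hits / trials
```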
I think the article is more reflective of the low-to-mid complexity product landscape, where surface-level features dominate and differentiation is minimal. But you're absolutely right: once you're building something that touches real-world complexity, there's a massive moat that AI tools can't easily bridge.
True, there is some kind of ceiling on what can and can't be done. But that ceiling is way up there. Also, there are enough examples, articles, and code out there to allow enough combinations that the result is good enough - and that is a very important bar.
There are A LOT of businesses (even big ones managing money and whatnot) that rely on spreadsheets to do so much. Could this have been an app/service/SaaS/whatever? Probably.
What if these orgs can (mostly) internally solidify some of these processes? What if they don't need an insanely expensive Salesforce implementor to add "custom logic"?
A lot of the time, companies will replace "complex software" with a half-complex process!
What if they don't need Salesforce at all because they need a reasonably simple CRM and don't want to (or shouldn't) pay $10k/seat/year?
There are still going to be very differentiating apps and services here and there, but as time moves on these "technological" advantages will erode, and with AI they erode way faster.
>You eventually reach a point where there are no blog posts or stackoverflow questions that walk you through step-by-step how to make this stuff happen.
I wonder if we can use this as a "novelty" test. If AI can explain or correct your ideas, it's not a novel idea.
Agree. This blog entry has vibes of: "I am a software developer, so I am so smart I can do everything and can definitely make a revolutionary healthcare app."
It ignores the actual complexity of things, the regulations, and the fact that there are areas where no one will take some vibe coder seriously - you really have to breathe in and out and swim with the right fish to be trusted and considered worth doing business with.
Umm... Complexity (especially with integrations) and regulations were two areas explicitly mentioned in the article as areas where you can still differentiate.
Also areas where large incumbents will keep upstarts out via regulatory capture.
If the software is doing complicated integrations, that may be a barrier as said in the article.
And to be clear, this is people using teams of Claude Code agents (either Sonnet 4.5 or Sonnet 5 and 5.5 in the future). Reliability/scale can be mitigated with a combination of a senior engineer or two, AI coding tools like the latest Claude Code, and the right language and frameworks. (Depending on the scale, of course.) It no longer takes a team of senior and mid-level engineers many months. The barriers even for that have been reduced.
Completely agree that using Lovable, Bolt, etc aren't going to compete except as part of noise, but that's not what this article is saying.
It actually sounds like you agree with the article quite a bit.
If your product doesn't solve problems on the difficult side of the "complexity cliff" then vibe coders will copy it and drive your profit to zero.
There are a lot of ways to define "different."
It's a poor choice of word to use as a clearly and universally understood axiom.
Doing only what AI can generate will only generate the average of the corpus.
Maybe that's part of the reason folks with some meaningful problem-solving experience, when paired with AI, are having completely different results: there is someone behind the steering wheel actually pushing, learning with it, and directing it.
I think there's truth in what you say (though if you are building something where you rely on blog posts you are probably doomed anyway).
But AI has huge value in gratuitously bulking out products in ways that are not economically feasible with hand coding.
As an example we are building a golf launch monitor and there is a UI where the golf club's path is rendered as it swings over the surface.
Without AI, the background would be a simple green #008000 rectangle.
With AI I can say "create a lifelike grass surface, viewed from above, where the individual blades of grass range from 2-4 mm wide and 10-14 mm long, are randomly distributed and densely enough placed that they entirely cover the surface, and shadows are cast from ...".
Basically stuff that makes your product stand out, but that you would never invest in putting a programmer onto. The net result is a bunch of complex math code, but it's stuff no human will ever need to understand or maintain.
Your example either supports “be different”, because the competition won’t think of it or won’t come up with the right prompting, or it supports TFA, because it’s easily replicated by the competition. It’s not clear which one you’re arguing for, given that GP argues against TFA.
Isn't this agreeing with the article? You can't just build something and hope for a market, you need to invest heavily to have a chance. You both are saying that, no?
I think GP is saying that this was already the case before LLMs. I.e. LLMs are only helping with things that were never part of a moat to begin with.
This is exactly right and is what one would expect from improving technology. A fractal frontier of new niches crack open as the economy keeps expanding.
I don't agree with the article either.
My view is that every company has its own DNA and that the web presence has to put this DNA in code. By DNA, I mean USP or niche. This USP or niche is tantamount to a trade secret but there doesn't even have to be innovation. Maybe there is just an excellent supplier arrangement going on behind the scenes, however, for projects, I look for more than that. I want an innovation that, because I understand the problem space and the code platform, I can see and implement.
A beginner level version of this, a simple job application form. On the backend I put the details from the browser session into form data. Therefore, HR could quickly filter out those applying for a local job that lived in a foreign country. They found this to be really useful. Furthermore, since some of our products were for the Apple ecosystem, I could get the applicant's OS in the form too, plus how long they agonised over filling in the form. These signals were also helpful.
To implement this I could use lame Stack Overflow solutions. Anyone scraping the site or even applying had no means of finding out if this was going on. Note the 'innovation' was not in any formal specification, that was just me 'being different'. In theory, my clumsy code to reverse lookup the IP address could have broken the backend form, and, had it done so, I would have paid a price for going off-piste and adding in my own non-Easter Egg.
I would not say the above example was encoding company DNA, but you get the idea. How would this stack up compared to today's AI driven recruitment tools?
As a candidate I would prefer my solution. As the employer, I too would prefer my solution, but I am biased. AI might know everything and be awesome at everything, however, sometimes human problems require human solutions and humans working with other humans to get something done.
Would I vibe code the form? Definitely no! My form would use simple form elements and labels with no classes, div wrappers or other nonsense, to leverage CSS grid layout and CSS variables to make it look good on all devices. It took me a while to learn to do forms this way, with a fraction of the markup in a fraction of the time.
I had to 'be different' to master this and disregard everything that had ever been written on Stack Overflow regarding forms, page layout and user experience.
AI does not have the capability to do super-neat forms like mine because it can't think for itself, just cherry-pick Stack Overflow solutions.
I liken what you describe with running out of Stack Overflow solutions to hill walking ('hiking'). You start at the base of the trail with vast quantities of others that have just stepped out of the parking lot, ice cream cones in hand. Then you go a mile in and the crowd has thinned. Another mile on and the crowd has definitely thinned, big time. Then you are on the final approach to the summit and you haven't seen anyone for seemingly hours. Finally, at the summit, you might meet one or two others.
Stack Overflow and blog posts are like this, at some stage you have to put it away and only use the existing code base as a guide. Then, at another level, you find specifications, scientific papers and the like to guide you to the 'summit'. AI isn't going to help you in this territory and you know you haven't got hundreds of competitors able to rip off your innovation in an instant.
> Where a great idea in a space once had 5-10 competitors, hundreds now appear - all competing for attention. Big companies used to move slowly, but now a ragtag team of two developers at a large firm can whip up something that looks top-of-market to the untrained eye in a matter of weeks.
Perhaps I'm out of touch, but I haven't seen this explosion of software competition. I'd LOVE to see some new competitors for MS Office, Gmail, Workday, Jira, EPIC, Salesforce, WebKit, Mint, etc etc but it doesn't seem to be happening.
I think this list demonstrates the OP's point: entrenched, resource-heavy, and reputable firms have captured most of these markets and will continue to, not for lack of competition but through ownership of the distribution channels.
Having said that, I don't think it's all AI (this trend's been going on for a while), nor do I think startups can't thrive—as the pie gets bigger, competitors can carve out yet smaller niches, as the OP points out.
You're right.
If that were true, the iOS App Store would be flooded with newcomers in niche spaces: workout apps, notes, reminders, and so on. And games, my god, there would be vibed clones of every game imaginable.
It's simply not happening.
The App Store is already flooded with competitors for every simple niche.
That’s why this article makes no sense to me. The “Cambrian explosion” was the introduction of the app stores on phones. There are 2 million apps on Apple’s store.
Because writing code hasn't been the bottleneck for success on the App Store in probably a decade; it's all about how to game algorithms or find someone with the power to boost your app.
And the same is true for almost all the names the GP posted - they are big because of network effects. Most people don't have time to evaluate the "quality" of software, and in the long term that quality can be extremely variable, so mostly people just hitch their wagons to existing tools: if everyone else is doing it, it must make sense.
Could Apple's $99 fee to get on the App Store, or their app review process, be holding back this flood?
For Jira, competitors are a dime a dozen. A lot of them are targeted at startups and small shops who heard Jira was hell and think their needs are really basic and will stay that way for a long time.
The funniest one I actually had to deal with was Monday. The very premise is that task management is simple and the visual interface should reflect that: bright colors, low information density, a minimal data model, and very screenshotable screens. Then, when we actually used it for a dev team, the first question was how long we'd trial it before giving a verdict.
> Gmail
It really depends on which features you rely on that aren't IMAP. If it's Google services integration, for instance, there might never be a real competitor.
You could build the most technically perfect MS Office competitor and still get zero users.
It's not about quality; it's about market share, vendor lock-in, and people being set in their ways, refusing to change from a known thing.
Jira had to get REALLY bad before we switched to Linear for example - and there are still growing pains from heavy Jira users who had very specific ways of working.
Some of us are building those... It just takes a little more time than vibe-coded AI slop
This article is based on vibes just like the trends it hypothesizes.
To pick just one claim:
“Big companies used to move slowly, but now a ragtag team of two developers at a large firm can whip up something that looks top-of-market to the untrained eye in a matter of weeks.”
This is just pure speculation with no consideration of success or longevity. Big companies are going faster now? Where? Which ones?
AI coding allows you to build prototypes quickly. All the reasons big companies are slow haven’t budged.
>This is just pure speculation with no consideration of success or longevity. Big companies are going faster now? Where? Which ones?
Yes, but there is a more fundamental problem - the claim doesn't even make sense:
>“Big companies used to move slowly, but now a ragtag team of two developers at a large firm can whip up something that looks top-of-market to the untrained eye in a matter of weeks.”
That was never the problem. I mean really, what is the implication of this? That big companies moved slowly because the developers were slow? What? No one thinks that, including the author (I imagine).
It's from many layers of decision-making, risk aversion, bureaucracy, coordination across many teams, technical debt, internal politics, etc.
This manifests as developers (and others) feeling slowed down by the weight of the company. Developers (and others) being relatively fast is precisely how we know the company is slow. So adding AI to the development workflow isn't going to speed anything up; there are too many other limiting factors.
Whenever people ask me about my job, I always say it’s about 20% programming, 80% dealing with people and I’m not even in a leadership position.
AI has not helped me at all in my corporate job, but at my startup it has been a game changer.
Even with all these tools available, big companies would still be unable to compete in speed simply because in 99% of cases they don't have the required culture set in place.
Yeah, the line about big companies suddenly moving fast felt more like wishful thinking than grounded observation.