I've been working on a bunch of different projects, trying out new stuff all the time, for the past six months.
Every time I do something I add another layer of AI automation/enhancement to my personal dev setup with the goal of trying to see how much I can extend my own ability to produce while delivering high quality projects.
I definitely wouldn't say I'm at 10x of what I could do before across the board, but a solid 2-3x on average.
In some respects like testing, it's perhaps 10x because having proper test coverage is essential to being able to let agentic AI run by itself in a git worktree without fearing that it will fuck everything up.
I do dream of a scenario where I could have a company that's equivalent to 100 or 1000 people with just a small team of close friends and trusted coworkers that are all using this kind of tooling.
I think the feeling of small companies is just better and more intimate and suits me more than expanding and growing by hiring.
That's not really new. In the 2000s, small teams using web frameworks like Rails could do as a team of 5 what needed a team of 50 in the 90s, or even pull it off as a weekend solo project.
What happened is that it became the new norm, and the window where you could charge for the work of 50 people with a team of 5 was short. Some teams cut their prices to gain market share and we were back to the usual revenue per employee. At some point nobody thought of a CRUD app with a web UI as a big project anymore.
That's probably what will happen here (if AI does give the same productivity boost as languages with memory management and web frameworks did): soon your company with a small team of friends will not be seen by anyone as equivalent to 100 or 1000 people, even if you can achieve the same thing as a company of that size could a few years earlier.
That's what Amazon is doing. They simply raise the output norm and promise mass layoffs again. Microsoft is promising the same; I'm not sure about the details, but they likely aren't cutting projects, which means use of some sort of copilot is now expected.
The question is what happens to developers. Will they quit the industry or move to smaller companies?
Instagram was 13 employees before they were purchased by Facebook. The secret is most employees in a 1000 person company don't need to be there or cover very niche cases that your company likely wouldn't have.
Don't fall for the lottery winner bias. Some companies just strike it rich, often for reasons entirely outside their control. That doesn't mean that copying their methods will lead to the same results.
And for enterprise sales you need a sales force, and many multi-billion-dollar companies have an enterprise sales force. Plus documentation writers, support staff in multiple geographies, events/marketing teams to support customers, etc.
Except it's not. All the big cloud providers catering to enterprises, whether SaaS or AWS/Google/Azure, definitely have large sales forces, whether they thought they needed them at first or not.
In comic form: https://xkcd.com/1827/
YouTube had fewer than 70 employees when Google bought them in 2006.
With a good idea and good execution teams can be impressively small.
> The secret is most employees in a 1000 person company don't need to be there or cover very niche cases that your company likely wouldn't have.
That is massively wrong, and frankly an insulting worldview that a lot of people on HN seem to have.
The secret is that some companies - usually ones focused on a single highly scalable technology product, and that don't need a large sales team for whatever reason - can be small.
The majority of companies are more technically complex, and often a 1,000 person company includes many, many people doing marketing, sales, integrations with clients, etc.
I have worked for many companies and the majority of them are not technically complex. They are drowning in tech debt from when they were in a high-growth phase, and many of the employees they have are there to handle that debt, but the product itself is technically very simple.
Which doesn't make my point wrong.
In many companies, tech, whether good or bad, is not the majority of the workforce, nor is it necessarily the "core competency" of the company, even if they are selling technical products! A much bigger deal is often their sales and marketing, their brand, etc.
> Every time I do something I add another layer of AI automation/enhancement to my personal dev setup with the goal of trying to see how much I can extend my own ability to produce while delivering high quality projects
Can you give some examples? What’s worked well?
- Extremely strict linting and formatting rules for every language you use in a project. Including JSON, YAML, SQL.
- Using AI code gen to make your own dev tools to automate tasks. Everything from "I need a make target to automate updating my staging and production config files when I make certain types of changes" or "make an ETL to clean up this dirty database" to "make a codegen tool to automatically generate library functions from the types I have defined" and "generate a polished CLI for this API for me"
- Using Tilt (tilt.dev) to automatically rebuild and live-reload software on a running Kubernetes cluster within seconds. Essentially, deploy-on-save.
- Much more expansive and robust integration test suites, with output such that an AI agent can automatically run integration tests, read the errors, and use them to iterate. With some guidance it can write more tests based on a small set of examples. It's also been great at adding formatted messages to every test assertion to make failed tests easier to understand (see the sketch after this list).
- Using an editor where an AI agent has access to the language server, linter, etc. via diagnostics to automatically understand when it makes severe mistakes and fix them
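As a rough illustration of the assertion-message point above, here's a minimal pytest-style sketch; the create_order helper and its values are made-up placeholders, not from a real project:

    # test_orders.py - hypothetical sketch; create_order stands in for the real
    # service call so the example stays self-contained and runnable.
    def create_order(items):
        # Stand-in for the system under test.
        return {"status": "created", "total": sum(i["qty"] * i["price"] for i in items)}

    def test_create_order_totals_line_items():
        items = [{"sku": "A1", "qty": 2, "price": 10}, {"sku": "B2", "qty": 1, "price": 5}]
        result = create_order(items)
        # Every assertion carries a formatted message, so a failed run tells the
        # agent (or a human) exactly what was expected vs. what came back.
        assert result["status"] == "created", (
            f"expected status 'created', got {result['status']!r}"
        )
        assert result["total"] == 25, (
            f"expected total 25 for {items!r}, got {result['total']!r}"
        )

The point is that the failure output itself becomes usable context for the agent's next iteration.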
A lot of this is traditional programming but sped up so that things that took hours a few years ago now take literally minutes.
I worry that once I've done all that I won't have time for my actual work. I also have to investigate all these new AI editors, and sign up for the APIs and work out which is best, then I have to learn how to prompt properly.
I worry that messing with the AI is the equivalent of tweaking my colour schemes and choosing new fonts.
Some of what I learned from a decade of keeping up with the profusion of JS libraries and frameworks seems relevant to AI:
- anything with good enough adoption is good enough (unless I'm an SME to judge directly)
- build something with it before considering a switch
- they're similar enough that what I learn in one will transfer to others
- everything sucks compared with 2-3 years from now; switching between "sucks" and "sucks+" will look silly in retrospect
> I also have to investigate all these new AI editors, and sign up for the APIs and work out which is best, then I have to learn how to prompt properly.
I found this didn't take me very long. Try things in order of how popular they seem and keep notes on what you do and don't like.
I personally settled on Zed (because I genuinely like the editor even with the AI bits turned off), Copilot (because Microsoft gave me a free subscription as an active OSS dev) and Claude Sonnet (seems to be a good balance). Other people I work with like Claude Code.
> make an ETL to clean up this dirty database
Can you provide concrete details?
When I do projects in this realm, it requires significant discussion with the business to understand how reality is modeled in the database and data, and that info is required before any notion of "clean up" can be defined.
Yeah, you still do all of that domain research and requirements gathering and system design as your meatbag job. But now instead of writing the ETL code yourself by hand you can get 80-90% of the way there in a minute or two with AI assistance.
> you can get 80-90% of the way there in a minute or two with AI assistance.
That just leaves the other 80-90% to do manually ;)
Claude recommended I use Tilt for setting up a new project at work. I wasn’t sure if it was worth it…is it pretty easy to set up a debugger? Not only do I have to adopt it, but I have to get a small team to be OK with it.
Our target deploy environment is K8S if that makes a difference. Right now I’m using mise tasks to run everything
If your programming language can do remote debugging you can set it up in Tilt: https://docs.tilt.dev/debuggers_python.html
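For a Python service, a minimal Tiltfile along these lines is roughly what it looks like - the image name, manifest path, and ports are placeholders, not taken from the docs page above:

    # Tiltfile (Starlark) - hypothetical names; adjust image, manifests and ports.
    docker_build('my-app', '.')          # rebuild the image whenever source changes
    k8s_yaml('k8s/deployment.yaml')      # apply the Kubernetes manifests
    k8s_resource(
        'my-app',
        port_forwards=[
            '8000:8000',                 # the app itself
            '5678:5678',                 # debugpy, assuming the container runs
                                         # python -m debugpy --listen 0.0.0.0:5678 ...
        ],
    )

With the debug port forwarded, your editor attaches to localhost:5678 like any other remote debug session, so the team only has to change how the service gets deployed on save, not how they debug.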
Even things that took days or weeks are being done in minutes now. And a few hours on top to ensure correctness.
If you haven’t, adding in strict(er) linting rules is an easy win. Enforcing documentation for public methods is a great one imo.
The more you can do to tell the AI what you want via a “code-lint-test” loop, the better the results.
Honestly the same is true for human devs. As frustrating as strict linting can be for newer devs, it's way less frustrating than having all the same issues pointed out in code review. It's interesting because I've been finding that all sorts of stuff that's good for AI is actually good for humans too: linting, fast and easy-to-run tests, standardized code layouts, etc. Humans just have more ability to adapt to oddities at the moment, which leads to slack.
My rule of thumb is that if I get a nit, whitespace, or syntax preferences as a PR comment, that goes into the linter. Especially for systemic issues like e.g. not awaiting functions that return a promise, any kind of alphabetization, import styles, etc.
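My examples there are JS-flavoured, but the same idea in Python looks roughly like this with Ruff - the rule groups here are illustrative picks, not a specific recommendation:

    # pyproject.toml - illustrative Ruff config, not quoted from anyone's real setup.
    [tool.ruff.lint]
    select = [
        "E", "F",   # pycodestyle / pyflakes basics
        "I",        # isort rules: import ordering, so "alphabetize your imports" nits disappear
        "D",        # pydocstyle: enforce docstrings on public modules/classes/functions
    ]

    [tool.ruff.lint.pydocstyle]
    convention = "google"   # pick one docstring style and let the linter police it

Once a nit lives in config like this, the code-lint-test loop catches it before a human or an agent ever opens the PR.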
Yeah, I find it pretty funny that so many of us (myself included) threw out strict documentation practices because "the code should be self documenting!" Now I want as much of it as I can get.
For us it's been auto-generating tests - we focus our effort on having the LLM write 1 test and manually verifying it. Then we use that as context and tell the LLM to extend it to all space groups and crystal systems.
So we get code coverage without all the effort; it works well for well-defined problems that can be verified with tests.
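Roughly what that workflow looks like in pytest form - classify_crystal_system and the expected values below are invented placeholders standing in for the real domain code:

    # test_crystal_systems.py - hypothetical sketch of "verify one, then parametrize".
    import pytest

    def classify_crystal_system(a, b, c, alpha, beta, gamma):
        # Stand-in for the real domain function; just enough logic for the sketch.
        if a == b == c and alpha == beta == gamma == 90:
            return "cubic"
        if a == b != c and alpha == beta == gamma == 90:
            return "tetragonal"
        return "triclinic"

    # The first case was written and verified by hand; the LLM then extended the
    # table to the remaining systems in the same shape.
    @pytest.mark.parametrize("cell,expected", [
        ((4.0, 4.0, 4.0, 90, 90, 90), "cubic"),        # manually verified seed case
        ((4.0, 4.0, 6.0, 90, 90, 90), "tetragonal"),
        ((3.0, 5.0, 7.0, 70, 80, 95), "triclinic"),
    ])
    def test_classify_crystal_system(cell, expected):
        result = classify_crystal_system(*cell)
        assert result == expected, f"for cell {cell} expected {expected!r}, got {result!r}"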
At some point you'll lose that edge because you stop being able to differentiate yourself. If you can x10 with agents, others can too. AI will let you reach the "higher" low hanging fruits.
Same thing as before, some people will x100 with agents while most will at maximum x10.
Using agents is the new skill. AI will always be chasing a tail where some are 10-100x more efficient at using the tool chain than others.
A while back, someone here linked to this story[0].
It's a bit simplified and idealized, but is actually fairly spot-on.
I have been using AI every day. Just today, I used ChatGPT to translate an app string into 5 languages.
[0] https://www.oneusefulthing.org/p/superhuman-what-can-ai-do-i...
Hopefully it’s better for individual strings, but I’ve heard a few native speakers of other languages (who also can speak English) complaining about websites now serving up AI-translated versions of articles by default. They are better than Google Translate of old, but apparently still bad enough that they’d much rather just be served the English original…
It's similar to my experience with the AI voice translation YouTube has - I'd rather listen to the original voice with translated subtitles than a fake voice.
> they’d much rather just be served the English original
Yes. And a site that gives me a poorly translated text (which may or may not be AI-translated) with no means to switch to English is an immediate back-button.
Usually, and especially with technical articles, poor/unreadable translations are identifiable within a few words. If the text seems like it could be interesting, I spend more time searching for the in-English button than I spent reading the text.
Exactly. I wouldn't use it for bulk translations. This was literally, 4 words.
What was useful, was that I could explain exactly what the context was, in both a technical and usability context, and it understood it enough to provide appropriate translations.
UPDATE: I went and verified it. The translation was absolutely perfect. Not sure what this means for translation services, but it certainly saved me several hundred dollars, and several days, just to add one label prompt to a free app.
Weblate has been doing that for any number of languages (up to 200 or however many it supports) for many years, using many different sources, including public translation memory reviewed by humans.
It can be plugged into your code forge and fully automated — you push the raw strings and get a PR with every new/modified string translated into every other language supported by your application.
I use its auto-translation feature to prepare quick and dirty translations into five languages, which lets you test right away and saves time for professional translators later — as they have told me.
If anyone is reading this, save yourself the time on AI bullshit and use Weblate — it's a FOSS project.
If someone reports a translation error, how do you verify and fix it? Especially those tricky ones that have no direct translation and require deep understanding of the language?
That's an issue, regardless of the source of translation.
For my bulk translations, I have used Babbelon[0] for years. They do a great job. I wouldn't dream of replacing them entirely with ChatGPT.
What I would use ChatGPT for, is when I need to do a very minor change (like adding a simple prompt label to an app). Maybe just a few words.
Doing that through the translation service is crazy. They have a minimum price, and it can take a day or three to get the results. Since I work quickly, and most of my apps are free (for users; they usually cost me, quite a bit), changes can be a problem. Translations are a major "concrete galosh"[1].
With ChatGPT, I can ask, not only for a direct translation, but can also explain the context, so the translation is relevant to the implementation. That's a lot of work for bulk, but quite feasible for small "spot jobs."
As far as responding to reports of issues, that isn't always "black and white." For instance, I live in the US, and Spanish is basically a second US language. But it isn't just "Spanish." We have a dozen different variants, and proponents of each can get very passionate about it.
For Spanish, I have learned to just use Castilian Spanish, most times. No one (except Spaniards) is completely happy, but it prevents too much bellyaching.
In some instances (like highly local sites), choosing a specific dialect may be advisable.
You verify by having a lot of friends, all over, who are true native speakers of languages. They usually aren't up for doing the translations, but are willing to vet the ones you do implement.
Localization is a huge topic, and maybe I'll write about it, sometime.
The "small but mighty" team model feels way more appealing to me too - less management overhead, more actual building
Definitely agree small teams are the way to go. The bigger the company the more cognitive dissonance is imposed on the employees. I need to work where everyone is forced to engage with reality and those that don’t are fired.
The thing is, luck is usually on the side of the bigger battalions. Smaller teams don't have the reach and breadth of bigger companies. All in all we need the full spectrum, from single-person startups to mega corporations.
Some people like working for big companies so this is fine. It’s a shame that outside of Silicon Valley there isn’t a high density of small software companies.
I think we're going to have to deal with stories of shareholders wetting themselves over more layoffs more than we're going to see higher-quality software produced. Everyone is claiming huge productivity gains, but generally software quality and the rate of new products being created seem at best unchanged. Where is all this new amazing software? It's time to stop all the talk and show something. I don't care that your SQL query was handled for you; that's not the bigger picture, that's just talk.
This has been an industry-wide problem in Silicon Valley for years now. For all their talk of changing the world, what we've gotten in the last decade has been taxi and hotel apps. Nothing truly revolutionary.
> what we've gotten in the last decade has been taxi and hotel apps. Nothing truly revolutionary.
I’m not sure where you are from, but this is not my perspective from Northern California.
1. Apps in general, and Uber in particular, have very much revolutionized the part-time work landscape via gig work. There are plenty of criticisms of gig work if/when people try to do it full time, but as a replacement for part time work, it’s incredible. I always try to strike up a conversation with my uber drivers about what they like about driving, and I have gotten quite a few “make my own schedule” and “earn/save for special things” (e.g., vacations, hobby items, etc.). Many young people I know love the flexibility of the gig apps for part-time work, as the pay is essentially market rate or better for their skill set, and they get to set their own schedule.
2. AirBnB has revolutionized housing. It's easier for folks to realize the middle-class dream of buying a house and renting it out fractionally (by the room). I've met several people who have spun up a few of these. Related, mid-term rentals (e.g., weeks or months rather than days or years) are much easier to arrange now than they were 20 years ago. AirBnBs have also created some market efficiency by pricing properties competitively. Note that I think that many of these changes are actually bad (e.g., it's tougher to buy a house where I am), but it's revolutionary nonetheless.
Yeah, but that's not tech; the positives are just the result of legal loopholes. I would say, though, that taxi companies now have proper apps because Uber forced them to catch up with the tech. Calling a taxi in the pre-Uber era was literally hell.
> Calling a taxi in the pre-Uber era was literally hell.
We clearly live in two completely separate parts of the world. I'm from Denmark (where Uber ran away after being told they had to operate as a taxi company) and calling a taxi was never a problem for me. You called the dispatch, said roughly where you were, and they came by with a dude in a car who you then told where you wanted to go. By now the taxi companies have apps too, but the experience is roughly identical.
The prices suck, but that's not really a usability problem.
Uber couldn’t exist before a critical mass had a smartphone with GPS in their pocket. I don’t see what the bar is if we don’t consider it revolutionary. Anything that goes mainstream is going to eventually feel pedestrian and watered down from a tech perspective. But if you look at how most people lived and worked 30 years ago to today it’s a massive change.
The best part is those two things have only gotten worse over time. Turns out, they were never really that good of an idea, they just had money to burn and legislative holes to exploit. Now Uber is more expensive than Taxis ever were, and AirBNB is virtually useless now that they have to play the same legal ballgame as hotels. Oh, and that one is more expensive too.
Tech companies forget that software is easy, the real world is hard. Computers are very isolated and perfect environments. But building real stuff, in meatspace, has more variables than anyone can even conceptualize.
> But building real stuff, in meatspace, has more variables than anyone can even conceptualize.
and is also exactly what people want. Having an app is fine and maybe cool, but at some point what I want from my taxi company is to get in a real car with a person who is preferably not a murderer and drive somewhere. The app is not very valuable to me unless it somehow optimizes that desirable part of the exchange.
I've worked a few years in the enterprise now, and the same thing keeps popping up. Startups think they have some cool cutting-edge technology to sell, but we aren't buying technology. We will gladly pay you to take away some real-life problem, but that also means you have to own any problems with your software, since we will be paying for a service, not software.
An internet full of pointless advertising and the invention of adblocks to hide that advertising.
Digital devices that track everything you do, which then generated so much data that the advertising actually got worse - even though the data was collected with the promise that the adverts would get more relevant.
Now comes AI to make sense of the data, and the training data (i.e., the internet) is being swamped with AI content, so the training data for AIs is becoming useless.
I wonder what is being invented to remove all the AI content from the training data.
This really resonates with me, I want to see the bigger picture as well.
It's one thing to speed up little tasks, another to ship something truly innovative
I do see the AI agent companies shipping like crazy. Cursor, Windsurf, Claude Code... they are adding features as if they have some magical workforce of tireless AI minions building them. Maybe they do!
A lot of what Cursor, Windsurf, etc. do kinda just feels like the next logical step you take with the invention of LLMs, but it doesn't actually feel like the greater system of software has changed all that much, except for the pure volume one individual can produce now.