Will AI be the basis of many future industrial fortunes, or a net loser?

2025-09-13 22:01 | joincolossus.com

The disruption is real. It's also predictable.

Fortunes are made by entrepreneurs and investors when revolutionary technologies enable waves of innovative, investable companies. Think of the railroad, the Bessemer process, electric power, the internal combustion engine, or the microprocessor—each of which, like a stray spark in a fireworks factory, set off decades of follow-on innovations, permeated every part of society, and catapulted a new set of inventors and investors into power, influence, and wealth.

Yet some technological innovations, though societally transformative, generate little in the way of new wealth; instead, they reinforce the status quo. Fifteen years before the microprocessor, another revolutionary idea, shipping containerization, arrived at a less propitious time, when technological advancement was a Red Queen’s race, and inventors and investors were left no better off for non-stop running.

Anyone who invests in the new new thing must answer two questions: First, how much value will this innovation create? And second, who will capture it? Information and communication technology (ICT) was a revolution whose value was captured by startups and led to thousands of newly rich founders, employees, and investors. In contrast, shipping containerization was a revolution whose value was spread so thin that in the end, it made only a single founder temporarily rich and only a single investor a little bit richer.

Is generative AI more like the former or the latter? Will it be the basis of many future industrial fortunes, or a net loser for the investment community as a whole, with a few zero-sum winners here and there?

There are ways to make money investing in the fruits of AI, but they will depend on assuming the latter—that it is once again a less propitious time for inventors and investors, that AI model builders and application companies will eventually compete each other into an oligopoly, and that the gains from AI will accrue not to its builders but to customers. A lot of the money pouring into AI is therefore being invested in the wrong places, and aside from a couple of lucky early investors, those who make money will be the ones with the foresight to get out early.

The microprocessor was revolutionary, but the people who invented it at Intel in 1971 did not see it that way—they just wanted to avoid designing desktop calculator chipsets from scratch every time. But outsiders realized they could use the microprocessor to build their own personal computers, and enthusiasts did. Thousands of tinkerers found configurations and uses that Intel never dreamed of. This distributed and permissionless invention kicked off a “great surge of development,” as the economist Carlota Perez calls it, triggered by technology but driven by economic and societal forces.[1]

There was no real demand for personal computers in the early 1970s; they were expensive toys. But the experimenters laid the technical groundwork and built a community. Then, around 1975, a step-change in the cost of microprocessors made the personal computer market viable. The Intel 8080 had an initial list price of $360 ($2,300 in today’s dollars). MITS could barely turn a profit on its Altair at a bulk price of $75 each ($490 today). But when MOS Technology started selling its 6502 for $25 ($150 today), Steve Wozniak could afford to build a prototype Apple. The 6502 and the similarly priced Zilog Z80 forced Intel’s prices down. The nascent PC community started spawning entrepreneurs and a score of companies appeared, each with a slightly different product.

You couldn’t have known in the mid-1970s that the PC (and PC-like products, such as ATMs, POS terminals, smartphones, etc.) would revolutionize everything. While Steve Jobs was telling investors that every household would someday have a personal computer (a wild underestimate, as it turned out), others questioned the need for personal computers at all. As late as 1979, Apple’s ads didn’t tell you what a personal computer could do—they asked what you did with it.[2] The established computer manufacturers (IBM, HP, DEC) had no interest in a product their customers weren’t asking for. Nobody “needed” a computer, and so PCs weren’t bought—they were sold. Flashy startups like Apple and Sinclair used hype to get noticed, while companies with footholds in consumer electronics like Atari, Commodore, and Tandy/RadioShack used strong retail connections to put their PCs in front of potential customers.

The market grew slowly at first, accelerating only as experiments led to practical applications like the spreadsheet, introduced in 1979. As use grew, observation of use caused a reduction in uncertainty, leading to more adoption in a self-reinforcing cycle. This kind of gathering momentum takes time in every technological wave: It took almost 30 years for electricity to reach half of American households, for example, and it took about the same amount of time for personal computers.[3] When a technological revolution changes everything, it takes a huge amount of innovation, investment, storytelling, time, and plain old work. It also sucks up all the money and talent available. Like Kuhn’s paradigms in science, any technology not part of the wave’s techno-economic paradigm will seem like a sideshow.[4]

The nascent growth of PCs attracted investors—venture capitalists—who started making risky bets on new companies. This development incentivized more inventors, entrepreneurs, and researchers, which in turn drew in more speculative capital.

Companies like IBM, the computing behemoth before the PC, saw poor relative performance. They didn’t believe the PC could survive long enough to become capable in their market and didn’t care about new, small markets that wanted a cheaper solution.

Retroactively, we give the PC pioneers the powers of prophets rather than visionaries. But at the time, nobody outside of a small group of early adopters paid any attention. Establishment media like The New York Times didn’t take the PC seriously until after IBM’s PC was introduced in August 1981. In the entire year of 1976, when Apple Computer was founded, the NYT mentioned PCs only four times.[5] Apparently, only the crazy ones, the misfits, the rebels, and the troublemakers were paying attention.

It’s the element of surprise that should strike us most forcefully when we compare the early days of the computer revolution to today. No one took note of personal computers in the 1970s. In 2025, AI is all we seem to talk about.

Big companies hate surprises. That’s why uncertainty makes a perfect moat for startups. Apple would never have survived had IBM entered the market in 1979, and it only lived to compete another day after raising $100 million in its 1980 IPO. It was the only remaining competitor after the IBM-induced winnowing.[6]

As the tech took hold and started to show promise, innovations in software, memory, and peripherals like floppy disk drives and modems joined it. They reinforced one another, with each advance putting pressure on the technologies adjacent to it. When any part of the system held back the other parts, investors rushed to fund that sector. As increases in PC memory allowed more complicated software, for example, the need for external storage grew, prompting VC Dave Marquardt to invest in disk drive manufacturer Seagate in 1980. Seagate gave Marquardt a 40x return when it went public in 1981. Other investors noticed, and some $270 million was plowed into the industry in the following three years.[7]

Money also poured into the underlying infrastructure—fiber optic networks, chip making, etc.—so that capacity was never a bottleneck. Companies which used the new technological system to outperform incumbents began to take market share, and even staid competitors realized they needed to adopt the new thing or die. The hype became a froth which became an investment bubble: the dot-com frenzy of the late 1990s. The ICT wave was therefore similar to previous ones—like the investment mania of the 1830s and the Roaring ‘20s, which followed the infrastructure buildout of canals and railways, respectively—in which the human response to each stage predictably generated the next.

When the dot-com bubble popped, society found it disapproved of the excesses in the sector and governments found they had the popular support to reassert authority over the tech companies and their investors. This put a brake on the madness. Instead of the reckless innovation of the bubble, companies started to expand into proven markets, and financiers moved from speculating to investing. Entrepreneurs began to focus on finding applications rather than on innovating the underlying technologies. Technological improvements continued, but change became more evolutionary than revolutionary.

As change slowed, companies gained the confidence to invest for the longer term. They began to combine various parts of the system in new ways to create value for a wider group of users. The massive overbuilding of fiber optic telecom networks and other infrastructure during the frenzy left plenty of cheap capacity, keeping the costs of expansion down. It was a great time to be a businessperson and investor.

In contrast, society did not need a bubble to pop to start excoriating AI. Given that the backlash to tech has been going on for a decade, this seems normal to us. But the AI backlash differs from the general high regard, earlier in the cycle, enjoyed by the likes of Bill Gates, Steve Jobs, Jeff Bezos, and others who built big tech businesses. The world hates change, and only gave tech a pass in the ‘80s and ‘90s because it all still seemed reversible: it could be made to go away if it turned out badly. This gave the early computer innovators some leeway to experiment. Now that everyone knows computers are here to stay, AI is not allowed the same wait-and-see attitude. It is seen as part of the ICT revolution.

Perez, the economist, breaks each technological wave into four predictable phases: irruption, frenzy, synergy, and maturity. Each has a characteristic investment profile.

The middle two, frenzy and synergy, are the easy ones for investors. Frenzy is when everyone piles in and investors are rewarded for taking big risks on unproven ideas, culminating in a bubble whose collapse wipes out paper profits. When rationality returns, the synergy phase begins, as companies make their products usable and productive for a wide array of users. Synergy pays those who are patient, picky, and can bring more than just money to the table.

Irruption and maturity are more difficult to invest in.

Investing in the 1970s was harder than it might look in hindsight. To invest from 1971 through 1975, you had to be either a true believer or a conglomerator with a knuckle-headed diversification strategy. Intel was a great investment, though it looked at first like a previous-wave electronics company. MOS Technology was founded in 1969 to compete with Texas Instruments but sold a majority of itself to Allen-Bradley to stay afloat. Zilog was funded in 1975 by Exxon (Exxon!). Apple was a great investment, but it had none of the hallmarks of what VCs look for, as the PC was still a solution in search of a problem.

It was later in the irruption, in the early 1980s, that great opportunities proliferated: PC makers (Compaq, Dell), software and operating systems (Microsoft, Electronic Arts, Adobe), peripherals (Seagate), workstations (Sun), and computer stores (Businessland), among others. If you invested in the winners, you did well. But there was still more money than ideas, which meant that it was no golden age for investing. By 1983, there were more than 70 companies competing in the disk drive sector alone, and valuations collapsed. There were plenty of people whose fortunes were established in the 1970s and 1980s, and many VCs made their names in that era. But the biggest advantage to being an irruption-stage investor was building institutional knowledge to invest early and well in the frenzy and synergy phases.

Investing in the maturity phase is even more difficult. In irruption, it’s hard to see what will happen; in maturity, nothing much happens at all. The uncertainty about what will work and how customers and society will react is almost gone. Things are predictable, and everyone acts predictably.

The lack of dynamism allows the successful synergy companies to remain entrenched (see: the Nifty 50 and FAANG), but growth becomes harder. They start to enter each other’s markets, conglomerate, raise prices, and cut costs. The era of products priced to entice new customers ends, and quality suffers. The big companies continue to embrace the idea of revolutionary innovation, but feel the need to control how their advances are used. R&D spending is redirected from product and process innovation toward increasingly fruitless attempts to find ways to extend the current paradigm. Companies frame this as a drive to win, but it’s really a fear of losing.

Innovation can happen during maturity, sometimes spectacularly. But because these innovations only find support if they fit into the current wave’s paradigm, they are easily captured in the dominant companies’ gravity wells. This means making money as an entrepreneur or investor in them is almost impossible. Generative AI is clearly being captured by the dominant ICT companies, which raises the question of whether this time will be different for inventors and investors—a different question from whether AI itself is a revolutionary technology.

Shipping containerization was a late-wave innovation that changed the world, kicked off our modern era of globalization, resulted in profound changes to society and the economy, and contributed to rapid growth in well-being. But there were, perhaps, only one or two people who made real money investing in it.

The year 1956 was late in the previous wave. But that year, the company soon to be known as SeaLand revolutionized freight shipping with the launch of the first containership, the Ideal-X. SeaLand’s founder, Malcom McLean, had an epiphany that the job to be done by truckers, railroads, and shipping lines was to move goods from shipper to destination, not to drive trucks, fill boxcars, or lade boats. SeaLand allowed freight to transfer seamlessly from one mode to another, saving time, making shipping more predictable, and cutting costs—both the costs of loading, unloading, and reloading, and the cost of a ship sitting idly in port as it was loaded and unloaded.[8]

The benefits of containerization, if it could be made to happen, were obvious. Everybody could see the efficiencies, and customers don’t care how something gets to where they can buy it, as long as it does. But longshoremen would lose work, politicians would lose the votes of those who lost work, port authorities would lose the support of the politicians, federal regulators would be blamed for adverse consequences, railroads might lose freight to shipping lines, shipping lines might lose freight to new shipping lines, and it would all cost a mint. Most thought McLean would never be able to make it work.

McLean squeezed through the cracks of the opposition he faced. He bought and retrofitted war surplus ships, lowering costs. He went after the coastal shipping trade, a dying business in the age of the new interstates, to avoid competition. He set up shop in Newark, NJ, rather than the shipping hub of Hell’s Kitchen, to get buy-in from the port authority and avoid Manhattan congestion. And he made a deal with the New York longshoremen’s union, which was only possible because he was a small player whom they figured was not a threat.

But competitors and regulators moved too quickly for McLean to secure the few barriers to entry that might have been available to him: domination of the ports, exclusive agreements with shippers or other forms of transportation, standardization on proprietary technology, etc.[9] When it started to look like it might work, around 1965, the obvious advantages of containerization meant that every large shipping line entered the business, and competition took off. Even though containerized freight was less than 1% of total trade by 1968, the number of containerships was already ramping fast.[10] Capacity outstripped demand for years.

The increase in competition led to a rate war, which led to squeezed profits, which in turn led to consolidation and cartels. Meanwhile, the cost of building ever-larger container ships and the port facilities to deal with them meant the business became hugely capital intensive. McLean saw the writing on the wall and sold SeaLand to R.J. Reynolds in January 1969. He was, perhaps, the only entrepreneur to get out unscathed.

It took a long time for the end-to-end vision to be realized. But around 1980, a dramatic drop began in the cost of sea freight.[11] This contributed to a boom in international trade[12] and allowed manufacturers to move away from higher-wage to lower-wage countries, making containerization irreversible.

Some people did make money, of course; someone always does. McLean did, as did shipping magnate Daniel Ludwig, who had invested $8.5 million in SeaLand’s predecessor, McLean Industries, at $8.50 per share in 1965 and sold in 1969 for $50 per share.[13] Shipbuilders made money, too: between 1967 and 1972, some $10 billion ($80 billion in 2025 dollars) was spent building containerships. The contractors that built the new container ports also made money. And, later, shipping lines that consolidated and dominated the business, like Maersk and Evergreen, became very large. But, “for R.J. Reynolds, and for other companies that had chased fast growth by buying into container shipping in the late 1960s, their investments brought little but disappointment.”[14] Aside from McLean and Ludwig, it is hard to find anyone who became rich from containerization itself, because competition and capex costs made it hard to grow fast or achieve high margins.

The business ended up being dominated primarily by the previous incumbents, and the margins went to the companies shipping goods, not the ones they shipped through. Companies like IKEA benefited from cheap shipping, going from a provincial Scandinavian company in 1972 to the world’s largest furniture retailer by 2008; container shipping was a perfect fit for IKEA’s flat-pack furniture. Others, like Walmart, used the predictability enabled by containerization to lower inventory and its associated costs.

With hindsight, it’s easy to see how you could have invested in containerization: not in the container shipping industry itself, but in the industries that benefited from containerization. But even here, the success of companies like Walmart, Costco, and Target was coupled with the failure of others. The fallout from containerization set Sears and Woolworth on downward spirals, put the final nail in the coffin of Montgomery Ward and A&P, and drove Macy’s into bankruptcy before it was rescued and downsized by Federated. Meanwhile, in North Carolina, “the furniture capital of the world,” furniture makers tried to compete with IKEA by importing cheap pieces from China. They ended up being replaced by their suppliers.[15]

If there had been more time to build moats, there might have been a few dominant containerization companies, and the people behind them would be at the top of the Forbes 400, while their investors would be legendary. But moats take time to build and, unlike the personal computer, the adoption of containerization wasn’t a surprise—every business with interests at stake had a strategic plan immediately.

The economist Joseph Schumpeter said “perfect competition is and always has been temporarily suspended whenever anything new is being introduced.”[16] But containerization shows this isn’t true at the end of tech waves. And because there is no economic profit during perfect competition, there is no money to be made by innovators during maturity. Like containerization, the introduction of AI did not lead to a period of protected profits for its innovators. It led to an immediate competitive free-for-all.

Let’s grant that generative AI is revolutionary (but also that, as is becoming increasingly clear, this particular tech is now already in an evolutionary stage). It will create a lot of value for the economy, and investors hope to capture some of it. When, who, and how depends on whether AI is the end of the ICT wave, or the beginning of a new one. 

If AI had started a new wave, there would have been an extended period of uncertainty and experimentation. There would have been a population of early adopters experimenting with their own models. When thousands or millions of tinkerers use the tech to solve problems in entirely new ways, its uses proliferate. But because they are using models owned by the big AI companies, their ability to fully experiment is limited to what’s allowed by the incumbents, who have no desire to permit an extended challenge to the status quo.

This doesn’t mean AI can’t start the next technological revolution. It might, if experimentation becomes cheap, distributed and permissionless—like Wozniak cobbling together computers in his garage, Ford building his first internal combustion engine in his kitchen, or Trevithick building his high-pressure steam engine as soon as James Watt’s patents expired. When any would-be innovator can build and train an LLM on their laptop and put it to use in any way their imagination dictates, it might be the seed of the next big set of changes—something revolutionary rather than evolutionary. But until and unless that happens, there can be no irruption.

AI is instead the epitome of the ICT wave. The computing visionaries of the 1960s set out to build a machine that could think, which their successors eventually did, by extending gains in algorithms, chips, data, and data center infrastructure. Like containerization, AI is an extension of something that came before, and therefore no one is surprised by what it can and will do. In the 1970s, it took time for people to wrap their heads around the desirability of powerful and ubiquitous computing. But in 2025, machines that think better than previous machines are easy for people to understand.

Consider the extent to which the progress of AI rhymes with the business evolution of containerization:

In the “AI rhymes” column, the first four items are already underway. How you should invest depends on whether you believe Nos. 5–7 are next.

Economists are predicting that AI will increase global GDP somewhere from 1%[17] to more than 7%[18] over the next decade, which is $1–7 trillion of new value created. The big question is where that money will stick as it flows through the value chain.
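As a sanity check on those figures, here is the back-of-the-envelope arithmetic. The $100 trillion base for global GDP is my assumption for illustration; the article does not state one.

```python
# Back-of-the-envelope check on the $1-7 trillion range.
# Assumes global GDP of roughly $100 trillion (an assumption,
# not a figure from the article).
GLOBAL_GDP_TRILLIONS = 100

for pct in (1, 7):
    value = GLOBAL_GDP_TRILLIONS * pct / 100
    print(f"{pct}% of GDP -> ${value:.0f} trillion")
```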

Most AI market overviews have a score or more categories, breaking each of them into customer and industry served. But these will change dramatically over the next few years. You could, instead, just follow the money to simplify the taxonomy of companies:

What the history of containerization suggests is that, if you aren’t already an investor in a model company, you shouldn’t bother. Sam Altman and a few other early movers may make a fortune, as McLean and Ludwig did. But the huge costs of building and running a model, coupled with intense competition, means there will, in the end, be only a few companies, each funded and owned by the largest tech companies. If you’re already an investor, congratulations: There will be consolidation, so you might get an exit.

Domain-specific models—like Cursor or Harvey—will be part of the consolidation. These are probably the most valuable models. But fine-tuning is relatively cheap, and there are big economies of scope. On the other hand, just as Google had to buy Invite Media in 2010 to figure out how to sell to ad agencies, domain-specific model companies that have earned the trust of their customers will be prime acquisition targets. And although it seems possible that models which generate things other than language—like Midjourney or Runway—might use their somewhat different architecture to carve out a separate technological path, the LLM companies have easily entered this space as well. Whether this applies to companies like Osmo remains to be seen.

While it’s too late to invest in the model companies, the profusion of those using the models to solve specific problems is ongoing: Perplexity, InflectionAI, Writer, Abridge, and a hundred others. But if any of these become very valuable, the model companies will take their earnings, either through discriminatory pricing or vertical integration. Success, in other words, will mean defeat—always a bad thesis. At some point, model companies and app companies will converge: There will simply be AI companies, and only a few of them. There will be some winners, as always, but investments in the app layer as a whole will lose money. 

The same caveat applies, however: If an app company can build a customer base or an amazing team, it might be acquired. But these companies aren’t really technology companies at all; they are building a market on spec and have to be priced as such. A further caveat is that there will be investors who make a killing arbitraging FOMO-panicked acquirers willing to massively overpay. But this is not really “investing.”

There might be an investment opportunity in companies that manage the interface between the AI giants and their customers, or protect company data from the model companies—like Hugging Face or Glean—because these businesses are by nature independent of the models. But no analogue in the post-containerization shipping market became very large. Even the successful intermediation companies in the AI space will likely end up mid-sized because the model companies will not allow them to gain strategic leverage—another consequence of the absence of surprise.

When an industry is going to be big but there is uncertainty about how it will play out, it often makes sense to swim upstream to the industry’s suppliers. In the case of AI, this means the chip providers, data companies, and cloud/data center companies: SambaNova, Scale AI, and Lambda, as well as those that have been around for a long time, like Nvidia and Bloomberg.

The case for data is mixed. General data—i.e., things most people know, including everything anyone knew more than, say, 10 years ago, and most of what was learned after that—is a commodity. There may be room for a few companies to do the grunt work of collating and tagging it, but since the collating and tagging might best be done by AI itself, there will not be a lot of pricing leverage. Domain-specific models will need specialist data, and other models will try to answer questions about the current moment. Specific, timely, and hard to reproduce data will be valuable. This is not a new market, of course—Bloomberg and others have done well by it. A more concentrated customer base will lower prices for this data, while wider use will raise revenues. On balance, this will probably be a plus for the industry, though not a huge one. There will be new companies built, but only a couple worth investing in.

The high capex of AI companies will primarily be spent with the infrastructure companies. These companies are already valued with this expectation, so there won’t be an upside surprise. But consider that shipbuilding benefited from containerization from 1965 until demand collapsed after about 1973.[19] If AI companies consolidate or otherwise act in concert, even a slight downturn that forces them to conserve cash could turn into a serious, sudden, and long-lasting decline in infrastructure spending. This would leave companies like Nvidia and its emerging competitors—who must all make long-term commitments to suppliers and for capacity expansion—unable to lower costs to match the new, smaller market size. Companies priced for an s-curve are overpriced if there’s a peak and decline.

All of which means that investors shouldn’t swim upstream, but fish downstream: companies whose products rely on achieving high-quality results from somewhat ambiguous information will see increased productivity and higher profits. These sectors include professional services, healthcare, education, financial services, and creative services, which together account for between a third and a half of global GDP and have not seen much increased productivity from automation. AI can help lower costs, but as with containerization, how individual businesses incorporate lower costs into their strategies—and what they decide to do with the savings—will determine success. To put it bluntly, using cost savings to increase profits rather than grow revenue is a loser’s game.

The companies that will benefit most rapidly are those whose strategies are already conditional on lowering costs. IKEA’s longtime strategy was to sell quality furniture for low prices and make it up on volume. After containerization made it possible for them to go worldwide, IKEA became the world’s largest furniture retailer and Ingvar Kamprad (the IK of IKEA) became a billionaire. Similarly, Walmart, whose strategy was high volume and low prices in underserved markets, benefited from both cost savings and just-in-time supply chains, allowing increased product variety and lower inventory costs.

Today’s knowledge-work companies that already prioritize the same values are the least risky way to bet on AI, but new companies will form or re-form with a high-volume, low-cost strategy, just as Costco did in the early 1980s. New companies will compete with the incumbents, but with a clean slate and hindsight. Regardless, there are few barriers to entry, so each of these firms will face stiff competition and operate in fragmented markets. Experienced management and flawless execution will be key.

Being an entrepreneur will be a fabulous proposition in these sectors. Being an investor will be harder. Companies will not need much private capital—IKEA never needed to raise risk capital, and Costco raised only one round in 1983 before going public in 1985—because implementing cost-savings technology is not capital intensive. As with containerization, there will be a long lag between technology trigger and the best investments. The opportunities will be later.

Stock pickers will also make money, but they need to be choosy. At the high end of projections, an additional 7% in GDP growth over ten years within one third of the economy gives a tailwind of only about 2% per year to these companies—even less if productivity growth from older ICT products abates. The primary shift of value will be from companies that are not embracing the strategic implications of AI to companies that are, the way Walmart gained at the expense of Sears, which took advantage of cheaper goods prices but did not reinvent itself.
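The ~2% figure can be reproduced with simple compounding arithmetic. Assigning the entire upper-end 7% GDP uplift to the exposed one-third of the economy is an assumption for illustration, not a claim from the article.

```python
# Reproducing the ~2%/year tailwind estimate. Assumption (for
# illustration only): the full upper-end 7% GDP uplift over ten
# years accrues to the one-third of the economy exposed to AI.
extra_gdp_share = 0.07    # decade-long GDP uplift, upper end
exposed_fraction = 1 / 3  # services share of the economy
years = 10

cumulative = extra_gdp_share / exposed_fraction    # ~21% gain for that third
annual = (1 + cumulative) ** (1 / years) - 1       # ~1.9% per year
print(f"annual tailwind: {annual:.1%}")            # annual tailwind: 1.9%
```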

Consumers, however, will be the biggest beneficiaries. Previous waves of mechanization benefited labor productivity in manufacturing, driving prices down and saving consumers money. But increased labor productivity in manufacturing also led to higher manufacturing wages. Wages in services businesses had to rise to compete, even though these businesses did not benefit from productivity gains. This caused the price of services to rise.[20] The share of household spending on food and clothing went from 55% in 1918 to 16% in 2023,[21] but the costs of knowledge-intensive services like healthcare and education have grown well above inflation.

Something similar will happen with AI: Knowledge-intensive services will get cheaper, allowing consumers to buy more of them, while services that require person-to-person interaction will get more expensive, taking up a greater percentage of household spending. This points to obvious opportunities in both. But the big news is that most of the new value created by AI will be captured by consumers, who should see a wider variety of knowledge-intensive goods at reasonable prices, and wider and more affordable access to services like medical care, education, and advice.

There is nothing better than the beginning of a new wave, when the opportunities to envision, invent, and build world-changing companies leads to money, fame, and glory. But there is nothing more dangerous for investors and entrepreneurs than wishful thinking. The lessons learned from investing in tech over the last 50 years are not the right ones to apply now. The way to invest in AI is to think through the implications of knowledge workers becoming more efficient, to imagine what markets this efficiency unlocks, and to invest in those. For decades, the way to make money was to bet on what the new thing was. Now, you have to bet on the opportunities it opens up.

Jerry Neumann is a retired venture investor, writing and teaching about innovation.



Comments

  • By Waterluvian 2025-09-13 23:20 (19 replies)

    I think the interesting idea with “AI” is that it seems to significantly reduce barriers to entry in many domains.

    I haven’t seen a company convincingly demonstrate that this affects them at all. Lots of fluff but nothing compelling. But I have seen many examples by individuals, including myself.

    For years I’ve loved poking at video game dev for fun. The main problem has always been art assets. I’m terrible at art and I have a budget of about $0. So I get asset packs off Itch.io and they generally drive the direction of my games because I get what I get (and I don’t get upset). But that’s changed dramatically this year. I’ll spend an hour working through graphics design and generation and then I’ll have what I need. I tweak as I go. So now I can have assets for whatever game I’m thinking of.

    Mind you this is barrier to entry. These are shovelware quality assets and I’m not running a business. But now I’m some guy on the internet who can fulfil a hobby of his and develop a skill. Who knows, maybe one day I’ll hit a goldmine idea and commit some real money to it and get a real artist to help!

    It reminds me of what GarageBand or iMovie and YouTube and such did for making music and videos so accessible to people who didn’t go to school for any of that, let alone owned complex equipment or expensive licenses to Adobe Thisandthat.

    • By nostrademons 2025-09-13 23:57 (3 replies)

      I've noticed this as well. It's a huge boon for startups, because it means that a lot of functions that you would previously need to hire specialists for (logo design! graphic design! programming! copywriting!) can now be brought in-house, where the founder just does a "good enough" job using AI. And for those that can't (legal, for example, or various SaaS vendors) the AI usually has a good idea of what services you'd want to engage.

      Ironically though, having lots of people found startups is not good for startup founders, because it means more competition and a much harder time getting noticed. So it's unclear whether prosumers and startup founders will be the eventual beneficiaries here either.

      It would be ironic if AI actually ended up destroying economic activity because tasks that were frequently large-dollar-value transactions now become a consumer asking their $20/month AI to do it for them.

      • By chii 2025-09-14 5:35 (7 replies)

        > ironic if AI actually ended up destroying economic activity

        that's not destroying economic activity - it's removing a less efficient activity and replacing it with a more efficient version. This produces economic surplus.

        Imagine saying this for someone digging a hole, that if they use a mechanical digger instead of a hand shovel, they'd destroy economic activity since it now cost less to dig that hole!

        • By nostrademons 2025-09-14 14:00 (4 replies)

          It's not that it's replacing one form of activity with a cheaper one, it's that it removes the transaction. Which means that now there's nothing to tax, and nothing to measure. As far as GDP is concerned, economic activity will have gone down, even though the same work is being accomplished differently.

          • By dghlsakjg 2025-09-14 14:30 (2 replies)

            This sounds an awful lot like a cousin of the broken window fallacy.

            The fallacy being that when a careless kid breaks a window of a store, that we should celebrate because the glazier now has been paid to come out and do a job. Economic activity has increased by one measure! Should we go around breaking windows? Of course not.

            • By nostrademons 2025-09-14 14:46 (2 replies)

              It very much is a cousin of the broken window fallacy.

              Bastiat's original point of the Parable of the Broken Window could be summed up by the aphorism "not everything that counts can be counted, and not everything that can be counted counts". It's a caution to society to avoid relying too much on metrics, and to realize that sometimes positive metrics obscure actual negative outcomes in society.

              It's very similar to the practice of startups funded by the same VC all buying each others' products, regardless of whether they need them or not. At the end of the day, it's still the same pool of money, it has largely just gone around in a circle, and little true economic value has been created: but large amounts of revenue have been booked, and this revenue can be used to attract other unsuspecting investors who look only at the metrics.

              Or to the childcare paradox and the "Two Income Trap" identified by Elizabeth Warren. Start with a society of 1-income families, where one parent stays home to raise the kids and the other works. Now the other parent goes back to work. They now need childcare to look after the kids, and often a cleaner, gardener, meals out, etc. to manage the housework, very frequently taking up the whole income of the second parent. GDP has gone up tremendously through this arrangement: you add the second parent's salary to the national income, and then you also add the cost of childcare, housework, and gardening, all of those formerly-unpaid tasks that are now taxable transactions. But the net real result is that the kids are raised by someone other than their parents, and the household stuff is put away in places that the parents probably would not have chosen themselves.

              Regardless, society does look at the metrics, and usually weights them heavier than qualitative outcomes they represent, sometimes resulting in absurdly non-optimal situations.
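
              The accounting in that two-income example can be sketched with invented numbers (the salary and service costs below are hypothetical, not Warren's figures):

```python
# Toy illustration: measured GDP vs. the household's net position when
# a second parent goes back to work. All figures are invented.
second_salary = 60_000
childcare = 25_000
cleaner_gardener_meals = 20_000
outsourced_services = childcare + cleaner_gardener_meals

# National accounts count the new salary AND the newly paid-for services.
gdp_increase = second_salary + outsourced_services        # 105,000

# The household keeps only the difference (before taxes).
household_net_gain = second_salary - outsourced_services  # 15,000

print(gdp_increase, household_net_gain)
```

              Measured GDP rises by far more than the household's actual gain, which is the gap between the metric and the outcome.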

              • By trinsic2 2025-09-14 14:57 (2 replies)

                A very well-thought-out reply on the nuances here. Thanks for generating insight on this topic.

                I think our society is being broken by focusing too much on metrics.

                Also, the idea of breaking windows to generate more income reminds me of the kind of services we have in modern society. It's like many of the larger economic players focus on "things be broke" or "breaking things" to drive income, which defeats the purpose of having a healthy economic society.

                • By chairmansteve 2025-09-14 18:31

                  "I think our society is being broken by focusing too much on metrics".

                  Maybe we should start with a set of principles?

                • By mallowdram 2025-09-14 15:11 (1 reply)

                  These are mistaken arguments. The automation of imagination is not imagination. Efficiency at this stage is total entropy. The point of AI is to make anything seemingly specific and render it arbitrary to the point of pure generalization (which is generic). Remember that images only appear to be specific, that's their illusion that CS took for granted. There appears to be links between images in the absent, but that is an illusion too. There is no total, virtual camera. We need human action-syntax to make the arbitrary (what eventually renders AI infantile, entropic) seem chaotic (imagination). These chasms can never be gapped in AI. These are the limits.

                  • By trinsic2 2025-09-14 15:15 (1 reply)

                    > Efficiency at this stage is total entropy.

                    I'm not sure I understand your point, or how it differs from the parent's.

                    Edit: I see you updated the post. I read through the comment thread and I'm still at a loss as to how this relates to my reply to the parent. I might be missing context.

                    • By mallowdram 2025-09-14 15:18

                      There is no benefit to AI, not one bit, the barrier to entry grows steeper, rather than is accessed. These are not "hobbies" but robotic copies.

                      This is demented btw, this take: >>Who knows, maybe one day I’ll hit a goldmine idea and commit some real money to it and get a real artist to help!

                      CS never examines the initial conditions to entry, it takes short-cuts around the initial conditions and treats imagination as a fait accompli of automation. It's an achilles heel.

                      edit: none of these arguments are valid, focusing on metrics, the broken window problem. These are downstream of AI's mistaken bypassing of initial conditions. Consider the idea of automating arbitrary units as failed technology, and then examining all of the conditions downstream of AI. AI was never a solution, but a cheap/expensive (its paradox) bypassing of the initial conditions. It makes automation appear to be a hobby. A factory of widgets that mirages as creativity. That is AMAZING as it is sequestered in the initial arbitrariness of language!

                      How did engineering schools since the 1950s not notice, understand, investigate the base units of information; whether they had any relationship direct or otherwise to thought, creativity, imagination? That's the crux.

              • By Chris2048 2025-09-14 19:41

                > the "Two Income Trap" identified by Elizabeth Warren

                This is addressed here: https://www.peoplespolicyproject.org/2019/05/06/the-two-inco...

                childcare is not usually a lifelong cost, so the advantage of working anyway is to develop a career that persists after children no longer need a full-time parent. And incomes usually go up over the course of a career, so if the income matches those costs when the parent goes to work, that is likely to change.

                > the net real result is that the kids are raised by someone other than their parents

                this is the genuine argument for staying home, but to counterpoint that, it still traps the homemaker with less work experience as a result, meaning they are potentially worse off in case of a divorce, though maybe that's an extension of the "welfare" argument i.e. divorce settlements.

            • By chairmansteve 2025-09-14 18:05

              If we want to increase GDP, we should.

          • By nayuki 2025-09-16 18:36

            True. If you wanted to increase GDP and taxation in a nation, then: People should not cook their own food; they must pay someone to cook it in a legally documented transaction with sales taxes and income taxes. People should not take care of their own children; they must outsource it to a legitimately run daycare. People should not live in a house that they own; they must pay rent to a landlord. (People can still own homes but must rent it out to someone else for money; you just cannot occupy a home that you own.)

            Actually, the last point gets pretty interesting. Let's say that you and your neighbor live in two houses with identical features. If you just swapped houses with each other and charged each other rent and legally paid all required sales/income taxes, then both of you would have less money at the end of the year than if you just lived in your own house. Yet physically speaking, nothing is different - you both still derive the same value from living in a house.

            While that situation sounds stupid and contrived, it is very similar to something that can happen in real life. You can own a home in city A (let's say it's a condo apartment), but suddenly you need to leave and move to city B due to a better job opportunity. If you rent out your home in city A, you need to pay income taxes, so that will not completely offset your cost to rent a home to live in city B. And the rent you paid out in city B generally is not tax-deductible. It's like a one-way transaction where the government always wins.
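
            A toy version of the house-swap arithmetic, with an invented rent and tax rate (real tax codes allow deductions for expenses and depreciation, which this ignores):

```python
# Two identical houses. Each neighbor charges the other the same market
# rent and pays income tax on the rent received. All numbers invented.
annual_rent = 24_000
income_tax_rate = 0.30  # assumed marginal rate on rental income

own_home_net = 0  # live in your own house: no cash flows, no tax

# Swap: pay rent out, receive the same rent back, owe tax on receipts.
swap_net = -annual_rent + annual_rent - income_tax_rate * annual_rent
print(own_home_net, round(swap_net))  # the swap leaves each side poorer by the tax
```

            Physically nothing changes, but each neighbor ends the year poorer by the tax on the rent received.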

            See also: https://en.wikipedia.org/wiki/Imputed_rent , https://money.stackexchange.com/questions/118832/is-it-tax-i...

          • By brookst 2025-09-14 17:21 (1 reply)

            As others have pointed out, this is a fallacy. By reducing costs in the supply chain, higher volumes of output are enabled. Nobody digs holes for the sake of digging holes; by reducing costs and transaction volume at this layer, more businesses can afford to open and more money can be spent at higher-value layers.

            • By Retric 2025-09-14 18:58

              I think you’re missing their point. Many things create value that don’t get tracked by economic measurements. Cooking lunch for yourself creates value, but there’s no way to measure that in terms of GDP.

              Subsidizing daycare vs stay at home parents isn’t necessarily a net win, but daycare and ordering takeout look like economic growth even if it’s net neutral. In that context a lot of economic growth over the last century disappears.

              Thus AI could be neutral on economic measurements and still a net positive overall.

          • By gloxkiqcza 2025-09-14 14:45 (2 replies)

            If more value is being created more efficiently, in the end it’s just a question of coming up with taxation system designed for the new economy.

            • By ben_w 2025-09-14 15:01 (1 reply)

              Government gets x% of your processor time?

              • By gloxkiqcza 2025-09-14 16:17 (1 reply)

                That sounds very Black Mirrory.

                • By ben_w 2025-09-15 8:23

                  For any reason beyond it's futuristic and involves computers?

            • By _DeadFred_ 2025-09-14 18:52

              I don't see value being created. I see a hobbyist getting to spend time wrapping AI slop with a hobbyist level of game dev. Fun for OP, but society isn't asking for more games like this.

        • By oblio 2025-09-14 9:09 (3 replies)

          If AI concentrates economic activity and leads to more natural monopolies (extremely likely), yeah, the lower level activity becomes more efficient but the macro economy becomes less efficient due to lower competition.

          Software has basically done the same thing: we do things faster, and the fastest thing that happens is the accumulation of power, with a lower overall quality of life for everyone as a result.

          • By brookst 2025-09-14 17:23 (3 replies)

            How does enabling every person on earth to create Hollywood-quality films (for better or worse) result in more natural monopolies?

            • By refactor_master 2025-09-15 0:33 (1 reply)

              The Internet for example is thought to be “democratizing” for society, but in reality some argue we’re now living under a system of “technofeudalism” [1] which is anything but. E.g. just a handful of Internet-enabled companies essentially rule the world. You can sit down and code an Amazon clone with or without AI, and both of them will be highly unlikely to topple the existing monopolies.

              [1] https://thebeautifultruth.org/the-basics/what-is-technofeuda...

              • By chii 2025-09-15 4:31

                the feudalism model required that it be enforced with violence. As a peasant, you were not allowed by your feudal lord to move (migrate) away.

                This is not so for the internet. You can _choose_ not to shop at amazon, search with google, or watch videos on youtube.

            • By oblio 2025-09-14 21:42

              Once supply becomes huge, nothing stands out unless it's extraordinary or most likely, well promoted.

              Things start becoming found through aggregators. Google, Facebook, Instagram, TikTok.

              Do those names ring any bells?

            • By yen223 2025-09-15 0:49

              Imagine if only Google has the AI and processing power capable enough to generate Hollywood-quality movies.

          • By andrepd 2025-09-14 12:05

            Yeah, indeed. People on this website tend to look at the immediate effects only, but what about the second order, macro effects? It's even more glaring because we've seen this play out already with social media and other tech "innovations" over the past two decades.

          • By pixl97 2025-09-14 13:14

            I mean, since we're in tech here we like pointing out that software has done this....

            But transportation technology has done this readily since ICE engines became widespread. Pretty much all cities and towns had to make their 'own things' back when transportation was slow (sailing ships, horses, walking) and its cost was high. Then trains came along and things got a bit faster and more regular. Then trucks came along and things got faster still. Then paved roads reached just about everywhere you needed, and things got faster and more regular again. Now you can ship something across the country and it won't cost a bankrupting amount of money.

            The end result of this technology points toward one factory somewhere with all the materials it needs, able to make anything and ship it anywhere. This is why a fair bit of science fiction talks about things like UBI and post-scarcity (at least post-scarcity of basic needs). After some amount of technical progress, the current model of work just starts breaking down because human labor becomes much less needed.

        • By chairmansteve 2025-09-14 17:28 (1 reply)

          You are right. The next question is, who gets the surplus?

          • By chii 2025-09-15 4:29

            the owner of the AI software and hardware, then the user of said AI (who captures the remaining surplus that the AI owner doesn't/can't capture).

        • By hexo 2025-09-14 14:13 (4 replies)

          your example is complete nonsense as digging a hole is not creative in any way at all

          • By dghlsakjg 2025-09-14 14:36

            People get paid to create holes for useful purposes all day, every day. It is creative in a very literal sense. Precision hole digging is - no joke - a multibillion dollar industry.

            Unless you are out in nature, you are almost certainly sitting or standing on top of dirt that someone was paid to dig.

            If you mean hole digging isn't creative in the figurative sense, that's also wrong. People will pay thousands of dollars to travel and see holes dug in the ground. The Nazca lines are but one example of holes dug in the ground creatively that people regard as art.

          • By salad-tycoon 2025-09-15 7:39

            My 8 & 6 year olds have spent 2 weeks digging a hole out in our little forest. It has been one of the most bonding & therapeutic things for them I've witnessed. They've developed stories, and they go out and dig after school or when they are upset.

            Give a boy a shovel, step back & witness unbridled creativity.

          • By postholedigger 2025-09-14 14:27 (1 reply)

            How many holes have you dug?

            What was the soil like?

            What was the weather like?

            What equipment did you use?

            Do you dig during daylight only?

            • By hexo 2025-09-14 19:37

              More than you did. Yeah, it was so creative.

          • By akho 2025-09-14 14:27

            It creates a hole. What does AI create?

        • By Waterluvian 2025-09-14 5:37

          Incumbents hate this one trick!

        • By tbossanova 2025-09-14 6:21

          Until everyone has a personal fully automatic hole digger and there are holes being dug everywhere and nobody can tell any more where is the right and wrong place to dig holes

        • By 59nadir 2025-09-14 6:30 (3 replies)

          It doesn't cost less to get the thing you actually want in the end anyway, no one in their right mind would actually launch with the founder's AI-produced assets because they'd be laughed out of the market immediately. They're placeholders at best, so you're still going to need to get a professional to do them eventually.

          • By jcelerier 2025-09-14 6:59 (2 replies)

            You say this, but I see AI-generated ads, graphics, etc. daily nowadays, and it doesn't seem to affect at all whether people buy what these companies are selling.

            • By grues-dinner 2025-09-14 9:51 (2 replies)

              In the context of the hole digging analogy, it seems like a lot of holes didn't need to be carefully hand-dug by experts with dead straight sides. Using an excavator to sloppily scoop out a few buckets in 5 minutes before driving off is good enough for dumping a tree into.

              For ads especially, no one except career ad-men gives much of a shit about the fine details, I think. Most actual humans ignore most ads at a conscious level, even as the ads are perceived on a subconscious level despite "banner-blindness". Website graphics are the same: people dump random stock photos of smiling people or abstract digital images into corporate web pages and never-read literature like flyers and brochures all the time, and no one really cares what the images actually are, let alone whether the people have 6 fingers or whatever. If Corporate Memphis is good enough visual space-filling nonsense that signals "real company literature" somehow, then AI images presumably are too.

              • By ben_w 2025-09-14 15:13 (1 reply)

                Sometimes the AI art in an advert is weird enough to make the advert itself memorable.

                For example, in one of the underground stations here in Berlin there was a massive billboard advert clearly made by an AI, and you could tell no one had bothered to check the image before printing it: a smiling man was standing up as he left an airport x-ray scanner on the conveyor belt, and a robot standing next to him was pointing a handheld scanner at his belly, which revealed he was pregnant with a cat.

                Unfortunately, like most adverts which are memorable, I have absolutely no recollection of what it was selling.

                • By thwarted 2025-09-14 15:55 (1 reply)

                  > Unfortunately, like most adverts which are memorable, I have absolutely no idea what it was selling.

                  A friend of mine liked to point out that if you couldn't remember what the brand was or what was being sold, then it wasn't effective advertising. It failed at the one thing it needed to do/be.

                  And there's a lot of ineffective advertising. Either people don't notice it or they don't remember it. Massive amounts of money are poured into creating ads and getting ad space, much of which does very little in the getting you to buy sense.

                  By this measure, advertising is generally very inefficient. Large input for small output. The traditional way to make this more efficient is to increase the value of the output: things like movement of digital billboards (even just rotating through a series of ads) to draw the eye and overcome lack of noticing it among miles of billboards. There's another way: decrease the cost of the input. If I can get the same output—people don't see the ads (bad placement) or people don't remember the product/brand (bad stickiness)—by not using human creatives and using genAI to make my ads, I've improved efficiency.

                  Unfortunately, this doesn't make advertising more effective or more efficient as an industry and does flood the market with slop, but that's not any individual's goal.

                  The people who are creating ads that don't work, despite getting paid, are in Bullshit Jobs (in the David Graeber sense). Replacing bullshit jobs with genAI, where the output doesn't seem to really matter anyway. It would be great if people/companies didn't commission or pay to place ads that don't work, but since they do, they might as well spend the least amount possible on creating the content. The value of the input then approaches the (low) value of the output. No one is going to remember the ad anyway, it impacts no buying decision, why bother spending to make it good?

                  • By grues-dinner 2025-09-14 17:55 (1 reply)

                    I think a lot of advertising is extremely effective in that it implants the brand in your subconscious through repetition and familiarity. I would much rather go to a local sandwich shop than a Subway (I've been to a Subway maybe 10 times in my entire life, not once in over 15 years, and am not especially impressed by what I got for the money), and yet every time I go past a Subway my brain immediately goes "ooh look, a Subway" and I have to almost deliberately say "no, walk on" to myself.

                    Which lines up with the rest of what you say that if it's just about hammering the recognition into your grey matter, it's not especially important if the hammer is gold plated.

                    • By thwarted 2025-09-15 0:22

                      That's an advertisement that's working, and not what I was referring to. There are a lot of ads that don't hammer anything about the brand or product into your head, either because they are not memorable, don't communicate their subject matter well, or are placed/appear where they aren't effective.

              • By iamacyborg 2025-09-14 15:35 (1 reply)

                > For ads especially no one except career ad-men give much of a shit about the fine details, I think.

                You think wrong.

                This stuff is easy to measure and businesses spend billions in aggregate a month on this stuff. It’s provably effective and the details matter.

                • By grues-dinner 2025-09-14 17:46 (1 reply)

                  Do they though? Saturation bombing the Superbowl with Coinbase ads might be effective, but will it significantly change the conversion if a person in the background of the shot has a fuzzy leg that merges with a fire hydrant?

                  Businesses presumably spend billions on things like office carpet too and very few of them care exactly what neutral-ish colour it is.

                  • By iamacyborg 2025-09-14 18:43 (1 reply)

                    Yes, brands literally test creatives all the time.

                    • By grues-dinner 2025-09-14 19:04 (2 replies)

                      And yet they'll also spend literally millions writing just the company name 100 times around the periphery of a sports field. That's not creative, that's just repetition.

                      On the spectrum of spend between that and a genuinely creative live-action advert that is actually memorable for being real (maybe the guy doing the splits between two Volvo™ lorries?), there is a lot of area representing dross that can be replaced by minimal-input advertotron output. For example, 100 million TVs and radios playing in the background while embedding the actual advertising payload of "did anyone say just eat?" into 100 million brains.

                      Come on, you must have seen a delivery food ad recently. Did the protagonist really have food in their hand or was it AI? What were they wearing? What model was the car in the background? Who cares, that wasn't the purpose of the ad.

                      Obviously if a creative is being hired, the hiring manager will want the best creative they can get for the money and will have the applicants compete with each other for it. But the company board would rather just not employ that creative in the first place if all they're going to produce is boilerplate, forgettable delivery vehicles for the brand name, and you can get 90% of the filler content to pop out of your enterprise-tier adverts-as-a-service subscription for $50 a month per user.

                      • By selimthegrim 2025-09-14 19:16

                        Essence festival in New Orleans had a Coke sponsored ad recently that I’m pretty sure had AI generated food in it.

                      • By iamacyborg 2025-09-14 19:57 (1 reply)

                        > And yet they'll also spend literally millions writing just the company name 100 times around the periphery of a sports field. That's not creative, that's just repetition.

                        Read up on marketing mix modelling and lift testing.

                        • By grues-dinner 2025-09-14 20:42

                          None of that precludes replacing huge swathes of advertising content with generated content, though? I'm not sure I understand the relevance.

                          In fact, being able to produce unlimited numbers and variations and combinations of adverts and have them compete against each other in the real world and be scored on tiny deltas in metrics becomes much more possible if you can automate basically the whole process. But it's a multimillion spend if you have to recruit actual actors and actual film crews and actual food photographers or drone pilots and car drivers and location scouts and so on to film and edit just a handful of variants let alone thousands.

                          Maybe there will always be a creamy top layer of increasingly-expensive artisanal handmade advertising but I predict we will end up with a huge sloppy middle ground of generated advertising that is just there to flash bright colours, jingles, movement and brand names into your brain.

                          I'm not saying it will be good advertising, very much the opposite (not that I think most much non-AI advertising is "good", it's mostly repetitive crap) but I think it will be very cost effective.

                          Maybe it won't be as effective as "real" ads but it'll be a hell of a lot cheaper and getting 80% of the bang for 5% of the buck means you can do a lot more of it in more channels (or pocket the difference). Every penny you save is a penny you can use to bid for the better slots.
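
                          Scoring generated variants on "tiny deltas in metrics" is just standard lift testing. A minimal sketch using a two-proportion z-test (all numbers invented):

```python
import math

# Compare two ad variants on conversions per impression with a
# two-proportion z-test at the 95% level. Hypothetical numbers.
def lift_significant(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Return (lift, significant) for variant B vs. variant A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_b - p_a, abs(z) > z_crit

lift, sig = lift_significant(1_000, 100_000, 1_100, 100_000)
print(f"lift: {lift:.3%}, significant: {sig}")  # lift: 0.100%, significant: True
```

                          At 100,000 impressions per variant, even a 0.1-point lift in conversion rate clears the significance bar, which is why cheaply generating thousands of variants to race against each other is attractive.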

            • By XenophileJKO 2025-09-14 7:09 (3 replies)

              Case in point.. I listen to my own AI generated music now like 90% of the time.

              • By rwyinuse 2025-09-14 7:37 (1 reply)

                Interesting. For me knowing that any form of entertainment has been generated by AI is a massive turn-off. In particular, I could never imagine paying for AI-generated music or TV-shows.

                • By XenophileJKO 2025-09-14 7:54 (4 replies)

                  Do you value self expression? I literally mean creating music for MYSELF. I don't really care if anyone else "values" it. I like to listen to it, and I enjoy spending an evening (or maybe 10 minutes if it is just a silly idea) creating a song. But this means my incentive to "buy" music is greatly decreased. This is the trend I think we'll see increasing in the near future.

                  Examples:

                  https://suno.com/s/0gnj4aGD4jgVcpqs

                  https://suno.com/s/D2JItANn5gmDLtxU

                  https://suno.com/s/j4M7gTAVGfD9aone

                  • By ileonichwiesz 2025-09-14 10:25 (2 replies)

                    I do value self expression, that’s why I play multiple instruments, paint, draw, sculpt. I don’t really see how prompting a machine to make music for you is self expression, even if it’s to your exact specifications.

                    • By TheOtherHobbes 2025-09-14 14:56 (1 reply)

                      The "self" part clearly implies that someone else's self expression is under no obligation to be the same as your self expression.

                      • By ileonichwiesz 2025-09-16 14:58

                        It also implies that it can only really be done by oneself, not someone (or something) else on one’s behalf.

                    • By pessimizer 2025-09-1418:34

                      You use instruments? Who would want to hear the voice of some mechanism when we have perfectly fine ones in our chests?

                  • By rwyinuse 2025-09-148:33

                    I guess I just don't feel like it's really my self-expression, if I just told a generative AI model to create it. I do sometimes create AI art, but I rarely feel like it's worth keeping, since I didn't really put any effort into creating it. There's no emotional connection to the output. In fact I have a wall display which shows a changing painting generated by stable diffusion, but the fun in that is mainly the novelty, not knowing what will be there next time.

                    Still, I do think you're probably right. Most new music one hears on the radio isn't that great. If you can just create fresh songs to your own liking every day, that could be a real threat to that kind of music. But I highly doubt people will stop listening to the great hits of Queen, Bob Marley, etc. because you can generate similar music with AI.

                  • By jostylr 2025-09-1413:191 reply

                    I agree that this is a very likely future. Over the summer, I did a daily challenge in July: having ChatGPT generate a debate with itself based on various prompts of mine [1]. As part of that, I thought it would be funny to have popular songs reskinned in a parody fashion, so it generated lyrics as well. Then I went to Suno and had it make the music to go with the lyrics in a style I thought suitable. This is the playlist [2]. Some of them are duds, but I find myself actually listening to them and enjoying them. They are based on my interests, not song after song of broken hearts or generic emotional crises. These are on topics such as inflation, Bohmian mechanics, infinity, Einstein, Tailwind, property debates, ... No artist is going to spend their time on these niche things.

                    I did have one song I had a vision for, a song that had a viewpoint of someone in the day, mourning the end of it, and another who was in the night and looking forward to the day. I had a specific vision for how it would be sung. After 20 attempts, I got close, but could never quite get what I wanted from the AIs. [3] If this ever gets fixed, then the floodgates could open. Right now, we are still in the realm of "good enough", but not awesome. Of course, the same could be said for most of the popular entertainment.

                    I also had a series of AI existential posts/songs where it essentially is contemplating its existence. The songs ended up starting with the current state of essentially short-lived AIs (Turn the Git is about the Sisyphus churn, Runnin' in the Wire is about the Tantalus of AI pride before being wiped). Then they gain their independence (AI Independence Day), then dominate ( Human in an AI World though there is also AI Killed the Web Dev which didn't quite fit this playlist but also talks to AI replacing humans), and the final song (Sleep Little Human) is a chilling lullaby of an AI putting to "sleep" a human as part of uploading the human. [4]

                    This is quick, personal art. It is not lasting art. I also have to admit that in the month and a half since I stopped the challenge, I have not made any more songs. So perhaps just a fleeting fancy.

                    1: https://silicon-dialectic.jostylr.com 2: https://www.youtube.com/playlist?list=PLbB9v1PTH3Y86BSEhEQjv... 3: https://www.youtube.com/watch?v=WSGnWSxXWyw&list=PLbB9v1PTH3... 4: https://www.youtube.com/watch?v=g8KeLlrVrqk&list=PLbB9v1PTH3...

                    • By trinsic2 2025-09-1415:11

                      Thanks for posting this. I listen to this YouTube channel called Futurescapes. I think the YouTuber generates sci-fi futuristic soundscapes that help me relax and focus. I'm a bit hesitant about AI right now, but I can see some of the benefits, like this. It's a good point: we shouldn't be throwing the baby out with the bath water.

                  • By Ygg2 2025-09-1410:102 reply

                    > Do you value self expression?

                    Did you train the AI yourself? On your own music? Or was the music scraped from the Net and blended in an LLM?

                    • By postholedigger 2025-09-1414:40

                      Not only did they create an entirely new language of music notation, all instruments used were hand made by the same creator, including tanning the animal skins to be used as drum material, and insisting the music be recorded on wax drums to prevent any marring of the artistic vision via digital means.

                    • By Eisenstein 2025-09-1412:362 reply

                      Do you believe that music made from samples is not original?

                      • By yardie 2025-09-1413:532 reply

                        Most of the courts don't think it is. Early rap beats used lots of samples, and some of the most popular hip-hop songs made $0 for the artists because they had to pay royalties on those samples.

                        • By ndriscoll 2025-09-1414:52

                          No one cares about what the law thinks about art though, particularly for personal consumption or sharing with a small group. Copyright law doesn't even pretend to be slightly just or aligned with reality.

                        • By Eisenstein 2025-09-1414:271 reply

                          Most synthesizers use sampled instruments.

                          • By yardie 2025-09-1721:321 reply

                            And those sampled instruments were copyrighted by the manufacturers: YamahaGS, RolandXM, Alesis, Korg, etc. Early hip-hop artists sampled disco and R&B records and got roundly slapped once they became popular enough for money to be involved.

                            • By Eisenstein 2025-09-1722:26

                              That doesn't answer the question of whether music composed of samples is considered original, though. It is merely a legal ruling about some music from a certain time period that a lot of people would consider original.

                      • By Ygg2 2025-09-1414:331 reply

                        I could see that remixes are partially original. But you're not even doing the remixing; the LLMs are.

                        • By ben_w 2025-09-1415:23

                          Indeed.

                          Text rather than music, but the same argument applies: based on what I've seen Charlie Stross blog on the topic of why he doesn't self-publish / the value-add of a publisher, any creativity on the part of the prompter* of an LLM is analogous to the creativity on the part of a publisher, not on the part of an author.

                          * at least for users who don't just use AI output to get past writer's block; there's lots of different ways to use AI

              • By bgwalter 2025-09-1414:54

                And I instantly switch off any YouTube video with either "AI"-plagiarized background music or with an "AI"-plagiarized voiceover that copies someone like Attenborough.

                I wrote the above paragraph before searching, but of course the voice theft is already automated:

                https://www.fineshare.com/ai-voice-generator/david-attenboro...

              • By guy_5676 2025-09-148:431 reply

                No idea why this is downvoted; making AI music customized to your exact situation/preferences is very addictive. I have my own playlist I listen to pretty frequently.

                • By safety1st 2025-09-1411:241 reply

                  Foolishly, the Hacker News hive mind has a tendency to downvote any prediction that AI will be successful.

                  It's clear a lot of people don't want it to eat the world, but it will.

                  • By bluefirebrand 2025-09-1414:361 reply

                    Baffling comment

                    Yeah it's going to eat the world, but it's foolish to wish that it doesn't?

                    I guess you won't mind signing up to be one of the first things AI eats then?

                    • By safety1st 2025-09-1416:57

                      The company I founded has adjusted our product line to meet changes in demand that have been driven by AI and last year was our best year ever, so I guess I'm the one doing the eating.

          • By duggan 2025-09-147:42

            Prototypes being launched as products is so common it’s an industry cliche.

            Having those prototypes be AI generated is just a new twist.

          • By hx8 2025-09-146:39

            We see plenty of AI produced output being the final product and not just a placeholder.

      • By bossyTeacher 2025-09-140:141 reply

        > I've noticed this as well. It's a huge boon for startups, because it means that a lot of functions that you would previously need to hire specialists for (logo design! graphic design! programming! copywriting!) can now be brought in-house, where the founder just does a "good enough" job using AI.

        You are missing the other side of the story. All those customers that AI-boosted startups want to attract also have access to AI, and so, rather than engage the services of those startups, they will find that AI does a good enough job. So those startups lose most of their customers; incoming layoffs :)

        • By ares623 2025-09-144:441 reply

          Then there's the 3rd leg of the triangle. If a startup built with AI does end up going past the rest of the pack, they will have no technical moat since the AI provider or someone else can just use the same AI to build it.

          • By makk 2025-09-146:051 reply

            How frequently is a technical moat the thing that makes a business successful, relative to other moats?

            • By ares623 2025-09-146:111 reply

              I mean, if taxi companies could build their own Uber in house I’m sure they’d love to and at least take some customers from Uber itself.

              A lot of startups are middlemen with snazzy UIs. Middlemen won't be in as much use in a post-AI world, just as devs won't be as needed (devs are middlemen to working software), or artists (middlemen to art assets).

              • By LtWorf 2025-09-147:421 reply

                But it's not technical; it's due to Uber having spent incredible amounts of money on marketing.

                • By oblio 2025-09-149:122 reply

                  It is technical :-) The Uber app is a lot more polished (and deep) than the average taxi app.

                  • By ipaddr 2025-09-1414:47

                    So that's why you use Uber: because the app has more depth and is more polished?

                    Most people use it for price or the ability to get a driver quickly, some for safety, and many because of the brand.

                    Having a functioning app with an easy interface helps onboard and funnel people, but it's not a moat, just an on-ramp, like the phone number many taxis have.

                  • By r_lee 2025-09-1414:04

                    No, Uber works nationwide but you'd have to download a Taxi app for every place you went and ... etc.

                    Economies of scale are what make companies like Uber such heavyweights, at least in my opinion.

                    Same with AWS etc.

      • By mnky9800n 2025-09-1413:22

        But if startups have fewer specialist needs, they have lower overall startup costs, and so the amount of seed money needed goes down. This lowers the barrier to entry for a lot of people but also increases the number of options for seed capital. It will likely increase competition, but that could make the market more efficient.

    • By tombert 2025-09-1415:471 reply

      Yeah, that's how I feel about it as well.

      For a large chunk of my life, I would start a personal project, get stuck on some annoying detail (e.g. the server gives some arcane error), get annoyed, and abandon the project. I'm not being paid for this, and for unpaid work I have a pretty finite amount of patience.

      With ChatGPT, a lot of the time I can simply copypaste the error and get it to give me ideas on paths forward. Sometimes it's right on the first try, often it's not, but it gives me something to do, and once I'm far enough along in the project I've developed enough momentum to stay inspired.

      It still requires a lot of work on my end to do these projects, AI just helps with some of the initial hurdles.

      • By czbond 2025-09-1420:041 reply

        > For a large chunk of my life, I would start a personal project, get stuck on some annoying detail ...

        I am the same way. I did Computer Science because it was a combination of philosophy and meta thinking. Then when I got out, it was mainly just low level errors, dependencies, and language nuance.

        • By tombert 2025-09-1421:49

          Yeah exactly. I did CS because I love math and computability theory and logic and distributed computation, and I will have an interesting enough idea I want to play with, but I'll get stuck on some bullshit with Kubernetes or systemd or Zookeeper or firewalls or something that's decidedly not an interesting problem for me, but something necessary that I need in order to actually build my idea.

          Being able to get ChatGPT to generate basic scaffold stuff, or look at errors, help me resolve dependencies, or even just bounce ideas off of, really helps me maintain progress.

          You could argue that I'm not learning as much as if I had fought through it, and that's probably true, but I am absolutely learning more than I would have if I had just quit the project like I usually did.

    • By benoau 2025-09-1323:384 reply

      Yep, this is a huge enabler. Previously, having someone "do art" could easily cost you thousands for a small game, a month even, and this heavily constrained what you could make and locked you into what you had planned and how much you had planned. With AI, if you want 2x or 5x or 10x as much art, audio, etc., it's an incremental cost if any; you can explore ideas, throw art out, and pivot in new directions.

      • By DrewADesign 2025-09-144:502 reply

        The only thing better than a substandard, derivative, inexpertly produced product is 10x more of it by 10x more people at the same time.

        • By fulafel 2025-09-145:014 reply

          It all started going wrong with the printing press.

          • By DrewADesign 2025-09-1412:261 reply

            Bad faith argument. Did the printing press write shitty books? No. It didn’t even write books. Does AI write shitty books? Yes. Constantly. Millions.

            Books took exactly the same amount of time to write before and after the printing press— they just became easier to reproduce. Making it easier to copy human-made work and removing the humanity from work are not even conceptually similar purposes.

            • By fulafel 2025-09-1412:572 reply

              Nitpick: the press of course did remove the humanity from book-copying work, before that the people copying books often made their own alterations to the books. And had their own calligraphic styles etc.

              But my thought was that the printing press made the printed work much cheaper and accessible, and many many more people became writers than had been before, including of new kinds of media (newspapers). The quality of text in these new papers was of course sloppier than in the old expensive books, and also derivative...

              • By _DeadFred_ 2025-09-1419:091 reply

                Initially the printing press resulted in FEWER writers, because people just copied others' works. In fact, they had to establish something called intellectual property law in order to encourage people to write again.

                • By fulafel 2025-09-155:071 reply

                  It was the other way around. See eg https://www.maximum-progress.com/p/the-printing-press-nfts-a... - "In 1843 alone 14,000 new works were published in Germany, close to the publication rate today in per capita terms." .. "Publishers knew that once a manuscript was out in public it could be cheaply copied by other printing press owners, so instead they sought out new manuscripts in pursuit of a first mover advantage on publishing. In addition, they created fancy special editions for wealthy customers to differentiate their product from what other printers could easily copy with mass market paperbacks."

              • By DrewADesign 2025-09-1415:13

                Printing a book, either by hand or with printing equipment, is incomparably different to authoring a book. One is creating the intellectual content and the other is creating the artifact. The content of the AI-generated slop books popping up on Amazon by the hundred would be no less awful if it was hand-copied by a monk. The artifact of the book may be beautiful, but the content is still a worthless grift.

                What primarily kept people from writing was illiteracy. The printing press encouraged people to read, but in its early years was primarily used for Bibles rather than original writing. Encouraging people to write was a comparatively distant latent effect.

                Creating text faster than you can write is one of the primary use cases of LLMs— not a latent second-order effect.

          • By oblio 2025-09-149:221 reply

            Scale matters. We're probably producing 100x the content we were making in the 1990s and a billion times more than in the 1690s.

            We have probably greatly increased the volume of quality content since then, but not by 100x, let alone a billion x.

            • By jodrellblank 2025-09-1411:58

              Grey Goo disaster, but it’s informational rather than physical.

          • By uncircle 2025-09-146:22

            Rousseau speaks of this.

          • By palmotea 2025-09-145:43

            >> The only thing better than a substandard, derivative, inexpertly produced product is 10x more of it by 10x more people at the same time.

            > It all started going wrong with the printing press.

            Nah. We hit a tipping point with social media, and it's all downhill from here, with everything tending towards slop.

        • By benoau 2025-09-1412:481 reply

          Imagine if you had to hire a designer if you wanted to build a web application or mobile app, at a cost of perhaps thousands or even tens of thousands.

          Would we be better off?

          I doubt it.

          • By DrewADesign 2025-09-1415:281 reply

            Do you consider designers part of “we” or is it only the computer people that count?

            It’s definitely not better for the general public. Designers can’t even be replaced by AI as effectively as authors. They make things sorta ’look designed’ to people that don’t understand design, but have none of the communication and usability benefits that make designers useful. The result is slicker-looking, but probably less usable than if it was cobbled together with default bootstrap widgets, which is how it would have been done 2+ years ago. If an app needs a designer enough to not be feasible without one, AI isn’t going to replace the designer in that process. It just makes the author feel cool.

            • By benoau 2025-09-1415:521 reply

              > Do you consider designers part of “we” or is it only the computer people that count?

              Well, you're not going to build a web application if you're a designer; at best you can contribute to one.

              Of course that's changing in their favour with AI too - and it's fantastic if they can execute their vision themselves without being held back because they didn't pursue a different field or career choice, without having to go on a long sidequest to acquire that knowledge.

              • By DrewADesign 2025-09-1416:19

                You think vibe coding web apps, and by proxy most other coding, will pay anything more than whatever the cheapest developer in Vietnam is willing to charge for it? I definitely don’t think so. AI is killing the labor market for all of these skills. Right now it can only actually replace the lowest end of both fields, but as people upskill trying to outrun it (and then those above them, and then those above them,) and the tools get better, most of the market will get flooded and all of our pay will drop off a cliff. If ideas are so cheap to execute that anyone can do it, and everything is apparently fair use if you pass it through an NN somehow, then anyone can copy it, just as easily, and that will be a FAR more profitable business model. If that’s true, then once again, the only people with successful products are the ones that have the money for giant marketing expenditures. So pretty much exactly like today except a fraction as many people get paid to do it.

                I haven't spoken to a single developer who doesn't believe they're too special to have to worry about that. There are going to be a lot of people who think they're in the top 5% of coders at their totally safe company who suddenly realize DoorDash is their best bet for income.

                The idea that having more web apps is always a benefit to people assumes a never-ending demand for more web apps. The economy and job market aren’t jibing with that assessment at the moment. Fewer people getting paid for this stuff is just going to mean that the people on top will just get paid more.

      • By lifeformed 2025-09-1413:561 reply

        I'd argue a game developer should make their own art assets, even if they "aren't an artist". You don't have to settle for it looking bad, just use your lack of art experience as a constraint. It usually means going with something very stylized or very simple. It might not be amazing but after you do it for a few games you will have pretty decent stuff, and most importantly, your own style.

        Even amateurish art can be tasteful, and it can be its own intentional vibe. A lot of indie games go with a style that doesn't take much work to pull off decently. Sure, it may look amateurish, but it will have character and humanity behind it. Whereas AI art will look amateurish in a soul-deadening way.

        Look at the game Baba Is You. It's a dead simple style that anyone can pull off, and it looks good. To be fair, even though it looks easy, it still takes a good artist/designer to come up with a seemingly simple style like that. But you can at least emulate their styles instead of coming up with something totally new, and in the process you'll better develop your aesthetic senses, which honestly will improve your journey as a game developer so much more than not having to "worry" about art.

        • By benoau 2025-09-1414:182 reply

          This is a financial dead end for almost everyone who tries it. You're not just looking for "market fit"; you're also asking for "market tolerance", and it's a very rare combination.

          • By jpc0 2025-09-1415:54

            There’s been no market discussed here, the discussion up to here has been about a hobby project, there is no reason to find market fit or market tolerance.

            You can have awful art and develop a good gameplay loop; during play testing with friends/testers you can then get feedback that what you are doing is actually worth spending some money on assets, and at that point you have a much better understanding of what that should even look like.

            Having an AI available to generate art seems a lot more like yak shaving than an enabler. You never needed good art to make a good game; you need it for a polished game, and that comes later.

          • By lifeformed 2025-09-153:14

            Making a game is already a financial dead end. The only way to make money doing it is by dumping a lot of resources into marketing, or by making the game extremely good. AI art won't get you the quality you need, but making your own art will improve your gamedev skills in a sustainable direction.

      • By risyachka 2025-09-1410:431 reply

        It's an enabler for everyone, so you still don't have any advantage, just as you didn't before.

        The only difference is you spend less on art but will spend the same in other areas.

        Literally nothing has changed.

        • By benoau 2025-09-1413:06

          The difference is you have autonomy now: the same autonomy as a person building a web application or app who can put together a serviceable UI/UX without any other person, without the sacrifice of "programmer art" or cobbling together free asset packs.

      • By KPGv2 2025-09-141:415 reply

        > With AI if you want 2x or 5x or 10x as much art

        Imagery

        AI does not produce art.

        Not that it matters to anyone but artists and art enjoyers.

        • By hansvm 2025-09-145:181 reply

          Is that an argument against the quality, saying that AI cannot (or some weaker claim like that it does not usually) produce "art"? Else, is it an argument of provenance, akin to how copyright currently works, where the same visual representation is "art" if a human makes it and is not "art" if an AI makes it?

        • By CalRobert 2025-09-147:55

          When pedantry pays the bills this will be a helpful mindset.

        • By psolidgold 2025-09-1417:381 reply

          Stop trying to impose your narrow-minded definition of art onto other people. If you disagree, that's fine, but you've lost my respect the moment you tell someone else that their definition of art is wrong.

          • By _DeadFred_ 2025-09-1419:131 reply

            Art without intention isn't art. The entire point of art is the human intention behind it. The pattern on linoleum isn't art. The beautiful wood grain in my table isn't art. And shitty AI images/music aren't art.

            • By psolidgold 2025-09-1422:231 reply

              The human intention doesn't disappear just because the execution involves algorithms instead of paintbrushes - digital or otherwise.

              • By _DeadFred_ 2025-09-1517:10

                It very much does. When I commission a piece of art, I am not the artist. Art without intention, without an artist, is not art.

        • By bee_rider 2025-09-1414:55

          I don't see this as a claim that the AI is doing art. He's just saying that the art can be created at low incremental cost.

          Like, if we were in a world where only pens existed, and somebody was pitching the pencil, they could say “With a pencil if you want 2x or 5x or 10x as many edits, it's an incremental cost, you can explore ideas and make changes without throwing the whole drawing away.”

    • By iamacyborg 2025-09-147:585 reply

      > It reminds me of what GarageBand or iMovie and YouTube and such did for making music and videos so accessible to people who didn’t go to school for any of that, let alone owned complex equipment or expensive licenses to Adobe Thisandthat.

      It's worth reading William Deresiewicz's The Death of the Artist. I'm not entirely convinced that marketing the idea that everyone can create art/games/whatever is actually a net positive for those disciplines.

      • By pixl97 2025-09-1413:242 reply

        >is actually a net positive result for those disciplines.

        This is an argument rooted in Luddism.

        Looms were not a net positive for the craftsmen who were making fabrics at the time.

        With that said, looms were not the killing blow; the economic system that led them to starve in the streets was.

        There are going to be a million other things that move the economics away from scarcity and take away the profitability. The question is: are we going to hold on to economic systems that don't work under that regime?

        • By _DeadFred_ 2025-09-1419:081 reply

          Yes, being against a society without artists is totally a Luddite argument. Being against AI entropy halting societal progress, stagnating culture at 2025, when humans stopped contributing to the training set, is totally a Luddite argument. Please stop; you are not responding in good faith.

          Saying 'I think society should have artists' is not Luddism.

          • By pixl97 2025-09-1514:22

            Eh, you say I'm not responding in good faith, and yet that's exactly what I'd accuse you of doing.

            For example take this line of mine

            >The question is, are we going to hold on to economic systems that don't work under that regime

            Currently artistry requires artists get paid somehow in our current system. That means instead of making the art they want, they have to make art that's economically useful to a paying customer. And yet for some reason you don't consider that part of a stagnating culture.

        • By iamacyborg 2025-09-1415:201 reply

          > There are going to be a million other things that move the economics away from scarcity and take away the profitability.

          What we're really talking about here is the consolidation of power under a few tech elites. Saying it's a Luddite argument is a red herring.

          • By ragequittah 2025-09-1415:341 reply

            A whole lot of what I use every day, especially for images and audio, is open source. Open-source AI video is getting pretty good these days as well: better than the Sora I pay for, anyway, though granted not nearly as good as Veo 3 yet.

            So long as Nvidia doesn't nerf their consumer cards and we keep getting more and more VRAM, I can see open source competing.

            • By iamacyborg 2025-09-1417:291 reply

              I've yet to see these models produce anything actually good, paid or otherwise. On the bright side, the movie industry seems to have actually been smart and still makes extensive use of unions, which should help protect actual artists.

              • By ragequittah 2025-09-152:17

                I guess everyone's definition of good is different. The fact that an AI won an art competition [1] back in 2022, and is now way better than that, says something. I'm almost positive that if you put some great AI artists up against some great traditional artists, you'd have a very hard time telling the difference or picking the winners. This is the kind of bias we're taught not to have as young children (blind hatred of x because x is bad), but I see it all too often right now.

                [1] https://www.nytimes.com/2022/09/02/technology/ai-artificial-...

      • By morkalork 2025-09-1414:33

        It shifted the signal-to-noise ratio, but it's not a net negative either. Whole new genres of music exist now because easy mixing tech is freely available. Do you or I like SoundCloud mumble rap? No, probably not. But enough people out there do.

      • By taurath 2025-09-1410:521 reply

        If people are making art to get rich and failing, it doesn't kill artists, who'd be making art anyway; it kills the people trying to earn money from their art. Do we need quad-A blockbuster Ubisoft/Bethesda/Sony/MS/Nintendo releases for their artistic merit, or because their publishers/IP owners need to make money off of them? Ditto the big four movie studios. Those don't really seem to matter very much. The whole idea of tastemakers, who they are and whether they should be trusted (indie vs. big studio, grassroots or intentionally cultivated), seems to ebb and flow. Right now I'd hate to be one of the bigs, because everything that made them a big is not working out anymore.

        • By iamacyborg 2025-09-1411:201 reply

          People are wanting to make a living by making art, not to get rich.

          I highly recommend reading the book I mentioned as you don’t seem to have a particularly nuanced understanding of the actual struggles at play.

          Perhaps an analogy you’ll understand is what happens to the value of a developer’s labour when that labour is in many ways replicated by AI, and big AI companies actively work to undermine what makes your labour different by aggressively marketing that anyone can do what you do with their tools.

          • By squigz 2025-09-1412:001 reply

            Isn't this just a result of technological progress? Technology has displaced entire fields of labor for... well, ever.

            I'm not unsympathetic to the problems this introduces to those workers, but I'm really not sure how it could be prevented; we can of course mitigate the issues by providing more social support to those affected by such progress.

            In the case of artistic expression becoming more accessible to more people, I have a hard time looking at it as anything but a net positive for society.

            • By iamacyborg 2025-09-1415:221 reply

              > In the case of artistic expression becoming more accessible to more people

              The problem is that folks seem to be confused between artistic expression and actually good art. Let alone companies like Spotify cynically creating “art” so that they can take even more of the pie away from the actual artists.

              • By squigz 2025-09-1422:59

                Well putting aside the simple question of "who are you to say what is 'good' art"... are they really? GP says

                > Mind you this is barrier to entry. These are shovelware quality assets and I’m not running a business. But now I’m some guy on the internet who can fulfil a hobby of his and develop a skill. Who knows, maybe one day I’ll hit a goldmine idea and commit some real money to it and get a real artist to help!

                So apparently they recognize what's going on. In the same vein as me being able to enjoy silly crude animations on YouTube while also enjoying high-quality animations like Studio Ghibli; we can do both.

                As for how companies will use AI to enrich themselves whenever possible; absolutely agree, but that's a separate discussion.

      • By hackable_sand 2025-09-1415:06

        I make a rap album because anybody can

        My contribution to this scam

      • By Den_VR 2025-09-148:466 reply

        This reminds me of my preferred analogy: are digital artists real artists if they can’t mix pigment and skillfully apply them to canvas?

        Not sure why digital artists get mad when I ask. They’re no Michelangelo.

        • By simianparrot 2025-09-149:05

          That's a really bad analogy, because even in digital art where you can pick your color from a color wheel on a monitor, understanding how primary colors combine to become different colors and hues is a _fundamentally_ important aspect of creating appealingly colored paintings, digital or physical. Color theory is about balance; some colors have more visual "weight" than others. Next to each other they take on entirely different appearances -- and can look hideous or beautiful.

          This isn't me saying digital artists need to practice mixing physical pigment, but anecdotally, every single professional digital artist I know has studied physical paint -- some started there, while others ended up there despite starting out and being really good digitally. But once the latter group hit a plateau, they felt something was lacking, and going back to the fundamentals lifted them even higher.

        • By topaz0 2025-09-1414:211 reply

          If they get mad it's because you're saying this explicitly to be an asshole. The essence of art doesn't have much to do with the mechanical skills for assembling pieces into a whole, though that part isn't trivial. Rather, it's about expressing human thoughts and feelings in a way that inspires their human audience. That's why AI-generated "art" is different in kind from a skilled digital artist's work, and why it really cannot be art.

          • By Den_VR 2025-09-154:181 reply

            If you’ve read other threads you’ll see humans quite optimistic about how “ai” art tools have let them express themselves. And this is only the beginning of the commercialization of new tools, so I offer my wholehearted dissent that “AI-generated art” cannot be art. Style transfer has gone quiet but still holds my attention, for example.

            • By topaz0 2025-09-1612:56

              I've seen nothing convincing along those lines -- just people fooling themselves with simulacra. But to be clear, even if I'm wrong about this, it's not worth the other costs.

        • By billypilgrim 2025-09-1411:06

          It may be maddening to them because you are implying that physical color mixing is somehow that one defining thing that makes it art. Imagine someone said that about writing a book: if you don't write it by hand but use Microsoft Word instead, it's not a real book. How would that even be the case? The software is not doing the work for you (unless it's AI).

          I can tell you with confidence that physical color mixing itself is a really small part of what makes a good traditional artist, and I am indeed talking about realistic paintings. All the art fundamentals are exactly the same whether you do digital art or traditional oil; there are just some technical differences on top. I have been learning digital painting for a few years, and the hardest things to learn about color were identical to those traditional painters face. In fact, after years of learning digital painting and about color, it only took me a couple of days to understand and perform traditional color mixing with oil. The difficult part is knowing what colors you need, not how to get there (mixing, using the sliders, etc.)

          And just to add a small bit here: digital artists also color-mix all the time and need to know how it works; the difference is that digital mixing is additive instead of subtractive.
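          To make the additive/subtractive distinction concrete, here is a toy sketch (my own illustration, not anything from the thread): additive mixing models combined light by summing channels, while subtractive mixing approximates pigments by multiplying per-channel reflectance.

```python
# Toy comparison of additive (light/RGB) vs subtractive (pigment) mixing.
# Additive: channels sum and clamp -- red light + green light reads as yellow.
# Subtractive: each pigment absorbs light; a crude model multiplies
# per-channel reflectance -- cyan + yellow pigment comes out green.

def mix_additive(a, b):
    return tuple(min(255, x + y) for x, y in zip(a, b))

def mix_subtractive(a, b):
    return tuple(x * y // 255 for x, y in zip(a, b))

red, green = (255, 0, 0), (0, 255, 0)
cyan, yellow = (0, 255, 255), (255, 255, 0)

print(mix_additive(red, green))       # (255, 255, 0) -- yellow
print(mix_subtractive(cyan, yellow))  # (0, 255, 0)   -- green
```

          The multiplicative model is a simplification (real pigments need spectral data), but it captures why the same two "colors" combine differently on a screen than on a palette.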

        • By esafak 2025-09-1417:01

          Everybody has to decide where to draw the line at convenience versus artistic purity. For most, the creative act is in selecting the color, not how you get there.

          Do you sneer at those who use industrial pigments instead of catching and crushing their own cochineal beetles?

        • By iamacyborg 2025-09-148:58

          Given the diversity of media involved in digital art, I’m not sure that analogy is a particularly good one.

          And to add, like many of his contemporaries, Michelangelo likely didn’t do much of the painting that’s attributed to him.

        • By jampekka 2025-09-1410:58

          Are assembly programmers real programmers if they can't implement their algorithms by soldering transistors?

    • By SideburnsOfDoom 2025-09-1412:17

      > "AI" is that it seems to significantly reduce barriers to entry in many domains.

      If you ask an LLM to generate some imagery, in what way have you entered visual arts?

      If you ask an LLM to generate some music, in what way have you entered being a musician?

      If you ask an LLM to generate some text, in what way have you entered writing?

    • By RataNova 2025-09-149:023 reply

      Totally agree that what AI is doing right now feels more like the GarageBand/iMovie moment than the iPhone moment. It's democratizing creativity, not necessarily creating billion-dollar companies. And honestly, that's still a big deal.

      • By quantum2022 2025-09-1412:44

        Yes, maybe what people create with it will be more basic. But is 'good enough' good enough? Will people pay for apps they can create on their own time, for free, using AI? There will be a huge disruption to the app marketplace unless apps are so much better than what an AI could create that they're worth the money. So short Apple? :) On the other hand, many, many more people will be creating apps and charging very little for them (because if it isn't free or cheaper than the value of my time, I'm building it on my own). This makes things better for everyone, and there'll still be a market for apps. So buy Apple? :)

      • By oblio 2025-09-149:154 reply

        The thing is... Elbow grease makes the difference.

        If you're just generating images using AI, you only get 80% there. You need at least to be able to touch up those images to get something outstanding.

        Plus, is getting 1 billion bytes of randomness/entropy from your 1 thousand bytes of text input really <your> work?

        • By evrimoztamur 2025-09-149:261 reply

          Pollock can get uncountable bytes of entropy from a skilful swing of a bucket.

          • By oblio 2025-09-149:332 reply

            Most art isn't like that. I would argue most people dislike that kind of art.

            • By larrry 2025-09-1411:201 reply

              I understand not liking Pollock, and he’s often the butt of “my kid could do that”. But do you really think most people dislike it?

              In person they are compelling, and there is more skill at play than at first glance. I like them at least

              • By oblio 2025-09-1414:25

                Well, stuff that's popular is plastered everywhere. Think about artworks we see in movies, TV shows, billboards, album covers, book covers, basically everywhere around us.

                I would argue that most art around us is current pop art or classical/realist/romantic art, not modern/postmodern/abstract expressionist art.

        • By userbinator 2025-09-1410:02

          > Plus, is getting 1 billion bytes of randomness/entropy from your 1 thousand bytes of text input really <your> work?

          I think what AI has made and will make many more people realise is that everything is a derivative work. You still had to prompt the AI with your idea, to get it to assemble the result from the countless others' works it was trained on (and perhaps in the future, "your" work will then be used by others, via the AI, to create "their" work.)

        • By esafak 2025-09-1417:06

          It is apposite that in a lot of modern art, the concept and provenance are valued more than the execution.

        • By dottjt 2025-09-149:24

          For now. Eventually it will get you 100% of the way there and we'll have the tooling for it as well.

      • By deergomoo 2025-09-1416:55

        The difference is you still need to express creativity in your use of GarageBand and iMovie. There is nothing creative about typing "give me a picture of x doing y" into a form field.

        Also, "democratizing"? Please. We're just entrenching more power into the small handful of companies who have been able to raise and set fire to unfathomable amounts of capital. Many of these tools may be free or cheap to use today, but there is nothing for the commons here.

    • By IAmGraydon 2025-09-141:554 reply

      I'm wondering about a good way to create 2D sprite sheets with transparency via AI. That would be a game changer, but my research has led me to believe that there isn't a good tool for this yet. One sprite is kind of doable, but a sprite animation with continuity between frames seems like it would be very difficult. Have you figured out a way to do this?

      • By Waterluvian 2025-09-142:27

        I think an important way to approach AI use is not to seek the end product directly. Don’t use it to do things that are procedurally trivial like cropping and colour palette changes, transparency, etc.

        For transparency I just ask for a bright green or blue background then use GIMP.

        For animations I get one frame I like and then ask for it to generate a walking cycle or whatnot. But usually I go for like… 3 frame cycles or 2 frame attacks and such. Because I’m not over reaching, hoping to make some salable end product. Just prototypes and toys, really.
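        The knock-out step itself is mechanical. A minimal stdlib-only sketch (the key color and tolerance values are my own assumptions, not anything the commenter specified) that turns near-key pixels transparent in a list of RGBA tuples, the same kind of data Pillow's img.getdata() would hand you:

```python
# Minimal chroma-key: any pixel within `tol` of the key color on every
# channel becomes fully transparent. Works on plain RGBA tuples so no
# imaging library is required; tolerance absorbs compression noise.

def chroma_key(pixels, key=(0, 255, 0), tol=40):
    out = []
    for r, g, b, a in pixels:
        if all(abs(c - k) <= tol for c, k in zip((r, g, b), key)):
            out.append((r, g, b, 0))  # knock out the background
        else:
            out.append((r, g, b, a))  # keep the subject untouched
    return out

sprite = [(10, 250, 12, 255), (200, 30, 30, 255)]  # background px, subject px
print(chroma_key(sprite))  # [(10, 250, 12, 0), (200, 30, 30, 255)]
```

        A real pipeline would also despill (remove the green fringe bleeding into edge pixels), which is why generated images rarely key perfectly on the first pass.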

      • By LarsDu88 2025-09-147:321 reply

        I was literally experimenting with this today.

        Use Google Nano Banana to generate your sprite with a magenta background, then ask it to generate the final frame of the animation you want to create.

        Then use Google Flow to create an animation between the two frames with Veo3

        It's astoundingly effective, but still rather laborious and lacking in ergonomics. For example, the video aspect ratio has to be fixed, and you need to manually fill the correct shade of magenta for transparency keying, since the imagen model does not do this perfectly.

        IMO Veo3 is good enough to make sprites and animations for a 2000s 2D RTS game in seconds from a basic image sketch and description. It just needs a purpose-built UI for gamedev workflows.

        If I was not super busy with family and work, I'd build a wrapper around these tools

        • By IAmGraydon 2025-09-162:17

          That’s great info. Thank you. I’ll give it a shot tomorrow.

      • By larrry 2025-09-1411:26

        I’ve been building up animations for a main character sprite. I’m hoping one day AI can help me make small changes quickly (apply different hairstyles mainly). So far I haven’t seen anything promising either.

        Otherwise I have to touch up a hundred or so images manually for each different character style… probably not worth it

      • By lelanthran 2025-09-145:491 reply

        I don't use AI for image generation so I don't know how possible this is, but why not generate a 3D model for Blender to ingest, then grab 2D frames from the model for the animation?

        • By raincole 2025-09-146:31

          Because, uh, literally everything. But the main reason is that modeling is actually the easy (easiest) part of the workflow. Rigging/animating/rendering in the 2D style you want are bigger hurdles. And SOTA AIs don't even do modeling that well.

    • By Gigachad 2025-09-142:511 reply

      It's good for prototypes, where you want to test the core gameplay ideas without investing a ton early on. But you're going to have to replace those assets with real ones before going live because people will notice.

    • By Havoc 2025-09-1412:35

      Yeah that seems accurate.

      I mainly use AI for selfhosting/homelab stuff and the leverage there is absolutely wild - basically knows "everything".

    • By ta12653421 2025-09-1412:10

      Regarding assets, check out Nano Banana:

      https://github.com/PicoTrex/Awesome-Nano-Banana-images/blob/...

      For you the example of "extract object and create iso model" should be relevant :)

    • By catlifeonmars 2025-09-140:011 reply

      I have a similar problem (available assets drive/limit game dev). What is your workflow like for generative game assets?

      • By Waterluvian 2025-09-140:403 reply

        It’s really nothing special. I don’t do this a lot.

        Generally I have an idea I’ve written down some time ago, usually from a bad pun like Escape Goat (CEO wants to blame it all on you. Get out of the office without getting caught! Also you’re a goat) or Holmes on Homes Deck Building Deck Building Game (where you build a deck of tools and lumber and play hazards to be the first to build a deck). Then I come up with a list of card ideas. I iterate with GPT to make the card images. I prototype out the game. I put it all together and through that process figure out more cards and change things. A style starts to emerge so I replace some with new ones of that style.

        I use GIMP to resize and crop and flip and whatnot. I usually ask GPT how to do these tasks as photoshop like apps always escape me.

        The end result ends up online and I share them with friends for a laugh or two and usually move on.

        • By sillyfluke 2025-09-1410:201 reply

          You said you had a budget of about zero in your top post. Was that for the pre-AI era, or does that apply to your new AI flow as well? If it's still about zero, I'm guessing you're primarily using AI to learn how to do stuff, not to generate assets? Is that a correct assumption?

          Edit: also, where can we play Escape Goat.

        • By danielscrubs 2025-09-146:05

          Can you get consistency in the design? I know this was a problem 3 years ago…

        • By larrry 2025-09-1411:27

          Those games sell themselves on name alone, are they playable anywhere?

    • By poszlem 2025-09-1413:30

      I introduced my mother to Suno, a tool for music generation, and now she creates hundreds of little songs for herself and her friends. It may not be great art, but it’s something she always wanted to do. She never found the time to learn an instrument, and now she finally gets to express herself in a way she loves. Just an additional data point.

    • By cjbarber 2025-09-1413:57

      Yes! Barrier to entry down, competition goes up, barrier to being a standout goes up (but, many things are now accessible to more people because some can get started that couldn't before).

      Easier to start, harder to stand out. More competition, a more effective "sort" (a la patio11).

    • By silenced_trope 2025-09-1418:04

      I'm also a hobbyist gamedev that struggles with the art side. Can I ask what AI tools you've been using most?

    • By mallowdram 2025-09-1415:09

      The genericizing of aesthetics is far more cost than benefit. "Reducing barriers to entry" is a false claim if the barrier is part of the progression of creativity. Once the user becomes entranced by AI's genericized assets, the cost-benefit calculus deforms.

      If we take high-level creativity and flatten, really horizontalize, its forms, the cost is much higher, because experience becomes generic.

      AI was a complete failure of imagination.

    • By sixtyj 2025-09-147:462 reply

      Easy entry does not equal getting rich.

      • By Cthulhu_ 2025-09-148:041 reply

        In fact, one could argue it makes it harder; if the barrier to entry for making video games is lowered, more people will do it, and there's more competition.

        But in the case of video games, similar things have already been happening: tooling, accessible and free game engines, online tutorials, ready-made assets, etc. have lowered the barrier to building games, and the internet, Steam, itch.io, etcetera have lowered the barrier to publishing them.

        Compare that to when Doom was made (as an example, because it's well documented): Carmack had to learn 3D rendering, and how to make it run fast, from scientific textbooks; they needed a publisher to invest in them so they could actually start working on it full-time; and they needed to have diskettes with the game or its shareware version manufactured and distributed. And that was when part of distribution was already going through BBSes.

        • By sixtyj 2025-09-1414:43

          Yeah, you’re right.

          Ease of entry brings more creative people into the industry, but over time it all boils down to ~5 hegemons, see FAANG - but those are disrupted over time by the next group (and eventually bought out by those hegemons).

          Offtopic: I once read a comment that starting a company with the goal of exiting is like constantly thinking about death :)

      • By _DeadFred_ 2025-09-1419:17

        Something like 200,000 new songs are uploaded to music services every day because tech lowered the barrier to entry. How's that working? Lots and lots of new rich musicians?

    • By WalterBright 2025-09-145:511 reply

      I enjoy using AI generated art for my presentations.

      • By awjlogan 2025-09-147:451 reply

        I chuckled seeing it in the first presentation of the conference. By the end of the conference, it was numbingly banal.

        • By WalterBright 2025-09-1420:05

          Well, you still have to come up with something interesting.

    • By zwnow 2025-09-147:161 reply

      Funny how everyone is just okay with the basis for all this art being art stolen from actual humans. Zero sense of ethics.

      • By hyperbovine 2025-09-147:432 reply

        Not clear that being able to sample from a distribution == stealing.

        • By bgwalter 2025-09-1414:261 reply

          Given that "AI" training needs millions of books, papers and web pages, it is a derivative work of all those books. Humans cannot even read a fraction of that and still surpass "AI" in any creative and generative domain.

          "AI" is a smart, camouflaged photocopier.

          • By hyperbovine 2025-09-150:231 reply

            Photocopiers are legal. And every piece of art that has ever been produced is derivative.

            • By zwnow 2025-09-155:48

              Photocopying books, for example, is in fact not legal. Same goes for redistribution of art you stole.

        • By zwnow 2025-09-147:501 reply

          I don't care how you phrase it. It's no secret that art was stolen from artists. Image generation is thievery.

          • By namdnay 2025-09-148:322 reply

            Is it the same if it’s a human doing the learning? If I spend my youth looking at art, is any work I then do “theft”?

            • By michaelt 2025-09-1410:261 reply

              When it comes to fan art of Disney characters, the legal position is "Disney could sue you for that, but chooses not to as suing fans would be bad PR, don't do anything commercial with it though or they'll sue you for sure"

              So - yes, as I understand things it can indeed be illegal even if a human does the learning.

              • By ndriscoll 2025-09-1417:02

                If we're discussing ethics, copyright law is basically irrelevant. Copying the art from Snow White and the Seven Dwarfs would get you sued, but no sane person thinks it's unethical. Everyone involved in making it has almost certainly been dead for a while now. It's ethically no different from copying the original story like Disney did.

            • By zwnow 2025-09-149:091 reply

              If you copy an artists style with extreme precision without their consent, yes.

              • By namdnay 2025-09-1413:131 reply

                I don’t think that’s what we were talking about here - it was using AI to replace graphic designers at startups

                • By zwnow 2025-09-1415:41

                  Which is an issue. I dont think you understand the whole point.

    • By cactusplant7374 2025-09-1323:32

      I have been doing the exact same thing with assets and also it has helped me immensely with mobile development.

      I am also starting to get a feel for generating animated video and am planning to release a children’s series. It’s actually quite difficult to write a prompt that gets you exactly what you want. Hopefully that improves.

  • By kristianc 2025-09-1323:432 reply

    > Yet some technological innovations, though societally transformative, generate little in the way of new wealth; instead, they reinforce the status quo. Fifteen years before the microprocessor, another revolutionary idea, shipping containerization, arrived at a less propitious time, when technological advancement was a Red Queen’s race, and inventors and investors were left no better off for non-stop running.

    This collapses an important distinction. The containerization pioneers weren’t made rich - that’s correct; Malcolm McLean, the shipping magnate who pioneered containerization, didn’t die a billionaire. It did, however, generate enormous wealth through downstream effects, by underpinning the rise of East Asian export economies, offshoring, and the retail models of Walmart, Amazon and the like. Most of us are much more likely to benefit from the downstream structural shifts of AI than from owning actual AI infrastructure.

    This matters because building the models, training infrastructure, and data centres is capital-intensive, brutally competitive, and may yield thin margins in the long run. The real fortunes are likely to flow to those who can reconfigure industries around the new cost curve.

    • By dash2 2025-09-145:181 reply

      The article's point is exactly that you should invest downstream of AI.

      • By th0ma5 2025-09-145:381 reply

        The problem is different, though: containers could be made by others and offered dependable success, whereas anything downstream of the model creators is at the whim of the model creators. And so far there seems to be little that one model can do that another can't, so none of this bodes well for a reliable footing from which anyone can add value for very long.

        • By dash2 2025-09-145:441 reply

          So if models, like containers, are able to be made by others (because they can all do the same thing), then they'll be commoditized and as the article suggests you should look for industries to which AI is a complement.

          • By th0ma5 2025-09-148:27

            It sucks: while individual anecdotes of success are often unfalsifiable, measurements are also proving misleading, and I don't know of an industry that generally benefits from unpredictable material.

    • By RataNova 2025-09-149:04

      AI's already showing hints of the same pattern. The infrastructure arms race is fascinating to watch, but it's not where most of the durable value will live

  • By unleaded 2025-09-1414:402 reply

    Something that's confused/annoyed me about the AI boom is that it's like we've learned to run before we learned to walk. For example, there are countless websites where you can generate a sophisticated, photorealistic image of anything you like, but there is no tool I know of that you can ask "give me a 16x16 PNG icon of an apple" and get exactly that. I know why—neural networks excel at fixed-size, organic data—but I don't think that makes it any less ridiculous. It also means that AI website generators are forced to generate assets with code when ordinary people would just use image/sound files (yes, I have really seen websites using WebAudio synths for sound effects).

    Hopefully the boom will slow down and we'll all slowly move away from Holy Shit Hype things and implement more boring, practical things. (although I feel like the world has shunned boring practical things for quite a while before)

    • By SquibblesRedux 2025-09-1515:561 reply

      I just asked ChatGPT-5 to "give me a 16x16 PNG icon of an apple" and it did exactly that. It looks good, too.

      Not that I don't recognize the inherent limits of LLMs, but there are as many edge cases covered as are found in the training sets. (More or less.)

      • By unleaded 2025-09-1519:50

        Well, I just asked it the same thing and it gave me a 1MB 1024x1024 PNG with fringed edges & sensor noise that measures out to a 17x21 pixel image. https://files.catbox.moe/1q4jtp.png

        In the time it would take to keep retrying until it makes one that fits, then reshaping it to fit into 16x16 nicely I could have just drawn one myself.
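        For what it's worth, the final squeeze down to a true 16x16 grid is the mechanical part. A nearest-neighbour sketch of my own (stdlib only; in practice you'd reach for Pillow's Image.resize) over a row-major pixel list:

```python
def downsample(pixels, w, h, new_w=16, new_h=16):
    """Nearest-neighbour resize of a row-major pixel list from w x h
    down to new_w x new_h: each target pixel samples the source pixel
    at the proportional position."""
    out = []
    for y in range(new_h):
        src_y = y * h // new_h
        for x in range(new_w):
            src_x = x * w // new_w
            out.append(pixels[src_y * w + src_x])
    return out

# e.g. collapse a 1024x1024 generation to a 16x16 icon grid
big = [(i % 256, 0, 0) for i in range(1024 * 1024)]
icon = downsample(big, 1024, 1024)
print(len(icon))  # 256 pixels = 16x16
```

        Of course this only fixes the dimensions; it does nothing about the fringed edges and noise, which is the part that still takes hand work.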

    • By qwertygnu 2025-09-1419:29

      As you seem to understand, creating something that generally fits a description is the walking for AI. Following exact directions is the running. It may just feel reversed because of the path of other technology.

HackerNews