In the old days - back before smartphones, back before widescreen monitors, back before broadband - the "Links" section was always a key part of any site. After spending time on a site, a visitor could find links to other pages - some of them on the same topic, some of them simply enjoyed by the creator of the site they were on. If one were to visualize the concept, they might well say that this formed a "web" of sorts.
The big publishers were the first to really reject the "Links" page. If it's not a link to our content, or the content of our sister publications, then why should we include it? Instead, they threw their resources into optimizing their placement on search engines. This took the "web" and turned it closer towards a hub-and-spoke system, as smaller sites withered and died.
Now, people have found a way to retrieve various pieces of information they're looking for that doesn't involve a search engine. It may not be perfect (gluey pizza, anyone?) but objectively, it's certainly more efficient than a list of places that have used the same words a person is searching for, and honestly probably at least "nearly-as" reliable as said list, because the average Joe Sixpack always has been, and always will be, a lot better at asking a question and getting an answer than he is at finding an answer to his question within the confines of a larger story.
This devastates the large publishers' traffic.
I'd come up with a conclusion here, but I'm too distracted wondering where I placed my violin. It's really small, it could probably be anywhere...
I think the conclusion is that changing your business model in a reactive way to internet developments is a bad idea if you want to have a stable business. If you want to run your business that way, you better be on top of everything and you better be lucky. They rode the social media wave and lost, and now they are going to try to ride the AI wave because they don't have anything to fall back on. They are going to lose.
Legacy media grew fat off of TV and local news. Captive attention markets did not teach them how to entice people's attention; they took it for granted. They are not equipped to compete with YouTube and TikTok and Reddit, and they will lose. Trending news from the AP wire is not unique or in-depth enough for anyone to want to read more than the AI summary of your article.
What should they do? What they are good at, and what they were always good at: journalism. Write in-depth articles that take time to research and talent to write. Hire real journalists, pay them to find stories that take time to write, and publish those stories. People will pay for it.
> People will pay for it.
I would love it if it were true, but sadly, the data doesn't support this. A lot of local newspapers did real journalism relevant to their communities. However, the local newspapers were the hardest hit by the social media wave and few remain today. Fast forward to now, you cannot get any real local news easily.
The average person never really valued real journalism to begin with, and the hyper-targeting/polarization of social media and closed echo chambers has made it worse.
There generally hasn't been a way to buy just the local news, so who knows. I emphasize "news", rather than "newspaper", here.
I gave up on the local papers because they contained more Reuters and New York Times wire stories than any actual local content. That was two decades back. I don't think they were willing to give up on the business model of being an aggregator.
This seems a common enough complaint that there is a Texas news company that simply called itself Local News Only, and there are a few other similar names: https://localnewsonly.com/
People get sick of it. Most people don't like living in a constant state of anger, ready to get into an argument all the time. We would rather have a shared notion of truth and a common bond. You can't predict the next 'thing' but you can usually count on it not being more of the same. Something new is going to take hold, and I would like it to involve substance and critique of narratives.
> Most people don't like living in a constant state of anger, ready to get into an argument all the time
They may not like it, but that does not mean they are motivated to break away from it. I do not think they are aware why they feel like that - they are more likely to blame the other people than the platform.
There is also an addictive element to it.
I don't think it is social media though. It started to go downhill for newspapers when they put their news on the internet for free subsidized by their papers.
> started to go downhill for newspapers when they put their news on the internet for free subsidized by their papers
To bolster this argument, the local papers that hard paywalled seem to have done just fine.
> People will pay for it.
I'm willing to pay, but not by individual subscriptions per news organization. I'm more interested in following journalists than news organizations.
Sounds like the Substack model?
Potentially, yes. However, the same problem I have with current subscription models also exists with Substack. I added up all the subscriptions necessary to bypass paywalls I encountered every month, and it came to roughly $3,000 a year. I'll have to do the same thing with Substack subscriptions. I expect they'll be like $50 a year for the basic subscription, so it would probably only be a few hundred to a thousand per year.
> ... the "Links" section was always a key part of any site. After spending time on a site, a visitor could find links to other pages - some of them on the same topic, some of them simply enjoyed by the creator of the site they were on.
Don't know how useful these are, but here are some links pages on a couple of websites I put together a while ago:
https://earthdirections.org/links/
Just personal non-commercial handcrafted sites. One day I'd like to figure out some tooling to manage / prune / update links, etc.
A recent article on HN was about small sites having their traffic destroyed, not large sites. And not just small, but small with essential human-written info.
The Gemini web (smolweb) has no effective search engine, and therefore links also play a crucial role in content discovery...
Nah, Webrings were an extension of the link page ... but not the same thing.
The links page was curated by the site operator and was usually a linear list. Its main goal was to say "Hey, this is cool, too".
A webring was more like a collective, whereas individual webring members did not necessarily know or agree with every other site in the ring. And it usually was not a list either, but more of a mini topical directory, often with a token-ring-style "Visit the next / random / prev site" navigation you could add to your own page. Webrings were already geared to increasing visitor numbers to your own page ("Others will link to me").
Oh, those were easier times.
What was the organisation of a webring like? Did you have to email two people to arrange to insert yourself as a node at the same time to avoid breaking the ring? Or iframe'd in from a central point?
Webrings were usually a centralized and automated entity. You'd add your site to the index (either through a webform or by emailing the maintainer), then link to http://web-ring.tld/cgi-bin/ring?site=currentsitename&action=next or something similar, which would then redirect to the actual next site in the ring.
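The redirect logic described above can be sketched roughly like this (the site names, URLs, and ring membership here are all invented for illustration; real rings ran as CGI scripts):

```python
# Toy webring redirector: the ring is an ordered list of member sites.
# "next"/"prev" wrap around the ring; "random" picks any other member.
import random

RING = {
    "alice-fanpage": "http://alice.example/",
    "bobs-trains": "http://bob.example/trains/",
    "carol-midi-shrine": "http://carol.example/midi/",
}
ORDER = list(RING)

def ring_redirect(site: str, action: str) -> str:
    """Return the URL the ring script would redirect a visitor to."""
    i = ORDER.index(site)
    if action == "next":
        target = ORDER[(i + 1) % len(ORDER)]
    elif action == "prev":
        target = ORDER[(i - 1) % len(ORDER)]
    elif action == "random":
        target = random.choice([s for s in ORDER if s != site])
    else:
        raise ValueError(f"unknown action: {action}")
    return RING[target]
```

The central index is what made joining easy: a member only linked to the ring script, never directly to neighbors, so adding or removing a site couldn't break the ring.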
In their heyday, there'd also be "start your own webring" sites, so you didn't need cgi-bin access on your GeoShitties or AngelFucker or TriPoop or xoom [1] site in order to start up a webring.
[1] The dry and square history books will claim that the most exciting thing about xoom was its large storage allocation (10mb at launch! 25 soon after! You could upload an entire three minute mp3 at 128CBR "CD-Quality" bitrate and still have tons of space left over for two-frame .gifs!) or its simple members.xoom.com/username URL, but the true soldiers of those bygone battledays will know it was xoom's resiliency to childish renaming-mockery.
Surprisingly, active webrings still exist:
The publishers were just chasing traffic like everyone else. Link pages were replaced by inline links, which were preferred by both search engines and users. The goal was to provide relevant resources in relevant context rather than in one big bucket dump no one's going to dig through anyway.
Early Google PageRank was notorious for how much additional trust a given page got based on how many links back to it existed. It was why certain bloggers had massive ranks early on, because they would be in big webs of conversations with lots of high quality links out and back in.
Early SEO did weaponize that and broke it for everyone.
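That link-based trust can be sketched as a toy power-iteration PageRank (the graph and damping value below are illustrative only, not Google's actual implementation):

```python
# Toy PageRank: pages with many inbound links, especially from
# well-linked pages, accumulate higher rank over repeated iterations.
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:  # split this page's rank among the pages it links to
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# A well-linked "hub" blog in a web of conversations outranks the rest.
graph = {"hub": ["a"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
ranks = pagerank(graph)
```

This is also why link farms worked: manufacturing inbound links inflated a page's rank until the signal was discounted.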
The “not perfect” part really kind of ruins it for me. I can’t trust the LLM search’s answers and have to go find the source anyway, so what’s the point?
I’m seeing people in chats post stuff like “hey I didn’t know this word also means this!” when it really doesn’t, and invariably they have just asked an LLM and believed it.
You can't blindly trust sources, either. Or, sometimes, your ability to understand the sources correctly.
I think of LLMs as bookworm friends who know a little bit about everything and are a little too overconfident about the depth of their understanding. They tend to repeat what they have heard uncritically, just like so many other people do.
If you don't expect them to be the ultimate arbiter of truth, they can be pretty useful.
Dictionary.com isn’t likely to just outright make up word meanings. There is such a thing as a trustworthy source, even if you can’t “blindly” trust it. You can still trust it and quote it and cite it. You can’t do any of those things so far with an LLM.
You're gloating about the hardship which editors, journalists, writers, our informational institutions are facing because... sites stopped having a Links page in 1998? What the fuck, man.
Google's damned if they do and damned if they don't here:
- If they don't make search AI-centric, they're going to get lapped by AI-first competitors like Perplexity, OpenAI, etc. We saw many people here predict Google's pending demise from this.
- If they do make search AI-centric, they're unfairly consuming the world's content and hoarding the user traffic for themselves.
Since no reasonable company is just going to stand by and willingly let itself be obsoleted, Google's obviously going to go for option 2. But had they for some reason stood down, then they would have been supplanted by an AI competitor and the headline would read "News Sites Are Getting Crushed by Perplexity" - just a few years later.
The one way forward for them would have been to maintain their quality, but they decided to cash in on their monopoly instead. Peak short-termism.
Seems to my untrained eyes like Google's AI search is actually the best on the market, no? Seems like a lot of HN users have trained themselves not to type queries into the search prompt anymore and then complain about the quality of a product they don't use.
Every once in a while I bother not ignoring a Google AI overview, then I waste some time fact-checking it and find out it's wrong. Most recently about a python library (where it hallucinated a function that doesn't exist, complete with documentation and usage examples) and breaking news (where it authoritatively said [non-culture war, non-controversial, local] thing doesn't happen, above a dog-bites-man story from a conventional news source about how thing happened again)
My experience with Gemini in AI Studio mirrors what the AI overview shows: it hallucinated libraries, with its internal reasoning dialogue reinforcing the hallucination and saying "the user doesn't know how to search on PyPI".
> Every once in a while
Pretty much what I said, no? You don't use the product and when you do, do it through a filter[1] where you only remember the bugs. Do you use other AI search products and find that they don't show this behavior?
[1] I mean, come on: framing it as "bother not ignoring" is just a dead giveaway that you aren't doing a product evaluation in good faith!
I am doing the review in good faith though--by default, I scroll past to the first result, then if it seems unsuitable and I'm desperate enough I check the LLM thingy. If it were providing any value it would sometimes be both novel and correct.
Usually it's non-novel (correctly, harmlessly, but unhelpfully restating the web search results). When it's novel, it's because it's wrong.
I would remember the situation where reading the LLM thingy added any value if it ever happened. The weird little UI thing they do where they only show the LLM result if you wait for it to render makes this very easy, I have to scroll up to even consider it.
I think you're also showing some bias.
People are now changing how they search and gather information to use AI. You're automatically discounting that person's experience because they avoid using AI for valid reasons. A lot of times Google AI is outright wrong, not even close to correct. It makes sense to not rely on it and only going back to see if it's improved.
I don't "ignore AI". I just haven't changed how I search and gather information because it's clearly not accurate yet. I still have to fact check it which negates the benefits(time saving). Occasionally I go back to see if it's improved and usually it hasn't. That's not bias.
The model that's doing AI Summary for search results - that presumably needs to be fast and cheap because of the scale - is still sufficiently bad as to give people a bad taste. Presumably they're frantically working to scale their better models for this use case. If you could get Gemini Pro on every search result the experience would be effectively perfect (in the sense of better error rates than what a non-specialist educated human reading the top results and summarizing them would achieve). That's years away from a scaling/cost/speed perspective.
One funny thing about Google summaries is “copy text” merges all of the links into a giant blob which gets interpreted as a single extremely long broken link. Not a great sign for attention to detail if they don’t even copy their own pasta (it’s been like this for months)
I've been pleasantly surprised at the quality of the answers, but they've been wrong enough that I'll never not double check them anyway.
In the US (to start) there's now a flavor of Gemini 2.5 to power Search experiences like AI Mode and AI Overviews. Should be sufficiently good at this point.
source: https://blog.google/products/search/google-search-ai-mode-up...
Possibly their AI search - I don't know, I switched to Kagi to get a search engine that actually did what I asked instead of just trying to put as many ads in front of me as it could.
I tried to like Kagi but the UI is awful and the results were often way off from what was relevant. Then the limitations were pretty lame.
What's wrong with the UI, what limitations have you experienced? I've found my searches relevant and haven't noticed any limitations.
The paywall. I'm not against paying, but their free tier limitations are way too low.
The UI is just horrible and a huge waste of space. I had to use a user style to make it a non-headache to use.
Kagi routinely lacks results that every other search engine I use get no problem. I can't give any examples but I found myself going to page 5+ more often on Kagi than any other service.
If the paywall bothers you, then Kagi just isn't for you, the whole point of it is to have a business model where the users are customers, not the product.
I have no problem with the UI. It's snappy, clean, doesn't have a bunch of cruft, it's easy to find results, etc. It feels like Google of yore.
Same thing with the search results, I find them really excellent. In my experience, there's much less SEO spam. If you search for a programming language function, you get links to the proper documentation, not some trash "geeksforgeeks" site or whatever. I can't remember if I've ever had to go past page one.
Obviously YMMV on that, it's entirely possible it's different depending on what you're searching for. But I'm very happy paying for Kagi, and will continue to do so.
Kagi is okay. Decided to try it after all the hearsay on HN and it was severely underwhelming.
Having reluctantly used both: Bing's Copilot seems a lot more grounded in the current search results below it, whereas Google's Gemini seems a lot more likely to conduct its own searches from a different query than what was asked, and so also a lot more likely to hallucinate things or to provide answers that seem way different from the rest of the search page.
In terms of "best on the market" for AI search, I know that I am much more likely to trust the one that seems more like a direct summary of the stuff the search engine is traditionally responding with (and presumably has been well tuned in the last several decades) versus the one more likely to make stuff up or to leave the realm of what you are actually asking for some other search it thinks is better for you.
Though admittedly that's a very personal judgment call; some people want the assistant to search for "what they really mean" rather than "what they asked for". It's also a lot of gut vibes from how these AIs write about their research and some of that can be hallucinations and lies and "prompt optimization" as much or more than any sort of "best on the market" criteria.
I take it you’re the rebellious teenager then?
Against the biggest corporations? Sure, I'd much rather fulfill the role of overly critical teenager than the defensive, protective parent when it comes to rapacious companies who deserve none of our support.
That’s a cheap argument. Even with high quality results (which I still think Google is the best at), LLMs are infinitely easier to use.
For anything important I always ask LLMs for links and follow them. I think this will probably just create a strong incentive to cover important things and move away from clickbait.
It's probably a win for everyone in the long run although it means news sites will have to change. That's going to be painful for them but it's not like they're angels either.
I'm surprised the links work for you at all. 90+% of citations for non trivial information (i.e. not in a text book but definitely in the literature) I've gotten from LLMs have been very convincing hallucinations. The year and journal volume will match, the author will be someone who plausibly would have written on the topic, but the articles don't exist and never did. It's a tremendous waste of time and energy compared to old fashioned library search tools.
And what happens when you follow them?
In my experience, the answers tend to be sourced from fringe little blogs that I would never trust in a Google search.
Google at least attempts to rank them by quality, while LLM web search seems to click on the closest match regardless of the (lack of) quality.
Huh that's strange to hear. The HN I remember would have always said the opposite (the small web tends to be higher quality) as do I.
One thing I did once with great success was asking chatgpt something like "I'm trying to find information about X, but when I Google it I just get results about the app named after X. Can you suggest a better query?"
X was some tehnical thing I didn't know a lot about so it gave me some more words to narrow down the query that I would not have known about myself. And that really helped me find the information I needed.
It also threw in some book tips for good measure.
So yeah I can highly recommend this workflow.
> I think this will probably just create a strong incentive to cover important things and move away from clickbait.
But clickbait is how they make money...
That's like saying, "Oh, Apple will just have to move away from selling the iPhone and start selling hamburgers instead."
I mean, sure, but they're not going to like it, and it's going to come with a lot of lost revenue and profits.
I find myself regularly copying URLs, sending them to Gemini, and asking it to answer what I want to get out of the article.
I'm not wasting my time scrolling through a mile of website and 88,000 ads to find the answer to the headline.
Those adverts and clickbait will infect LLMs soon enough; they'll just be far harder to block.
Yes, unfortunately for those saying AIs will only get better, advertising is a major reason we should expect them to get worse.
Ironically, I wonder if it would inspire a slew of downstream services that use LLMs to clean advertising out of the mainstream LLM responses.
With the huge usage that LLM APIs are getting in all sorts of industries, they cannot be going away, and they're cheap.
If consumer AI chatbots get enshittified, you can just grab some open source bring-your-api-keys front-end, and chat away for peanuts without ads or anything anti-user.
I use https://github.com/sigoden/aichat , but there are GUIs too.
Plus, anyone enterprising can just write a web front-end and sell it as "the ad-free AI chatbot, only $10/mo, usage limits apply".
But what if the AI output contains ads, not the UI or whatever...
They could simply restore the search quality they had in 2010. No one wants these "AI" summaries except for people looking to get promoted for "having an impact" inside Google.
What Google is doing right now is sabotaging the search moat they do have. They are throwing it all away because of some "AI" rainmakers inside the company.
That's impossible unless the web reverted back to 2010, when walled gardens weren't prevalent, making your own blog was common, doable and often done by those without programming experience, forums were alive and well, and people wanted to share things on the web rather than group chats.
There are plenty of blogs, plenty of obvious low quality spam to block, plenty of features to enable allowlist and blocklists. To think for a second that the Google search experience couldn't be made significantly better at the snap of a finger by Google is to live in a fantasy world.
Sure, sure, except for this minor issue that the argument I was responding to didn't mention revenue, they talked about the state of the internet. So why again are you responding to my counter with a straw man?
It's perfectly possible if they start downranking sites full of ads.
But an ad company will never do that.
> No one wants these "AI" summaries
Not true, I use them all the time. They have links available for when I want further information, which is not very often.
I never use them. Especially when they can be completely wrong (and the problem is how will you know that it's wrong?): https://news.ycombinator.com/item?id=44142113
you are linking to “AI responses can make a mistake” post???! Google’s top 86 search results are ads :)
Then what explains people doing millions of web searches on perplexity/chatgpt/claude?
Simply untrue. I don't want it back. I use ChatGPT's voice transcribing to do 99% of my searches today.
Google does need to adapt or die
they are losing more and more search to "AI." My 12-year-old never uses Google, and the couple of times I asked her to "Google it" she literally rolled on the floor laughing and called me a "boomer" :)
I wonder if "boomer" is going to become a generic term for "my parents' generation".
Already has been for a while now. Also just anyone that isn’t hip to the latest trend of the day.
100%
We are getting to watch The Innovator's Dilemma play out, yet again. The downward trajectory of Google's utility has only been worsening over the past 10 years-- but only in the last 3-4 have mainstream audiences started to notice.
The first part of that statement is valid but the second one isn't.
If anything, most of big tech has shown exceptional humility against new threats
Instagram incorporating stories (Snapchat)
YouTube incorporating Shorts (tiktok)
Google search incorporating AI Mode (perplexity et al)
This is in stark contrast to Kodak and the likes who scoffed at digital camera and phone cameras as distraction. They were sure that their ways were superior, ultimately leading to their demise.
Maybe you misunderstood the scope that Google is a search advertising company first and foremost? Alphabet ignores (yes, they essentially invented transformers, etc.., but actual productive efforts likely correlate to predicted TAM or protecting status quo, answering to shareholders while waiting to acquire threats) a market that will eventually usurp their cash cow of first party search ads, because the new market isn't initially as lucrative due to market size.

There is also the consideration of cannibalizing their high margin search ads market with an error prone and resource intensive tech that cannot immediately be monetized in a second price auction (both from inventory and bidder participant perspectives). A $10 billion market for Google would be under 3% of revenue, but if the market grows 10x, it is much more attractive - but now the incumbent may be trailing the nascent companies who refined their offerings (without risk of cannibalizing their own offerings) while said market was growing.

We are currently at the stage where Google is incorporating Gemini responses and alienating publishers (by not sending monetizable clicks while using their content) while still focusing on monetization via their traditional ad products elsewhere on the SERPs (text search ads, shopping ads). Keep in mind, they also control 3rd party display ads via DoubleClick and Adsense - but inventory on 3rd party sites will drop and Google will lose their 30%+ cut if users don't leave the SERPs.
Dozens of major news publications have covered the decline of Google's organic search quality and its emphasis on monetization (ignoring incorrect infoboxes and AI generated answers). See articles such as https://www.theatlantic.com/technology/archive/2023/09/googl... and a collection even posted here on HN https://news.ycombinator.com/item?id=30348460 . This has played into reasons why people have shifted away from Google. Their results are focused solely on maximizing Google's earnings per mille, as leaked (https://www.wsj.com/tech/u-s-urges-breakup-of-google-ad-busi...) where the ads team has guanxi over search quality. Once Amit Singhal and Matt Cutts left their roles, the focus on monetization over useful SERPs became much more evident.
I never really bought the idea of any AI company killing Google. They have too much momentum to really be seriously impacted, too many people who only use them exclusively and will continue to do so their whole lives on the name brand alone. They might risk a lack of "growth" but that only really matters to shareholders, not to end users.
Yahoo had 90%+ of the search market and they lost it in a few years to google because they were unable to innovate. I don't think anyone saw that coming. Everyone was building "portals" (remember those? AOL.com? I think verizon.com was one at some point with news and weather) to try and compete with Yahoo's dominance in search. It can happen again. LLM Chat is certainly an existential threat to the googs. Part of the OpenAI lore is that google originally viewed it as a threat to their search/advertising revenue model and defunded it on that basis.
The fact that people are willing to pay for LLM and use it over search seems to indicate that Google's free product isn't as good, and llm chat is better "Enough" that people are willing to pay for it.
The major flaw in your argument is that Yahoo is still around. They still have tons of traffic, some of the most in the world, just behind Reddit. They are not constantly growing, yes, but that is exactly my point. They have a satisfied userbase who will use them for life as does Google. Neither is going anywhere any time soon. Both make billions of dollars annually.
Yahoo's peak (inflation-adjusted) value was ~$325 billion and the last sale of their assets was $4.5 billion in 2017, which is about 1% of its peak value. I can't imagine google investors wanting to risk 99% of their investment.
Regardless, they are far from dead. Multiple billions of dollars is still doing fantastic for any business, investors and shareholders be damned.
It's funny most people are saying Google will win the AI wars, though that is precisely what will cannibalize their current business model, which had a much bigger moat than frontier LLMs, apparently.
You think we wont start seeing ads or paid for refs/links in those AI responses? Not defending Google here, when they turned that feature on I posted to some friends "another nail in the coffin for the web as we know it" or something to that effect.
Eventually open models will be able to do the same, so why would anyone use an ad-ridden service? The first LLM provider who turns on ads in their responses will disappear in a blink.
Data won't be open and free for scraping in the future. And news worthy sites will ask for $$.
So nope, open models won't be a threat in the future.
It will take a long time until an average person has the resources to run models of similar quality (and speed) as Google and OpenAI can provide today.
It took under a year last time.
People use, and pay for, ad-ridden services all the time. I mean, just look at Cable TV and the direction all the streaming services are going.
Provided there are no identical alternatives without ads.
Because it's going to be 100x better than open models.
Is this really that bad for Google? Do Perplexity and OpenAI use paid SERP API under the hood? Google doesn't have to make money from ads on search, if its paid search.
Spare us the "woe is me" - they literally invented replacing the publishers. Yesterday it's infoboxes, today it's shitty AI summaries. Which is still the case, so good riddance.
What is an infobox?
A simple tale of how capitalism leads to unintended anti-social consequences through market mechanisms that no one participant can control.