Most websites can't tell you about them. But I can.
==============================
The Windows ReadMe - #005
==============================
“We can’t write about them. We’ll get in trouble.”
That’s the attitude I had about YouTube downloaders when I ran How-To Geek as Editor-in-Chief. We self-censored to protect ourselves. But I’m not dancing for Google ad revenue anymore.
This ReadMe file is about incredibly useful free YouTube downloaders that I recommend. But it’s also about so many other truths people don’t normally share:
Why YouTube downloaders are ethical and you shouldn’t apologize for using them.
Why Google secretly needs YouTube downloaders.
Why toothless terms of service like YouTube’s are no better than the EULAs we’ve been ignoring for decades.
And how Google has used its ad network (now ruled an illegal monopoly) to privilege its own services ahead of competitors.
But yes, this is also a list of seriously useful free YouTube downloaders. The web is full of spammy ones, and I’ll show you the real ones.
==============================
This week’s tip
==============================
Since I’m not writing to optimize this list for Google, I can just give you the answer!
Here are the best YouTube downloaders -- based on my personal experience:
The best YouTube downloader for Windows is Stacher. It’s free, open-source, and simple. It’s an easy-to-use graphical application that does the setup for you.
The best YouTube downloader for the command line is yt-dlp. Use it if you want to get your hands dirty -- there’s a sample command after this list. (Stacher is cool because it provides a graphical interface and does all the hard work for you.)
The best YouTube downloader for Mac and Linux? Also Stacher! It’s cross-platform.
The best YouTube downloader on the web is Cobalt.tools -- or at least it used to be. It looks like Google is blocking it right now. Until it comes back, I recommend other tools. (Edit: Apparently there are still Cobalt instances that work — see this comment! Thanks, ZedK.)
The best YouTube downloader for Android is NewPipe. This third-party YouTube app has a built-in download tool.
Use any of these and you’ll get a video file you can back up, archive, and do whatever you want with. It’s yours to preserve.
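If you go the yt-dlp route, here’s a minimal example of what that looks like -- the URL is a placeholder, and merging separate video and audio streams assumes you have ffmpeg installed:

# List the formats YouTube offers for a video
yt-dlp --list-formats "https://www.youtube.com/watch?v=VIDEO_ID"

# Download the best video and audio streams and merge them into one MP4 file
yt-dlp -f "bestvideo+bestaudio" --merge-output-format mp4 "https://www.youtube.com/watch?v=VIDEO_ID"

Stacher gives you the same result through a graphical interface, so you never have to touch a terminal if you don’t want to.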
When you install an application, you often click through a long end user license agreement. If people had to read each agreement in full, society would grind to a halt.
Even companies often don’t read their own EULAs. When Apple launched Safari for Windows, the EULA it shipped with said people couldn’t install it on Windows. The message? Even companies like Apple don’t care what their own legal boilerplate says. So why should we care?
So yes: YouTube’s terms of service may or may not say you can’t download videos from it. I haven’t checked. Have you read it in full? Have you checked the terms of service for every product you’ve used to confirm you’re in compliance? No one has -- that’s the point.
YouTube has become part of the plumbing of the modern web. It hosts everything from city council meetings to recorded live-streams of important family events. If a video is important to you -- or you want to have a copy for legal reasons -- you should download it. And, to do that, you’ll need a YouTube downloader.
Using a YouTube downloader is like printing a web page to a PDF or saving an image file for later -- you get an offline copy you can archive. Just like with anything else on the web, a YouTube video may be taken down by its creator in the future. And you may need your offline copy.
Google needs YouTube downloaders. They perform a valuable role: If it were impossible to download YouTube videos, many organizations would abandon hosting their videos on YouTube for a platform that offered more user flexibility. Or they’d need to host a separate download link and put it in their YouTube descriptions. But organizations don’t need to jump through hoops -- they just let people use YouTube downloaders.
Google could lock down YouTube harder. Services like Netflix use DRM-protected streams to stop downloads. Google could make it much harder to download videos. But Google benefits from setting up a gray market ecosystem of often-inconvenient download tools. The ecosystem of YouTube downloaders and Google’s tacit approval of them has helped cement YouTube’s dominance.
When I ran How-To Geek as Editor-in-Chief -- and when I was a writer -- we went out of our way to avoid writing about YouTube downloaders. And we weren’t the only publication that avoided touching them, despite reader interest.
So many publications have long been dependent on Google ad revenue -- in fact, Google’s ad network was recently ruled an illegal monopoly in the U.S. And Google had a very interesting provision in its rules: Google could revoke ads if you messed with its other businesses.
This wasn’t just theoretical. Back in 2012, GHacks shared that it had Google AdSense ads removed from its entire website for “Google Product Abuse” because the website wrote about a YouTube downloader. Google required the offending YouTube downloader article to be removed.
The message was that Google was serious, and that messing with Google’s YouTube business in any way was grounds for Google putting you out of business.
Google has now covered its tracks better -- there’s nothing about “Google Product Abuse” in its current AdSense policies. But the anti-downloader rules appear to have started as a way to protect its own products.
Google has been walking a line for over a decade now: YouTube lets you use downloaders, but Google makes them inconvenient to find and annoying to use. Google tries to stop your favorite websites from writing about them. Google breaks the tricks they depend on.
If you want to find a way to download an important video, you’ll find it -- that’s an important escape hatch and means YouTube retains its dominance as an online engine of culture.
But Google loves making YouTube downloads just annoying enough that you won’t bother unless you really want to do it.
Also: When Google itself is training its AI on content against the wishes of publishers, why should we feel bad about downloading backup copies of videos that are important to us?
We shouldn’t. Download the video you want. Back it up somewhere safe.
==============================
Something I'm proud of this week
==============================
Microsoft was pitching Windows Recall as the shiny AI feature to carry its Copilot+ PC brand, but no one talks about Recall anymore. The launch was too messy, the feature was too delayed, and the search experience never became as useful as Microsoft promised.
Now, Microsoft’s headline AI feature for Copilot+ PCs has become Click To Do. I dove into how this awkwardly named AI feature works for PCWorld.
Seriously, what a weird name: Haven’t we always been clicking to do things?
==============================
Insights from Thurrott.com
==============================
Google is bringing a search app to Windows -- it’s the return of Google Desktop, but with more AI this time! Also, in more AI-related Google news, Gemini is popping up in Chrome browsers -- no subscription needed.
In Windows news, Consumer Reports is calling on Microsoft to extend support for Windows 10. And Notepad will let you use AI features without spending AI credits.
For Thurrott Premium subscribers, Paul’s been trying out the iPad as a laptop and thinking about the future of computing. He also launched a newsletter that’s not about news -- and isn’t a letter. (Excellent.)
==============================
EULAs and a time machine
==============================
Back in 2012, I wrote this piece about ridiculous EULA clauses for MakeUseOf.
(Yes, I just linked an Archive.org backup of a piece I wrote 13 years ago. I don’t know whether MakeUseOf’s terms of service allowed Archive.org to save a backup copy, but I’m glad they did save a copy. Backups are important.)
Looking back at it, my favorite ridiculous EULA clause was the "special consideration" in PC Pitstop's EULA. It said that the first person who noticed this line in the EULA could email the company and receive a financial reward.
It took four months for someone to notice the line and claim a $1000 prize. No one reads EULAs, even when they have something positive to say!
==== Command Prompt ====
C:\> net send * "Have a great weekend!"
The claim that Google secretly wants YouTube downloaders to work doesn't hold up. Their focus is on delivering videos across a vast range of devices without breaking playback (and even that is blurring [0]), not enabling downloads.
If you dive into the yt-dlp source code, you see the insane complexity of calculations needed to download a video. There is code to handle nsig checks, internal YouTube API quirks, and constant obfuscation that makes it a nightmare (and the maintainers heroes) to keep up. Google frequently rejects download attempts, blocks certain devices or access methods, and breaks techniques that yt-dlp relies on.
Half the battle is working around attempts by Google to make ads unblockable, and the other half is working around their attempts to shut down downloaders. The idea of a "gray market ecosystem" they tacitly approve ignores how aggressively they tweak their systems to make downloading as unreliable as possible. If Google wanted downloaders to thrive, they wouldn't make developers jump through these hoops. Just look at the yt-dlp issue tracker overflowing with reports of broken functionality. There are no secret nods, handshakes, or winks; as Google begins to care less and less about compatibility, the doors will close. For example, there is already a secret header used for authenticating that you are using the Google version of Chrome browser [1] [2] that will probably be expanded.
[0] Ask HN: Does anyone else notice YouTube causing 100% CPU usage and stuttering? https://news.ycombinator.com/item?id=45301499
[1] Chrome's hidden X-Browser-Validation header reverse engineered https://news.ycombinator.com/item?id=44527739
[2] https://github.com/dsekz/chrome-x-browser-validation-header
> If you dive into the yt-dlp source code, you see the insane complexity of calculations needed to download a video. There is code to handle nsig checks, internal YouTube API quirks, and constant obfuscation that makes it a nightmare (and the maintainers heroes) to keep up. Google frequently rejects download attempts, blocks certain devices or access methods, and breaks techniques that yt-dlp relies on.
This just made me incredibly grateful for the people who do this kind of work. I have no idea who writes all the uBlock Origin filters either, but blessed be the angels, long may their stay in heaven be.
I'm pretty confident I could figure it out eventually, but let's be honest: the chance that I'd ever actually invest that much time and energy approximates zero closely enough that we can just say it's flat nil.
Maybe Santa Claus needs to make some donations tonight. ho ho ho
As the web devolves further, the only viable long-term solution will be allow lists instead of block lists. There is so much hostility online -- from websites that want to track you and monetize your data and attention, to SEO scams and generated content, to an ever-increasing army of bots -- that it's becoming infeasible to maintain rules to filter all of it out. It's much easier to write rules for traffic you approve of, although they will have to be more personal than block lists.
This is more or less what I already do with uBlock/uMatrix. By default, I filter out ALL third party content on every website, and manually allow CDNs and other legitimate third party domains. I still use DNS blacklists however so that mobile devices where this can't be easily done benefit from some protection against the most common offenders (Google Analytics, Facebook Pixel, etc.)
I’m not sure why everyone keeps repeating this. The fight is lost. Your data is being collected by the websites you visit and handed to Facebook via a proxy container. You will never see a different domain, it’s invisible to the end user.
Care to elaborate on the mechanisms at play? If what you claim is true, all websites would already serve ads from their own domain. The main issue I can see with this approach is that there would be an obvious incentive for webmasters to vastly overstate ad impressions to generate revenue.
As far as I understand, the objective is completely different. Ads are shown on platforms owned by Meta, and the Conversions API runs on the merchant's website (server-side), and reports interactions such as purchases back to Facebook.
This is quite different from websites monetizing traffic through and trackers placed on their own webpages. Those can still be reliably blocked by preventing websites from loading third party content.
I also don't buy this argument about YouTube depending on downloaders:
> They perform a valuable role: If it were impossible to download YouTube videos, many organizations would abandon hosting their videos on YouTube for a platform that offered more user flexibility. Or they’d need to host a separate download link and put it in their YouTube descriptions. But organizations don’t need to jump through hoops -- they just let people use YouTube downloaders.
No, organizations simply use YouTube because it's free, extremely convenient, has been stable enough over the past couple of decades to depend on, and the organization does not have the resources to setup an alternative.
Also, I'm guessing such organizations represent a vanishingly small segment of YouTube's uploaders.
I don't think people appreciate how much YouTube has created a market. "Youtuber" is a valid (if often derided) job these days, where creators can earn a living wage and maintain whole media companies. Preserving that monetization portal is key to YouTube and its content creators.
> and the organization does not have the resources to setup an alternative.
Can confirm at least one tech news website argued this point and tore down their own video hosting servers in favor of using Youtube links/embeds. Old videos on tweakers.net are simply not accessible anymore, that content is gone now
This was well after HTML5 was widely supported. As a website owner myself, I don't understand what's so hard now that we can write 1 HTML tag and have an embedded video on the page. They made it sound like they need to employ an expensive developer to continuously work on improving this and fixing bugs whereas from my POV you're pretty much there with running ffmpeg at a few quality settings upon uploading (there are maybe 3 articles with a video per day, so any old server can handle this) and having a quality selector below the video. Can't imagine what about this would have changed in the past decade in a way that requires extra development work. At most you re-evaluate every 5 years which quality levels ffmpeg should generate and change an integer in a config file...
Alas, little as I understand it, this tiny amount of extra effort -- even when the development and setup work is already in the past(!) -- is apparently indeed a driving force in centralizing to Youtube for for-profits.
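For what it's worth, the per-upload transcode step I have in mind is only a few lines of shell. A rough sketch, assuming ffmpeg is installed (the resolution ladder, CRF, and filenames are just illustrative):

#!/bin/sh
# Sketch: generate a small quality ladder for one uploaded video.
# Usage: ./transcode.sh input.mp4
IN="$1"
for H in 480 720 1080; do
  ffmpeg -i "$IN" -vf "scale=-2:$H" \
    -c:v libx264 -crf 23 -preset medium \
    -c:a aac -b:a 128k \
    "${IN%.*}_${H}p.mp4"
done

Point a plain <video> tag (or a player like Video.js) at the resulting files, add a quality selector, and that's essentially the whole pipeline.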
> As a website owner myself, I don't understand what's so hard now that we can write 1 HTML tag and have an embedded video on the page.
You acknowledge that it's not that simple:
> running ffmpeg at a few quality settings upon uploading (there are maybe 3 articles with a video per day, so any old server can handle this)
Can any old server really handle that? And can it handle the resulting storage of not only the highest-quality copy but also all the other copies added on top? My $5 Linode ("any old server") does not have the storage space for that. You can switch your argument to "storage is cheap these days," but now you're telling people to upgrade their servers and not actually claiming it's a one-click process anymore.
I use Vimeo as a CDN and pay $240 per year for it ($20/month, 4x more than I spend on the Linode that hosts a dozen different websites). If Vimeo were to shut down tomorrow, I'd be pretty out of luck finding anyone offering pricing even close to that-- for example, ScaleEngine charges a minimum of $25 per month and doesn't even include storage and bandwidth in their account fee. Dailymotion Pro offers a similar service to Vimeo these days, but their $9/month plan wouldn't have enough storage for my catalog, and their next cheapest price is $84/month. If you actually go to build out your own solution with professional hosting, it's not gonna be a whole lot cheaper.
Obviously, large corporations can probably afford to do their own hosting-- and if push came to shove, many of them probably would, or would find one of those more expensive partner options. But again, you're no longer arguing "it's just an HTML tag." You're now arguing they should spend hundreds or thousands per year on something that may be incidental to their business.
Here's me hosting a bunch of different bitrates of a high quality video, which I encoded on a 2016 laptop. http://lelandbatey.com/projects/REDLINE-intro/
The server is $30/month hosted by OVH, which comes with 2TB of storage. The throughput on the dedicated server is 1gbps. Unlimited transfer is included (and I've gone through many dozens of TB of traffic in a month).
People paying for managed services have no concept of bandwidth costs, so they probably think what you just described is impossible.
Bandwidth these days can be less than .25/m at a 100g commit in US/EU, and OVH is pushing dozens of tb/s.
Big ups on keeping independent.
No lol nobody is reading the numbers. Vimeo is $20 / mo. Vimeo + $5 Linode server = $25 / mo, cheaper than the $30 / mo OVH server. The quoted ScaleEngine is $25 / mo, which ($25 + $5 = $30) is the same as the OVH server.
Y'all just have two different budgets. For one person $30 / mo is reasonable for the other it's expensive.
But the core claim, that $5 / mo hosts a lot of non-video content but not much video content, holds.
You misread the bandwidth cost part of my comment.
A $28/mo (Australian) vimeo subscription, or the "Advanced" $91/mo plan include the same 2TB bandwidth/month for viewers of your videos.
If you upload a 100MB video and it gets 20000 views the whole way through, that's 2TB of transfer -- the entire monthly allowance -- and you are now in the "contact sales" category of pricing.
This is why Youtube has a monopoly, because you've been badly tricked into thinking this pricing is fair and that 2TB is in any way shape or form adequate.
Tbh, the $5 claim was in response to me but I never said any VPS would have the storage capacity to host a catalogue. I said any server. Call it my self-hoster's bias but I really did picture a hardware server with a hard drive in it, not a virtual access tier with artificial limits
But yeah okay, not any server variant can do this, and the cloud premium is real. You'd need to spend like 5€/month on real hard drives if you want, say, 4TB of video storage on top of your existing website (the vimeo and dailymotion price points mentioned suggest that their catalogue is above 1 but below 2 TB). The 5€/month estimate is based on the currently most-viewed 4TB hard drive model in the tweakers pricewatch (some 100€), a modest average longevity of 5 years, and triple redundancy: three drives at ~100€ each, spread over 60 months, comes to roughly 5€/month. That assumes you would otherwise be running on smaller drives anyway for your normal website and images, so there's no (significant) additional labor or electricity cost for storage (you just buy a different variant of the same setup rather than installing additional ones).
~~Likely much less than .25/m if that’s mbps. The issue is you’d have no shortage of money at that scale - I run one of the two main Arch Linux package mirrors in my country and while it’s admittedly a quite niche and small distro in comparison, I’m nowhere close enough to saturate 1gbit on normal days, let alone my 10gbit link~~
It’s a trade off I suppose - you can very well host your own streaming solution, and for the same price you can get a great single node, but if you want good TTFB and nodes with close proximity to many regions you may as well pay for a managed solution as the price for multiple VPS/VM stacks up quickly when you have a low budget
Edit: I think I missed your point about bandwidth pricing lol, but the second still stands
Yeah, currently hosting LLHLS edge nodes in US + EU and caching CDN worldwide. The base cost grows if you have an audience of e.g. 2000 live viewers for a 2mbps stream = 4gbps.
Could be a lot cheaper and less need for global distribution if low latency weren't a requirement. And by low latency I mean the video stream you watch is ~2s behind reality just like Youtube, Twitch, Kick, etc. If your use case is VOD or can tolerate 10s latency streaming, single PoP works fine.
The point is that if I chose Vimeo or AWS/GCP/Azure for this, at their bandwidth pricing my (in my opinion modest) usage would be costing tens of thousands of dollars monthly, which it most certainly does not.
Managed service pricing and the perception of it needs to change, because it feels like a sham once you actually do anything yourself.
I'm on mobile, but what player did you use on your website?
Does it handle buffering?
Fwiw, the browser's built-in player does buffering. You don't need to custom-code that, you can just use <video>. The browser also exposes via Javascript when it estimates that the download speed and buffer size is sufficient such that you can start playback without interruption: https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaEl...
Not the person above but they're using Video.js 7.10.2 <http://videojs.com/>
Doesn't cloudflare and amazon have this now? Pretty sure CF is developing a closed source player- but theres plenty of FOSS ones (rip the one jellyfin uses out of it- at worst).
And, theres plenty of tutorials on using ffmpeg based tools to make the files. And yes, "oh no, I need to learn something new for my video workflow."
> And yes, "oh no, I need to learn something new for my video workflow."
That's rude and uncalled for. I didn't say I was unwilling to learn something new. I said the economics don't work out for the solution someone else proposed. And I also disputed the statement, "we can write 1 HTML tag and have an embedded video on the page." Now you've moved the goalpost to "learn something new" (which actually means "design and deploy an entire new system").
Just use CloudFlare R2 object storage with free bandwidth. It's specifically cleared for use as a video hoster.
Have you tried cloudflare r2?
It’s easy to set up a backend video hosting system. But unless you are running (and checking!) a strong client-side observability system, you’ll never see all the problems that people are having with it. And they won’t tell you either, they’ll just leave.
Reddit struggles to provide a video player that is up to YouTube’s par. Do you have more resources than Reddit? Better programmers?
Is there someone in the world for whom this demo https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/... does not play? Because that's what I use and am not aware of issues with it
> Reddit struggles to provide a video player that is up to YouTube’s par. Do you have more resources than Reddit? Better programmers?
It's hard to say whether MDN and I have/am better programmer(s) and resource(s) than reddit without having any examples or notions of what issues reddit has run into.
If you mean achieving feature-parity, like automatic subtitles and dubbing and control menus to select languages and format those subtitles etc., that's a whole different ball game of course. The site I was speaking of doesn't support that today either (they don't dub/sub), at best you get automatically generated Dutch subtitles from yt now, i.e. shit subtitles (worse than English autogen and we all know how well those deal with jargon and noise)
You're linking to a page with a 5 second 1MB video on it. Yes, it's easy to use the <video> element to serve a video file no larger than a picture. No, that does not mean you have a system that will allow thousands of users to watch an 11 min HD video during their subway ride that starts instantly and never pauses, stutters, or locks up.
I can't speak to Dutch websites but in the U.S., a news website will usually feel obligated to provide subtitles on their videos to avoid lawsuits under the ADA.
Oh that's interesting! The US is portrayed here as this free for all country (limited unemployment money, health services, PTO...) but then subtitles are mandatory? That's cool! I presume we don't have such a law since the news sites I frequent don't seem to offer that for most videos (not counting youtube's autogenerated attempt for the few sites that outsource video hosting to google)
As for that video being small and not receiving thousands of simultaneous views: sure, but buying sufficient bandwidth is not a "hire better programmers" problem. You don't need to beat Reddit's skills and resources to provide smoother video playback. Probably the opposite actually: smaller scale should be easier to handle than reddit scale, and they already had that all set up
It is actually pretty easy to provide video. It's hard to provide video to a lot of people.
Reddit and Youtube have just a massive number of people visiting and trying to watch video at all time. It requires an enormous amount of bandwidth to serve up that video.
Youtube goes through heroic efforts to make videos instantly available and to apply high quality compression on videos that become popular.
If you don't have a huge viewership or dynamic content then yeah, it's actually pretty easy to set up and run video sites (infowars has managed it). Target h264 and aac audio with a set number of resolutions and bitrates and voilà, you've got something that's pretty competitive on the cheap that can play on pretty much any device.
It's not optimal for bandwidth, for that you need to start sniffing client capabilities. However, it'll get the job done while being pretty much universally playable.
Thing with Infowars is, they got a lot of rich people and probably Russia paying the bills. Video hosting still is damn expensive if you are not one of the top dogs.
> apply high quality compression on videos that become popular
Do they put a different amount of compression effort in if the video isn't (expected to become) popular?
I don't know what the Youtube compression queue looks like.
I'd not be shocked if they do more aggressive compression for well known creators.
For nobodies (like myself) the compression is very basic. Whatever I send ends up compressed with VP9 which, I believe, youtube has a bunch of hardware that can do that really fast.
Yeah organizations don't use YouTube for file access, that's just not a good way to operate a video department in a business. Also the quality is terrible and adding another set of reencodes will make it even worse.
I've seen public schools and public courts use YouTube to host their videos. I don't quite follow why they would care so much about either using downloaders themselves or their users having access to downloaders that they'd switch providers. To me it is unlikely but plausible.
But even if they did - I don't see why Google would care about these organizations. I expect anyone doing this is not expecting to get any views from the algorithm, is not bringing in many views or monetizing their videos.
To steelman it though, maybe Google cares if their monopoly benefits from nobody even knowing other video sites exist.
I never understood why do they not limit downloading data to the speed at which you could be possibly watching it. Yesterday I downloaded a 15hour show in like 20 minutes. There is no way I could have downloaded that much data in a legit way through their website/player
I'm glad I wasn't blocked or throttled, but it seems like it'd be trivial to block someone like me
Am I missing something? It does sort of feel like they're allowing it
EDIT: Spooky skeletons.. Youtube suddenly as of today forces a "Sign in to confirm you’re not a bot" on both the website and yt-dl .. So maybe I've been fingerprinted and blacklisted somehow
You could have been a legit viewer... clicking to skip over segments of the video, presumably trying to find where you left off last time, or for some scene you remember, or the climax of the video... whatever.
Youtube does try to throttle the data speeds, when that first happened, youtube-dl stopped being useful and everyone upgraded their python versions and started using yt-dlp instead.
If you click to skip over, even clicking every minute, you're still not grabbing the whole thing, right? Whereas downloading is grabbing every second.
You are actually. Watch one second, the player buffers the next minute of video, then you skip ahead 1 minute. Process repeats.
Depending on the player and how they cache it. Yes, Google could monitor every byte each client has downloaded, but that just seems like ultra micromanaging, and who knows how many players it would break. YouTube seems like one of those sites that should allow people to download, or be made a public utility on IPFS or something like that.
They still want the YouTube experience to be smooth -- to allow users to skip small parts of videos without waiting for them to load every time, to be able to watch multiple videos at the same time, to be able to leave a video paused until it loads, etc. -- which limiting downloading data would hinder. I assume blocking downloads is just not worth destroying the user experience.
Also, they allow downloads for premium subs; maybe it's more efficient to not check that status every time.
There is an official download option inside the app. If they limit the download speed to the watching time, it won't be useful.
The official download option doesn't download it to your filesystem as a file. It just lets you watch the video offline in the official app/website. Just tested it now.
Meaning the video file exists in your file system somewhere, so downloading at a higher speed than possibly viewing the video is an existing functionality in the app.
If you're on iOS no way to access it, and I think on Android it's in protected storage as well.
This option is only available to premium users afaik
I think they are, yt-dlp just circumvents it
As some people already said, skipping sections. Also, you can increase video speed; I normally watch youtube at 2x speed, but I think you can go up to 5x.
Written by somebody who hasn't taken one look at the yt-dlp source code or issues. Google regularly pushes updates that "coincidentally" break downloaders. The obfuscation and the things they do -- e.g., breaking a download by introducing code or a dynamic calculation that's required only partway through the video -- are not normal. They are not serving a bunch of video files or chunks; you need a "client" that handles all these things to download a video. At this point, if you assert that Google doesn't want to secretly stop it, you are either extremely naive, ignorant, or a Google employee.
I think GP is agreeing with you
I have YT Premium and if Google bans yt-dlp, I will cancel my subscription. I pay them not to do that.
you show them who’s boss, premium user
Seems quite naive to think they'd be affected in any way by the tiny intersection of users that are both yt-dlp users and premium subscribers boycotting them...
I think it is not about making a change, it is putting money where your mouth is.
To buy premium to support creators.
Once yt becomes hostile the deal between me and yt is off.
If every user of yt-dlp did as I do, then it would have exactly the effect that it needs to have. If yt-dlp is used by a small minority of users, why would Google be antagonistic to it? And if it's used by sizeable portion of users, then they would care.
> If yt-dlp is used by a small minority of users, why would Google be antagonistic to it?
The concern is likely that if they let it become too easy the small minority becomes a large majority and the ad business becomes unsustainable.
Consider the ease and adoption of newsgroups in the early 90s vs Napster/Limewire later and the effect that had on revenues.
Primarily because they contractually promised the music industry they'd do everything they can to prevent tools that allow the downloading of copyrighted music from the service.
Why don’t creators both publish to YouTube but also publish somewhere else for archival or public access reasons, to help keep content available for outside walled gardens? Is it just not important to them? Is it hosting costs? Missing out on ad revenue?
Where else should they be publishing to? And who is going to pay for this service?
Don’t forget - most “content creators” are not technical - self hosting is not an option.
And even if it were - it costs money.
I just mean some kind of public service like one of those archive sites. So they would place it into YouTube for revenue but also these other places so there’s a way to get the videos without Google being a dictatorial overlord.
There's no incentive for them to do so. It reduces their ad revenue, while costing more money to host it. That said, if you are a creator and you do want to do it, Peertube is a good option because it uses torrent technology to reduce your hosting costs.
For a lot of creators, YouTube is the internet
Youtube pays them per (ad) view, and also recommends the video to more people based on how many people click on it. So giving people another way to watch it will decrease their revenue and audience.
LTT kinda do, but they're the exception, not the norm
The argument the article is making is that if they really wanted YouTube downloaders to stop working, they'd switch to Encrypted Media Extensions. Do you think that's not plausible?
Many smart devices that have youtube functionality (TVs, refrigerators, consoles, cable boxes, etc.) have limited or no ability to support that functionality in hardware, or even if they do, it might not be exposed.
Once those devices get phased out, it is very likely they will move to Encrypted Media Extensions or something similar. I believe I saw an issue ticket on yt-dlp's repo indicating they are already experimenting with such, as certain formats are DRM protected. Look up all the stuff going on with SABR, which if I remember right is either related to DRM or what they may use to support DRM.
There has to be at least some benefit Google thinks it gets from YouTube downloaders, because, for instance, there have been various lawsuits by the RIAA and co. going after companies that provide YouTube-downloading websites, and Google has studiously avoided endorsing their legal arguments.
for example I think feature length films that YouTube sells (or rents) already use this encryption.
That’s why authors should pony up and pay for the encryption feature and rest should be free to download. YouTube could embed ads this way too.
That's a wildly imaginative fever dream you're having. There is no timeline in which content creators would pay YouTube to encrypt their video content.
Here's a thought: what if paying a fixed amount to encrypt your video would grant you a much higher commission for the ads shown?
Anything that's had an official YouTube app for the past nine years does, because it's been a hard requirement for a YouTube port that long.
It's much more likely YouTube just doesn't want to mess with its caching apparatus unless it really has to. And I think they will eventually, but the numbers just don't quite add up yet.
Using DRM would make it illegal for YouTubers to use Creative-Commons-licensed content in their videos, such as Kevin MacLeod's music or many images from Wikipedia.
When you upload a video to YouTube, you agree that you own the copyright or are otherwise able to grant YouTube a license to do whatever they want with it [0]:
> If you choose to upload Content, you must not submit to the Service any Content that does not comply with this Agreement (including the YouTube Community Guidelines) or the law. For example, the Content you submit must not include third-party intellectual property (such as copyrighted material) unless you have permission from that party or are otherwise legally entitled to do so. [...]
> By providing Content to the Service, you grant to YouTube a worldwide, non-exclusive, royalty-free, sublicensable and transferable license to use that Content (including to reproduce, distribute, prepare derivative works, display and perform it) in connection with the Service and YouTube's (and its successors' and Affiliates') business, including for the purpose of promoting and redistributing part or all of the Service.
If you include others' work with anything stronger than CC0, that's not a license you can grant. So you'll always be in trouble in principle, regardless of whether or how YouTube decides to exercise that license. In practice, I wouldn't be surprised if the copyright owner could get away with a takedown if they wanted to.
Yes, this absolutely does not shield YouTube from liability from third parties, since the copyright holder of third-party content included in the video is not a party to the agreement. That's why they have a copyright notice and takedown procedure in the first place, and also the reason for numerous lawsuits against YouTube in the past, some of which they have lost.
To date, many Creative Commons licenses do in fact amount to "permission from that party", but if they start using DRM, those licenses would cease to grant YouTube permission.
You may not be very familiar with Creative Commons licensing. For example, CC BY-SA 4.0 would prohibit YouTube from using DRM:
> No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
(https://creativecommons.org/licenses/by-sa/4.0/legalcode.en)
Most of the CC licenses include such language and have since the first versions.
> if they really wanted YouTube downloaders to stop working
Wrong question leads to the wrong answer.
The right one is "how much of the ad revenue would be lost if". For now it's cheaper to spend bazillions on a whack-a-mole.
I miss the system where, when I was watching a flash video in Firefox, that video was already present on my hard drive as an .flv file in /tmp, and I could just copy it somewhere.
I remember that was the case indeed.
To be fair, the article doesn't say Google "secretly wants" downloaders to work. It says they need downloaders to work, despite wanting to make them as annoying as possible to use. The argument isn't so much about Google's feelings as it is about whether the entire internet would continue making YouTube the video hosting site to use if downloaders were actually (effectively) blocked.
I don’t think companies are asking “can people download this video” but rather “can people watch this video” - downloaders seems like an afterthought or non issue.
You conveniently side-stepped the argument that YouTube already knows how to serve DRM-ized videos, and it's widely deployed in its Movies & TV offering, available on the web and other clients. They chose not to escalate to all videos, probably for multiple reasons. It's credible that one reason could be that it wants the downloaders to keep working; they wouldn't want those to suddenly gain the ability to download DRM-ized videos (software that does this exists, but it's not as well maintained and circulated).
It seems more credible to me that they would cut off a sizable portion of their viewers by forcing widevine DRM.
Or is it something different you are thinking about?
What benefits does DRM even provide for public, ad-supported content that you don't need to log for in order to watch it?
Does DRM cryptography offer solutions against ad blocking, or downloading videos you have legitimate access to?
Sorry that I'm too lazy to research this, but I'd appreciate if you elaborate more on this.
And also, I think they're playing the long game and will be fine to put up a login wall and aggressively block scraping and also force ID. Like Instagram.
Would be glad if I'm wrong, but I don't think so. They just haven't reached a sufficient level of monopolization for this and at the same time, the number of people watching YouTube without at least being logged in is probably already dwindling.
So they're not waiting anymore to be profitable, they already are, through ads and data collection.
But they have plenty of headroom left to truly start boiling the frog, and become a closed platform.
Why don't they just deploy a per-platform .exe equivalent per video?
While I do agree (mostly, I've never had a download NOT work, on the rare occasion I grab one), they haven't made it impossible to download videos, so that is a win IMO.
Your view from a distance, where you rarely download Youtube videos, is common for now, and we still live in a very fortunate time. The problems are short-lived, so over long periods they tend to average out, and you are unlikely to notice them. Even active users will rarely notice a problem, so it is understandable that for your use case it would seem perfect.
Looking closely, at least for yt-dlp, you would see it tries multiple methods to grab available formats, tabulates the working ones, and picks from them. Those methods are constantly being peeled away, though some are occasionally added or fixed. The net trend is clear: the ability to download is eroding. There have been moments when you might seriously consider that downloading, at least without a complicated setup (PO-Tokens, widevine keys, or something else), is just going to stop working.
As time goes on, even for those rare times you want to grab a video, direct downloading may no longer work. You might have to resort to other methods, like screen recording through software or an actual camera, for as long as your devices will let you do even that.
Right!
I very rarely download YouTube videos but simply having done it a few times over the years, and even watching the text fly by in the terminal with yt-dlp, everything you’ve said is obvious.
Screen recording indeed might fail—Apple lets devs block it, so even screen recording the iPhone Mirroring app can result in an all-black recording.
How long until YouTube only plays on authorized devices with screens optimized for anti-camera recording? Silver lining, could birth a new creative industry of storytelling, like courtroom sketch artists with more Mr. Beast.
> (mostly, I've never had a download NOT work, on the rare occasion I grab one)
A lot of the reason for that is because yt-dlp explicitly makes it easy for you to update it, so I would imagine that many frontends will do so automatically -- something which is becoming more necessary as time goes on, as YouTube and yt-dlp play cat and mouse with each other.
Unfortunately, lately, yt-dlp has had to disable by default the downloading of certain formats that it was formerly able to access by pretending to be the YouTube iOS client, because they were erroring too often. There are alternatives, of course, but those ones were pretty good.
A lot of what you see in yt-dlp is because of the immense amount of work that the developers put in in order to keep it working. Despite that it now allows for downloading from many more sites than it originally was developed for, they're still not going to give up YouTube support (as long as it still allows DRM-free versions) without a fight.
Once YouTube moves to completely DRM'd videos, however, that may have to be when yt-dlp retires support for YouTube, because yt-dlp very deliberately does not bypass DRM. I'd imagine the name would change at that point.
> mostly, I've never had a download NOT work
Well, how about thanking the people who maintain the downloader to make it possible?
> they haven't made it impossible to download videos, so that is a win IMO.
At some point you can just fire up OBS Studio and do a screen rip, then cut the ads out manually and put it on Torrent/ED2k.
Will you still think it's a win then?
> there is already a secret header used for authenticating that you are using the Google version of Chrome browser
Google needs to be broken up already.
Especially given that YT frequently blocks yt-dlp and bans users who workaround by using the --cookie flag
Google is not a side here. If you don't want people to download your video, do not put it on the internet.
This is starting to look like some quality llm benchmark!
And ever updating with that!
But I think the article's point isn't that Google wants downloaders to work; it's that they tolerate them with just enough friction to keep power users from revolting, without officially endorsing anything
I think this “power user” ship has sailed already. See Google locking down Android, for example. They’re an established monopoly at this point and if the last Chrome antitrust case shows us something is that they won’t be penalized for any of their anti-consumer actions.
"If you dive into the yt-dlp source code, you see the insane complexity of calculations needed to download a video. "
Indeed the complexity is insane
https://news.ycombinator.com/item?id=45256043
But what is meant by "a video". Is this referring to the common case or an edge/corner case. Does "a" mean one particular video or all videos
"There is code to handle nsig checks, internal YouTube API quirks, and constant obfuscation that makes it a nightmare(and the maintainers heroes) to keep up."
True, but is this code required for all YouTube videos
The majority of YT videos are non-commercial, unpromoted with low view counts. These are simple to download
For example, the current yt-dlp project contains approximately 218 YT IDs. A 2024 version contained approximately 201 YT IDs. These are often for testing edge cases
The example 1,525-character shell script below outputs download URLs for almost all the YT IDs found in yt-dlp. No Python needed
By comparison the yt-dlp project is 15,679,182 characters, approximately
The curl binary is used in the example only because it's popular, not because I use it. I use simpler, more flexible software than curl
I have been using a tiny shell script to download YT videos for over 15 years. I have been downloading videos from googlevideo.com for even longer, before Google acquired YouTube.^1 Surprisingly (or not), when YT changes something that requires updating the script (and this has only happened to me about 5 times or less in 15 years) I have generally been able to fix the shell script faster than yt-dl(p) fixes its Python program (same for NewPipe/NewPipeSB)
I prefer non-commercial videos that are not promoted. The ones with relatively low view counts. For more popular videos, I listen to the audio file first before downloading the video file. After listening to the audio, I may decide to skip the video. Also I am not overly concerned about throttling
1. The original Google Video made a distinction between commercial and non-commercial (free) videos. The latter were always easy to download, and no sign-in/log-in was required. This might be a more plausible theory for why YT has always allowed downloads of non-commercial videos
# custom C filters to make scripts faster, easier to write
# yy030 filters URLs from stdin
# yy082 filters various strings from stdin,
# e.g., f == print format descriptions, v == print YT IDs
# x is a YouTube ID
# script accepts YT ID on stdin
#!/bin/sh
read x;
y=https://www.youtube.com/youtubei/v1/player?prettyPrint=false
curl -K/dev/stdin $y <<eof|yy030|if test $# -gt 0;then egrep itag=$1;else yy082 f|uniq;fi;
silent
#verbose
ipv4
http1.0
tlsv1.3
tcp-nodelay
resolve www.youtube.com:443:142.251.215.238
user-agent "com.google.ios.youtube/19.45.4 (iPhone16,2; U; CPU iOS 18_1_0 like Mac OS X;)"
header "content-type: application/json"
header "X-Youtube-Client-Name: 5"
header "X-Youtube-Client-Version: 19.45.4"
header "X-Goog-Visitor-Id: CgtpN1NtNlFnajBsRSjy1bjGBjIKCgJVUxIEGgAgIw=="
cookie "PREF=hl=en&tz=UTC; SOCS=CAI; GPS=1; YSC=4sueFctSML0; __Secure-ROLLOUT_TOKEN=CJO64Zqggdaw7gEQiZW-9r3mjwMYiZW-9r3mjwM%=; VISITOR_INFO1_LIVE=i7Sm6Qgj0lE; VISITOR_PRIVACY_METADATA=CgJVUxIEGgAgIw=="
data "{\"context\": {\"client\": {\"clientName\": \"IOS\", \"clientVersion\": \"19.45.4\", \"deviceMake\": \"Apple\", \"deviceModel\": \"iPhone16,2\", \"userAgent\": \"com.google.ios.youtube/19.45.4 (iPhone16,2; U; CPU iOS 18_1_0 like Mac OS X;)\", \"osName\": \"iPhone\", \"osVersion\": \"18.1.0.22B83\", \"hl\": \"en\", \"timeZone\": \"UTC\", \"utcOffsetMinutes\": 0}}, \"videoId\": \"$x\", \"playbackContext\": {\"contentPlaybackContext\": {\"html5Preference\": \"HTML5_PREF_WANTS\", \"signatureTimestamp\": 20347}}, \"contentCheckOk\": true, \"racyCheckOk\": true}"
eof
Or they want to show the advertisers they do EVERYTHING... while in reality they don't. Not that their ad system isn't totally braindead.
One of the things that drives me crazy about YouTube is that if a video gets taken down, it shows up as a "This video is no longer available" with no further metadata. I am far, far more uptight about not knowing which video was removed than I am about the fact that it is no longer available.
I have put serious thought into creating a tool that would automatically yt-dlp every video I open to a giant hard drive and append a simple index with the title, channel, thumbnail and date.
In general, I think people are way too casual about media of all kinds silently disappearing when you're not looking.
I had a Bash script that parsed my browser history, and for every YouTube video it would run yt-dlp with "--write-info-json --write-subtitles --download-archive=already-downloaded.db" flags. Creating it was the easy part, but keeping it running has presented some challenges. For example, Google started rate limiting my IP quickly, so I had to offload this process to a NAS, where it could keep running for hours overnight, persistently downloading stuff at near dialup speeds. Then I was running out of storage quickly, so I had to add video filtering, and I planned to add basic garbage collection. And of course I had to have youtube-dl (and later yt-dlp) updated at all times.
In the end, I decided it is not worth it. In the scenario you described, I would take the video link/ID and paste it into Bing and Yandex. There is a large chance they still have that page cached in their index.
FWIW if you are going to create your own tool, my advice will be to make it a browser extension, and try to pull the video straight from YouTube's <video> element.
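If anyone wants to try the same thing, the core idea fits in a few lines. A rough sketch (not my actual script) that assumes a single Firefox profile, sqlite3, and yt-dlp on the PATH -- the rate limiting and storage pruning I mentioned are left out:

#!/bin/sh
# Sketch: copy the Firefox history DB (it's locked while the browser runs),
# pull out YouTube watch URLs, and feed them to yt-dlp with an archive file
# so already-downloaded videos are skipped.
cp "$HOME"/.mozilla/firefox/*.default*/places.sqlite /tmp/places-copy.sqlite

sqlite3 /tmp/places-copy.sqlite \
  "SELECT url FROM moz_places WHERE url LIKE '%youtube.com/watch%';" |
while read -r url; do
  yt-dlp --write-info-json --write-subs \
    --download-archive already-downloaded.db \
    -o "archive/%(upload_date)s - %(title)s [%(id)s].%(ext)s" \
    "$url"
done

The index the parent comment wants (title, channel, thumbnail, date) mostly falls out of the .info.json files that --write-info-json leaves next to each video.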
> This video is no longer available.
This is why I recommend everybody to stay AWAY from Youtube Music. I migrated my curated playlists from Spotify a few years ago, and to my surprise now I have dozens of songs that are no longer available and Youtube doesn’t offer a way to at least let me know which song it was. Indeed, I was a paying user and Youtube caused intentional and irretrievable data loss.
After a decade of paying for Youtube Premium I have unsubscribed and have vowed never to give them any more money whatsoever.
> In general, I think people are way too casual about media of all kinds silently disappearing when you're not looking.
I used to be obsessed with this.
The way I saw it was the universe took billions of years of concerted effort to generate a random number that represents a unique file such as an interesting video or image. It would be such a shame if all that effort was invalidated due to bullshit YouTube reasons or copyright nonsense or link rot or whatever.
So I started hoarding this data. I started buying hardware and designing a home data center with ZFS and hundreds of terabytes to hold it all. I started downloading things I never actually gave a shit about just because they were rare and I wanted to preserve them.
I think getting married cured me of this. Now it's all moments that will be lost to time, like tears in the rain.
I still have a bit of this, but I try to be realistic that hoarding too much shit is a waste. I simply try to filter whether that is something really worth keeping. Then I save it if it’s something I’m very interested in or if it has any sentimental value to me personally.
Agreed, I can't describe the sadness that some of my most treasured nostalgic videos were lost before I knew about yt-dlp, and I can't even find out what their titles were. By contrast, on Spotify, when music gets removed, if it's in your playlist it just shows it greyed out and unplayable.
Someone at google please give us the ability to see titles!
Not by default. The normal spotify setting is to simply silently remove them. You have to turn on the “show unplayable tracks” preference to know that your playlists have been altered without your consent.
Off the top of my head, did you try to access the URLs via archive.org? That way, at least you'll get the titles.
I want a web cache that runs on my network transparently caching everything that goes through it. It would be a LRU cache but with the ability to easily mark some resource as archived such that it never gets deleted. A browser extension could be used to do this marking. Unfortunately client-side js makes this very difficult or even impossible to do.
We really dropped the ball when it came to running random js from websites. The number of people who truly run only free software these days is close to zero. I used to block all js about 10 years ago but it was a losing battle and ended up being essentially an opt out from society.
You might find https://findyoutubevideo.thetechrobo.ca/ to be helpful! It was posted before on HN here: https://news.ycombinator.com/item?id=38228481
The lack of even basic metadata when a video disappears is maddening... like a black hole where context used to be
I've always wondered why we don't see any platforms just remove the media while leaving the metadata, comments, ratings, etc intact. Like, is there some legal requirement that the idea itself has to be hard to find, or is it okay to just remove the media and let people keep discussing it?
It implies the service is lacking something, that it's deficient in specific tangible things you want but they don't have.
A generic 404 for something you don't even know exists won't leave a `video_title` sized hole in your heart and chip on your shoulder, and won't give competitors opportunities to serve your needs instead.
Legal requirement - probably not. Econophysical constraint - betcha. They mostly don't care about the discussion, or the content, or the idea, they care about keeping your eyeballs within a given rectangle until a bell rings.
Name itself might be what is legally required to be taken down.
You can't even tell what channel removed videos were from. Very frustrating.
https://archivebox.io/ could be a solution for that
> Google needs YouTube downloaders. They perform a valuable role: If it were impossible to download YouTube videos, many organizations would abandon hosting their videos on YouTube for a platform that offered more user flexibility. Or they’d need to host a separate download link and put it in their YouTube descriptions. But organizations don’t need to jump through hoops -- they just let people use YouTube downloaders.
I don't think I believe this, as much as I'd like to. How many organizations would really consider this a critical need? My guess is, not enough for Google to care.
Also, if you upload a video to YouTube you can download it from YouTube Studio at any time, so that doesn't add up at all.
YouTube just doesn't make this available via API, but you've always been able to manually download your uploaded videos from YouTube Studio.
That sounds brutal if you have 5 years of daily uploads or something like that. At some point, if you want your entire catalog, that becomes a very sucky process.
Just use Google Takeout. It will create a series of archive files for you to download.
Surely you would have the originals locally and not rely on youtube to archive your uploads?
People who do this are a minority. The vast majority have the masters only available on YouTube.
I hope your “surely” was in jest.
If you talk about the vast majority of corporate videos (and written material), they mostly don't care about access after 18 months or so. And in fact may well actively want to scrub them because the info is no longer accurate.
But I think the point is more about passive tolerance than active support