Read our disclosure page to find out how you can help Windows Report sustain the editorial team.

Google has resumed work on JPEG XL support in Chromium after removing it three years ago. A developer says the current version is feature complete and under review.
Three years ago, Google removed JPEG XL support from Chrome, stating there wasn’t enough interest at the time. That position has now changed.
In a recent note to developers, a Chrome team representative confirmed that work has restarted to bring JPEG XL to Chromium and said Google “would ship it in Chrome” once long-term maintenance and the usual launch requirements are met.
The team explained that other platforms have moved ahead. Safari supports JPEG XL, and Windows 11 users can add native support through an image extension from the Microsoft Store. The format is also confirmed for use in PDF documents, and there has been continuous demand from developers and users asking for its return.
Before Google ships the feature in Chrome, the company wants the integration to be secure and supported over time.
A developer has submitted new code that reintroduces JPEG XL to Chromium. This version is marked as feature complete. The developer said it also “includes animation support,” which earlier implementations did not offer. The code passes most of Chrome’s automated testing, but it remains under review and is not available to users.
The featured image is taken from an unlisted developer demo created for testing purposes.
JPEG XL is a newer image format intended as a replacement for traditional JPEG files. It can reduce file size without loss in visual quality. This may help web pages load faster and reduce data usage. More details are available on the official JPEG XL website.
Google has not provided a timeline for JPEG XL support in Chrome. Users cannot enable the format today, but development has restarted after years without progress.
While I've been a big supporter of JPEG-XL on HN, I just want to note that AV2 is coming out soon, which should further improve image compression. (Edit: also worth pointing out that the current JPEG-XL encoder is nowhere near its maximum potential in terms of quality / compression ratio.)
But JPEG-XL is quite widely used now, from PDF and medical imaging to lossless camera capture, and it is being evaluated at different stages of cinema and artist production workflows. Hopefully the Rust decoder will be ready soon.
And the wording seems to imply that Google Chrome will officially support anything from AOM.
AVIF/AV1 encodes both lossy and lossless files very slowly; JXL is significantly faster than AVIF. But AVIF provides better image quality than JXL even at lower settings. AV2, however, will require much more power and system resources for a small bandwidth gain.
> But AVIF provides better image quality than JXL even at lower settings.
I don't think that's strictly true.
The conventional reporting has been that JXL works better at regular web sizes, but AVIF starts to edge out at very low quality settings.
However, the quality per size between the two is so close that there are comparisons showing JXL winning even where AVIF is supposed to outperform it. (e.g. https://tonisagrista.com/blog/2023/jpegxl-vs-avif/)
Even at the point where AVIF should shine, when low bandwidth is important, JXL supports progressive decoding (AVIF is still trying to add this), so the user will see images sooner with JXL than with AVIF.
---
There is one part where AVIF does beat JXL hands down, and that's animation (which makes sense considering AVIF comes from the modern AV1 video codec). However, any time you would want an animation in a file, you're better off just using a video codec anyway.
To be fair, the image sizes in those comparisons aren't small enough. Had they been 30-50% of the tested sizes, AVIF should have the advantage.
But then the question is whether we should even be presenting images at this level of quality, or whether it is already enough. I guess that is a different set of questions.
JPEG-XL is both a lossy and a lossless codec. It is already used in the camera DNG format, making RAW images smaller.
While lossy codecs are hard to compare and results are up for debate, JPEG-XL is actually the better lossless codec in terms of compression ratio and compression complexity. There is only one other codec that beats it, and it is not open source.
HALIC is by far the best lossless codec in terms of speed/compression ratio. If a lossy mode were similarly available, we might not be discussing all these issues. I think its developer stopped working on HALIC a long time ago due to lack of interest.
Its developer is also developing HALAC (High Availability Lossless Audio Compression). He recently released the source code for the first version of HALAC. And I don't think anyone cared.
HALIC (High Availability Lossless Image Compression)
It has both lossy and lossless modes.
Good to hear.
I sure hope they came up with a good, clear system to distinguish them.
As in, a clear way to detect whether a given file is lossy or lossless?
I was thinking that too, but on the other hand, even a lossless file can't guarantee that its contents aren't the result of going through a lossy intermediate format, such as a screenshot created from a JPEG.
There is some sort of tag; jxlinfo can tell you whether a file is "lossy" or "(possibly) lossless".
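As a rough sketch, that check could be scripted around the `jxlinfo` tool from libjxl. Note the exact output wording is an assumption based on the comment above, not a documented interface:

```python
import shutil
import subprocess

def jxl_is_lossless(path: str):
    """Ask jxlinfo (from libjxl) whether a file reports as lossless.

    Returns True/False depending on whether "lossless" appears in the
    tool's output, or None when jxlinfo is not installed. Matching on
    the word "lossless" is an assumption about jxlinfo's output format.
    """
    if shutil.which("jxlinfo") is None:
        return None  # tool not available on this system
    result = subprocess.run(["jxlinfo", path], capture_output=True, text=True)
    return "lossless" in result.stdout.lower()

print(jxl_is_lossless("example.jxl"))
```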
Presumably you can look at the file and tell which mode is used, though why would you care to know from the filename?
I find it incredibly helpful to know that .jpg is lossy and .png is lossless.
There are so many reasons why it's almost hard to know where to begin. But it's basically the same reason why it's helpful for some documents to end in .docx and others to end in .xlsx. It tells you what kind of data is inside.
And at least for me, for standard 24-bit RGB images, the distinction between lossy and lossless is much more important than between TIFF and PNG, or between JPG and HEIC. Knowing whether an image is degraded or not is the #1 important fact about an image for me, before anything else. It says so much about what the file is for and not for -- how I should or shouldn't edit it, what kind of format and compression level is suitable for saving after editing, etc.
After that comes whether it's animated or not, which is why .apng is so helpful to distinguish it from .png.
There's a good reason Microsoft Office documents aren't all just something like .msox, with an internal tag indicating whether they're a text document or a spreadsheet or a presentation. File extensions carry semantic meaning around the type of data they contain, and it's good practice to choose extensions that communicate the most important conceptual distinctions.
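To make the point concrete, here is a toy sketch of extension-based classification. The extension lists reflect typical usage and are illustrative, not exhaustive (formats like AVIF and TIFF can in fact go either way):

```python
# Typical (not exhaustive) extension-to-compression-kind mapping.
LOSSY = {".jpg", ".jpeg", ".heic"}
LOSSLESS = {".png", ".apng", ".bmp"}

def compression_kind(filename: str) -> str:
    """Classify a file as lossy or lossless from its extension alone."""
    ext = "." + filename.rsplit(".", 1)[-1].lower()
    if ext in LOSSY:
        return "lossy"
    if ext in LOSSLESS:
        return "lossless"
    return "unknown"  # e.g. .jxl, where the same extension covers both modes

print(compression_kind("scan.PNG"))   # lossless
print(compression_kind("photo.jpg"))  # lossy
print(compression_kind("art.jxl"))    # unknown
```

The "unknown" branch is exactly the situation being debated: with a single .jxl extension, the filename alone can no longer answer the question.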
> Knowing whether an image is degraded or not is the #1 important fact about an image for me
But how can you know that from the fact that it's currently losslessly encoded? People take screenshots of JPEGs all the time.
> After that comes whether it's animated or not, which is why .apng is so helpful to distinguish it from .png.
That is a useful distinction in my view, and there's some precedent for solutions, such as how Office files containing macros have an "m" added to their file extension.
Obviously nothing prevents people from taking PNG screenshots of JPEGs. You can make a PNG out of an out-of-focus camera image too. But at least I know the format itself isn't adding any additional degradation over whatever the source was.
And in my case I'm usually dealing with a known workflow. I know where the files originally come from, whether .raw or .ai or whatever. It's very useful to know that every .jpg file is meant for final distribution, whereas every .png file is part of an intermediate workflow where I know quality won't be lost. When they all have the same extension, it's easy to get confused about which stage a certain file belongs to, and accidentally mix up assets.
>I find it incredibly helpful to know that .jpg is lossy and .png is lossless.
Unfortunately we have been through this discussion before, and the authors of JPEG-XL strongly disagree with this. I understand where they are coming from, but I agree with you that it would have been easier to have the two separated in naming and extensions.
But JPEG has a lossless mode as well. How do you distinguish between the two now?
This is an arbitrary distinction. For example, why then do mp3 and ogg (Vorbis) have different extensions? They're both lossy audio formats, so by that requirement the extension should be the same.
Otherwise, we should distinguish between bitrates with different extensions, e.g. mp3128, mp3192, etc.
In theory JPEG has a lossless mode (in the standard), but it's not supported by most applications (not even libjpeg) so it might as well not exist. I've certainly never come across a lossless JPEG file in the wild.
Filenames also of course try to indicate technical compatibility as to what applications can open them, which is why .mp3 and .ogg are different -- although these days, extensions like .mkv and .mp4 tell you nothing about what's in them, or whether your video player can play a specific file.
At the end of the day it's just trying to achieve a good balance. Obviously including the specific bitrate in a file extension goes too far.
Legacy. It’s how things used to be done. Just like Unix permissions, the shared filesystem, drive letters in the filesystem root, prefixing URLs with the protocol, including security designators in the protocol name…
Be careful about ascribing reason to established common practices; it can lead to tunnel vision. Computing is filled with standards which are nothing more than “whatever the first guy came up with”.
https://en.wikipedia.org/wiki/Appeal_to_tradition
Just because metadata is useful doesn’t mean it needs to live in the filename.
If the alternative were putting the information in some hypothetical file attribute with a similar or greater level of support/availability (for example, for filtering across various search engines and file managers), then I'd agree there's no reason to keep it in the file extension in particular. But I feel the alternative here is just not having it available that way at all, only an internal tag particular to the JXL format.
Well yeah, you can turn any lossless format lossy by introducing an intermediate step that discards some amount of information. You can't practically turn a lossy format into a lossless format by introducing a lossless intermediate step.
Although, if you're purely speaking perceptually, magic like RAISR comes pretty close.
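The asymmetry can be sketched in a few lines of Python, using zlib as a stand-in for any lossless coder and bit truncation as a stand-in lossy step:

```python
import zlib

original = bytes(range(256)) * 4

# Lossless step: compression round-trips to the exact original bytes.
restored = zlib.decompress(zlib.compress(original))
assert restored == original

# Lossy step: quantization discards the low bits of every byte.
lossy = bytes(b & 0xF0 for b in original)

# Wrapping the lossy result in a lossless container changes nothing:
# the round trip faithfully preserves the degraded data, but the
# discarded bits are gone for good.
roundtripped = zlib.decompress(zlib.compress(lossy))
assert roundtripped == lossy
assert roundtripped != original
```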
pngquant does the lossy conversion, not the PNG format.
Surely something close to perceptually lossless is sufficient for most use cases?
Think of all the use cases where the output is going to be ingested by another machine. You don't know that "perceptually lossless" as designed for normal human eyeballs on normal screens in normal lighting environments is going to contain all the information an ML system will use. You want to preserve data as long as possible, until you make an active choice to throw it away. Even the system designer may not know whether it's appropriate to throw that information away, for example if they're designing digital archival systems and having to consider future users who aren't available to provide requirements.
> AV2 ... further improve the image compression. (Edit: Also worth pointing out current JPEG-XL encoder is nowhere near its maximum potential in terms of quality / compression ratio)
But at what cost? Judging from the links below, encoding/decoding time is much higher for those advanced video codecs, so wouldn't they be unsuitable for various lower-powered devices?
Also, can we expect "near max potential" with AV2 in the near future, or is it an ever-receding goal that shouldn't stop us from adding "non-max" codecs?
https://res.cloudinary.com/cloudinary-marketing/image/upload...
https://cloudinary.com/blog/time_for_next_gen_codecs_to_deth...
Fwiw, JPEG XL takes around 2.5x the time to decode as an equivalent AVIF, and has worse compression https://jakearchibald.com/2025/present-and-future-of-progres...
Interesting, looks like another opportunity for Chrome to avoid the Safari mistake
> slow. There's some suggestion that the Apple implementation is running on a single core, so maybe there's room for improvement.
Though their own old attempt was even worse
> of the old behind-a-flag Chromium JPEG XL decoder, and it's over 500% slower (6x) to decode than AVIF.
Here are the direct links:
blink-dev mailing list
https://groups.google.com/a/chromium.org/g/blink-dev/c/WjCKc...
Tracking Bug (reopened)
Yeah, note that Google only said they're now open to the possibility, as long as it is written in Rust (rightly so).
The patch at the end of that thread uses a C++ implementation so it is a dead end.
Rick specifically mentioned a commitment to long-term maintenance and meeting the usual standards for shipping. The implementation was abandoned in favor of a new one written in Rust, so it's not necessarily a dead end.
I meant the C++ patch is a dead end; not JPEG XL support in general. Seems like there's a Rust library that will have to be used instead.
My introduction to JPEG-XL was by 2kliksphillip on YouTube, he has a few really good analyses on this topic, including this video: https://youtu.be/FlWjf8asI4Y