Locked, proprietary formats are the norm.
When you set up a new camera, or even go to take a picture on some smartphones, you’re presented with a key choice: JPG or RAW?
JPGs are ready to post just about anywhere, while RAWs yield an unfinished file filled with extra data that allows for much richer post-processing. That option for a RAW file (and even the generic name, RAW) has been standardized across the camera industry — but despite that, the camera world has never actually settled on one standardized RAW format.
Most cameras capture RAW files in proprietary formats, like Canon’s CR3, Nikon’s NEF, and Sony’s ARW. The result is a world of compatibility issues. Photo editing software needs to specifically support not just each manufacturer’s file type but also make changes for each new camera that shoots it. That creates pain for app developers and early camera adopters who want to know that their preferred software will just work.
Adobe tried to solve this problem years ago with a universal RAW format, DNG (Digital Negative), which it open-sourced for anyone to use. A handful of camera manufacturers have since adopted DNG as their RAW format. But the largest names in the space still use their own proprietary files. And there’s no sign of that changing anytime soon.
Some smaller camera manufacturers have been using the DNG format for years, while others like Sigma have adopted it more recently. The whole point of DNG is to offer flexibility, ease of use, and ideally, a little more futureproofing — since the format is open to use and royalty-free.
DNG was created in 2004 by Thomas Knoll, one of the co-creators of Photoshop, and was based on the even older TIFF image specification. DNG is capable of holding additional camera metadata embedded within it. While other RAW formats are usually coupled with an XMP sidecar file for storing metadata, DNG is slightly streamlined since it’s just one file that allows nondestructive metadata edits within it.
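Because DNG is structurally a TIFF, the container is simple enough to poke at by hand. Here is a toy sketch in Python (not a real DNG reader; the sample file bytes are hand-built purely for illustration) that parses the byte-order header and first IFD that every DNG begins with:

```python
import struct

def parse_tiff_header(buf: bytes):
    """Parse the TIFF header and first IFD that every DNG file starts with."""
    byte_order = buf[0:2]
    endian = "<" if byte_order == b"II" else ">"   # II = little-endian, MM = big-endian
    magic, ifd_offset = struct.unpack(endian + "HI", buf[2:8])
    assert magic == 42, "not a TIFF/DNG file"
    # First IFD: a 2-byte entry count, then fixed 12-byte entries
    # of (tag, type, count, value-or-offset).
    (n_entries,) = struct.unpack(endian + "H", buf[ifd_offset:ifd_offset + 2])
    tags = []
    pos = ifd_offset + 2
    for _ in range(n_entries):
        tag, typ, count, value = struct.unpack(endian + "HHII", buf[pos:pos + 12])
        tags.append((tag, typ, count, value))
        pos += 12
    return tags

# A hand-built one-entry file: tag 50706 is DNGVersion in the DNG spec.
sample = (b"II" + struct.pack("<HI", 42, 8)
          + struct.pack("<H", 1)
          + struct.pack("<HHII", 50706, 1, 4, 0x00000401)  # DNGVersion 1.4.0.0
          + struct.pack("<I", 0))                          # no next IFD
print(parse_tiff_header(sample))  # [(50706, 1, 4, 1025)]
```

Real DNGs carry dozens of tags per IFD, but the walk is the same: read the entry count, then the fixed-size entries, then the offset of the next IFD.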
DNG is nearly old enough to legally drink (in the US)
Regardless of which camera brand you use, a RAW processing workflow is mostly the same: you take a picture, import it to a computer, open and process the file using editing software, and then export a “finished” file in a universal format (like JPG) for sharing or printing. Where things often get the messiest is with software compatibility.
You can’t use just any software or photo app to edit a RAW file — you generally need specialized apps that support your specific format. Widely used consumer apps like Apple’s Photos and Google Photos have some RAW support, but it’s frankly a bit janky. It’s best to use software like Adobe’s Creative Cloud, Capture One, Photo Mechanic, or Darktable.
Some camera manufacturers offer their own software. But ask most photographers, and they’ll likely steer you toward the third-party apps.
And there’s the big advantage for DNG. Because it’s an open standard, there’s wider third-party app support. That makes it a more turnkey solution for camera makers. It therefore makes sense that smaller manufacturers (Pentax, Ricoh, Leica, etc.) or ones with closer ties to Adobe (Apple) use it.
Larger camera companies know that app developers will rush to support their latest features, letting them stick with their own RAW formats. A proprietary RAW format offers tighter control over the image pipeline direct from a manufacturer’s camera, from the point of capture to the files you’re editing on your computer.
That’s the sentiment I got from multiple camera company reps when I asked why they stick with their proprietary formats. Here are some of their answers:
Sony: Michael Bubolo and Ryoko Noguchi of Sony’s product teams tell The Verge that Sony Alpha cameras use a proprietary ARW format “to maximize performance based on device characteristics such as the image sensor and image processing engine.”
Panasonic: “A proprietary format enables better optimization and supports unique camera functions,” said Masanori Koyama of Panasonic’s Lumix division.
Sigma: “Adding proprietary data allows the camera information to be given more accurately to the developing software. However, it is less versatile,” Sigma said in a statement provided by US PR rep Jack Howard. (Some of its cameras use DNG, while others use the proprietary X3F format.)
Canon: “Canon uses a proprietary RAW format because it allows our proprietary information to be added to RAW without being restricted by the standardization, and data can be handled freely, enabling optimum processing during image development,” said Drew MacCallum, senior manager of product planning at Canon USA.
Pentax: “The advantage of proprietary formats is that they can evolve on their own; the disadvantage is that they may not be supported by third-party applications,” said Shigeru Wakashiro, General Manager of Product Planning for Ricoh Imaging Company (which owns Pentax). Of course, Pentax is the lone manufacturer that gives users a choice between a proprietary PEF file and DNG. The Ricoh executive added, “The disadvantage of using DNG is that if all manufacturers use the DNG format, it will be difficult to manage the format separately for each manufacturer.”
Nikon: Did not answer The Verge’s questions by time of publication.
Fujifilm: Did not answer The Verge’s questions by time of publication.
Canon: CR3 (previously CR2 and CRW)
Pentax: PEF (option for DNG)
Leica: DNG (RWL on some point-and-shoots)
Sigma: DNG (previously X3F)
Apple: DNG (Apple ProRAW)
Sony also says its format lets it offer unique features in its own editing software: “We can maximize its performance to achieve even higher image quality and enhance image details through features such as Composite RAW and Pixel Shift Multi.”
Sony’s software for processing ARW RAW files is called Imaging Edge. Like most first-party software from camera manufacturers, it’s terrible and unintuitive to use — and should be saved for situations like a high-resolution multishot mode, where it’s the only way to use a camera’s proprietary feature. The same goes for other first-party apps like Canon Digital Photo Professional and Nikon NX Studio.
The only other time it may be necessary to use those apps is if you buy a camera as soon as it’s released and its RAW files aren’t yet supported by Adobe or other software makers. That moment of friction is when a proprietary RAW format is most annoying, primarily affecting early adopters. It’s a restriction that severely affects camera reviewers and YouTubers, who often can’t try out RAW files in any initial hands-on testing.
Getting that software support out to users as soon as possible takes a bunch of testing and work, and it’s not always completed as quickly as new camera owners would like. “For new cameras, this means making sure that we add support for new or changed behaviors in RAW file formats,” Eric Chan, a digital imaging fellow at Adobe, tells The Verge. “For example, new compression modes, capture modes such as High Dynamic Range and more. In addition, measuring each new device sensor for characteristics such as color and noise.”
If all that isn’t done before a new camera is released and people start taking pictures, the interim choice becomes: shoot JPGs (an inferior format) or temporarily use the camera maker’s software (an inferior workflow).
Even if multiple brands of cameras use the same off-the-shelf sensor — Nikon, Pentax, Leica, and others use sensors manufactured by Sony — the image processing pipeline and fine-tuning is all proprietary. It’s what gives brands their signature style, like the color science that Fujifilm is known for. But that doesn’t mean it’s impossible to do all of that with an open format like DNG.
“I have yet to hear a good reason for using proprietary RAW formats. The underlying data is the same. If a manufacturer comes up with additional data that isn’t included in the DNG standard, the format is extensible enough that a camera manufacturer can throw it in there, anyway.” That’s what Ben Sandofsky, developer at Lux Optics, makers of Halide, told me. So maybe some camera brands are set in their ways and like having full control. Ultimately, we’re sort of at their mercy as to whether they choose to be more guarded with a proprietary format or use an open one like DNG.
I wish it weren’t like this, but ultimately, it’s mostly fine. At least, for now. As long as the camera brands continue to work closely with companies like Adobe, we can likely trudge along just fine with this status quo. As much as I’d personally prefer to see all cameras offer a common format like DNG, so at the very least you never have to worry about incompatibilities, it’s unlikely mainstays like Canon CR3 and Nikon NEF files will ever go away.
That means early adopters are stuck hoping their software is updated on time — and anyone with old gear needs to hope their format doesn’t go out of style.
Ultimately, RAW formats aren't that complex, and camera firmware is mostly developed in countries that don't have strong open source software traditions.
Look at the decoders for each format that darktable supports here: https://github.com/darktable-org/rawspeed/tree/develop/src/l...
It's some binary parsing, reading metadata, maybe doing some decompression-- a thousand lines of C++ on average for each format. These aren't complex codecs like HEVC and only reach JPEG complexity by embedding them as thumbnails!
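The unglamorous core of that thousand lines is bit-twiddling like the following sketch, which unpacks pairs of 12-bit samples from 3-byte groups. The packing order here is an assumption chosen for illustration; getting each vendor's actual bit order, padding, and row alignment right is much of what a decoder has to do:

```python
def unpack12(packed: bytes) -> list[int]:
    """Unpack pairs of 12-bit samples stored in 3-byte groups.
    Packing assumed here: p0 = b0<<4 | b1>>4, p1 = (b1 & 0xF)<<8 | b2.
    Real raw formats differ per vendor (and sometimes per camera)."""
    out = []
    for i in range(0, len(packed) - 2, 3):
        b0, b1, b2 = packed[i], packed[i + 1], packed[i + 2]
        out.append((b0 << 4) | (b1 >> 4))          # first 12-bit sample
        out.append(((b1 & 0x0F) << 8) | b2)        # second 12-bit sample
    return out

# Samples 0xABC and 0x123 packed as the bytes AB C1 23:
print(unpack12(bytes([0xAB, 0xC1, 0x23])))  # [2748, 291]
```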
Cameras absolutely could emit DNG instead, but that would require more development friction: coordination (with Adobe), potentially a language barrier, and potentially making it harder to do experimental features.
Photographers rarely care, so it doesn't appreciably impact sales. Raw processing software packages have generally good support available soon after new cameras are released.
I think that might be why a lot of camera makers don't care to use DNG - it's easier to make their own format and easy enough for others to reverse engineer it.
One thing that open source libraries do tend to miss is that very important extra metadata - for example, Phase One IIQ files have an embedded sensor profile or full-on black frame that is not yet encoded into the raw data like it typically is for a NEF or DNG from many cameras. That said, from a quick scan of the code, it does seem rawspeed handles this.
It can get more tricky - Sinar digital backs have an extra dark frame file (and flat frame!) that is not part of the RAW files, and that is not handled by any open source library to my knowledge - though I did write a basic converter myself to handle it: https://github.com/mgolub2/iatodng_rs
I'm not sure how DNG would be able to handle having both a dark and flat frame without resorting to applying them to the raw data and saving only the processed (still unbayered) data.
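For what it's worth, the "applying them to the raw data" step is the classic frame calibration used in astronomy: subtract the dark frame, then divide by the normalised flat. A simplified sketch (not Sinar's or Phase One's actual math) on flat lists of linear sensor values:

```python
def calibrate(raw, dark, flat):
    """Classic frame calibration: subtract the dark frame, then divide by
    the flat field normalised to its mean, evening out vignetting and
    per-pixel sensitivity. Inputs are equal-length lists of linear values;
    the result is still unbayered sensor data."""
    flat_dark = [f - d for f, d in zip(flat, dark)]
    mean_flat = sum(flat_dark) / len(flat_dark)
    return [max(0, r - d) * mean_flat / fd
            for r, d, fd in zip(raw, dark, flat_dark)]

# A uniformly lit scene whose right pixel sits under brighter optics:
# after calibration both pixels agree.
print(calibrate(raw=[110, 210], dark=[10, 10], flat=[110, 210]))  # [150.0, 150.0]
```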
> One thing that open source libraries do tend to miss is that very important extra metadata - for example, Phase One IIQ files have an embedded sensor profile or full on black frame that is not yet encoded into the raw data like it typically is for a NEF or DNG from many cameras.
In astronomy/astrophotography the FITS format[1] is commonly used, which supports all these things and is, as the name suggests, extremely flexible. I wonder why it never caught on in regular photography.
Oh interesting! This seems like it would be a good fit ;)
Especially for really old setups that had RGB color wheels and multiple exposures, exactly like a multispectral astro image might. Phase one also has a multispectral capture system for cultural heritage, which just shoots individual IIQs to my knowledge… It would work great too for multiple pixel shift shots.
Possibly, the engineers just didn’t know about it when they were asked to write the firmware? It’s funny, I think most RAW formats are just weird TIFFs to some degree, so why not use this instead.
Yes. TIFF would "fit the bill" here. It deals with multispectral satellite images. It supports 32- and 64-bit floats and 16-bit integers.
TIFF is almost a multidimensional array serialization format. Obviously it's centered on images, but it can have many layers. Usually RGBA, but they can have other interpretations. It supports some level of streamed writing and random access over HTTP or other ranged protocols.
> Possibly, the engineers just didn’t know about it when they were asked to write the firmware?
Considering how often I witnessed engineers trying to build something to solve a problem instead of sitting down and researching if someone else did that already, and likely better, I really wouldn’t be surprised if that is the answer to most questions in this thread.
Maybe? I’m not familiar enough with DNG to say - possibly that wasn’t a thing when Phase One first started using IIQ? I doubt it was around when Sinar was - in fact the last two (esprit 65 & S30|45 ) Sinar backs do use DNG as an option!
But can I see the difference in that applied metadata?
It took a long time for Canon's CR3 raw format to be supported by darktable because, although the format itself had been reverse engineered, there was a fear among the developers that it was covered by a patent and that they risked a lawsuit by integrating it into DT. IIRC, they had attempted to contact Canon legal to obtain some sort of waiver, without success.
In fact, I'm not sure how that saga ended and CR3 support was finally added, a few years after the release of the Canon mirrorless cameras that output CR3.
> Cameras absolutely could emit DNG instead, but that would require more development friction: coordination (with Adobe), [..]
Technically speaking, implementing DNG would be another development activity on top of a RAW export, because RAW also has a purpose in development and tuning of the camera and its firmware.
It is supposed to be raw data from the sensor with some additional metrics streamed in, just sufficiently standardized to be used in the camera-vendors' toolchain for development.
It just "happens" to be also available to select for the end-user after product-launch. Supporting DNG would mean adding an extra feature and then hiding the RAW-option again.
I can imagine it's hard to make this a priority in a project plan, since most of the objectives are already achieved by saving in RAW
First of all, it does not "just happen" to be selectable. RAW contains information that is not available in a JPG or PNG, but which is crucial to a lot of serious photographers.
Second, the native raw images do include a ton of adjustments in brightness, contrast, and color correction. All of which gets lost when you open the image file with apps provided by companies other than the camera vendor. E.g., open a Nikon raw in NC Software and then in Lightroom: big difference. Adobe has some profiles that get near the original result, but Nikon's own defaults are often better.
So DNG would absolutely be an advantage because then at least these color corrections could natively be implemented and not get lost in the process.
No one is disputing the advantage of RAW. I tried to provide the view from a pure development perspective, looking at a feature backlog.
It "just happens" to be selectable because it is a byproduct of the internal development: The existing RAW format is used internally during development and tuning of the product, and is implemented to work with vendor-internal processes and tools.
Supporting DNG would require a SEPARATE development, and it would still not replace a proprietary RAW-format in the internal toolchain.
(because the DNG patent-license comes with rights for Adobe as well as an option to revoke the license)
Just encoding to DNG wouldn't do that. Nikon's software has its own profiles and adjustments that it applies, and Nikon would need to publish how that works. But that is generic: you could (Nikon permitting) apply their processing to a photo taken by a Canon. It has nothing to do specifically with the NEF format.
Most people who shoot RAW don't care for the in-camera picture adjustments, so we don't care whether the RAW shows up looking the way it did in the camera; we apply our own edits anyway. If we need something like that, we shoot JPEG.
> It just "happens" to be also available to select for the end-user after product-launch
RAW (any format) is an essential requirement for many photographers. You just can't get the same results out of a jpeg.
None of this is disputed (or relevant) in this conversation
I disagree. Bufferoverflow frames raw formats as something that's really only there for R&D purposes, and it's more or less just an afterthought that it's available to photographers. In reality, Narretz points out, getting access to the raw sensor data is a key feature to many photographers; it's an essential aspect of the product from a user perspective.
Since you disagree: where in this thread did anyone state the opposite of what you just wrote, who said that RAW is NOT a key feature to many photographers?
Here:
> It is supposed to be raw data from the sensor with some additional metrics streamed in, just sufficiently standardized to be used in the camera-vendors' toolchain for development. It just "happens" to be also available to select for the end-user after product-launch.
Nothing here states that a RAW format is NOT a key feature to many photographers. This is a straw-man argument.
It says that it "just happens" to be available to customers and the main reason it exists is for R&D. That's what I disagree with.
The whole post shapes the context, even the whole sentence helps already: It just "happens" to be also available to select for the end-user after product-launch. Supporting DNG would mean adding an extra feature and then hiding the RAW-option again.
--> Even if DNG support were adopted as a feature for the end user, the proprietary RAW would still need to be maintained because it has a core purpose during development of the product. The utilization AFTER that is the product feature.
None of this negates the value of RAW for photographers, this is completely beside the topic
Hence I, the person who wrote it (!), keep clarifying the intended interpretation by (re)iterating that no one disputes the value of RAW for photographers.
It is up to you now to ingest new information and adjust your interpretation, a process I'm afraid I can't help any further with.
Good luck ¯\_(ツ)_/¯
> It is supposed to be raw data from the sensor with some additional metrics streamed in, just sufficiently standardized to be used in the camera-vendors' toolchain for development.
This is what I was thinking, that there are potentially so many RAW formats because there are so many sensors with potentially different output data. There should be a way to standardize this though.
Yeah, but it's not standardised because its output is so close to "bare metal", it's wrapped into a standardised format a few steps later when a JPG/HEIC/... is created.
Supporting DNG means that those few steps later it should be standardised into ANOTHER RAW-equivalent. A format which happens to be patented and comes with a license and legal implications.
Among them the right for Adobe to license, at no cost, every method you used to make this conversion from your proprietary bare-metal sensor data. This is not trivial, because if you're a vendor working on sensor tech you wouldn't want to be required to share all your processing with Adobe for free...
I have no knowledge of DNG; what I was suggesting is that someone should devise some kind of extensible, self-describing format that can be used in place of RAW without losing any sensor data, as happens with JPEG/HEIC/etc.
Ah I see.
Well, DNG ("Digital Negative") is such a format, defined and patented by Adobe, but with a license allowing free use under certain conditions.
The conditions are somewhat required to make sure that Adobe remains in control of the format, but at the same time they create a commitment and legal implications for anyone adopting it.
> It is supposed to be raw data from the sensor with some additional metrics streamed in
...and what do you think DNG is?
A patented format where Adobe standardized the exact syntax for each parameter, with mandatory and optional elements to be compliant, and (!) a patent license with some non-trivial implications which is also only valid if the implementation is compliant.
In a development environment, this format competes with an already-implemented proprietary RAW-format which already works and can be improved upon without involvement of a legal department or 3rd party.
In my personal opinion, considering a file format as something that is patentable is where you (i.e., your country) have gone wrong here.
It doesn't seem to reward innovation, it seems to reward anti-competitive practices.
> it seems to reward anti-competitive practices.
That is the intended purpose of a patent. From WIPO [1]:
> The patent owner has the exclusive right to prevent or stop others from commercially exploiting the patented invention for a limited period within the country or region in which the patent was granted. In other words, patent protection means that the invention cannot be commercially made, used, distributed, imported or sold by others without the patent owner's consent.
This is not correct. Both the subhead of the article and the DNG format's Wikipedia Page state that DNG is open and not subject to IP licensing.
While having two file formats to deal with in software development definitely "competes" with the simplicity of just having one, patents and licensing aren't the reason they're not choosing Adobe DNG.
The fact that both your sources are NOT the actual DNG license text should be sufficient to humble yourself from "This is not correct" to at least a question.
--> Your information source is incomplete. Please refer to the license of DNG [0].
The patent rights are only granted:
1. When used to make compliant implementations to the specification,
2. Adobe has the right to a no-cost license from the manufacturer to every method used to create the DNG, and
3. Adobe reserves the right to revoke the rights "in the event that such licensee or its affiliates brings any patent action against Adobe or its affiliates related to the reading or writing of files that comply with the DNG Specification"
--
None of this is trivial to a large company.
First of all, it requires involvement of a legal department for clearance,
Second, you are at risk of violating the patent as soon as you are not compliant with the specification,
Third, you may have to open to Adobe, at no charge, every piece of IP required to create a DNG from your sensor (which can be a significant risk and burden if you develop your own sensor), and
Fourth, in case the aforementioned IP is repurposed by Adobe and you take legal action, your patent-license for DNG is revoked.
--
--> If you are a vendor with a working RAW implementation and all the necessary tools for it in place, it's hard to make a case on why you should go through all that just to implement another specification.
[0] https://helpx.adobe.com/camera-raw/digital-negative.html#dng
None of this is terrifying and seems overblown. I read the patent grant you linked to. It makes sense that one would not grant the right to make incompatible versions. That would confuse the user. Also, the right of revocation only applies if the DNG implementor tries to sue Adobe. Why would they do that?
Occam's razor here suggests that the camera manufacturers' answers are correct, especially since they are all the same. DNG doesn't let them store what they want to and change it at will -- and this is true of any standardized file format and not true of any proprietary format.
> None of this is terrifying and seems overblown. I read the patent grant [..]
Considering that you entered this discussion instantly claiming that others are wrong without having even read the license in question makes this conversation rather..."open-ended"
> Also, the right of revocation only applies if the DNG implementor tries to sue Adobe. Why would they do that?
As I wrote above, Adobe reserves the right to use every patent that happens to be used to create this DNG from your design at no cost, and will revoke your license if you disagree i.e. with what they do with it.
> Occam's razor here suggests [..]
Or, as I suggested, it's simply hard to make a case in favor of developing and maintaining DNG with all that burden if you anyway have to support RAW
That's fair. It's certainly not "open source" in the way that term is usually used. I still think that's not the primary issue and that the manufacturers are being honest about their preference for proprietary formats. But I see that having Adobe legal concerns hanging over their heads isn't an advantage, for sure.
Also...
> granted by Adobe to individuals and organizations that desire to develop, market, and/or distribute hardware and software that reads and/or writes image files compliant with the DNG Specification.
If I use it for something it's not images because I want to create a DNG file that's a DNG file and a Gameboy ROM at the same time. Or if I'm a security researcher testing non compliant files. Or if I'm not a great developer or haven't had enough time to make my library perfectly compliant with the specification... Will I be sued for breaking the license?
The fatal scenario for a camera vendor would be to transition your customers to DNG over some years, then a dispute arises which causes Adobe to revoke your patent license, and suddenly all your past products are in violation of Adobe's DNG patent.
You not only have to remove DNG-support on those products, but due to warranty-law in many countries have to provide an equivalent feature to the customer (--> develop a converter application again, but this time for products you already closed development for years ago).
Alternative would be to settle with Adobe to spare the cost for all that. So Adobe has all the cards in this game.
Now: Why bother transitioning your customers to DNG...?
What? Number two would make most companies run the other way. “Whatever you use to create a DNG, secret sauce or algorithm or processing from your sensor data, Adobe can license” - you act like it’s no big deal but it’s often the closely guarded color science or such things.
You can argue that maybe those things shouldn’t be considered trade secrets or whatever. But there’s just a bit more to it than that.
A file format containing a subset of the image sensor data needed for tuning an image sensor. It's user focused rather than camera developer focused.
Neither DNG nor various vendor-specific raw formats are meant for tuning an image sensor. They can be used for that in some specific cases, but it's not what they are for. They're for taking photos and providing the user with less opinionated data so they can do the processing of their photos the way they want rather than rely on predefined processing pipeline implemented in the camera.
Despite the name, this is rarely a pure raw stream of data coming from the sensor. It's usually close enough for practical photographic purposes though.
Our proprietary format was a header followed by a raw sensor dump.
Despite this, people eventually used it for photographic purposes.
So is DNG in the implementation I've worked on (data as outputted by the sensor wrapped in a TIFF structure), but whether what the sensor outputs is actually "raw" can be debatable and is sensor- and configuration-dependent.
Absolutely true. Our sensors would dump their registers in the first few lines so at least you knew what settings were used for that frame.
Not really; RAW is NOT just a raw sensor dump straight from the hardware into a file.
It's a full-fledged format that contains the extensive metadata already in the EXIF format, including vendor blocks, etc., followed by the sensor readout, which is relatively similar between nearly all sensors; there aren't many types, considering you can express the Bayer pattern. This can all be expressed in DNG and would NOT need to be an "extra" on top of "raw".
And indeed, some camera vendors do exactly this.
> Technically speaking, implementing DNG would be another development activity on top of a RAW export,
What are you talking about? Canon could implement DNG instead of CR3. It's not that hard. Both of these formats are referred to as "RAW".
Just as I wrote. CR3 is used by Canon also during development and tuning of their sensors and cameras.
DNG would not replace CR3, because CR3 would still be needed before launch, and Canon has no incentive to change their entire internal toolchain to comply to Adobes DNG specification.
Especially not because the DNG format is patented and allows Adobe to revoke the license in case of dispute...
Fujifilm lossy compressed raw still isn't supported after many years [1].
[1] https://github.com/darktable-org/rawspeed/issues/366
And in my experience there have been lots of bugs with Fujifilm raws in darktable:
[2] https://github.com/darktable-org/rawspeed/issues/354
[3] https://github.com/darktable-org/darktable/issues/18073
However, Fujifilm lossless compressed raw actually does a decent job keeping the file sizes down (about 50% to 60% the file size of uncompressed) while maintaining decent write speed during burst shooting.
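As a rough illustration of why ratios like that are plausible: most lossless raw compression leans on a predictor plus an entropy coder, because neighbouring sensor values differ by small amounts. The sketch below fakes a noisy 14-bit sensor row and uses delta coding plus zlib as a stand-in for a real entropy coder (Fujifilm's actual scheme is different):

```python
import random
import struct
import zlib

random.seed(1)

# Synthetic 14-bit sensor row: a smooth gradient plus noise, stored as
# 16-bit words the way an uncompressed raw would store it.
row = [2000 + i * 3 + random.randint(-40, 40) for i in range(4000)]
uncompressed = struct.pack("<%dH" % len(row), *row)

# Predict each sample from its left neighbour and code the residuals;
# the small residuals compress far better than the raw values do.
deltas = [row[0]] + [(b - a) & 0xFFFF for a, b in zip(row, row[1:])]
compressed = zlib.compress(struct.pack("<%dH" % len(deltas), *deltas), 9)

# Compression ratio vs. the uncompressed row (well below 1.0 here):
print(round(len(compressed) / len(uncompressed), 2))
```

Compressing the plain values instead of the deltas gives a noticeably worse ratio on the same data, which is the whole point of the predictor step.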
It's really strange to me that a lossy compressed format could be called "raw". Does that just mean that it hasn't been e.g. gamma-corrected before the compression was applied? (Is it even a good idea to do lossy compression before such correction?)
All raw means is scene-referred data. The notion that raw means "raw" data from the sensor is often repeated, but unfortunately it's complete nonsense. Modern sensors do on-chip noise reduction; they can be programmed to give data in all kinds of formats and with different processing done to it. The same sensor used in different cameras can have different ISO. The same sensor used in different cameras can produce different RAW files even at the same ISO: not just a different file format, but different data in the file, from the exact same sensor, programmed differently.
Source? There are lossless and lossy transformations. In many scientific contexts raw implies no lossy transformations in terms of information.
This isn't a scientific context, it's a marketing one.
"scene-referred data", as opposed to... something consciously edited?
No, scene-referred vs. display-referred. These are standard terms.
https://www.color.org/scene-referred.xalter
https://ninedegreesbelow.com/photography/display-referred-sc...
When my father bought his first digital camera, he insisted on raw format access.
Inexplicably I didn't understand at the time why he (Bryce Bayer) wanted this. He was modest about his work.
I do now!
What an awesome dad! Lived during the golden age of photography and retired before its demise.
I always thought camera RAW formats were about optimizing continuous shooting rates: being able to linearly write an image as fast as possible.
I don't know the details of DNG but even the slightest complication could be a no-go for some manufacturers.
The main reason people shoot raw is to have more creative control over the final product.
A simple example is white balance. The sensor doesn't know anything about it, but typical postprocessing makes both a 2700K incandescent and a 5700K strobe look white. A photographer might prefer to make the incandescent lights look more yellow. There's a white balance setting in the camera to do that when taking the picture, but it's a lot easier to get it perfect later in front of a large color-calibrated display than in the field.
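To make that concrete: on linear raw data, white balance is just per-channel multiplication, which can be redone losslessly as often as you like. A sketch with made-up gains (real cameras store calibrated multipliers per illuminant):

```python
def white_balance(rgb, gains):
    """Apply per-channel white-balance gains to linear sensor values,
    clipping at 1.0. On raw data this is a simple reinterpretation you can
    redo at will; on a JPG the gains were already baked in before gamma
    encoding and 8-bit quantisation, so re-balancing amplifies rounding
    and clipping errors."""
    return tuple(min(1.0, c * g) for c, g in zip(rgb, gains))

# Hypothetical gains for ~2700 K incandescent light: cut red, boost blue.
incandescent = (0.60, 1.00, 1.95)
warm_pixel = (0.80, 0.50, 0.25)  # linear RGB after demosaicing
print(tuple(round(v, 4) for v in white_balance(warm_pixel, incandescent)))
# (0.48, 0.5, 0.4875)
```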
Another example is dealing with a scene containing a lot of dynamic range, such as direct sunlight and dark shadows. The camera's sensor can capture a greater range of brightness than a computer screen can display or a printer can represent, so a photographer might prefer to delay decisions about what's dark grey with some details and what's clipped to black.
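A sketch of that headroom argument: suppose the sensor saturates two stops above display white (an assumed figure; real headroom varies by camera and ISO). Baking a JPG clips everything above display white to code 255, while the raw file still holds distinct values you can pull back down later:

```python
def to_jpg(linear: float) -> int:
    """Bake a linear value: clip at display white, gamma-encode (simple
    1/2.2 approximation), quantise to 8 bits. Anything brighter than 1.0
    becomes 255 and is gone for good."""
    return round(255 * min(1.0, linear) ** (1 / 2.2))

def recover_from_raw(linear: float, stops_down: int) -> int:
    """From raw we still have the over-white value, so pulling exposure
    down before baking reveals gradation the JPG flattened to pure white."""
    return to_jpg(linear / 2 ** stops_down)

sky = [1.2, 1.6, 2.4, 3.2]  # bright cloud tones above display white

print([to_jpg(v) for v in sky])               # [255, 255, 255, 255]
print([recover_from_raw(v, 2) for v in sky])  # four distinct tones
```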
?? This was not asked.
Everything you said is supported by regular image formats. You can adjust white balance of any photo and you think image formats are only limited to 16-bit and sRGB?
That’s not why we use RAW. It’s partly because (1) if you used Adobe RGB or Rec. 709 on a JPEG, a lot of people would screw it up, (2) you get a little extra raw data from the pre-filtering of Bayer, X-Trans, etc. data, (3) it’s less development work for camera manufacturers, and (4) partly historical.
> Everything you said is supported by regular image formats. You can adjust white balance of any photo and you think image formats are only limited to 16-bit and sRGB?
No - the non-RAW image formats offered were traditionally JPG and 8-bit TIFF. Neither of those are suitable for good quality post-capture edits, irrespective of their colour space (in fact, too-wide a colour space is likely to make the initial capture worse because of the limited 8-bit-per-colour range).
These days there is HEIF/similar formats, which may be good enough. But support in 3rd party tools (including Adobe) is no better than RAW yet, i.e., you need to go through a conversion step. So...
Also don't forget one of the promises of RAW: That RAW developers will continue to evolve, so that you'll be able to generate a better conversion down the line than now. Granted, given the maturity of developers the pace of innovation has slowed down a lot compared to 20 years ago, but there are still incremental improvements happening.
Another advantage of RAW is non-destructive editing, at least in developers that support it and are more than import plugins for traditional editors. I rarely have to touch Photoshop these days.
Try adjusting shadows and highlights in a JPG vs. a raw file and see what happens. There's no data there in the JPG, just crushed blacks and blown-out whites. With a raw file, you can brighten the shadows and find Mothman standing there, with a little extra sensor noise.
Are you adjusting an 8-bit JPG (probably) or a 12-bit JPG (rare)?
Try adjusting a 8-bit RAW file and you will have the same problem.
You are conflating format and bit depth.
Yes and no. Your point about bit depth being important is correct, but you're still largely wrong.
The actual main thing about RAW is that the transforms for white balance, gamma, brightness, colour space, etc. haven't yet been applied and baked into the file. With JPEG, at least some of those transforms have already been applied, which then limits how much you can do as opposed to starting with the untransformed sensor data.
You could definitely do much more with a 12-bit JPEG than you could with an 8-bit JPEG, but still not as much as you can do starting from RAW data.
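The bit-depth side of this can be made concrete with a toy calculation, assuming linear code values for simplicity (real JPEGs are gamma-encoded, which softens but doesn't remove the gap):

```python
# Toy headroom comparison, assuming linear code values for simplicity.
# The darkest 5 stops of a linear range occupy its bottom 1/32; how many
# distinct code values land there depends entirely on bit depth.

def shadow_levels(bits, shadow_fraction=1 / 32):
    """Distinct code values available for the darkest slice of the range."""
    return int((2 ** bits) * shadow_fraction)

print(shadow_levels(8))   # 8 distinct values: posterizes when pushed
print(shadow_levels(12))  # 128 distinct values: survives a big push
print(shadow_levels(14))  # 512: typical modern raw bit depth
```

Eight levels stretched across a brightened shadow region is where visible banding comes from, regardless of the container format.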
the absolute main thing is debayering, and yeah, then colorspace transformations etc
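For a picture of what debayering actually does, here is a deliberately crude sketch: collapse each 2x2 RGGB tile into one RGB pixel by averaging the two greens. Real demosaicers interpolate a full RGB triple at every photosite (bilinear, AHD, LMMSE, and so on); this only shows the shape of the problem:

```python
# Crude "demosaic" sketch: an RGGB Bayer mosaic stores one colour per
# photosite. Here each 2x2 tile (R G / G B) becomes a single RGB pixel,
# halving resolution. Real demosaicing interpolates per-photosite instead.

def demosaic_rggb(mosaic):
    """mosaic: 2D list with even dimensions, values laid out R G / G B."""
    out = []
    for y in range(0, len(mosaic), 2):
        row = []
        for x in range(0, len(mosaic[0]), 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2  # average both greens
            b = mosaic[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out

# A flat grey scene: every photosite reads 100 counts.
tile = [[100, 100],
        [100, 100]]
print(demosaic_rggb(tile))  # [[(100, 100.0, 100)]]
```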
What format can I change the white balance of an image in, other than RAW? In all the years I have used digital cameras, I can't think of one...
The bottleneck is usually in SD card write speeds, however. Sport photographers often skip raw and only use JPG because the files are smaller and as a result, one can take more photos in one burst.
For raw at high frame rates, high end cameras don't use SD cards but things like CFexpress which absolutely can keep up (and there are also various compressed RAW formats these days which apply a degree of lossy compression to reduce file size).
As I understand it, the reason some professional sports photographers don't shoot RAW (or it's less important) is more because they are in an environment where publishing quickly is important, so upload speeds matter and there isn't really much time to postprocess.
Canon’s “sport professional” camera has lower resolution than the “second tier” cameras. It has a higher frame rate and CFExpress and SDXC2 so bandwidth isn’t an issue. Last I checked you could burst 40 or 50 frames (at 20ish fps) before filling the internal buffer.
You can definitely do more than that these days. My Nikon Z8 can do 30fps with 1xCFExpress, and the flagship Z9 can do 120fps because it has 2xCFExpress and alternates writes. On the Sony side they have a closer differentiation to what you describe, the flagship (A1 II) does only 30fps compared to the next-best (A9 III) which does 120fps, while the prosumer (A7 RV) only does 10fps.
I don't know Canon well, but 120fps w/ dual CFExpress + 800-1200 frames buffer is fairly standard on top-end professional sports/wildlife mirrorless cameras these days.
The Z8 and Z9 can do the same; the Z9 can do 120fps in 11MP JPEG mode, similar to your Z8. The Z8/Z9 will do 30fps in full-resolution JPEG and 20fps in RAW. How long they can sustain it without slowing down depends on your card, and whether you use uncompressed, lossless, or lossy RAW.
Yes, you are correct. The spec sheet isn't particularly clear on this, so I got it wrong by taking it at face value, but the Rolling Shutter Project data shows that sensor readout speed is the limit (both cameras use the same sensor). The higher-speed continuous mode (which is only on the Z9 AFAIK; 30fps is the max I see for CH on my Z8) uses the video pipeline rather than the standard photo pipeline, which is resolution-limited and not RAW. The Z9 does support alternate write mode; the Z8 only supports split format or overflow with two cards, since the Z8 is CFExpress + SD while the Z9 is 2x CFExpress. I am not sure how this affects write-out speeds, but presumably it improves them.
Personally I only shoot at 6fps in continuous for birds, because anything faster is usually unnecessary (except for hummingbirds) and just creates more exposures to review/delete in post. I generally prefer quiet single exposure (Qs) when doing wildlife to avoid any sounds, although since switching to the Z8 it's not really an issue, since mirrorless is effectively silent in all modes at fairly open apertures.
The video pipeline is used, yes, but it would support doing that at higher than 20fps, since it can manage 12-bit mode for 8k60 N-RAW video.
I really wish they had raw pre-capture on the Z8, but I doubt they will add it.
Also, you can do 120fps on your Z8, but it reduces to 11MP; this goes for both the Z8 and Z9. It also uses video mode for this, but it goes to show sensor readout is not the issue :)
The reason it requires reduction to 11MP and to use the video pipeline instead of the Expeed pipeline is due to sensor readout time. I don't /fully/ understand it, but the sensor readout data on the Rolling Shutter Project shows a Nikon Z8 would max out at ~22fps in full-frame RAW, so no doubt they reduced this to 20fps to give them a margin of error. Reducing to the DX frame size reduces the amount of data required to be read from the sensor, which changes the time it takes to do a readout. The Z8/Z9 are 45MP cameras, so just doing a naive bit of maths, you could expect ~80fps if you applied the same pipeline at the DX frame size, but given that there is already a video pipeline and they needed it to be capable of 120FPS, they already had their answer.
Both the video and photo pipelines are Expeed. It does not go into DX mode for 11MP; it's the full sensor size. It's unclear (at least to me) whether it does some form of line skipping or is fully oversampled.
But the video mode supports full 8k60 at least, so only a very tiny crop.
It's definitely not straightforward, there was no 8k originally AFAIK, that came through a firmware update and as far as I know is a totally different format that makes use of ticoRAW as a base (https://www.dpreview.com/news/9624409613/nikon-is-licensing-...), so RAW video isn't recorded in the same format as RAW photos (N-RAW vs NEF).
It can only do 8k60 though, not 8k120, so obviously the video pipeline and the C120 pipeline aren't identical.
When you say it doesn't go into DX mode for 11MP, you're correct for C120, but for C60 it /does/ go into DX mode (which captures a 19MP image). How this differs between C60 and C120, I'm not entirely sure in the camera internals. I had thought the resolution reduction is from cropping, but confirmed in the manual that when you enable C120, it's an 11MP photo but is full frame (no cropping).
Obviously this stuff is complex (maybe overly complex) and I haven't delved into it super deeply since I don't need it for my type of photography (and I never do video).
It did initially offer 8k60 in TicoRAW.
But this suggests that the limitation is not sensor readout, but processing/saving. It's speculated that it's due to heat problems when doing faster than 20fps full raw.
I believe this might have been the case in the past, where (a) sensor resolutions were lower, so the raw images were less bulky, and (b) camera CPUs were slower, so you wanted to take them out of the equation.
These days, the bottleneck for achieving the continuous shooting rate is probably writing to the SD card (which is the standard for consumer/prosumer models).
It is always written into a memory buffer first, which could be something like 256 megabytes. It takes time to fill it up; once it is full, memory card speed becomes the bottleneck. So writing only JPEGs triggers the slowdown later, meaning you can take more frames before the buffer fills up.
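The buffer arithmetic is simple enough to sketch. All the numbers below are illustrative (256 MB buffer, 50 MB raw frames vs. 12 MB JPEGs, 20 fps, a 300 MB/s card), not any specific camera's specs:

```python
# Back-of-envelope burst model: frames arrive in the buffer at the shot
# rate while the card drains it at its write speed. The buffer fills at
# the difference, so smaller JPEG files push the slowdown point far out.
# All numbers passed in below are illustrative, not real camera specs.

def frames_before_slowdown(buffer_mb, frame_mb, fps, write_mb_s):
    fill_rate = frame_mb * fps - write_mb_s   # MB/s accumulating in buffer
    if fill_rate <= 0:
        return float("inf")                   # the card keeps up indefinitely
    seconds = buffer_mb / fill_rate
    return int(seconds * fps)

print(frames_before_slowdown(256, 50, 20, 300))  # 7: raw stalls quickly
print(frames_before_slowdown(256, 12, 20, 300))  # inf: JPEG never stalls
```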
This was my guess too, get the raw bayer data from the sensor in one go + some metadata. Then as the sensors and cameras evolve they are just accumulating different formats?
DNG is a TIFF file, just like most proprietary raw formats.
> Cameras absolutely could emit DNG instead, but that would require more development friction: coordination (with Adobe), potentially a language barrier, and potentially making it harder to do experimental features.
I think this is being too generous.
DNG is just an offshoot of TIFF. Having written a basic DNG parser without ever having read up on TIFFs before, I can say it really isn't that hard.
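To give a sense of how simple the container layer is: a little-endian TIFF/DNG starts with a byte-order header pointing at an IFD, which is just a count followed by 12-byte tag entries. A stripped-down sketch (real DNGs have many tags, sub-IFDs, and types whose values don't fit the inline 4-byte field and become offsets instead):

```python
import struct

# Minimal sketch of the TIFF/DNG container skeleton: header -> IFD ->
# 12-byte tag entries (tag id, type, count, value-or-offset). Values
# larger than 4 bytes are stored elsewhere and the field is an offset;
# this sketch only handles inline values.

def read_ifd_tags(data):
    """Return {tag_id: value_field} for the first IFD of a little-endian TIFF."""
    byte_order, magic, ifd_offset = struct.unpack_from("<2sHI", data, 0)
    assert byte_order == b"II" and magic == 42, "not a little-endian TIFF"
    count, = struct.unpack_from("<H", data, ifd_offset)
    tags = {}
    for i in range(count):
        tag, typ, n, value = struct.unpack_from(
            "<HHII", data, ifd_offset + 2 + 12 * i)
        tags[tag] = value
    return tags

# Hand-built one-entry TIFF: tag 256 (ImageWidth), type 4 (LONG), value 640,
# followed by a zero next-IFD offset.
blob = struct.pack("<2sHI", b"II", 42, 8) + struct.pack(
    "<H HHII I", 1, 256, 4, 1, 640, 0)
print(read_ifd_tags(blob))  # {256: 640}
```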
As far as experimental features, there’s room in the spec for injecting your own stuff, similar to MakerNote in EXIF if I recall.
If you are planning to do experimental stuff, I’d say what Apple pulled off with ProRAW is the most innovative thing that a camera manufacturer has done in forever. They worked with Adobe to get it into the spec. All of these camera manufacturers have similar working relationships with Adobe, so there’s really no excuse. And if you can’t wait that long, again, MakerNote it.
In my opinion, custom RAW formats are a case study in “Not Invented Here” syndrome.
The contents are simple. How to interpret the contents is not simple. That is why you see internet advice advocating for keeping old raw files around: Lightroom and Photoshop sometimes get updates that can squeeze better results out of old raw files.
(Edit: I mean, if you want to get a basic debayered RGB image from a raw, that's not too hard. But if you want to squeeze out the most, there are a lot of devils in a lot of details. Things like estimating how many green pixels are not actually green, but light spill from what should have been red pixels, is just the beginning.)
Yet that's processing-level stuff, not format stuff. It's even unlikely that the manufacturer extracted the best possible result from the sensor input as-is.
These formats aren't complex because they really are supposed to be raw (-:
But yeah, it would be preferable to have them use the digital negative (DNG) format. Then again, why bother, when the community does the work for them? Reminds me of how Bethesda does things.
Traditional Nikon NEF is pretty simple. It's just a tiff. Lossy compression is just gamma-encoding with a LUT (stored in the file). I think most traditional raws are principally similar. More complex compression schemes like ticoraw are fairly recent.
What's complex is the metadata. All the cameras have different AF, WB and exposure systems.
yeah metadata really is a mess
> Cameras absolutely could emit DNG instead, but that would require more development friction: coordination (with Adobe), potentially a language barrier, and potentially making it harder to do experimental features.
I am a weirdo and have always liked and used Pentax (now Ricoh); they do support the DNG format.
Pentax/Ricoh really is a hidden gem. I love my "dinosaur" K1 Mark II and my GR IIIx goes EVERYWHERE I go.
> Cameras absolutely could emit DNG instead, but that would require more development friction: coordination (with Adobe), potentially a language barrier, and potentially making it harder to do experimental features
I've worked with medical imaging systems from the largest imaging companies in the world -- GE, Siemens, etc. -- all of which use a standardized image format/protocol/etc. called DICOM. DICOM has standardized fields for the vast majority of information you would need to record for medical imaging - patient ID, study ID, image # if it's an image sequence, etc. - as well as metadata about where it came from, like the vendor ID of the machine that did the scan (the CT scanner, MRI, X-ray, etc). There are also arbitrary fields for vendor-specific information that doesn't have a defined field in the specification.
All of these fields have clear purposes and definitions and all are available to every DICOM reader/writer, and yet the company I worked for had a huge table of re-mappings because some scanners, for some reason, would put the patient ID in the vendor field, or the vendor ID in the scanner name field, and so on. There's no reason for this, there's no complication that might cause this; it's all standard fields that everything supports.
These are manufacturers who, while using the standard that everyone else uses, deliberately screw things up in ways that their own hardware and software can silently compensate for but which other vendors then have to work around in order to inter-operate.
In other words cameras absolutely could emit DNG instead, but aside from the arguments that you've made, I have every confidence that manufacturers would fuck it up on purpose just to make it harder for other vendors' software to inter-operate, which would mean that instead of software having to explicitly support e.g. Canon's RAW format, and being able to say "we don't yet support this new format", software would "support" DNG but it would be completely broken for some random cameras because the software developer hasn't had the chance to implement idiotic workarounds for these specific broken images yet.
Why would you need coordination with Adobe? Their software already reads DNG files just fine.
Raw decoding is not as simple as you might think.
It’s the best place to add “signature steps.” Things like noise reduction, chromatic aberration correction, and one-step HDR processing.
I used to work for a camera manufacturer, and our Raw decoder was an extremely intense pipeline step. It was treated as one of the biggest secrets in the company.
Third-party demosaicers could not exactly match ours, although they could get very good results.
Well, it is obvious that there are a lot of complex processing steps between a RAW file and the final image. But that is independent of the file format used. DNG isn't so much different, just documented. And while the manufacturer's converter might give the best results, photographers tend to use the image processing programs from Adobe or its competitors, which use their own RAW converters anyway.
Yeah, they could do it with DNG (I suppose), but they don't really have any reason to do so (in their minds). Personally, I like open stuff, but they did not share my mindset, and I respected their posture.
If a camera company sells me a camera, I consider it a definite disadvantage, if I cannot open the Raw files until the software companies have updated their products. This is also a great way of forcing customers into the subscription models like Adobe offers. So as a customer, I do criticize that they don't support more open formats.
But this is talking about proprietary RAW image formats, which should be the data from the sensor with minimal processing. The entire point of RAW images is that you skip the majority of the ISP and apply those blocks in the processing software. Even the demosaicing step is done there. There is really no reason why this should be proprietary. It doesn't stop the camera company from applying their proprietary processing, using the full ISP, to the JPEG output.
Not really.
If you claim to support a particular format, then you're responsible for supporting that format, and there's no reason why a company would do that, if they have no intentions of supporting anyone other than themselves from accessing the data.
"Not supporting" != "Not allowing"
They may not be thrilled by third parties reverse-engineering and accessing their proprietary formats, and can't necessarily stop them, but they are under no obligation to help them to do it, and they are free to change the rules, at their own whim.
Think of Apple, regularly borking cracking systems. It may not be deliberate. They may have just introduced some new data that cracked the crack, but there's no duty to support the crackers.
Anecdotally, using Darktable, I could never get as good of a demosaicing result as using the straight-out-of-camera JPEGs from my Fujifilm GFX 100S. In challenging scenarios such as fine diagonal lines, Darktable's algorithms such as LMMSE would add a lot of false colour to the image.
However, modern deep learning-based joint demosaicing and denoising algorithms handily outperform Darktable's classical algorithms.
Last thing I want my pictures touching is some deep learning based thingy.
Raw decoding is an algorithm, not a container format. The issue is that everyone is coming up with their own proprietary containers for identical data that just represents sensor readings.
It's more than just a file format.
The issue is that companies want control of the demosaicing stage, and the container format is part of that strategy.
If a file format is a corporate proprietary one, then there's no expectation that they should provide services that do not directly benefit them, or that expose internal corporate trade secrets, in service to an open format.
If they have their own format, then they don't have to lose any sleep over stuff that doesn't interest or benefit them.
By definition, a RAW container contains sensor data, and nothing more. Are you saying that Adobe is using their proprietary algorithms to render proprietary RAW formats in Lightroom?
I don’t know about Adobe. I never worked for them.
Pretty sure they would lose a lot of sleep if no third-party application could open their raw files.
You'd be surprised.
They lost sleep over having images from their devices looking bad.
They wanted ultimate control of their images, and they didn't trust third-party pipelines to render them well.
so you think they'd be all happy if nobody could open the raw files in adobe software?
Yup.
Not kidding. These folks are serious control freaks. They are the most anal people I've ever met, when it comes to image Quality.
Not publicly. It’s not difficult to figure out, but I make it a point, not to post stuff that would show up in their search algorithms.
But it was a pretty major one, and I ran their host image pipeline software team.
[Edited to Add] It was one of the “no comment” companies. They won’t discuss their Raw format in detail, and neither will I, even though it has been many years, since I left that company, and it’s likely that my knowledge is dated.
> They won’t discuss their Raw format in detail
Can you share the reason for that?
It seems to me that long ago, camera companies thought they would charge money for their proprietary conversion software. It has been obvious for nearly as long that nobody is going to pay for it, and delayed compatibility with the software people actually want to use will only slow down sales of new models.
With that reasoning long-dead, is there some other competitive advantage they perceive to keeping details of the raw format secret?
The main reason is that image Quality is their corporation's main differentiator. They felt that it was a competitive advantage, and sort of a "secret ingredient," like you hear about from master chefs.
They feel that their images have a "corporate fingerprint," and are always concerned that images not get out, that don't demonstrate that.
This often resulted in difficulty, getting sample images.
Also, for things like chromatic aberration correction, you could add metadata that describes the lens that took the picture, and use that to inform the correction algorithm.
In many cases, a lens that displays chromatic aberration is an embarrassment. It's one of those "dirty little secrets," that camera manufacturers don't want to admit exists.
As they started producing cheaper lenses, with less glass, they would get more ChrAb, and they didn't want people to see that.
Raw files are where you can compensate for that, with the least impact on image quality. You can have ChrAb correction, applied after the demosaic, but it will be "lossy." If you can apply it before, you can minimize data loss. Same with noise reduction.
Many folks here, would absolutely freak, if they saw the complexity of our deBayer filter. It was a pretty massive bit of code.
Thanks for the explanation. I have to question how reality-based that thinking is. I do not, of course expect you to defend it.
It seems to me that nearly all photographers who are particularly concerned with image quality shoot raw and use third-party processing software. Perhaps that's a decision not rooted firmly in reality, but it would take a massive effort focused on software UX to get very many to switch to first-party software.
> Raw files are where you can compensate for that, with the least impact on image quality. You can have ChrAb correction, applied after the demosaic, but it will be "lossy."
Are you saying that they're baking chromatic aberration corrections into the raw files themselves so that third-party software can't detect it? I know the trend lately is to tolerate more software-correctable flaws in lenses today because it allows for gains elsewhere (often sharpness or size, not just price), but I'm used to seeing those corrections as a step in the raw development pipeline which software can toggle.
I think we're getting into that stuff that I don't want to elaborate on. They would probably get cranky I have said what I've said, but that's pretty common knowledge.
If the third-party stuff has access to the raw Bayer format, they can do pretty much anything. They may not have the actual manufacturer data on lenses, but they may be able to do a lot.
Also, 50MP, lossless-compressed (or uncompressed) 16-bit-per-channel images tend to be big. It takes a lot to process them; especially if you have time constraints (like video). Remember that these devices have their own, low-power processors, and they need to handle the data. If we wrote host software to provide matching processing, we needed to mimic what the device firmware did. You don't necessarily have that issue, with third-party pipelines, as no one expects them to match.
Thanks for sharing what you could. I wasn't really thinking about video; the storage requirements to work with raw video are indeed big.
I am very skeptical that chromatic aberration correction can be applied before demosaicing and the result then stored in a Bayer array again. There seems to be no advantage in storing the result of chromatic aberration correction in a raw Bayer array, which has less information than a full array with three RGB values per pixel. Perhaps I am not understanding it correctly?
It's not stored. It's applied to the raw Bayer data, every time, before demosaicing. Same with noise reduction.
What you can store, is metadata that informs these "first step" filters, like lens data, and maybe other sensor readings.
One of the advantages to proprietary data storage, is that you can have company-proprietary filters, that produce a "signature" effect. Third-party filters may get close to it (and may actually get "better" results), but it won't be the same, and won't look like what you see in the viewfinder.
You've made it pretty clear, thank you.
That was my suspicion initially. In fact, when I read about mass DNG adoption, my first thought was "but how would it work for this company?" (admittedly I don't know much about DNG, but intuitively I had my doubts).
And then I saw your comment.
It might be the non-mosaic one.
See my comment below.
One problem is that you cannot have a universal format that is both truly raw and doesn't embed camera specific information. Camera sensors from different companies (and different generations) don't have the same color (or if you prefer, spectral) responses with both their Bayer filter layer and the underlying physical sensor. If you have truly raw numbers, you need the specific spectral response information to interpret them; if you don't need spectral response information, you don't actually have truly raw numbers. People very much want raw numbers for various reasons, and also camera companies are not really enthused about disclosing the spectral response characteristics of their sensors (although people obviously reverse engineer them anyway).
> Camera sensors from different companies (and different generations) don't have the same color (or if you prefer, spectral) responses with both their Bayer filter layer and the underlying physical sensor
This is all accommodated for in the DNG spec. The camera manufacturers specify the necessary matrix transforms to get into the XYZ colorspace, along with a linearization table.
If they really think the spectral sensitivity is some valuable IP, they are delusional. It should take one Macbeth chart, a spreadsheet, and one afternoon to reverse engineer this stuff.
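As a picture of the two mechanisms mentioned above (the DNG spec's LinearizationTable and ColorMatrix tags), here's a toy pipeline. The ramp table and identity matrix are invented for illustration, and note that DNG actually stores ColorMatrix in the XYZ-to-camera direction, which a real pipeline inverts:

```python
# Toy version of the two DNG calibration steps: a LinearizationTable maps
# raw code values to linear light, then a 3x3 matrix relates camera RGB
# and XYZ. (DNG's ColorMatrix is stored as XYZ-to-camera; real raw
# developers invert it.) Table and matrix here are made up for illustration.

def linearize(code, table):
    return table[code]

def apply_matrix(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# Straight-ramp table and identity matrix: mid-grey maps to mid-grey.
table = [i / 255 for i in range(256)]
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

cam_rgb = tuple(linearize(c, table) for c in (128, 128, 128))
print(apply_matrix(identity, cam_rgb))  # three equal values near 0.502
```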
Given that third party libraries have figured this stuff out, seems they have failed while only making things more difficult for users.
What does RAW really mean then? Couldn't they simply redefine what RAW means to create a standard that can include proprietary technology? Like why not define it as including a spectral response?
There is no 'RAW' format as such. In practice, 'RAW' is a jargon term for "camera specific format with basically raw sensor readings and various additional information". Typically the various RAW formats don't embed the spectral information, just a camera model identifier, because why waste space on stuff the camera makers already know and will put in their (usually maker specific) processing software.
(Eg Nikon's format is 'NEF', Canon's is 'CR3', and so on, named after the file extensions.)
I don't know if DNG can contain (optional) spectral response information, but camera makers were traditionally not enthused about sharing such information, or for that matter other information they put in their various raw formats. Nikon famously 'encrypted' some NEF information at one point (which was promptly broken by third party tools).