What an unprocessed photo looks like


2025-12-27 — 2026-01-21 (Photography)


Here's a photo of a Christmas tree, as seen by my camera's sensor:

"Raw" image with the 14-bit ADC values mapped to 0-255 RGB.

The image isn't even black-and-white, it's gray-and-gray: while the sensor's 14-bit ADC can theoretically output values from 0 to 16383, the data doesn't cover that whole range.

Histogram of raw image

The real ADC values range from 2110 to 13600. Let's map that range to black and white in the image:

Vnew = (Vold - Black)/(White - Black)
Vnew = (Vold - 2110)/(13600 - 2110)
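In code, this black and white point normalization might look like the following sketch (NumPy assumed; 2110 and 13600 are the levels read off the histogram):

```python
import numpy as np

def normalize_levels(raw, black=2110, white=13600):
    """Map the sensor's usable ADC range onto 0.0-1.0.

    `black` and `white` are the levels measured from the histogram;
    values outside that range are clipped.
    """
    v = (raw.astype(np.float64) - black) / (white - black)
    return np.clip(v, 0.0, 1.0)

# An ADC value in the middle of the usable range ends up near 0.5.
print(normalize_levels(np.array([2110, 7855, 13600])))
```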

Progress

Much better, but those Christmas lights are still rather monochromatic.

A camera sensor isn't able to see color: It only measures the total brightness hitting each pixel. In a color camera, the pixels are covered by a grid of alternating red, green and blue color filters:

Christmas light crop

Here's the image with every pixel colored the same as the filter that's on top of it:

Bayer matrix overlay

This version is more colorful, but each pixel only has one third of its full RGB color. To fix this, I averaged the RGB values of each pixel with its neighbors:

Demosaicing results

... and for the rest of the image:

Demosaiced tree
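The neighbor-averaging demosaic described above can be sketched like this. It is a minimal, assumption-laden version (RGGB layout, normalized values, zero-padded edges), not the camera's or any library's actual algorithm:

```python
import numpy as np

def _box3(a):
    """Sum over each pixel's 3x3 neighborhood (zero-padded edges)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_average(mosaic):
    """Naive demosaic of an RGGB Bayer mosaic by neighborhood averaging.

    Each output pixel's R, G and B is the mean of that color's samples
    in the 3x3 window around it. Note the mosaic has twice as many
    green samples, which is how plain averaging ends up boosting green.
    """
    h, w = mosaic.shape
    ys, xs = np.mgrid[0:h, 0:w]
    masks = [
        (ys % 2 == 0) & (xs % 2 == 0),  # R on even rows, even cols
        (ys % 2) != (xs % 2),           # G on the two mixed-parity sites
        (ys % 2 == 1) & (xs % 2 == 1),  # B on odd rows, odd cols
    ]
    out = np.empty((h, w, 3))
    for c, mask in enumerate(masks):
        vals = np.where(mask, mosaic, 0.0)
        out[:, :, c] = _box3(vals) / np.maximum(_box3(mask.astype(float)), 1.0)
    return out
```

A uniform gray mosaic comes back as a uniform gray RGB image, which is a quick sanity check on the masks.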

Most of the image is very dark because monitors don't have as much dynamic range as the human eye: even if you are using an OLED, the screen still reflects ambient light, which limits how black it can get.

There's also another, sneakier factor contributing to this...

True brightness gradient

... our perception of brightness is non-linear.

If you were to naively quantize brightness — a requirement to store it on a computer — most of the available numbers would be wasted on nearly identical shades of white. Because this is a very inefficient use of memory, most color spaces assign extra bins to darker colors. This is what a linear gradient of sRGB colors looks like:

sRGB gradient

If linear data is displayed directly, the midtones will be much darker than they should be.
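The sRGB encoding curve that does this reassignment of bins is standardized: a short linear segment near black, then a power curve. A sketch of it:

```python
import numpy as np

def linear_to_srgb(v):
    """Standard sRGB encoding: 12.92*v below a small cutoff, then a
    2.4-exponent power curve. It spends more of the quantization bins
    on dark values, where the eye can still tell shades apart."""
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.0031308,
                    12.92 * v,
                    1.055 * v ** (1 / 2.4) - 0.055)

# Mid-gray in linear light encodes to roughly 0.74 in sRGB,
# which is why linear data shown directly looks far too dark.
print(linear_to_srgb(0.5))
```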

Whatever the cause, the darkness can be fixed by applying a non-linear curve to each channel to brighten up the dark areas... but this doesn't quite work out:

ohno

Some of this green cast is caused by the camera sensor being intrinsically more sensitive to green light, but some of it is my fault: There are twice as many green pixels in the filter matrix. When I averaged RGB values to demosaic the image, it boosted the green channel even higher.

This needs to be fixed with white balancing: multiplying each color channel by a constant to equalize their brightnesses... however, because the image's brightness values are now non-linear, I have to go back a step to do this.
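As a sketch, white balancing in linear space is just a per-channel multiply. The gain values below are made up for illustration, not the ones used for this photo:

```python
import numpy as np

def white_balance(linear_rgb, gains):
    """Multiply each channel of a linear-light image by a constant gain.

    This must happen on linear data: applying gains after a non-linear
    curve would shift hues, not just relative channel brightness.
    """
    return np.clip(linear_rgb * np.asarray(gains), 0.0, 1.0)

# Illustrative only: boost red, leave the doubled-up green alone.
img = np.full((2, 2, 3), 0.25)
balanced = white_balance(img, gains=[1.9, 1.0, 1.4])
```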

This is the linear image from the demosaicing step, but with the RGB values temporarily scaled up so I can see something:

After some playing around, I was able to get the image looking like this:

Banishing the green

... and after re-applying the curve:

Finally: A decent photo.

This is really just the bare minimum needed for digital photography: I haven't done any color calibration, the white balance isn't perfect, the black point is too high, there's lots of noise that needs to be cleaned up...

Applying the curve to each color channel desaturated the highlights. This effect looks rather good — and is what we've come to expect from film — but it has also de-yellowed the star. It's possible to separate each pixel's luminance and curve that while preserving color.

On its own, this would turn the LED Christmas lights into an oversaturated mess, but combining both methods can produce nice results.
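A minimal sketch of that luminance-only curve, assuming Rec. 709 luminance weights (the actual weights depend on the working color space):

```python
import numpy as np

def curve_luminance(linear_rgb, curve):
    """Apply a tone curve to luminance only, preserving each pixel's
    RGB ratios (and therefore its hue and saturation).

    `curve` is any monotonic function on [0, 1]. Because the channel
    ratios are kept, highlights are NOT desaturated the way a
    per-channel curve desaturates them.
    """
    w = np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luminance weights
    y = linear_rgb @ w                        # per-pixel luminance
    y_safe = np.maximum(y, 1e-12)
    scale = curve(y_safe) / y_safe            # per-pixel gain
    return np.clip(linear_rgb * scale[..., None], 0.0, 1.0)

# Example: brighten shadows with a gamma lift on luminance only.
out = curve_luminance(np.full((1, 1, 3), 0.1), lambda y: y ** (1 / 2.2))
```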

For comparison, here's the image my camera produced from the same data:

"in camera" JPEG.

Far from being an "unedited" photo, a huge amount of work has gone into making this image.

There's nothing happening when you adjust the contrast or white balance in editing software that the camera doesn't do under the hood. The tweaked image isn't "faker" than the original: they are different renditions of the same data.

Replicating human perception is hard, and it's made harder when constrained to the limitations of display technology or printed images. You don't have to be ashamed about adjusting a photo when automated algorithms make the wrong call.



Comments

  • By barishnamazov 2025-12-28 23:27 (16 replies)

    I love posts that peel back the abstraction layer of "images." It really highlights that modern photography is just signal processing with better marketing.

    A fun tangent on the "green cast" mentioned in the post: the reason the Bayer pattern is RGGB (50% green) isn't just about color balance, but spatial resolution. The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data. In many advanced demosaicing algorithms, the pipeline actually reconstructs the green channel first to get a high-resolution luminance map, and then interpolates the red/blue signals—which act more like "color difference" layers—on top of it. We can get away with this because the human visual system is much more forgiving of low-resolution color data than it is of low-resolution brightness data. It’s the same psycho-visual principle that justifies 4:2:0 chroma subsampling in video compression.

    Also, for anyone interested in how deep the rabbit hole goes, looking at the source code for dcraw (or libraw) is a rite of passage. It’s impressive how many edge cases exist just to interpret the "raw" voltages from different sensor manufacturers.

    • By shagie 2025-12-29 1:18 (3 replies)

      > A fun tangent on the "green cast" mentioned in the post: the reason the Bayer pattern is RGGB (50% green) isn't just about color balance, but spatial resolution. The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data.

      From the classic file format "ppm" (portable pixel map) the ppm to pgm (portable grayscale map) man page:

      https://linux.die.net/man/1/ppmtopgm

          The quantization formula ppmtopgm uses is g = .299 r + .587 g + .114 b.
      
      You'll note the relatively high value of green there, making up nearly 60% of the luminosity of the resulting grayscale image.

      I also love the quote in there...

         Quote
      
         Cold-hearted orb that rules the night
         Removes the colors from our sight
         Red is gray, and yellow white
         But we decide which is right
         And which is a quantization error.
      
      (context for the original - https://www.youtube.com/watch?v=VNC54BKv3mc )

      • By skrebbel 2025-12-29 10:25 (2 replies)

        > The quantization formula ppmtopgm uses is g = .299 r + .587 g + .114 b.

        Seriously. We can trust linux man pages to use the same 1-letter variable name for 2 different things in a tiny formula, can't we?

        • By bayindirh 2025-12-30 7:11

          In my copy the exact line is as follows:

              The quantization formula ppmtopgm uses is y = .299 r + .587 g + .114 b.
          
          so, it's either fixed, or it's a typo to begin with.

        • By yencabulator 2025-12-29 23:58 (1 reply)

          Yes, let's blame the documentation for a program from 1989 on a kernel from 1991.

          • By fragmede 2025-12-30 0:39

            Well, we can't blame that on an LLM in any case

      • By boltzmann-brain 2025-12-29 3:37 (2 replies)

        Funnily enough that's not the only mistake he made in that article. His final image is noticeably different from the camera's output image because he rescaled the values in the first step. That's why the dark areas look so crushed, eg around the firewood carrier on the lower left or around the cat, and similarly with highlights, e.g. the specular highlights on the ornaments.

        After that, the next most important problem is the fact he operates in the wrong color space, where he's boosting raw RGB channels rather than luminance. That means that some objects appear much too saturated.

        So his photo isn't "unprocessed", it's just incorrectly processed.

        • By tpmoney 2025-12-29 5:57 (1 reply)

          I didn’t read the article as implying that the final image the author arrived at was “unprocessed”. The point seemed to be that the first image was “unprocessed” but that the “unprocessed” image isn’t useful as a “photo”. You only get a proper “picture” Of something after you do quite a bit of processing.

          • By integralid 2025-12-29 6:30 (1 reply)

            Definitely what the author means:

            >There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” than the original: they are different renditions of the same data.

            • By viraptor 2025-12-29 7:47 (2 replies)

              That's not how I read it. As in, this is an incidental comment. But the unprocessed version is the raw values from the sensors visible in the first picture, the processed are both the camera photo and his attempt at the end.

              • By eloisius 2025-12-29 11:26 (1 reply)

                This whole post read like an in-depth response to people who claim things like “I don’t do any processing to my photos” or feel some kind of purist shame about doing so. It’s a weird chip some amateur photographers have on their shoulders, but even pros “process” their photos and have done so all the way back to the beginning of photography.

                • By Edman274 2025-12-29 18:59 (2 replies)

                  Is it fair to recognize that there is a category difference between the processing that happens by default on every cell phone camera today, and the time and labor intensive processing performed by professionals in the time of film? What's happening today is like if you took your film to a developer and then the negatives came back with someone having airbrushed out the wrinkles and evened out skin tones. I think that photographers back in the day would have made a point of saying "hey, I didn't take my film to a lab where an artist goes in and changes stuff."

                  • By fragmede 2025-12-30 0:44

                    The Kent State massacre pole picture is a point of controversy in this area, but may be more relevant than ever.

                    https://petapixel.com/2012/08/29/the-kent-state-massacre-pho...

                  • By eloisius 2025-12-30 1:44 (1 reply)

                    It’s fair to recognize. Personally I do not like the aesthetic decisions that Apple makes, so if I’m taking pictures on my phone I use camera apps that give me more control (Halide, Leica Lux). I also have reservations about cloning away power lines or using AI in-painting. But to your example, if you got your film scanned or printed, in all likelihood someone did go in and change some stuff. Color correction and touching the contrast etc. is routine at development labs. There is no tenable purist stance because there is no “traditional” amount of processing.

                    Some things are just so far outside the bounds of normal, and yet are still world-class photography. Just look at someone like Antoine d’Agata who shot an entire book using an iPhone accessory FLIR camera.

                    • By names_are_hard 2026-01-01 10:42

                      I would argue that there's a qualitative difference between processing that aims to get the image to the point where it's a closer rendition of how the human eye would have perceived the subject (the stuff described in TFA) vs processing that explicitly tries to make the image further from the in-person experience (removing power lines, people from the background, etc)

              • By svara 2025-12-29 11:18 (1 reply)

                But mapping raw values to screen pixel brightness already entails an implicit transform, so arguably there is no such thing as an unprocessed photo (that you can look at).

                Conversely the output of standard transforms applied to a raw Bayer sensor output might reasonably be called the "unprocessed image", since that is what the intended output of the measurement device is.

                • By Edman274 2025-12-29 19:02

                  Would you consider all food in existence to be "processed", because ultimately all food is chopped up by your teeth or broken down by your saliva and stomach acid? If some descriptor applies to every single member of a set, why use the descriptor at all? It carries no semantic value.

        • By seba_dos1 2025-12-29 14:55

          You do need to rescale the values as the first step, but not exactly the described way (you need to subtract the data pedestal in order to get linear values).

      • By akx 2025-12-29 7:49

        If someone's curious about those particular constants, they're the PAL Y' matrix coefficients: https://en.wikipedia.org/wiki/Y%E2%80%B2UV#SDTV_with_BT.470

    • By delecti 2025-12-28 23:52 (2 replies)

      I have a related anecdote.

      When I worked at Amazon on the Kindle Special Offers team (ads on your eink Kindle while it was sleeping), the first implementation of auto-generated ads was by someone who didn't know that properly converting RGB to grayscale was a smidge more complicated than just averaging the RGB channels. So for ~6 months in 2015ish, you may have seen a bunch of ads that looked pretty rough. I think I just needed to add a flag to the FFmpeg call to get it to convert RGB to luminance before mapping it to the 4-bit grayscale needed.

      • By isoprophlex 2025-12-29 7:43 (1 reply)

        I wouldn't worry about it too much, looking at ads is always a shitty experience. Correctly grayscaled or not.

        • By wolvoleo 2025-12-29 14:44 (2 replies)

          True, though in the case of the Kindle they're not really intrusive (only appearing when it's off) and the price to remove them is pretty reasonable ($10 to remove them forever IIRC).

          As far as ads go, that's not bad IMO.

          • By marxisttemp 2025-12-29 16:42 (1 reply)

            The price of an ad-free original kindle experience was $409. The $10 is on top of the price the user paid for the device.

            • By delecti 2025-12-29 19:40 (1 reply)

              Let's not distort the past. The ads were introduced a few years later with the Kindle Keyboard, which launched with an MSRP of $140 for the base model, or $115 with ads. That was a substantial discount on a product which was already cheap when it released.

              All for ads which are only visible when you aren't using the device anyway. Don't like them? Then buy other devices, pay to have them removed, get a cover to hide them, or just store it with the screen facing down when you aren't using it.

              • By wolvoleo 2025-12-30 4:40

                Yes and here in Europe they were introduced even later, with kindle 4 IIRC.

          • By account42 2026-01-09 16:05

            Sure, and piss doesn't taste quite as bad as shit yet I still don't want it in my food.

      • By barishnamazov 2025-12-29 0:05 (2 replies)

        I don't think Kindle ads were available in my region in 2015 because I don't remember seeing these back then, but you're a lucky one to fix this classic mistake :-)

        I remember trying out some of the home-made methods while I was implementing a creative work section for a school assignment. It’s surprising how "flat" the basic average looks until you actually respect the coefficients (usually some flavor of 0.21R + 0.72G + 0.07B). I bet it's even more apparent in a 4-bit display.

        • By kccqzy 2025-12-29 0:39 (1 reply)

          I remember using some photo editing software (Aperture I think) that would allow you to customize the different coefficients and there were even presets that give different names to different coefficients. Ultimately you can pick any coefficients you want, and only your eyes can judge how nice they are.

          • By acomjean 2025-12-29 4:44 (2 replies)

            >Ultimately you can pick any coefficients you want, and only your eyes can judge how nice they are.

            I went to a photoshop conference. There was a session on converting color to black and white. Basically at the end the presenter said you try a bunch of ways and pick the one that looks best.

            (people there were really looking for the “one true way”)

            I shot a lot of black and white film in college for our paper. One of my obsolete skills was thinking how an image would look in black and white while shooting, though I never understood the people who could look at a scene and decide to use a red filter..

            • By jnovek 2025-12-29 14:48 (1 reply)

              This is actually a real bother to me with digital — I can never get a digital photo to follow the same B&W sensitivity curve as I had with film so I can never digitally reproduce what I “saw” when I took the photo.

              • By marssaxman 2025-12-29 16:49

                Film still exists, and the hardware is cheap now!

                I am shooting a lot of 120-format Ilford HP5+ these days. It's a different pace, a different way of thinking about the craft.

            • By Grimm665 2025-12-29 20:01

              > I shot a lot of black and white film in college for our paper. One of my obsolete skills was thinking how an image would look in black and white while shooting, though I never understood the people who could look at a scene and decide to use a red filter..

              Dark skies and dramatic clouds!

              https://i.ibb.co/0RQmbBhJ/05.jpg

              (shot on Rollei Superpan with a red filter and developed at home)

        • By reactordev 2025-12-29 0:21 (2 replies)

          If you really want that old school NTSC look: 0.3R + 0.59G + 0.11B

          This is the coefficients I use regularly.

          • By ycombiredd 2025-12-29 1:31 (2 replies)

            Interesting that the "NTSC" look you describe is essentially rounded versions of the coefficients quoted in the comment mentioning ppm2pgm. I don't know the lineage of the values you used of course, but I found it interesting nonetheless. I imagine we'll never know, but it would be cool to be able to trace the path that led to their formula, as well as the path to you arriving at yours.

            • By zinekeller 2025-12-29 1:49 (2 replies)

              The NTSC color coefficients are the grandfather of all luminance coefficients.

              It had to be precisely defined because of the requirements of backwards-compatible color transmission (YIQ is the common abbreviation for the NTSC color space, I being ~reddish and Q being ~blueish): basically they treated B&W (technically monochrome) pictures the way B&W film and video tubes treated them: great in green, average in red, and poor in blue.

              A bit unrelated: pre-color transition, the makeup used was actually slightly greenish too (which appears nicely in monochrome).

              • By shagie 2025-12-29 2:21 (1 reply)

                To the "the grandfather of all luminance coefficients" ... https://www.earlytelevision.org/pdf/ntsc_signal_specificatio... from 1953.

                Page 5 has:

                    Eq' = 0.41 (Eb' - Ey') + 0.48 (Er' - Ey')
                    Ei' = -0.27(Eb' - Ey') + 0.74 (Er' - Ey')
                    Ey' = 0.30Er' + 0.59Eg' + 0.11Eb'
                
                The last equation contains those coefficients.

                • By zinekeller 2025-12-29 2:32 (1 reply)

                  I was actually researching why PAL YUV has the same(-ish) coefficients, while forgetting that PAL is essentially a refinement of the NTSC color standard (PAL stands for phase-alternating line, which solves much of NTSC's color drift issues early in its life).

                  • By adrian_b 2025-12-29 10:46 (1 reply)

                    It is the choice of the 3 primary colors and of the white point which determines the coefficients.

                    PAL and SECAM use different color primaries than the original NTSC, and a different white, which lead to different coefficients.

                    However, the original color primaries and white used by NTSC had become obsolete very quickly so they no longer corresponded with what the TV sets could actually reproduce.

                    Eventually even for NTSC a set of primary colors was used that was close to that of PAL/SECAM, which was much later standardized by SMPTE in 1987. The NTSC broadcast signal continued to use the original formula, for backwards compatibility, but the equipment processed the colors according to the updated primaries.

                    In 1990, Rec. 709 standardized a set of primaries intermediate between those of PAL/SECAM and of SMPTE, which was later also adopted by sRGB.

                    • By zinekeller 2025-12-29 12:48 (1 reply)

                      Worse, "NTSC" is not a single standard; Japan deviated so much that the primaries are defined by their own ARIB standard (notably a ~9000 K white point).

                      ... okay, technically PAL and SECAM too, but only in audio (analogue Zweikanalton versus digital NICAM), bandwidth placement (channel plan and relative placement of audio and video signals, and, uhm, teletext) and, uhm, teletext standard (French Antiope versus Britain's Teletext and Fastext).

                      • By zinekeller 2025-12-29 12:56 (1 reply)

                        (this is just a rant)

                        Honestly, the weird 16-235 (on 8-bit) color range and 60000/1001 fps limitations stem from the original NTSC standard, which, considering that both the Japanese NTSC adaptation and the European standards lack them, is rather frustrating nowadays. Both the HDVS and HD-MAC standards define these precisely (exactly 60 fps for HDVS and a 0-255 color range for HD-MAC*) but America being America...

                        * I know that HD-MAC is analog(ue), but it has an explicit digital step for transmission and it uses the whole 8 bits for the conversion!

                        • By reactordev 2025-12-29 14:29 (1 reply)

                          Ya’ll are a gold mine. Thank you. I only knew it from my forays into computer graphics and making things look right on (now older) LCD TV’s.

                          I pulled it from some old academia papers about why you can’t just max(uv.rgb) to do greyscale nor can you do float val = uv.r

                          This further gets funky when we have BGR vs RGB and have to swivel the bytes beforehand.

                          Thanks for adding clarity and history to where those weights came from, why they exist at all, and the decision tree that got us there.

                          People don’t realize how many man hours went into those early decisions.

                          • By shagie 2025-12-29 15:14 (1 reply)

                            > People don’t realize how many man hours went into those early decisions.

                            In my "trying to hunt down the earliest reference for the coefficients" I came across "Television standards and practice; selected papers from the Proceedings of the National television system committee and its panels" at https://archive.org/details/televisionstanda00natirich/mode/... which you may enjoy. The "problem" in trying to find the NTSC color values is that the collection of papers is from 1943... and color TV didn't become available until the 50s (there is some mention of color but I couldn't find it) - most of the questions of color are phrased with "should".

                            • By reactordev 2025-12-29 16:15

                              This is why I love graphics and game engines. It's this focal point of computer science, art, color theory, physics, practical implications for other systems around the globe, and humanities.

                              I kept a journal as a teenager when I started and later digitized it when I was in my 20s. The biggest impact was mostly SIGGRAPH papers that are now available online such as "Color Gamut Transform Pairs" (https://www.researchgate.net/publication/233784968_Color_Gam...).

                              I bought all the GPU Gems books, all the ShaderX books (shout out to Wolfgang Engel, his books helped me tremendously), and all the GPU pro books. Most of these are available online now but I had sagging bookshelves full of this stuff in my 20s.

                              Now in my late 40s, I live like an old japanese man with minimalism and very little clutter. All my readings are digital, iPad-consumable. All my work is online, cloud based or VDI or ssh away. I still enjoy learning but I feel like because I don't have a prestigious degree in the subject, it's better to let others teach it. I'm just glad I was able to build something with that knowledge and release it into the world.

              • By ycombiredd 2025-12-29 2:39 (1 reply)

                Cool. I could have been clearer in my post; as I understand it actual NTSC circuitry used different coefficients for RGBx and RGBy values, and I didn't take time to look up the official standard. My specific pondering was based on an assumption that neither the ppm2pgm formula nor the parent's "NTSC" formula were exact equivalents to NTSC, and my "ADHD" thoughts wondered about the provenance of how each poster came to use their respective approximations. While I write this, I realize that my actual ponderings are less interesting than the responses generated because of them, so thanks everyone for your insightful responses.

                • By reactordev 2025-12-29 3:26

                  There are no stupid questions, only stupid answers. It’s questions that help us understand and knowledge is power.

            • By reactordev 2025-12-29 1:42

              I’m sure it has its roots in amiga or TV broadcasting. ppm2pgm is old school too so we all tended to use the same defaults.

              Like q3_sqrt

          • By JKCalhoun 2025-12-29 15:38

            Yep, used in the early MacOS color picker as well when displaying greyscale from RGB values. The three weights (which of course add to 1.0) clearly show a preference for the green channel for luminosity (as was discussed in the article).

    • By liampulles 2025-12-29 8:10 (3 replies)

      The bit about the green over-representation in camera color filters is partially correct. Human color sensitivity varies a lot from individual to individual (and not just amongst individuals with color blindness), but general statistics indicate we are most sensitive to red light.

      The main reason is that green does indeed overwhelmingly contribute to perceptual luminance (over 70% in sRGB once gamma corrected: https://www.w3.org/TR/WCAG20/#relativeluminancedef) and modern demosaicking algorithms will rely on both derived luminance and chroma information to get a good result (and increasingly spatial information, e.g. "is this region of the image a vertical edge").

      Small neural networks I believe are the current state of the art (e.g. train to reverse a 16x16 color filter pattern for the given camera). What is currently in use by modern digital cameras is all trade secret stuff.

      • By kuschku 2025-12-29 14:06 (2 replies)

        > Small neural networks I believe are the current state of the art (e.g. train to reverse a 16x16 color filter pattern for the given camera). What is currently in use by modern digital cameras is all trade secret stuff.

        Considering you usually shoot RAW, and debayer and process in post, the camera hasn't done any of that.

        It's only smartphones that might be doing internal AI Debayering, but they're already hallucinating most of the image anyway.

        • By liampulles 2025-12-29 21:58 (2 replies)

          Sure - if you don't want to do demosaicing on the camera, that's fine. It doesn't mean there is not an algorithm there as an option.

          If you care about trying to get an image that is as accurate as possible to the scene, then it is well within your interest to use a Convolutional Neural Network based algorithm, since these are amongst the highest performing in terms of measured PSNR (which is what nearly all demosaicing algorithms in academia are measured on). You are maybe thinking of generative AI?

          • By kuschku 2026-01-01 16:28

            At least in broadcast/cinema, no one uses CNN for debayering, because why would you?

            In cinema, you just use a 6K sensor and use conventional debayering for a perfect 4K image. Even the $2000 Sony FX-30 ships with that feature nowadays. Combined with a good optical low pass filter, that'll also avoid any and all moiré noise.

            In broadcast, if you worry about moiré noise or debayering quality, you just buy a Sony Z750 with a three-chip prism design, which avoids the problem entirely by just having three separate full-resolution sensors.

        • By 15155 2025-12-29 14:26 (1 reply)

          Yes, people usually shoot RAW (anyone spending this much on a camera knows better) - but these cameras default to JPEG and often have dual-capture (RAW+JPEG) modes.

          • By qubitcoder 2025-12-30 0:59 (1 reply)

            To be clear, they default to JPEG for the image preview on the monitor (LCD screen). Whenever viewing an image on a professional camera, you’re always seeing the resulting JPEG image.

            The underlying data is always captured as a RAW file, and only discarded if you’ve configured the camera to only store the JPEG image (discarding the original RAW file after processing).

            • By 15155 2025-12-30 1:20 (1 reply)

              > Whenever viewing an image on a professional camera

              Viewing any preview image on any camera implies a debayered version: who says it is JPEG-encoded, and why would it need to be? Every time I browse my SD card full of persisted RAWs, is the camera unnecessarily converting to JPEG just to convert it back to bitmap display data?

              > The underlying data is always captured as a RAW file, and only discarded if you’ve configured the camera to only store the JPEG image (discarding the original RAW file after processing).

              Retaining only JPEG is the default configuration on all current-generation Sony and Canon mirrorless cameras: you have to go out of your way to persist RAW.

              • By account42 2026-01-09 16:09

                The cameras typically store a camera display sized preview JPEG in the raw files.

      • By NooneAtAll3 2025-12-29 9:14 (1 reply)

        > we are most sensitive to red light

        > green does indeed overwhelmingly contribute to perceptual luminance

        so... if luminance contribution is different from "sensitivity" to you - what do you imply by sensitivity?

        • By liampulles 2025-12-29 10:35 (1 reply)

          Upon further reading, I think I am wrong here. My confusion was that I read that over 60% of the cones in one's eye are "red" cones (which is a bad generalization), and there is more nuance here.

          Given equal power red, blue, or green light hitting our eyes, humans tend to rate green "brighter" in pairwise comparative surveys. That is why it is predominant in a perceptual luminance calculation converting from RGB.

          Though there are many more L-cones (which react most strongly to "yellow" light, not "red"; also "many more" varies across individuals) than M-cones (which react most strongly to a "greenish cyan"), the combination of these two cones (which make up ~95% of the cones in the eye) means that we are able to sense green light much more efficiently than other wavelengths. S-cones (which react most strongly to "purple") are very sparse.

          • By skinwill 2025-12-29 15:33

            This is oversimplifying, but I always understood it as: our eyes can see red with very little power needed, but our eyes can differentiate more detail with green.

      • By devsda 2025-12-29 10:45 (2 replies)

        Is it related to the fact that monkeys/humans evolved around dense green forests?

        • By frumiousirc 2025-12-29 11:49 (1 reply)

          Well, plants and eyes long predate apes.

          Water is most transparent in the middle of the "visible" spectrum (green). It absorbs red and scatters blue. The atmosphere holds a lot of water, as does, of course, the ocean, which was the birthplace of plants and eyeballs.

          It would be natural for both plants and eyes to evolve to exploit the fact that there is a green notch in the water transparency curve.

          Edit: after scrolling, I find more discussion on this below.

          • By seba_dos1 2025-12-29 15:14

            Eyes aren't all equal. Our trichromacy is fairly rare in the world of animals.

        • By zuminator 2025-12-29 17:42

          I think any explanation along those lines would have a "just-so" aspect to it. How would we go about verifying such a thing? Perhaps by comparing the eyes of savanna apes with those of forest apes and finding a difference, which to my knowledge we do not. Anyway, sunlight at ground level peaks around 555 nm, so it's believed that we're optimized for that peak by being most sensitive to green.

    • By brookst 2025-12-29 1:09 (3 replies)

      Even old school chemical films were the same thing, just different domain.

      There is no such thing as “unprocessed” data, at least that we can perceive.

      • By cge 2025-12-29 23:09

        Yes. Writing a post like this, but for film, would be illustrative of that similarity, but significantly more challenging to represent, especially for color film. I actually don't know the whole process in enough detail to write one, and the visualizations would be difficult, but the processing is there.

        You have layers of substrate with silver halides, made sensitive to different frequency ranges with sensitizing dyes, crystallized into silver halide crystals, rather than a regular grid of pixels; you take a photo that is not an image, but a collection of specks of metallic silver. Through a series of chemical reactions, you develop those specks. Differences in chemistry, in temperatures, in agitation, in the film, all affect what for digital images is described as processing. Then in printing, you have a similar process all over again.

        If anything, one might argue that the digital process allows a more consistent and quantitative understanding of the actual processing being done. Analog film seems like it involves less processing only because, for most people, the processing was always a black box of sending off the film for development and printing.

      • By kdazzle 2025-12-29 4:10 (2 replies)

        Exactly - film photographers heavily process(ed) their images from the film processing through to the print. Ansel Adams wrote a few books on the topic and they’re great reads.

        And different films and photo papers can have totally different looks, defined by the chemistry of the manufacturer and however _they_ want things to look.

        • By acomjean 2025-12-29 5:05 (2 replies)

          Excepting slide photos. No real adjustment once taken (a more difficult medium than negative film which you can adjust a little when printing)

          You’re right about Ansel Adams. He “dodged and burned” extensively (lightened and darkened areas when printing.) Photoshop kept the dodge and burn names on some tools for a while.

          https://m.youtube.com/watch?v=IoCtni-WWVs

          When we printed for our college paper, we had a dial that could adjust the printed contrast of our black-and-white "multigrade" paper a bit (it added red light). People would also mess with the processing to get different results (cold/sepia toning). It was hard to get exactly what you wanted, and I kind of see why digital took over.

          • By cge 2025-12-29 23:11

            >Excepting slide photos. No real adjustment once taken (a more difficult medium than negative film which you can adjust a little when printing)

            One might argue that there, many of the processing choices are being made by the film manufacturer, in the sensitizing dyes being used, etc.

          • By macintux 2025-12-29 15:09

            I found one way to "adjust" slide photos: I accidentally processed a (color) roll of mine using C-41. The result was surprisingly not terrible.

        • By NordSteve 2025-12-29 17:13

          A school photography company I worked for used a custom Kodak stock. They were unsatisfied with how Kodak's standard portrait film handled darker skin tones.

          They were super careful to maintain the look across the transition from film to digital capture. Families display multiple years of school photos next to each other and they wanted a consistent look.

      • By adrian_b 2025-12-29 11:00

        True, but there may be different intentions behind the processing.

        Sometimes the processing has only the goal to compensate the defects of the image sensor and of the optical elements, in order to obtain the most accurate information about the light originally coming from the scene.

        Other times the goal of the processing is just to obtain an image that appears best to the photographer, for some reason.

        For casual photographers, the latter goal is typical, but in scientific or technical applications the former goal is frequently encountered.

        Ideally, a "raw" image format is one where the differences between it and the original image are well characterized and there are no additional unknown image changes done for an "artistic" effect, in order to allow further processing when having either one of the previously enumerated goals.

    • By dheera 2025-12-29 0:10 (7 replies)

      This is also why I absolutely hate, hate, hate it when people ask me whether I "edited" a photo or whether a photo is "original", as if trying to explain away nice-looking images as fake.

      The JPEGs cameras produce are heavily processed, and they are emphatically NOT "original". Taking manual control of that process to produce an alternative JPEG with different curves, mappings, calibrations, is not a crime.

      • By beezle 2025-12-29 3:59 (2 replies)

        As a mostly amateur photographer, it doesn't bother me if people ask that question. While I understand the point that the camera itself may be making some 'editing' type decisions on the data first, a) in theory each camera maker has attempted to calibrate the output to some standard, and b) public would expect two photos taken at same time with same model camera should look identical. That differs greatly from what often can happen in "post production" editing: you'll never find two that are identical.

        • By vladvasiliu 2025-12-29 11:08

          > public would expect two photos taken at same time with same model camera should look identical

          But this is wrong. My not-too-exotic 9-year-old camera has a bunch of settings which affect the resulting image quite a bit. Without going into "picture styles", or "recipes", or whatever they're called these days, I can alter saturation, contrast, and white balance (I can even tell it to add a fixed alteration to the auto WB and tell it to "keep warm colors"). And all these settings will alter how the in-camera produced JPEG will look, no external editing required at all.

          So if two people are sitting in the same spot with the same camera, who's to say they both set them up identically? And if they didn't, which produces the "non-processed" one?

          I think the point is that the public doesn't really understand how these things work. Even without going to the lengths described by another commenter (local adjust so that there appears to be a ray of light in that particular spot, remove things, etc), just playing with the curves will make people think "it's processed". And what I described above is precisely what the camera itself does. So why is there a difference if I do it manually after the fact or if I tell the camera to do it for me?

        • By integralid 2025-12-29 6:35

          You and other responders to GP disagree with TFA:

          >There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” then the original: they are different renditions of the same data.

      • By gorgolo 2025-12-29 9:09

        I noticed this a lot when taking pictures in the mountains.

        I used to have a high-resolution camera on a cheaper phone and later switched to an iPhone. The latter produced much nicer pictures; my old phone just produced very flat-looking ones.

        People say that the iPhone camera automatically edits the images to look better. And in a way I notice that too. But that's the wrong way of looking at it; the more-edited picture from the iPhone actually corresponds more to my perception when I'm actually looking at the scene. The white of the snow and glaciers and the deep blue sky really does look amazing in real life, and when my old phone captured it as a flat and disappointing-looking photo with less postprocessing than an iPhone, it genuinely failed to capture what I can see with my eyes. And the more vibrant post-processed colours of an iPhone really do look more like what I think I'm looking at.

      • By dsego 2025-12-29 9:21 (2 replies)

        I don't think it's the same. For me personally, I don't like heavily processed images. Not in the sense that they need processing to look decent or to convey the perception of what it was like in real life, but in the sense that the edits change the reality in a significant way, so they affect the mood and the experience. For example, you take a photo on a drab cloudy day, but then edit the white balance to make it seem like golden hour, or brighten a part to make it seem like a ray of light was hitting that spot. Adjusting the exposure, touching up slightly, that's all fine, depending on what you are trying to achieve of course. But what I see on Instagram or shorts these days is people comparing their raws and edited photos, and without the edits the composition and subject would be just mediocre and uninteresting.

        • By gorgolo 2025-12-29 11:43

          The “raw” and unedited photo can be just as or even more unrealistic than the edited one though.

          Photographs can drop a lot of the perspective, feeling and colour you experience when you’re there. When you take a picture of a slope on a mountain for example (on a ski piste for example), it always looks much less impressive and steep on a phone camera. Same with colours. You can be watching an amazing scene in the mountains, but when you take a photo with most cameras, the colours are more dull, and it just looks flatter. If a filter enhances it and makes it feel as vibrant as the real life view, I’d argue you are making it more realistic.

          The main message I get from OP’s post is precisely that there is no “real unfiltered / unedited image”, you’re always imperfectly capturing something your eyes see, but with a different balance of colours, different detector sensitivity to a real eye etc… and some degree of postprocessing is always required make it match what you see in real life.

        • By foldr 2025-12-29 11:42

          This is nothing new. For example, Ansel Adams’s famous Moonrise, Hernandez photo required extensive darkroom manipulations to achieve the intended effect:

          https://www.winecountry.camera/blog/2021/11/1/moonrise-80-ye...

          Most great photos have mediocre and uninteresting subjects. It’s all in the decisions the photographer makes about how to render the final image.

      • By make3 2025-12-29 1:19 (1 reply)

        it's not a crime, but applying post-processing in an overly generous way that goes a lot further than replicating what a human sees does take away from what makes pictures interesting vs other mediums imho: that they're a genuine representation of something that actually happened.

        if you take that away, a picture is not very interesting: it's hyperrealistic, so not super creative a lot of the time (compared to e.g. paintings), and it doesn't even require the mastery of other mediums to achieve hyperrealism

        • By Eisenstein 2025-12-29 1:34 (2 replies)

          Do you also want the IR light to be in there? That would make it more of a 'genuine representation'.

          • By BenjiWiebe 2025-12-29 2:09 (2 replies)

            Wouldn't be a genuine version of what my eyes would've seen, had I been the one looking instead of the camera.

            I can't see infrared.

            • By ssl-3 2025-12-29 4:02 (1 reply)

              Perhaps interestingly, many/most digital cameras are sensitive to IR and can record, for example, the LEDs of an infrared TV remote.

              But they don't see it as IR. Instead, this infrared information just kind of irrevocably leaks into the RGB channels that we do perceive. With the unmodified camera on my Samsung phone, IR shows up kind of purple-ish. Which is... well... it's fake. Making invisible IR into visible purple is an artificially-produced artifact of the process that results in me being able to see things that are normally ~impossible for me to observe with my eyeballs.

              When you generate your own "genuine" images using your digital camera(s), do you use an external IR filter? Or are you satisfied with knowing that the results are fake?

              • By lefra 2025-12-29 8:17

                Silicon sensors (which is what you'll get in all visible-light cameras as far as I know) are all very sensitive to near-IR. Their peak sensitivity is around 900nm. The difference between cameras that can see or not see IR is the quality of their anti-IR filter.

                On your Samsung phone, the green filter of the Bayer matrix probably blocks IR better than the blue and red ones do.

                Here's a random spectral sensitivity for a silicon sensor:

                https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRkffHX...

            • By Eisenstein 2025-12-29 2:46

              But the camera is trying to emulate how it would look if your eyes were seeing it. In order for it to be 'genuine' you would need not only the camera to be genuine, but also the OS, the video driver, the viewing app, the display, and the image format/compression. They all do things to the image that are not genuine.

          • By make3 2025-12-29 5:33

            "of what I would've seen"

      • By to11mtm 2025-12-29 0:26 (1 reply)

        JPEG with OOC processing is different from JPEG OOPC (out-of-phone-camera) processing. Thank Samsung for forcing the need to differentiate.

        • By seba_dos1 2025-12-29 0:40 (2 replies)

          I wrote the raw Bayer to JPEG pipeline used by the phone I write this comment on. The choices on how to interpret the data are mine. Can I tweak these afterwards? :)

          • By Uncorrelated 2025-12-29 7:24 (1 reply)

            I found the article you wrote on processing Librem 5 photos:

            https://puri.sm/posts/librem-5-photo-processing-tutorial/

            Which is a pleasant read, and I like the pictures. Has the Librem 5's automatic JPEG output improved since you wrote the post about photography in Croatia (https://dosowisko.net/l5/photos/)?

            • By seba_dos1 2025-12-29 12:09

              Yes, these are quite old. I've written a GLSL shader that acts as a simple ISP capable of real-time video processing and described it in detail here: https://source.puri.sm/-/snippets/1223

              It's still pretty basic compared to hardware accelerated state-of-the-art, but I think it produces decent output in a fraction of a second on the device itself, which isn't exactly a powerhouse: https://social.librem.one/@dos/115091388610379313

              Before that, I had an app for offline processing that was calling darktable-cli on the phone, but it took about 30 seconds to process a single photo with it :)

          • By to11mtm 2025-12-29 1:08

            I mean, it depends: does your Bayer-to-JPEG pipeline try to detect things like 'this is a zoomed-in picture of the moon' and then do an auto-fixup to put a perfect moon image there? That's why there's some need to differentiate between SOOCs now, because Samsung did that.

            I know my Sony gear can't call out to AI because the WIFI sucks like every other Sony product and barely works inside my house, but also I know the first ILC manufacturer that tries to put AI right into RAW files is probably the first to leave part of the photography market.

            That said, I'm a purist to the point where I always offer RAWs for my work [0] and don't do any photoshop/etc.: just D/A, horizon, and brightness adjust/crop to taste.

            Where phones can possibly do better is the smaller size and true MP structure of a cell phone camera sensor, which makes it easier to handle things like motion blur and rolling shutter.

            But I have yet to see anything that gets closer to an ILC in true quality than the decade-plus-old PureView cameras on Nokia phones, probably partly because they often had large enough sensors.

            There's only so much computation can do to simulate true physics.

            [0] - I've found people -like- that. TBH, it helps that I tend to work cheap or for barter type jobs in that scene, however it winds up being something where I've gotten repeat work because they found me and a 'photoshop person' was cheaper than getting an AIO pro.

      • By fc417fc802 2025-12-29 2:52

        There's a difference between an unbiased (roughly speaking) pipeline and what (for example) JBIG2 did. The latter counts as "editing" and "fake" as far as I'm concerned. It may not be a crime but at least personally I think it's inherently dishonest to attempt to play such things off as "original".

        And then there's all the nonsense BigTech enables out of the box today with automated AI touch ups. That definitely qualifies as fakery although the end result may be visually pleasing and some people might find it desirable.

      • By qwertywert_ 2025-12-30 1:15

        That's completely unreasonable. Sure, the camera processes them heavily, but when you open one up and start editing in Photoshop you are changing this area over that one, or highlighting one color over another, or just boosting the brightness way higher than what it looked like that day. It's a perfectly normal question to ask.

    • By JumpCrisscross 2025-12-29 5:37 (1 reply)

      > modern photography is just signal processing with better marketing

      I pass on a gift I learned of from HN: Susan Sunday’s “On Photography”.

      • By raphman 2025-12-29 7:49 (1 reply)

        Thanks! First hit online: https://www.lab404.com/3741/readings/sontag.pdf

        Out of curiosity: what led you to write "Susan Sunday" instead of "Susan Sontag"? (for other readers: "Sonntag" is German for "Sunday")

        • By JumpCrisscross 2025-12-29 19:29

          > Out of curiosity: what led you to write "Susan Sunday" instead of "Susan Sontag"?

          Grew up speaking German and Sunday-night brain did a substitution.

    • By integralid 2025-12-29 6:51 (1 reply)

      And this is just what happens for a single frame. It doesn't even touch computational photography[1].

      [1] https://dpreview.com/articles/9828658229/computational-photo...

      • By cataflam 2025-12-29 10:24

        Great series of articles!

    • By yzydserd 2025-12-29 10:56

    • By mradalbert 2025-12-29 9:52

      Also worth noting that manufacturers advertise the photodiode count as the sensor resolution. So if you have a 12 Mp sensor, your green resolution is 6 Mp and blue and red are 3 Mp each.
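
      As a trivial sketch of that arithmetic (illustrative function name, assuming an RGGB Bayer layout):

```python
# In an RGGB Bayer sensor, half the photodiodes sit behind green filters
# and a quarter each behind red and blue, so the advertised resolution
# overstates what any single colour channel actually sampled.
def bayer_channel_megapixels(total_mp):
    return {"green": total_mp / 2, "red": total_mp / 4, "blue": total_mp / 4}

print(bayer_channel_megapixels(12))  # {'green': 6.0, 'red': 3.0, 'blue': 3.0}
```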

    • By formerly_proven 2025-12-29 11:01

      > It really highlights that modern photography is just signal processing with better marketing.

      Showing linear sensor data on a logarithmic output device to demonstrate how heavily images are processed is an (often featured) sleight of hand, however.

    • By fsckboy 2025-12-30 0:24

      >It really highlights that modern photography is just signal processing with better marketing

      your perception of the world is just signal processing that's susceptible to marketing

    • By mwambua 2025-12-29 5:42 (3 replies)

      > The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data

      How does this affect luminance perception for deuteranopes? (Since their color blindness is caused by a deficiency of the cones that detect green wavelengths)

      • By fleabitdev 2025-12-29 9:47

        Protanopia and protanomaly shift luminance perception away from the longest wavelengths of visible light, which causes highly-saturated red colours to appear dark or black. Deuteranopia and deuteranomaly don't have this effect. [1]

        Blue cones make little or no contribution to luminance. Red cones are sensitive across the full spectrum of visible light, but green cones have no sensitivity to the longest wavelengths [2]. Since protans don't have the "hardware" to sense long wavelengths, it's inevitable that they'd have unusual luminance perception.

        I'm not sure why deutans have such a normal luminous efficiency curve (and I can't find anything in a quick literature search), but it must involve the blue cones, because there's no way to produce that curve from the red-cone response alone.

        [1]: https://en.wikipedia.org/wiki/Luminous_efficiency_function#C...

        [2]: https://commons.wikimedia.org/wiki/File:Cone-fundamentals-wi...

      • By doubletwoyou 2025-12-29 7:24 (1 reply)

        The cones are the colour-sensitive portion of the retina, but only make up a small percentage of all the light-detecting cells. The rods (more or less the brightness-detecting cells) would still function in a deuteranopic person, so their luminance perception would basically be unaffected.

        Also there’s something to be said about the fact that the eye is a squishy analog device, and so even if the medium wavelengths cones are deficient, long wavelength cones (red-ish) have overlap in their light sensitivities along with medium cones so…

        • By fleabitdev 2025-12-29 10:03 (1 reply)

          The rods are only active in low-light conditions; they're fully active under the moon and stars, or partially active under a dim street light. Under normal lighting conditions, every rod is fully saturated, so they make no contribution to vision. (Some recent papers have pushed back against this orthodox model of rods and cones, but it's good enough for practical use.)

          This assumption that rods are "the luminance cells" is an easy mistake to make. It's particularly annoying that the rods have a sensitivity peak between the blue and green cones [1], so it feels like they should contribute to colour perception, but they just don't.

          [1]: https://en.wikipedia.org/wiki/Rod_cell#/media/File:Cone-abso...

      • By volemo 2025-12-29 7:30

        It’s not that their M-cones (middle, i.e. green) don’t work at all, their M-cones responsivity curve is just shifted to be less distinguishable from their L-cones curve, so they effectively have double (or more) the “red sensors”.

    • By f1shy 2025-12-29 6:57 (4 replies)

      > The human eye is most sensitive to green light,

      This argument is very confusing: if it is most sensitive, less intensity/area should be necessary, not more.

      • By Lvl999Noob 2025-12-29 7:54

        Since the human eye is most sensitive to green, it will find errors in the green channel much more easily than in the others. This is why you need _more_ green data.

      • By gudzpoz 2025-12-29 7:52 (1 reply)

        Note that there are two measurement systems involved: first the camera, and then the human eyes. Your reasoning could be correct if there were only one: "the sensor is most sensitive to green light, so less sensor area is needed".

        But that is not the case: we first measure with cameras, and then present the image to human eyes. Being more sensitive to a colour means that the same measurement error will lead to more observable artifacts. So to maximize visual authenticity, the best we can do is to make our cameras as (relatively) sensitive to green light as human eyes are.

        • By f1shy 2025-12-29 16:26

          Oh you are right! I’m so dumb! Of course it is the camera. To have the camera have the same sensitivity, we need more green pixels! I had my neurons off. Thanks.

      • By afiori 2025-12-29 14:31

        Because that reasoning applies to binary signals, where sensitivity is about detection. In the case of our eyes, sensitivity means that we can distinguish many more distinct values: say we can see N distinct luminosity levels of monochrome green light but only some fraction of that many levels of blue light.

        So to describe/reproduce what our eyes see, you need more detection range in the green spectrum.

      • By matsemann 2025-12-29 11:53 (1 reply)

        Yeah, I was thinking the same. If we're more sensitive, why do we need double the sensors? Just have 1:1:1, and we would see more of the green anyway. Won't it be too much if we do 1:2:1, when we're already more perceptive to green?

        • By seba_dos1 2025-12-29 15:25

          With 1:1:1 the matrix isn't square, and if you have to double one of the channels for practical purposes, then green is the obvious pick: it's the most beneficial for image quality because it increases the spatial resolution where our eyes can actually notice it.

          Grab a random photo and blur its blue channel out a bit. You probably won't notice much difference aside from some slight discoloration. Then try the same with the green channel.
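
          You can even put rough numbers on that experiment: blur one channel of a synthetic edge and weight the per-pixel error with the standard Rec. 709 luma coefficients. A toy 1-D sketch (all names are illustrative; the ~10x green-vs-blue factor is just the ratio of the luma weights):

```python
def box_blur(xs):
    """3-tap box blur, with shrinking windows at the edges."""
    out = []
    for i in range(len(xs)):
        window = xs[max(i - 1, 0):i + 2]
        out.append(sum(window) / len(window))
    return out

def luminance(r, g, b):
    """Rec. 709 relative luminance of a linear-RGB value."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

EDGE = [0.0] * 8 + [1.0] * 8  # the same sharp edge in every channel

def luma_damage(channel):
    """Mean absolute luminance error when only one channel is blurred."""
    chans = {"r": EDGE, "g": EDGE, "b": EDGE}
    chans[channel] = box_blur(EDGE)
    return sum(abs(luminance(e, e, e) -
                   luminance(chans["r"][i], chans["g"][i], chans["b"][i]))
               for i, e in enumerate(EDGE)) / len(EDGE)

# Blurring green disturbs perceived brightness about ten times more
# than blurring blue (0.7152 / 0.0722 is roughly 9.9).
print(luma_damage("g") / luma_damage("b"))
```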

    • By jamilton 2025-12-29 4:39 (1 reply)

      Why that ratio in particular? I wonder if there’s a more complex ratio that could be better.

      • By shiandow 2025-12-29 10:14

        This ratio allows for a relatively simple 2x2 repeating pattern. That makes interpolating the values immensely simpler.

        Also, you don't want the red and blue samples to be too far apart; reconstructing the colour signal is difficult enough as it is. Moiré effects are only going to get worse if you use an even sparser resolution.
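
        As a toy sketch of why the 2x2 tile keeps the interpolation simple: every non-green site has four green direct neighbours to average. (Pure-Python illustration, RGGB layout assumed, interior pixels only, no edge handling.)

```python
# Toy RGGB Bayer mosaic: each cell holds one sample of one channel.
#   R G R G
#   G B G B
#   R G R G
#   G B G B
def bayer_channel(y, x):
    """Which filter colour sits at (y, x) in an RGGB 2x2 tile."""
    return [["R", "G"], ["G", "B"]][y % 2][x % 2]

def green_at(mosaic, y, x):
    """Estimate green at any site: non-green sites average their four
    direct neighbours, all of which are green in this tiling."""
    if bayer_channel(y, x) == "G":
        return mosaic[y][x]
    return sum(mosaic[y + dy][x + dx]
               for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))) / 4

mosaic = [[1, 10, 2, 10],
          [10, 5, 10, 5],
          [3, 10, 4, 10],
          [10, 5, 10, 5]]
print(green_at(mosaic, 2, 2))  # red site at (2, 2): average of four 10s -> 10.0
```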

    • By thousand_nights 2025-12-29 0:29 (5 replies)

      the bayer pattern is one of those things that makes me irrationally angry, in the true sense, based on my ignorance of the subject

      what's so special about green? oh so just because our eyes are more sensitive to green we should dedicate double the area to green in camera sensors? i mean, probably yes. but still. (⩺_⩹)

      • By MyOutfitIsVague 2025-12-29 1:46 (1 reply)

        Green is in the center of the visible spectrum of light (notice the G in the middle of ROYGBIV), so evolution should theoretically optimize for green light absorption. An interesting article on why plants typically reflect that wavelength and absorb the others: https://en.wikipedia.org/wiki/Purple_Earth_hypothesis

        • By bmitc 2025-12-29 3:45 (2 replies)

          Green is the highest energy light emitted by our sun, from any part of the entire light spectrum, which is why green appears in the middle of the visible spectrum. The visible spectrum basically exists because we "grew up" with a sun that blasts that frequency range more than any other part of the light spectrum.

          • By imoverclocked 2025-12-29 4:11

            I have to wonder what our planet would look like if the spectrum shifts over time. Would plants also shift their reflected light? Would eyes subtly change across species? Of course, there would probably be larger issues at play around having a survivable environment … but still, fun to ponder.

          • By cycomanic 2025-12-29 9:37 (2 replies)

            That comment does not make sense. Do you mean the sun emits its peak intensity at green? (I don't believe that is true either, but at least it would be a physically sensible statement.) To clarify why the statement does not make sense: the energy of light is directly proportional to its frequency, so saying that green is the highest-energy light the sun emits is saying the sun emits no light at frequencies higher than green, i.e. no blue light, no UV... That's obviously not true.

      • By milleramp 2025-12-29 1:26

        Several reasons:

        - Silicon efficiency (QE) peaks in the green.
        - The green spectral response curve is close to the luminance curve humans see, like you said.
        - Twice the pixels increases the effective resolution in the green/luminance channel; the color channels in YUV contribute almost no detail.

        Why are YUV and other luminance-chrominance color spaces important for an RGB input? Because many processing steps and encoders work in YUV colorspaces. This wasn't really covered in the article.

      • By Renaud 2025-12-29 4:26 (1 reply)

        Not sure why it would evoke such strong sentiment, but if you don't like the Bayer filter, know that some true monochrome cameras don't use it and make every sensor pixel available to the final image.

        For instance, the Leica M series have specific monochrome versions with huge resolutions and better monochrome rendering.

        You can also modify some cameras and remove the filter, but the results usually need processing. A side effect is that the now exposed sensor is more sensitive to both ends of the spectrum.

        • By NetMageSCW 2025-12-29 5:58

          Not to mention that there are non-Bayer cameras, ranging from the Sigma Foveon and Quattro sensors, which use stacked photodiodes to separate color in an entirely different way, to the Fuji EXR and X-Trans sensors.

      • By shiandow 2025-12-29 10:16 (2 replies)

        You think that's bad? Imagine finding out that all video still encodes colour at half resolution simply because that is how analog tv worked.

        • By seba_dos1 2025-12-29 15:48 (1 reply)

          I don't think that's correct. It's not "all video" - you can easily encode video without chroma subsampling - and it's not because this is how analog TV worked, but rather for the same reason why analog TV worked this way, which is the fact that it lets you encode significantly less data with barely noticeable quality loss. JPEGs do the same thing.
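
          Sketched in code, the trade looks like this: keep one luma sample per pixel but share one chroma sample per pair of pixels. (A 1-D toy using BT.601-style coefficients; the function names are illustrative.)

```python
# Convert linear RGB in [0, 1] to luma plus two chroma differences,
# using BT.601-style coefficients.
def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, (b - y) * 0.564, (r - y) * 0.713

def subsample_chroma(pixels):
    """[(r, g, b), ...] -> per-pixel Y, plus one (Cb, Cr) per pixel pair."""
    ys, chroma = [], []
    for i in range(0, len(pixels), 2):
        pair = [rgb_to_ycbcr(*p) for p in pixels[i:i + 2]]
        ys += [y for y, _, _ in pair]
        chroma.append((sum(cb for _, cb, _ in pair) / len(pair),
                       sum(cr for _, _, cr in pair) / len(pair)))
    return ys, chroma

ys, chroma = subsample_chroma([(1, 0, 0), (1, 0, 0), (0, 0, 1), (0, 0, 1)])
# Four luma samples survive, but only two chroma samples: half the
# colour data is gone, and the image still looks much the same.
print(len(ys), len(chroma))  # 4 2
```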

          • By shiandow 2025-12-29 19:18

            It's a very crude method, with modern codecs I would be very surprised if you didn't get a better image just encoding the chroma at a lower bitrate.

        • By heckelson 2025-12-29 15:42

          Isn't it the other way round? We did and still do chroma subsampling _because_ we don't see that much of a difference?

      • By japanuspus 2025-12-29 9:27

        If the Bayer pattern makes you angry, I imagine it would really piss you off to realize that the whole concept of encoding an experienced color with a finite number of component colors is fundamentally species-specific and tied to the details of our particular color sensors.

        To truly record an appearance without reference to the sensory system of our species, you would need to encode the full electromagnetic spectrum from each point. Even then, you would still need to decide on a cutoff for the spectrum.

        ...and hope that nobody ever told you about coherence phenomena.

    • By bstsb 2025-12-29 0:13 (3 replies)

      hey, not accusing you of anything (bad assumptions don't lead to a conducive conversation) but did you use AI to write or assist with this comment?

      this is totally out of my own self-interest, no problems with its content

      • By sho_hn 2025-12-29 0:22 (4 replies)

        Upon inspection, the author's personal website used em dashes in 2023. I hope this helped with your witch hunt.

        I'm imagining a sort of Logan's Run-like scifi setup where only people with a documented em dash before November 30, 2022, i.e. D(ash)-day, are left with permission to write.

        • By brookst 2025-12-29 1:11 (1 reply)

          Phew. I have published work with em dashes, bulleted lists, “not just X, but Y” phrasing, and the use of “certainly”, all from the 90’s. Feel sorry for the kids, but I got mine.

          • By qingcharles 2025-12-29 8:11

            I'm grandfathered in too. RIP the hyphen crew.

        • By mr_toad 2025-12-29 2:09

          > I'm imagining a sort of Logan's Run-like scifi setup where only people with a documented em dash before November 30, 2022, i.e. D(ash)-day, are left with permission to write.

          At least Robespierre needed two sentences before condemning a man. Now the mob is lynching people on the basis of a single glyph.

        • By ozim 2025-12-29 7:35

          I started to use the — dash so that algos skip my writing, thinking it was AI generated.

        • By bstsb 2025-12-29 11:05

          wasn't talking about the em dashes (i use them myself) but thanks anyway :)

      • By ekidd 2025-12-29 0:35 (1 reply)

        I have been overusing em dashes and bulleted lists since the actual 80s, I'm sad to say. I spent much of the 90s manually typing "smart" quotes.

        I have actually been deliberately modifying my long-time writing style and use of punctuation to look less like an LLM. I'm not sure how I feel about this.

        • By disillusioned 2025-12-29 0:44 (4 replies)

          Alt + 0151, baby! Or... however you do it on MacOS.

          But now, likewise, having to bail on em dashes. My last differentiator is that I always close-set the em dash—no spaces on either side, whereas ChatGPT typically opens them (AP Style).

          • By piskov 2025-12-29 0:50

            Just use some typography layout with a separate layer, e.g. “right alt” plus “-” for an em dash.

            Russians have used this for at least 15 years:

            https://ilyabirman.ru/typography-layout/

          • By qingcharles 2025-12-29 8:12

            I'm a savage, I just copy-paste them from Unicode sites.

          • By ksherlock 2025-12-29 1:11 (1 reply)

            On the mac you just type — for an em dash or – for an en dash.

            • By xp84 2025-12-29 5:56

              Is this a troll?

              But anyway, it’s option-hyphen for a en-dash and opt-shift-hyphen for the em-dash.

              I also just stopped using them a couple years ago when the meme about AI using them picked up steam.

      • By ajkjk 2025-12-29 0:21 (2 replies)

        found the guy who didn't know about em dashes before this year

        also your question implies a bad assumption even if you disclaim it. if you don't want to imply a bad assumption the way to do that is to not say the words, not disclaim them

        • By bstsb 2025-12-29 11:04 (1 reply)

          didn't even notice the em dashes to be honest, i noticed the contrast framing in the second paragraph and the "It's impressive how" for its conclusion.

          as for the "assumption" bit, yeah fair enough. was just curious about AI usage online, this wasn't meant to be a dig at anyone as i know people use it for translations, cleaning up prose etc

          • By barishnamazov 2025-12-29 11:22 (1 reply)

            No offense taken, but realize that a good number of us folks who learned English as a second language were taught to write this way (especially in an academic setting). LLMs' writing is like that of people, not the other way around.

            • By ajkjk 2025-12-30 15:30

              wouldn't say that... they're very distinctly not like people, that's (part of) the problem. But I don't think the difference is measured exactly in the choices of words and punctuation. It's more like... you can tell, reading AI writing, that it's not "sincere"; no person would want to say what the AI is saying, because it feels fake and disingenuous. The phrases and em dashes and whatever else are just the method for this effect. Real people use the same phrases but with real intent to communicate behind them, and the result is different in a way that is curiously easy to detect.

        • By reactordev 2025-12-29 0:23 (1 reply)

          The hatred mostly comes from TTS models not properly pausing for them.

          “NO EM DASHES” is common system prompt behavior.

          • By xp84 2025-12-29 5:58

            You know, I didn’t think about that, but you’re right. I have seen so many AI narrations where it reads the dash exactly like a hyphen, actually maybe slightly reducing the inter-word gap. Odd the kinds of “easy” things such a complicated and advanced system gets wrong.

  • By MarkusWandel 2025-12-29 1:05 (8 replies)

    But does applying the same transfer function to each pixel (of a given colour anyway) count as "processing"?

    What bothers me as an old-school photographer is this. When you really pushed it with film (e.g. push-processing 400 ISO B&W film to 1600 ISO, and even then maybe underexposing at the enlargement step) you got nasty grain. But that was uniform "noise" all over the picture. Nowadays, noise reduction is impressive, but at the cost of sometimes changing the picture. For example, with the IP cameras I have, sometimes when I come home on the bike, part of the wheel is missing, having been deleted by the algorithm as it struggled with the "grainy" asphalt driveway underneath.

    Smartphone and dedicated digital still cameras aren't as drastic, but when zoomed in, or in low light, faces have a "painted" kind of look. I'd prefer honest noise, or better yet an adjustable denoising algorithm from "none" (grainy but honest) to what is now the default.

    • By 101008 2025-12-29 1:54 (1 reply)

      I hear you. Two years ago I went to my dad's and spent the afternoon "scanning" old pictures of my grandparents (his parents), dead almost two decades ago. I took pictures of the physical photos, holding the phone as horizontal as possible (parallel to the picture), so the result was as close to a scan as I could get (to avoid perspective, reflections, etc.).

      It was my fault that I didn't check the pictures while I was doing it. Imagine my disappointment when I checked them back at home: the Android camera had decided to apply some kind of AI filter to all the pictures. Now my grandparents don't look like themselves at all; they are just an AI version.

      • By krick 2025-12-29 17:06

        What phone was it? I am sure there is a lot of ML involved in figuring out how to denoise photos in the dark, etc., but I have never noticed anything I'd want to describe as an "AI filter" on my photos.

    • By Aurornis 2025-12-29 1:52

      > For example, the IP cameras I have, sometimes when I come home on the bike, part of the wheel is missing, having been deleted by the algorithm as it struggled with the "grainy" asphalt driveway underneath.

      Heavy denoising is necessary for cheap IP cameras because they use cheap sensors paired with high f-number optics. Since you have a photography background you'll understand the tradeoff that you'd have to make if you could only choose one lens and f-stop combination but you needed everything in every scene to be in focus.

      You can get low-light IP cameras or manual focus cameras that do better.

      The second factor is the video compression ratio. The more noise you let through, the higher bitrate needed to stream and archive the footage. Let too much noise through for a bitrate setting and the video codec will be ditching the noise for you, or you'll be swimming in macroblocks. There are IP cameras that let you turn up the bitrate and decrease the denoise setting like you want, but be prepared to watch your video storage times decrease dramatically as most of your bits go to storing that noise.
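      That noise-vs-bitrate tradeoff is easy to demonstrate with plain lossless compression standing in for a codec's entropy coder (the frame and noise level below are made up):

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

# A flat gray "frame" versus the same frame with sensor-like noise on top.
clean = np.full((256, 256), 128, dtype=np.uint8)
noisy = np.clip(clean + rng.normal(0, 8, clean.shape), 0, 255).astype(np.uint8)

# Lossless compression as a crude stand-in for a video codec's entropy
# coder: the noise is incompressible, so it eats bits that carry no
# information about the scene.
clean_size = len(zlib.compress(clean.tobytes()))
noisy_size = len(zlib.compress(noisy.tobytes()))
print(clean_size, noisy_size)
```

      A real encoder adds transforms and motion compensation, but the effect is the same: whatever noise survives denoising must be paid for in bitrate or discarded by the codec.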

      > Smartphone and dedicated digital still cameras aren't as drastic, but when zoomed in, or in low light, faces have a "painted" kind of look. I'd prefer honest noise, or better yet an adjustable denoising algorithm from "none" (grainy but honest) to what is now the default.

      If you have an iPhone then getting a camera app like Halide and shooting in one of the RAW formats will let you do this and more. You can also choose Apple ProRAW on recent iPhone Pro models which is a little more processed, but still provides a large amount of raw image data to work with.

    • By dahart 2025-12-29 1:59 (1 reply)

      > does applying the same transfer function to each pixel (of a given colour anyway) count as “processing”?

      This is interesting to think about, at least for us photo nerds. ;) I honestly think there are multiple right answers, but I have a specific one that I prefer. Applying the same transfer function to all pixels corresponds pretty tightly to film & paper exposure in analog photography. So one reasonable followup question is: did we count manually over- or under-exposing an analog photo to be manipulation or “processing”? Like you can’t see an image without exposing it, so even though there are timing & brightness recommendations for any given film or paper, generally speaking it’s not considered manipulation to expose it until it’s visible. Sometimes if we pushed or pulled to change the way something looks such that you see things that weren’t visible to the naked eye, then we call it manipulation, but generally people aren’t accused of “photoshopping” something just by raising or lowering the brightness a little, right?

      When I started reading the article, my first thought was, ‘there’s no such thing as an unprocessed photo that you can see’. Sensor readings can’t be looked at without making choices about how to expose them, without choosing a mapping or transfer function. That’s not to mention that they come with physical response curves that the author went out of his way to sort-of remove. The first few dark images in there are a sort of unnatural way to view images, but in fact they are just as processed as the final image, they’re simply processed differently. You can’t avoid “processing” a digital image if you want to see it, right? Measuring light with sensors involves response curves, transcoding to an image format involves response curves, and displaying on monitor or paper involves response curves, so any image has been processed a bunch by the time we see it, right? Does that count as “processing”? Technically, I think exposure processing is always built-in, but that kinda means exposing an image is natural and not some type of manipulation that changes the image. Ultimately it depends on what we mean by “processing”.
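      For the record, the uniform transfer function in question is tiny. A numpy sketch using the black and white points from the article (the clipping is my assumption; the article only gives the linear mapping):

```python
import numpy as np

def apply_levels(raw, black=2110, white=13600):
    """The article's global transfer function: map the sensor's actual
    ADC range onto [0, 1], identically for every pixel."""
    out = (raw.astype(np.float64) - black) / (white - black)
    return np.clip(out, 0.0, 1.0)

# Black point -> 0.0, midpoint -> 0.5, white point -> 1.0,
# below-black values clip to 0.0.
raw = np.array([[2110, 7855], [13600, 1000]])
print(apply_levels(raw))
```

      Every pixel goes through the same formula, which is exactly why it maps so cleanly onto analog exposure: no pixel is treated differently from its neighbors.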

      • By henrebotha 2025-12-29 7:41 (1 reply)

        It's like food: Virtually all food is "processed food" because all food requires some kind of process before you can eat it. Perhaps that process is "picking the fruit from the tree", or "peeling". But it's all processed in one way or another.

        • By littlestymaar 2025-12-29 8:39 (1 reply)

          Hence the qualifier in “ultra-processed food”

          • By NetMageSCW 2025-12-29 15:59 (2 replies)

            But that qualifier is stupid because there's no clear starting or stopping point for ultra-processed versus all other foods. Is cheese an ultra-processed food? Is wine?

            • By Edman274 2025-12-29 19:27 (1 reply)

              There actually is a stopping point, and the line between ultra-processed food and processed food is often drawn where you could expect someone in their home kitchen to be able to do the processing. So the question becomes whether or not you would expect someone to be able to make cheese or wine at home. I think there you would find it natural to conclude that there's a difference between a Cheeto, which can only be created in a factory with a secret extrusion process, and cottage cheese, which can be created inside of a cottage. And you would probably also note that there is a difference between American cheese, which requires a process that results in a NileRed upload, and cheddar cheese, which could still be done at home over the course of months, like how people make soap at home. You can tell that wine can be made at home because people make it in jails. I have found that a lot of people on Hacker News have a tendency to flatten distinctions into a binary, and then attack the binary as if distinctions don't matter. This is another such example.

              • By henrebotha 2025-12-30 20:08

                There actually is no agreed-upon definition of "ultra-processed foods", and it's much murkier than you make it out to be. Not to mention that "can't be made at home" and "is bad for you" are entirely orthogonal qualities.

            • By littlestymaar 2025-12-29 19:28

              With that kind of reasoning you can't name anything, ever. For instance, what's a computer? Is a credit card a computer?

    • By jjbinx007 2025-12-29 1:44

      Equally bad is the massive over-sharpening applied to CCTV and dash cams. I tried to buy a dash cam a year ago that didn't over-sharpen its images, but it proved impossible.

      Reading reg plates would be a lot easier if I could sharpen the image myself rather than battle with the "turn it up to 11" approach of the manufacturers.

    • By Gibbon1 2025-12-29 5:18

      I was mentioning to my GF (a non-technical animator) the submission "Clock synchronization is a nightmare", and how the problem turns up like a bad penny. She said that in animation you have the same problem: you're animating to match different streams that you have to keep in sync. Bonus: you have to dither, because if you match too closely the players can smell that it's off.

      Noise is part of the world itself.

    • By kccqzy 2025-12-29 14:24

      > Smartphone and dedicated digital still cameras aren't as drastic, but when zoomed in, or in low light, faces have a "painted" kind of look.

      My theory is that this is trying to do denoising after capturing the image with a high ISO. I personally hate that look.

      On my dedicated still camera I almost always set ISO to be very low (ISO 100) and only shoot people when lighting is sufficient. Low light is challenging and I’d prefer not to deal with it when shooting people, unless making everything dark is part of the artistic effect I seek.

      On the other hand on my smartphone I just don’t care that much. It’s mostly for capturing memories in situations where bringing a dedicated camera is impossible.

    • By kqr 2025-12-29 12:02

      > But does applying the same transfer function to each pixel (of a given colour anyway) count as "processing"?

      In some sense it has to, because you can include a parametric mask in that function, which makes it possible to perform local edits with global functions.
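      A minimal sketch of that idea, with hypothetical names: a global gain function that takes a per-pixel mask as a parameter, and so turns into a local edit whenever the mask is not constant:

```python
import numpy as np

def masked_gain(img, mask, gain):
    """A 'global' function applied identically to every pixel, but taking
    a per-pixel mask as a parameter: pixels where mask=1 get the full
    gain, pixels where mask=0 are left untouched."""
    return img * (1.0 + (gain - 1.0) * mask)

img = np.full((4, 4), 0.5)
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0            # "dodge" only the top-left quadrant
out = masked_gain(img, mask, gain=1.5)
print(out[0, 0], out[3, 3])
```

      The formula is the same everywhere; only the mask values differ, which is why the global/local distinction blurs.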

    • By eru 2025-12-29 1:33 (3 replies)

      Just wait a few years, all of this is still getting better.

      • By coldtea 2025-12-29 2:25

        It's not, really - it's going in the opposite direction in terms of how much more processed and artificially altered everything gets.

      • By trinix912 2025-12-29 9:35 (1 reply)

        Except it seems to be going in the opposite direction, every phone I've upgraded (various Androids and iPhones) seemed to have more smoothing than the one I'd had before. My iPhone 16 night photos look like borderline generative AI and there's no way to turn that off!

        I was honestly happier with the technically inferior iPhone 5 camera, the photos at least didn't look fake.

        • By vbezhenar 2025-12-29 12:09 (1 reply)

          If you can get raw image data from the sensor, then there will be apps to produce images without AI processing. Ordinary people love AI enhancements, so built-in apps are optimised for this approach, but as long as underlying data is accessible, there will be third-party apps that you can use.

          • By trinix912 2025-12-29 15:08

            That's a big IF. There's ProRaw, but for that you need an iPhone Pro; some Androids have RAW too, but it's huge and lacks even the most basic processing, resulting in photos that look like one of the non-final steps in the post.

            Third-party apps are hit or miss; you pay for one only to find out it doesn't actually get the raw output on your model, and so on.

            There's very little excuse for phone manufacturers not to offer a toggle to disable excessive post-processing. iOS even had an HDR toggle, but they've since removed it.

      • By MarkusWandel 2025-12-29 1:36 (1 reply)

        "Better"...

        • By DonHopkins 2025-12-29 2:05 (1 reply)

          "AIer"... Who even needs a lens or CCD any more?

          Artist develops a camera that takes AI-generated images based on your location. Paragraphica generates pictures based on the weather, date, and other information.

          https://www.standard.co.uk/news/tech/ai-camera-images-paragr...

          • By RestartKernel 2025-12-29 5:07

            Thanks for the link, that's a very interesting statement piece. There must be some word though for the artistic illiteracy in those X/Twitter replies.

  • By NiloCK 2025-12-29 2:37 (3 replies)

    You may know that intermittent rashes are always invisible in the presence of medical credentials.

    Years ago I became suspicious of my Samsung Android device when I couldn't produce a reliable likeness of an allergy induced rash. No matter how I lit things, the photos were always "nicer" than what my eyes recorded live.

    The incentives here are clear enough - people will prefer a phone whose camera gives them an impression of better skin, especially when the applied differences are extremely subtle and don't scream airbrush. If brand-x were the only one to allow "real skin" into the gallery viewer, people and photos would soon be decried as showing 'x-skin', which would be considered gross. Heaven help you if you ever managed to get close to a mirror or another human.

    To this day I do not know whether it was my imagination or whether some inline processing effectively does or did perform micro airbrushing on things like this.

    Whatever did or does happen, the incentive is evergreen - media capture must flatter the expectations of its authors, without getting caught in its sycophancy. All the while, capacity improves steadily.

    • By astrange 2025-12-29 6:08 (4 replies)

      iOS added a camera mode for medical photos that specifically doesn't do that.

      https://developer.apple.com/videos/play/wwdc2024/10162/

      • By CrompyBlompers 2025-12-29 17:18 (1 reply)

        To disambiguate, this is not meant as “added a mode to the stock Camera app”, but rather “added a mode to the camera API that iOS developers can use”.

        • By lesuorac 2025-12-29 18:03

          That's so annoying it's not stock.

          I always have to get a very bright flashlight to make rashes show in a photo and then the rest of the body looks discolored as well but at least I have something to share remotely :/

      • By Maxion 2025-12-29 8:05 (1 reply)

        Huh wonder which camera apps enable use of this API?

      • By 71bw 2025-12-29 7:28

        This is VERY interesting and I am glad you posted this, as it is my first time coming across this.

      • By NiloCK 2025-12-29 17:25 (1 reply)

        Thank you for sharing. Seems to validate my suspicions!

        • By astrange 2025-12-30 4:50

          It's not really something to be suspicious of. Cameras just don't know what colors things "actually" are, mostly because they don't know what color the lighting is. Auto exposure/auto white balance erases color casts or unusual skin colors.

          You can put a color calibration card in the picture to achieve a similar effect, but it's not as predictable.
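          Gray-world white balance is the textbook example of that guessing, and also of how it can "correct" away a real color difference (the function and scene here are made up for illustration):

```python
import numpy as np

def gray_world(img):
    """Gray-world auto white balance: assume the scene averages to
    neutral gray, and scale each channel so the channel means match.
    The same correction that removes a lighting color cast will also
    mute a genuinely unusual color, like a rash, if it dominates the
    frame."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return np.clip(img * gains, 0.0, 1.0)

# A scene under warm (reddish) light
rng = np.random.default_rng(0)
img = rng.random((8, 8, 3)) * np.array([0.9, 0.6, 0.4])
balanced = gray_world(img)
print(balanced.reshape(-1, 3).mean(axis=0))  # channel means now equal
```

          Real cameras use far more sophisticated estimators, but they all face the same ambiguity: a reddish average can mean warm light or a red subject, and the camera has to guess.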

    • By nerdponx 2025-12-29 2:42

      I've had problems like this before, but I always attributed it to auto white balance. That great ruiner of sunset photos the world over.

    • By herpdyderp 2025-12-29 4:32

      I remember when they did this to pictures of the moon: https://arstechnica.com/gadgets/2023/03/samsung-says-it-adds...

Hacker News