The Sensor vs the RAW File

Your camera’s sensor records a lot more than what finally appears in your master TIFF file. In fact, the sensor may even have more pixels than the RAW file contains.

Last year I did a little research on this topic, and the following graphic shows the “big picture”. There are two distinct regions on the sensor: uncovered, regular pixels, and pixels covered with a black mask. The black mask is used to determine the black level (i.e. how black is black with regard to thermal noise and sensor design).

One step of processing a RAW file is scaling – mapping all the values between the sensor’s black level and white saturation level into the 0-1 interval. Yes, the blackest black on a sensor is not represented by a zero readout from a pixel. For example, on a Canon 5D Mark II, a 14-bit camera where each pixel can theoretically hold any value between 0 and 16383, the black level is 1023 and the white saturation level is 15600. So you lose a bit at each end.
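To make this concrete, here is a minimal sketch of the scaling step in Python, using the 5D Mark II levels quoted above. The function and constant names are mine, not from any particular converter:

```python
import numpy as np

# Levels for a Canon 5D Mark II (14-bit ADC, values 0..16383)
BLACK_LEVEL = 1023
WHITE_LEVEL = 15600

def scale_raw(raw: np.ndarray) -> np.ndarray:
    """Map raw sensor values into the 0-1 interval.

    Values at or below the black level become 0.0; values at or
    above the white saturation level become 1.0.
    """
    scaled = (raw.astype(np.float64) - BLACK_LEVEL) / (WHITE_LEVEL - BLACK_LEVEL)
    return np.clip(scaled, 0.0, 1.0)
```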

How the black level is determined varies from vendor to vendor – or rather, Canon vs. everybody else. Canon puts the entire image into its RAW files (including the black-masked pixels), so a RAW converter has the opportunity to calculate the black level from these pixels on its own. On every other camera I tested (a bunch of Nikons, Leicas, Sonys and Phase One backs), the camera determines the black level and subtracts it from every pixel. That is, the camera does the black half of the scaling.
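As a rough illustration of what a converter can do with those masked pixels, here is a sketch that estimates the black level as the mean of the masked strip. The layout is hypothetical – I assume the masked pixels sit in the leftmost columns, while real RAW formats record the masked region’s actual location in their metadata:

```python
import numpy as np

def estimate_black_level(raw: np.ndarray, mask_cols: int = 64) -> float:
    """Estimate the black level from a strip of optically masked pixels.

    Assumes (hypothetically) that the leftmost `mask_cols` columns of the
    raw frame are covered by the black mask; real RAW files describe the
    masked area in metadata rather than at a fixed position.
    """
    masked = raw[:, :mask_cols]
    return float(masked.mean())
```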

This has a severe effect on some applications – astrophotography, for example – where one creates multiple exposures and averages them. With a Canon RAW file and proper processing, noise in the darkest tones will oscillate around the black level, so the noise from multiple averaged exposures will cancel out. With black-scaled files, however, half of the noise oscillation is clipped away: there are no negative values left to cancel out the positive ones around the black level. All in all, a Canon is theoretically better for averaging than any other camera.
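A toy simulation makes the difference visible. The numbers are purely illustrative (zero-mean Gaussian read noise with a made-up sigma of 8 DN), but the asymmetry is the point:

```python
import numpy as np

rng = np.random.default_rng(42)

BLACK_LEVEL = 1023
N_FRAMES = 1000
noise = rng.normal(loc=0.0, scale=8.0, size=N_FRAMES)  # read noise, in DN

# Canon-style: full values stored, black level subtracted later.
# Noise swings both ways around the black level and cancels out.
canon_frames = BLACK_LEVEL + noise
canon_avg = canon_frames.mean() - BLACK_LEVEL  # ~0

# Black-scaled style: black subtracted in camera, negatives clipped to 0.
# Only the positive half of the noise survives, biasing the average upward.
scaled_frames = np.clip(noise, 0.0, None)
scaled_avg = scaled_frames.mean()  # ~ sigma / sqrt(2*pi), i.e. > 0

print(f"Canon-style average deviation:  {canon_avg:+.3f} DN")
print(f"Black-scaled average deviation: {scaled_avg:+.3f} DN")
```

The first average hovers near zero; the second settles at a positive offset that no amount of extra frames will remove.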

But why is the default crop needed? Why don’t we get all the pixels from the active sensor area? Because RAW conversion algorithms need a startup area: most RAW conversion methods rely on neighboring pixels that don’t exist at the edge, so they produce ugly artifacts around the borders. The solution is to simply crop these regions out.
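The crop itself is trivial once the margin is known. A sketch – the margin width here is a made-up value; a real converter derives it from the neighborhood size of its demosaicing algorithm:

```python
import numpy as np

def crop_borders(image: np.ndarray, margin: int = 8) -> np.ndarray:
    """Drop a border of `margin` pixels on every side, discarding the
    region where demosaicing had too few neighbors to work with."""
    return image[margin:-margin, margin:-margin]
```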
