The Adventure of the Mendacious Histogram

Digital exposure optimization is a controversial topic. Although the notion of “exposing to the right” is widespread and has a large group of advocates, camera manufacturers “don’t seem to get it”. But there’s much more technical substance behind this than simple ignorance. In this post I’ll shed some light on how complicated digital exposure optimization can be.

Let’s start with ETTR. Two of my masters, Michael Reichmann and Jeff Schewe, have written extensively about the topic, so instead of replicating their work I encourage you to read the following articles and Jeff’s new book. It is imperative to grasp the idea so that you can understand the rest of the post.

  • Expose (to the) Right – The original article from 2003.
  • Optimizing Exposure – A rather utopian view of the problem. Reminds me of Adams’ (Douglas, not Ansel) Total Perspective Vortex – it extrapolates a whole universe not from a fairy cake, but from the fact that increasing exposure reduces shadow noise. Anyway, a good read on what would really be needed from a photographer’s point of view, even if it’s not possible with current technology.
  • The Digital Negative – Jeff’s book collects the majority of information about digital exposure right in its first chapter.

To summarize: increasing exposure benefits the shadows and the amount of information retained in the RAW file. That’s great. But here comes the million dollar question: how much should one increase exposure while remaining a hundred percent confident that highlights aren’t blown or destroyed (and thus that all possible information is kept)? You should read Ctein’s article on the dangers of ETTR regarding lost highlights.

I hope you are now sufficiently confused about whether to ETTR or not and how you can really assess overexposure. Don’t be afraid, this is where our adventure begins.

Before we embark on it, let me rephrase the question: since overexposure is terminal to the data (details) in the overexposed area, how can one avoid it with confidence? Regardless of whether you ETTR or not, this is important. Imagine a bright yellow flower, for example. Overexposing one or more channels will destroy fine color variance – which is a bad thing (unless you plan to run the image through some ugly lo-fi filter, of course, in which case the things I’m writing about are totally unimportant to you). Also, what I’m writing about applies to RAW shooters only. JPEG guys get only what they see, so these topics do not apply to them.

Let’s begin!

One Image – Three (Different) Histograms

The histogram is the primary tool for assessing exposure on a digital camera. But what your camera shows bears only a little resemblance to the recorded RAW data. The histogram on the LCD is calculated from the JPEG preview embedded in your RAW file, so all JPEG settings – white balance, color space, sharpening, contrast, and so on – influence the histogram display. In an ideal world, one could switch the histogram into a “RAW mode” that would instruct the camera to calculate it from the RAW data instead of the JPEG preview.

Canon 5D III, Auto WB

The majority of the parameters mentioned above (like sharpening and contrast) can be zeroed out on the camera. The problem child is white balance, over which we have little influence by default.

Take the image on the left, for example. The RGB histogram shows gross overexposure in the red channel, and you can even see overexposure warning “blinkies” in Elmo’s eyes.

Based on the camera’s histogram, one would lower the exposure to avoid overexposing Elmo’s… wait! The blinkies warn of overexposure in the eyes, not the red fur, even though it was the red channel that was blown. Something isn’t kosher here…

Histogram from RAW Data

On the right is a histogram generated from the same image with Kuuvik Capture, using gamma-corrected RAW data. As you can see, the reds are far from being overexposed. You can also see a different distribution in the RAW histogram, with the peaks at distinctly different places. This is because the RAW histogram in KCapture is not white balanced.

White balance is set by multiplying the data coming from a channel by a number (the white balance coefficient) to “scale” it to reach the desired white point. It is represented by a four-element vector (one number for each channel, in RGGB order). You can use exiftool to examine these coefficients; they are displayed as WB RGGB Levels As Shot. The actual values for the above image are (2185, 1024, 1024, 1526). That is, you have to multiply the red channel by 2185/1024 = 2.13 to get the white balanced image. You can easily see from the RAW histogram that multiplying reds by 2.13 on this image will blow the channel out – at that white balance setting.
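A quick sketch of that arithmetic (the coefficients are the ones quoted above; the 14-bit clip point and the sample raw value are illustrative assumptions, not taken from the actual file):

```python
# Sketch of white balance scaling. Coefficients are the WB RGGB Levels
# As Shot values quoted above; the clip point and sample raw value are
# illustrative assumptions.
wb_rggb = (2185, 1024, 1024, 1526)
CLIP = 16383  # assumed 14-bit saturation level

# Coefficients are normalized to the green channels' 1024
r_mult, g1_mult, g2_mult, b_mult = (v / 1024 for v in wb_rggb)
print(round(r_mult, 2))  # → 2.13

# A hypothetical red pixel below saturation in the raw data...
raw_red = 9500
# ...blows out once the white balance multiplier is applied
print(raw_red < CLIP, raw_red * r_mult > CLIP)  # → True True
```

This is the whole story in two lines: the raw value is fine, the white balanced value is not.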

Sidebar: white balance is always represented internally as RGGB coefficients, not with the color temperature/tint that RAW converters and cameras present to you. Color temperature is an “artificial” construct to handle these numbers in a more user-friendly way. And the way these coefficients are converted into color temperature is a proprietary process for each converter. This is why you get completely different whites using the same Kelvin value in different converters.

Now let’s take a look at what RAW converters – Adobe Camera RAW 7.3, to be exact – think about the same image (Capture One displays a similar one).

Histogram in ACR 7

Part of the black magic of RAW conversion is graceful handling of the roll-off into overexposure. The fact that a non-overexposed channel can be blown during white balancing is what makes highlight recovery possible: converters are mostly taming data that is blown only by the currently set white balance. As the common myth goes, RAW files have more headroom in the highlights. And as with most myths, there is truth lurking behind it: because white balance is not fixed in RAW captures, converters can extract more information from them than you could get from a JPEG, where clipped highlights (even if clipped only by the current white balance setting) are lost forever.

Above I showed you the case where the JPEG histogram shows overexposure while the RAW histogram doesn’t. It can happen the other way around, which is even more dangerous. I recommend reading Alex Tutbalin’s article on white balancing problems for an example. Alex is the author of LibRaw, the library Kuuvik Capture also uses for extracting RAW data from proprietary file formats, such as Canon’s CR2.

On Gamma Correction

If you read Alex’s article, you saw the Rawnalyze tool, and if you try both it and Kuuvik Capture, you’ll get different histograms. Why? Because what Rawnalyze displays is the rawest raw data possible. That is, it doesn’t map the camera’s black level to the left side of the histogram and the maximum saturation level at a given ISO to the right (in other words, it does not scale the data). In KCapture I wanted the RAW histogram to look familiar to photographers (including myself), so instead of blindly displaying the raw data the app does a little processing: scaling (so black is on the left and white is on the right, instead of somewhere in the middle of the histogram), and gamma correction.

RAW data is linear by default: the highest exposure stop occupies the entire right half of the histogram, the next stop 1/4 of it, and so on. The result is that a linear RAW histogram pushes all the data to the left, making it hard to judge shadow exposure and to see whether there is clipping there. So KCapture corrects the data the same way it happens during RAW conversion: by applying a gamma curve to make exposure stops roughly equal sized on the display.
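Here’s a small sketch of where stop boundaries land before and after such an encoding (the 2.2 gamma is an assumed illustrative value; KCapture’s actual curve may differ):

```python
# Where each exposure stop lands on a normalized (0..1) histogram,
# linear vs gamma-encoded. The 2.2 gamma is an assumed value.
GAMMA = 2.2

def stop_boundary(stops_below_clip, gamma=GAMMA):
    """Histogram position of a level N stops below clipping, after encoding."""
    linear = 0.5 ** stops_below_clip
    return linear ** (1.0 / gamma)

for n in range(5):
    print(f"{n} stops down: linear {0.5 ** n:.3f}, encoded {stop_boundary(n):.3f}")
```

Linearly, the top stop spans half the range while the fifth stop down is squeezed into a few percent on the far left; after encoding, the boundaries are spaced much more evenly, which is why shadow detail becomes readable on the display.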

Mapping Theory to Practice

Let’s draw some conclusions. First, (1) white balanced in-camera histograms are not suitable for checking overexposure. RAW histograms are markedly better at this, but (2) RAW histograms can only show physical overexposure of a channel (or channels), and are blind to highlight clipping induced by white balancing. Most of that clipping is curable in the converter with some form of highlight recovery, however. And (3) the final word on highlight clipping is said by the RAW converter after white balance has been set.

(2) is why I called Michael’s second article a utopia. When we wrote the specification for Kuuvik Capture back in December 2011, our goal was to implement the ETTR optimization described in that article. It turned out rather quickly that you can’t do this unless you have the final white balance set – which will not happen until later in the process. And even when you can do it, you shouldn’t always ETTR. Sometimes artificially overexposed images will push noise from the shadows into the sky (scroll down to the ETTR section in the linked article for an example). And ETTR can even be behind skies turning purple.

(3) is the real reason why camera manufacturers are unable to show the same histogram as your converter (unless they use the same algorithms, of course – which is highly unlikely).

So how do I use this information in practice?

When I’m shooting tethered (which is the majority of cases when I do landscape work), I rely on Kuuvik Capture to check the physical exposure from RAW histograms. If your physical exposure is bad, it won’t get any better during RAW conversion. What I look for here is potentially uncorrectable overexposure (non-specular highlights), and as a Canon shooter cursed with muddy shadows, I also check for underexposure. I usually push the exposure to the right when the shadows are in danger or when there’s plenty of room for the highlights.

Then I pass the image to Capture One for the final decision. Capture One 6 had some issues with highlight roll-off handling on the 5D3, so I had to back off a bit from extreme highlights, but v7 fixes this problem (be sure to use the v7 process).

Free (that is, non-tethered) shooting is a different beast. One needs a trick to cancel the side effects of white balancing.

Unitary White Balance

Guillermo Luijk came up with the idea of UniWB in 2008. UniWB is basically a custom white balance that sets the WB coefficients to 1 (hence the name unitary). It can be useful in the field if you can live with the ugly green images (to avoid that, I usually use UniWB only for exposure tuning and switch back to AWB for the real shot).

The real downside of UniWB used to be the tedious process of obtaining the magenta target image. Having control over the WB coefficients, you can obtain the UniWB setting in Kuuvik Capture with just two clicks: on the Set Unitary White Balance item in the Camera menu.
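To see why UniWB makes the camera’s histogram trustworthy, here’s a minimal sketch (the raw values and the 14-bit clip point are illustrative assumptions; the real saturation level varies by camera and ISO):

```python
# Why a unitary white balance makes the in-camera histogram honest:
# with all coefficients at 1.0, nothing is scaled past the clip point
# unless it was physically clipped. Values are illustrative.
CLIP = 16383  # assumed 14-bit saturation level

def clipped_fraction(channel, coeff):
    """Fraction of pixel values at or above the clip point after WB scaling."""
    return sum(1 for v in channel if v * coeff >= CLIP) / len(channel)

red = [9000, 12000, 15000, 16000]          # hypothetical raw red values
print(clipped_fraction(red, 2185 / 1024))  # AWB coefficient: everything clips
print(clipped_fraction(red, 1.0))          # UniWB: nothing clips
```

With the unitary coefficient, the white balanced histogram equals the raw histogram, so the clipping warnings finally reflect what the sensor actually recorded.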

Capture One 7.1 Can Give You Extra Pixels

Remember my post describing what’s in the RAW file vs. what’s on the sensor? I recommend reading it before proceeding.

Last week I updated to the latest and greatest Capture One release, 7.1. Business went on as usual until last Sunday, when I got a big surprise. I was processing files from the morning shoot and wanted to check how a different crop would look on an image, so I activated the crop tool. What I saw is depicted below.

The C1 7.1 surprise: getting more pixels than the default crop

Note the gray border around the image. It shows that the image is cropped, although I hadn’t cropped it at all at this point. You can even move the crop border to include these border areas! In this example I got a 5843 x 3867 image instead of the 5760 x 3840 default crop. That’s 2.1% more image data.
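The arithmetic, for the curious (dimensions taken from the example above):

```python
# Extra image area from the wider crop (dimensions from the example above)
default_w, default_h = 5760, 3840
wide_w, wide_h = 5843, 3867

extra = wide_w * wide_h - default_w * default_h
pct = 100 * extra / (default_w * default_h)
print(extra)          # → 476481 extra pixels
print(f"{pct:.1f}%")  # just over 2% more image data
```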

There’s a limitation, however: I was only able to produce this extra data with lenses that have a lens correction profile. I suspect this “feature” is a leftover from distortion correction, available in cases where the correction doesn’t eat up the entire border around the default crop. Pulling the Distortion slider even slightly to the left removes those extra pixels.

C1 3.x had a similar feature, where I was able to extract more pixels than with other converters. Nowadays you can always get those pixels using dcraw, but I avoid using it for anything except research – commercial converters are that much better.

I hope this isn’t treated as a bug by Phase One, and I would like to see this feature in upcoming releases. After all, default crop is just that: default crop. If one can extract more pixels with less waste, then it’s a good thing.

Update 3/11/2013

Following Jeppe’s comment, I examined the situation a bit more deeply by closely watching border pixels while modifying the distortion correction amount. Jeppe’s right: you don’t get pixels outside of the default crop. Here’s an illustration of what happens:


Gray is the original image, blue is what you get after distortion correction. What I see is the ability to increase the crop from the original image size to recapture some of the pixels that would otherwise be lost to distortion correction (green). Am I right, Jeppe? Anyway, it’s a nice touch, as you have the option to retain more image data.

Capture One 7.0.1 – OpenCL Works on 10.8

A few days ago Phase One released Capture One 7.0.1, which fixes at least two of the issues I complained about in my original short piece.

  • OpenCL acceleration now works on OS X 10.8. The application is MUCH faster! Keep in mind that it will drain your battery rather quickly (you can turn off OpenCL acceleration if you think things were better the old way). Seems that beating on the heads of Phase One support personnel was a worthwhile exercise… 🙂
  • Overly aggressive, detail-obliterating luminance sharpening seems to work fine now. I’ve only had a little time to test it on some of the problem images (where I originally found the issue), but all seems fine so far.

I had no time to test the catalog functionality again (actually, I’ve abandoned the idea of working with the catalog after all). And Canon tethering support is the same crap it was.

Capture One Pro 7 Quick Review

Choosing a RAW converter is a highly personal choice, kind of like choosing a type of emulsion was in the old days. I have a license for the majority of the big players in the market (Capture One Pro, DxO Optics Pro Elite and Lightroom/Photoshop), partly out of curiosity, and partly because the look they produce can change significantly from release to release. And also because you can’t know in advance in which one a given image will look better. But the starting point, the converter I always try first, is Capture One Pro. So I was very excited to see a new release. Following are my observations after working a few days with the software.

Retina Display Support

To be honest, this single feature is worth the upgrade price for me. The immersive potential of working with files on a print-resolution screen is simply mind-blowing. You must try it in person to understand all of the benefits and experience this visual joy.

New Processing Engine

This is an incremental update to what I consider the best image quality in the industry. Don’t believe all the marketing bullshit, though. The fairly aggressive noise reduction in the Capture One 7 process is disastrous to details on anything I shot below ISO 800. I don’t say that the noise reduction is bad, I just find the default of 50 way too much. If you are used to the huge amount of detail Capture One can produce, just swing the luminance noise reduction slider back to zero or a small value. And presto, you’ll get back all the details. Other than that, I prefer the color from the new imaging engine. It also handles highlights on files from my 5D3 way better than the v6 process. I usually ETTR (expose to the right) to optimize my exposures, so handling highlights well is of major importance to me. Lightroom 4 does a great job in this regard, and the v7 process is comparable. The net result is that I don’t have to back off about 1/3 of a stop from the correct ETTR exposure to protect the extreme highlights.

It is a pity that the engine does not work at all in accelerated OpenCL mode on my retina MacBook Pro. The log file says:

OpenCL : found platform Apple, OpenCL Version 
       : OpenCL 1.2 (Aug 24 2012 00:53:09)
OpenCL Device : GeForce GT 650M
OpenCL Driver Version : CLH 1.0
OpenCL Compute Units : 2
OpenCL not enabled in Mac OS 10.8

I seriously hope that Phase will fix this in upcoming updates. It does take advantage of the 4-core/8-thread processor, however, so overall responsiveness is very good. I just want more, as the GPU in the machine is capable of 4x the floating point performance of the CPU (253 vs 64 gigaflops)…

Catalogs

This is the same crap I struggled with in the Lightroom 1 days… I tried to import my RAW archive containing about 30,000 images. After about 40 minutes of crunching the application hung, so I had to kill it. This left the catalog corrupted; not a single folder showed up. Although Phase does not recommend large sessions, I’m still using a single session for all my images. If it gets lousy, I simply throw it out and start from scratch without any ill effects. I think the catalog is full of bugs (is this the crap they bought from Microsoft a few years back?) and operational gotchas, so I’ll skip it for the foreseeable future. One piece of advice if you start to experiment with it: importing into the catalog DOES NOT import previous image settings, UNLESS you explicitly mark it so with a semi-hidden check box on the import dialog. This should be the other way around!

Live View with Canons

This is a rather irritating move from Phase: releasing an app with a feature that does not work at all! There are no workarounds; it just doesn’t work. Actually, this is just another chapter of the buggy Canon EOS SDK saga that plagues OS X 10.7.5 and 10.8.x users. In my opinion they should have disabled the feature until Canon releases an SDK that finally works (their EOS Utility now does, so a fixed SDK should be on the horizon). Because of this, I was unable to test live view at all.

Verdict

Retina support +++, new processing engine ++ and –, the catalog is a big – (I would not use it in its current form), and live view for Canons is another big – (I’m using another solution, by the way). The new release brings improvements to the things I care about (especially retina display support), and is rather lousy at things I don’t give a damn about. So in the end I’m very happy with this release!

Update 12/2/2012:

Version 7.0.1 fixes some of the issues reported here.