Developing with FotoV
FotoV is a tool I am developing that converts RAW files into finished photographs; I already use it for all of my images.
Why FotoV specifically?
The overall goal is to create a tool that I, and others who like natural-looking images, can use as an alternative to conventional software such as Lightroom, Darktable, or Affinity: a way to make the most of the captured sensor data without depending on what any one individual thinks "looks right". Instead, the tool makes that judgment, and the judgment itself becomes something we can discuss and scrutinize when deciding how we want images to come into being.
Conventional tools are often built on compromises that are very understandable historically and practically. Some of the trickier questions concern reliable automatic white balance, the tone curve, and the color profile. At some point I want to explain more about those things on this page, but for now I thought I would focus on what I find trickiest, namely demosaicing and what FotoV is trying to do differently with it.
Demosaicing and CFD
On top of virtually all color camera sensors sits a checkerboard-like filter layer that blocks all light except one color at each photosite. The most common arrangement is a Bayer filter, with one red, two green, and one blue photosite per 2x2 area of the sensor. In short, this means that to get an image with as many pixels as the sensor has, but with full color information everywhere, software must reconstruct two thirds of the data so that a complete R, G, B triplet exists at every pixel. That is not so easy to do well.
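To make the missing two thirds concrete, here is a small sketch (my own illustration, not FotoV code) that samples a full-color image through an RGGB Bayer pattern, keeping exactly one channel value per pixel:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-color image through an RGGB Bayer pattern.

    rgb: (H, W, 3) array. Returns an (H, W) mosaic in which each pixel
    keeps only one of its three channel values, plus a map saying which
    channel (0=R, 1=G, 2=B) was kept where.
    """
    h, w, _ = rgb.shape
    # Channel per position in each 2x2 tile:  R G
    #                                         G B
    channel = np.empty((h, w), dtype=int)
    channel[0::2, 0::2] = 0  # red
    channel[0::2, 1::2] = 1  # green
    channel[1::2, 0::2] = 1  # green
    channel[1::2, 1::2] = 2  # blue
    rows, cols = np.indices((h, w))
    mosaic = rgb[rows, cols, channel]
    return mosaic, channel

# A 4x4 scene has 48 channel samples; the mosaic keeps only 16 of
# them (one third). Demosaicing has to infer the other 32.
rgb = np.random.rand(4, 4, 3)
mosaic, channel = bayer_mosaic(rgb)
```

Note the 2:1:1 ratio of green to red and blue in the channel map: the sensor measures green twice as densely as the other two colors.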
Many established methods try to solve that by assuming a strong chromatic correlation between green and red or blue. That is not an unreasonable assumption in itself, and it often works. The problem is that the same assumption can fail quite hard as soon as the subject contains abrupt chromatic variation.
When that happens, you do not just get a little blur or a little noise. You can get false structure, odd color cancellations, or other errors that depend more on the algorithm's quirks than on the scene itself.
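A one-dimensional caricature (my own, with made-up numbers) shows how the correlation assumption plays out. A common classic technique interpolates the color difference R − G, assumed to vary slowly, rather than R itself. Where the assumption holds, the result is exact; across an abrupt chroma edge, it invents a value the scene never contained:

```python
import numpy as np

# One Bayer-like row: G is known everywhere, R only at even positions.
G      = np.array([0.5, 0.6, 0.5, 0.6, 0.5, 0.6])
R_true = np.array([0.8, 0.9, 0.8, 0.3, 0.2, 0.3])  # abrupt chroma edge
# On the left, R - G is exactly 0.3 (assumption holds);
# on the right, it is exactly -0.3 (assumption fails at the edge).

diff   = R_true[0::2] - G[0::2]          # R - G at the measured positions
interp = 0.5 * (diff[:-1] + diff[1:])    # averaged onto positions 1 and 3

R_est = R_true.copy()
R_est[1] = G[1] + interp[0]  # 0.9 -- matches R_true: assumption held
R_est[3] = G[3] + interp[1]  # 0.6 -- R_true is 0.3: the edge produced
                             # a value that exists nowhere in the scene
```

This is the kind of failure described above: not gentle blur, but false structure driven by the algorithm's assumption rather than by the scene.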
The method used in FotoV therefore avoids that assumption entirely and limits itself to inferences drawn from meaningfully comparable information, doing the best it can within that constraint. This is why I call the strategy 'Commensurability-first demosaicing' (CFD).
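FotoV's actual algorithm is not spelled out here, so to make the principle tangible, here is the crudest possible instance of "only compare like with like": estimating the red plane using nothing but the measured red samples. This is emphatically not CFD itself, only an illustration of inference restricted to commensurable data:

```python
import numpy as np

def red_plane_same_channel(mosaic):
    """Estimate a full red plane from an RGGB mosaic using only the
    red samples (nearest-neighbor upsampling of the red subgrid).

    NOT FotoV's algorithm -- just the simplest example of inference
    that never consults a green or blue measurement when filling in
    a missing red value.
    """
    red = mosaic[0::2, 0::2]  # the positions that actually measured red
    return np.repeat(np.repeat(red, 2, axis=0), 2, axis=1)

# Demo mosaic (values stand in for raw sensor readings):
mosaic = np.arange(16, dtype=float).reshape(4, 4)
red_full = red_plane_same_channel(mosaic)
```

Per-channel nearest-neighbor like this is blockier than cross-channel methods when their assumption holds, but it can never manufacture structure out of a green-red correlation that was not there.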
An example of the effect of this can be seen in the case below, where I have taken screenshots of a close-up of a phone screen that has been developed with different demosaicing methods. It is something of a nightmare scenario for all methods, considering how abruptly all the colors vary over such short distances. The reference in that case is a Pixel Shift version of the same image, where the sensor has physically moved between four exposures so that the combined version contains full color information.
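Why does Pixel Shift make a good reference? In idealized form, four exposures with the sensor shifted by one photosite mean every location is eventually measured through each color of the RGGB pattern, so no channel has to be inferred at all. A sketch of that combination (ignoring the motion and alignment problems a real pipeline faces):

```python
import numpy as np

RGGB = np.array([[0, 1],
                 [1, 2]])  # channel per position in a 2x2 tile

def shifted_channel_map(h, w, dy, dx):
    """Which channel each pixel records after shifting by (dy, dx)."""
    rows, cols = np.indices((h, w))
    return RGGB[(rows + dy) % 2, (cols + dx) % 2]

def combine_pixel_shift(frames, channel_maps):
    """Merge four one-photosite-shifted mosaics into a fully
    measured RGB image -- no demosaicing needed."""
    h, w = frames[0].shape
    rows, cols = np.indices((h, w))
    total = np.zeros((h, w, 3))
    count = np.zeros((h, w, 3))
    for frame, chan in zip(frames, channel_maps):
        total[rows, cols, chan] += frame
        count[rows, cols, chan] += 1
    return total / count  # green is measured twice and gets averaged

# Demo: sample a known scene through the four shifted patterns,
# then recover it exactly.
scene = np.random.rand(4, 4, 3)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
maps = [shifted_channel_map(4, 4, dy, dx) for dy, dx in shifts]
rows, cols = np.indices((4, 4))
frames = [scene[rows, cols, m] for m in maps]
recovered = combine_pixel_shift(frames, maps)
```

Across the four shifts, each pixel sees all four entries of the 2x2 tile: red once, green twice, blue once. That is why the combined image contains full color information and can serve as ground truth for the comparison.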
CFD
Feel free to click RGB above the Pixel Shift image to see it in full color, which may make it clearer what the image depicts. The comparison images themselves are grayscale, since it is the red channel that is isolated in them.
CFD holds the red lattice together better than the other examples, even if it does not match the reference perfectly.