One of my past blog posts, “Monochrome digital”, dealt with the advantages of a monochrome camera sensor. The ones that can be objectively measured include higher dynamic range, better resolution in terms of sharpness and amount of detail, and unparalleled low-light performance. Unfortunately, cameras offering a true black and white sensor are as few as they are pricey, but if you’re ready to forego colour photography forever (well, almost, keep reading…) and willing to treat your camera sensor as if it were a lottery scratchcard, there might be a radical solution to your predicament.
The most popular type of digital camera sensor nowadays is by far the Bayer sensor. In order to register colour, each pixel of the sensor has a tiny colour filter in front of it and therefore records the intensity of light within that colour's band of wavelengths. Each 2×2 pixel array features one red filter, one blue filter and two green ones (the human eye is most sensitive to green), and the raw mosaic formed this way then gets interpolated – demosaiced – so that every pixel ends up with full red, green and blue values, covering the 16 million colours of the RGB colour space. The same principle is employed in certain other sensor types, as long as they rely on a single-layer colour filter array, whatever its pattern.
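To make the layout concrete, here's a quick sketch (my own illustration, not from any camera SDK) of which sensor sites carry which filter in the common RGGB variant of the Bayer pattern:

```python
import numpy as np

def bayer_masks(h, w):
    """Boolean masks marking the red, green and blue filter sites
    of an RGGB Bayer sensor of size h x w (even dimensions assumed)."""
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    red = (rows % 2 == 0) & (cols % 2 == 0)   # top-left site of every 2x2 tile
    blue = (rows % 2 == 1) & (cols % 2 == 1)  # bottom-right site of every tile
    green = ~(red | blue)                     # the two remaining sites
    return red, green, blue

r, g, b = bayer_masks(4, 4)
# Every 2x2 tile contains 1 red, 2 green and 1 blue site,
# so a 4x4 sensor has 4 red, 8 green and 4 blue sites.
```

Demosaicing then estimates the two missing colour values at each site from its neighbours – which is exactly the interpolation step a debayered sensor no longer needs.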
Atop that colour filter array lies a layer of microlenses that aren’t essential for producing an image but which play a role in enhancing image quality in the corners of the image, where light would otherwise be cast onto the sensor at an acute angle.
This seems like a very fragile construction, considering sensors nowadays feature tens of millions of pixels, and every single one of those pixels has its own colour filter and microlens. You wouldn't want to disturb this delicate structure, as it would be virtually unfixable.
Debayering a sensor
When you buy your first camera, often the first non-essential accessory you’re going to get is a protective case (and a protective filter for your lens, which actually might cause more harm than good, but that’s a whole other matter…). You want to keep it pristine forever and that first scratch is a painful blow. Later on you start treating your gear less and less like an egg but you still draw a line somewhere. You might attempt to clean the sensor by yourself but you won’t make a chisel out of the plastic end of a paintbrush and start hacking away… Or will you?
This is precisely something you might want to try if you have a spare old camera lying around and nerves of steel. Especially if you’re into astronomy and astrophotography. The procedure was pioneered by Raymond Collecutt of Whangarei, New Zealand back in 2012 and involves taking the entire sensor module out of the camera body, taking the protective glass filter off it, and then literally scratching off the layer of microlenses and colour filters.
Here’s a video showing all the gory details involved. It looks like something that would be really, REALLY bad for the camera. But, if done skillfully, it doesn’t damage the sensor itself, only strips it of its colour-registering capabilities.
Here’s the result of a partially scraped-off layer of colour filters, showing a drastic improvement in the sensor’s sensitivity. At the following link you can also compare samples in terms of sharpness.
It’s one of the most drastic camera mods I’ve seen around, but it’s not as rare as one might think. Google and ye shall find many sample photos from debayered cameras, some of them even as fancy and expensive as the Canon 6D. It’s particularly popular among astrophotographers, as it allows them to achieve sharper, more detailed images like these. If you don’t see the point of voiding your camera’s warranty and stripping it of the functionality that most photogs out there take for granted, it’s not for you. It’s also definitely not for the faint-hearted, the down-to-earth practical types, the point-and-shooters, the ones claiming black and white photography is an anachronism, or people who claim setting your camera to spit out black and white JPEGs will make it “just like the £5000 Leica Monochrom!” No, it won’t. Better shoot RAW. Oh, and also – shoot RAW. Seriously, just shoot RAW.
Monochrome to colour
A while back I was on my way to a gig, it was a lovely late summer evening and the sky was all sorts of warm tones. An idea popped into my head – is it possible to take colour photos with my Leica Monochrom? The three colour principle from back when colour photography was born in the 1800s could theoretically be applied to a digital process so I didn’t see why not. A quick Google search revealed one such attempt, which in a way was disappointing as I was quietly hoping to venture into uncharted waters. But at least I knew I was on the right track.
The three colour principle requires the shot to be taken 3 times in black and white through 3 colour filters – red, green and blue – and the frames then combined to create a full-colour photo. The stunning example below was shot by Sergey Prokudin-Gorsky in 1911. On the right you can see the 3 frames that, when combined, produce the image shown on the left.
Unfortunately, it is difficult to find filters that are pure red, green and blue. I guess that would make them less practical, because they would only let through such a narrow range of wavelengths. But for a proof of concept I figured Cokin A-Series red, green and dark blue filters, which can be had really cheap off eBay, would be sufficient.
Initial tests need to be performed to see how to correct exposure with each of the filters. Using something white within the photographed scene we can evaluate how to expose each shot as a white surface should reflect red, green and blue light in equal measures and therefore should come out in the photos as the same shade of gray. In my case, the red filter photo was the base at 0EV, green needed to be corrected by -1EV and blue by -2EV.
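The arithmetic behind those corrections is just a base-2 logarithm of the brightness ratio: if the white patch comes out twice as bright through one filter, that shot needs 1 EV less exposure. A small sketch (my own helper, with made-up linear readings chosen to reproduce the corrections above):

```python
import math

def ev_correction(base_level, measured_level):
    """EV to dial in so a white patch measured at `measured_level` matches
    the base shot's `base_level` (both linear brightness values)."""
    return -math.log2(measured_level / base_level)

# Hypothetical linear white-patch readings through each filter:
red, green, blue = 100, 200, 400
print(ev_correction(red, red))    # 0.0  -> red is the base shot
print(ev_correction(red, green))  # -1.0 -> stop the green shot down 1 EV
print(ev_correction(red, blue))   # -2.0 -> stop the blue shot down 2 EV
```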
Once this is done, you can take your test shots. Notice that the music stand used in my sample photos is red, so it looks brighter in the shot taken through the red filter. The next step is to apply the colours to the photos. Digital photos have 3 channels – red, green and blue – and each channel can have a value between 0 and 255. This creates a colour space of over 16 million colours (256×256×256 possible combinations). Shades of gray always have equal values of red, green and blue (black is 0.0.0, white is 255.255.255, and a shade of gray somewhere in between can be, for example, 53.53.53). When creating our colourised shots, we basically change from grayscale to redscale, greenscale and bluescale. Black 0.0.0 remains black 0.0.0, white turns from 255.255.255 into 255.0.0 (for red), and an example tone of gray turns from 53.53.53 into 53.0.0 (once again for red). In simpler terms, black remains black, and white turns into pure red, green and blue. Everything in between turns from a shade of gray into a shade of red, green or blue. Below you can see the 3 black and white shots and the 3 colourised ones (click through for a larger version).
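In code, the grayscale-to-redscale (or greenscale, or bluescale) mapping is just moving the single gray channel into one RGB channel. A minimal NumPy sketch – my own helper, not part of any particular editor's workflow:

```python
import numpy as np

def colourise(gray, channel):
    """Turn an H x W grayscale image (0-255) into an H x W x 3 image with
    only the given RGB channel (0=red, 1=green, 2=blue) set, so 53.53.53
    becomes 53.0.0 for red, 0.53.0 for green, 0.0.53 for blue."""
    out = np.zeros(gray.shape + (3,), dtype=gray.dtype)
    out[..., channel] = gray
    return out

gray = np.array([[0, 53, 255]], dtype=np.uint8)
redscale = colourise(gray, 0)
# redscale[0] is now [(0,0,0), (53,0,0), (255,0,0)]
```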
Finally, we need to stack the 3 colourised shots in separate layers and add them to each other. This way 0.0.0 (black) + 0.0.0 + 0.0.0 will remain 0.0.0, 255.0.0 (red) + 0.255.0 (green) + 0.0.255 (blue) will result in 255.255.255 (white), and all tones in between will result in one of the 16 million plus colours that can be achieved in the RGB colour space. The raw result looks like this:
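The additive stacking step can be sketched the same way – summing the three colourised layers channel by channel, with a clip guard in case real-world filters overlap and a channel sums past 255 (again, my own minimal NumPy version, not the exact editor workflow):

```python
import numpy as np

def stack_layers(red_layer, green_layer, blue_layer):
    """Add three H x W x 3 colourised layers; a uint16 intermediate and a
    clip prevent wrap-around if any channel overflows 255."""
    total = (red_layer.astype(np.uint16)
             + green_layer.astype(np.uint16)
             + blue_layer.astype(np.uint16))
    return np.clip(total, 0, 255).astype(np.uint8)

# A white point: 255.0.0 + 0.255.0 + 0.0.255 -> 255.255.255
r = np.array([[[255, 0, 0]]], dtype=np.uint8)
g = np.array([[[0, 255, 0]]], dtype=np.uint8)
b = np.array([[[0, 0, 255]]], dtype=np.uint8)
white = stack_layers(r, g, b)  # [[[255, 255, 255]]]
```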
The colours look quite dull, probably due to the filters not being pure enough in terms of colour. After adjusting vibrance or saturation (I prefer vibrance) and tweaking the white balance, the photo looks a lot more realistic.
A stable tripod needs to be used in order for the shots to be perfectly aligned. This technique, for obvious reasons, will only really work for static subjects. It’s highly impractical compared to simply using a colour camera, but produces a colour photo as sharp as the black and white one would be. As a matter of fact, the image retains all of the advantages of a photo taken with a true black and white camera. But in colour.
Moving subjects will produce misalignments between the colourised layers (see photo below), which can be used creatively to achieve a pretty psychedelic effect (notice how the clouds were moving across the sky, and the different-coloured pedestrians walking across the frame). This technique is not the least bit invasive and doesn’t require any further mods to your camera body. It should also be noted that the same process can be used for video, but it will be nearly impossible to avoid misalignments, even when using 3 separate cameras simultaneously.
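For small, rigid misalignments (a nudged tripod, rather than a moving subject) the layers can in principle be re-registered in software. Proper tools exist for this (e.g. scikit-image's `phase_cross_correlation`); a bare-NumPy sketch of the underlying idea, limited to whole-pixel shifts, might look like this:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (dy, dx) to roll `img` by so it lines up with
    `ref`, via phase correlation (both arrays H x W, assumed to differ by
    a near-circular translation only)."""
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:  # wrap large indices around to negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

# Example: a layer shifted down 2 and right 3 pixels relative to the reference
rng = np.random.default_rng(0)
ref = rng.random((16, 16))
img = np.roll(ref, (2, 3), axis=(0, 1))
dy, dx = estimate_shift(ref, img)          # (-2, -3)
aligned = np.roll(img, (dy, dx), axis=(0, 1))  # now matches ref
```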