1:1 crop from a 7D RAW video frame.
Processed with RawTherapee
(sharpening, noise reduction).
A while ago I bought a Canon 7D DSLR camera to use for taking pictures and making videos. It's had a good life so far, producing thousands of pictures and many videos from different events.
Like many people, I set the camera to take both JPEG and RAW photos. Normally, cameras apply many algorithms to the data coming off their sensors to produce the best possible finished JPEG image. Typically this involves the camera making snap decisions that throw away information and cannot be undone later.
RAW photo files, on the other hand, record the original unmodified data from the sensor without applying any of those irreversible decisions. Instead, those decisions can be made by you, peacefully, later, using a software tool that converts the RAW image data to a standard image file.
I'm by no means a photography pro. But I've been very happy to have access to those RAW files whenever I have, by luck or slowly increasing skill, caught a promising picture, so I can try to make it even better later. (In case you're interested, I've been using the nice and free tool RawTherapee for the processing.)
So what is a RAW file, exactly? Well, most camera sensors are actually black and white. To capture colour, they have a filter pattern printed on top which (sadly) throws away much of the light arriving at the sensor, so that each pixel sees only red, green, or blue. This is usually arranged in a so-called Bayer pattern, where every 2x2 block contains 1 red, 1 blue, and 2 green pixels. The RAW file is just that raw pattern of pixels. To turn it back into a full colour image, the missing values must be guessed ("interpolated") from the surrounding pixels.
Bayer colour filter pattern
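To make the interpolation idea concrete, here is a minimal sketch of bilinear demosaicing of an RGGB Bayer mosaic (plain Python, a tiny hypothetical 4x4 example; real converters use far more sophisticated algorithms): each missing colour value is estimated by averaging the nearest pixels that did record that colour.

```python
# Minimal bilinear demosaic sketch for an RGGB Bayer mosaic.
# Each sensor pixel records one colour; the other two are estimated
# by averaging the nearest neighbours that did record that colour.

def bayer_channel(y, x):
    """Which colour an RGGB-pattern pixel records: 'R', 'G' or 'B'."""
    if y % 2 == 0:
        return 'R' if x % 2 == 0 else 'G'
    return 'G' if x % 2 == 0 else 'B'

def demosaic(mosaic):
    """Return an HxW grid of (r, g, b) tuples from a raw Bayer mosaic."""
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            rgb = {}
            for c in 'RGB':
                if bayer_channel(y, x) == c:
                    rgb[c] = mosaic[y][x]   # measured directly
                else:
                    # average the same-colour pixels in the 3x3 neighbourhood
                    vals = [mosaic[ny][nx]
                            for ny in range(max(0, y - 1), min(h, y + 2))
                            for nx in range(max(0, x - 1), min(w, x + 2))
                            if bayer_channel(ny, nx) == c]
                    rgb[c] = sum(vals) // len(vals)
            row.append((rgb['R'], rgb['G'], rgb['B']))
        out.append(row)
    return out

# A flat grey scene: every sensor pixel reads 100, so interpolation
# should reproduce (100, 100, 100) everywhere.
flat = [[100] * 4 for _ in range(4)]
print(demosaic(flat)[1][1])
```

On a flat scene the guesses are exact; it's at sharp edges where simple averaging breaks down, which is why RAW converters compete on their interpolation quality.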
The other important difference between a RAW file and a typical JPEG image is the bit depth. Most computer screens and TVs are 8-bit, meaning they can show 256 different levels for each red, green, or blue pixel. But out in the real world, light comes in a much wider range of strengths than those 8 bits can cover. For example, daylight can be 100,000 times stronger than moonlight.
Luckily, camera sensors can record a good deal of that range. The 7D for instance has a 14-bit sensor, which means every pixel in the RAW file can have any of 16384 levels. In fact, one of the big decisions to make when processing a RAW file is just how to squeeze all those original 16384 levels down to just 256 for display.
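That squeeze from 16384 levels to 256 is essentially a tone curve. A straight linear scale would crush the shadows, so in practice something like a gamma curve is applied; here is a minimal sketch (plain Python, and the gamma value of 2.2 is just an illustration, not what any particular converter uses):

```python
# Map a 14-bit sensor value (0..16383) to an 8-bit display value (0..255).
# A straight linear scale (value // 64) spends most of the 256 output
# levels on highlights; a gamma curve gives more of them to the shadows.

def tone_map(value, gamma=2.2):
    """14-bit linear sensor value -> 8-bit display value via a gamma curve."""
    normalized = value / 16383.0            # 0.0 .. 1.0
    return round(255 * normalized ** (1.0 / gamma))

# A deep-shadow value keeps far more detail under the gamma curve
# than under the linear scale, which maps it to just 256 // 64 = 4:
print(tone_map(256))
print(256 // 64)
```

This is exactly the kind of irreversible decision the camera makes for you in JPEG mode, and the one you get to make yourself with a RAW file.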
Regarding video recording, the 7D, like many other consumer cameras, records "FullHD" (1920x1080) at 24, 25 or 30 frames per second using the H.264 compression format. In fact, the 7D actually reads from the sensor at only 1728x972, throwing away 8 out of every 9 original pixels, and then scales the result up a little to 1920x1080. After that it converts from 14-bit down to 8-bit, with the same irreversible decisions as in the RAW-to-JPEG conversion, and compresses the resulting data by a factor of about 10.
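To see just how much data gets thrown away, it's worth doing the arithmetic. A quick sketch (plain Python; the H.264 estimate simply applies the 14-to-8-bit conversion and the roughly 10x compression mentioned above, it is not a measured bitrate):

```python
# Back-of-the-envelope data rates for the 7D's video pipeline.

width, height, bits, fps = 1728, 972, 14, 24

raw_bytes_per_frame = width * height * bits // 8
raw_mb_per_second = raw_bytes_per_frame * fps / 1e6
print(f"RAW frame: {raw_bytes_per_frame / 1e6:.1f} MB")
print(f"RAW data rate: {raw_mb_per_second:.0f} MB/s")

# After the 14 -> 8 bit conversion and ~10x H.264 compression
# described above, the recorded stream is roughly:
h264_mb_per_second = (raw_mb_per_second * 8 / 14) / 10
print(f"H.264 estimate: {h264_mb_per_second:.0f} MB/s")
```

The RAW stream comes out around 70 MB/s, which explains both the memory card and hard disk troubles described below.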
The results are quite OK when you play a video back, but if you pause to look at a single frame, it's clearly much worse than a RAW still image taken with the same camera.
The 7D was launched in late 2009, which makes it almost an antique in consumer electronics terms (that was the same year as the iPhone 3GS). It did get a new firmware version from Canon in 2012, but that didn't include much in the way of major features.
However, shortly after that, the hacker community delivered what Canon would not. Magic Lantern, a software add-on for Canon EOS cameras, was finally ported to the 7D. It had taken a while because the 7D had a more exotic hardware architecture than the other EOS cameras (two Canon "Digic 4" processors instead of just one).
Magic Lantern made the 7D much more interesting for taking videos. Useful features like live "focus peaking" - highlighting the in-focus areas on the display - were a big improvement over the normal interface. And you could boost the H.264 encoding bitrate to try to record more of the original data.
Earlier this year, the Magic Lantern team worked out how to make the EOS cameras record "RAW video" instead of just H.264. Exactly as the name suggests, this is 14-bit RAW images recorded at video frame rates, typically between 24 and 30 FPS. This worked best on the expensive 5Dmk3, at a full 1920x1080. During the last few weeks, this feature has also come to the 7D at its native video resolution of 1728x972.
Getting that working on my 7D required a bit of investment. Since a RAW video is more than 10 times bigger than the equivalent H.264 file, you need a memory card with a very high write speed. The 7D uses CF cards - a format with a long history that I first used on my Psion Series 5 back in 1997(!). The latest and most expensive CF cards can write at 100 Mbytes/second or more, which is what you need to record that RAW video. So I invested in a "1000x 64Gbyte KomputerBay" CF card, and it worked perfectly. But I was in for some shocks.
The first shock was of course the size of the files. Previously, I had a 32Gbyte card, which could record more than 80 minutes of H.264. My shiny new 64Gbyte card could manage only 15 minutes of RAW before getting full.
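The numbers check out if you do the arithmetic (plain Python; the RAW data rate is computed from the sensor figures above, and the H.264 rate is simply implied by the 80-minutes-per-32-Gbyte observation):

```python
# Recording time arithmetic for the two cards.

raw_rate = 1728 * 972 * 14 / 8 * 24     # bytes/s of RAW video, ~70 MB/s
card_64gb = 64e9

minutes_raw = card_64gb / raw_rate / 60
print(f"64 GB of RAW: {minutes_raw:.0f} minutes")

# What "80 minutes per 32 GB" implies about the H.264 data rate:
card_32gb = 32e9
h264_rate = card_32gb / (80 * 60)
print(f"implied H.264 rate: {h264_rate / 1e6:.1f} MB/s")
```

So the 64Gbyte card really does fill up in about 15 minutes, at a data rate more than ten times what H.264 was using.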
The second shock was to my PC hard disks, which started filling up at a rate I'd never seen before. The terabyte or so of spare space I had was soon gone, filled with RAW files plus thousands of intermediate image files.
Those intermediate files were the third shock - there was no player for RAW video files. First the frames had to be extracted into thousands of "DNG" files, then each one run through a RAW conversion tool with suitable settings, and finally recompressed into a high-quality "normal" video for playback. Getting good results typically took a few hours of processing time for a video just a few minutes long.
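The shape of that workflow looks roughly like this (a sketch, assuming Magic Lantern's raw2dng for extraction, dcraw for the RAW conversion and ffmpeg for the final encode; the file names, flags and settings are illustrative, not a tested recipe):

```shell
#!/bin/sh
# Sketch of a RAW-video workflow: extract DNG frames, develop each one,
# then encode the developed frames into a normal playable video.
set -e

# Skip gracefully when the tools aren't available.
if ! command -v raw2dng >/dev/null 2>&1; then
    echo "raw2dng not installed; nothing to do"
    exit 0
fi

# Magic Lantern .RAW -> frame_000000.dng, frame_000001.dng, ...
raw2dng M0000001.RAW frame_

# Develop each DNG: camera white balance, high-quality demosaic, TIFF out.
for dng in frame_*.dng; do
    dcraw -w -q 3 -T "$dng"
done

# Recompress the developed frames into a playable high-quality video.
ffmpeg -framerate 24 -i frame_%06d.tiff \
       -c:v libx264 -crf 18 -pix_fmt yuv420p out.mp4
```

With thousands of frames each needing their own RAW development pass, it's easy to see where those hours of processing time go.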
But the results were worth it, visibly better than anything I had seen the same camera produce in H.264.
There's just no going back to H.264 after sampling RAW video. All those 14 bits of light data from the sensor are still there, and for the first time (for me) I'm the one who decides how they should be displayed - not the camera. Every frame looks great, and having spent lots of time looking at the intermediate stills, I quickly gained a new appreciation for the way the illusion of smooth motion in film is created.
That was another shock, actually. Some of you might know me as the champion of 60FPS+ motion on smartphone screens. For motion in sharp synthetic scenes, I still believe continuous 60FPS is the minimum needed. But for natural scenes, it's clear how well the time-honoured Hollywood approach of exposing each frame for half the inter-frame time (the so-called 180 degree shutter) works: it creates just the right amount of blur on moving objects, so that just 24 frames per second is enough to sustain the illusion.
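The 180 degree shutter rule is simple arithmetic: of the 360 "degrees" of one frame period, the shutter is open for 180, i.e. half the frame interval. A quick sketch (plain Python):

```python
# 180-degree shutter: the shutter is open for half of each frame interval.

def shutter_time(fps, shutter_angle=180):
    """Exposure time in seconds for a given frame rate and shutter angle."""
    return (shutter_angle / 360) / fps

print(shutter_time(24))     # 1/48 s of motion blur per frame at 24 FPS
```

At 24 FPS that's a 1/48 s exposure, long enough for moving objects to smear just the right amount between frames.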
So, that's it. From now on I'm going to make the effort to film only in RAW. And I'm going to start treating hard disks the way I treated DV tape 10 years ago, i.e. buying lots of them.
But, the "workflow" to make best use of the RAW video files is in serious need of improvement. Sounds like just the job for some OpenGL shaders.