Is Computational Photography the Future?

In my recent video and article on declining camera sales, I received several comments claiming that the introduction and advancement of computational photography in smartphones is the main reason a dedicated camera is no longer needed, with the implication that cameras should adopt computational photography and "catch up" to smartphones. This is something that deeply troubles me – computational photography has been around since the beginning of digital photography, and the level of computational photography applied in dedicated cameras today is not lagging behind smartphones. I want to explore the computational photography that has already been integrally adopted in Olympus OM-D cameras.
The core of digital photography comprises three components – the lens, the image sensor and digital processing. The lens allows light to enter the camera and focuses it onto the image sensor to form an image. The image sensor translates this light from an analogue signal into a digital one. The digital processing then converts this into the final output: a JPEG file that can be viewed on digital devices and shared on social media platforms. Computational photography comes into play during the capture of the image as well as during the digital processing stage. Many camera operations rely on software algorithms, and the final optimization that produces that beautiful JPEG file relies heavily on computational photography.
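For the technically curious, here is a minimal Python sketch of what that last stage conceptually does – turning raw sensor values into a viewable JPEG. It is a toy illustration, not any camera's real pipeline, and it skips steps such as demosaicing and white balance:

```python
import numpy as np
from PIL import Image

def develop_raw(raw: np.ndarray) -> None:
    """Toy 'digital processing' stage: raw sensor values -> JPEG file.

    raw: 2D array of linear sensor readings (demosaicing skipped for brevity).
    """
    linear = raw.astype(np.float64) / raw.max()    # normalize to [0, 1]
    srgb = np.clip(linear, 0.0, 1.0) ** (1 / 2.2)  # simple gamma tone curve
    out = (srgb * 255).astype(np.uint8)
    Image.fromarray(out).save("output.jpg", quality=90)
```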
How is a smartphone's computational photography better than a camera's? I just do not see it. Many commenters pointed to the fake bokeh rendering, smart HDR processing and auto-compositing of images done by smartphones with ease and minimal effort, producing amazing results. I argue that these software advancements have been present in cameras for a while now, and there is nothing new in smartphones that comes close to what dedicated cameras can do. Except perhaps the artificial bokeh rendering – but why would a camera need fake bokeh when it can capture real, more natural-looking bokeh optically?
Now let's take a look at the processing chip found in Olympus's current OM-D line-up – the TruePic VIII. Olympus claims that a single TruePic VIII chip contains two quad-core processors, giving eight independent processing cores in any camera with a TruePic VIII. And the Olympus E-M1X has two TruePic VIII processors, meaning the E-M1X has a total of 16 cores! Each core is assigned a single, computationally heavy task – one core computes image stabilization in real time, one handles AF operations, one does smart image processing, one writes to and reads from the SD card, one drives the EVF/Live View display, and so on. Olympus also claims that the processors used in its cameras are more powerful than any mass-market consumer Intel processor (true as of January 2019, source here).
So why does Olympus need so much processing power (more than any smartphone) in their cameras? Is the answer not obvious enough? Of course – computational photography.
Here are some instances where computational photography plays a crucial role in modern digital cameras, especially Olympus OM-D cameras:
1) AF Operations
In single AF operation, at the half-press of the shutter button the camera rapidly captures frames at up to 240 frames per second; these images are not stored on the SD card, but in a temporary buffer. The camera's processor analyzes all these frames and, using smart contrast detection, the "computer" quickly acquires and locks focus.
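If you wonder what "contrast detection" means in practice, here is a minimal Python sketch of the textbook approach – hill-climbing the lens position toward maximum contrast. The capture_frame and move_lens functions are hypothetical placeholders, and this is a generic illustration, not Olympus's actual algorithm:

```python
import numpy as np
from scipy.ndimage import laplace

def contrast_score(frame: np.ndarray) -> float:
    """Variance of the Laplacian: higher means sharper edges in the AF region."""
    return float(laplace(frame.astype(np.float64)).var())

def contrast_detect_af(capture_frame, move_lens, max_steps=50):
    """Hill-climb the focus position toward maximum contrast.

    capture_frame() -> 2D grayscale array of the AF region  [hypothetical]
    move_lens(step) -> drives the focus motor by `step`     [hypothetical]
    A real AF system also handles starting off in the wrong direction.
    """
    best = -1.0
    for _ in range(max_steps):
        move_lens(+1)                     # drive the focus motor one step
        score = contrast_score(capture_frame())
        if score < best:                  # contrast fell: we passed the peak
            move_lens(-1)                 # step back to the sharpest position
            break
        best = score
    return best
```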
In continuous AF, or 3D tracking, computational photography plays an even more important role: the camera analyzes the pattern of subject movement and applies an adaptive algorithm to predict where the subject will move next, allowing the tracking to work efficiently. None of this is possible without raw computational power, and believe me when I tell you that the C-AF or 3D tracking in any top-level Canon, Nikon, Sony or Olympus camera is superior to that of the best smartphone camera you can find today.
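The predictive part can be illustrated with a classic alpha-beta filter – a simple stand-in for the adaptive algorithms real cameras use, shown here purely to make the idea concrete:

```python
import numpy as np

class AlphaBetaTracker:
    """Predict the subject's next position from noisy per-frame detections.

    A constant-velocity alpha-beta filter: a simplified stand-in for the
    adaptive prediction a real C-AF system would use.
    """
    def __init__(self, pos, alpha=0.85, beta=0.005, dt=1/60):
        self.pos = np.asarray(pos, dtype=float)  # subject (x, y) in the frame
        self.vel = np.zeros(2)                   # estimated velocity
        self.alpha, self.beta, self.dt = alpha, beta, dt

    def update(self, measured_pos):
        predicted = self.pos + self.vel * self.dt        # expected position
        residual = np.asarray(measured_pos, float) - predicted
        self.pos = predicted + self.alpha * residual     # correct position
        self.vel = self.vel + (self.beta / self.dt) * residual
        return self.pos + self.vel * self.dt             # prediction for next frame
```

Feed it the detected subject position every frame, and it returns where to place the AF point for the next one.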
2) Composite Modes
There are many composite modes in camera – the High Res 50MP shot, Live Composite, in-camera HDR, Focus Stacking and hand-held multi-shot noise reduction – all of which capture multiple images in quick succession and merge them with smart real-time analysis and effective processing to achieve the selected result. Each composite mode requires the camera to perform some computational photography magic, selectively taking parts of each image and merging them into a final composite. Most of these modes can be executed with a single click of the shutter button. Computational photography has been used by cameras to push and break boundaries – to acquire higher resolution images, to achieve better image quality (less noise at high ISO, wider dynamic range than a single shot), to capture more depth of field and to prevent overexposure in long exposures. If this is not pure computational photography, I don't know what is.
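To give a flavour of what one of these modes does under the hood, here is a simplified focus-stacking merge in Python: for every pixel, keep the value from whichever frame is locally sharpest. This is the textbook approach, assuming pre-aligned frames – the real OM-D implementation certainly does more, such as aligning frames and blending seams:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(frames):
    """Merge frames focused at different distances into one all-in-focus image.

    frames: list of 2D grayscale numpy arrays, already aligned.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    # Local sharpness: smoothed absolute Laplacian response for each frame.
    sharpness = np.stack([uniform_filter(np.abs(laplace(f)), size=9)
                          for f in stack])
    best = np.argmax(sharpness, axis=0)          # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```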
3) 5-Axis IS
How does image stabilization work? A gyroscope detects the movement of camera shake, and the camera uses its computational power to counter those movements, all happening so fast that the image or video is fully stabilized. We know how capable the 5-Axis IS in Olympus cameras is, and then there is 5-Axis Sync IS, where the in-body IS works in sync with the lens IS to further improve the stability of the shot.
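Conceptually, the control loop looks something like this toy Python sketch (pitch and yaw only – a real 5-Axis unit also corrects roll and X/Y shift, and both function names here are hypothetical):

```python
import numpy as np

def stabilization_loop(read_gyro, shift_sensor, dt=1/8000, steps=8000):
    """Toy sensor-shift stabilization loop for two axes.

    read_gyro()      -> angular rate (rad/s) on pitch/yaw  [hypothetical]
    shift_sensor(xy) -> commands the sensor-shift actuator [hypothetical]
    A real IS unit closes this loop thousands of times per second.
    """
    FOCAL_LENGTH_MM = 25.0           # assumed lens, for angle -> shift conversion
    angle = np.zeros(2)              # accumulated camera rotation (rad)
    for _ in range(steps):
        angle += read_gyro() * dt    # integrate angular rate into an angle
        # Small-angle approximation: image shift ~ focal length * angle.
        shift_sensor(-FOCAL_LENGTH_MM * angle)
```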
4) Smart JPEG Processing
Modern digital cameras have advanced JPEG processing that a lot of people take for granted. Images are not uniformly sharpened, and noise reduction is not applied at a global level. The camera analyzes each image separately and applies variable sharpening and noise reduction depending on the shooting parameters (the aperture used, the ISO value and the lens attached). If the lens is sharp and the shot was taken at an optimum aperture and low ISO, in-camera sharpening is reduced and less noise reduction is applied, for a more natural look. The camera also studies different areas of the image and applies noise reduction and sharpening selectively. There is a lot of processing happening to optimize a JPEG file in camera – barrel distortion correction, vignetting compensation and chromatic aberration suppression, to name a few. Since the TruePic VI, Olympus has used no low-pass filter on its image sensors, and has relied on a smart moiré-correction algorithm in its processing engine ever since. Furthermore, there is compensation for diffraction when a narrow aperture is used. All this happens at the click of the shutter button, almost instantaneously, with virtually zero shutter lag.
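To make the adaptive idea concrete, here is a toy sketch where the shot parameters drive how much denoising and sharpening gets applied. Every threshold and curve here is invented for illustration and has nothing to do with Olympus's actual tuning:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def develop(image, iso, aperture, optimum_aperture=4.0):
    """Toy adaptive develop step: vary denoise/sharpen by shot parameters.

    image: 2D float array in [0, 1]. All constants are invented.
    """
    # More noise reduction at high ISO (log scale), none at base ISO.
    nr_sigma = 0.5 * max(0.0, np.log2(iso / 200.0))
    denoised = gaussian_filter(image, nr_sigma) if nr_sigma > 0 else image

    # Sharpen less when shot near the lens's optimum aperture (already sharp).
    stops_off = abs(np.log2((aperture / optimum_aperture) ** 2))
    amount = 0.3 + 0.2 * stops_off
    blurred = gaussian_filter(denoised, 1.0)
    # Unsharp mask: boost the detail the blur removed.
    return np.clip(denoised + amount * (denoised - blurred), 0.0, 1.0)
```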
Do not get me wrong, I am not bashing smartphone photography – far from it. I am a firm believer that smartphone photography is the future. However, the claim that computational photography in cameras is falling behind and that camera manufacturers should play catch-up – that is simply false.
The problem with smartphone cameras is not the software. I admit the software is improving, there will be progress, and we will see more exciting things happening in computational photography soon. The real limitation on the progress of the camera in a smartphone is the actual lens and image sensor used. The multiple-camera setup is a good idea, but it is not the ultimate solution for smartphone photography. I would be terrified to think of an iPhone 15 with 15 camera modules on the back of the phone. There is no point having so many cameras, all with similar physical limitations.
How do we improve smartphone cameras? Use a larger sensor – maybe a 1-inch image sensor (like what Panasonic did once) – and larger, higher quality optics to complement the more capable sensor. Combine that with truly powerful software, and then we can talk. At this moment, no matter how advanced the computational photography in an iPhone or Samsung camera is, those tiny image sensors and crappy lenses still render sub-par images. I tested the Samsung Note 10+ recently and trust me, I am NOT impressed. For a smartphone camera, yes, it is possibly the best on the market now, but compare it with even an entry-level mirrorless camera, say an Olympus PEN E-PL9, and there is still a serious gap.
Let me know if you still hold firm to the belief that, today, smartphones have more computational photography power than cameras. Share your thoughts!

Please follow me on my social media: Facebook Page, Instagram and YouTube

Please support me and keep this site alive by purchasing from my affiliate link at B&H. 