One of several themes at this year’s Mobile World Congress in Barcelona has been cellphone cameras (the others were waterproof phones, crappy smartwatches, and NFC). Samsung’s new flagship Galaxy S5 ups the pixel count from 13MP to 16MP and adds 4K video capture. Nokia’s handsets can now save RAW pictures (or rather, record them, since all photos start out as RAW data), and Sony was showing off new camera modules (the iPhone uses a Sony camera).
As I was walking around the show and shooting everything with my iPhone 5, I started to wonder: who cares?
First, megapixel madness. This happened once before, with regular cameras, until folks realized that what we really wanted was physically bigger sensors with better light-gathering ability, not just more dots. The megapixel race seems to have infected the smartphone industry now, probably because getting a bigger sensor into a slim phone is a challenge (as a general rule, the bigger the sensor, the farther the lens needs to be from it).
But there are other differences between camera phones and compact cameras. First is storage. Cameras come with removable storage designed to be used as a temporary buffer before you upload to a computer with a huge hard drive. Android phones can use 128GB microSD cards, but I’d guess that the average iPhone has 32GB or less, and Android handsets likely have even less built in, thanks to their reliance on SD cards, which nobody really wants to buy. And your phone will fill up a lot faster with a 16MP camera shooting RAW than an 8MP camera shooting JPGs.
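To put a rough number on that, here’s a back-of-envelope sketch. The per-file sizes are assumptions for illustration (roughly 2 bytes per pixel for uncompressed RAW, a few megabytes for a typical 8MP JPG), not measurements from any particular phone:

```python
# Back-of-envelope: how fast does a phone fill up?
# All file sizes below are rough assumptions, not measured values.

def photos_that_fit(storage_gb, file_mb):
    """How many photos of a given size fit in the given storage."""
    return int(storage_gb * 1024 / file_mb)

RAW_16MP_MB = 16 * 2   # ~2 bytes/pixel uncompressed -> ~32 MB per 16MP shot
JPG_8MP_MB = 3         # a typical compressed 8MP JPG, assumed

print(photos_that_fit(32, RAW_16MP_MB))  # ~1,000 RAW shots on a 32GB phone
print(photos_that_fit(32, JPG_8MP_MB))   # ~10,000 JPGs in the same space
```

Even with generous rounding, shooting big RAW files eats a phone’s fixed storage an order of magnitude faster than modest JPGs.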
Phones are not just the camera but also the computer we use to process and store our photos.
Next up is processing, which relates to RAW capture. If you could get hold of a RAW file from the iPhone, it would probably look ugly as hell when you converted it to a JPG. And remember, even RAW images from Nokias have to be processed before you can look at them.
The iPhone 5S puts the power of the A7 chip to work on this RAW data, and the conversions are tuned and tailored to the chip, the lens, and the tricks that Apple wants to add (the burst mode for example). That is to say, the reason that iPhone pictures look so good is that the RAW has been optimally processed automatically. If you ever switch between RAW and JPG mode on a “real” camera you’ll know how flat the plain converted RAWs look in Lightroom or Aperture before you get to work on them.
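To make “RAW has to be processed” concrete, here’s a toy sketch of the bare minimum that happens between raw sensor values and something viewable: white balance and a gamma curve. This is my own illustration with made-up gain values, not Apple’s or Nokia’s pipeline; a real converter also demosaics the sensor’s color pattern, denoises, sharpens, and tone-maps.

```python
# Toy RAW "development" for one pixel: illustrative only.
# Gains and gamma are invented numbers, not any vendor's tuning.

def develop_pixel(r, g, b, gains=(2.0, 1.0, 1.5), gamma=2.2):
    """Turn linear sensor values in [0, 1] into display-ready 8-bit RGB."""
    out = []
    for value, gain in zip((r, g, b), gains):
        value = min(value * gain, 1.0)       # white balance, clip highlights
        value = value ** (1.0 / gamma)       # gamma: linear -> display space
        out.append(int(value * 255))         # quantize to 8-bit, like a JPG
    return tuple(out)

# A flat grey patch of linear sensor values comes out shifted and brightened:
print(develop_pixel(0.25, 0.25, 0.25))
```

Skip any of these steps and the image looks wrong: no white balance and everything has a color cast, no gamma and the shadows crush to black. That, in miniature, is the work a camera app does for you on every shot.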
Would it be cool to be able to tweak a little more detail out of the shadows? Sure. Or to fix the white balance after the fact? Hell yes. But this ignores something else – the JPGs that come out of the iPhone are surprisingly malleable. I have processed fairly poor pictures (shot in low, bad light) through the Photos app and Snapseed, then printed 6×4s, and they’re still better than the prints I used to get from my drugstore-processed 35mm film.
To sum up, a camera phone doesn’t follow the same rules as a regular camera. It’s made to capture, process, and share pictures. A camera is made to capture them, and no more. By adding RAW into the mix, you make the second two steps way harder: you have to process the images yourself, all while you’re still dealing with the physical limitations of a camera phone – no optical zoom, tiny sensor, no viewfinder.
Which is exactly why I stuck an Eye-Fi card in my Fujifilm X100S and used it to snap pictures at the Mobile World Congress that I couldn’t take with my iPhone. And ironically, I set it to shoot JPGs, because the JPGs out of the X100S are better than the results I could manage even after spending an hour working on the picture in Lightroom (I’ve tried).