
A Spectre Retrospective: Part 3 — Engineering
This is the final part of our Spectre retrospective. In this post, I’ll talk about the technology behind a computational long exposure, a few UI innovations, and our plans to share code between apps.

The Computational Long Exposure

We started building Spectre, among other things, to get around hardware limitations. Most iPhones can expose an image for a maximum of 1/3 of a second, and only the latest iPhones (the XS and XR) support a maximum exposure time of one second. That’s a far cry from the several seconds you need for an interesting long exposure.

Why does Apple cap the hardware at one second? While Apple has never publicly commented on this, the longer a sensor is active, the more heat it generates. A large DSLR has plenty of space to dissipate heat; the compact, airtight body of an iPhone is another story. Heat increases sensor noise, and more importantly, it can damage components in your iPhone.

It’s true that other smartphones support several-second exposures. They use a different set of components, in a different configuration, that dissipates heat differently. (It’s also possible those phone makers are A-OK with the stress on the components.) Every design is about tradeoffs...
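For context, here is roughly where that cap surfaces in code. This isn’t Spectre’s source, just the standard AVFoundation query any camera app can make to learn the longest exposure the current capture format allows:

```swift
import AVFoundation

// Ask the back wide-angle camera how long a single exposure may run.
// Per the discussion above, most iPhones report roughly 1/3 of a
// second here, and the XS/XR generation reports one second.
if let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                        for: .video,
                                        position: .back) {
    let maxDuration = device.activeFormat.maxExposureDuration
    print("Longest single exposure: \(CMTimeGetSeconds(maxDuration))s")
}
```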
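The workaround, at least in spirit, is to stack many short exposures rather than take one long one. Here’s a deliberately simplified sketch of that idea; Frame and its flat [Float] pixel layout are stand-ins rather than Spectre’s actual pipeline, and real frames would need to be aligned before merging:

```swift
import Foundation

// A frame is just a buffer of linear intensity values, one per pixel.
// (A stand-in type for illustration, not Spectre's real representation.)
struct Frame {
    var pixels: [Float]
}

// Averaging N short exposures approximates one exposure N times as long,
// without keeping the sensor active (and hot) the whole time.
func simulateLongExposure(from frames: [Frame]) -> Frame? {
    guard let first = frames.first else { return nil }
    var accumulator = [Float](repeating: 0, count: first.pixels.count)

    // Sum every frame's intensity, pixel by pixel.
    for frame in frames {
        for (i, value) in frame.pixels.enumerated() {
            accumulator[i] += value
        }
    }

    // Dividing by the frame count keeps brightness in range and averages
    // out sensor noise, while motion blur accumulates across the burst.
    let count = Float(frames.count)
    return Frame(pixels: accumulator.map { $0 / count })
}
```

Because each individual frame is short, the sensor never stays active long enough to build up the heat the one-second cap exists to prevent.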