IQ4-150: The BEST digital back EVER for Technical Camera use
Bold blog title, I know, but let me break this down, because it is that exciting.
The IQ4-150's model number alone alludes to a new pinnacle of image-gathering potential, with its 151-megapixel sensor gathering 16 bits of color across 14.5 stops of dynamic range, but there is far more to the story of this industry-leading medium format digital back when it comes to technical camera use.
NEW FEATURE: Prerecorded Black Reference Frame
Long the bane of shooters who find themselves out imaging from twilight into the night: to keep images free of color noise, every Phase One digital back prior to the IQ4-150 required a Black Frame Reference (BFR), an exposure of equal length to your practical exposure, acquired automatically after your shot.
The Black Frame Reference (a.k.a. Dark Frame, or Black Frame Subtraction) is captured after the shutter has closed in order to record a noise print of the digital back as it finished the exposure. Waiting for the BFR can be excruciatingly long (seeming at least 5x longer mentally), because the camera is unable to shoot a fresh frame while the reference is gathered.
Typically, at shutter speeds faster than 1/2 second, the intrusion was small: the same BFR could be reused for subsequent images at the same shutter speed, and small adjustments to the exposure via shutter speed could be made without re-acquisition. As shutter speeds stretched beyond one second, it was a different story:
- Change the shutter speed, new BFR.
- Heat the digital back up from multiple exposures, new BFR.
- Immediately after capture, have an amazing streak of last light appear across the trees in the late-fall sunset scene you were shooting… new BFR. Please wait patiently for 30 seconds for your next exposure… oh, there goes the light!
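The correction behind all this waiting is conceptually simple frame arithmetic. Here is a toy sketch of classic dark-frame subtraction in numpy; this is the general technique, not Phase One's actual raw pipeline:

```python
import numpy as np

def dark_frame_subtract(exposure, dark_frame):
    """Subtract a dark (black reference) frame from an exposure.
    Both frames share shutter speed, ISO, and sensor temperature,
    so the dark frame carries the same noise print."""
    corrected = exposure.astype(np.int32) - dark_frame.astype(np.int32)
    return np.clip(corrected, 0, None).astype(exposure.dtype)

# Toy example: a 4x4 "sensor" whose pixel (1, 2) runs hot
rng = np.random.default_rng(0)
scene = rng.integers(100, 200, size=(4, 4), dtype=np.uint16)
dark = np.zeros((4, 4), dtype=np.uint16)
dark[1, 2] = 500                 # fixed-pattern hot pixel
exposure = scene + dark          # the hot pixel pollutes the capture
clean = dark_frame_subtract(exposure, dark)
assert (clean == scene).all()    # noise print removed
```

Because the fixed-pattern noise depends on exposure length and sensor temperature, the dark frame has to match the practical exposure, which is exactly why the camera was tied up for the same duration again.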
As described in ‘Dark Frame Be Gone!’, there were cheats and workarounds that limited the intrusion of this process on tech camera, on 645DF/DF+ bodies and even the XF body, but there never was a way to eliminate it entirely.
**With the IQ4-150, gathering a Black Frame Reference at the time of capture IS ENTIRELY A USER PREFERENCE!**
Back in the lab in Denmark, hundreds of captures across all of the ISOs and shutter speeds have already been shot, analyzed, quantified and entered into a complex mathematical matrix that allows the IQ4 to shoot exposures without having to acquire an accompanying Black Frame Reference at the time of capture, providing:
- an immediate readying of the system for the next capture,
- the file integrity in the IIQ format that we expect from Phase One, and
- on the XF camera, an instantly consistent frame rate approaching two frames per second, even when shooting Aperture Priority in varying light.
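Conceptually, this replaces the per-shot dark frame with a lookup into factory-measured calibration data. A hypothetical sketch of the idea follows; the table keys, names, and scalar values here are my own stand-ins, not Phase One's implementation:

```python
import numpy as np

# Stand-in for factory calibration: dark-current levels keyed by
# (ISO, shutter speed in seconds). Real profiles would be per-pixel
# maps distilled from hundreds of lab captures.
CALIBRATION = {
    (50, 0.5):  0.4,
    (50, 1.0):  0.9,
    (50, 30.0): 8.5,
    (200, 1.0): 2.7,
}

def synthetic_dark(iso, shutter_s, shape):
    """Build a dark frame from prerecorded data instead of capturing
    one after the exposure, so the camera is ready immediately."""
    level = CALIBRATION[(iso, shutter_s)]
    return np.full(shape, level)

dark = synthetic_dark(50, 30.0, (4, 4))
```

The trade-off the next paragraph describes falls out of this design: a prerecorded profile can model the sensor's average behavior at a given ISO and shutter speed, but not a defect that appears only in the moment of your particular exposure.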
There are, of course, caveats to using the canned references, as they cannot identify and remove single-pixel noise or any aberration unique to that moment in time within the digital back. The user manual recommends against using the canned profiles at shutter speeds longer than 1/10th of a second, but I've pushed way past that into multiple seconds and even up to 7 minutes (so far), with fully usable and beautiful images resulting, showing no discernible difference from spending the extra 7 minutes to create a unique BFR.
When pushing into this new territory, hot pixels start popping into the scene, but those artifacts are easily removed in software using the ‘Single Pixel’ option in the Noise Reduction dialog in Capture One.
100% crop at ISO 50, 8-second exposure, pushed +4, no BFR
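Single-pixel noise reduction of this kind generally works by comparing each pixel against its immediate neighborhood and replacing outliers. A minimal sketch of the general technique, not Capture One's actual algorithm:

```python
import numpy as np

def remove_hot_pixels(img, threshold=100):
    """Replace any pixel that differs from the median of its 3x3
    neighborhood by more than `threshold` with that median.
    A simplified stand-in for single-pixel noise reduction."""
    out = img.copy()
    padded = np.pad(img.astype(np.int32), 1, mode="edge")
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 3, x:x + 3]
            med = np.median(window)
            if abs(int(img[y, x]) - med) > threshold:
                out[y, x] = med      # outlier: swap in the median
    return out

flat = np.full((5, 5), 120, dtype=np.uint16)
flat[2, 2] = 4000                    # lone hot pixel
fixed = remove_hot_pixels(flat)
assert fixed[2, 2] == 120            # replaced with neighborhood median
```

Because a hot pixel is a single-site outlier, the median of its neighbors is a robust estimate of what the pixel should have recorded, which is why this class of fix is so effective on the artifacts described above.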
The takeaway here is that this is huge for anyone shooting twilight, midnight, or mid-day with 15 stops of neutral density: you can now shoot consecutive frames in what are sometimes fleeting moments of light and situation. (Hey, neutral density shooters, KEEP READING; Phase has something new for you.)
Phase One engineers are continuing to work on improving the noise response of the IQ4-150. The units we've been working with so far are still technically prototypes, and the team is moving at a pace of four firmware revisions a day to refine the unit until it's ready to ship.
NEW FEATURE: Less Need for LCCs
When we talk about LCCs and technical camera use, we're talking about a process of characterizing the inherent physicality of the lenses attached to technical cameras: the color casts and vignettes they produce. These casts are a byproduct of 'large format' lenses that were designed neither for use with a specific digital back nor for a strict 1:1 relationship with the orientation of that digital back when attached to a fluid technical or view camera.
Technical cameras have the ability to move the lens by up to 45mm away from the center of the digital back, which causes light to flow into the pixel wells at varying angles and penetrate those wells to varying depths due to the physical design of traditional frontside-illuminated sensors. The use of a small square of milky plexiglass we call the LCC card is an absolute necessity to remove these image-affecting aberrations on frontside-illuminated digital sensors.
A deeper investigation into this process demonstrates the problem with legacy medium format sensors to date and the reasons why non-perpendicular light striking the micro-lenses that sit above each pixel on the sensor surface cause such problems.
The wiring mesh necessary for the pixels to communicate information out of the sensor was inserted BETWEEN the micro-lens and the photo receptor (seemingly counterintuitively), which led not only to additional depth in the pixel wells to contain this structure, but also to partial blocking of light by the mesh itself.
The micro-lenses on top of each of those pixels attempted to focus the light cast upon the sensor so as to adequately penetrate the pixel well and gather the largest amount of photonic energy possible. Additionally, the micro-lenses were aimed differently depending on where they sat on the sensor: those in the center faced perfectly straight, while toward the edge of the sensor the lenses turned slightly inward, focused for a fixed Exit Pupil Distance most appropriate for a 645-format camera body (DF/DF+/XF).
Regardless of the quality of the lens on a technical camera, any lens wider than 'normal' was going to produce a cast the moment it was moved even slightly off the center axis, with the cast shifting for every shift movement made.
32HR shot ±10X with Center Shot (partially masked) floating on top
Legacy lens designs that imaged film and early digital had to be revised as pixel densities increased, because the rear optics sat too close to the sensor and sprayed light into the image circle at angles too far from perpendicular to reach all of the pixel wells. Light skittered across the tops of the micro-lenses, missing its intended target, and even entered adjacent pixels in the Bayer filter, polluting the color.
Even lenses that were updated, or whose inherent designs worked well with digital, still create seemingly uncorrectable lens casts (longer focal lengths traditionally fare better than wide-angles due to their naturally straight light projection). The Rodenstock 32HR shown above is generally regarded as best-in-class optics, and its casts would be handily removed by the Lens Cast Correction in Capture One.
There is, however, a negative aspect to using Lens Cast Corrections, as these are effectively masks in which the inverse color/tone is applied in order to achieve a neutral image corner to corner. The application of these masks can add noise to the corrected portions of the scene, because they essentially raise the exposure and tweak the color away from what the sensor natively captured: automated local curves, in a sense. If a deep shadow in the scene overlapped with the purple region of the lens cast in the vignette, the combined effects could contribute to noise once fully corrected, post-LCC.
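Mechanically, applying an LCC resembles a flat-field correction: divide the capture by the normalized LCC reference, which necessarily amplifies values (and noise) wherever the reference was dark. A rough sketch of the idea; Capture One's actual processing is more sophisticated:

```python
import numpy as np

def apply_lcc(capture, lcc_shot):
    """Flat-field-style correction: scale the capture by the inverse
    of the LCC reference, normalized so overall brightness is kept.
    The gain exceeds 1 exactly where the LCC was dark, i.e. in the
    vignetted/cast corners, which is where noise gets amplified."""
    gain = np.mean(lcc_shot) / lcc_shot
    return capture * gain

# Toy 1-D "sensor row": uniform scene, but the lens vignettes the edges
lcc = np.array([0.5, 0.9, 1.0, 0.9, 0.5])   # reference shot of even light
scene = np.full(5, 100.0)
capture = scene * lcc                        # vignetted capture
corrected = apply_lcc(capture, lcc)
assert np.allclose(corrected, 100 * np.mean(lcc))  # corners equalized
```

The edge pixels here get a 1.52x gain to match the center, which is precisely the exposure lift that pushes shadow noise upward in the corrected corners.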
Just the idea of having to create an LCC, whether shot at the time of capture (for each and every capture at a new camera movement) or banked into the software as a library, painstakingly named, organized as presets, and then applied to captures in Capture One as a post process, steered many would-be technical camera shooters away from this powerful image-making format.
Shooting, Organizing, Renaming and creating LCC Presets
Ok, so now I’ll go ahead and say it…
Lens Cast Correction BE GONE!
As shown below, even when shooting wide lenses like the Rodenstock 32HR (a 21mm equivalent in 35mm-format terms), there seems to be little reason to shoot LCCs at the time of capture, even if you don't already have the movements banked into presets in Capture One. (A center filter is still recommended on the 32HR and 23HR Rodenstock optics.)
Lens Cast of 32HR on IQ3-100 vs IQ4-150 (no Center Filter)
(Extreme movements for illustrative purposes only; 12mm would be the typical maximum movement of this lens and would not produce much of the color artifacting. Shooting the 32HR with a Center Filter [a negative vignette] would further reduce the need for LCCs.)
New manufacturing processes with extremely small tolerances made Backside Illuminated sensors possible: the chip is initially built as a frontside sensor, then flipped over so that what was normally the bottom of the chip becomes the top, and precision engineering grinds down the surface, creating relatively short pixel wells with receptors in clear view of the micro-lenses. Big wow.
Not shy of going to the extreme, I shot a 9-image stitch on an Alpa XY with a Rodenstock 90HR, needing no LCCs to assemble the final image (slight vignette in the upper right corner).
Final stitched image amounted to a 718 megapixel image
NEW FEATURE: Frame Averaging
For about the last year, I’ve known (at the rumor/speculative/informed-speculative/NDA’d level) that there would be positive changes to both Black Frame Reference and LCC workflow, but this one took me by delightful surprise. In-unit Frame Averaging on the IQ4-150 is going to fundamentally change the capabilities of any field shooter.
Frame Averaging, simply put, is the ability of the sensor to shoot multiple exposures, but rather than each new exposure adding to the final exposure, the exposures are instead averaged together.
There are several ways to look at this feature and several uses that we know will add value:
- The initial benefit is cleaner images with a better signal-to-noise ratio: random noise at any given pixel site varies from frame to frame, so across successive exposures it averages out rather than accumulating.
- Due to the repetitive captures and their inherent noise mitigation, a Black Frame Reference will likely not be necessary, even on multi-second exposures.
- The averaging produces the look of a long exposure even though each individual exposure is short, so a 10-second mid-day exposure with no neutral density would be possible from averaging roughly 1250 exposures at 1/125 second.
- Frame averaging in post-process has been a thing for a while, but this process will not generate 1,000 151-megapixel images to export and wrangle in software. It all happens in camera, with only one final file written.
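The statistics behind the list above can be sketched in a few lines: averaging N frames leaves the signal unchanged while shrinking random noise by roughly the square root of N, and ~1250 frames at 1/125s accumulate about 10 seconds of exposure time. This is just the math, not the IQ4's in-back implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
signal = 100.0        # true scene brightness at one pixel site
noise_sigma = 10.0    # per-frame random noise
n_frames = 1250       # 1250 frames x 1/125s = 10s of accumulated exposure

# Simulate many noisy captures of the same pixel, then average them
frames = signal + rng.normal(0.0, noise_sigma, size=n_frames)
averaged = frames.mean()

# Residual noise after averaging should be near sigma / sqrt(N) ~ 0.28,
# versus sigma = 10 for any single frame
averaged_error = abs(averaged - signal)
assert averaged_error < noise_sigma   # far cleaner than one frame
```

That square-root relationship is also why the BFR becomes largely unnecessary here: the random component of dark noise is suppressed by the averaging itself.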
This feature currently lives in the 'Lab Features' portion of the digital back, which will house beta projects for users to play with and contribute to the refinement of new tools and features in the IQ4 digital backs. As we have more time to play with this feature, we'll be able to recommend workflows and identify the circumstances in which it can replace neutral density use.
NEW FEATURE: File Storage/ Data Redundancy Options
Shooting in the field has always posed a challenge for adequate data redundancy. It was easy enough to shoot your card full of images, and if you had a laptop handy, you could eventually download the card to it, treating the files on the computer as your 'originals' and the files still on the CF card as your backup. Only once you had replicated the data a third time, from the computer to a separate hard drive, could you feel reasonably safe reusing that CF card for new images.
The IQ4 provides myriad backup solutions at the time of shooting. The XQD card is the primary non-tethered file destination, and at 400MB/s it is fast enough to handle the fastest shooting the IQ4 can offer with no data buffering. A second slot, for SD cards, is available for in-camera redundant backup or JPEG output. (The XQD slot will eventually serve the upcoming CFexpress cards, which will write at closer to 1GB/s, leaving the current best CF cards at 160MB/s in the dust.)
Beyond the card options, a USB hard drive can be plugged directly into the digital back to serve as primary or secondary backup storage, and the IQ4 will also tether wirelessly to a computer over the ad hoc network it creates. Current transfer speeds are nothing to write home about, but it works flawlessly, and in 7 seconds or so your IIQ-S format file will transfer to the laptop. The buffer in the digital back is so large that you can actually shoot dozens of images at full speed with no XQD or SD card in the digital back at all! (Not recommended.) What this means practically is that your laptop could be open and drawing power inside your vehicle, quietly and efficiently backing up files shot to your XQD card while you're outside in the cold shooting your starry night.
Technical camera shooters have reason to rejoice in the technologies offered in the Phase One IQ4-150, allowing them workflow options, speed and efficiencies never available before. I can’t wait to spend more time with these units and continue to find out how they’ll make my photographs better. Many more features to talk about on the IQ4 platform coming up! -BK