The proliferation of single-photon image sensors has opened the door to a plethora of
high-speed and low-light imaging applications. However, data collected by these sensors
are often 1-bit or few-bit, and corrupted by noise and strong motion. Conventional video
restoration methods are not designed to handle this situation, while specialized quanta burst
algorithms have limited performance when the number of input frames is low. In this paper,
we introduce Quanta Video Restoration (QUIVER), an end-to-end trainable network built on the
core ideas of classical quanta restoration methods, i.e., pre-filtering, flow estimation, fusion,
and refinement. We also collect and publish I2-2000FPS, a high-speed video dataset with the highest
temporal resolution of 2,000 frames per second, for training and testing. On simulated and real data,
QUIVER outperforms existing quanta restoration methods by a significant margin.
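As a rough illustration of how the four stages named above can fit together, below is a minimal PyTorch sketch of a pre-filter, flow estimation, fusion, and refinement pipeline. All module names, layer choices, and shapes are illustrative placeholders and do not reflect the actual QUIVER architecture.

```python
# Minimal sketch of a four-stage quanta restoration pipeline.
# NOTE: every layer here is a placeholder, not the QUIVER design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QuantaRestorationSketch(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Pre-filtering: lightly denoise each few-bit quanta frame on its own.
        self.prefilter = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Flow estimation: predict a flow field from each frame to the reference.
        self.flow = nn.Conv2d(2 * channels, 2, 3, padding=1)
        # Fusion: merge the motion-aligned frame features.
        self.fusion = nn.Conv2d(channels, channels, 3, padding=1)
        # Refinement: decode the fused features into the restored frame.
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    @staticmethod
    def warp(feat, flow):
        # Backward-warp features with the predicted flow via grid_sample.
        b, _, h, w = feat.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, device=feat.device),
            torch.arange(w, device=feat.device),
            indexing="ij",
        )
        base = torch.stack((xs, ys), dim=-1).float()      # (H, W, 2) pixel grid
        grid = base + flow.permute(0, 2, 3, 1)            # displaced coordinates
        gx = 2.0 * grid[..., 0] / (w - 1) - 1.0           # normalize to [-1, 1]
        gy = 2.0 * grid[..., 1] / (h - 1) - 1.0
        return F.grid_sample(feat, torch.stack((gx, gy), dim=-1), align_corners=True)

    def forward(self, frames):                            # frames: (B, T, 1, H, W)
        b, t, _, h, w = frames.shape
        ref = t // 2                                      # restore the middle frame
        feats = [self.prefilter(frames[:, i]) for i in range(t)]
        fused = torch.zeros_like(feats[ref])
        for i in range(t):
            flow = self.flow(torch.cat((feats[i], feats[ref]), dim=1))
            fused = fused + self.warp(feats[i], flow)
        return self.refine(self.fusion(fused / t))


# Example: restore one frame from a burst of 11 noisy 3-bit frames (normalized).
burst = torch.randint(0, 8, (1, 11, 1, 64, 64)).float() / 7.0
restored = QuantaRestorationSketch()(burst)               # (1, 1, 64, 64)
```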
If the videos do not play correctly, please consider using Chrome or downloading them.
Synthetic Data Results
Visual comparisons of the reconstructed results on test videos from the proposed
I2-2000FPS dataset. For a fair comparison, all methods use eleven 3-bit quanta frames
simulated at 3.25 photons per pixel (PPP) per frame (approx. 1 lux) to produce a single restored frame.
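For reference, the sketch below shows one plausible way to simulate few-bit quanta frames at a target PPP level: a Poisson photon model followed by clipping to the 3-bit range. The intensity scaling and the omission of sensor non-idealities (quantum efficiency, dark counts) are simplifying assumptions and may differ from the simulation pipeline used in the paper.

```python
# Hedged sketch: simulate a 3-bit quanta frame from a clean frame at a given PPP.
import numpy as np


def simulate_quanta_frame(clean, ppp=3.25, bits=3, rng=None):
    """clean: float array in [0, 1]; returns an integer frame in [0, 2**bits - 1]."""
    rng = np.random.default_rng() if rng is None else rng
    # Scale the normalized intensity so the mean photon count per pixel equals `ppp`
    # (assumed convention; the paper's exact scaling may differ).
    flux = clean / max(clean.mean(), 1e-8) * ppp
    # Photon arrivals follow a Poisson distribution per pixel per frame.
    counts = rng.poisson(flux)
    # Clip to the few-bit readout range (3-bit -> values 0..7).
    return np.clip(counts, 0, 2 ** bits - 1).astype(np.uint8)


# Example: build an 11-frame burst at 3.25 PPP per frame from stand-in clean frames.
clean_burst = np.random.rand(11, 256, 256)
quanta_burst = np.stack([simulate_quanta_frame(f) for f in clean_burst])
```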
Real Data Results
We capture real 1-bit quanta data
using a SPAD and generate 3-bit frames through temporal averaging. All deep-learning-based
models are trained at a photon level of 4.9 PPP per frame. Best viewed zoomed in.
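The sketch below illustrates the temporal-averaging step, assuming that groups of 7 consecutive 1-bit frames are summed so the result fits in 3 bits (values 0..7). The actual group size and any additional normalization used for the real captures are assumptions here, not taken from the paper.

```python
# Hedged sketch: aggregate a 1-bit SPAD stream into 3-bit frames by temporal averaging.
import numpy as np


def binary_to_3bit(binary_frames, group=7):
    """binary_frames: (T, H, W) array of 0/1 photon detections."""
    t = (binary_frames.shape[0] // group) * group        # drop the ragged tail
    grouped = binary_frames[:t].reshape(-1, group, *binary_frames.shape[1:])
    # Summing 7 binary frames gives counts in [0, 7], which fit in 3 bits.
    return grouped.sum(axis=1).astype(np.uint8)


# Example: 77 binary frames -> 11 aggregated 3-bit frames.
spad_stream = (np.random.rand(77, 256, 256) < 0.3).astype(np.uint8)
frames_3bit = binary_to_3bit(spad_stream)
```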