PG 2022

Real-Time Video Deblurring via
Lightweight Motion Compensation

POSTECH
(*equal contribution)
(Figure: overall framework)

We propose a lightweight Multi-Task Unit (MTU) that enables cost-effective, real-time video deblurring with motion compensation.

Abstract

While motion compensation greatly improves video deblurring quality, separately performing motion compensation and video deblurring demands huge computational overhead. This paper proposes a real-time video deblurring framework consisting of a lightweight multi-task unit that supports both video deblurring and motion compensation in an efficient way. The multi-task unit is specifically designed to handle large portions of the two tasks using a single shared network, and consists of a multi-task detail network and simple networks for deblurring and motion compensation. The multi-task unit minimizes the cost of incorporating motion compensation into video deblurring and enables real-time deblurring. Moreover, by stacking multiple multi-task units, our framework provides flexible control between the cost and deblurring quality. We experimentally validate the state-of-the-art deblurring quality of our approach, which runs much faster than previous methods, and show practical real-time performance (30.99dB@30fps measured on the DVD dataset).

Multi-Task Learning of Video Deblurring and Motion Compensation

(Figure: MTU)

The key component of the proposed framework is a lightweight multi-task unit that simultaneously handles large portions of the deblurring and motion compensation tasks using a single shared network. To maximize the efficiency of our framework, both deblurring and motion compensation tasks need to take full advantage of the shared network in a harmonious way.
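The shared-network idea can be sketched as follows. This is a minimal stand-in, not the paper's architecture: the single 3x3 convolutions, single-channel maps, and random weights are our own illustrative assumptions; the point is only the topology of one shared detail network feeding two lightweight task heads.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3x3(x, w):
    """Naive 'same'-padded 3x3 convolution over a single-channel map."""
    h, wd = x.shape
    p = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * w)
    return out

class MultiTaskUnit:
    """One shared detail network feeding two lightweight task heads."""
    def __init__(self):
        self.w_shared = rng.normal(0, 0.1, (3, 3))  # shared multi-task detail network
        self.w_deblur = rng.normal(0, 0.1, (3, 3))  # simple deblurring head
        self.w_motion = rng.normal(0, 0.1, (3, 3))  # simple motion-compensation head

    def forward(self, frame):
        detail = conv3x3(frame, self.w_shared)              # shared detail features
        deblurred = frame + conv3x3(detail, self.w_deblur)  # residual deblurring
        motion_feat = conv3x3(detail, self.w_motion)        # features for motion comp.
        return deblurred, motion_feat

unit = MultiTaskUnit()
frame = rng.normal(size=(8, 8))
deblurred, motion_feat = unit.forward(frame)
print(deblurred.shape, motion_feat.shape)  # (8, 8) (8, 8)
```

Because the heavy computation lives in the shared branch, each extra task only pays for its small head.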

Feature Incompatibility between Deblurring and Motion Compensation

It is well known that residual detail features are effective for achieving high performance in restoration tasks including deblurring, for which residual detail learning has been widely adopted. However, for the motion compensation task, we observed that structural information of the degraded input is required in addition to the detail information.
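The distinction between detail and structure can be illustrated with a simple frequency split; here a box blur stands in for the low-frequency "structural" component (purely illustrative, not the paper's decomposition):

```python
import numpy as np

def box_blur(x, k=3):
    """Mean filter: a crude stand-in for the low-frequency 'structure' of a frame."""
    p = k // 2
    padded = np.pad(x, p, mode='edge')
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

rng = np.random.default_rng(0)
frame = rng.normal(size=(16, 16))

structure = box_blur(frame)         # low-frequency structural component
detail = frame - structure          # residual detail component
reconstructed = structure + detail  # the two components are complementary

print(np.allclose(reconstructed, frame))  # True
```

Residual detail learning predicts only the `detail` part, which suffices for restoration; motion compensation, however, also needs the `structure` part to find correspondences.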

(Figure: overall framework)

Structure Injection Scheme

We propose a structure injection scheme that combines the detail features with structural features pre-computed from the blurry input frames, and uses the combined features for the motion compensation task. The structure injection scheme improves the compatibility of the shared features between the two tasks by preventing the motion compensation task from demanding structural information from the multi-task detail network.
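A hedged sketch of the injection step: the structural features below are a cheap pooled version of the blurry input, and the combination operator (channel-wise stacking) is our assumption; the paper only specifies that pre-computed structural features are combined with the shared detail features.

```python
import numpy as np

rng = np.random.default_rng(0)

def precompute_structure(blurry_frame):
    """Cheap structural features from the blurry input (here: 2x2 mean pooling,
    upsampled back; an illustrative stand-in for the real structural features)."""
    h, w = blurry_frame.shape
    pooled = blurry_frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return pooled.repeat(2, axis=0).repeat(2, axis=1)

def inject_structure(detail_feat, struct_feat):
    """Combine shared detail features with pre-computed structural features
    (channel-wise stacking; only the combined result feeds motion compensation)."""
    return np.stack([detail_feat, struct_feat], axis=0)

blurry = rng.normal(size=(8, 8))
detail_feat = rng.normal(size=(8, 8))  # stand-in for multi-task detail features
combined = inject_structure(detail_feat, precompute_structure(blurry))
print(combined.shape)  # (2, 8, 8)
```

Since the structural features come from the input rather than the shared network, the detail network is free to specialize in residual details for both tasks.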

Multi-stacked MTUs

(Figure: MTU)

As our multi-task unit is lightweight, we can stack several units to achieve high deblurring quality without much computational overhead. By controlling the number of stacks, we can flexibly balance model size against deblurring quality. Thanks to this flexibility, our network can cover environments ranging from those demanding low computation to those where high deblurring quality is desired.
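The cost/quality control can be sketched as a configurable stack depth. The tiny residual unit below is a placeholder for a real MTU; only the stacking pattern reflects the framework.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyUnit:
    """Stand-in for one multi-task unit: a single residual refinement step."""
    def __init__(self, scale=0.1):
        self.scale = scale

    def __call__(self, frame, feat):
        feat = feat + self.scale * frame    # refine shared detail features
        frame = frame + self.scale * feat   # residual deblurring step
        return frame, feat

def build_deblurrer(num_stacks):
    """More stacks -> more computation, but potentially higher deblurring quality."""
    units = [TinyUnit() for _ in range(num_stacks)]
    def run(frame):
        feat = np.zeros_like(frame)
        for unit in units:
            frame, feat = unit(frame, feat)
        return frame
    return run

frame = rng.normal(size=(8, 8))
fast = build_deblurrer(num_stacks=2)      # low-computation setting
quality = build_deblurrer(num_stacks=8)   # high-quality setting
print(fast(frame).shape, quality(frame).shape)  # (8, 8) (8, 8)
```

The deployment-time choice is then just `num_stacks`, trading inference cost for restoration quality without retraining the unit design.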

Results

Quantitative Comparison


Comparison on Motion Compensation Module


Qualitative Comparison

BibTeX

@InProceedings{Son2022RTDeblur,
    author    = {Hyeongseok Son and Junyong Lee and Sunghyun Cho and Seungyong Lee},
    title     = {Real-Time Video Deblurring via Lightweight Motion Compensation},
    booktitle = {Pacific Graphics},
    year      = {2022},
}