Component-Based Distributed Framework for Coherent and Real-Time Video Dehazing
📝 Abstract
Traditional dehazing techniques, a well-studied topic in image processing, are widely used to eliminate haze effects from individual images. However, even state-of-the-art dehazing algorithms may not adequately support video analytics, a crucial pre-processing step for video-based decision-making systems (e.g., robot navigation), because they suffer from poor result coherence and low processing efficiency. This paper presents a new framework, designed specifically for video dehazing, that outputs coherent results in real time using two novel techniques. First, we decompose dehazing algorithms into three generic components: the transmission map estimator, the atmospheric light estimator, and the haze-free image generator. These components can be processed simultaneously by multiple threads in a distributed system, so that processing efficiency is optimized by automatic CPU resource allocation based on the workloads. Second, a cross-frame normalization scheme is proposed to enhance coherence among consecutive frames by sharing the atmospheric light parameters of consecutive frames across the distributed computation platform. Together, these techniques enable our framework to generate highly consistent and accurate dehazing results in real time using only 3 PCs connected by Ethernet.
📄 Content
Component-Based Distributed Framework for Coherent and Real-Time Video Dehazing
Meihua Wang1, Jiaming Mai1, Yun Liang1, Tom Z. J. Fu2, 3, Zhenjie Zhang3, Ruichu Cai2
1College of Mathematics and Informatics, South China Agricultural University, China
2School of Computer Science and Technology, Guangdong University of Technology, China
3Advanced Digital Sciences Center, Illinois at Singapore Pte. Ltd., Singapore
1 Introduction

Video dehazing is an important module in video analytical systems, especially for video-based decision-making applications such as security surveillance and robot navigation. As a key pre-processing step, video dehazing is expected to recover the visual details of target objects in the video, even when the video is recorded in an extremely foggy environment. Inaccurate and inconsistent outputs from the dehazing component can ruin the usefulness of the whole system, regardless of how well the other video analytical modules in the rest of the system perform [1].

While conventional dehazing techniques are fairly mature in image processing, even the state-of-the-art image dehazing algorithms are not directly applicable to video dehazing [2], mainly due to two limitations in their design. First, image dehazing processes one image at a time, without considering the coherence of computation results across frames in the video stream. In video dehazing, however, coherence among consecutive frames is crucial, because fluctuations in contrast and color over the target objects create additional difficulties for further analysis, e.g., human tracking and detection. It is thus necessary to introduce normalization into dehazing algorithms to eliminate such undesirable variations. Second, most of these algorithms are simply too slow for real-time video processing and therefore unsuitable for real applications. Naive parallelization, or redesigning the algorithms for new hardware (e.g., GPUs), may not scale up processing efficiency, because the computation on one frame may depend on the results of other frames once the normalization mentioned above is incorporated [3].

In this paper, we present a new general framework for coherent and real-time video dehazing.
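The recovery step underlying most image dehazing algorithms can be illustrated with the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the hazy frame, J the scene radiance, t the transmission map, and A the atmospheric light. The sketch below is not the paper's implementation; it only shows the haze-free image generator component under the assumption that t and A have already been estimated. The clamp `t_min` is a common safeguard against noise amplification, not a value taken from this paper.

```python
import numpy as np

def recover_scene(hazy, transmission, atmos_light, t_min=0.1):
    """Haze-free image generator: invert the haze model
    I(x) = J(x)*t(x) + A*(1 - t(x))  =>  J(x) = (I(x) - A) / t(x) + A.
    `t_min` clamps the transmission so near-zero values do not blow up noise."""
    t = np.maximum(transmission, t_min)[..., np.newaxis]  # broadcast over RGB channels
    return np.clip((hazy - atmos_light) / t + atmos_light, 0.0, 1.0)

# Toy 1x1 "frame": synthesize a hazy pixel from a known scene radiance,
# then check that the generator recovers it exactly.
J_true = np.array([[[0.2, 0.4, 0.6]]])          # true scene radiance
A = np.array([0.9, 0.9, 0.9])                   # atmospheric light
t = np.array([[0.5]])                           # transmission map
I_hazy = J_true * t[..., np.newaxis] + A * (1 - t[..., np.newaxis])
J_rec = recover_scene(I_hazy, t, A)
```

Because the inversion is a pure per-pixel operation once t and A are fixed, it parallelizes trivially, which is what makes the three-component decomposition amenable to distributed execution.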
By employing our framework, a video analytical system can easily transform an existing image dehazing algorithm into a distributed version and automatically deploy it on a distributed platform (e.g., Amazon EC2) for real-time video processing. Moreover, the parallel computation and coherence enforcement are transparent to the programmer, in the sense that the system itself is responsible for computation resource management and result normalization to guarantee processing efficiency and coherence. These performance guarantees are delivered by two novel techniques in our framework. Specifically, the task decomposition and parallelization technique decomposes a wide class of dehazing algorithms into three generic computation components, such that each component can be processed by multiple threads on the distributed platform in parallel. Furthermore, to avoid rapid model variation over the frames, an automatic state synchronization mechanism normalizes the atmospheric light parameters across consecutive frames. A demo video is available at https://youtu.be/ZuflaEHp_RE.

1.1 Previous Works

Image Dehazing. Dozens of image dehazing methods have been proposed in the past few years [4]. Bas
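One simple way to picture the state synchronization idea is an exponential moving average over per-frame atmospheric light estimates: each worker's raw estimate is blended with the shared state so the recovered colors do not flicker between frames. The paper does not specify its exact normalization scheme, so the class below, including the smoothing factor `alpha`, is purely illustrative.

```python
import numpy as np

class AtmosLightSmoother:
    """Illustrative cross-frame normalization: share the atmospheric light
    estimate across consecutive frames via an exponential moving average.
    `alpha` (hypothetical parameter) controls how slowly the shared state drifts."""

    def __init__(self, alpha=0.9):
        self.alpha = alpha
        self.state = None  # shared state, synchronized across workers

    def update(self, frame_estimate):
        est = np.asarray(frame_estimate, dtype=float)
        if self.state is None:
            self.state = est                     # first frame seeds the state
        else:
            # Blend the old shared state with the new per-frame estimate.
            self.state = self.alpha * self.state + (1 - self.alpha) * est
        return self.state
```

In a distributed deployment, the smoothed state would be broadcast to the haze-free image generator workers so every frame in a window is recovered with a consistent atmospheric light value.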