Image Fusion Using Laplacian Pyramid Transform

ECE Capstone Design Project, Spring’14

Tianjiao Zeng, Renyi Hu, Yaodong He, and Yunqi Wang

Advisor: Professor Dario Pompili

Introduction:

Image fusion, a branch of information fusion, has attracted growing interest. It exploits the complementary information in multiple images to achieve higher resolution and intelligibility: the fused image provides a more comprehensive and precise description of the scene, better suited to the human visual system, to machine perception, and to further image-processing tasks. The fusion algorithm in our capstone design is the Laplacian pyramid transform, which operates at the pixel level, and multi-focus images are our main research target. Mobile computing is used: the whole process runs on the mobile device, without uploading to the cloud. The fused image is of noticeably higher quality than the original photos, thereby enhancing the reality captured by the camera.
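To make the pipeline concrete, below is a minimal Python/OpenCV sketch of pixel-level Laplacian pyramid fusion for two registered, same-size multi-focus images. The pyramid depth, the grayscale assumption, and the max-magnitude coefficient selection rule are illustrative choices, not necessarily the exact parameters of our Android implementation.

```python
# Minimal sketch of pixel-level Laplacian pyramid fusion.
# Assumptions (not from the project report): grayscale inputs,
# 4 pyramid levels, max-absolute-coefficient selection rule.
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Decompose a grayscale image into a Laplacian pyramid."""
    gauss = img.astype(np.float32)
    pyramid = []
    for _ in range(levels):
        down = cv2.pyrDown(gauss)
        up = cv2.pyrUp(down, dstsize=(gauss.shape[1], gauss.shape[0]))
        pyramid.append(gauss - up)   # band-pass detail at this scale
        gauss = down
    pyramid.append(gauss)            # coarsest low-pass residual
    return pyramid

def fuse(img_a, img_b, levels=4):
    """Fuse two registered, same-size multi-focus images."""
    pyr_a = laplacian_pyramid(img_a, levels)
    pyr_b = laplacian_pyramid(img_b, levels)
    fused = []
    for a, b in zip(pyr_a[:-1], pyr_b[:-1]):
        # Keep the coefficient with the larger magnitude: in-focus
        # regions produce stronger high-frequency detail.
        fused.append(np.where(np.abs(a) >= np.abs(b), a, b))
    # Average the low-pass residuals at the coarsest level.
    fused.append(0.5 * (pyr_a[-1] + pyr_b[-1]))
    # Collapse the pyramid back into a single image.
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical usage (file names are placeholders):
# a = cv2.imread("focus_near.jpg", cv2.IMREAD_GRAYSCALE)
# b = cv2.imread("focus_far.jpg", cv2.IMREAD_GRAYSCALE)
# cv2.imwrite("fused.jpg", fuse(a, b))
```

Selecting the larger-magnitude coefficient at each detail level is what transfers the in-focus regions from each source image into the fused result.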

Motivation:

With the development of technology, especially mobile devices and cloud computing, precise and clear images have become an essential prerequisite for many applications. Take one feature of Google Glass, for instance: when you see an object and want information about it, all you have to do is blink (which tells the glasses to take a picture) and wait for the system to recognize it and return the related information. Clearly, object recognition requires precise, clear images. In practice, however, we often end up with images that have defects, and in some circumstances a perfect shot is hard to obtain. So what should we do with these defective images? Must users delete them and take another? What if retaking the picture is too much trouble, or simply impossible? We therefore wanted to find a way to make the best use of the images we already have. Our first thought was to post-process the best picture among them, for example by filtering or enhancing the contrast ratio; however, such methods cannot recover lost information. We then observed that the information lost in one image may be clearly present in another, so why not combine them and make the best use of both? That led us to image fusion. Moreover, we were not satisfied with demonstrating this only in research; we wanted to apply it in a real, practical setting, which led us to build the Android application.

Results:

We successfully obtained the corrected images we wanted, and assessed them both visually, by comparison with the ideal image, and quantitatively, using several criteria. The evaluation shows that the fused image contains more information than either of the two original blurred images.
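As one example of such a criterion, image entropy is widely used in the fusion literature: a fused image that carries more information should have entropy at least as high as either source. The sketch below assumes 8-bit grayscale inputs and is illustrative only; it is not necessarily one of the criteria we applied.

```python
# Illustrative fusion-quality criterion: Shannon entropy of the
# gray-level histogram, in bits per pixel. Higher entropy suggests
# the image carries more information.
import numpy as np

def image_entropy(img):
    """Shannon entropy of an 8-bit grayscale image (bits/pixel)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                     # drop empty histogram bins
    return float(-np.sum(p * np.log2(p)))

# For a successful fusion one hopes to observe something like:
# image_entropy(fused) >= max(image_entropy(a), image_entropy(b))
```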

Snapshots of our Android application:
