MOVIE Index

The MOtion-tuned Video Integrity Evaluation (MOVIE) index is a model and set of algorithms for predicting the perceived quality of digital television and cinematic pictures, as well as other kinds of digital images and videos.

It was developed by Kalpana Seshadrinathan and Alan Bovik at the Laboratory for Image and Video Engineering (LIVE) at The University of Texas at Austin, and was described in print in the 2010 technical paper "Motion Tuned Spatio-Temporal Quality Assessment of Natural Videos".[1]

Model overview

The MOVIE index is a neuroscience-based model for predicting the perceptual quality of a (possibly compressed or otherwise distorted) motion picture or video against a pristine reference video; it is therefore a full-reference metric. MOVIE differs from many other models in that it is built on neuroscience-based accounts of how the human brain processes visual signals at successive stages along the visual pathway, including the lateral geniculate nucleus, the primary visual cortex (area V1), and the motion-sensitive extrastriate visual area MT.

MOVIE processes spatial and temporal motion picture information in an approximately separable manner. Spatial MOVIE predicts the spatial (frame) quality of a video by computing a space-time frequency decomposition of both the reference and the test (distorted) videos using a Gabor filter bank. After divisive normalization, based on a model of cortical (area V1) processing in the brain, the normalized reference and test responses are combined in a weighted difference to produce a prediction of spatial picture quality.
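
As a rough illustration only (not the authors' implementation), the following Python sketch mimics this spatial step on a single frame pair: it applies a small 2-D Gabor filter bank in place of the full 3-D space-time decomposition, divisively normalizes the responses, and averages the normalized differences. The filter parameters, the normalization constant k, and the averaging are assumptions chosen for brevity.

    # Illustrative simplification of the spatial-quality step described above;
    # NOT the published MOVIE implementation. All parameters are arbitrary.
    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(freq, theta, sigma=4.0, size=21):
        # Real-valued 2-D Gabor: Gaussian envelope times an oriented cosine
        # at spatial frequency `freq` (cycles/pixel) and orientation `theta`.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
        return envelope * np.cos(2.0 * np.pi * freq * xr)

    def spatial_quality_frame(ref, dst, freqs=(0.1, 0.2), n_orient=4, k=0.1):
        # Toy per-frame spatial distortion score (higher = more distorted).
        ref = ref.astype(np.float64)
        dst = dst.astype(np.float64)
        errors = []
        for f in freqs:
            for i in range(n_orient):
                g = gabor_kernel(f, np.pi * i / n_orient)
                r = fftconvolve(ref, g, mode="same")
                d = fftconvolve(dst, g, mode="same")
                # Divisive normalization: the squared response difference is
                # scaled by the local response energy (a V1-style gain control).
                err = (r - d) ** 2 / (r ** 2 + d ** 2 + k)
                errors.append(err.mean())
        return float(np.mean(errors))

    # Example: compare a random "reference" frame with a mildly noisy copy.
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    dst = ref + 0.05 * rng.standard_normal((64, 64))
    print(spatial_quality_frame(ref, dst))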

In parallel, Temporal MOVIE predicts the temporal (time-varying, or inter-frame) quality using the responses of the same Gabor space-time frequency decomposition of the reference and test videos, but in a different manner. It applies an excitatory-inhibitory weighting to the Gabor responses so that they become tuned to the motion measured locally in the video. The motion measurements are made with the same space-time filter bank, using a perceptually relevant, phase-based optical flow computation. The weighted responses of the reference and test videos are then differenced and divisively normalized to produce a prediction of temporal picture quality.
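
The motion-tuning idea can be sketched as follows (again an assumption-laden simplification, not the published algorithm). For a local translation with velocity (vx, vy), the video's spectral energy concentrates on the plane fx·vx + fy·vy + ft = 0 in space-time frequency; filters whose center frequencies lie near that plane receive excitatory weights and filters far from it are inhibited. The particular weight formula and fall-off parameter below are illustrative only.

    # Illustrative sketch of motion-tuned excitatory/inhibitory weighting;
    # NOT the weighting formula from the published MOVIE model.
    import numpy as np

    def motion_tuned_weights(centers, vx, vy, alpha=1.0):
        # `centers` is an (N, 3) array of Gabor center frequencies (fx, fy, ft).
        # Returns signed weights in [-1, 1]: +1 for filters on the motion
        # plane (excitatory), negative for filters far from it (inhibitory).
        centers = np.asarray(centers, dtype=np.float64)
        fx, fy, ft = centers[:, 0], centers[:, 1], centers[:, 2]
        # Perpendicular distance of each center frequency from the plane
        # fx*vx + fy*vy + ft = 0 defined by the local flow vector (vx, vy).
        dist = np.abs(fx * vx + fy * vy + ft) / np.sqrt(vx ** 2 + vy ** 2 + 1.0)
        radius = np.linalg.norm(centers, axis=1)
        d = np.clip(dist / np.maximum(radius, 1e-12), 0.0, 1.0)
        return 1.0 - 2.0 * alpha * d

    # Example: rightward translation of 1 pixel/frame. The first filter lies
    # on the motion plane (weight +1); the second is strongly inhibited.
    centers = [(0.2, 0.0, -0.2), (0.2, 0.0, 0.2), (0.0, 0.2, 0.0)]
    print(motion_tuned_weights(centers, vx=1.0, vy=0.0))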

The frame-level spatial and temporal quality scores are pooled over time (frames) to produce the Spatial MOVIE and Temporal MOVIE indices, and the overall MOVIE index is defined as their simple product.
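
In code form, this final combination can be sketched as follows (a minimal illustration; the exact pooling statistic for each index follows the paper, and plain frame averages are assumed here).

    # Sketch only: plain averages stand in for the paper's pooling statistic.
    import numpy as np

    def movie_index(spatial_frame_scores, temporal_frame_scores):
        # Pool the per-frame spatial and temporal scores over time, then take
        # the product of the two pooled indices as the overall MOVIE index.
        spatial_movie = float(np.mean(spatial_frame_scores))
        temporal_movie = float(np.mean(temporal_frame_scores))
        return spatial_movie * temporal_movie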

Performance

The MOVIE index delivers better perceptual motion picture quality predictions than do traditional methods such as the peak signal-to-noise ratio (PSNR) and mean squared error (MSE), which are inconsistent with human visual perception.

It also outperforms more advanced video quality models, such as the ANSI-standardized VQM and the popular Structural Similarity (SSIM) index, in predicting motion picture quality. On the LIVE Video Quality Database, a standard tool for assessing the accuracy of picture quality models, the MOVIE index achieved the highest correlation with human judgments of motion picture quality among the models compared. The original MOVIE paper received an IEEE Signal Processing Society Best Journal Paper Award in 2013.

Usage

The MOVIE index has found wide acceptance in the television broadcast industry. It is used by broadcast and post-production houses globally to assess motion picture quality, and by manufacturers of broadcast encoders and statistical multiplexers to ensure that their products deliver the best possible picture quality. It is commercially marketed as part of the Video Clarity line of video quality measurement tools used throughout the television and motion picture industries.

References

  1. Seshadrinathan, K.; Bovik, A.C. (2010-02-01). "Motion Tuned Spatio-Temporal Quality Assessment of Natural Videos". IEEE Transactions on Image Processing. 19 (2): 335–350. doi:10.1109/TIP.2009.2034992. ISSN 1057-7149.

  2. Seshadrinathan, K.; Soundararajan, R.; Bovik, A.C.; Cormack, L.K. (2010). "Study of Subjective and Objective Quality Assessment of Video". IEEE Transactions on Image Processing. 19 (6): 1427–1441.

