2016

Conference paper: Multi-dimensional phase unwrapping: a new and efficient linear algebraic formulation using weighted least-squares

Laurent Lamalle, Georgios Gousios and Matthieu Urvoy
Proceedings of the ISMRM 24th Annual Meeting & Exhibition, Automating & Speeding Algorithms session, May 9th, 2016

Abstract   |   Bibtex

Phase information of MR images can provide quantitative access to various physical properties of the examined sample, such as local B0 values, magnetic susceptibility or flow. Phase is a continuous quantity whose estimation typically requires unwrapping. In this study, we propose a novel phase estimation algorithm which: (1) relies on a numeric scheme that is robust to phase jumps, and (2) is optimized for execution on modern parallel processors.
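
The abstract stays at a high level, so as a concrete point of reference, here is a minimal 1-D sketch of the generic weighted least-squares approach to phase unwrapping named in the title: wrapped finite differences are computed, then an unwrapped phase is recovered by solving a weighted normal-equation system. This illustrates the general technique only, not the authors' multi-dimensional formulation or their parallel numeric scheme; the uniform weights, the sparse solver and the gauge-fixing anchor on the first sample are assumptions made for the example.

# 1-D weighted least-squares phase unwrapping (illustrative sketch only).
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def wrap(x):
    """Wrap values into (-pi, pi]."""
    return np.angle(np.exp(1j * x))

def unwrap_wls_1d(psi, w=None):
    """Unwrap a 1-D wrapped phase signal psi by minimizing
    sum_i w_i * (phi[i+1] - phi[i] - d[i])^2, with d[i] = wrap(psi[i+1] - psi[i])."""
    n = len(psi)
    if w is None:
        w = np.ones(n - 1)                 # uniform weights (assumption)
    d = wrap(np.diff(psi))                 # wrapped forward differences
    D = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    W = diags(w)
    A = (D.T @ W @ D).tolil()              # normal equations D^T W D phi = D^T W d
    b = D.T @ (W @ d)
    A[0, 0] += 1.0                         # anchor phi[0] to psi[0] to fix the gauge
    b[0] += psi[0]
    return spsolve(A.tocsr(), b)

# Quick check on a noise-free synthetic ramp.
true_phase = np.linspace(0, 6 * np.pi, 200)
print(np.allclose(unwrap_wls_1d(wrap(true_phase)), true_phase))   # True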

@inproceedings{Lamalle2016,
abstract = {Phase information of MR images can provide quantitative access to various physical properties of the examined sample, such as local B0 values, magnetic susceptibility or flow. Phase is a continuous information whose estimation typically requires unwrapping. In this study, we propose a novel phase estimation algorithm which: (1) relies on a numeric scheme that is robust to phase jumps, and (2) is optimized for execution on modern parallel processors.},
address = {Singapore},
author = {Lamalle, Laurent and Gousios, Georgios and Urvoy, Matthieu},
booktitle = {Proceedings of the ISMRM 24th Annual Meeting {\&} Exhibition},
title = {{Multi-dimensional phase unwrapping: a new and efficient linear algebraic formulation using weighted least-squares}},
year = {2016}
}

Book section: Bayesian Stroke Lesion Estimation for Automatic Registration of DTI Images

Félix Renard, Matthieu Urvoy, Assia Jaillard
Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: First International Workshop, Brainles 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 5, 2015, Revised Selected Papers. Lecture Notes in Computer Science, Vol. 9556, pp. 91-103, Springer International Publishing

Abstract   |   Bibtex   |   doi

In Diffusion Tensor Imaging (DTI), the Fractional Anisotropy (FA) is used to measure the integrity of the white matter (WM); it is considered a biomarker for stroke recovery. This measure is highly sensitive to the applied pre-processing steps; in particular, the presence of a lesion may result in severe misregistration. In this paper, it is proposed to quantitatively assess the impact of large stroke lesions on the registration process. To reduce this impact, a new registration algorithm that localizes the lesion via Bayesian estimation is proposed.

@incollection{Renard2016,
abstract = {In Diffusion Tensor Imaging (DTI), the Fractional Anisotropy (FA) is used to measure the integrity of the white matter (WM); it is considered as a biomarker for stroke recovery. This measure is highly sensitive to applied pre-processing steps; in particular, the presence of a lesion may result into severe misregistration. In this paper, it is proposed to quantitatively assess the impact of large stroke lesions onto the registration process. To reduce this impact, a new registration algorithm, that localizes the lesion via Bayesian estimation, is proposed.},
author = {Renard, F{\'{e}}lix and Urvoy, Matthieu and Jaillard, Assia},
booktitle = {Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: First International Workshop, Brainles 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 5, 2015, Revised Selected Papers},
doi = {10.1007/978-3-319-30858-6_9},
editor = {Crimi, Alessandro and Menze, Bjoern and Maier, Oskar and Reyes, Mauricio and Handels, Heinz},
isbn = {978-3-319-30858-6},
pages = {91--103},
publisher = {Springer International Publishing},
title = {{Bayesian Stroke Lesion Estimation for Automatic Registration of DTI Images}},
url = {http://link.springer.com/10.1007/978-3-319-30858-6{\_}9},
year = {2016}
}

2015

Conference paper: Bayesian stroke lesion estimation for automatic registration of DTI images

Félix Renard, Matthieu Urvoy, Assia Jaillard
In International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2015, Brain Lesion (Brainles) workshop, Munich, Germany

Abstract   |   Bibtex

In Diffusion Tensor Imaging (DTI), the Fractional Anisotropy (FA) is used to measure the integrity of the white matter (WM); it is considered a biomarker for stroke recovery. This measure is highly sensitive to the applied pre-processing steps; in particular, the presence of a lesion may result in severe misregistration. In this paper, it is proposed to quantitatively assess the impact of large stroke lesions on the registration process. To reduce this impact, a new registration algorithm that localizes the lesion via Bayesian estimation is proposed.

@inproceedings{Renard2015,
address = {Munich, Germany},
author = {Renard, F{\'{e}}lix and Urvoy, Matthieu and Jaillard, Assia},
booktitle = {{International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2015, Brain Lesion (Brainles) workshop}},
title = {{Bayesian stroke lesion estimation for automatic registration of DTI images}},
year = {2015}
}

2014

Conference paper: Application of Grubbs’ test for outliers to the detection of watermarks

Matthieu Urvoy, Florent Autrusseau
In Proceedings of the 2nd ACM workshop on Information hiding and multimedia security - IH&MMSec ’14 (pp. 49–60): ACM Press, 2014

Abstract   |   Bibtex   |   Editor's   |   doi

In an era when the protection of intellectual property rights becomes more and more important, providing robust and efficient watermarking techniques is crucial, both in terms of embedding and detection. In this paper, the authors specifically focus on the latter stage. Most often, the detection consists in the comparison of a fixed and non-adaptive decision threshold to a correlation coefficient. This threshold is usually determined either theoretically or experimentally. Here, it is proposed to apply Grubbs' test, a simple statistical test for outliers, on the correlation data in order to take a binary decision about the presence or the absence of the searched watermark. The proposed technique is applied to three algorithms from the literature: the correlation data generated by the detector is fed to Grubbs' test. The obtained results show that Grubbs' test is efficient, robust and reliable. Above all, it automatically adapts to the searched watermark and can be easily applied to most types of watermarking approaches.
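
Since the abstract describes the detection principle (feed the detector's correlation data to Grubbs' test and decide on the presence of the watermark), the short sketch below illustrates that principle with a one-sided Grubbs' test in Python. It is only a schematic illustration: the significance level, the use of scipy.stats and the toy correlation data are assumptions, not the paper's actual detector.

# One-sided Grubbs' test applied to a set of correlation scores (sketch).
import numpy as np
from scipy import stats

def grubbs_max_is_outlier(scores, alpha=0.05):
    """Return (decision, G, G_crit): is the maximum of `scores` an outlier?"""
    x = np.asarray(scores, dtype=float)
    n = x.size
    G = (x.max() - x.mean()) / x.std(ddof=1)              # Grubbs statistic
    t = stats.t.ppf(1 - alpha / n, n - 2)                 # one-sided critical t value
    G_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return G > G_crit, G, G_crit

# Toy usage: 1000 correlation scores, one of which comes from the true watermark.
rng = np.random.default_rng(0)
corr = rng.normal(0.0, 0.05, size=1000)
corr[123] = 0.6                                           # response to the embedded mark
print(grubbs_max_is_outlier(corr))                        # (True, ...)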

@inproceedings{Urvoy2014,
address = {Salzburg, Austria},
author = {Urvoy, Matthieu and Autrusseau, Florent},
booktitle = {Proceedings of the 2nd ACM workshop on Information hiding and multimedia security - IH\&MMSec '14},
doi = {10.1145/2600918.2600931},
isbn = {9781450326476},
pages = {49--60},
publisher = {ACM Press},
title = {{Application of Grubbs' test for outliers to the detection of watermarks}},
url = {http://hal.archives-ouvertes.fr/hal-00968698/ http://dl.acm.org/citation.cfm?doid=2600918.2600931},
year = {2014}
}

Journal paper: Perceptual DFT Watermarking With Improved Detection and Robustness to Geometrical Distortions

Matthieu Urvoy, Dalila Goudia, Florent Autrusseau
IEEE Transactions on Information Forensics and Security, 9(7), 1108–1119, 2014 (Impact Factor: 1.9).

Full text          Abstract   |   Bibtex   |   Editor's   |   doi

More than ever, the growing amount of exchanged digital contents calls for efficient and practical techniques to protect intellectual property rights. During the past two decades, watermarking techniques have been proposed to embed and detect information within these contents, with four key requirements at hand: robustness, security, capacity and invisibility. So far, researchers mostly focused on the first three, but seldom addressed the invisibility from a perceptual perspective and instead mostly relied on objective quality metrics. In this paper, a novel DFT watermarking scheme featuring perceptually-optimal visibility versus robustness is proposed. The watermark, a noise-like square patch of coefficients, is embedded by substitution within the Fourier domain; the amplitude component adjusts the watermark strength, and the phase component holds the information. A perceptual model of the Human Visual System (HVS) based on the Contrast Sensitivity Function (CSF) and a local contrast pooling is used to determine the optimal strength at which the mark reaches the visibility threshold. A novel blind detection method is proposed to assess the presence of the watermark. The proposed approach exhibits high robustness to various kinds of attacks, including geometrical distortions. Experimental results show that the robustness of the proposed method is globally slightly better than the state of the art. A comparative study was conducted at the visibility threshold (from subjective data) and showed that the obtained performances are more stable across various kinds of contents.
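
To make the embedding/detection pipeline described above easier to picture, here is a heavily simplified sketch of magnitude-domain DFT watermarking with correlation-based blind detection. It is not the paper's scheme: the multiplicative embedding rule, the fixed-radius ring of carrier positions, the keyed ±1 pattern and all parameter values are assumptions chosen for the example, and the perceptual (CSF-based) strength adaptation is omitted entirely.

# Schematic Fourier-magnitude watermarking with blind correlation detection.
import numpy as np

def carrier_positions(shape, key=0, radius=60, count=256):
    """Keyed pseudo-random positions on a mid-frequency ring of the (unshifted)
    spectrum; occasional index collisions are ignored in this sketch."""
    rng = np.random.default_rng(key)
    angles = rng.uniform(0.0, np.pi, count)               # half-plane only
    h, w = shape
    ku = np.round(radius * np.cos(angles)).astype(int) % h
    kv = np.round(radius * np.sin(angles)).astype(int) % w
    return ku, kv

def keyed_watermark(key=0, count=256):
    """Noise-like +/-1 pattern derived from the key."""
    return np.sign(np.random.default_rng(key + 1).standard_normal(count))

def embed(img, key=0, alpha=0.8, radius=60):
    """Scale the spectrum magnitude at the carrier positions (and at their
    conjugate-symmetric twins, so the output stays real) by 1 + alpha * wm."""
    F = np.fft.fft2(img)
    h, w = img.shape
    ku, kv = carrier_positions(img.shape, key, radius)
    wm = keyed_watermark(key)
    mag, phase = np.abs(F), np.angle(F)
    mag[ku, kv] *= 1.0 + alpha * wm
    mag[-ku % h, -kv % w] *= 1.0 + alpha * wm
    return np.real(np.fft.ifft2(mag * np.exp(1j * phase)))

def detect(img, key=0, radius=60):
    """Blind detection: normalized correlation between the keyed pattern and
    the spectrum magnitudes at the carrier positions."""
    F = np.fft.fft2(img)
    ku, kv = carrier_positions(img.shape, key, radius)
    wm = keyed_watermark(key)
    m = np.abs(F)[ku, kv]
    return float(np.mean((m - m.mean()) / m.std() * (wm - wm.mean()) / wm.std()))

# Toy usage on a synthetic image: the detector response is clearly higher for
# the marked image than for the unmarked one.
rng = np.random.default_rng(3)
img = rng.uniform(0.0, 255.0, size=(512, 512))
print(detect(embed(img, key=7), key=7), detect(img, key=7))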

@article{Urvoy2014,
author = {Urvoy, Matthieu and Goudia, Dalila and Autrusseau, Florent},
doi = {10.1109/TIFS.2014.2322497},
issn = {1556-6013},
journal = {IEEE Transactions on Information Forensics and Security},
month = jul,
number = {7},
pages = {1108--1119},
title = {{Perceptual DFT Watermarking With Improved Detection and Robustness to Geometrical Distortions}},
url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6811181},
volume = {9},
year = {2014}
}

2013

Book section: Visual Comfort and Fatigue in Stereoscopy

Matthieu Urvoy, Marcus Barkowsky, Jing Li, Patrick Le Callet
In L. Lucas, C. Loscos, & Y. Rémion (Eds.), 3D Video: From Capture to Diffusion (pp. 309–329). London, UK: ISTE Ltd, WILEY, 2013.

Bibtex   |   Editor's

@incollection{Urvoy2013c,
address = {London, UK},
author = {Urvoy, Matthieu and Barkowsky, Marcus and Li, Jing and {Le Callet}, Patrick},
booktitle = {3D Video: From Capture to Diffusion},
chapter = {16},
editor = {Lucas, Laurent and Loscos, C\'{e}line and R\'{e}mion, Yannick},
isbn = {9781848215078},
pages = {309--329},
publisher = {ISTE Ltd, WILEY},
title = {{Visual Comfort and Fatigue in Stereoscopy}},
url = {http://www.iste.co.uk/index.php?f=a\&ACTION=View\&id=714},
year = {2013}
}

Journal paper: How visual fatigue and discomfort impact 3D-TV quality of experience: a comprehensive review of technological, psychophysical, and psychological factors.

Matthieu Urvoy, Marcus Barkowsky, Patrick Le Callet
Annals of Telecommunications - Annales Des Télécommunications, 68(11-12), 641–655, 2013 (Impact Factor: 0.57).

Full text          Abstract   |   Bibtex   |   Editor's   |   doi

The Quality of Experience (QoE) of 3D contents is usually considered to be the combination of the perceived visual quality, the perceived depth quality, and lastly the visual fatigue and comfort. When either fatigue or discomfort is induced, studies tend to show that observers prefer to experience a 2D version of the contents. For this reason, providing a comfortable experience is a prerequisite for observers to actually consider the depth effect as a visualization improvement.
In this paper, we propose a comprehensive review of visual fatigue and discomfort induced by the visualization of 3D stereoscopic contents, in the light of the physiological and psychological processes enabling depth perception. First, we review the multitude of manifestations of visual fatigue and discomfort (near-triad disorders, symptoms of discomfort), as well as means for detection and evaluation. We then discuss how, in 3D displays, ocular and cognitive conflicts with real-world experience may cause fatigue and discomfort; these include the accommodation-vergence conflict, the inadequacy between presented stimuli and observers' depth of focus, and the cognitive integration of conflicting depth cues. We also discuss some limits of stereopsis that constrain our ability to perceive depth, in particular the perception of planar and in-depth motion, the limited fusion range and various stereopsis disorders. Finally, this paper discusses how the different aspects of fatigue and discomfort apply to 3D technologies and contents. We notably highlight the need for respecting a comfort zone and avoiding camera and rendering artifacts. We also discuss the influence of visual attention, exposure duration and training. Conclusions provide guidance for best practices and future research.

@article{Urvoy2013b,
author = {Urvoy, Matthieu and Barkowsky, Marcus and {Le Callet}, Patrick},
doi = {10.1007/s12243-013-0394-3},
issn = {0003-4347},
journal = {annals of telecommunications - annales des t\'{e}l\'{e}communications},
month = sep,
number = {11-12},
pages = {641--655},
title = {{How visual fatigue and discomfort impact 3D-TV quality of experience: a comprehensive review of technological, psychophysical, and psychological factors}},
url = {http://link.springer.com/10.1007/s12243-013-0394-3},
volume = {68},
year = {2013}
}

Book section: Confort et fatigue visuels en stéréoscopie

Matthieu Urvoy, Marcus Barkowsky, Jing Li, Patrick Le Callet
In L. Lucas, C. Loscos, & Y. Rémion (Eds.), Vidéo 3D : Capture, traitement et diffusion ; Traité IC2, série Signal et image (pp. 309–327). Londres: Hermès Sciences Publications Lavoisier, 2013.

Bibtex   |   Editor's

@incollection{Urvoy2013a,
address = {Londres},
author = {Urvoy, Matthieu and Barkowsky, Marcus and Li, Jing and {Le Callet}, Patrick},
booktitle = {Vid\'{e}o 3D : Capture, traitement et diffusion ; Trait\'{e} IC2, s\'{e}rie Signal et image},
chapter = {16},
editor = {Lucas, Laurent and Loscos, C\'{e}line and R\'{e}mion, Yannick},
isbn = {9782746245457},
pages = {309--327},
publisher = {Herm\`{e}s Sciences Publications Lavoisier},
title = {{Confort et fatigue visuels en st\'{e}r\'{e}oscopie}},
url = {http://editions.lavoisier.fr/notice.asp?ouvrage=2762260},
year = {2013}
}

Journal paper: Print and Scan Resilient Image Watermarking

Matthieu Urvoy, Florent Autrusseau
Journal of South China University of Technology (JSCUT), 41(7), 120–125, 2013

Full text          Abstract   |   Bibtex

In this paper, a new watermarking algorithm for still images is introduced, which is designed to be robust against print and scan (i.e. slight geometrical distortions and noise). The digital watermark, a square noise-like pattern, is embedded within the Fourier domain. The watermark is modulated at a high carrier frequency. Several embedding methods are evaluated with respect to their robustness against various distortions. The detection is blind; a normalized 2D correlation between the watermark and the Fourier coefficients at the carrier frequency is performed.

@article{Urvoy2013,
author = {Urvoy, Matthieu and Autrusseau, Florent},
journal = {Journal of South China University of Technology (JSCUT)},
number = {7},
pages = {120--125},
title = {{Print and Scan Resilient Image Watermarking}},
volume = {41},
year = {2013}
}

Conference paper: Stereoscopic 3D video coding quality evaluation with 2D objective metrics

Kun Wang, Kjell Brunnström, Marcus Barkowsky, Matthieu Urvoy, Mårten Sjöström, Patrick Le Callet, Sylvain Tourancheau, Börje Andrén
In A. J. Woods, N. S. Holliman, & G. E. Favalora (Eds.), Proc. SPIE 8648, Stereoscopic Displays and Applications XXIV (p. 86481L), 2013

Full text          Abstract   |   Bibtex   |   Editor's   |   doi

The 3D video quality is of the highest importance for the adoption of a new technology from a user's point of view. In this paper, we evaluated the impact of coding artefacts on stereoscopic 3D video quality by making use of several existing full-reference 2D objective metrics. We analyzed the performance of the objective metrics by comparing them to the results of a subjective experiment. The results show that the pixel-based Visual Information Fidelity metric fits the subjective data best. The 2D video quality seems to have a dominant impact on the overall quality of coding-impaired stereoscopic videos.

@inproceedings{Wang2013,
abstract = {The 3D video quality is of highest importance for the adoption of a new technology from a user’s point of view. In this paper we evaluated the impact of coding artefacts on stereoscopic 3D video quality by making use of several existing full reference 2D objective metrics. We analyzed the performance of objective metrics by comparing to the results of subjective experiment. The results show that pixel based Visual Information Fidelity metrics fits subjective data the best. The 2D stereoscopic video quality seems to have dominant impact on the coding artefacts impaired stereoscopic videos.},
author = {Wang, Kun and Brunnstr\"{o}m, Kjell and Barkowsky, Marcus and Urvoy, Matthieu and Sj\"{o}str\"{o}m, Marten and {Le Callet}, Patrick and Tourancheau, Sylvain and Andr\'{e}n, Borje},
booktitle = {Proc. SPIE 8648, Stereoscopic Displays and Applications XXIV},
doi = {10.1117/12.2003664},
editor = {Woods, Andrew J. and Holliman, Nicolas S. and Favalora, Gregg E.},
month = mar,
pages = {86481L},
title = {{Stereoscopic 3D video coding quality evaluation with 2D objective metrics}},
url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?doi=10.1117/12.2003664},
year = {2013}
}

2012

Conference paper: Subjective experiment dataset for joint development of hybrid video quality measurement algorithms

Marcus Barkowsky, Nicolas Staelens, Lucjan Janowski, Yao Koudota, Mikolaj Leszczuck, Matthieu Urvoy, Patrick Hummelbrunner, Inigo Sedano, Kjell Brunnström
In Proceedings of Third Workshop on Quality of Experience for Multimedia Content Sharing (QoEMCS 2012), pp. 1–4, 2012.

Full text          Abstract   |   Bibtex

The application area of an objective measurement algorithm for video quality is always limited by the scope of the video datasets that were used during its development and training. This is particularly true for measurements which rely solely on information available at the decoder side, for example hybrid models that analyze the bitstream and the decoded video. This paper proposes a framework which enables researchers to train, test and validate their algorithms on a large database of video sequences in such a way that the – often limited – scope of their development can be taken into consideration. A freely available video database for the development of hybrid models is described containing the network bitstreams, parsed information from these bitstreams for easy access, the decoded video sequences, and subjectively evaluated quality scores.

@inproceedings{Barkowsky2012,
author = {Barkowsky, Marcus and Staelens, Nicolas and Janowski, Lucjan and Koudota, Yao and Leszczuck, Mikolaj and Urvoy, Matthieu and Hummelbrunner, Patrik and Sedano, Inigo and Brunnstr\"{o}m, Kjell},
booktitle = {Proceedings of Third Workshop on Quality of Experience for Multimedia Content Sharing (QoEMCS 2012)},
keywords = {freely available dataset,objective video quality measurement,subjective experiment,video quality standardization},
pages = {1--4},
title = {{Subjective experiment dataset for joint development of hybrid video quality measurement algorithms}},
url = {http://hal.archives-ouvertes.fr/hal-00717861/},
year = {2012}
}

Conference paper: NAMA3DS1-COSPAD1: Subjective video quality assessment database on coding conditions introducing freely available high quality 3D stereoscopic sequences

Matthieu Urvoy, Marcus Barkowsky, Romain Cousseau, Yao Koudota, Vincent Ricordel, Patrick Le Callet, Jesus Gutierrez, Narciso Garcia
In 2012 Fourth International Workshop on Quality of Multimedia Experience (pp. 109–114). Yarra Valley, VIC, Australia. IEEE, 2012.

Full text          Abstract   |   Bibtex   |   Editor's   |   doi

Research in stereoscopic 3D coding, transmission and subjective assessment methodology depends largely on the availability of source content that can be used in cross-lab evaluations. While several studies have already been presented using proprietary content, comparisons between the studies are difficult since discrepant contents are used. Therefore in this paper, a freely available dataset of high quality Full-HD stereoscopic sequences shot with a semiprofessional 3D camera is introduced in detail. The content was designed to be suited for usage in a wide variety of applications, including high quality studies. A set of depth maps was calculated from the stereoscopic pair. As an application example, a subjective assessment has been performed using coding and spatial degradations. The Absolute Category Rating with Hidden Reference method was used. The observers were instructed to vote on video quality only. Results of this experiment are also freely available and will be presented in this paper as a first step towards objective video quality measurement for 3DTV.

@inproceedings{Urvoy2012,
address = {Yarra Valley, VIC, Australia},
author = {Urvoy, Matthieu and Barkowsky, Marcus and Cousseau, Romain and Koudota, Yao and Ricordel, Vincent and {Le Callet}, Patrick and Gutierrez, Jesus and Garcia, Narciso},
booktitle = {2012 Fourth International Workshop on Quality of Multimedia Experience},
doi = {10.1109/QoMEX.2012.6263847},
isbn = {978-1-4673-0726-0},
keywords = {Stereoscopic 3D,coding impairments,depth,free content database,maps,subjective evaluation,video quality},
month = jul,
pages = {109--114},
publisher = {IEEE},
title = {{NAMA3DS1-COSPAD1: Subjective video quality assessment database on coding conditions introducing freely available high quality 3D stereoscopic sequences}},
url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6263847},
year = {2012}
}

2011

PhD thesis: Motion tubes: a new representation for image sequences

Matthieu Urvoy
Université Européenne de Bretagne, Rennes, France, 2011.

Full text          Abstract   |   Bibtex

In only a few years, the amount of video information transmitted across a wide range of communication channels has increased dramatically. It is expected that, by 2014, IP traffic will consist almost exclusively of video data. In mobile networks as well, video traffic is expected to undergo an unprecedented increase. Despite the ever-increasing throughput of modern transmission channels, these will not be able to sustain such an increase in payload. More than ever, it is essential to improve our ability to compact video information. Over the past 30 years, research has provided numerous decorrelation tools that reduce the amount of redundancy across both the spatial and temporal dimensions of image sequences. To this day, the classical video compression paradigm locally splits images into blocks of pixels (macroblocks) and processes the temporal axis on a frame-by-frame basis, without any explicit continuity. Despite very high compression performance (e.g. the AVC and forthcoming HEVC standards), one may still advocate the use of alternative approaches. Disruptive solutions have also been proposed, and notably offer the ability to process the temporal axis continuously. However, they often rely on complex tools (e.g. wavelets, control grids) whose use is rather delicate in practice. This thesis investigates the viability of an alternative representation that embeds features of both classical and disruptive approaches. Its goal is to exhibit the temporal persistence of the textural information through a time-continuous description. However, it still relies on blocks, which are largely responsible for the popularity of the classical approach. Instead of re-initializing the description at each frame, it is proposed to track the evolution of initial blocks taken from a reference image. A block, and its trajectory across time and space, is called a motion tube. An image sequence is then interpreted as a set of motion tubes. Three major problems are considered in this thesis. First, motion tubes need to track both continuous and discontinuous displacements and deformations of individual patches of texture. Above all, it is critical for them to evolve as consistently as possible, which requires dedicated regularization mechanisms. Then, a second problem lies in the texture itself and the way it is used to synthesize images: how to handle non-registered and multi-registered areas. Finally, it is essential for a motion tube to be terminated whenever the corresponding patch of texture disappears or cannot be properly tracked any longer, which raises the problem of quality and efficiency assessment. This has a critical influence on the compactness of the representation. Results show that motion tubes can effectively be used to represent image sequences, and their performance is compared with that of the AVC standard.
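
As a concrete companion to this description, the toy sketch below shows one way the motion-tube idea can be expressed as a data structure: one tube per block of a reference frame, tracked forward until its texture can no longer be matched. Plain exhaustive SAD block matching stands in for the thesis's hybrid block/mesh motion model, and the block size, search range and termination threshold are arbitrary illustrative choices.

# Toy motion-tube tracking with exhaustive block matching (illustrative only).
from dataclasses import dataclass, field
import numpy as np

@dataclass
class MotionTube:
    texture: np.ndarray                          # block of texture from the reference frame
    start_frame: int
    positions: list = field(default_factory=list)  # (row, col) of the block in each frame
    alive: bool = True                           # set to False when the tube is terminated

def best_match(frame, block, center, search=8):
    """Exhaustive SAD block matching around `center`; returns ((row, col), sad)."""
    h, w = block.shape
    r0, c0 = center
    best = (center, np.inf)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if 0 <= r <= frame.shape[0] - h and 0 <= c <= frame.shape[1] - w:
                sad = np.abs(frame[r:r + h, c:c + w] - block).sum()
                if sad < best[1]:
                    best = ((r, c), sad)
    return best

def track_tubes(frames, block=16, sad_max=1e4):
    """One tube per block of the first (float-valued) frame, tracked forward;
    a tube is terminated when its best match becomes too poor."""
    ref = frames[0]
    tubes = [MotionTube(ref[r:r + block, c:c + block].copy(), 0, [(r, c)])
             for r in range(0, ref.shape[0] - block + 1, block)
             for c in range(0, ref.shape[1] - block + 1, block)]
    for frame in frames[1:]:
        for t in tubes:
            if not t.alive:
                continue
            pos, sad = best_match(frame, t.texture, t.positions[-1])
            if sad > sad_max:
                t.alive = False                  # texture lost: close the tube
            else:
                t.positions.append(pos)
    return tubes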

@phdthesis{Urvoy2011,
author = {Urvoy, Matthieu},
pages = {252},
school = {Universit\'{e} Europ\'{e}enne de Bretagne},
title = {{Les tubes de mouvement: nouvelle repr\'{e}sentation pour les s\'{e}quences d'images}},
url = {http://tel.archives-ouvertes.fr/tel-00642973/},
year = {2011}
}

2009

Conference paper: Motion tubes for the representation of image sequences

Matthieu Urvoy, Nathalie Cammas, Stéphane Pateux, Olivier Déforges, Marie Babel, Muriel Pressigout
Proceedings of the 2009 IEEE International Conference on Multimedia and Expo, ICME 2009, June 28 - July 2, 2009, New York City, NY, USA

Full text          Abstract   |   Bibtex

In this paper, we introduce a novel way to represent an image sequence, which naturally exhibits the temporal persistence of the textures. Standardized representations have been thoroughly optimized, and getting significant improvements has become more and more difficult. As an alternative, Analysis-Synthesis (AS) coders have focused on the use of texture within a video coder. We introduce here a new AS representation of image sequences that remains close to the classic block-based representation. By tracking textures throughout the sequence, we propose to reconstruct it from a set of moving textures which we call motion tubes. A new motion model is then proposed, which allows for motion field continuities and discontinuities, by hybridizing block matching and a low-computational mesh-based representation. Finally, we propose a bi-predictional framework for motion tubes management.

@inproceedings{Urvoy2009,
address = {New York City, NY, USA},
author = {Urvoy, Matthieu and Cammas, Nathalie and Pateux, St\'{e}phane and D\'{e}forges, Olivier and Babel, Marie and Pressigout, Muriel},
booktitle = {IEEE International Conference on Multimedia and Expo (ICME)},
pages = {105--108},
title = {{Motion tubes for the representation of image sequences}},
url = {http://hal.archives-ouvertes.fr/hal-00373266/en/},
year = {2009}
}