Statistical consensus matching framework for image registration

Abstract

A common approach to image alignment in computer vision is to find the maximum consensus transformation for a set of features extracted from the images, typically using randomized methods such as RANSAC. While relatively robust when strong features are available, these methods do not cope well with ambiguous features, where the maximum likelihood match is not the best match between the images, a common situation in modalities such as medical ultrasound, thermal imaging, and cross-modality registration. They also provide no inherent way to apply external knowledge about plausible configurations to aid the registration. In this paper we present a novel statistical framework for maximum consensus image alignment that is robust in the presence of weak features (features that do not provide one-to-one matches) while also offering a natural mechanism for integrating external knowledge. Our method gathers information not only from good matches, but also from improbable and partially ambiguous ones. We demonstrate our framework in the context of medical ultrasound image registration. In our test cases, the state-of-the-art methods we compared against failed to produce satisfactory results on over 17% of the samples, whereas our method succeeded.
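For context, the randomized maximum-consensus baseline the abstract refers to can be sketched as a minimal RANSAC loop: hypothesize a transformation from a random correspondence and keep the hypothesis with the most inliers. This is a simplified illustration of the baseline (2D translation only, synthetic data), not the paper's statistical framework; the function name, tolerance, and data below are assumptions for illustration.

```python
import random

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """Max-consensus estimate of a 2D translation mapping src[i] -> dst[i].

    Illustrative sketch of the RANSAC-style baseline: repeatedly sample one
    correspondence, hypothesize the translation it implies, and count how
    many other correspondences agree within `tol` (the inliers).
    """
    rng = random.Random(seed)
    best_t, best_inliers = (0, 0), -1
    for _ in range(iters):
        i = rng.randrange(len(src))
        # One point pair fully determines a translation hypothesis.
        tx = dst[i][0] - src[i][0]
        ty = dst[i][1] - src[i][1]
        inliers = sum(
            1 for (sx, sy), (dx, dy) in zip(src, dst)
            if abs(sx + tx - dx) <= tol and abs(sy + ty - dy) <= tol
        )
        if inliers > best_inliers:
            best_inliers, best_t = inliers, (tx, ty)
    return best_t, best_inliers

# Synthetic correspondences: true translation (5, -3), last two are outliers.
src = [(0, 0), (1, 2), (3, 1), (4, 4), (2, 5)]
dst = [(5, -3), (6, -1), (8, -2), (0, 0), (9, 9)]
t, n = ransac_translation(src, dst)  # -> (5, -3) with 3 inliers
```

With strong, unambiguous features this inlier count sharply favors the correct transformation; the weakness the abstract targets is that with ambiguous (non-one-to-one) matches, the maximum-consensus hypothesis need not be the correct alignment.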

Publication
2016 23rd International Conference on Pattern Recognition (ICPR)