Integrated Analysis and Probabilistic Registration of Medical Images with Missing Correspondences

The automatic, robust and reliable registration of medical images is a central problem in medical image computing with high impact on image-guided diagnostics and therapy. Currently available registration methods reach their limits when strong anatomical or pathological discrepancies are present in the images and corresponding structures are missing in parts of the images. Another limitation of current registration methods is the lack of information they provide to the user about the local (un)certainty of the estimated transformation, which prevents an assessment of the reliability of the registration results.

The aim of this project is to enable the robust and reliable registration of images even if one-to-one correspondences are missing in parts of the images. To achieve this, a general probabilistic registration framework based on correspondence probabilities is developed that relies not only on image intensities but also on additional information extracted by image analysis methods, such as organ segmentations, landmarks and local image features, to align the images. The methods to be developed will enable the registration of areas with missing local correspondences as well as an objective assessment of the reliability of the local registration results.
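To illustrate the general idea behind correspondence probabilities, consider a similarity measure in which every voxel contributes to the matching cost in proportion to the probability that a corresponding structure actually exists in the other image. The following minimal Python sketch shows this principle under strongly simplified assumptions (a scalar weight per voxel and a plain sum-of-squared-differences data term); it is only an illustration and not the concrete formulation developed in the project.

```python
import numpy as np

def weighted_ssd(fixed, warped_moving, corr_prob):
    """Sum-of-squared-differences data term in which each voxel is weighted by
    its estimated correspondence probability; voxels that are unlikely to have
    a counterpart in the other image contribute little to the cost."""
    diff = fixed - warped_moving
    return np.sum(corr_prob * diff ** 2) / np.sum(corr_prob)

# Toy example: two 1D "images" where the structure at index 4 has no counterpart.
fixed  = np.array([0.0, 1.0, 1.0, 0.0, 5.0, 0.0])
moving = np.array([0.0, 1.0, 1.0, 0.0, 0.0, 0.0])
p_uniform = np.ones_like(fixed)                       # ignores missing correspondences
p_masked  = np.array([1, 1, 1, 1, 0.05, 1.0])         # low probability where no counterpart exists

print(weighted_ssd(fixed, moving, p_uniform))  # dominated by the missing structure
print(weighted_ssd(fixed, moving, p_masked))   # missing region is down-weighted
```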

The proposed methodical innovations significantly extend the medical application spectrum of image registration algorithms. For example, the proposed methods will facilitate and improve the quality of image-based follow-up studies and clinical monitoring, the comparison of pre- and post-operative images, as well as image-based statistical studies that reveal spatial distribution patterns of pathological tissues or neuronal activities.

Project Team:

M.Sc. Sandra Schultz
Dr. rer. nat. Jan Ehrhardt
Prof. Dr. rer. nat. habil. Heinz Handels

 


Similarity and uncertainty in medical image analysis

Image similarity is one of the central concepts in medical image computing. Establishing correspondences across medical scans of different patients, time points or modalities is key to numerous medical image analysis applications, such as atlas-based segmentation, motion estimation, longitudinal studies and multi-modal fusion. Defining similarity across images is challenging due to global and local changes in image contrast, noise, and the variety of physical principles used to acquire the different modalities. One aim of this project is to improve or develop novel similarity metrics that are invariant to modality and robust against contrast variations and noise while remaining highly discriminative for important anatomical or geometric features.

The modality independent neighbourhood descriptor (MIND) [1] is a multi-dimensional local image descriptor (see Fig. 1), which we have developed for multi-modal registration based on the concept of self-similarities. We have demonstrated its improvements over state-of-the-art methods such as mutual information for the alignment of thoracic CT and MRI scans of patients with empyema, a lung disease. It can also be used for registering scans of the same modality, e.g. 4D CT scans, where it has been shown to improve accuracy and robustness. Each MIND descriptor is calculated from patch distances within the local neighbourhood of the same scan. MIND representations are compared using a simple sum of squared or absolute differences of their entries. The self-similarity context (SSC) [2] is an improvement of MIND, which redefines the neighbourhood layout to increase the robustness of the matching and to better describe the contextual information. We have derived an efficient quantisation scheme that enables a very fast evaluation of pair-wise distances using the Hamming weight, and have shown that it is applicable to the registration of challenging image modalities such as MRI and ultrasound.
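The self-similarity principle behind MIND and SSC can be illustrated with a short, strongly simplified Python sketch: for each voxel, patch distances to a few neighbouring positions within the same image are converted into a small descriptor vector, and descriptors from different modalities are compared with a plain sum of squared differences. The 4-neighbourhood, the patch size and the variance estimate used here are illustrative choices and do not reproduce the exact layouts or parameters of MIND or SSC.

```python
import numpy as np

def patch_ssd(img, p, q, r=1):
    """Sum of squared differences between two (2r+1)x(2r+1) patches of a 2D
    image (simplified: no boundary handling, isotropic patches)."""
    a = img[p[0]-r:p[0]+r+1, p[1]-r:p[1]+r+1]
    b = img[q[0]-r:q[0]+r+1, q[1]-r:q[1]+r+1]
    return float(np.sum((a - b) ** 2))

def self_similarity_descriptor(img, x, offsets=((1, 0), (-1, 0), (0, 1), (0, -1)), r=1):
    """Simplified self-similarity descriptor at voxel x: patch distances to a
    small set of neighbours, converted into a normalised, exponentially damped
    vector (illustrative neighbourhood and variance estimate)."""
    d = np.array([patch_ssd(img, x, (x[0] + dy, x[1] + dx), r) for dy, dx in offsets])
    var = max(np.mean(d), 1e-6)          # crude local variance estimate
    desc = np.exp(-d / var)
    return desc / desc.max()

def descriptor_distance(d1, d2):
    """Descriptors are compared with a plain sum of squared differences."""
    return float(np.sum((d1 - d2) ** 2))

# Toy usage: the descriptor depends only on local self-similarity, so a global
# contrast inversion (a stand-in for a modality change) leaves it unchanged.
img = np.random.rand(32, 32)
img_inverted = 1.0 - img
d1 = self_similarity_descriptor(img, (16, 16))
d2 = self_similarity_descriptor(img_inverted, (16, 16))
print(descriptor_distance(d1, d2))       # ~0 despite the inverted contrast
```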

Uncertainty is inherently an important issue in all image analysis tasks. However, few common algorithms estimate a measure of uncertainty for the calculated results. Automatic error detection would be very valuable in the practical use of medical image analysis tools, as it could give the clinician more confidence in deciding whether or not to use a computer-generated analysis. Discrete optimisation techniques (based on Markov random field models) may offer useful solutions to determine uncertainty, because they can be used to infer probabilistic estimates over a flexible range of model parameters. For the application of image registration, we have developed a framework that optimises over a very large space of potential displacements (dense displacement sampling, deeds) [3,4]. It enables very fast and accurate motion estimation and has been evaluated on a large set of lung CT scans. An example outcome of a registration is shown in Fig. 2 (quantitative results can be found at http://empire10.isi.uu.nl/res_deedsmind.php). The uncertainty over all potential motion parameters can be calculated using well-known message passing algorithms and enables us to quantify the local registration accuracy. This has been used in [5] to improve the segmentation accuracy of MRI brain scans. Not only do the displacement vectors contain uncertainties, but so does the underlying image representation. In practice, a transformation model parameterised by uniform B-spline control points is often used. We have presented a more flexible image model in [6], consisting of multiple complementary layers of supervoxels. Estimating the best local image model and determining its trustworthiness forms another part of the current work in this project.
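How a probabilistic estimate over a densely sampled displacement space can be turned into a local uncertainty measure can be illustrated as follows. The sketch below simply converts the matching costs of all candidate displacements at one voxel into a Boltzmann-like distribution and summarises it by its entropy; it omits the MRF regularisation and the message passing on a minimum spanning tree that deeds actually uses, so it should be read as a conceptual illustration only.

```python
import numpy as np

def displacement_uncertainty(costs, alpha=1.0):
    """Convert matching costs over a dense set of candidate displacements at
    one voxel into a probability distribution and summarise it by its entropy.
    A peaked distribution (low entropy) indicates a locally reliable match,
    a flat one (high entropy) indicates uncertainty."""
    costs = np.asarray(costs, dtype=float)
    p = np.exp(-alpha * (costs - costs.min()))   # unnormalised Boltzmann weights
    p /= p.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    best = int(np.argmin(costs))                 # displacement with minimal cost
    return best, p, entropy

# e.g. five candidate displacements along one axis; two nearly equal minima
# lead to a noticeably higher entropy, i.e. a less reliable local match.
best, p, h = displacement_uncertainty([4.0, 0.5, 0.4, 3.0, 6.0])
print(best, h)
```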

Software developed by M.P. Heinrich for the publications below can be freely downloaded from: www.mpheinrich.de/software.html
This includes a very efficient and accurate 3D single- and multi-modal registration package (deeds) as well as reference code for computing the MIND and SSC descriptors.

Fig. 1: Concept for the use of MIND for multimodal registration. MIND descriptors are calculated for all voxels in CT and MRI. Three exemplary locations with different image features are shown: homogeneous intensities (liver), corner points at a vertebra, and image gradients at the boundary between fat and non-fat tissue. The descriptors are independent of the respective modality and can easily be compared using the L2 norm.

Fig. 2: Animated registration example of lung CT scans. From left to right, the axial, coronal and sagittal planes are shown. An overlay of the inhale (green) and exhale (magenta) phases before, during and after registration with the deeds approach is visible when playing the video. The motion estimation is able to preserve the sliding of the lungs against the thoracic cage, which is typical of respiratory motion.

Selected Publications:

  1. Mattias P. Heinrich, Mark Jenkinson, Manav Bhushan, Tahreema Matin, Fergus V. Gleeson, Sir Michael Brady, Julia A. Schnabel.
    MIND: Modality Independent Neighbourhood Descriptor for Multi-modal Deformable Registration.
    Medical Image Analysis, Vol. 16(7), 1423-1435, 2012.
  2. Mattias P. Heinrich, Mark Jenkinson, Bartlomiej W. Papiez, Sir Michael Brady, Julia A. Schnabel.
    Towards Realtime Multimodal Fusion for Image-Guided Interventions Using Self-similarities.
    In: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2013, LNCS 8149, 187-194, Springer, 2013.
  3. Mattias P. Heinrich, Mark Jenkinson, Sir Michael Brady, Julia A. Schnabel.
    MRF-based Deformable Registration and Ventilation Estimation of Lung CT.
    IEEE Transactions on Medical Imaging, Vol. 32(7), 1239-1248, 2013.
  4. Mattias P. Heinrich, Mark Jenkinson, Sir Michael Brady, Julia A. Schnabel.
    Globally Optimal Registration on a Minimum Spanning Tree using Dense Displacement Sampling.
    In: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2012, LNCS 7512, 115-122, Springer, Berlin, 2012.
  5. Mattias P. Heinrich, Ivor J.A. Simpson, Mark Jenkinson, Sir Michael Brady, Julia A. Schnabel.
    Uncertainty Estimates for Improved Accuracy of Registration-Based Segmentation Propagation using Discrete Optimisation.
    In: MICCAI Workshop on Segmentation: Algorithms, Theory and Applications (SATA), Nagoya, 2013.
  6. Mattias P. Heinrich, Mark Jenkinson, Bartlomiej W. Papiez, Fergus V. Gleeson, Sir Michael Brady, Julia A. Schnabel.
    Edge- and Detail-Preserving Sparse Image Representations for Deformable Registration of Chest MRI and CT Volumes.
    In: Information Processing in Medical Imaging (IPMI) 2013, LNCS 7917, 463-474, Springer, 2013.

Project Team:

Jun.-Prof. Dr. Mattias P. Heinrich
Dr. Jan Ehrhardt
Prof. Dr. Heinz Handels

Cooperation Partners:

Prof. Dr. Julia A. Schnabel
Institute of Biomedical Engineering
University of Oxford


Mammo3D – Automatic Detection and Visualization of Corresponding Structures and Lesions in 2D/3D Image Data of the Female Breast for Multimodal Breast Cancer Diagnosis

Two-dimensional digital mammography is the major imaging modality for breast cancer diagnosis and early detection. To improve diagnostics, additional modalities such as magnetic resonance imaging and digital breast tomosynthesis can be used, or additional mammograms acquired with a different projection angle (CC or MLO) are generated. A combined analysis of the different medical image data of the same patient would be preferable to facilitate diagnostics and treatment.

The main objective of this project is the automatic detection and visualization of corresponding structures in the acquired image data and the processing of these structures to establish a basis for multimodal breast cancer diagnosis.
The focus of our work lies in the development and evaluation of methods for multidimensional and multimodal image registration and correspondence analysis. The development of deformation models of the female breast poses a further challenge, as the breast is exposed to different compressions and deformations during the acquisition of the different image data.

The project is realized in collaboration with the company IMAGE Information Systems Europe Ltd. and is funded by the German Federal Ministry of Economics and Technology (BMWi).

Fig. 1: The aim of the project is the detection of corresponding structures in mammograms, tomosynthesis and MRI data.

Selected Publications:

  1. Krüger J, Ehrhardt J, Bischof A, Handels H
    Simulation of Mammographic Breast Compression in 3D MR images using ICP-based B-Spline Deformation for Multimodality Breast Cancer Diagnosis.
    International Journal of Computer Assisted Radiology and Surgery, in press.
  2. Krüger J, Ehrhardt J, Bischof A, Handels H
    Breast Compression Simulation using ICP-based B-Spline Deformation for Correspondence Analysis in Mammography and MRI Datasets.
    Image Processing, SPIE Medical Imaging 2013, 8669-48, 1D1-1D8, 2013.
  3. Krüger J, Ehrhardt J, Bischof A, Handels H
    Evaluation of a B-Spline-based Breast Compression Simulation for Correspondence Analysis between MRI and Mammographic Image Data.
    In: Workshop on Breast Image Analysis, in conjunction with the 16th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2013), 17-24, 2013.
  4. Ehrhardt J., Krüger J., Bischof A., Handels H.
    Automatic Correspondence Detection in Mammogram and Breast Tomosynthesis Images.
    Image Processing, SPIE Medical Imaging 2012, Vol. 8314, 831421-1-831421-8, 2012.
  5. Krüger J., Ehrhardt J., Bischof A., Barkhausen J., Handels H.
    Automatische Bestimmung von 2D/3D-Korrespondenzen in Mammographien und Tomosynthese-Bilddaten.
    Bildverarbeitung für die Medizin 2012, Informatik aktuell, 99-104, 2012

Project Team:

M.Sc. Matthias Wilms
M.Sc. Julia Krüger
Dr. Jan Ehrhardt
Prof. Dr. Heinz Handels

Cooperation Partners:

Arpad Bischof and Prof. Dr. med. Jörg Barkhausen
Clinic for Radiology and Nuclear Medicine
University Medical Center Schleswig-Holstein, Campus Lübeck

Andreas Berger
IMAGE Information Systems Europe Ltd.

 


Integrated 4D Segmentation and Registration of Spatio-temporal Image Data

The introduction of spatio-temporal tomographic image data has enabled the analysis of dynamic physiological processes like the heart beat or respiratory lung motion. However, this potential is not yet fully exploited. A comprehensive diagnostic and therapeutic use of 4D data requires, on the one hand, a delineation of clinically relevant structures (segmentation) and, on the other hand, an explicit description of the motion characteristics (registration). Classical approaches treat both problems independently, although a mutual dependency between them exists.

The aim of this project is the development of simultaneous segmentation and registration approaches that allow for a modeling of this mutual dependency. A-priori knowledge about physiology and motion dynamics is introduced by formulating appropriate side conditions. Methods for automatic parameter detection and refinement allow for an adaptation of the proposed segmentation and registration algorithms to specific medical applications. Moreover, the introduction of interaction tools enables the user-driven correction and improvement of the results.

The basic idea of the approach is illustrated in the figure below. Considering two 3D images of a spatio-temporal data set (called reference and target image), a given segmentation of the reference image is assumed. The aim is, on the one hand, the calculation of a segmentation of the target image and, on the other hand, the estimation of the motion between target and reference image. An additional term ensures the consistency of segmentation and registration by comparing the target segmentation with the transformed reference segmentation.
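Schematically, such an integrated approach can be written as a joint energy, where I_R and I_T denote the reference and target images, S_R the given reference segmentation, S_T the sought target segmentation and φ the sought transformation. The concrete data, regularisation and consistency terms used in the project may differ; the formula below is only a schematic illustration, not the project's exact model.

```latex
% Schematic joint energy for integrated segmentation and registration.
% I_R, I_T: reference and target image; S_R: given reference segmentation;
% S_T: sought target segmentation; \varphi: sought transformation.
% D, G, R, C denote generic data, segmentation, regularisation and
% consistency terms; \alpha, \beta are weighting parameters (illustrative only).
E(\varphi, S_T) =
    D\bigl(I_T,\, I_R \circ \varphi\bigr)
  + G\bigl(I_T,\, S_T\bigr)
  + \alpha\, R(\varphi)
  + \beta\, C\bigl(S_T,\, S_R \circ \varphi\bigr)
```

Minimising such an energy simultaneously with respect to φ and S_T couples the two problems: a better motion estimate improves the transferred prior segmentation, and a better target segmentation in turn constrains the admissible motion.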

The approaches developed in this project are evaluated for the simultaneous segmentation and motion estimation of the lungs and the liver on the basis of clinical CT data.

The project is funded by Deutsche Forschungsgemeinschaft (DFG: EH 224/3-1).

Selected Publications:

  1. A. Schmidt-Richberg, H. Handels, J. Ehrhardt:
    Integrated Segmentation and Non-linear Registration for Organ Segmentation and Motion Field Estimation in 4D CT Data.
    Methods Inf Med, 48(4): 334–339, Jan 2009.
  2. A. Schmidt-Richberg, J. Ehrhardt, R. Werner, H. Handels
    Direction-Dependent Regularization for Improved Estimation of Liver and Lung Motion in 4D Image Data.
    In: SPIE Medical Imaging 2010, San Diego, USA, Vol. 7623, 76232Y, 2010.
  3. A. Schmidt-Richberg, J. Ehrhardt, R. Werner, H. Handels
    Slipping Objects in Image Registration: Improved Motion Field Estimation with Direction-dependent Regularization.
    In: G.-Z. Yang et al. (eds.): Medical Image Computing and Computer-Assisted Intervention - MICCAI 2009, London, LNCS Vol. 5761, 755–762, 2009.
  4. J. Ehrhardt, A. Schmidt-Richberg, H. Handels
    Simultaneous Segmentation and Motion Estimation in 4D-CT Data Using a Variational Approach.
    In: J.M. Reinhardt et al. (eds.): Image Processing, SPIE Medical Imaging 2008, San Diego, Vol. 6914, 37-1–37-10, 2008.
  5. J. Ehrhardt, A. Schmidt-Richberg, H. Handels
    A Variational Approach for Combined Segmentation and Estimation of Respiratory Motion in Temporal Image Sequences.
    IEEE International Conference on Computer Vision 2007, ICCV 2007, Rio de Janeiro, Brazil, CD-ROM-Proceedings, IEEE Catalog Number CFP07198-CDR (ISBN 978-1-4244-1631-8), 2007. 

Project Team:

Dipl.-Inf. Alexander Schmidt-Richberg
Dr. Jan Ehrhardt
Prof. Dr. Heinz Handels

 


4D Medical Image Computing for Model-based Analysis of Respiratory Tumor and Organ Motion

Breathing motion is a significant source of error in radiation therapy planning of the thorax and upper abdomen. The development of 4D (= 3D+t) imaging methods has opened up the possibility to capture the spatio-temporal behaviour of tumors and inner organs. This project aims at developing methods for the modelling, analysis, and visualization of the respiratory motion of tumors and inner organs. The project is based on artefact-reduced 4D CT patient data with high spatial and temporal resolution. The methods will complement the possibilities offered by 4D imaging techniques to improve the radiation therapy of thoracic and abdominal tumors.

The main focus of the project is to develop and evaluate improved non-linear registration methods in order to enable a precise estimation of 3D motion fields in the 4D CT image data. These dense vector fields are used for the subsequent analysis and modelling of the respiratory motion of structures of interest in radiation therapy, such as tumors and organs at risk (Fig. 1 and 2). Based on the patient collective, we study the inter-patient variability of tumor and lung motion, whereby different lung regions are considered in order to analyze regional lung motion. The results are used to compare internal target volumes (ITV, i.e. the volume covered by the moving target) across patients and, e.g., to examine whether it is possible to identify different but typical patterns of regional lung motion.
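As a simple illustration of how an ITV can be derived from estimated motion fields, the following Python sketch propagates a binary tumor mask through all breathing phases with dense displacement fields and takes the union of the warped masks. The helper names, the backward-warping convention (fields in voxel units, defined on the phase grid) and the voxel-based volume computation are illustrative assumptions and do not correspond to the project's actual implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_mask(mask, disp):
    """Warp a boolean 3D mask with a dense displacement field (in voxels).
    disp has shape (3, Z, Y, X) and points from the output (phase) grid into
    the input mask (backward warping); nearest-neighbour interpolation keeps
    the result binary. Boundary handling is simplified."""
    grid = np.indices(mask.shape).astype(float)
    coords = grid + disp
    return map_coordinates(mask.astype(float), coords, order=0, mode='nearest') > 0.5

def internal_target_volume(ref_mask, disp_fields, voxel_volume_mm3):
    """ITV as the union of the reference tumor mask warped into every breathing
    phase; returns the ITV mask and its volume in cm^3."""
    itv = ref_mask.copy()
    for disp in disp_fields:              # one displacement field per phase
        itv |= warp_mask(ref_mask, disp)
    return itv, itv.sum() * voxel_volume_mm3 / 1000.0
```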

The project is funded by Deutsche Forschungsgemeinschaft (DFG) (HA 2355/9-1).


Fig. 1: Visualization of the 3D motion field between the end-expiration and end-inspiration phases. The motion field estimation is based on optical-flow-based registration. The absolute values of the displacement field are visualized color-coded. Red arrows indicate displacements of more than 20 mm. Figure taken from Handels et al., IJMI 76S, 433-9, 2007.


Fig. 2: Color-coded visualization of estimated appearance probabilities of lung tumors of two patients, displayed in a 2D slice.

Selected Publications:

  1. Alexander Schmidt-Richberg, Jan Ehrhardt, René Werner, Heinz Handels
    Slipping Objects in Image Registration: Improved Motion Field Estimation with Direction-dependent Regularization
    In: G.-Z. Yang, D. Hawkes, D. Rueckert, A. Noble, C. Taylor (eds.), Medical Image Computing and Computer-Assisted Intervention - MICCAI 2009, Part I, LNCS 5761, Springer Verlag, Berlin, 755-762, 2009.
  2. H. Handels, R. Werner, T. Frenzel, D. Säring, W. Lu, D. Low, and J. Ehrhardt:
    4D Medical Image Computing and Visualization of Lung Tumor Mobility in Spatio-temporal CT Image Data, International Journal of Medical Informatics, 76S, S433-S439, 2007.
  3. J. Ehrhardt, R. Werner, T. Frenzel, W. Lu, D. Low, H. Handels:
    Analysis of Free Breathing Motion Using Artifact Reduced 4D CT Image Data, In: P.W. Pluim, J.M. Reinhardt (eds.), SPIE Medical Imaging 2007: Image Processing, San Diego, Proc. SPIE, Vol. 6512, 1N1-1N11, 2007.
  4. R. Werner, J. Ehrhardt, T. Frenzel, W. Lu, D. Low, H. Handels:
    Analysis of Tumor-influenced Respiratory Dynamics using Motion Artifact Reduced Thoracic 4D CT Images. In: T. Buzug et al. (eds.), Advances in Medical Engineering, Springer Verlag, Berlin, 181-186, 2007.

Project Team:

Dipl.-Inf. Dipl.-Phys. René Werner
Dr. Jan Ehrhardt
Dipl.-Inf. Alexander Schmidt-Richberg
Prof. Dr. Heinz Handels

Cooperation Partners:

Dr. rer. nat. Florian Cremers
Department of Radiotherapy and Radio-Oncology
University Medical Center Hamburg-Eppendorf (UKE)

Dr. med. Dr. rer. nat. Thorsten Frenzel
Ambulanzzentrum des UKE GmbH
Bereich für Strahlentherapie

Prof. Daniel Low and Dr. Wei Lu
Washington University in St. Louis, School of Medicine
St. Louis, MO, USA

 
