

Publication


Featured research published by Peter Lorenzen.


Medical Image Analysis | 2006

Multi-Modal Image Set Registration and Atlas Formation

Peter Lorenzen; Marcel Prastawa; Brad Davis; Guido Gerig; Elizabeth Bullitt; Sarang C. Joshi

In this paper, we present a Bayesian framework for both generating inter-subject large deformation transformations between two multi-modal image sets of the brain and for forming multi-class brain atlases. In this framework, the estimated transformations are generated using maximal information about the underlying neuroanatomy present in each of the different modalities. This modality independent registration framework is achieved by jointly estimating the posterior probabilities associated with the multi-modal image sets and the high-dimensional registration transformations mapping these posteriors. To maximally use the information present in all the modalities for registration, Kullback-Leibler divergence between the estimated posteriors is minimized. Registration results for image sets composed of multi-modal MR images of healthy adult human brains are presented. Atlas formation results are presented for a population of five infant human brains.
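The key quantity driving this registration is the Kullback-Leibler divergence between the estimated class posteriors. A minimal sketch of the per-voxel divergence term (pure Python; function names and the tissue classes are illustrative — in the paper this quantity is integrated over the image domain and minimized over the transformation):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete class-posterior vectors.

    eps guards against log(0); the divergence is zero exactly
    when the two posteriors agree.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical per-voxel posteriors over (white matter, grey matter, CSF)
p = [0.7, 0.2, 0.1]
q = [0.6, 0.3, 0.1]
print(kl_divergence(p, q))  # positive; 0 when p == q
```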


medical image computing and computer assisted intervention | 2005

Unbiased atlas formation via large deformations metric mapping

Peter Lorenzen; Brad Davis; Sarang C. Joshi

The construction of population atlases is a key issue in medical image analysis, and particularly in brain mapping. Large sets of images are mapped into a common coordinate system to study intra-population variability and inter-population differences, to provide voxel-wise mapping of functional sites, and to facilitate tissue and object segmentation via registration of anatomical labels. We formulate the unbiased atlas construction problem as a Fréchet mean estimation in the space of diffeomorphisms via large deformations metric mapping. A novel method for computing constant speed velocity fields and an analysis of atlas stability and robustness using entropy are presented. We address the question: how many images are required to build a stable brain atlas?
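The Fréchet mean minimizes the sum of squared distances to the population. A toy Euclidean sketch of that minimization (gradient descent on scalars; the paper replaces Euclidean distance with the geodesic metric on diffeomorphisms, where no closed form exists):

```python
def frechet_mean(points, steps=200, lr=0.1):
    """Minimize sum_i d(mu, p_i)^2 by gradient descent.

    In Euclidean space this converges to the arithmetic mean;
    the atlas construction problem uses the same variational idea
    with the large-deformation metric instead.
    """
    mu = points[0]
    for _ in range(steps):
        grad = sum(mu - p for p in points) / len(points)
        mu -= lr * grad
    return mu

print(frechet_mean([1.0, 2.0, 6.0]))  # converges to the mean, 3.0
```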


international symposium on biomedical imaging | 2004

Large deformation minimum mean squared error template estimation for computational anatomy

Brad Davis; Peter Lorenzen; Sarang C. Joshi

This paper presents a method for large deformation exemplar template estimation. This method generates a representative anatomical template from an arbitrary number of topologically similar images using large deformation minimum mean squared error image registration. The template that we generate is the image that requires the least amount of deformation energy to be transformed into every input image. We show that this method is also useful for image registration. In particular, it provides a means for inverse consistent image registration. This method is computationally practical; computation time grows linearly with the number of input images. Template estimation results are presented for a set of five 3D MR human brain images.
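As a toy analogue of this scheme, suppose the only allowed deformations are circular shifts of 1-D signals: the template is then the pointwise mean after aligning each input to a reference, and the cost is one alignment per image, hence linear in the number of inputs (all names and signals below are illustrative, not the paper's data):

```python
def best_shift(ref, img, max_shift=3):
    """Circular shift of img that minimizes squared error against ref."""
    n = len(img)
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: sum((ref[i] - img[(i - s) % n]) ** 2
                                 for i in range(n)))

def template(images):
    """Pointwise mean of the inputs after aligning each to the first."""
    ref, n = images[0], len(images[0])
    aligned = []
    for img in images:
        s = best_shift(ref, img)           # one registration per image
        aligned.append([img[(i - s) % n] for i in range(n)])
    return [sum(col) / len(col) for col in zip(*aligned)]

base = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
shifted = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0]  # base shifted right by one
print(template([base, shifted]))  # recovers base
```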


Medical Image Analysis | 2003

Structural and radiometric asymmetry in brain images

Sarang C. Joshi; Peter Lorenzen; Guido Gerig; Elizabeth Bullitt

This paper presents a general framework for analyzing structural and radiometric asymmetry in brain images. In a healthy brain, the left and right hemispheres are largely symmetric across the mid-sagittal plane. Brain tumors may belong to one or both of the following categories: mass-effect, in which the diseased tissue displaces healthy tissue; and infiltrating, in which healthy tissue has become diseased. Mass-effect brain tumors cause structural asymmetry by displacing healthy tissue, and may cause radiometric asymmetry in adjacent normal structures due to edema. Infiltrating tumors have a different radiometric response from healthy tissue. Thus, structural and radiometric asymmetries across the mid-sagittal plane in brain images provide important cues that tumors may be present. We have developed a framework that registers images with their reflections across the mid-sagittal plane. The registration process accounts for tissue displacement through large deformation image warping. Radiometric differences are taken into account through an additive intensity field. We present an efficient multi-scale algorithm for the joint estimation of structural and radiometric asymmetry. Results for nine MR images of patients with tumors and four normal control subjects are presented.
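The basic asymmetry cue can be illustrated before any registration: reflect the image across the midline and look at the intensity differences. A minimal 2-D sketch with invented values (the paper additionally warps the reflection via large deformation registration and models an additive intensity field):

```python
def reflect_midline(image):
    """Reflect a 2-D image (list of rows) across its vertical midline."""
    return [row[::-1] for row in image]

def asymmetry_map(image):
    """Absolute intensity difference between an image and its reflection."""
    return [[abs(a - b) for a, b in zip(row, mrow)]
            for row, mrow in zip(image, reflect_midline(image))]

img = [[1, 2, 2, 1],   # symmetric row -> zero asymmetry
       [9, 2, 2, 1]]   # 9 mimics a bright mass-effect lesion on one side
print(asymmetry_map(img))  # [[0, 0, 0, 0], [8, 0, 0, 8]]
```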


medical image computing and computer assisted intervention | 2004

Multi-class Posterior Atlas Formation via Unbiased Kullback-Leibler Template Estimation

Peter Lorenzen; Brad Davis; Guido Gerig; Elizabeth Bullitt; Sarang C. Joshi

Many medical image analysis problems that involve multi-modal images lend themselves to solutions that involve class posterior density function images. This paper presents a method for large deformation exemplar class posterior density template estimation. This method generates a representative anatomical template from an arbitrary number of topologically similar multi-modal image sets using large deformation minimum Kullback-Leibler divergence registration. The template that we generate is the class posterior that requires the least amount of deformation energy to be transformed into every class posterior density (each characterizing a multi-modal image set). This method is computationally practical; computation time grows linearly with the number of image sets. Template estimation results are presented for a set of five 3D class posterior images representing structures of the human brain.


international symposium on 3d data processing visualization and transmission | 2006

Computational Anatomy to Assess Longitudinal Trajectory of Brain Growth

Guido Gerig; Bradley C. Davis; Peter Lorenzen; Shun Xu; Matthieu Jomier; Joseph Piven; Sarang C. Joshi

This paper addresses the challenging problem of computing statistics on images by describing average and variability. We describe computational anatomy tools for building 3-D and spatio-temporal 4-D atlases of volumetric image data. The method is based on the previously published concept of unbiased atlas building, calculating the nonlinear average image of a population of images by simultaneous nonlinear deformable registration. Unlike linear averaging, the resulting average image is sharp and encodes the average structure and geometry of the whole population. Variability is encoded in the set of deformation maps. As a new extension, longitudinal change is assessed by quantifying local deformation between atlases taken at consecutive time points. Morphological differences between groups are analyzed with the same concept by comparing group-specific atlases. Preliminary tests demonstrate that atlas building shows excellent robustness and very good convergence: atlases start to stabilize after only 5 images and do not change significantly when more than 10 volumetric images from the same population are included.


international symposium on biomedical imaging | 2004

Model based symmetric information theoretic large deformation multi-modal image registration

Peter Lorenzen; Brad Davis; Sarang C. Joshi

This paper presents a Bayesian framework for generating inverse-consistent inter-subject large deformation transformations between two multi-modal image sets of the brain. In this framework, the estimated transformations are generated using the maximal information about the underlying neuroanatomy present in each of the different modalities. This modality-independent registration framework is achieved using the Bayesian paradigm and jointly estimating the posterior densities associated with the multi-modal image sets and the high-dimensional registration transformation mapping the two subjects. To maximally use the information present in all the modalities, the Kullback-Leibler divergence between the estimated posteriors is minimized to estimate the registration. Results for two synthetic image sets of a human neuroanatomy are presented.


medical image computing and computer assisted intervention | 2013

Automated Embryo Stage Classification in Time-Lapse Microscopy Video of Early Human Embryo Development

Yu Wang; Farshid Moussavi; Peter Lorenzen

The accurate and automated measurement of the durations of certain human embryo stages is important for assessing embryo viability and predicting clinical outcomes in in vitro fertilization (IVF). In this work, we present a multi-level embryo stage classification method to identify the number of cells at every time point of a time-lapse microscopy video of early human embryo development. The proposed method employs a rich set of hand-crafted and automatically learned embryo features for classification and avoids explicit segmentation or tracking of individual embryo cells. It was quantitatively evaluated using a total of 389 human embryo videos, resulting in an 87.92% overall embryo stage classification accuracy.
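One way to read "classification without explicit segmentation" is that each frame is mapped to a feature vector and classified directly into a cell-count stage. A toy nearest-centroid sketch under that assumption (the feature names and centroid values are invented for illustration, not the paper's learned features or classifier):

```python
def classify_stage(features, centroids):
    """Assign a frame's feature vector to the nearest stage centroid."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda stage: sq_dist(features, centroids[stage]))

# Hypothetical (edge density, mean intensity) centroids per cell-count stage
centroids = {1: (0.1, 0.8), 2: (0.3, 0.7), 4: (0.6, 0.5)}
print(classify_stage((0.28, 0.72), centroids))  # -> 2
```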


workshop on biomedical image registration | 2003

High-Dimensional Multi-modal Image Registration

Peter Lorenzen; Sarang C. Joshi

This paper presents a Bayesian framework for generating inter-subject high-dimensional transformations between two multi-modal image sets of the brain. In this framework, the estimated transformations are generated by using the maximal information about the underlying neuroanatomy present in each of the different modalities. This modality independent registration framework is achieved by using the Bayesian paradigm and jointly estimating the posterior densities associated with the multi-modal image sets and the high dimensional registration transformation mapping the two subjects. The methods presented do not assume that the same modalities were used to image the two subjects. To maximally use the information present in all the modalities, relative entropy (or Kullback Leibler divergence) between the estimated posteriors is minimized to estimate the registration. The high-dimensional registration is constrained to be diffeomorphic by using the large deformation fluid formulation.


IEEE Transactions on Medical Imaging | 2014

A Unified Graphical Models Framework for Automated Mitosis Detection in Human Embryos

Farshid Moussavi; Yu Wang; Peter Lorenzen; Jonathan Oakley; Daniel B. Russakoff; Stephen Gould

Time-lapse microscopy has emerged as an important modality for studying human embryo development, as mitosis events can provide insight into embryo health and fate. Mitosis detection can proceed through tracking of embryonic cells (tracking-based) or from low-level image features and classifiers (tracking-free). Tracking-based approaches are challenged by a high-dimensional search space, weak features, outliers, missing data, multiple deformable targets, and a weak motion model. Tracking-free approaches are data driven and complement tracking-based approaches. We pose mitosis detection as augmented simultaneous segmentation and classification in a conditional random field (CRF) framework that combines both approaches. It uses a rich set of discriminative features and their spatiotemporal context. It performs a dual-pass approximate inference that addresses the high dimensionality of tracking and combines results from both components. For 312 clinical sequences we measured division events to within 30 min and observed improvements of 25.6% and 32.9% over purely tracking-based and tracking-free approaches, respectively, and close to an order of magnitude over a traditional particle filter. While our work was motivated by human embryo development, it can be extended to other detection problems in image sequences of evolving cell populations.

Collaboration


Dive into Peter Lorenzen's collaboration.

Top Co-Authors
Stephen Gould

Australian National University

Elizabeth Bullitt

University of North Carolina at Chapel Hill

Bradley C. Davis

University of North Carolina at Chapel Hill


Joseph Piven

University of North Carolina at Chapel Hill
