Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Anthony Santella is active.

Publication


Featured research published by Anthony Santella.


International Conference on Computer Graphics and Interactive Techniques | 2002

Stylization and abstraction of photographs

Douglas DeCarlo; Anthony Santella

Good information design depends on clarifying the meaningful structure in an image. We describe a computational approach to stylizing and abstracting photographs that explicitly responds to this design goal. Our system transforms images into a line-drawing style using bold edges and large regions of constant color. To do this, it represents images as a hierarchical structure of parts and boundaries computed using state-of-the-art computer vision. Our system identifies the meaningful elements of this structure using a model of human perception and a record of a user's eye movements in looking at the photo; the system renders a new image using transformations that preserve and highlight these visual elements. Our method thus represents a new alternative for non-photorealistic rendering in its visual style, its approach to visual form, and its techniques for interaction.
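A rough sketch of how recorded fixations might drive this kind of abstraction, assuming fixations are available as (x, y, duration) tuples: build a per-pixel importance map from the gaze record, then preserve more detail (here, finer color quantization per region) where importance is high. This is an illustrative simplification, not the paper's hierarchical part-and-boundary model or perceptual model; all names and constants are hypothetical.

```python
import numpy as np

def importance_map(shape, fixations, sigma=40.0):
    """Per-pixel importance from eye fixations: each (x, y, duration) fixation
    contributes a Gaussian bump weighted by how long the viewer dwelt there."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    imp = np.zeros(shape, dtype=float)
    for x, y, dur in fixations:
        imp += dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return imp / (imp.max() + 1e-9)

def abstract_region_colors(region_colors, region_importance, fine=32, coarse=6):
    """Quantize region colors: many levels where the viewer looked (detail kept),
    few levels elsewhere (large flat regions, i.e. stronger abstraction)."""
    out = []
    for color, imp in zip(region_colors, region_importance):
        levels = coarse + imp * (fine - coarse)
        step = 256.0 / levels
        out.append(tuple(np.round(np.array(color) / step) * step))
    return out
```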


Nature Methods | 2010

Fast, high-contrast imaging of animal development with scanned light sheet–based structured-illumination microscopy

Philipp J. Keller; Annette D. Schmidt; Anthony Santella; Khaled Khairy; Zhirong Bao; Joachim Wittbrodt; Ernst H. K. Stelzer

Recording light-microscopy images of large, nontransparent specimens, such as developing multicellular organisms, is complicated by decreased contrast resulting from light scattering. Early zebrafish development can be captured by standard light-sheet microscopy, but new imaging strategies are required to obtain high-quality data of late development or of less transparent organisms. We combined digital scanned laser light-sheet fluorescence microscopy with incoherent structured-illumination microscopy (DSLM-SI) and created structured-illumination patterns with continuously adjustable frequencies. Our method discriminates the specimen-related scattered background from signal fluorescence, thereby removing out-of-focus light and optimizing the contrast of in-focus structures. DSLM-SI provides rapid control of the illumination pattern, exceptional imaging quality and high imaging speeds. We performed long-term imaging of zebrafish development for 58 h and fast multiple-view imaging of early Drosophila melanogaster development. We reconstructed cell positions over time from the Drosophila DSLM-SI data and created a fly digital embryo.
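For context, the background-rejection step builds on structured-illumination optical sectioning. The classic three-phase sectioning computation (in the style of Neil et al.) is sketched below: in-focus structures are modulated by the illumination pattern, so combining phase-shifted acquisitions cancels the unmodulated scattered and out-of-focus light. This is a generic illustration, not necessarily the exact reconstruction used in DSLM-SI.

```python
import numpy as np

def si_sectioned_image(i0, i1, i2):
    """Structured-illumination optical sectioning from three acquisitions taken
    with the illumination pattern phase-shifted by 0, 2*pi/3 and 4*pi/3.
    Modulated (in-focus) signal survives; unmodulated background cancels."""
    i0, i1, i2 = (np.asarray(x, dtype=float) for x in (i0, i1, i2))
    return np.sqrt((i0 - i1) ** 2 + (i1 - i2) ** 2 + (i2 - i0) ** 2)
```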


Human Factors in Computing Systems | 2006

Gaze-based interaction for semi-automatic photo cropping

Anthony Santella; Maneesh Agrawala; Douglas DeCarlo; David Salesin; Michael F. Cohen

We present an interactive method for cropping photographs given minimal information about important content location, provided by eye tracking. Cropping is formulated in a general optimization framework that facilitates adding new composition rules, and adapting the system to particular applications. Our system uses fixation data to identify important image content and compute the best crop for any given aspect ratio or size, enabling applications such as automatic snapshot recomposition, adaptive documents, and thumbnailing. We validate our approach with studies in which users compare our crops to ones produced by hand and by a completely automatic approach. Experiments show that viewers prefer our gaze-based crops to uncropped images and fully automatic crops.
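As a rough illustration of the optimization framework, the sketch below scores candidate crop windows of a fixed aspect ratio by how much fixation-derived importance they capture, with a small area penalty so content-free margins are trimmed. It is a minimal stand-in: the paper's composition rules and search strategy are not reproduced, and all names, weights and step sizes are illustrative.

```python
import numpy as np

def best_crop(importance, aspect, scales=(0.5, 0.6, 0.7, 0.8, 0.9), step=16):
    """Brute-force search over crop windows with width/height ratio `aspect`.
    `importance` is a 2-D map, e.g. accumulated from fixation data."""
    h, w = importance.shape
    total = importance.sum() + 1e-9
    ii = np.pad(importance, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # integral image
    best, best_score = None, -np.inf
    for s in scales:
        ch = int(h * s)
        cw = int(ch * aspect)
        if cw > w:
            continue
        for y in range(0, h - ch + 1, step):
            for x in range(0, w - cw + 1, step):
                inside = ii[y + ch, x + cw] - ii[y, x + cw] - ii[y + ch, x] + ii[y, x]
                score = inside / total - 0.3 * (ch * cw) / (h * w)  # coverage minus area penalty
                if score > best_score:
                    best, best_score = (x, y, cw, ch), score
    return best  # (left, top, width, height) of the chosen crop
```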


Eye Tracking Research & Applications | 2004

Robust clustering of eye movement recordings for quantification of visual interest

Anthony Santella; Douglas DeCarlo

Characterizing the location and extent of a viewer's interest, in terms of eye movement recordings, informs a range of investigations in image and scene viewing. We present an automatic data-driven method for accomplishing this, which clusters visual point-of-regard (POR) measurements into gazes and regions-of-interest using the mean shift procedure. Clusters produced using this method form a structured representation of viewer interest, and at the same time are replicable and not heavily influenced by noise or outliers. Thus, they are useful in answering fine-grained questions about where and how a viewer examined an image.
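The clustering step can be approximated with an off-the-shelf mean shift implementation. The sketch below uses scikit-learn's MeanShift as a stand-in (an assumption about tooling; the paper describes its own mean shift procedure, which this does not reproduce exactly), with an illustrative bandwidth.

```python
import numpy as np
from sklearn.cluster import MeanShift

def cluster_por(por_xy, bandwidth=50.0):
    """Cluster point-of-regard samples (pixel coordinates) into gazes /
    regions of interest. Mean shift needs no preset cluster count and is
    relatively insensitive to outliers such as samples recorded mid-saccade."""
    por_xy = np.asarray(por_xy, dtype=float)
    ms = MeanShift(bandwidth=bandwidth)  # bandwidth ~ spatial extent of a gaze, in pixels
    labels = ms.fit_predict(por_xy)
    return labels, ms.cluster_centers_
```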


Proceedings of the National Academy of Sciences of the United States of America | 2011

Inverted selective plane illumination microscopy (iSPIM) enables coupled cell identity lineaging and neurodevelopmental imaging in Caenorhabditis elegans

Yicong Wu; Alireza Ghitani; Ryan Christensen; Anthony Santella; Zhuo Du; Gary Rondeau; Zhirong Bao; Daniel A. Colón-Ramos; Hari Shroff

The Caenorhabditis elegans embryo is a powerful model for studying neural development, but conventional imaging methods are either too slow or phototoxic to take full advantage of this system. To solve these problems, we developed an inverted selective plane illumination microscopy (iSPIM) module for noninvasive high-speed volumetric imaging of living samples. iSPIM is designed as a straightforward add-on to an inverted microscope, permitting conventional mounting of specimens and facilitating SPIM use by development and neurobiology laboratories. iSPIM offers a volumetric imaging rate 30× faster than currently used technologies, such as spinning-disk confocal microscopy, at comparable signal-to-noise ratio. This increased imaging speed allows us to continuously monitor the development of C. elegans embryos, scanning volumes every 2 s for the 14-h period of embryogenesis with no detectable phototoxicity. Collecting ∼25,000 volumes over the entirety of embryogenesis enabled in toto visualization of positions and identities of cell nuclei. By merging two-color iSPIM with automated lineaging techniques we realized two goals: (i) identification of neurons expressing the transcription factor CEH-10/Chx10 and (ii) visualization of their neurodevelopmental dynamics. We found that canal-associated neurons use somal translocation and amoeboid movement as they migrate to their final position in the embryo. We also visualized axon guidance and growth cone dynamics as neurons circumnavigate the nerve ring and reach their targets in the embryo. The high-speed volumetric imaging rate of iSPIM effectively eliminates motion blur from embryo movement inside the egg case, allowing characterization of dynamic neurodevelopmental events that were previously inaccessible.


Nature Biotechnology | 2013

Spatially isotropic four-dimensional imaging with dual-view plane illumination microscopy

Yicong Wu; Peter Wawrzusin; Justin Senseney; Robert S. Fischer; Ryan Christensen; Anthony Santella; Andrew G. York; Peter Winter; Clare M. Waterman; Zhirong Bao; Daniel A. Colón-Ramos; Matthew J. McAuliffe; Hari Shroff

Optimal four-dimensional imaging requires high spatial resolution in all dimensions, high speed and minimal photobleaching and damage. We developed a dual-view, plane illumination microscope with improved spatiotemporal resolution by switching illumination and detection between two perpendicular objectives in an alternating duty cycle. Computationally fusing the resulting volumetric views provides an isotropic resolution of 330 nm. As the sample is stationary and only two views are required, we achieve an imaging speed of 200 images/s (i.e., 0.5 s for a 50-plane volume). Unlike spinning-disk confocal or Bessel beam methods, which illuminate the sample outside the focal plane, we maintain high spatiotemporal resolution over hundreds of volumes with negligible photobleaching. To illustrate the ability of our method to study biological systems that require high-speed volumetric visualization and/or low photobleaching, we describe microtubule tracking in live cells, nuclear imaging over 14 h during nematode embryogenesis and imaging of neural wiring during Caenorhabditis elegans brain development over 5 h.


Non-Photorealistic Animation and Rendering | 2004

Visual interest and NPR: an evaluation and manifesto

Anthony Santella; Douglas DeCarlo

Using eye tracking, we study the way viewers look at photos and image-based NPR illustrations. Viewers examine the same number of locations in photos and in NPR images with uniformly high or low detail. In contrast, viewers are attracted to areas where detail is locally preserved in meaningfully abstracted images. This accords with the idea that artists carefully manipulate detail to control interest and understanding. It also validates the method of meaningful abstraction used in DeCarlo and Santella [2002]. Results also suggest eye tracking can be a useful tool for evaluation of NPR systems.


Non-Photorealistic Animation and Rendering | 2002

Abstracted painterly renderings using eye-tracking data

Anthony Santella; Douglas DeCarlo

When used by artists, manual interfaces for painterly rendering can yield very satisfying abstract transformations of images. Automatic techniques produce interesting paintings as well, but can only recast pictures in a different style without performing any abstraction. At best, information is removed uniformly across the image, without emphasizing the important content. We describe a new approach for the creation of painterly renderings that draws on a model of human perception and is driven by eye-tracking data. This approach can perform meaningful abstraction using this data, with the minimum interaction possible: the user need only look at the image for several seconds. We demonstrate the effectiveness of this interactive technique and compare it with a fully automatic approach.
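One simple way gaze data can drive this kind of abstraction is to let local fixation density set brush-stroke size: large, coarse strokes where the viewer never looked, small detailed strokes where they did. The sketch below illustrates only that mapping, not the perceptual model used in the paper; names and limits are hypothetical.

```python
import numpy as np

def stroke_radii(fixation_density, min_r=2.0, max_r=18.0):
    """Map normalized fixation density (0..1 at each candidate stroke location)
    to a brush radius: high density -> fine strokes, zero density -> broad strokes."""
    d = np.clip(np.asarray(fixation_density, dtype=float), 0.0, 1.0)
    return max_r - d * (max_r - min_r)
```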


BMC Bioinformatics | 2010

A hybrid blob-slice model for accurate and efficient detection of fluorescence labeled nuclei in 3D.

Anthony Santella; Zhuo Du; Sonja Nowotschin; Anna-Katerina Hadjantonakis; Zhirong Bao

Background: To exploit the flood of data from advances in high-throughput imaging of optically sectioned nuclei, image analysis methods need to correctly detect thousands of nuclei, ideally in real time. Variability in nuclear appearance and undersampled volumetric data make this a challenge.

Results: We present a novel 3D nuclear identification method, which subdivides the problem, first segmenting nuclear slices within each 2D image plane, then using a shape model to assemble these slices into 3D nuclei. This hybrid 2D/3D approach allows accurate accounting for nuclear shape but exploits the clear 2D nuclear boundaries present in sectional slices to avoid the computational burden of fitting a complex shape model to volume data. When tested on C. elegans, Drosophila, zebrafish and mouse data, our method yielded 0 to 3.7% error and was up to six times more accurate, as well as 30 times faster, than published performances. We demonstrate our method's potential by reconstructing the morphogenesis of the C. elegans pharynx, an important and much-studied developmental process that could not previously be followed at this single-cell level of detail.

Conclusions: Because our approach is specialized for the characteristics of optically sectioned nuclear images, it can achieve superior accuracy in significantly less time than other approaches. Both of these characteristics are necessary for practical analysis of overwhelmingly large data sets, where processing must be scalable to hundreds of thousands of cells and where the time cost of manual error correction makes it impossible to use data with high error rates. Our approach is fast, accurate, available as open-source software, and its learned shape model is easy to retrain. As our pharynx development example shows, these characteristics make single-cell analysis relatively easy and will enable novel experimental methods utilizing complex data sets.
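A minimal sketch of the hybrid 2D/3D idea, assuming a simple threshold-based 2D detector and greedy linking of nearby slices across planes in place of the paper's segmentation and learned shape model; function names, the smoothing scale and the linking distance are all illustrative.

```python
import numpy as np
from scipy import ndimage as ndi

def detect_slices(plane, threshold):
    """Segment candidate nuclear cross-sections in a single 2-D image plane."""
    mask = ndi.gaussian_filter(plane.astype(float), 2.0) > threshold
    labels, n = ndi.label(mask)
    centers = ndi.center_of_mass(mask, labels, range(1, n + 1))
    return [np.array(c) for c in centers]

def link_slices_to_nuclei(volume, threshold, max_xy_shift=5.0):
    """Chain 2-D slices from adjacent z-planes into 3-D nuclei: a slice extends
    an existing nucleus if its center lies within `max_xy_shift` of the
    nucleus' center in the previous plane, otherwise it starts a new nucleus."""
    nuclei, open_tracks = [], []
    for z, plane in enumerate(volume):
        centers = detect_slices(plane, threshold)
        next_open, used = [], set()
        for track in open_tracks:
            _, last_c = track[-1]
            best, best_d = None, max_xy_shift
            for i, c in enumerate(centers):
                d = np.linalg.norm(c - last_c)
                if i not in used and d < best_d:
                    best, best_d = i, d
            if best is not None:
                track.append((z, centers[best]))
                used.add(best)
                next_open.append(track)
        for i, c in enumerate(centers):
            if i not in used:  # unclaimed slice starts a new nucleus
                t = [(z, c)]
                nuclei.append(t)
                next_open.append(t)
        open_tracks = next_open
    return nuclei  # each nucleus is a list of (z, center) slice records
```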


BMC Bioinformatics | 2014

A semi-local neighborhood-based framework for probabilistic cell lineage tracing

Anthony Santella; Zhuo Du; Zhirong Bao

Background: Advances in fluorescence labeling and imaging have made it possible to acquire in vivo records of complex biological processes. Analysis has lagged behind acquisition, in part because of the difficulty and computational expense of accurate cell tracking. In vivo analysis requires, at minimum, tracking hundreds of cells over hundreds of time points in complex three-dimensional environments. We address this challenge with a computational framework capable of efficiently and accurately tracing entire cell lineages.

Results: The bulk of the tracking problem, tracking cells during interphase, is straightforward and can be executed with simple and fast methods. Difficult cases originate from detection errors and relatively rare large motions. Therefore, our method focuses computational effort on difficult cases identified by local increases in cell number. We force these cases into tentative cell track bifurcations, which define natural semi-local neighborhoods that permit Bayesian judgment about the underlying cell behavior. The bifurcation judgment process not only correctly tracks through cell divisions and large movements, but also offers corrections to detection errors. We demonstrate that this method enables large-scale analysis of Caenorhabditis elegans development, an ideal validation platform because of its invariant cell lineage.

Conclusion: The high accuracy achieved by our method suggests that a bifurcation-based semi-local neighborhood provides sufficient information to recognize dependencies between nearby tracking choices, and to interpret difficult tracking cases without reverting to global optimization. Our method makes large amounts of lineage data accessible and opens the door to new types of statistical analysis of complex in vivo processes.
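The division of labor described above, cheap nearest-neighbor linking for interphase plus focused scoring of candidate bifurcations, can be sketched roughly as follows. The scoring function is a toy stand-in for the paper's Bayesian bifurcation judgment; all names, distances and weights are illustrative.

```python
import numpy as np

def greedy_interphase_links(prev_cells, curr_cells, max_move=10.0):
    """Fast nearest-neighbor linking for the easy (interphase) cases: each cell
    at time t is linked to the closest unclaimed detection at time t+1."""
    links, used = {}, set()
    for i, p in enumerate(prev_cells):
        dists = [np.linalg.norm(np.asarray(p) - np.asarray(c)) for c in curr_cells]
        for j in np.argsort(dists):
            if int(j) not in used and dists[j] < max_move:
                links[i] = int(j)
                used.add(int(j))
                break
    return links

def score_bifurcation(parent, child_a, child_b, expected_sep=6.0):
    """Toy likelihood for 'parent divided into these two daughters': real
    divisions tend to place daughters roughly symmetric about the parent and
    separated by a typical distance."""
    parent, a, b = (np.asarray(x, dtype=float) for x in (parent, child_a, child_b))
    midpoint_err = np.linalg.norm((a + b) / 2.0 - parent)
    sep_err = abs(np.linalg.norm(a - b) - expected_sep)
    return float(np.exp(-midpoint_err) * np.exp(-0.5 * sep_err))
```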

Collaboration


Dive into Anthony Santella's collaborations.

Top Co-Authors

Hari Shroff (National Institutes of Health)
Yicong Wu (National Institutes of Health)
Ryan Christensen (National Institutes of Health)
Zhuo Du (Chinese Academy of Sciences)
Abhishek Kumar (National Institutes of Health)