Publication


Featured research published by Tolga Tasdizen.


Computer Graphics Forum | 2003

Particle-Based Simulation of Fluids

Simon Premože; Tolga Tasdizen; James Bigler; Aaron E. Lefohn; Ross T. Whitaker

Due to our familiarity with how fluids move and interact, as well as their complexity, plausible animation of fluids remains a challenging problem. We present a particle interaction method for simulating fluids. The underlying equations of fluid motion are discretized using moving particles and their interactions. The method allows simulation and modeling of mixing fluids with different physical properties, fluid interactions with stationary objects, and fluids that exhibit significant interface breakup and fragmentation. The gridless computational method is suited for medium scale problems since computational elements exist only where needed. The method fits well into the current user interaction paradigm and allows easy user control over the desired fluid motion.
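
The paper's specific interaction scheme is not reproduced here; as a minimal sketch of how a gridless, particle-based fluid update works, the following smoothed-particle-style step (kernel shape, equation of state, and all constants are assumptions) estimates density from pairwise kernel weights, derives pressure forces, and integrates:

```python
import numpy as np

def sph_step(pos, vel, h=0.1, mass=1.0, k=1000.0, rho0=1000.0, dt=1e-3, g=-9.81):
    """One illustrative gridless update: pairwise kernel density,
    pressure from an equation of state, then symplectic Euler."""
    n = len(pos)
    # Pairwise distances (O(n^2); real implementations use spatial hashing).
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    w = np.where(d < h, (1 - (d / h) ** 2) ** 3, 0.0)      # poly6-like kernel
    rho = mass * w.sum(axis=1)                             # density estimate
    p = k * (rho - rho0)                                   # equation of state
    # Symmetrized pressure force between neighboring particles.
    f = np.zeros_like(pos)
    for i in range(n):
        nb = (d[i] < h) & (d[i] > 0)
        rij = pos[i] - pos[nb]
        f[i] = -mass * np.sum(((p[i] + p[nb]) / (2 * rho[nb]))[:, None]
                              * rij / d[i, nb][:, None], axis=0)
    f[:, 1] += rho * g                                     # gravity
    vel = vel + dt * f / rho[:, None]
    return pos + dt * vel, vel
```

Because computational elements exist only where there is fluid, no background grid is needed, which is the property the abstract highlights for medium-scale problems.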


IEEE Visualization | 2002

Geometric surface smoothing via anisotropic diffusion of normals

Tolga Tasdizen; Ross T. Whitaker; Paul Burchard; Stanley Osher

This paper introduces a method for smoothing complex, noisy surfaces, while preserving (and enhancing) sharp, geometric features. It has two main advantages over previous approaches to feature preserving surface smoothing. First is the use of level set surface models, which allows us to process very complex shapes of arbitrary and changing topology. This generality makes it well suited for processing surfaces that are derived directly from measured data. The second advantage is that the proposed method derives from a well-founded formulation, which is a natural generalization of anisotropic diffusion, as used in image processing. This formulation is based on the proposition that the generalization of image filtering entails filtering the normals of the surface, rather than processing the positions of points on a mesh.
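
As a rough illustration of the core idea only (a height-field grid stand-in, not the paper's level-set formulation), the sketch below diffuses a surface's normals with edge-stopping weights so that creases, where neighboring normals disagree, are preserved; the kernel and conductance are assumptions:

```python
import numpy as np

def diffuse_normals(z, iters=50, kappa=0.2, dt=0.2):
    """Illustrative anisotropic smoothing of a height field's normals.
    Edge-stopping weights shrink diffusion where neighboring normals
    disagree, preserving sharp features."""
    gy, gx = np.gradient(z)
    n = np.dstack([-gx, -gy, np.ones_like(z)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    for _ in range(iters):
        lap = np.zeros_like(n)
        for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
            diff = np.roll(n, shift, axis=axis) - n
            # Perona-Malik-style conductance on the normal difference.
            g = np.exp(-(np.linalg.norm(diff, axis=2) / kappa) ** 2)
            lap += g[..., None] * diff
        n += dt * lap
        n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n  # a surface-refitting step would recover positions from n
```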


IEEE Transactions on Image Processing | 2009

Principal Neighborhood Dictionaries for Nonlocal Means Image Denoising

Tolga Tasdizen

We present an in-depth analysis of a variation of the nonlocal means (NLM) image denoising algorithm that uses principal component analysis (PCA) to achieve higher accuracy while reducing the computational load. Image neighborhood vectors are first projected onto a lower-dimensional subspace using PCA. The dimensionality of this subspace is chosen automatically using parallel analysis. Consequently, neighborhood similarity weights for denoising are computed using distances in this subspace rather than the full space. The resulting algorithm is referred to as principal neighborhood dictionary (PND) nonlocal means. We investigate PND's accuracy as a function of the dimensionality of the projection subspace and demonstrate that denoising accuracy peaks at a relatively low number of dimensions. The accuracies of NLM and PND are also examined with respect to the choice of image neighborhood and search window sizes. Finally, we present a quantitative and qualitative comparison of PND versus NLM and another state-of-the-art image denoising algorithm based on image neighborhood PCA.
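
A minimal sketch of the core idea follows: project image patches with PCA, then compute NLM weights from distances in the subspace. The patch size, fixed subspace dimension, and global (rather than windowed) search are simplifying assumptions; the paper selects the dimension automatically via parallel analysis and restricts the search window.

```python
import numpy as np

def pnd_nlm(img, patch=7, dims=6, h=0.1):
    """Sketch of principal neighborhood dictionary NLM: PCA-project
    image patches, then weight pixels by subspace distances."""
    H, W = img.shape
    r = patch // 2
    pad = np.pad(img, r, mode='reflect')
    # Gather all patches as row vectors.
    P = np.array([pad[i:i + patch, j:j + patch].ravel()
                  for i in range(H) for j in range(W)])
    P = P - P.mean(axis=0)
    # PCA via SVD; keep the leading `dims` components.
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    Y = P @ Vt[:dims].T                       # projected neighborhoods
    out = np.empty(H * W)
    flat = img.ravel()
    for i in range(H * W):
        d2 = np.sum((Y - Y[i]) ** 2, axis=1)  # distances in the subspace
        w = np.exp(-d2 / h ** 2)
        out[i] = np.dot(w, flat) / w.sum()
    return out.reshape(H, W)
```

Computing distances in a 6-dimensional subspace instead of the full 49-dimensional patch space is where both the speedup and, per the abstract, the accuracy peak come from.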


ACM Transactions on Graphics | 2003

Geometric surface processing via normal maps

Tolga Tasdizen; Ross T. Whitaker; Paul Burchard; Stanley Osher

We propose that the generalization of signal and image processing to surfaces entails filtering the normals of the surface, rather than filtering the positions of points on a mesh. Using a variational strategy, penalty functions on the surface geometry can be formulated as penalty functions on the surface normals, which are computed using geometry-based shape metrics and minimized using fourth-order gradient descent partial differential equations (PDEs). In this paper, we introduce a two-step approach to implementing geometric processing tools for surfaces: (i) operating on the normal map of a surface, and (ii) manipulating the surface to fit the processed normals. Iterating this two-step process, we efficiently can implement geometric fourth-order flows by solving a set of coupled second-order PDEs. The computational approach uses level set surface models; therefore, the processing does not depend on any underlying parameterization. This paper will demonstrate that the proposed strategy provides for a wide range of surface processing operations, including edge-preserving smoothing and high-boost filtering. Furthermore, the generality of the implementation makes it appropriate for very complex surface models, for example, those constructed directly from measured data.
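
Schematically, and in assumed notation rather than the paper's exact penalties, the two-step splitting can be written as:

```latex
% (i) process the normal map N with an edge-preserving penalty:
\min_{N}\; \int_{S} g\bigl(\lVert \nabla N \rVert\bigr)\, dA
% (ii) refit the surface S so its own normals n match N:
\min_{S}\; \int_{S} \lVert n(S) - N \rVert^{2}\, dA
```

Alternating (i) and (ii) is what lets a fourth-order geometric flow be realized as a set of coupled second-order PDEs, as the abstract describes.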


Molecular Vision | 2011

Exploring the retinal connectome

James R. Anderson; Bryan W. Jones; Carl B. Watt; Margaret V. Shaw; Jia Hui Yang; David L. DeMill; James S. Lauritzen; Yanhua Lin; Kevin Rapp; David N. Mastronarde; Pavel Koshevoy; Bradley Grimm; Tolga Tasdizen; Ross T. Whitaker; Robert E. Marc



PLOS Biology | 2009

A computational framework for ultrastructural mapping of neural circuitry.

James R. Anderson; Bryan W. Jones; Jia-Hui Yang; Marguerite V. Shaw; Carl B. Watt; Pavel Koshevoy; Joel Spaltenstein; Elizabeth Jurrus; U.V. Kannan; Ross T. Whitaker; David N. Mastronarde; Tolga Tasdizen; Robert E. Marc

Circuitry mapping of metazoan neural systems is difficult because canonical neural regions (regions containing one or more copies of all components) are large, regional borders are uncertain, neuronal diversity is high, and potential network topologies are so numerous that only anatomical ground truth can resolve them. Complete mapping of a specific network requires synaptic resolution, canonical region coverage, and robust neuronal classification. Though transmission electron microscopy (TEM) remains the optimal tool for network mapping, the process of building large serial section TEM (ssTEM) image volumes is rendered difficult by the need to precisely mosaic distorted image tiles and register distorted mosaics. Moreover, most molecular neuronal class markers are poorly compatible with optimal TEM imaging. Our objective was to build a complete framework for ultrastructural circuitry mapping. This framework combines strong TEM-compliant small molecule profiling with automated image tile mosaicking, automated slice-to-slice image registration, and gigabyte-scale image browsing for volume annotation. Specifically, we show how ultrathin molecular profiling datasets and their resultant classification maps can be embedded into ssTEM datasets and how scripted acquisition tools (SerialEM), mosaicking and registration (ir-tools), and large slice viewers (MosaicBuilder, Viking) can be used to manage terabyte-scale volumes. These methods enable large-scale connectivity analyses of new and legacy data. In well-posed tasks (e.g., complete network mapping in retina), terabyte-scale image volumes that previously would require decades of assembly can now be completed in months. Perhaps more importantly, the fusion of molecular profiling, image acquisition by SerialEM, ir-tools volume assembly, and data viewers/annotators also allows ssTEM to be used as a prospective tool for discovery in nonneural systems and a practical screening methodology for neurogenetics. Finally, this framework provides a mechanism for parallelization of ssTEM imaging, volume assembly, and data analysis across an international user base, enhancing the productivity of a large cohort of electron microscopists.
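
The named tools (SerialEM, ir-tools, Viking) have their own interfaces, which are not reproduced here; purely to illustrate the kind of alignment step such pipelines chain together across thousands of slices, here is a toy phase-correlation translation estimate between adjacent sections:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Toy slice-to-slice registration: estimate the integer translation
    between two images via phase correlation. Production pipelines must
    also handle rotation, lens distortion, and nonrigid warps."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks to signed shifts.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx
```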


Medical Image Analysis | 2010

Manifold modeling for brain population analysis

Samuel Gerber; Tolga Tasdizen; P. Thomas Fletcher; Sarang C. Joshi; Ross T. Whitaker

This paper describes a method for building efficient representations of large sets of brain images. Our hypothesis is that the space spanned by a set of brain images can be captured, to a close approximation, by a low-dimensional, nonlinear manifold. This paper presents a method to learn such a low-dimensional manifold from a given data set. The manifold model is generative: brain images can be constructed from a relatively small set of parameters, and new brain images can be projected onto the manifold. This makes it possible to quantify the geometric accuracy of the manifold approximation in terms of projection distance. The manifold coordinates induce a Euclidean coordinate system on the population data that can be used to perform statistical analysis of the population. We evaluate the proposed method on the OASIS and ADNI brain databases of head MR images in two ways. First, the geometric fit of the method is evaluated qualitatively and quantitatively. Second, the ability of the brain manifold model to explain clinical measures is analyzed by linear regression in the manifold coordinate space. The regression models show that the manifold model is a statistically significant descriptor of clinical parameters.
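
The paper's manifold learner is its own; purely as an illustration of the analysis pattern (embed, then regress a clinical score in manifold coordinates), the sketch below substitutes off-the-shelf Isomap and synthetic data:

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.linear_model import LinearRegression

# Synthetic stand-ins for vectorized brain images and a clinical score.
rng = np.random.default_rng(0)
t = rng.uniform(0, 3, size=200)                    # hidden 1-D factor
images = np.column_stack([np.cos(t), np.sin(t)]) + 0.01 * rng.normal(size=(200, 2))
score = 2.0 * t + rng.normal(scale=0.1, size=200)  # e.g., an age-like measure

# Embed, then do linear regression in the manifold coordinate system.
coords = Isomap(n_neighbors=10, n_components=1).fit_transform(images)
model = LinearRegression().fit(coords, score)
print("R^2 of score regressed on manifold coordinate:",
      model.score(coords, score))
```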


Medical Image Analysis | 2009

Axon tracking in serial block-face scanning electron microscopy

Elizabeth Jurrus; Melissa Hardy; Tolga Tasdizen; P. Thomas Fletcher; Pavel Koshevoy; Chi-Bin Chien; Winfried Denk; Ross T. Whitaker

Electron microscopy is an important modality for the analysis of neuronal structures in neurobiology. We address the problem of tracking axons across large distances in volumes acquired by serial block-face scanning electron microscopy (SBFSEM). Tracking, for this application, is defined as the segmentation of an axon that spans a volume using similar features between slices. This is a challenging problem due to the small cross-sectional size of axons and the low signal-to-noise ratio in our SBFSEM images. A carefully engineered algorithm using Kalman-snakes and optical flow computation is presented. Axon tracking is initialized with user clicks or automatically using the watershed segmentation algorithm, which identifies axon centers. Multiple axons are tracked from slice to slice through a volume, updating the positions and velocities in the model and providing constraints to maintain smoothness between slices. Validation results indicate that this algorithm can significantly speed up the task of manual axon tracking.
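
A minimal constant-velocity Kalman update of an axon center from slice to slice is sketched below; the measurement in the real system comes from the snake and optical-flow stages, which are stubbed out here, and the noise covariances are assumptions:

```python
import numpy as np

def kalman_track(z, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over slice-wise measurements z
    (k x 2 axon-center positions). The real system obtains z from
    active contours and optical flow; here z is given directly."""
    F = np.eye(4); F[0, 2] = F[1, 3] = 1.0       # state: [x, y, vx, vy]
    Hm = np.zeros((2, 4)); Hm[0, 0] = Hm[1, 1] = 1.0
    Q, R = q * np.eye(4), r * np.eye(2)
    x = np.array([z[0, 0], z[0, 1], 0.0, 0.0]); P = np.eye(4)
    track = [x[:2].copy()]
    for zk in z[1:]:
        x = F @ x; P = F @ P @ F.T + Q           # predict to next slice
        S = Hm @ P @ Hm.T + R
        K = P @ Hm.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (zk - Hm @ x)                # correct with measurement
        P = (np.eye(4) - K @ Hm) @ P
        track.append(x[:2].copy())
    return np.array(track)
```

The velocity state is what carries smoothness between slices, as the abstract describes.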


Magnetic Resonance in Medicine | 2007

Temporally constrained reconstruction of dynamic cardiac perfusion MRI.

Ganesh Adluru; Suyash P. Awate; Tolga Tasdizen; Ross T. Whitaker; Edward DiBella

Dynamic contrast‐enhanced (DCE) MRI is a powerful technique to probe an area of interest in the body. Here a temporally constrained reconstruction (TCR) technique that requires less k‐space data over time to obtain good‐quality reconstructed images is proposed. This approach can be used to improve the spatial or temporal resolution, or increase the coverage of the object of interest. The method jointly reconstructs the space‐time data iteratively with a temporal constraint in order to resolve aliasing. The method was implemented and its feasibility tested on DCE myocardial perfusion data with little or no motion. The results obtained from sparse k‐space data using the TCR method were compared with results obtained with a sliding‐window (SW) method and from full data using the standard inverse Fourier transform (IFT) reconstruction. Acceleration factors of 5 (R = 5) were achieved without a significant loss in image quality. Mean improvements of 28 ± 4% in the signal‐to‐noise ratio (SNR) and 14 ± 4% in the contrast‐to‐noise ratio (CNR) were observed in the images reconstructed using the TCR method on sparse data (R = 5) compared to the standard IFT reconstructions from full data for the perfusion datasets. The method has the potential to improve dynamic myocardial perfusion imaging and also to reconstruct other sparse dynamic MR acquisitions. Magn Reson Med 57:1027–1036, 2007.
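
A minimal sketch of the reconstruction objective described above follows: data fidelity on the acquired k-space samples plus a temporal smoothness penalty, minimized by gradient descent. The specific penalty form, step size, and periodic temporal boundary are assumptions.

```python
import numpy as np

def tcr_reconstruct(kdata, mask, lam=0.05, iters=100, step=0.5):
    """Temporally constrained reconstruction sketch: recover an image
    series m (T x H x W) from undersampled k-space `kdata` with sampling
    pattern `mask`, penalizing temporal differences to resolve aliasing."""
    m = np.fft.ifft2(kdata, axes=(-2, -1))       # zero-filled initial guess
    for _ in range(iters):
        # Data-fidelity gradient: residual on the sampled k-space locations.
        resid = mask * (np.fft.fft2(m, axes=(-2, -1)) - kdata)
        grad = np.fft.ifft2(resid, axes=(-2, -1))
        # Temporal-constraint gradient: discrete second difference in t.
        tgrad = 2 * m - np.roll(m, 1, axis=0) - np.roll(m, -1, axis=0)
        m = m - step * (grad + lam * tgrad)
    return np.abs(m)
```

Sharing information across time frames this way is what allows the reported R = 5 undersampling without a significant loss in image quality.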


IEEE Transactions on Image Processing | 2000

Improving the stability of algebraic curves for applications

Tolga Tasdizen; Jean Philippe Tarel; David B. Cooper

An algebraic curve is defined as the zero set of a polynomial in two variables. Algebraic curves can model shapes far more complicated than conics or superquadrics. The main drawback of representing shapes by algebraic curves has been the lack of repeatability in fitting algebraic curves to data; arguments against their use typically cite the classical numerical instability results associated with Wilkinson and Runge. The first goal of this article is to understand this stability issue in algebraic curve fitting. A fitting method based on ridge regression, which restricts the representation to well-behaved subsets of polynomials, is then proposed, and its properties are investigated. The fitting algorithm is sufficiently stable for very fast position-invariant shape recognition, position estimation, and shape tracking based on invariants and new representations. Appropriate applications include shape-based indexing into image databases.
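
A sketch of ridge-regularized implicit polynomial fitting follows: build a monomial design matrix, ask the polynomial to vanish on the data points and take small nonzero values at points offset along the normals (a 3L-style stabilization, used here only as an illustration, not necessarily the paper's construction), and solve the damped normal equations:

```python
import numpy as np

def fit_algebraic_curve(pts, degree=4, eps=0.05, lam=1e-3):
    """Ridge-regression fit of an implicit polynomial f(x, y) = 0 to an
    ordered sequence of curve points `pts` (n x 2)."""
    def monomials(p):
        x, y = p[:, 0], p[:, 1]
        return np.column_stack([x**i * y**j
                                for i in range(degree + 1)
                                for j in range(degree + 1 - i)])
    # Estimate normals from tangents of the ordered point sequence.
    tang = np.gradient(pts, axis=0)
    nrm = np.column_stack([-tang[:, 1], tang[:, 0]])
    nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
    # Zero targets on the curve, +/-eps targets on offset points.
    A = np.vstack([monomials(pts),
                   monomials(pts + eps * nrm),
                   monomials(pts - eps * nrm)])
    b = np.concatenate([np.zeros(len(pts)),
                        np.full(len(pts), eps),
                        np.full(len(pts), -eps)])
    # Damped normal equations: (A^T A + lam I) c = A^T b.
    c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
    return c  # coefficients of x^i y^j with i + j <= degree
```

The ridge term `lam` is the stabilizer: it keeps the coefficient vector bounded on near-degenerate data, which is exactly the repeatability issue the abstract targets.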

Collaboration


Dive into Tolga Tasdizen's collaboration.

Top Co-Authors


Devrim Unay

Bahçeşehir University
