
Publication


Featured research published by Çağatay Demiralp.


IEEE Transactions on Medical Imaging | 2015

The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

Bjoern H. Menze; András Jakab; Stefan Bauer; Jayashree Kalpathy-Cramer; Keyvan Farahani; Justin S. Kirby; Yuliya Burren; Nicole Porz; Johannes Slotboom; Roland Wiest; Levente Lanczi; Elizabeth R. Gerstner; Marc-André Weber; Tal Arbel; Brian B. Avants; Nicholas Ayache; Patricia Buendia; D. Louis Collins; Nicolas Cordier; Jason J. Corso; Antonio Criminisi; Tilak Das; Hervé Delingette; Çağatay Demiralp; Christopher R. Durst; Michel Dojat; Senan Doyle; Joana Festa; Florence Forbes; Ezequiel Geremia

In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients (manually annotated by up to four raters) and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked first for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvement. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
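The two evaluation ideas central to the benchmark, Dice overlap between segmentations and fusing several algorithms by majority vote, can be sketched as follows. This is a minimal illustration on toy 1D arrays, not the benchmark's actual evaluation code:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def majority_vote(masks: list) -> np.ndarray:
    """Fuse several binary segmentations: a voxel is labeled tumor
    if more than half of the input segmentations agree."""
    stack = np.stack(masks)
    return stack.sum(axis=0) > (len(masks) / 2)

# Toy 1D "scans" standing in for 3D volumes
m1 = np.array([1, 1, 0, 0, 1])
m2 = np.array([1, 0, 0, 1, 1])
m3 = np.array([1, 1, 0, 0, 0])
fused = majority_vote([m1, m2, m3])
print(fused.astype(int))  # [1 1 0 0 1]
print(dice(m1, fused))
```

The fused mask can then be scored against each individual input with `dice`, mirroring how the fused segmentation was ranked against individual algorithms.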


IEEE Transactions on Visualization and Computer Graphics | 2003

Visualizing diffusion tensor MR images using streamtubes and streamsurfaces

Song Zhang; Çağatay Demiralp; David H. Laidlaw

We present a new method for visualizing 3D volumetric diffusion tensor MR images. We distinguish between linear anisotropy and planar anisotropy and represent values in the two regimes using streamtubes and streamsurfaces, respectively. Streamtubes represent structures with primarily linear diffusion, typically fiber tracts; streamtube direction correlates with tract orientation. The cross-sectional shape and color of each streamtube represent additional information from the diffusion tensor at each point. Streamsurfaces represent structures in which diffusion is primarily planar. Our algorithm chooses a very small representative subset of the streamtubes and streamsurfaces for display. We describe the set of metrics used for the culling process, which reduces visual clutter and improves interactivity. We also generate anatomical landmarks to identify the locations of such structures as the eyes, skull surface, and ventricles. The final models are complex surface geometries that can be imported into many interactive graphics software environments. We describe a virtual environment to interact with these models. Expert feedback from doctors studying changes in white-matter structures after gamma-knife capsulotomy and preoperative planning for brain tumor surgery shows that streamtubes correlate well with major neural structures, the 2D section and geometric landmarks are important in understanding the visualization, and the stereo and interactivity from the virtual environment aid in understanding the complex geometric models.
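The linear-versus-planar distinction this method relies on is commonly quantified with the Westin shape measures computed from the sorted eigenvalues of the diffusion tensor. A sketch with a hypothetical tensor (not the paper's code):

```python
import numpy as np

def westin_measures(tensor: np.ndarray):
    """Linear (cl), planar (cp), and spherical (cs) anisotropy measures
    of a 3x3 diffusion tensor, from eigenvalues l1 >= l2 >= l3."""
    l1, l2, l3 = sorted(np.linalg.eigvalsh(tensor), reverse=True)
    total = l1 + l2 + l3
    cl = (l1 - l2) / total        # high -> linear regime: streamtube
    cp = 2.0 * (l2 - l3) / total  # high -> planar regime: streamsurface
    cs = 3.0 * l3 / total         # high -> isotropic: render neither
    return cl, cp, cs

# A strongly linear tensor: diffusion dominated by one direction,
# as along a fiber tract (values are hypothetical)
D = np.diag([1.5e-3, 2.0e-4, 1.5e-4])
cl, cp, cs = westin_measures(D)
print(cl > cp and cl > cs)  # True: this voxel would get a streamtube
```

The three measures sum to one, so thresholding the dominant one gives a simple rule for choosing between streamtube, streamsurface, or neither at each seed point.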


Medical Image Computing and Computer-Assisted Intervention (MICCAI) | 2012

Decision forests for tissue-specific segmentation of high-grade gliomas in multi-channel MR

Darko Zikic; Ben Glocker; Ender Konukoglu; Antonio Criminisi; Çağatay Demiralp; Jamie Shotton; Owen M. Thomas; Tilak Das; Raj Jena; Stephen J. Price

We present a method for automatic segmentation of high-grade gliomas and their subregions from multi-channel MR images. Besides segmenting the gross tumor, we also differentiate between active cells, necrotic core, and edema. Our discriminative approach is based on decision forests using context-aware spatial features, and integrates a generative model of tissue appearance, by using the probabilities obtained by tissue-specific Gaussian mixture models as additional input for the forest. Our method classifies the individual tissue types simultaneously, which has the potential to simplify the classification task. The approach is computationally efficient and of low model complexity. The validation is performed on a labeled database of 40 multi-channel MR images, including DTI. We assess the effects of using DTI, and varying the amount of training data. Our segmentation results are highly accurate, and compare favorably to the state of the art.
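The hybrid generative/discriminative idea, feeding per-class GMM likelihoods to a forest as additional input channels, can be sketched with scikit-learn. Everything here is synthetic and one-dimensional (not the paper's features, data, or forest implementation):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic per-voxel intensities for two tissue classes
X0 = rng.normal(0.0, 1.0, size=(200, 1))  # class 0 (e.g. healthy)
X1 = rng.normal(3.0, 1.0, size=(200, 1))  # class 1 (e.g. tumor)
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# Generative step: fit one GMM per tissue class on that class's voxels
gmms = [GaussianMixture(n_components=2, random_state=0).fit(X[y == c])
        for c in (0, 1)]
# Per-class log-likelihoods become extra input channels for the forest
loglik = np.column_stack([g.score_samples(X) for g in gmms])
X_aug = np.hstack([X, loglik])

# Discriminative step: a forest trained on raw + generative channels
forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(X_aug, y)
print(forest.score(X_aug, y) > 0.9)
```

The generative channels make class membership nearly linearly separable before the forest ever sees the data, which is one way the combination can simplify the classification task.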


IEEE Transactions on Visualization and Computer Graphics | 2006

CAVE and fishtank virtual-reality displays: a qualitative and quantitative comparison

Çağatay Demiralp; Cullen D. Jackson; David B. Karelitz; Song Zhang; David H. Laidlaw

We present the results from a qualitative and quantitative user study comparing fishtank virtual-reality (VR) and CAVE displays. The results of the qualitative study show that users preferred the fishtank VR display to the CAVE system for our scientific visualization application because of perceived higher resolution, brightness and crispness of imagery, and comfort of use. The results of the quantitative study show that users performed an abstract visual search task significantly more quickly and more accurately on the fishtank VR display system than in the CAVE. The same study also showed that visual context had no significant effect on task performance for either of the platforms. We suggest that fishtank VR displays are more effective than CAVEs for applications in which the task occurs outside the user's reference frame, the user views and manipulates the virtual world from the outside in, and the size of the virtual object that the user interacts with is smaller than the user's body and fits into the fishtank VR display. The results of both studies support this proposition.


IEEE Visualization | 2001

An immersive virtual environment for DT-MRI volume visualization applications: a case study

Song Zhang; Çağatay Demiralp; Daniel F. Keefe; M. DaSilva; David H. Laidlaw; Benjamin D. Greenberg; Peter J. Basser; Carlo Pierpaoli; E. A. Chiocca; Thomas S. Deisboeck

We describe a virtual reality environment for visualizing tensor-valued volumetric datasets acquired with diffusion tensor magnetic resonance imaging (DT-MRI). We have prototyped a virtual environment that displays geometric representations of the volumetric second-order diffusion tensor data and are developing interaction and visualization techniques for two application areas: studying changes in white-matter structures after gamma-knife capsulotomy and pre-operative planning for brain tumor surgery. Our feedback shows that compared to desktop displays, our system helps the user better interpret the large and complex geometric models, and facilitates communication among a group of users.


IEEE Transactions on Visualization and Computer Graphics | 2014

Learning Perceptual Kernels for Visualization Design

Çağatay Demiralp; Michael S. Bernstein; Jeffrey Heer

Visualization design can benefit from careful consideration of perception, as different assignments of visual encoding variables such as color, shape and size affect how viewers interpret data. In this work, we introduce perceptual kernels: distance matrices derived from aggregate perceptual judgments. Perceptual kernels represent perceptual differences between and within visual variables in a reusable form that is directly applicable to visualization evaluation and automated design. We report results from crowdsourced experiments to estimate kernels for color, shape, size and combinations thereof. We analyze kernels estimated using five different judgment types, including Likert ratings among pairs, ordinal triplet comparisons, and manual spatial arrangement, and compare them to existing perceptual models. We derive recommendations for collecting perceptual similarities, and then demonstrate how the resulting kernels can be applied to automate visualization design decisions.
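Concretely, a perceptual kernel is just a symmetric, normalized distance matrix over a set of stimuli; once built, design queries reduce to matrix lookups. A minimal sketch with hypothetical aggregate dissimilarity ratings (not the paper's data or kernels):

```python
import numpy as np

# Hypothetical mean dissimilarity ratings (e.g. aggregated Likert
# scores) for pairs (i, j) of four shape stimuli
stimuli = ["circle", "square", "triangle", "cross"]
ratings = {(0, 1): 2.0, (0, 2): 4.0, (0, 3): 5.0,
           (1, 2): 3.0, (1, 3): 4.5, (2, 3): 1.5}

n = len(stimuli)
K = np.zeros((n, n))
for (i, j), d in ratings.items():
    K[i, j] = K[j, i] = d  # symmetrize; diagonal stays 0
K /= K.max()               # normalize distances to [0, 1]

# A design query the kernel answers directly: for "circle", pick the
# most perceptually distinct other shape (largest kernel distance)
most_distinct = stimuli[int(np.argmax(K[0]))]
print(most_distinct)  # cross
```

The same lookup generalizes to choosing a palette of k maximally discriminable encodings, which is the kind of automated design decision the kernels are meant to support.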


IEEE Transactions on Visualization and Computer Graphics | 2012

Exploring Brain Connectivity with Two-Dimensional Neural Maps

Radu Jianu; Çağatay Demiralp; David H. Laidlaw

We introduce two-dimensional neural maps for exploring connectivity in the brain. For this, we create standard streamtube models from diffusion-weighted brain imaging data sets along with neural paths hierarchically projected into the plane. These planar neural maps combine desirable properties of low-dimensional representations, such as visual clarity and ease of tract-of-interest selection, with the anatomical familiarity of 3D brain models and planar sectional views. We distribute this type of visualization both in a traditional stand-alone interactive application and as a novel, lightweight web-accessible system. The web interface integrates precomputed neural-path representations into a geographical digital-maps framework with associated labels, metrics, statistics, and linkouts. Anecdotal and quantitative comparisons of the present method with a recently proposed 2D point representation suggest that our representation is more intuitive and easier to use and learn. Similarly, users are faster and more accurate in selecting bundles using the 2D path representation than the 2D point representation. Finally, expert feedback on the web interface suggests that it can be useful for collaboration as well as quick exploration of data.


IEEE Visualization | 2003

Subjective Usefulness of CAVE and Fish Tank VR Display Systems for a Scientific Visualization Application

Çağatay Demiralp; David H. Laidlaw; Cullen D. Jackson; Daniel F. Keefe; Song Zhang

The scientific visualization community increasingly uses VR display systems, but useful interaction paradigms for these systems are still an active research subject. It can be helpful to know the relative merits of different VR systems for different applications and tasks. In this paper, we report on the subjective usefulness of two virtual reality (VR) display systems, a CAVE and a Fish Tank VR display, for a scientific visualization application (see Figure 1). We conducted an anecdotal study to learn five domain-expert users' impressions of the relative usefulness of the two VR systems for their purposes in using the application. Most of the users preferred the Fish Tank display because of its perceived display resolution, crispness, brightness, and more comfortable use, whereas they found the larger scale of objects, expanded field of view, and suitability for gestural expression and natural interaction in the CAVE more useful.

The term "Fish Tank VR" describes desktop systems that display a stereo image of a 3D scene, viewed on a monitor using a perspective projection coupled to the head position of the observer [Ware et al. 1993]. A CAVE is a room-size, immersive VR display environment where the stereoscopic view of the virtual world is generated according to the user's head position and orientation [Cruz-Neira et al. 1993].

Some related work compares Fish Tank VR displays with head-mounted stereo displays (HMDs) and conventional desktop displays. In [Ware et al. 1993; Arthur et al. 1993], the authors compare Fish Tank VR with an HMD and conventional desktop systems. [Pausch et al. 1997] showed that HMDs can improve performance over conventional desktop systems in a generic search task when the target is not present. However, a later study showed that these findings do not apply to desktop VR; Fish Tank VR and desktop VR have a significant advantage over HMD VR in performing a generic search task [Robertson et al. 1997]. [Bowman et al. 2001] compared an HMD with Tabletop (workbench) and CAVE systems for search and rotation tasks, respectively. They found that HMD users performed significantly better than CAVE users for a natural rotation task. For a difficult search task, they also showed that subjects perform differently depending on which display they encountered first.

Bowman and his colleagues' work shares similar motivations to ours. We go beyond their work with a direct comparison of CAVE and Fish Tank VR platforms. Also, most previous studies have evaluated VR systems by looking at user performance on a few generic tasks, such as rotation and visual search, in experiment-specific, simple applications. For most real visualization applications it may be difficult to reduce the interactions to a set of simple, generic tasks. Consequently, it is not clear how well the results of these studies apply to real visualization applications. This point is elucidated in a recent study that presented the importance of application-specific user studies using tasks that reflect end users' needs [Swan II et al. 2003]. In this study, the authors compare user performance on an application-specific task across desktop, CAVE, workbench, and display-wall platforms. They found that the users performed the tasks fastest using the desktop.

Figure 1: The visualization application running in the CAVE (left image) and on the Fish Tank VR display (right image).


IEEE Computer Graphics and Applications | 2014

Visual Embedding: A Model for Visualization

Çağatay Demiralp; Carlos Eduardo Scheidegger; Gordon L. Kindlmann; David H. Laidlaw; Jeffrey Heer

The authors propose visual embedding as a model for automatically generating and evaluating visualizations. A visual embedding is a function from data points to a space of visual primitives that measurably preserves structures in the data (domain) within the mapped perceptual space (range). The authors demonstrate its use with three examples: coloring of neural tracts, scatterplots with icons, and evaluation of alternative diffusion tensor glyphs. They discuss several techniques for generating visual-embedding functions, including probabilistic graphical models for embedding in discrete visual spaces. They also describe two complementary approaches, crowdsourcing and visual product spaces, for building visual spaces with associated perceptual-distance measures. In addition, they recommend several research directions for further developing the visual-embedding model.
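The defining property, that data distances are measurably preserved in the perceptual range, can be illustrated with a toy distortion check. The stress function and both distance matrices below are hypothetical stand-ins, not the model's actual evaluation machinery:

```python
import numpy as np

def embedding_stress(D_data: np.ndarray, D_visual: np.ndarray) -> float:
    """Normalized stress between data-space and perceptual-space distance
    matrices (0 = structure perfectly preserved up to scale)."""
    Dn = D_data / D_data.max()
    Vn = D_visual / D_visual.max()
    return float(np.sqrt(((Dn - Vn) ** 2).sum() / (Dn ** 2).sum()))

# Pairwise distances among 3 data points, and the perceptual distances
# induced by two candidate mappings (e.g. two color assignments)
D = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]], float)
good = 2.0 * D                                            # same structure
bad = np.array([[0, 2, 1], [2, 0, 2], [1, 2, 0]], float)  # scrambled
print(embedding_stress(D, good) < embedding_stress(D, bad))  # True
```

Comparing candidate mappings by such a score is one simple way an embedding-based pipeline could rank alternative visual encodings automatically.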


IEEE Transactions on Visualization and Computer Graphics | 2009

Coloring 3D Line Fields Using Boy’s Real Projective Plane Immersion

Çağatay Demiralp; John F. Hughes; David H. Laidlaw

We introduce a new method for coloring 3D line fields and show results from its application in visualizing orientation in DTI brain data sets. The method uses Boy's surface, an immersion of the real projective plane RP2 in 3D. This coloring method is smooth and one-to-one except on a set of measure zero, the double curve of Boy's surface.

Collaboration


Dive into Çağatay Demiralp's collaborations.

Top Co-Authors

Song Zhang (Mississippi State University)
Jeffrey Heer (University of Washington)
Bahador Saket (Georgia Institute of Technology)
Alex Endert (Georgia Institute of Technology)