
Publication


Featured research published by Danna Gurari.


Workshop on Applications of Computer Vision | 2015

How to Collect Segmentations for Biomedical Images? A Benchmark Evaluating the Performance of Experts, Crowdsourced Non-experts, and Algorithms

Danna Gurari; Diane H. Theriault; Mehrnoosh Sameki; Brett C. Isenberg; Tuan A. Pham; Alberto Purwada; Patricia Solski; Matthew L. Walker; Chentian Zhang; Joyce Wong; Margrit Betke

Analyses of biomedical images often rely on demarcating the boundaries of biological structures (segmentation). While numerous approaches are adopted to address the segmentation problem, including collecting annotations from domain experts and applying automated algorithms, the lack of comparative benchmarking makes it challenging to determine the current state of the art, recognize the limitations of existing approaches, and identify relevant future research directions. To provide practical guidance, we evaluated and compared the performance of trained experts, crowdsourced non-experts, and algorithms for annotating 305 objects from six datasets that include phase contrast, fluorescence, and magnetic resonance images. Compared to the gold standard established by expert consensus, we found the best annotators were experts, followed by non-experts, and then algorithms. This analysis revealed that online paid crowdsourced workers without domain-specific backgrounds are reliable annotators to use as part of a laboratory protocol for segmenting biomedical images. We also found that fusing the segmentations created by crowdsourced internet workers and algorithms yielded better results than segmentations created by crowdsourced workers or algorithms alone. We invite extensions of our work by sharing our datasets and associated segmentation annotations (http://www.cs.bu.edu/~betke/Biomedical Image Segmentation).
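The scoring and fusion ideas in this abstract can be made concrete: a minimal sketch of comparing binary segmentation masks to a gold standard with the Dice overlap, and of fusing several annotations by pixel-wise majority vote. The arrays and threshold below are illustrative, not the paper's data or exact protocol.

```python
import numpy as np

def dice(seg, gold):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    seg, gold = seg.astype(bool), gold.astype(bool)
    inter = np.logical_and(seg, gold).sum()
    total = seg.sum() + gold.sum()
    return 2.0 * inter / total if total else 1.0

def majority_vote(masks):
    """Fuse binary annotations: a pixel is foreground if most annotators marked it."""
    stack = np.stack([m.astype(bool) for m in masks])
    return stack.mean(axis=0) >= 0.5

gold = np.zeros((8, 8), bool); gold[2:6, 2:6] = True
a = gold.copy(); a[2, 2] = False                 # near-perfect annotator
b = np.zeros((8, 8), bool); b[1:6, 2:6] = True   # slightly over-segments
fused = majority_vote([a, b, gold])
print(round(dice(a, gold), 3), dice(fused, gold))  # → 0.968 1.0
```

Majority voting is only one simple fusion rule; the paper studies fusing crowd and algorithm segmentations more broadly.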


Medical Image Computing and Computer-Assisted Intervention | 2012

Hierarchical Partial Matching and Segmentation of Interacting Cells

Zheng Wu; Danna Gurari; Joyce Y. Wong; Margrit Betke

We propose a method that automatically tracks and segments living cells in phase-contrast image sequences, especially cells that deform and interact with each other or with clutter. We formulate the problem as a many-to-one elastic partial matching problem between closed curves. We introduce Double Cyclic Dynamic Time Warping for the scenario where a collision event yields a single boundary that encloses multiple touching cells and that needs to be cut into separate cell boundaries. The resulting individual boundaries may consist of segments to be connected to produce closed curves that match well with the individual cell boundaries before the collision event. We show how to convert this partial-curve matching problem into a shortest path problem that we then solve efficiently by reusing the computed shortest path tree. We also use our shortest path algorithm to fill the gaps between the segments of the target curves. Quantitative results demonstrate the benefit of our method by showing maintained accurate recognition of individual cell boundaries across 8068 images containing multiple cell interactions.
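The elastic matching at the heart of this method builds on dynamic time warping. Below is a minimal sketch of standard DTW between two sequences of 2-D boundary points; the paper's Double Cyclic variant and its shortest-path-tree reuse are not reproduced here.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic time warping cost between two sequences of 2-D points."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
            # extend the cheapest of the three allowed alignment moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

curve = [(0, 0), (1, 0), (2, 1), (3, 1)]
shifted = [(0, 0), (1, 0), (1, 0), (2, 1), (3, 1)]  # same shape, one repeated point
print(dtw(curve, shifted))  # → 0.0: repetition is absorbed by the elastic alignment
```

The elasticity (one point matching several) is exactly what makes DTW-style matching robust to cells that deform between frames.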


Human Factors in Computing Systems | 2017

CrowdVerge: Predicting If People Will Agree on the Answer to a Visual Question

Danna Gurari; Kristen Grauman

Visual question answering systems empower users to ask any question about any image and receive a valid answer. However, existing systems do not yet account for the fact that a visual question can lead to a single answer or multiple different answers. While a crowd often agrees, disagreements do arise for many reasons including that visual questions are ambiguous, subjective, or difficult. We propose a model, CrowdVerge, for automatically predicting from a visual question whether a crowd would agree on one answer. We then propose how to exploit these predictions in a novel application to efficiently collect all valid answers to visual questions. Specifically, we solicit fewer human responses when answer agreement is expected and more human responses otherwise. Experiments on 121,811 visual questions asked by sighted and blind people show that, compared to existing crowdsourcing systems, our system captures the same answer diversity with typically 14-23% less crowd involvement.
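The allocation strategy described above, soliciting fewer answers when agreement is expected, can be sketched as a simple budget rule. Here `predict_agreement` is a stand-in for the CrowdVerge classifier, and all names and counts are illustrative.

```python
from collections import Counter

def collect_answers(ask_worker, predict_agreement, min_n=3, max_n=10):
    """Solicit fewer crowd answers when agreement is expected, more otherwise."""
    budget = min_n if predict_agreement() else max_n
    answers = [ask_worker() for _ in range(budget)]
    return Counter(answers)

# toy workers: everyone answers "red" for an unambiguous visual question
tally = collect_answers(lambda: "red", lambda: True)
print(tally.most_common(1)[0])  # → ('red', 3)
```

The savings come entirely from the smaller budget on questions predicted to converge; disagreement-prone questions still receive the full allotment.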


Workshop on Applications of Computer Vision | 2013

SAGE: An approach and implementation empowering quick and reliable quantitative analysis of segmentation quality

Danna Gurari; Suele Ki Kim; Eugene Yang; Brett C. Isenberg; Tuan A. Pham; Alberto Purwada; Patricia Solski; Matthew L. Walker; Joyce Wong; Margrit Betke

Finding the outline of an object in an image is a fundamental step in many vision-based applications. It is important to demonstrate that the segmentation found accurately represents the contour of the object in the image. The discrepancy measure model for segmentation analysis focuses on selecting an appropriate discrepancy measure to compute a score that indicates how similar a query segmentation is to a gold standard segmentation. Observing that the score depends on the gold standard segmentation, we propose a framework that expands this approach by introducing the consideration of how to establish the gold standard segmentation. The framework shows how to obtain project-specific performance indicators in a principled way that links annotation tools, fusion methods, and evaluation algorithms into a unified model we call SAGE. We also describe a freely available implementation of SAGE that enables quick segmentation validation against either a single annotation or a fused annotation. Finally, three studies are presented to highlight the impact of annotation tools, annotators, and fusion methods on establishing trusted gold standard segmentations for cell and artery images.
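As one concrete example of the kind of discrepancy measure such a framework plugs in, here is a sketch of the symmetric Hausdorff distance between two boundary point sets. The points are illustrative; the paper's specific measures and tooling are not reproduced.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets, a common
    boundary-based discrepancy measure for comparing segmentations."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

gold = [(0, 0), (0, 1), (1, 1), (1, 0)]
query = [(0, 0), (0, 1), (1, 1), (2, 0)]   # one displaced boundary point
print(hausdorff(gold, query))  # → 1.0
```

Because the score depends on which annotation serves as `gold`, the choice of how that gold standard is established (single annotator vs. fused annotations) directly changes the reported quality, which is the framework's central observation.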


Computer Vision and Pattern Recognition | 2016

ICORD: Intelligent Collection of Redundant Data — A Dynamic System for Crowdsourcing Cell Segmentations Accurately and Efficiently

Mehrnoosh Sameki; Danna Gurari; Margrit Betke

Segmentation is a fundamental step in analyzing biological structures in microscopy images. When state-of-the-art automated methods are found to produce inaccurate boundaries, interactive segmentation can be effective. Since the inclusion of domain experts is typically expensive and does not scale, crowdsourcing has been considered. Due to concerns about the quality of crowd work, quality control methods that rely on a fixed number of redundant annotations have been used. Here we introduce a collection strategy that dynamically assesses the quality of crowd work. We propose ICORD (Intelligent Collection Of Redundant annotation Data), a system that predicts the accuracy of a segmented region from analysis of (1) its geometric and intensity-based features and (2) the crowd workers' behavioral features. Based on this score, ICORD dynamically determines if the annotation accuracy is satisfactory or if a higher-quality annotation should be sought out in another round of crowdsourcing. We tested ICORD on phase contrast and fluorescence images of 270 cells. We compared the performance of ICORD and a popular baseline method for which we aggregated 1,350 crowd-drawn cell segmentations. Our results show that ICORD collects annotations both accurately and efficiently. Accuracy levels are within 3 percentage points of those of the baseline. More importantly, due to its dynamic nature, ICORD vastly outperforms the baseline method with respect to efficiency. ICORD only uses between 27% and 50% of the resources, i.e., collection time and cost, that the baseline method requires.
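The stop-or-resolicit logic described above can be sketched as a simple loop. Here `predict_accuracy` stands in for the system's learned accuracy predictor, and all names, thresholds, and round counts are illustrative.

```python
def dynamic_collect(solicit, predict_accuracy, threshold=0.9, max_rounds=3):
    """Accept a crowd segmentation once its predicted accuracy is high enough,
    otherwise pay for another annotation round (up to a fixed budget)."""
    best, best_score = None, -1.0
    for _ in range(max_rounds):
        annotation = solicit()
        score = predict_accuracy(annotation)
        if score > best_score:
            best, best_score = annotation, score
        if score >= threshold:
            break  # good enough: stop spending crowd time and money
    return best, best_score

# toy setup: predicted quality improves on the second round
rounds = iter([0.6, 0.95])
best, score = dynamic_collect(lambda: "mask", lambda a: next(rounds))
print(score)  # → 0.95
```

The efficiency gain comes from easy regions terminating after one round, while only hard regions consume the full redundancy budget that fixed-redundancy baselines spend everywhere.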


International Journal of Computer Vision | 2018

Predicting Foreground Object Ambiguity and Efficiently Crowdsourcing the Segmentation(s)

Danna Gurari; Kun He; Bo Xiong; Jianming Zhang; Mehrnoosh Sameki; Suyog Dutt Jain; Stan Sclaroff; Margrit Betke; Kristen Grauman

We propose the ambiguity problem for the foreground object segmentation task and motivate the importance of estimating and accounting for this ambiguity when designing vision systems. Specifically, we distinguish between images which lead multiple annotators to segment different foreground objects (ambiguous) versus minor inter-annotator differences of the same object. Taking images from eight widely used datasets, we crowdsource labeling the images as “ambiguous” or “not ambiguous” to segment in order to construct a new dataset we call STATIC. Using STATIC, we develop a system that automatically predicts which images are ambiguous. Experiments demonstrate the advantage of our prediction system over existing saliency-based methods on images from vision benchmarks and images taken by blind people who are trying to recognize objects in their environment. Finally, we introduce a crowdsourcing system to achieve cost savings for collecting the diversity of all valid “ground truth” foreground object segmentations by collecting extra segmentations only when ambiguity is expected. Experiments show our system eliminates up to 47% of human effort compared to existing crowdsourcing methods with no loss in capturing the diversity of ground truths.


Conference on Computers and Accessibility | 2018

BrowseWithMe: An Online Clothes Shopping Assistant for People with Visual Impairments

Abigale Stangl; Esha Kothari; Suyog Dutt Jain; Tom Yeh; Kristen Grauman; Danna Gurari

Our interviews with people who have visual impairments show clothes shopping is an important activity in their lives. Unfortunately, clothes shopping web sites remain largely inaccessible. We propose design recommendations to address online accessibility issues reported by visually impaired study participants and an implementation, which we call BrowseWithMe, to address these issues. BrowseWithMe employs artificial intelligence to automatically convert a product web page into a structured representation that enables a user to interactively ask the BrowseWithMe system what the user wants to learn about a product (e.g., What is the price? Can I see a magnified image of the pants?). This enables people to be active solicitors of the specific information they are seeking rather than passive listeners of unparsed information. Experiments demonstrate BrowseWithMe can make online clothes shopping more accessible and produce accurate image descriptions.


Thesis | 2005

Harmonic Imaging Using a Mechanical Sector, B-Mode Ultrasound System

Danna Gurari

Advisor: Dr. William D. Richard | August 2005 | Saint Louis, Missouri. An ultrasound imaging system transmits ultrasound waves into the human body, collects the reflections, processes them, and displays them on a computer screen as a grayscale image. The standard approach to ultrasound imaging uses the fundamental frequency of the reflected signal to form images. However, it has been shown that images generated using the harmonic content have improved resolution as well as reduced noise, resulting in clearer images. Although harmonic imaging has been shown to return improved images, this had never been shown with a B-mode, mechanical sector ultrasound system. In this thesis, we demonstrate such a system: first a discussion of the theory of harmonic imaging, then a description of the ultrasound system used, and finally experimental results.


Computer Vision and Pattern Recognition | 2016

Pull the Plug? Predicting If Computers or Humans Should Segment Images

Danna Gurari; Suyog Dutt Jain; Margrit Betke; Kristen Grauman


Computer Vision and Pattern Recognition | 2018

VizWiz Grand Challenge: Answering Visual Questions From Blind People

Danna Gurari; Qing Li; Abigale Stangl; Anhong Guo; Chi Lin; Kristen Grauman; Jiebo Luo; Jeffrey P. Bigham

Collaboration


Explore Danna Gurari's collaborations.

Top Co-Authors

Kristen Grauman

University of Texas at Austin


Joyce Wong

Pennsylvania State University


Suyog Dutt Jain

University of Texas at Austin


Abigale Stangl

University of Colorado Boulder
