
Publication


Featured research published by Mehrnoosh Sameki.


Computer Vision and Pattern Recognition | 2015

Salient Object Subitizing

Jianming Zhang; Shugao Ma; Mehrnoosh Sameki; Stan Sclaroff; Margrit Betke; Zhe L. Lin; Xiaohui Shen; Brian L. Price; Radomir Mech

People can immediately and precisely identify that an image contains 1, 2, 3, or 4 items with a simple glance. This phenomenon, known as subitizing, inspires us to pursue the task of Salient Object Subitizing (SOS), i.e., predicting the existence and the number of salient objects in a scene using holistic cues. To study this problem, we propose a new image dataset annotated using an online crowdsourcing marketplace. We show that a proposed subitizing technique using an end-to-end Convolutional Neural Network (CNN) model performs significantly better than chance in matching human labels on our dataset. It attains 94% accuracy in detecting the existence of salient objects, and 42–82% accuracy (chance is 20%) in predicting the number of salient objects (1, 2, 3, and 4+), without resorting to any object localization process. Finally, we demonstrate the usefulness of the proposed subitizing technique in two computer vision applications: salient object detection and object proposal generation.
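Framed this way, SOS is an ordinary five-way image classification problem. The sketch below is a minimal illustration, not the paper's architecture: it assumes PyTorch is available and uses a pretrained ResNet-18 backbone as a stand-in, fine-tuned to predict one of the count categories 0, 1, 2, 3, or 4+.

```python
# Minimal sketch of subitizing as 5-way classification. Assumptions:
# PyTorch/torchvision are available, and ResNet-18 stands in for the
# paper's CNN, which is not specified here.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # count categories: 0, 1, 2, 3, 4+

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, count_labels):
    """One fine-tuning step on a batch of images and count labels (0..4)."""
    optimizer.zero_grad()
    logits = model(images)                  # (batch, 5) class scores
    loss = criterion(logits, count_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```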


Workshop on Applications of Computer Vision | 2015

How to Collect Segmentations for Biomedical Images? A Benchmark Evaluating the Performance of Experts, Crowdsourced Non-experts, and Algorithms

Danna Gurari; Diane H. Theriault; Mehrnoosh Sameki; Brett C. Isenberg; Tuan A. Pham; Alberto Purwada; Patricia Solski; Matthew L. Walker; Chentian Zhang; Joyce Wong; Margrit Betke

Analyses of biomedical images often rely on demarcating the boundaries of biological structures (segmentation). While numerous approaches are adopted to address the segmentation problem, including collecting annotations from domain experts and automated algorithms, the lack of comparative benchmarking makes it challenging to determine the current state of the art, recognize the limitations of existing approaches, and identify relevant future research directions. To provide practical guidance, we evaluated and compared the performance of trained experts, crowdsourced non-experts, and algorithms for annotating 305 objects from six datasets that include phase contrast, fluorescence, and magnetic resonance images. Compared to the gold standard established by expert consensus, we found the best annotators were experts, followed by non-experts, and then algorithms. This analysis revealed that paid online crowdsourced workers without domain-specific backgrounds are reliable annotators to include in a laboratory protocol for segmenting biomedical images. We also found that fusing the segmentations created by crowdsourced internet workers and algorithms yielded better results than segmentations created by a single crowdsourced or algorithmic annotation alone. We invite extensions of our work by sharing our datasets and associated segmentation annotations (http://www.cs.bu.edu/~betke/Biomedical Image Segmentation).
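The abstract reports that fusing crowd and algorithm segmentations improves results but does not fix a fusion rule. A minimal sketch of one plausible rule, a pixel-wise majority vote over binary masks, is shown below; the rule and the toy inputs are assumptions for illustration.

```python
# Pixel-wise majority-vote fusion of binary segmentation masks.
# This rule is assumed for illustration; the paper's exact fusion
# method is not reproduced here.
import numpy as np

def fuse_masks(masks):
    """Fuse binary H x W masks: a pixel is foreground if most masks agree."""
    stack = np.stack(masks).astype(np.float32)  # (n_annotations, H, W)
    return (stack.mean(axis=0) > 0.5).astype(np.uint8)

# Toy example: two crowd-drawn masks fused with one algorithmic mask.
crowd_a = np.array([[1, 1, 0], [0, 1, 0]], dtype=np.uint8)
crowd_b = np.array([[1, 0, 0], [0, 1, 1]], dtype=np.uint8)
algo    = np.array([[1, 1, 0], [0, 0, 1]], dtype=np.uint8)
print(fuse_masks([crowd_a, crowd_b, algo]))
# [[1 1 0]
#  [0 1 1]]
```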


Computer Vision and Pattern Recognition | 2016

ICORD: Intelligent Collection of Redundant Data — A Dynamic System for Crowdsourcing Cell Segmentations Accurately and Efficiently

Mehrnoosh Sameki; Danna Gurari; Margrit Betke

Segmentation is a fundamental step in analyzing biological structures in microscopy images. When state-of-the-art automated methods are found to produce inaccurate boundaries, interactive segmentation can be effective. Since the inclusion of domain experts is typically expensive and does not scale, crowdsourcing has been considered. Due to concerns about the quality of crowd work, quality control methods that rely on a fixed number of redundant annotations have been used. Here we introduce a collection strategy that dynamically assesses the quality of crowd work. We propose ICORD (Intelligent Collection Of Redundant annotation Data), a system that predicts the accuracy of a segmented region from analysis of (1) its geometric and intensity-based features and (2) the crowd worker's behavioral features. Based on this score, ICORD dynamically determines whether the annotation accuracy is satisfactory or whether a higher-quality annotation should be sought in another round of crowdsourcing. We tested ICORD on phase contrast and fluorescence images of 270 cells. We compared the performance of ICORD to a popular baseline method for which we aggregated 1,350 crowd-drawn cell segmentations. Our results show that ICORD collects annotations both accurately and efficiently. Its accuracy levels are within 3 percentage points of the baseline's. More importantly, due to its dynamic nature, ICORD vastly outperforms the baseline method with respect to efficiency: it uses only 27% to 50% of the resources, i.e., collection time and cost, that the baseline method requires.
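The dynamic collection strategy the abstract describes can be sketched as a simple loop: score each incoming annotation and stop as soon as the predicted quality clears a threshold. The callables below (request_annotation, predict_quality) are hypothetical stand-ins; ICORD's actual predictor is learned from region features and worker behavior.

```python
# Sketch of a dynamic redundant-annotation loop in the spirit of ICORD.
# `request_annotation` and `predict_quality` are hypothetical callables
# supplied by the caller, not part of the published system.
import random

def collect_segmentation(image, request_annotation, predict_quality,
                         threshold=0.8, max_rounds=5):
    """Request another crowd annotation only while predicted quality is low."""
    best, best_score = None, -1.0
    for _ in range(max_rounds):
        seg, behavior = request_annotation(image)   # one round of crowdsourcing
        score = predict_quality(seg, behavior)      # accuracy estimate in [0, 1]
        if score > best_score:
            best, best_score = seg, score
        if best_score >= threshold:                 # satisfactory: stop early
            break
    return best, best_score

# Toy usage with random stand-ins for the crowd and the learned predictor.
fake_request = lambda img: ("mask", {"draw_time_s": random.uniform(5, 60)})
fake_predict = lambda seg, behavior: random.random()
seg, score = collect_segmentation("cell_image.png", fake_request, fake_predict)
```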


International Journal of Computer Vision | 2018

Predicting Foreground Object Ambiguity and Efficiently Crowdsourcing the Segmentation(s)

Danna Gurari; Kun He; Bo Xiong; Jianming Zhang; Mehrnoosh Sameki; Suyog Dutt Jain; Stan Sclaroff; Margrit Betke; Kristen Grauman

We propose the ambiguity problem for the foreground object segmentation task and motivate the importance of estimating and accounting for this ambiguity when designing vision systems. Specifically, we distinguish between images that lead multiple annotators to segment different foreground objects (ambiguous) and images with only minor inter-annotator differences on the same object. Taking images from eight widely used datasets, we crowdsource labels for whether each image is "ambiguous" or "not ambiguous" to segment, in order to construct a new dataset we call STATIC. Using STATIC, we develop a system that automatically predicts which images are ambiguous. Experiments demonstrate the advantage of our prediction system over existing saliency-based methods on images from vision benchmarks and on images taken by blind people who are trying to recognize objects in their environment. Finally, we introduce a crowdsourcing system that achieves cost savings when collecting the diversity of all valid "ground truth" foreground object segmentations, by collecting extra segmentations only when ambiguity is expected. Experiments show our system eliminates up to 47% of the human effort required by existing crowdsourcing methods, with no loss in capturing the diversity of ground truths.
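The cost-saving step can be sketched as a budget rule: every image gets one segmentation, and only images the predictor flags as ambiguous get extras. The predicate and budget sizes below are hypothetical illustrations, not the paper's settings.

```python
# Sketch of ambiguity-gated annotation budgeting. `is_ambiguous` stands in
# for the paper's learned ambiguity predictor; `base` and `extra` are
# illustrative values, not the published configuration.
def plan_annotation_budget(images, is_ambiguous, base=1, extra=4):
    """Assign each image `base` annotators, plus `extra` when ambiguity is expected."""
    return {image: base + (extra if is_ambiguous(image) else 0)
            for image in images}

# Toy usage with a hypothetical predicate.
images = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]
fake_predictor = lambda name: name.endswith("2.jpg")
print(plan_annotation_budget(images, fake_predictor))
# {'img_001.jpg': 1, 'img_002.jpg': 5, 'img_003.jpg': 1}
```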


International Journal of Human-Computer Interaction | 2018

Exploration of Assistive Technologies Used by People with Quadriplegia Caused by Degenerative Neurological Diseases

Wenxin Feng; Mehrnoosh Sameki; Margrit Betke

Various assistive devices and interfaces for accessing the computer have been developed for people with severe motor impairments. This article explores how effective these technologies are for individuals with quadriplegia caused by degenerative neurological diseases. The following questions are studied: (1) What activities are performed? (2) What tools are used? (3) What are the advantages and limitations of these tools? (4) How do users learn about and choose assistive technologies? (5) Why are some technologies abandoned? Results of a qualitative study with 15 participants indicate that the participants have strong, unmet needs for efficient text entry and communication. A lack of information about technology options limited the choices of several of the participants. The study revealed that automated interface personalization and adaptation to disease progression should be important design goals for future assistive technologies that support users with quadriplegia caused by degenerative neurological diseases.


National Conference on Artificial Intelligence | 2015

Predicting Quality of Crowdsourced Image Segmentations from Crowd Behavior

Mehrnoosh Sameki; Danna Gurari; Margrit Betke


National Conference on Artificial Intelligence | 2016

Investigating the Influence of Data Familiarity to Improve the Design of a Crowdsourcing Image Annotation System

Danna Gurari; Mehrnoosh Sameki; Margrit Betke


National Conference on Artificial Intelligence | 2016

Rigorously Collecting Commonsense Judgments for Complex Question-Answer Content

Mehrnoosh Sameki; Aditya Barua; Praveen Paritosh


arXiv: Human-Computer Interaction | 2016

Dynamic Allocation of Crowd Contributions for Sentiment Analysis during the 2016 U.S. Presidential Election

Mehrnoosh Sameki; Mattia Gentil; Kate K. Mays; Lei Guo; Margrit Betke

Collaboration


Top co-authors of Mehrnoosh Sameki include Danna Gurari and Bo Xiong, both at the University of Texas at Austin.