Network


External collaborations at the country level.

Hotspot


Research topics in which Sachin Mehta is active.

Publication


Featured research published by Sachin Mehta.


workshop on applications of computer vision | 2016

Region graph based method for multi-object detection and tracking using depth cameras

Sachin Mehta; Balakrishnan Prabhakaran

In this paper, we propose a multi-object detection and tracking method using depth cameras. Depth maps are very noisy, which obscures object detection. We first propose a region-based method to suppress high-magnitude noise that cannot be filtered using spatial filters. Second, the proposed method detects Regions of Interest via temporal learning, which are then tracked using a weighted graph-based approach. We demonstrate the performance of the proposed method on standard depth camera datasets with and without object occlusions. Experimental results show that the proposed method is able to suppress high-magnitude noise in depth maps and detect/track the objects (with and without occlusion).
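The sketch below illustrates the general idea of weighted graph-based frame-to-frame association between detected regions; the Euclidean-distance cost, the distance threshold, and the use of Hungarian matching via SciPy are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of weighted graph-based association between regions
# detected in consecutive depth frames; cost and threshold are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_regions(prev_centroids, curr_centroids, max_dist=50.0):
    """Match regions of interest between consecutive frames.

    prev_centroids: array of shape (N, 2), curr_centroids: array of shape (M, 2),
    each row an (x, y) region centroid. Returns a list of (prev_idx, curr_idx) matches.
    """
    if len(prev_centroids) == 0 or len(curr_centroids) == 0:
        return []
    # Edge weights of the bipartite graph: Euclidean centroid distance.
    cost = np.linalg.norm(
        prev_centroids[:, None, :] - curr_centroids[None, :, :], axis=-1
    )
    rows, cols = linear_sum_assignment(cost)
    # Discard matches whose weight exceeds a plausibility threshold.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```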


international conference on image processing | 2014

3D content fingerprinting

Sachin Mehta; Balakrishnan Prabhakaran

A fingerprint is a set of features that uniquely characterizes a video. The aim of content fingerprinting is to identify duplicate videos on the Internet. In this paper, a method for content fingerprinting of Depth-Image-Based-Rendering (DIBR) 3D videos is proposed. The proposed method is a two-pronged approach: (i) a histogram-based global fingerprint and (ii) a keypoint-based local fingerprint. Though the global fingerprint is fast and robust towards DIBR 3D pre-processing, it is not robust against severe distortions such as changes in brightness. To make the proposed method robust against such severe distortions, we complement the global fingerprint with a widely used keypoint-based local fingerprint. Experimental results show that the proposed two-pass method is as robust as the keypoint-based local fingerprinting method. Additionally, the proposed method improves the video matching time of the keypoint-based local fingerprint method by 60% to 100%.
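A minimal sketch of the two-pass idea follows: a cheap global histogram fingerprint is tried first, and only inconclusive queries fall back to a keypoint-based local fingerprint. The function names, the ORB descriptor choice, and the thresholds are assumptions for illustration, not the fingerprints used in the paper.

```python
# Two-pass fingerprint matching sketch (assumed design, not the paper's code).
import cv2
import numpy as np

def global_fingerprint(frame, bins=64):
    # Normalized grayscale histogram as a coarse per-frame signature.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [bins], [0, 256]).flatten()
    return hist / (hist.sum() + 1e-9)

def local_fingerprint(frame):
    # Keypoint-based signature (ORB descriptors) for the expensive second pass.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, descriptors = cv2.ORB_create().detectAndCompute(gray, None)
    return descriptors

def frames_match(query_frame, reference_frame, hist_threshold=0.1, min_matches=30):
    # Pass 1: cheap histogram comparison (chi-square distance).
    d = cv2.compareHist(
        global_fingerprint(query_frame).astype(np.float32),
        global_fingerprint(reference_frame).astype(np.float32),
        cv2.HISTCMP_CHISQR,
    )
    if d < hist_threshold:
        return True
    # Pass 2: keypoint matching when the global pass is inconclusive,
    # e.g. under severe brightness changes.
    q, r = local_fingerprint(query_frame), local_fingerprint(reference_frame)
    if q is None or r is None:
        return False
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(q, r)
    return len(matches) > min_matches
```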


medical image computing and computer-assisted intervention | 2018

Y-Net: Joint Segmentation and Classification for Diagnosis of Breast Biopsy Images.

Sachin Mehta; Ezgi Mercan; Jamen Bartlett; Donald L. Weaver; Joann G. Elmore; Linda G. Shapiro

In this paper, we introduce a conceptually simple network for generating discriminative tissue-level segmentation masks for the purpose of breast cancer diagnosis. Our method efficiently segments different types of tissues in breast biopsy images while simultaneously predicting a discriminative map for identifying important areas in an image. Our network, Y-Net, extends and generalizes U-Net by adding a parallel branch for discriminative map generation and by supporting convolutional block modularity, which allows the user to adjust network efficiency without altering the network topology. Y-Net delivers state-of-the-art segmentation accuracy while learning 6.6x fewer parameters than its closest competitors. The addition of descriptive power from Y-Net's discriminative segmentation masks improves diagnostic classification accuracy by 7% over state-of-the-art methods for diagnostic classification. Source code is available at: this https URL.
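The rough PyTorch sketch below shows the two-headed idea in miniature: a shared encoder feeds both a segmentation decoder and a discriminative (classification) branch. The layer sizes and block choices are placeholders, not the actual Y-Net configuration from the paper.

```python
# Toy two-headed network in the spirit of a shared encoder with parallel
# segmentation and discriminative outputs; all sizes are placeholders.
import torch
import torch.nn as nn

class TinyYNetSketch(nn.Module):
    def __init__(self, in_ch=3, n_tissue_classes=8, n_diag_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Branch 1: per-pixel tissue segmentation (upsampled to input size).
        self.seg_head = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, n_tissue_classes, 1),
        )
        # Branch 2: image-level discriminative prediction.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_diag_classes)
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)

# seg_logits, diag_logits = TinyYNetSketch()(torch.randn(1, 3, 128, 128))
```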


international conference on pattern recognition applications and methods | 2018

Automated Diagnosis of Breast Cancer and Pre-invasive Lesions on Digital Whole Slide Images.

Ezgi Mercan; Sachin Mehta; Jamen Bartlett; Donald L. Weaver; Joann G. Elmore; Linda G. Shapiro

Digital whole slide imaging has the potential to change diagnostic pathology by enabling the use of computer-aided diagnosis systems. To this end, we used a dataset of 240 digital slides that were interpreted and diagnosed by an expert panel to develop and evaluate image features for diagnostic classification of breast biopsy whole slides into four categories: benign, atypia, ductal carcinoma in situ, and invasive carcinoma. Starting with a tissue labeling step, we developed features that describe the tissue composition of the image and the structural changes. In this paper, we first introduce two models for the semantic segmentation of the regions of interest into tissue labels: an SVM-based model and a CNN-based model. Then, we define an image feature that consists of superpixel tissue label frequency and co-occurrence histograms based on the tissue label segmentations. Finally, we use our features in two diagnostic classification schemes: a four-class classification, and an alternative one-diagnosis-at-a-time classification that starts with invasive versus benign and ends with atypia versus ductal carcinoma in situ (DCIS). We show that our features achieve competitive results compared to human performance on the same dataset. Especially at the critical atypia vs. DCIS threshold, our system outperforms pathologists by achieving 83% accuracy.
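The sketch below illustrates the "label frequency plus co-occurrence" feature idea: given a per-pixel tissue-label map and a superpixel map, count how often each tissue label occurs and how often label pairs occur in adjacent superpixels. The adjacency definition (horizontal neighbors) and normalization here are assumed readings of the description, not the paper's exact feature.

```python
# Assumed sketch of superpixel tissue-label frequency and co-occurrence features.
import numpy as np

def tissue_features(tissue_labels, superpixels, n_labels):
    """tissue_labels, superpixels: 2-D integer arrays of the same shape."""
    # Majority tissue label per superpixel.
    sp_ids = np.unique(superpixels)
    sp_label = {s: np.bincount(tissue_labels[superpixels == s],
                               minlength=n_labels).argmax() for s in sp_ids}

    # Frequency histogram over superpixel labels.
    freq = np.bincount(list(sp_label.values()), minlength=n_labels).astype(float)
    freq /= freq.sum()

    # Co-occurrence histogram over horizontally adjacent superpixels.
    cooc = np.zeros((n_labels, n_labels))
    left, right = superpixels[:, :-1], superpixels[:, 1:]
    boundary = left != right
    for a, b in zip(left[boundary], right[boundary]):
        cooc[sp_label[a], sp_label[b]] += 1
    if cooc.sum() > 0:
        cooc /= cooc.sum()
    return np.concatenate([freq, cooc.ravel()])
```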


Multimedia Systems | 2016

Scene-based fingerprinting method for traitor tracing

Sachin Mehta; Rajarathnam Nallusamy; Balakrishnan Prabhakaran

In this paper, a scene-based fingerprinting method for traitor tracing is proposed which is computationally less complex and handles a large user group, say 10^11 users, while requiring only a few frames to embed the watermark. The proposed method uses a QR code as the watermark due to its three main features: (1) inherent templates, (2) noise resiliency, and (3) compact size. The proposed method creates the QR code watermark on-the-fly, which is then segmented and embedded in parallel inside the scenes of the video using the watermarking key. The features of the QR code, the segmentation, and the watermarking key not only help the proposed method support a large user group but also make it computationally fast. Further, synchronization issues may arise due to the addition and deletion of scenes. To avoid such scenarios, the proposed method matches the inherent templates present in the QR code with the templates present in the segments of the extracted watermark. Experimental results show that the proposed method is computationally efficient and is robust against attacks such as collusion, scene dropping, scene addition, and other common signal processing attacks.
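A very rough sketch of the segmentation-plus-key idea follows: a precomputed binary QR-code watermark is split into tiles, and a key-seeded permutation decides which scene carries which tile. The LSB embedding used here is only a stand-in; the paper's actual embedding and key scheme are not reproduced.

```python
# Assumed sketch: splitting a QR watermark and assigning segments to scenes.
import numpy as np

def split_watermark(qr_matrix, n_segments):
    # Split the binary QR matrix into horizontal strips, one per scene slot.
    return np.array_split(qr_matrix, n_segments, axis=0)

def assign_segments_to_scenes(n_segments, n_scenes, key):
    # Key-seeded permutation (requires n_scenes >= n_segments):
    # which scene carries which watermark segment.
    rng = np.random.default_rng(key)
    return rng.permutation(n_scenes)[:n_segments]

def embed_segment_lsb(frame_gray, segment):
    # Toy embedding: write the segment bits (0/1) into the least-significant
    # bits of the top-left corner of a uint8 grayscale frame.
    out = frame_gray.copy()
    h, w = segment.shape
    out[:h, :w] = (out[:h, :w] & 0xFE) | segment.astype(out.dtype)
    return out
```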


workshop on applications of computer vision | 2018

Learning to Segment Breast Biopsy Whole Slide Images

Sachin Mehta; Ezgi Mercan; Jamen Bartlett; Donald L. Weaver; Joann G. Elmore; Linda G. Shapiro


workshop on applications of computer vision | 2018

DeepSolarEye: Power Loss Prediction and Weakly Supervised Soiling Localization via Fully Convolutional Networks for Solar Panels

Sachin Mehta; Amar P. Azad; Saneem A. Chemmengath; Vikas Raykar; Shivkumar Kalyanaraman


european conference on computer vision | 2018

ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation

Sachin Mehta; Mohammad Rastegari; Anat Caspi; Linda G. Shapiro; Hannaneh Hajishirzi


empirical methods in natural language processing | 2018

Pyramidal Recurrent Unit for Language Modeling

Sachin Mehta; Rik Koncel-Kedziorski; Mohammad Rastegari; Hannaneh Hajishirzi


arXiv: Computer Vision and Pattern Recognition | 2017

Identifying Most Walkable Direction for Navigation in an Outdoor Environment.

Sachin Mehta; Hannaneh Hajishirzi; Linda G. Shapiro

Collaboration


Sachin Mehta's collaborations and top co-authors.

Top Co-Authors

Ezgi Mercan (University of Washington)