Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yachna Sharma is active.

Publication


Featured research published by Yachna Sharma.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2008

Simple quantification of multiplexed Quantum Dot staining in clinical tissue samples

Matthew L. Caldwell; Richard A. Moffitt; Jian Liu; R. Mitchell Parry; Yachna Sharma; May D. Wang

In this paper, we present a simple method for the processing and quantification of multiplexed Quantum Dot (QD) labeled images of clinical cancer tissue samples. QDs provide several features that make them ideal for reliable quantification, including long-term signal stability, high signal-to-noise ratios, and narrow emission bandwidths. Deconvolution of QD spectra is accomplished in a batch mode in which unmixing parameters are preserved across samples to allow quantitative and reproducible comparisons. After unmixing the QD images, we segment each one to exclude acellular regions, and we use a simple average intensity to quantify the level of QD staining for each image. We illustrate the viability of this approach by testing it on 28 tissue samples from a tissue microarray. We show that with as few as two QD protein targets (MDM-2 and B-actin), the Renal Cell Carcinoma (RCC) samples are distinguishable from adjacent normal tissue samples: a simple linear discriminant yields 100% classification of 25 RCC samples and 3 normal samples. This suggests that multiplexed QDs can be used to distinguish RCC from otherwise healthy tissue. We expect to apply this work to larger panels of more robust QD biomarker targets to aid in clinical decision-making for the diagnosis and prognosis of diseases such as cancer.
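
The quantification step described above reduces to per-channel intensity averaging followed by a linear discriminant. A minimal Python sketch of that idea follows; the function names, array layouts, and use of scikit-learn's LinearDiscriminantAnalysis are illustrative assumptions, not the authors' implementation.

    # Minimal sketch, not the authors' code: score each sample by the mean QD
    # intensity of two unmixed marker channels inside a cellular mask, then
    # separate RCC from normal tissue with a linear discriminant.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def mean_staining(channel_img, cell_mask):
        # Average unmixed QD intensity over cellular (non-excluded) pixels.
        return channel_img[cell_mask].mean()

    def quantify_samples(mdm2_imgs, bactin_imgs, cell_masks):
        # Hypothetical inputs: lists of 2-D numpy arrays, one set per tissue sample.
        return np.array([(mean_staining(m, k), mean_staining(b, k))
                         for m, b, k in zip(mdm2_imgs, bactin_imgs, cell_masks)])

    def rcc_vs_normal(features, labels):
        # labels: 1 for RCC, 0 for adjacent normal tissue.
        return LinearDiscriminantAnalysis().fit(features, labels)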


International Symposium on Biomedical Imaging | 2014

Automated surgical OSATS prediction from videos

Yachna Sharma; Thomas Plötz; Nils Hammerla; Sebastian Mellor; Roisin McNaney; Patrick Olivier; Sandeep Deshmukh; A. W. McCaskie; Irfan A. Essa

The assessment of surgical skills is an essential part of medical training. The prevalent manual evaluations by expert surgeons are time-consuming, and their outcomes often vary substantially from one observer to another. We present a video-based framework for automated evaluation of surgical skills based on the Objective Structured Assessment of Technical Skills (OSATS) criteria. We encode the motion dynamics via frame kernel matrices and represent the motion granularity by texture features. Linear discriminant analysis is used to derive a reduced-dimensionality feature space, followed by linear regression to predict OSATS skill scores. We achieve statistically significant correlation (p-value < 0.01) between the ground truth (given by domain experts) and the OSATS scores predicted by our framework.
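
A minimal sketch of this kind of pipeline is given below, under stated assumptions: per-frame motion descriptors are already available, the frame kernel matrix is taken to be an RBF similarity matrix, and a simple histogram of kernel values stands in for the paper's texture features.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics.pairwise import rbf_kernel

    def frame_kernel_features(frame_descriptors, bins=32):
        # frame_descriptors: (num_frames, d) array for one video.
        K = rbf_kernel(frame_descriptors)                # frame-vs-frame similarity
        hist, _ = np.histogram(K, bins=bins, range=(0.0, 1.0), density=True)
        return hist                                      # coarse summary of K's texture

    def fit_osats_predictor(videos, skill_classes, osats_scores):
        # videos: list of (T_i, d) arrays; skill_classes: discrete labels for LDA;
        # osats_scores: ground-truth scores given by domain experts.
        X = np.vstack([frame_kernel_features(v) for v in videos])
        lda = LinearDiscriminantAnalysis().fit(X, skill_classes)
        reg = LinearRegression().fit(lda.transform(X), osats_scores)
        return lda, reg                                  # apply in sequence to new videos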


Computer Assisted Radiology and Surgery | 2016

Automated video-based assessment of surgical skills for training and evaluation in medical schools

Aneeq Zia; Yachna Sharma; Vinay Bettadapura; Eric L. Sarin; Thomas Ploetz; Mark A. Clements; Irfan A. Essa

Purpose: Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty. For each surgical trainee, a supervisor has to observe the trainee in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead. All of these approaches, however, are still very time-consuming and involve human bias. In this paper, we present an automated system for surgical skills assessment by analyzing video data of surgical activities.
Method: We compare different techniques for video-based surgical skill evaluation. We use techniques that capture the motion information at a coarser granularity using symbols or words, extract motion dynamics using textural patterns in a frame kernel matrix, and analyze fine-grained motion information using frequency analysis.
Results: We were able to classify surgeons into different skill levels with high accuracy. Our results indicate that fine-grained analysis of motion dynamics via frequency analysis is most effective in capturing the skill-relevant information in surgical videos.
Conclusion: Our evaluations show that frequency features perform better than motion texture features, which in turn perform better than symbol- or word-based features. Put succinctly, skill classification accuracy is positively correlated with motion granularity, as demonstrated by our results on two challenging video datasets.
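
The coarse symbol/word representation compared in this study can be sketched as follows, assuming per-frame motion feature vectors are available; the vocabulary size and k-means settings are arbitrary choices for illustration.

    import numpy as np
    from sklearn.cluster import KMeans

    def learn_motion_vocabulary(training_videos, num_words=50, seed=0):
        # training_videos: list of (T_i, d) arrays of per-frame motion features.
        all_frames = np.vstack(training_videos)
        return KMeans(n_clusters=num_words, n_init=10, random_state=seed).fit(all_frames)

    def word_histogram(video, vocabulary):
        words = vocabulary.predict(video)                # one "word" per frame
        hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
        return hist / hist.sum()                         # normalized bag of motion words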


Medical Image Computing and Computer Assisted Intervention | 2015

Automated Assessment of Surgical Skills Using Frequency Analysis

Aneeq Zia; Yachna Sharma; Vinay Bettadapura; Eric L. Sarin; Mark A. Clements; Irfan A. Essa

We present an automated framework for visual assessment of the expertise level of surgeons using the Objective Structured Assessment of Technical Skills (OSATS) criteria. Video analysis techniques for extracting motion quality via frequency coefficients are introduced. The framework is tested on videos of medical students with different expertise levels performing basic surgical tasks in a surgical training lab setting. We demonstrate that transforming the sequential time data into frequency components effectively extracts the useful information differentiating between the different skill levels of the surgeons. The results show significant performance improvements using DFT and DCT coefficients over known state-of-the-art techniques.
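
A minimal sketch of the frequency-feature extraction is given below, under the assumptions that a 1-D motion time series is available and that only the lowest DFT magnitudes and DCT coefficients are kept; the number of coefficients is an arbitrary choice, not the paper's setting.

    import numpy as np
    from scipy.fft import fft, dct

    def frequency_features(motion_series, num_coeffs=20):
        series = np.asarray(motion_series, dtype=float)
        dft_part = np.abs(fft(series))[:num_coeffs]         # low-frequency DFT magnitudes
        dct_part = dct(series, norm='ortho')[:num_coeffs]   # low-order DCT coefficients
        return np.concatenate([dft_part, dct_part])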


Journal of Pathology Informatics | 2011

Feasibility analysis of high resolution tissue image registration using 3-D synthetic data

Yachna Sharma; Richard A. Moffitt; Todd H. Stokes; Qaiser Chaudry; May D. Wang

Background: Registration of high-resolution tissue images is a critical step in the 3-D analysis of protein expression. Because the distance between images (~4-5 μm thickness of a tissue section) is nearly the size of the objects of interest (~10-20 μm cancer cell nucleus), a given object is often not present in both of two adjacent images. Without consistent correspondence of objects between images, registration becomes a difficult task. This work assesses the feasibility of current registration techniques for such images.
Methods: We generated high-resolution synthetic 3-D image data sets emulating the constraints in real data. We applied multiple registration methods to the synthetic image data sets and assessed the registration performance of three techniques (mutual information (MI), the kernel density estimate (KDE) method [1], and principal component analysis (PCA)) at various slice thicknesses (in increments of 1 μm) in order to quantify the limitations of each method.
Results: Our analysis shows that PCA, when combined with the KDE method based on nuclei centers, aligns images corresponding to 5 μm thick sections with acceptable accuracy. We also note that registration error increases rapidly with increasing distance between images, and that choosing feature points that are conserved between slices improves performance.
Conclusions: We used simulation to help select appropriate features and methods for image registration by estimating best-case-scenario errors for given data constraints in histological images. The results of this study suggest that much of the difficulty of stained tissue registration can be reduced to the problem of accurately identifying feature points, such as the centers of nuclei.
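
A minimal sketch of the PCA-style alignment step evaluated here is shown below, assuming nuclei centers have already been detected in both slices; the KDE and mutual-information variants are not shown.

    import numpy as np

    def pca_align(moving_pts, fixed_pts):
        # moving_pts, fixed_pts: (N, 2) and (M, 2) arrays of nuclei centers.
        mu_m, mu_f = moving_pts.mean(axis=0), fixed_pts.mean(axis=0)
        _, _, Vm = np.linalg.svd(moving_pts - mu_m, full_matrices=False)  # rows = axes
        _, _, Vf = np.linalg.svd(fixed_pts - mu_f, full_matrices=False)
        R = Vf.T @ Vm                                  # rotate moving axes onto fixed axes
        if np.linalg.det(R) < 0:                       # avoid a reflection
            Vm[-1] *= -1
            R = Vf.T @ Vm
        return (moving_pts - mu_m) @ R.T + mu_f        # moving points in the fixed frame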


International Conference of the IEEE Engineering in Medicine and Biology Society | 2010

Automated classification of renal cell carcinoma subtypes using bag-of-features

S. Hussain Raza; R. Mitchell Parry; Yachna Sharma; Qaiser Chaudry; Richard A. Moffitt; A. N. Young; May D. Wang

Color variation in medical images degrades the classification performance of computer aided diagnosis systems. Traditionally, color segmentation algorithms mitigate this variability and improve performance. However, consistent and robust segmentation remains an open research problem. In this study, we avoid the tenuous phase of color segmentation by adapting a bag-of-features approach using scale invariant features for classification of renal cell carcinoma subtypes. Previous work shows that features from each subtype match those from expertly chosen template images. In this paper, we show that the performance of this match-based methodology greatly depends on the quality of the template images. To avoid this uncertainty, we propose a bag-of-features approach that does not require expert knowledge and instead learns a “vocabulary” of morphological characteristics from training data. We build a support vector machine using feature histograms and evaluate this method using 40 iterations of 3-fold cross validation. We achieve classification accuracy above 90% for a heterogeneous dataset labeled by an expert pathologist, showing its potential for future clinical applications.
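
A minimal sketch of a bag-of-features pipeline of this kind is given below, assuming OpenCV SIFT descriptors, a k-means vocabulary, and a linear SVM; the vocabulary size and the paper's cross-validation protocol are omitted.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    sift = cv2.SIFT_create()

    def descriptors(gray_img):
        _, desc = sift.detectAndCompute(gray_img, None)
        return desc if desc is not None else np.empty((0, 128), np.float32)

    def build_vocabulary(train_imgs, vocab_size=100, seed=0):
        all_desc = np.vstack([descriptors(im) for im in train_imgs])
        return KMeans(n_clusters=vocab_size, n_init=10, random_state=seed).fit(all_desc)

    def feature_histogram(gray_img, vocabulary):
        words = vocabulary.predict(descriptors(gray_img).astype(np.float64))
        hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    def train_subtype_classifier(train_imgs, labels, vocabulary):
        X = np.vstack([feature_histogram(im, vocabulary) for im in train_imgs])
        return SVC(kernel='linear').fit(X, labels)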


International Conference of the IEEE Engineering in Medicine and Biology Society | 2009

Automated classification of renal cell carcinoma subtypes using scale invariant feature transform

S. Hussain Raza; Yachna Sharma; Qaiser Chaudry; Andrew N. Young; May D. Wang

Analyzing tissue biopsies is a challenging and time-consuming task for pathologists, and it suffers from intra- and inter-user variability. Computer-assisted diagnosis (CAD) helps to reduce such variations and speed up the diagnostic process. In this paper, we propose an automatic computer-assisted diagnostic system for renal cell carcinoma subtype classification using scale invariant features. We capture the morphological distinctness of the various subtypes and use it to classify a heterogeneous data set of renal cell carcinoma biopsy images. Our technique does not require color segmentation and minimizes human intervention. We circumvent user subjectivity using automated analysis and cater for intra-class heterogeneities using multiple class templates. We achieve a classification accuracy of 83% using a Bayesian classifier.
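
A minimal sketch of template-based SIFT matching with a Bayesian classifier is shown below, under assumptions: Lowe's ratio test for matching, per-template match counts as features, and Gaussian naive Bayes standing in for the paper's Bayesian classifier.

    import cv2
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    def match_count(query_desc, template_desc, ratio=0.75):
        pairs = matcher.knnMatch(query_desc, template_desc, k=2)
        return sum(1 for p in pairs
                   if len(p) == 2 and p[0].distance < ratio * p[1].distance)

    def template_match_features(gray_img, template_descs):
        # template_descs: list of SIFT descriptor arrays, one per class template.
        _, desc = sift.detectAndCompute(gray_img, None)
        return [match_count(desc, t) for t in template_descs]

    # Training (hypothetical): rows of X are per-template match counts, y are subtypes.
    # clf = GaussianNB().fit(X, y)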


Bioinformatics and Bioengineering | 2008

Improving renal cell carcinoma classification by automatic region of interest selection

Qaiser Chaudry; Syed Hussain Raza; Yachna Sharma; Andrew N. Young; May D. Wang

In this paper, we present an improved automated system for classification of pathological image data of renal cell carcinoma. The task of analyzing tissue biopsies, generally performed manually by expert pathologists, is extremely challenging due to the variability in tissue morphology, the preparation of the tissue specimen, and the image acquisition process. Due to the complexity of this task and the heterogeneity of patient tissue, the process suffers from inter-observer and intra-observer variability. In continuation of our previous work, which proposed a knowledge-based automated system, we observe that real-life clinical biopsy images containing necrotic regions and glands significantly degrade the classification process. Following the pathologist's technique of focusing on selected regions of interest (ROI), we propose a simple ROI selection process that automatically rejects the glands and necrotic regions, thereby improving the classification accuracy. Using this technique, we were able to improve the classification accuracy from 90% to 95% on a significantly heterogeneous image data set.
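
The abstract does not spell out the ROI criterion, so the sketch below is only a generic stand-in: it rejects pale, weakly stained pixels as a rough proxy for gland lumina and necrotic areas; all thresholds are assumptions.

    import cv2
    import numpy as np

    def tissue_roi_mask(bgr_img, min_saturation=40, max_value=230):
        hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
        sat, val = hsv[:, :, 1], hsv[:, :, 2]
        mask = (sat > min_saturation) & (val < max_value)   # keep well-stained pixels
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN, kernel)
        return mask.astype(bool)                            # classify only inside this ROI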


Computer Assisted Radiology and Surgery | 2018

Video and accelerometer-based motion analysis for automated surgical skills assessment

Aneeq Zia; Yachna Sharma; Vinay Bettadapura; Eric L. Sarin; Irfan A. Essa

Purpose: Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts' time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data).
Methods: We conduct a large study for basic surgical skill assessment on a dataset that contained video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy-based" features (approximate entropy and cross-approximate entropy), which quantify the amount of predictability and regularity of fluctuations in time-series data. The proposed features are compared to the existing methods of Sequential Motion Texture, Discrete Cosine Transform, and Discrete Fourier Transform for surgical skills assessment.
Results: We report the average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1% and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing, achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment.
Conclusion: Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.
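
Approximate entropy in its standard form can be sketched as follows; the embedding dimension m and tolerance r (0.2 times the standard deviation) are conventional defaults, not necessarily the paper's settings, and cross-approximate entropy follows the same recipe with embedded vectors drawn from two series.

    import numpy as np

    def approximate_entropy(series, m=2, r=None):
        u = np.asarray(series, dtype=float)
        if r is None:
            r = 0.2 * u.std()
        def phi(dim):
            n = len(u) - dim + 1
            x = np.array([u[i:i + dim] for i in range(n)])                 # embedded vectors
            dist = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)   # Chebyshev distance
            c = (dist <= r).sum(axis=1) / n                                # neighbour fractions
            return np.log(c).mean()
        return phi(m) - phi(m + 1)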


International Symposium on Biomedical Imaging | 2012

Development of a novel 2D color map for interactive segmentation of histological images

Qaiser Chaudry; Yachna Sharma; Syed Hussain Raza; May D. Wang

We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad hoc. Many algorithms, such as K-means, use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our interactive 2D color map segmentation technique, based on human color perception and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. Our results show good accuracy with low response and computational times, making the method feasible for user-interactive applications involving segmentation of histological images.
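
The sketch below is only an illustrative stand-in, not the authors' color map: it builds a 2-D hue-saturation histogram from the image and segments the pixels whose colors fall inside a user-selected region of that map; the choice of axes and the rectangular selection are assumptions.

    import cv2
    import numpy as np

    def color_map(bgr_img, bins=64):
        hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
        h, s = hsv[:, :, 0], hsv[:, :, 1]
        hist, _, _ = np.histogram2d(h.ravel(), s.ravel(),
                                    bins=bins, range=[[0, 180], [0, 256]])
        return hist                                  # shown to the user for selection

    def segment_selection(bgr_img, h_range, s_range):
        # h_range, s_range: (low, high) bounds the user picked on the color map.
        hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
        lower = np.array([h_range[0], s_range[0], 0], np.uint8)
        upper = np.array([h_range[1], s_range[1], 255], np.uint8)
        return cv2.inRange(hsv, lower, upper) > 0    # boolean mask of selected pixels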

Collaboration


Dive into Yachna Sharma's collaborations.

Top Co-Authors

May D. Wang, Georgia Institute of Technology
Irfan A. Essa, Georgia Institute of Technology
Qaiser Chaudry, Georgia Institute of Technology
Vinay Bettadapura, Georgia Institute of Technology
Aneeq Zia, Georgia Institute of Technology
Richard A. Moffitt, University of North Carolina at Chapel Hill
Syed Hussain Raza, Georgia Institute of Technology
Mark A. Clements, Georgia Institute of Technology