Vikas Raykar
IBM
Publication
Featured research published by Vikas Raykar.
Medical Physics | 2013
Nicholas Petrick; Berkman Sahiner; Samuel G. Armato; Alberto Bert; Loredana Correale; Silvia Delsanto; Matthew T. Freedman; David Fryd; David Gur; Lubomir M. Hadjiiski; Zhimin Huo; Yulei Jiang; Lia Morra; Sophie Paquerault; Vikas Raykar; Frank W. Samuelson; Ronald M. Summers; Georgia D. Tourassi; Hiroyuki Yoshida; Bin Zheng; Chuan Zhou; Heang Ping Chan
Computer-aided detection and diagnosis (CAD) systems are increasingly being used as an aid by clinicians for detection and interpretation of diseases. Computer-aided detection systems mark regions of an image that may reveal specific abnormalities and are used to alert clinicians to these regions during image interpretation. Computer-aided diagnosis systems provide an assessment of a disease using image-based information alone or in combination with other relevant diagnostic data and are used by clinicians as decision support in developing their diagnoses. While CAD systems are commercially available, standardized approaches for evaluating and reporting their performance have not yet been fully formalized in the literature or in a standardization effort. This deficiency has led to difficulty in the comparison of CAD devices and in understanding how the reported performance might translate into clinical practice. To address these important issues, the American Association of Physicists in Medicine (AAPM) formed the Computer Aided Detection in Diagnostic Imaging Subcommittee (CADSC), in part, to develop recommendations on approaches for assessing CAD system performance. The purpose of this paper is to convey the opinions of the AAPM CADSC members and to stimulate the development of consensus approaches and best practices for evaluating CAD systems. Both the assessment of a standalone CAD system and the evaluation of the impact of CAD on end-users are discussed. It is hoped that awareness of these important evaluation elements and the CADSC recommendations will lead to further development of structured guidelines for CAD performance assessment. Proper assessment of CAD system performance is expected to increase the understanding of a CAD system's effectiveness and limitations, which should stimulate further research and development efforts on CAD technologies, reduce problems due to improper use, and eventually improve the utility and efficacy of CAD in clinical practice.
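The paper gives recommendations rather than a specific algorithm, but a common ingredient of standalone CAD assessment is ROC analysis with an uncertainty estimate. The sketch below is illustrative only; the metric choice, the bootstrap procedure, and all names are assumptions, not recommendations taken from the paper.

```python
# Illustrative standalone assessment: ROC AUC of CAD scores on a labeled
# test set, with a bootstrap confidence interval. Data are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    labels, scores = np.asarray(labels), np.asarray(scores)
    rng = np.random.default_rng(seed)
    point = roc_auc_score(labels, scores)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(labels), len(labels))
        if labels[idx].min() == labels[idx].max():
            continue  # resample contained only one class; AUC undefined, skip
        boot.append(roc_auc_score(labels[idx], scores[idx]))
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)

# Toy example with hypothetical labels and CAD scores
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 500)
scores = labels + rng.normal(0, 1.2, 500)
print(auc_with_ci(labels, scores))
```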
European Conference on Machine Learning | 2015
Vikas Raykar; Amrita Saha
A conventional textbook prescription for building good predictive models is to split the data into three parts: a training set (for model fitting), a validation set (for model selection), and a test set (for final model assessment). Predictive models can evolve over time as developers improve their performance, either by acquiring new data or by improving the existing model. The main contribution of this paper is to discuss the problems encountered in such dynamic model building and updating scenarios and to propose workflows for managing the allocation of newly acquired data into the different sets. Specifically, we propose three different workflows (parallel dump, serial waterfall, and hybrid) for allocating new data into the existing training, validation, and test splits. Particular emphasis is placed on avoiding the bias that arises from repeated use of the existing validation or test set.
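The abstract names the workflows but does not spell out their mechanics. As a rough sketch of what a parallel-dump style allocation might look like, assuming it means splitting each new batch in fixed proportions so that no previously assigned example ever changes sets (the ratios and function name below are assumptions, not the paper's definition):

```python
# Hypothetical "parallel dump" allocation: each newly acquired batch is
# divided among the existing train/validation/test splits in fixed
# proportions; old assignments are never revisited.
import random

def parallel_dump(new_data, splits, fractions=(0.6, 0.2, 0.2), seed=0):
    """Distribute new_data into existing splits without moving old examples.

    splits: dict with keys 'train', 'validation', 'test' mapping to lists.
    fractions: assumed allocation ratios for the three sets.
    """
    rng = random.Random(seed)
    shuffled = list(new_data)
    rng.shuffle(shuffled)
    n_train = int(fractions[0] * len(shuffled))
    n_val = int(fractions[1] * len(shuffled))
    splits['train'].extend(shuffled[:n_train])
    splits['validation'].extend(shuffled[n_train:n_train + n_val])
    splits['test'].extend(shuffled[n_train + n_val:])
    return splits

# Toy usage
splits = {'train': list(range(60)), 'validation': list(range(60, 80)), 'test': list(range(80, 100))}
splits = parallel_dump(list(range(100, 150)), splits)
print({k: len(v) for k, v in splits.items()})
```

The property illustrated here is that earlier validation and test results stay comparable across model updates, since previously allocated examples never migrate between sets.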
Proceedings of the 6th IBM Collaborative Academia Research Exchange Conference (I-CARE 2014) | 2014
Sachin Kumar; Vikas Raykar; Priyanka Agrawal
Most predictive models built for binary decision problems compute a real-valued score as an intermediate step and then apply a threshold on this score to make a final decision. Conventionally, the threshold is chosen to optimize a desired performance metric (such as accuracy, F-score, precision@k, recall@k, etc.) on the training set. In practice, however, the same threshold often results in sub-optimal performance when applied to a test set because of drift in the test distribution. In this work, we propose a method that adaptively changes the threshold such that the optimal performance achieved on the training set is maintained. The method is completely unsupervised and is based on fitting a parametric mixture model to the test scores and choosing the threshold that optimizes a performance metric based on the corresponding parametric approximation.
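A minimal sketch of the general idea follows, assuming a two-component Gaussian mixture over the test scores and the F-score as the target metric; both are illustrative choices, and the abstract does not commit to either.

```python
# Sketch: fit a two-component Gaussian mixture to unlabeled test scores,
# treat the higher-mean component as the positive class (an assumption),
# and pick the threshold maximizing a parametric approximation of F1.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def adaptive_threshold(test_scores, grid_size=200):
    scores = np.asarray(test_scores).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
    means = gmm.means_.ravel()
    stds = np.sqrt(gmm.covariances_).ravel()
    weights = gmm.weights_.ravel()
    pos, neg = np.argmax(means), np.argmin(means)  # assumed: positives score higher

    best_t, best_f1 = None, -1.0
    for t in np.linspace(scores.min(), scores.max(), grid_size):
        # Parametric approximations of the expected confusion-matrix mass at threshold t
        tp = weights[pos] * norm.sf(t, means[pos], stds[pos])
        fp = weights[neg] * norm.sf(t, means[neg], stds[neg])
        fn = weights[pos] * norm.cdf(t, means[pos], stds[pos])
        precision = tp / (tp + fp + 1e-12)
        recall = tp / (tp + fn + 1e-12)
        f1 = 2 * precision * recall / (precision + recall + 1e-12)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

# Toy usage: scores from two shifted distributions on a drifted test set
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(-1, 1, 700), rng.normal(2, 1, 300)])
print(adaptive_threshold(scores))
```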
Neural Information Processing Systems | 2014
Sarath Chandar A P; Stanislas Lauly; Hugo Larochelle; Mitesh M. Khapra; Balaraman Ravindran; Vikas Raykar; Amrita Saha
International Conference on Computational Linguistics | 2014
Noam Slonim; Ehud Aharoni; Carlos Alzate; Roy Bar-Haim; Yonatan Bilu; Lena Dankin; Iris Eiron; Daniel Hershcovich; Shay Hummel; Mitesh M. Khapra; Tamar Lavee; Ran Levy; Paul Matchen; Anatoly Polnarov; Vikas Raykar; Ruty Rinott; Amrita Saha; Naama Zwerdling; David Konopnicki; Dan Gutfreund
Archive | 2015
Ehud Aharoni; Indrajit Bhattacharya; Yonatan Bilu; Dan Gutfreund Klein; Daniel Hershcovich; Vikas Raykar; Ruty Rinott; Godbole Shantanu; Noam Slonim
Archive | 2015
Mitesh M. Khapra; Vikas Raykar; Amrita Saha; Noam Slonim; Ashish Verma
Workshop on Applications of Computer Vision | 2018
Sachin Mehta; Amar P. Azad; Saneem A. Chemmengath; Vikas Raykar; Shivkumar Kalyanaraman