Michael Gillam
Microsoft
Publications
Featured research published by Michael Gillam.
Archive | 2006
Juan P. Wachs; Helman Stern; Yael Edan; Michael Gillam; Craig Feied; Mark Smith; Jon Handler
In this paper, we consider a vision-based system that can interpret a user's gestures in real time to manipulate objects within a medical data visualization environment. Dynamic navigation gestures are translated to commands based on their relative positions on the screen. Static gesture poses are identified to execute non-directional commands. This is accomplished by using Haar-like features to represent the shape of the hand. These features are then input to a fuzzy c-means (FCM) clustering algorithm for pose classification. A probabilistic neighborhood search algorithm is employed to automatically select a small number of Haar features and to tune the FCM classifier. The gesture recognition system was implemented in a sterile medical data-browser environment. Test results on four interface tasks showed that the use of a few Haar features with the supervised FCM yielded successful performance rates of 95 to 100%. In addition, a small exploratory test of an AdaBoost Haar system was conducted to detect a single hand gesture and assess its suitability for hand gesture recognition.
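The pose-classification step described above — feature vectors assigned soft cluster memberships by fuzzy c-means — can be sketched as follows. This is a minimal illustrative FCM implementation, not the authors' tuned variant; the Haar feature extraction and the probabilistic neighborhood search for feature selection are omitted.

```python
import numpy as np

def fuzzy_c_means(X, c=4, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and an (n x c)
    membership matrix U, where row k gives sample k's degree of
    membership in each cluster."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        # Centers are membership-weighted means of the data.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        # Closed-form membership update for the FCM objective.
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# A pose would then be classified by its highest-membership cluster:
#   label = U.argmax(axis=1)
```

In the supervised setting described in the abstract, each cluster would be associated with a known gesture pose during training.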
PLOS ONE | 2014
Mohsen Bayati; Mark Braverman; Michael Gillam; Karen Mack; George Ruiz; Mark Smith; Eric Horvitz
Background Several studies have focused on stratifying patients according to their level of readmission risk, fueled in part by incentive programs in the U.S. that link readmission rates to the annual payment update by Medicare. Patient-specific predictions about readmission have not seen widespread use because of their limited accuracy and questions about the efficacy of using measures of risk to guide clinical decisions. We construct a predictive model for readmissions for congestive heart failure (CHF) and study how its predictions can be used to perform patient-specific interventions. We assess the cost-effectiveness of a methodology that combines prediction and decision making to allocate interventions. The results highlight the importance of combining predictions with decision analysis. Methods We construct a statistical classifier from a retrospective database of 793 hospital visits for heart failure that predicts the likelihood that patients will be rehospitalized within 30 days of discharge. We introduce a decision analysis that uses the predictions to guide decisions about post-discharge interventions. We perform a cost-effectiveness analysis of 379 additional hospital visits that were not included in either the formulation of the classifiers or the decision analysis. We report the performance of the methodology and show the overall expected value of employing a real-time decision system. Findings For the cohort studied, readmissions are associated with a mean cost of $13,679 with a standard error of $1,214. Given a post-discharge plan that costs $1,300 and that reduces 30-day rehospitalizations by 35%, use of the proposed methods would provide an 18.2% reduction in rehospitalizations and save 3.8% of costs. Conclusions Classifiers learned automatically from patient data can be joined with decision analysis to guide the allocation of post-discharge support to CHF patients. Such analyses are especially valuable in the common situation where it is not economically feasible to provide programs to all patients.
International Journal of Intelligent Computing in Medical Sciences & Image Processing | 2008
Juan P. Wachs; Helman Stern; Yael Edan; Michael Gillam; Craig Feied; Mark Smith; Jon Handler
Archive | 2007
Juan P. Wachs; Helman Stern; Yael Edan; Michael Gillam; Craig Feied; Mark Smith; Jon Handler
World Automation Congress | 2006
Uri Kartoun; Helman Stern; Yael Edan; Craig Feied; Jonathan Handler; Mark Smith; Michael Gillam
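The cost-effectiveness logic of the readmission study above reduces to a break-even comparison: intervene when the expected savings from averted readmissions exceed the intervention cost. The sketch below uses the figures reported in the abstract; the function form and the fixed effectiveness value are illustrative assumptions, not the study's full decision analysis.

```python
COST_READMISSION = 13_679    # mean cost of a readmission, per the study
COST_INTERVENTION = 1_300    # assumed cost of the post-discharge plan
EFFECTIVENESS = 0.35         # assumed fraction of 30-day readmissions averted

def should_intervene(p_readmit: float) -> bool:
    """Treat a patient when expected averted-readmission savings
    exceed the cost of the intervention."""
    expected_savings = p_readmit * EFFECTIVENESS * COST_READMISSION
    return expected_savings > COST_INTERVENTION

# Break-even risk threshold: a patient's predicted readmission
# probability must exceed C_intervention / (effectiveness * C_readmission).
threshold = COST_INTERVENTION / (EFFECTIVENESS * COST_READMISSION)
```

With these figures the threshold is roughly 0.27, illustrating why targeting interventions by predicted risk can outperform treating every patient.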
Journal of The Medical Library Association | 2010
James Shedlock; Michelle Frisque; Steve Hunt; Linda J. Walton; Jonathan Handler; Michael Gillam
A gesture interface is developed for users, such as doctors and surgeons, to browse medical images in a sterile medical environment. A vision-based gesture capture system interprets the user's gestures in real time to manipulate objects in an image visualization environment. A color distribution model of the gamut of colors of the user's hand or glove is built at the start of each session, making the system independent of the particular user. The gesture system relies on robust real-time tracking of the user's hand based on a color-motion fusion model, in which the relative weights applied to the motion and color cues are adaptively determined according to the state of the system. Dynamic navigation gestures are translated to commands based on their relative positions on the screen. A state machine switches between other gestures, such as zoom and rotate, as well as a sleep state. Performance evaluation included gesture recognition accuracy, task learning, and rotation accuracy. Fast task learning was found, with convergence after ten trials. A beta test of a system prototype was conducted during a live brain biopsy operation, where neurosurgeons were able to browse through MRI images of the patient's brain using the sterile hand gesture interface. The surgeons indicated that the system was easy to use and fast, with high overall satisfaction.
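The color-motion fusion idea above can be illustrated with a minimal sketch: per-pixel color and motion likelihood maps are blended, and the blend weight shifts with tracker state. The two-state weight schedule and the peak-picking localization are assumptions for illustration; the actual system adapts the weights continuously.

```python
import numpy as np

def fuse_cues(color_map, motion_map, tracking_confident):
    """Blend per-pixel color and motion likelihood maps.
    Assumed schedule: lean on the color cue while the tracker is
    confident, lean on the motion cue when the target was lost."""
    w_color = 0.7 if tracking_confident else 0.3
    fused = w_color * color_map + (1.0 - w_color) * motion_map
    return fused / (fused.max() + 1e-9)   # normalize for thresholding

def locate_hand(fused):
    # Estimate the hand position as the brightest pixel of the fused map.
    return np.unravel_index(np.argmax(fused), fused.shape)
```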
Proceedings of SPIE | 2010
Yingxuan Zhu; Prabhdeep Singh; Khan M. Siddiqui; Michael Gillam
In this paper, we design a sterile gesture interface for users, such as doctors and surgeons, to browse medical images in a dynamic medical environment. A vision-based gesture capture system interprets the user's gestures in real time to navigate through and manipulate an image and data visualization environment. Dynamic navigation gestures are translated to commands based on their relative positions on the screen. The gesture system relies on tracking the user's hand based on color-motion cues. A state machine switches from navigation gestures to others, such as zoom and rotate. A prototype of the gesture interface was tested in an operating room by neurosurgeons conducting a live operation. The surgeons' feedback was very positive.
Journal of the American Medical Informatics Association | 2008
Juan P. Wachs; Helman Stern; Yael Edan; Michael Gillam; Jonathan Handler; Craig Feied; Mark Smith
This paper presents a method for autonomous recharging of a mobile robot, a necessity for long-term robotic activity without human intervention. A recharging station is designed, consisting of a stationary docking station and a docking mechanism mounted on an ER-1 Evolution Robotics robot. Together they serve as a dual power source, providing a mechanical and electrical connection between the robot's recharging system and a laptop placed on it. The docking strategy algorithms use vision-based navigation. The result is a low-cost system tolerant of high entrance angles. Iterative improvements to the system, to resist environmental perturbations and implement obstacle avoidance, ultimately resulted in a docking success rate of 100% over 50 trials.
Academic Emergency Medicine | 2000
Jonathan Handler; Michael Gillam; Arthur B. Sanders; Richard S. Klasco
QUESTION How can users' access to health information, especially full-text articles, be improved? The solution was to build and evaluate the Health SmartLibrary (HSL). SETTING The setting is the Galter Health Sciences Library, Feinberg School of Medicine, Northwestern University. METHOD The HSL was built on web-based personalization and customization tools: My E-Resources, Stay Current, Quick Search, and File Cabinet. Personalization and customization data were tracked to show user activity with these value-added online services. MAIN RESULTS Registration data indicated that users were receptive to personalized resource selection and that automated application of specialty-based, personalized HSLs was adopted more frequently than manual customization by users. Those who did customize used My E-Resources and Stay Current more often than Quick Search and File Cabinet. Most of those who customized did so only once. CONCLUSION Users did not always take advantage of the services designed to aid their library research. When personalization was offered at registration, users readily accepted it. Customization tools were used less frequently; more research is needed to determine why this was the case.
Academic Emergency Medicine | 2004
Jonathan Handler; Craig Feied; Kevin M. Coonan; John Vozenilek; Michael Gillam; Peter Peacock; Rich Sinert; Mark Smith
There is an increasing need to share medical images for research purposes. To respect and preserve patient privacy, most medical images are de-identified, with protected health information (PHI) removed, before sharing. Because manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for removing text from medical images. Many papers have described algorithms for text detection and extraction, but little of this work has been applied to de-identification of medical images. Since the de-identification system is designed for end users, it should be effective, accurate, and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes while keeping the anatomic structures intact. First, because the text has strong contrast with the background, a region-variance-based algorithm is used to detect text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. A region-based level set method then extracts the text from the detected regions. A GUI for a prototype of the text detection and extraction system was implemented, showing that the method can detect most of the text in the images. Experimental results validate that the method detects and extracts text in medical images with a 99% recall rate. Future work on this system includes algorithm improvement, performance evaluation, and computation optimization.
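The region-variance detection stage described above can be sketched as follows: burned-in text has much higher local intensity variance than smooth anatomy or uniform background. The block size and variance threshold are illustrative assumptions, and the geometric post-processing and level-set extraction steps are omitted.

```python
import numpy as np

def text_candidate_mask(img, block=8, var_thresh=500.0):
    """Flag image blocks whose intensity variance exceeds a threshold,
    marking them as candidate text regions for later refinement."""
    h, w = img.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            patch = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            mask[i, j] = patch.var() > var_thresh
    return mask
```

In a full pipeline, the flagged blocks would be filtered by geometric constraints (e.g., aspect ratio, alignment into text lines) before the level set extracts the character shapes.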