James A. McCart
University of South Florida
Publications
Featured research published by James A. McCart.
Journal of the American Medical Informatics Association | 2013
James A. McCart; Donald J. Berndt; Jay Jarman; Dezon Finch; Stephen L. Luther
OBJECTIVE To determine how well statistical text mining (STM) models can identify falls within clinical text associated with an ambulatory encounter. MATERIALS AND METHODS 2241 patients were selected with a fall-related ICD-9-CM E-code or matched injury diagnosis code while being treated as an outpatient at one of four sites within the Veterans Health Administration. All clinical documents within a 48-h window of the recorded E-code or injury diagnosis code for each patient were obtained (n=26 010; 611 distinct document titles) and annotated for falls. Logistic regression, support vector machine, and cost-sensitive support vector machine (SVM-cost) models were trained on a stratified sample of 70% of documents from one location (dataset Atrain) and then applied to the remaining unseen documents (datasets Atest-D). RESULTS All three STM models obtained area under the receiver operating characteristic curve (AUC) scores above 0.950 on the four test datasets (Atest-D). The SVM-cost model obtained the highest AUC scores, ranging from 0.953 to 0.978. The SVM-cost model also achieved F-measure values ranging from 0.745 to 0.853, sensitivity from 0.890 to 0.931, and specificity from 0.877 to 0.944. DISCUSSION The STM models performed well across a large heterogeneous collection of document titles. In addition, the models also generalized across other sites, including a traditionally bilingual site that had distinctly different grammatical patterns. CONCLUSIONS The results of this study suggest STM-based models have the potential to improve surveillance of falls. Furthermore, the encouraging evidence shown here that STM is a robust technique for mining clinical documents bodes well for other surveillance-related topics.
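The abstract above evaluates text classifiers by AUC over clinical notes. As an illustration of that style of evaluation, here is a minimal sketch with a toy keyword-weighted scorer and a from-scratch AUC computation; the mini-classifier, its weights, and the example notes are all invented for illustration, not the study's logistic regression or SVM models or its data.

```python
# Toy fall-detection scorer plus a from-scratch AUC computation.
# Weights and notes are invented; the study used logistic regression
# and (cost-sensitive) SVM models over real clinical documents.
from itertools import product

# hypothetical learned weights for fall-related terms
weights = {"fell": 2.0, "fall": 2.0, "slipped": 1.5, "tripped": 1.5}

def score(note: str) -> float:
    return sum(weights.get(tok, 0.0) for tok in note.lower().split())

def auc(labels, scores):
    # AUC = probability a random positive outscores a random negative
    # (ties count as 0.5)
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

notes = [
    "patient slipped on ice and fell outside clinic",
    "routine follow-up, medications reviewed",
    "fall from standing height, bruised hip",
    "annual exam, labs ordered",
]
labels = [1, 0, 1, 0]  # 1 = fall-related note
print(auc(labels, [score(n) for n in notes]))
```

On this toy data the positive notes always outscore the negatives, so the AUC is perfect; the study's reported 0.950+ scores reflect the same ranking-quality measure over far noisier real documents.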
Information & Management | 2010
Varol O. Kayhan; James A. McCart; Anol Bhattacherjee
Cross-bidding is a new strategy used in online auctions. The bidder simultaneously monitors several identical auctions, taking advantage of their price differential. We examined the determinants and outcomes of cross-bidding behavior and the contingent factors that shape it. Using empirical data, we demonstrated that cross-bidders can realize significant price discounts compared to non-cross-bidders; the number of experienced bidders in an auction market contributes to more cross-bidding; and this effect is positively moderated by market liquidity of the product being auctioned.
Decision Support Systems | 2013
James A. McCart; Balaji Padmanabhan; Donald J. Berndt
The long tail has attracted substantial theoretical as well as practical interest, yet there have been few empirical studies that have explicitly examined the factors that drive online conversions at long tail Web sites. This research tests several hypotheses derived from Information Foraging Theory (IFT) that pertain to goal achievement on long tail Web sites. IFT introduced the concepts of information patches and information scent to model the information seeking behavior of individuals, but has mostly been tested in production rule environments where the theory is used to simulate user behavior. Testing IFT-driven hypotheses on real data required learning information patches and scents using an inductive approach, and in this paper we adapt existing algorithms for these discovery tasks. Our results, based on clickstream data from forty-seven small business Web sites, show both the existence of valuable information patches and information scent trails and their importance in explaining conversion on these sites. The majority of the hypotheses were supported, and we discuss the implications of this for researchers and practitioners.
ACM Transactions on Management Information Systems | 2015
Donald J. Berndt; James A. McCart; Dezon Finch; Stephen L. Luther
Text analytic methods are often aimed at extracting useful information from the vast array of unstructured, free format text documents that are created by almost all organizational processes. The success of any text mining application rests on the quality of the underlying data being analyzed, including both predictive features and outcome labels. In this case study, some focused experiments regarding data quality are used to assess the robustness of Statistical Text Mining (STM) algorithms when applied to clinical progress notes. In particular, the experiments consider the impacts of task complexity (by removing signals), training set size, and target outcome quality. While this research is conducted using a dataset drawn from the medical domain, the data quality issues explored are of more general interest.
IEEE Transactions on Education | 2008
James A. McCart; Jay Jarman
Over one in ten students surveyed has admitted to copying programs in courses with computer assignments. The ease with which digital coursework can be copied and the impracticality of manually checking for plagiarized projects in large courses have only compounded the problem. As current research has focused predominantly on detecting plagiarism for textual applications such as source code and documents, there exists a gap in detecting plagiarism in graphically-driven applications. This paper focuses on the effectiveness of a technological tool in detecting plagiarized projects in a course using Microsoft Access. Seven semesters of data were collected from a large technology-oriented course in which the tool had been in use. Comparing semesters before and after the technological tool was introduced demonstrates a significant decrease in the number of projects being duplicated. The results indicate that combining technology and policy can be effective in curtailing blatant plagiarism within large technology courses.
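One common way such tools flag duplicated projects is set similarity over artifacts extracted from each submission. As a hedged sketch of that general approach (the paper's actual tool and threshold are not described here, and the object names below are invented):

```python
# Toy near-duplicate check: Jaccard similarity over sets of artifacts
# extracted from student projects (e.g., database object names).
# Projects and the review threshold are hypothetical.
def jaccard(a: set, b: set) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 0.0

project_a = {"tbl_orders", "qry_sales", "frm_entry", "rpt_summary"}
project_b = {"tbl_orders", "qry_sales", "frm_entry", "rpt_totals"}
project_c = {"tbl_inventory", "qry_restock", "frm_main"}

THRESHOLD = 0.5  # hypothetical cutoff for flagging a pair for manual review
for name, other in [("B", project_b), ("C", project_c)]:
    sim = jaccard(project_a, other)
    print(name, round(sim, 2), sim >= THRESHOLD)
```

Pairs above the threshold would be queued for human review rather than automatically accused, since legitimate projects built from the same starter template can also overlap heavily.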
American Journal of Public Health | 2015
Stephen L. Luther; James A. McCart; Donald J. Berndt; Bridget Hahm; Dezon Finch; Jay Jarman; Philip Foulis; William A. Lapcevic; Robert R. Campbell; Ronald I. Shorr; Keryl Motta Valencia; Gail Powell-Cope
OBJECTIVES We determined whether statistical text mining (STM) can identify fall-related injuries in electronic health record (EHR) documents and the impact on STM models of training on documents from a single or multiple facilities. METHODS We obtained fiscal year 2007 records for Veterans Health Administration (VHA) ambulatory care clinics in the southeastern United States and Puerto Rico, resulting in a total of 26 010 documents for 1652 veterans treated for fall-related injury and 1341 matched controls. We used the results of an STM model to predict fall-related injuries at the visit and patient levels and compared them with a reference standard based on chart review. RESULTS STM models based on training data from a single facility resulted in accuracy of 87.5% and 87.1%, F-measure of 87.0% and 90.9%, sensitivity of 92.1% and 94.1%, and specificity of 83.6% and 77.8% at the visit and patient levels, respectively. Results from training data from multiple facilities were almost identical. CONCLUSIONS STM has the potential to improve identification of fall-related injuries in the VHA, providing a model for wider application in the evolving national EHR system.
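The accuracy, F-measure, sensitivity, and specificity figures above all derive from a single confusion matrix. As a worked sketch of how they relate (the counts below are invented, not the study's data):

```python
# Computing the evaluation metrics reported in the abstract from a
# confusion matrix. The counts are invented for illustration.
def metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)              # recall on fall-related cases
    specificity = tn / (tn + fp)              # recall on controls
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    # F-measure: harmonic mean of precision and sensitivity
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, f_measure, sensitivity, specificity

acc, f1, sens, spec = metrics(tp=920, fp=160, fn=80, tn=840)
print(f"accuracy={acc:.3f} F={f1:.3f} "
      f"sensitivity={sens:.3f} specificity={spec:.3f}")
```

Note the trade-off visible in the study's numbers: the high sensitivity (92-94%) comes at the cost of lower specificity at the patient level, which a cost-sensitive classifier tunes deliberately.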
System | 2017
Donald J. Berndt; David Boogers; Saurav Chakraborty; James A. McCart
In this paper, we introduce a small-scale heterogeneous agent-based model of the US corporate bond market. The model includes a realistic micro-grounded ecology of investors that trade a set of bonds through dealers. Using the model, we simulate market dynamics that emerge from agent behaviors in response to basic exogenous factors (such as interest rate shocks) and the introduction of regulatory policies and constraints. A first experiment focuses on the liquidity transformation provided by mutual funds and investigates the conditions under which redemption-driven bond sales may trigger market instability. We simulate the effects of increasing mutual fund market shares in the presence of market-wide repricing of risk (in the form of a 100 basis point increase in the expected returns). The simulations highlight robust-yet-fragile aspects of the growing liquidity transformation provided by mutual funds, with an inflection point beyond which redemption-driven negative feedback loops trigger market instability.
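The redemption-driven negative feedback loop described above can be caricatured in a few lines: a repricing shock lowers fund NAV, losses trigger redemptions, redemption-driven sales push prices down further, and the loop deepens with the funds' market share. This is a minimal toy mechanism with invented parameters, not the authors' full heterogeneous agent-based model.

```python
# Minimal redemption-feedback sketch. All parameters are invented;
# the actual model has a richer ecology of investors and dealers.
def simulate(fund_share, shock=-0.05, redemption_sensitivity=2.0,
             price_impact=0.1, steps=10):
    price = 1.0 * (1 + shock)  # market-wide repricing of risk
    for _ in range(steps):
        nav_return = price - 1.0
        # investors redeem in proportion to recent losses
        redemptions = max(0.0, -nav_return) * redemption_sensitivity
        # forced bond sales move the price; impact scales with the
        # mutual funds' share of the market
        price *= 1 - price_impact * fund_share * redemptions
    return price

# larger mutual fund market share -> deeper price spiral
print(round(simulate(fund_share=0.1), 3), round(simulate(fund_share=0.5), 3))
```

Even this caricature shows the "robust-yet-fragile" flavor: at small fund shares the shock is absorbed with little extra decline, while larger shares amplify it, hinting at the inflection point the abstract describes.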
International Conference on Management of Data | 2016
Donald J. Berndt; David Boogers; James A. McCart
The paper presents an agent-based modeling approach for the analysis of liquidity in corporate bond markets. Bond market liquidity is hard to measure empirically and its evolution is hard to predict due to its non-linear nature, with significant feedback loops between asset, funding and collateral markets. We discuss the applicability of agent-based modeling and present an initial model using a stylized market microstructure.
Communications of the ACM | 2009
James A. McCart; Varol O. Kayhan; Anol Bhattacherjee
PLOS ONE | 2014
Christina Dillahunt-Aspillaga; Dezon Finch; Jill Massengale; Tracy Kretzmer; Stephen L. Luther; James A. McCart