Scott E. Friedman
Northwestern University
Publication
Featured research published by Scott E. Friedman.
Multimedia Signal Processing | 2009
Jana Zujovic; Lisa Gandy; Scott E. Friedman; Bryan Pardo; Thrasyvoulos N. Pappas
This paper describes an approach to automatically classify digital pictures of paintings by artistic genre. While the task of artistic classification is often entrusted to human experts, recent advances in machine learning and multimedia feature extraction have made this task easier to automate. Automatic classification is useful for organizing large digital collections, for automatic artistic recommendation, and even for mobile capture and identification by consumers. Our evaluation uses variable-resolution painting data gathered across Internet sources rather than solely using professional high-resolution data. Consequently, we believe this solution better addresses the task of classifying consumer-quality digital captures than other existing approaches. We include a comparison to existing feature extraction and classification methods as well as an analysis of our own approach across classifiers and feature vectors.
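As a concrete starting point, the sketch below shows a minimal genre-classification pipeline in the spirit of this setup: a coarse color-histogram feature per image and an SVM evaluated with cross-validation. The feature design, classifier choice, and the `paths`/`labels` inputs are illustrative assumptions, not the paper's actual feature vectors or classifiers.

```python
# Hypothetical sketch: coarse color-histogram features plus an SVM, evaluated
# with cross-validation. Feature design, classifier, and the paths/labels
# inputs are illustrative assumptions, not the paper's actual setup.
import numpy as np
from PIL import Image
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def color_histogram(path, bins=8):
    """Normalized per-channel RGB histogram; resizing tolerates the
    variable resolutions of images gathered from the web."""
    img = Image.open(path).convert("RGB").resize((256, 256))
    arr = np.asarray(img)
    hists = [np.histogram(arr[..., c], bins=bins, range=(0, 255), density=True)[0]
             for c in range(3)]
    return np.concatenate(hists)

def evaluate(paths, labels):
    """5-fold cross-validated accuracy of an RBF-SVM over histogram features."""
    X = np.vstack([color_histogram(p) for p in paths])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, labels, cv=5)
```

A richer feature vector (texture, composition, color statistics) can be swapped in by replacing `color_histogram` without changing the evaluation harness.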
Retina-the Journal of Retinal and Vitreous Diseases | 2010
Anat Loewenstein; Joseph R Ferencz; Yaron Lang; Itamar Yeshurun; Ayala Pollack; Ruth Siegal; Tova Lifshitz; Joseph Karp; Guri Bronner; Justin Brown; Sam E. Mansour; Scott E. Friedman; Mark Michels; Richards Johnston; Moshe Rapp; Moshe Havilio; Omer Rafaeli; Yair Manor
Purpose: The primary purpose of this study was to evaluate the ability of a home device preferential hyperacuity perimeter to discriminate between patients with choroidal neovascularization (CNV) and intermediate age-related macular degeneration (AMD), and the secondary purpose was to investigate the dependence of sensitivity on lesion characteristics. Methods: All participants were tested with the home device in an unsupervised mode. The first part of this work was retrospective using tests performed by patients with intermediate AMD and newly diagnosed CNV. In the second part, the classifier was prospectively challenged with tests performed by patients with intermediate AMD and newly diagnosed CNV. The dependence of sensitivity on lesion characteristics was estimated with tests performed by patients with CNV of both parts. Results: In 66 eyes with CNV and 65 eyes with intermediate AMD, both sensitivity and specificity were 0.85. In the retrospective part (34 CNV and 43 intermediate AMD), sensitivity and specificity were 0.85 ± 0.12 (95% confidence interval) and 0.84 ± 0.11 (95% confidence interval), respectively. In the prospective part (32 CNV and 22 intermediate AMD), sensitivity and specificity were 0.84 ± 0.13 (95% confidence interval) and 0.86 ± 0.14 (95% confidence interval), respectively. Chi-square analysis showed no dependence of sensitivity on type (P = 0.44), location (P = 0.243), or size (P = 0.73) of the CNV lesions. Conclusion: A home device preferential hyperacuity perimeter has good sensitivity and specificity in discriminating between patients with newly diagnosed CNV and intermediate AMD. Sensitivity is not dependent on lesion characteristics.
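As a rough illustration of the statistics reported above, the sketch below computes sensitivity and specificity from 2x2 counts and runs a chi-square test of independence with SciPy. The counts are hypothetical stand-ins chosen to roughly match the reported rates; the study's per-lesion data are not given in the abstract.

```python
# Hypothetical counts chosen to roughly match the reported rates; the study's
# per-lesion data are not given in the abstract.
import numpy as np
from scipy.stats import chi2_contingency

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 66 CNV eyes (positives) and 65 intermediate-AMD eyes (negatives), both ~0.85.
sens, spec = sensitivity_specificity(tp=56, fn=10, tn=55, fp=10)

# Chi-square test of independence: does detection depend on lesion category?
# Rows = lesion categories (hypothetical), columns = (detected, missed).
table = np.array([[20, 4],
                  [22, 4],
                  [14, 2]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, chi-square p={p:.2f}")
```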
International Joint Conference on Artificial Intelligence | 2011
Scott E. Friedman; Kenneth D. Forbus
Learning concepts via instruction and expository texts is an important problem for modeling human learning and for building autonomous AI systems. This paper describes a computational model of the self-explanation effect, whereby conceptual knowledge is repaired by integrating and explaining new material. Our model represents conceptual knowledge with compositional model fragments, which are used to explain new material via model formulation. Preferences are computed over explanations and conceptual knowledge, along several dimensions. These preferences guide knowledge integration and question-answering. Our simulation learns about the human circulatory system, using facts from a circulatory system passage used in a previous cognitive psychology experiment. We analyze the simulation's performance, showing that individual differences in the sequences of models learned by students can be explained by different parameter settings in our model.
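A toy sketch of the kind of preference-guided integration described here is shown below. It scores competing explanations by how many accepted beliefs they reuse versus how many assumptions they introduce; the representation and scoring are illustrative assumptions only, not the paper's model-fragment machinery.

```python
# Illustrative toy only: scores competing explanations by reuse of accepted
# beliefs minus assumptions introduced; not the paper's model-fragment or
# model-formulation machinery.
from dataclasses import dataclass

@dataclass
class Explanation:
    claim: str
    beliefs: frozenset        # facts the explanation relies on
    assumptions: int = 0      # unsupported premises it introduces

def preference_score(expl, accepted):
    """Prefer explanations that reuse accepted beliefs and assume little."""
    return len(expl.beliefs & accepted) - expl.assumptions

def integrate(candidates, accepted):
    """Accept the preferred explanation and fold its beliefs into the store."""
    best = max(candidates, key=lambda e: preference_score(e, accepted))
    return best, accepted | best.beliefs
```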
Archive | 2003
Scott E. Friedman; Anand Krishnan; Nicholas Leidenfrost
Common collection objects such as hash tables are included in modern runtime libraries because of their widespread use and efficient implementation. While operating systems and programming languages continue to improve their real-time features, common implementations of hash tables and other collection objects are not necessarily suitable for real-time or embedded systems. In this paper, we present an algorithm for managing hash tables that is suitable for such systems. The algorithm has been implemented and deployed in place of Java’s Hashtable class. We present evidence of the algorithm’s performance, experimental results documenting our algorithm’s suitability for real-time use, and lessons learned from migrating this data structure to real-time and embedded platforms.
Sponsored by DARPA under contract F33615–00–C–1697; contact author [email protected]. This work was done while this author was on sabbatical from the EECS Department, University of Kansas.
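The abstract does not spell out the algorithm, so the sketch below illustrates one common technique for predictable hash-table latency: incremental rehashing, which migrates a bounded number of entries per operation so that no single insertion pays the full rehash cost. It is written in Python for brevity (the paper targeted Java's Hashtable) and is an assumed stand-in rather than the paper's actual algorithm.

```python
# One common technique for predictable latency: incremental rehashing, which
# migrates a bounded number of entries per operation instead of rehashing the
# whole table at once. Assumed for illustration; not necessarily the paper's
# algorithm, and written in Python rather than Java.
class IncrementalHashMap:
    def __init__(self, capacity=8, migrate_per_op=4):
        self.buckets = [[] for _ in range(capacity)]
        self.old = None               # table still being drained, if any
        self.migrate_idx = 0
        self.size = 0
        self.migrate_per_op = migrate_per_op

    def _migrate_some(self):
        """Move at most migrate_per_op entries from the old table."""
        moved = 0
        while self.old is not None and moved < self.migrate_per_op:
            if self.migrate_idx >= len(self.old):
                self.old = None       # migration finished
                break
            bucket = self.old[self.migrate_idx]
            if bucket:
                k, v = bucket.pop()
                self.buckets[hash(k) % len(self.buckets)].append((k, v))
                moved += 1
            else:
                self.migrate_idx += 1

    def put(self, key, value):
        self._migrate_some()
        if self.old is None and self.size > 2 * len(self.buckets):
            # Start a gradual resize instead of one large rehash pause.
            self.old, self.migrate_idx = self.buckets, 0
            self.buckets = [[] for _ in range(4 * len(self.old))]
        if self.old is not None:      # drop any stale copy of the key
            ob = self.old[hash(key) % len(self.old)]
            for i, (k, _) in enumerate(ob):
                if k == key:
                    del ob[i]
                    self.size -= 1
                    break
        b = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(b):
            if k == key:
                b[i] = (key, value)
                return
        b.append((key, value))
        self.size += 1

    def get(self, key, default=None):
        self._migrate_some()
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        if self.old is not None:
            for k, v in self.old[hash(key) % len(self.old)]:
                if k == key:
                    return v
        return default
```

During a resize, lookups consult both the new and the old table, so correctness is preserved while migration is spread across many operations.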
Cognitive Science | 2018
Scott E. Friedman; Kenneth D. Forbus; Bruce Sherin
People use commonsense science knowledge to flexibly explain, predict, and manipulate the world around them, yet we lack computational models of how this commonsense science knowledge is represented, acquired, utilized, and revised. This is an important challenge for cognitive science: Building higher order computational models in this area will help characterize one of the hallmarks of human reasoning, and it will allow us to build more robust reasoning systems. This paper presents a novel assembled coherence (AC) theory of human conceptual change, whereby people revise beliefs and mental models by constructing and evaluating explanations using fragmentary, globally inconsistent knowledge. We implement AC theory with Timber, a computational model of conceptual change that revises its beliefs and generates human-like explanations in commonsense science. Timber represents domain knowledge using predicate calculus and qualitative model fragments, and uses an abductive model formulation algorithm to construct competing explanations for phenomena. Timber then (a) scores competing explanations with respect to previously accepted beliefs, using a cost function based on simplicity and credibility, (b) identifies a low-cost, preferred explanation and accepts its constituent beliefs, and then (c) greedily alters previous explanation preferences to reduce global cost and thereby revise beliefs. Consistency is a soft constraint in Timber; it is biased to select explanations that share consistent beliefs, assumptions, and causal structure with its other, preferred explanations. In this paper, we use Timber to simulate the belief changes of students during clinical interviews about how the seasons change. We show that Timber produces and revises a sequence of explanations similar to those of the students, which supports the psychological plausibility of AC theory.
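To make the cost-based selection concrete, the sketch below scores competing explanations (represented here as dicts with `beliefs` and `assumptions` sets) with a stand-in cost combining simplicity and credibility, then greedily accepts the cheapest explanation per phenomenon. The cost terms and weights are illustrative assumptions, not Timber's actual functions.

```python
# Stand-in cost terms and weights for illustration; not Timber's actual
# simplicity/credibility functions or its truth-maintenance machinery.
def explanation_cost(expl, accepted, w_simplicity=1.0, w_credibility=1.0):
    """Lower is better: penalize assumptions and beliefs not already accepted."""
    simplicity = w_simplicity * len(expl["assumptions"])
    credibility = w_credibility * len(expl["beliefs"] - accepted)
    return simplicity + credibility

def revise(competing_sets, accepted):
    """Greedily accept the cheapest explanation for each phenomenon in turn."""
    for rivals in competing_sets:
        best = min(rivals, key=lambda e: explanation_cost(e, accepted))
        accepted = accepted | best["beliefs"]
    return accepted
```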
Kidney International | 1996
Stuart J. Shankland; Jeffrey W. Pippin; Raimund Pichler; Katherine L. Gordon; Scott E. Friedman; Leslie I. Gold; Richard J. Johnson; William G. Couser
National Conference on Artificial Intelligence | 2015
Matthew D. McLure; Scott E. Friedman; Kenneth D. Forbus
Proceedings of the Annual Meeting of the Cognitive Science Society | 2010
Matthew D. McLure; Scott E. Friedman; Kenneth D. Forbus
National Conference on Artificial Intelligence | 2010
Scott E. Friedman; Kenneth D. Forbus
Archive | 2001
Scott E. Friedman; Nicholas Leidenfrost; Benjamin C. Brodie; Ron K. Cytron