Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Daniel Leeds is active.

Publication


Featured research published by Daniel Leeds.


IEEE Transactions on Biomedical Engineering | 2007

A Framework for the Analysis of Acoustical Cardiac Signals

Zeeshan Syed; Daniel Leeds; Dorothy Curtis; Francesca Nesta; Robert A. Levine; John V. Guttag

Skilled cardiologists perform cardiac auscultation, acquiring and interpreting heart sounds, by implicitly carrying out a sequence of steps. These include discarding clinically irrelevant beats, selectively tuning in to particular frequencies, and aggregating information across time to make a diagnosis. In this paper, we formalize a series of analytical stages for processing heart sounds, propose algorithms to enable computers to approximate these steps, and investigate the effectiveness of each step in extracting relevant information from actual patient data. Through such reasoning, we provide insight into the relative difficulty of the various tasks involved in the accurate interpretation of heart sounds. We also evaluate the contribution of each analytical stage in the overall assessment of patients. We expect our framework and associated software to be useful to educators wanting to teach cardiac auscultation, and to primary care physicians, who can benefit from presentation tools for computer-assisted diagnosis of cardiac disorders. Researchers may also employ the comprehensive processing provided by our framework to develop more powerful, fully automated auscultation applications.
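A minimal sketch of the kind of staged pipeline the abstract describes, assuming a phonocardiogram array `pcg` sampled at `fs` Hz; the filter band, peak spacing, and outlier threshold below are illustrative placeholders, not the paper's parameters.

```python
# Illustrative staged heart-sound pipeline (not the authors' code).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert, find_peaks

def analyze_heart_sounds(pcg: np.ndarray, fs: int):
    # Stage 1: band-limit to the range where S1/S2 and murmurs carry energy.
    sos = butter(4, [25, 400], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, pcg)

    # Stage 2: segment beats from the amplitude envelope.
    envelope = np.abs(hilbert(filtered))
    peaks, _ = find_peaks(envelope, distance=int(0.4 * fs))
    beats = [filtered[a:b] for a, b in zip(peaks[:-1], peaks[1:])]

    # Stage 3: discard clinically irrelevant (outlier) beats by duration.
    durations = np.array([len(b) for b in beats])
    keep = np.abs(durations - np.median(durations)) < 0.2 * np.median(durations)
    clean = [b for b, k in zip(beats, keep) if k]

    # Stage 4: aggregate per-beat spectra across time into one summary.
    spectra = [np.abs(np.fft.rfft(b, n=2048)) for b in clean]
    return np.mean(spectra, axis=0)  # average spectrum over retained beats
```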


IEEE Symposium on Computer-Based Medical Systems | 2006

Audio-Visual Tools for Computer-Assisted Diagnosis of Cardiac Disorders

Zeeshan Syed; Daniel Leeds; Dorothy Curtis; John V. Guttag; Francesca Nesta; Robert A. Levine

The process of interpreting heart sounds is restricted by human auditory limitations. Shortcomings such as insensitivity to frequency changes, slow responses to rapidly occurring changes in acoustic signals, and an inability to discriminate the presence of soft pathological sounds are the source of inaccuracies and persist even with experience. This restricts both the practice and teaching of auscultation. In this paper, we propose and evaluate a suite of presentation tools for computer-assisted auscultation. We explore the use of digital signal processing techniques to slow down heart sounds while preserving frequency content, differential enhancement across frequency scales to amplify pathological disease signatures, visualization of the signal to measure changes in signal energy across time, and presentation of a representative prototypical signal for the patient.
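Two of these presentation ideas can be sketched with off-the-shelf tools; the snippet below assumes librosa and scipy are available, and the file name, band edges, and gain are hypothetical, not values from the paper.

```python
# Sketch of pitch-preserving slowdown and band-selective enhancement.
import librosa
import numpy as np
from scipy.signal import butter, sosfiltfilt

y, sr = librosa.load("heart_sound.wav", sr=None, mono=True)  # hypothetical file

# 1) Slow playback to half speed while preserving frequency content:
#    librosa's phase-vocoder time stretch changes duration, not pitch.
slowed = librosa.effects.time_stretch(y, rate=0.5)

# 2) Differential enhancement across frequency scales: boost a band where a
#    pathological signature (e.g., a murmur) is expected, relative to the rest.
sos = butter(4, [150, 400], btype="bandpass", fs=sr, output="sos")
murmur_band = sosfiltfilt(sos, y)
enhanced = y + 2.0 * murmur_band          # arbitrary gain on the selected band
enhanced /= np.max(np.abs(enhanced))      # normalize to avoid clipping
```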


Frontiers in Computational Neuroscience | 2014

Exploration of complex visual feature spaces for object perception

Daniel Leeds; John A. Pyles; Michael J. Tarr

The mid- and high-level visual properties supporting object perception in the ventral visual pathway are poorly understood. In the absence of well-specified theory, many groups have adopted a data-driven approach in which they progressively interrogate neural units to establish each unit's selectivity. Such methods are challenging in that they require search through a wide space of feature models and stimuli using a limited number of samples. To more rapidly identify higher-level features underlying human cortical object perception, we implemented a novel functional magnetic resonance imaging method in which visual stimuli are selected in real-time based on BOLD responses to recently shown stimuli. This work was inspired by earlier primate physiology work, in which neural selectivity for mid-level features in IT was characterized using a simple parametric approach (Hung et al., 2012). To extend such work to human neuroimaging, we used natural and synthetic object stimuli embedded in feature spaces constructed on the basis of the complex visual properties of the objects themselves. During fMRI scanning, we employed a real-time search method to control continuous stimulus selection within each image space. This search was designed to maximize neural responses across a pre-determined 1 cm³ brain region within ventral cortex. To assess the value of this method for understanding object encoding, we examined both the behavior of the method itself and the complex visual properties the method identified as reliably activating selected brain regions. We observed: (1) regions selective for both holistic and component object features and for a variety of surface properties; (2) object stimulus pairs near one another in feature space that produce responses at the opposite extremes of the measured activity range. Together, these results suggest that real-time fMRI methods may yield more widely informative measures of selectivity within the broad classes of visual features associated with cortical object representation.
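A toy sketch of the core selection idea: pick each next stimulus near the best responder so far in the feature space, climbing toward stimuli that maximize the ROI response. `measure_bold` stands in for the scanner-side response estimate and is hypothetical; this is an illustration of the search concept, not the authors' protocol.

```python
# Toy real-time stimulus search over an embedded feature space.
import numpy as np

def realtime_search(stimuli, measure_bold, n_trials=40, neighborhood=5):
    # stimuli: (n_stimuli, n_features) feature-space embedding of the images.
    rng = np.random.default_rng(0)
    shown, responses = [], []
    current = rng.integers(len(stimuli))          # random starting stimulus
    for _ in range(n_trials):
        shown.append(current)
        responses.append(measure_bold(current))   # ROI BOLD estimate, this trial
        best = shown[int(np.argmax(responses))]   # best responder so far
        # Candidates: nearest not-yet-shown neighbors of the best stimulus.
        dists = np.linalg.norm(stimuli - stimuli[best], axis=1)
        order = [i for i in np.argsort(dists) if i not in shown]
        current = rng.choice(order[:neighborhood])
    return shown, responses
```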


NeuroImage | 2016

A method for real-time visual stimulus selection in the study of cortical object perception

Daniel Leeds; Michael J. Tarr

The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm³ brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: (1) searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; (2) real-time estimation of cortical responses to stimuli is reasonably consistent; and (3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond.
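Real-time estimation of a per-stimulus cortical response is commonly done by regressing the ROI time course on an HRF-convolved event regressor; the sketch below shows that generic GLM approach under stated assumptions (a simple double-gamma-like HRF, TR in seconds), not the authors' exact estimation procedure.

```python
# Generic single-trial response estimate via an HRF-convolved regressor.
import numpy as np

def hrf(t):
    # Simple double-gamma-like HRF approximation (illustrative only).
    return t ** 5 * np.exp(-t) / 120.0 - 0.1 * t ** 10 * np.exp(-t) / 3628800.0

def trial_beta(roi_signal, onset_tr, tr=2.0):
    n_tr = len(roi_signal)
    events = np.zeros(n_tr)
    events[onset_tr] = 1.0                        # single-trial event stick
    t = np.arange(0, 30, tr)
    regressor = np.convolve(events, hrf(t))[:n_tr]
    X = np.column_stack([regressor, np.ones(n_tr)])  # regressor + baseline
    beta, *_ = np.linalg.lstsq(X, roi_signal, rcond=None)
    return beta[0]                                # amplitude estimate for trial
```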


Journal of Vision | 2015

Mixing hierarchical edge detection and medial axis models of object perception

Daniel Leeds; Michael J. Tarr

The nature of the visual representations underlying object perception in the brain is poorly understood. Past studies have often focused on the ability of one of two representational approaches to properly model biological object vision. Hierarchical edge models - pooling selected local edge information across space - have been used to help account for human neuroimaging data in response to photographs of 60 real-world objects (Leeds 2013). In contrast, medial axis models have been used successfully to probe mid- and high-level featural selectivities of single neurons in the cortical object perception pathway (Hung 2012). Both approaches, however, leave significant levels of unexplained variance in the neural data. To better model cortical object representations, we fit a weighted-sum mixture of models from the hierarchical and medial axis approaches - SIFT (Lowe 2004) and Shock Graph (Siddiqi 1999) - to neural data. We used fMRI to compare single- and mixed-model responses with voxel population searchlight responses to a set of 60 object pictures. Representational distance matrices (Kriegeskorte 2008) were computed for each model combination and each voxel population to serve as the basis of model-neural comparison. We found that weighted mixtures of the hierarchical and medial axis models exhibited varied results, from insignificant improvements (in lateral occipital cortex) to modest improvements (a 13% increase in correlation in fusiform cortex) over either individual model in accounting for cortical representations in the brain regions associated with object vision. The fitted combination of the two models used weights of roughly consistent ratio across regions and subjects, reflective of the ratio of typical inter-object distance values produced by the two model metrics. Each of the two models thus appears to account for a subset of the representational strategies realized in the human brain, in a manner that is sometimes mutually complementary and sometimes mutually equivalent. Meeting abstract presented at VSS 2015.
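The weighted-mixture comparison can be sketched with standard representational-distance tooling; the snippet below assumes condensed pairwise-distance vectors from each model over the same 60 objects and does not reimplement the SIFT or Shock Graph metrics themselves.

```python
# Fit a weight mixing two model RDMs against a neural (searchlight) RDM.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def best_mixture(rdm_edge, rdm_medial, voxel_patterns,
                 weights=np.linspace(0, 1, 21)):
    # rdm_edge, rdm_medial: condensed distance vectors from the two models.
    # voxel_patterns: (n_conditions, n_voxels) responses for one searchlight.
    rdm_neural = pdist(voxel_patterns, metric="correlation")
    scores = [spearmanr(w * rdm_edge + (1 - w) * rdm_medial, rdm_neural)[0]
              for w in weights]
    i = int(np.argmax(scores))
    return weights[i], scores[i]  # best weight and its model-neural correlation
```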


IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2016

Landmark detection with surprise saliency using convolutional neural networks

Feng Tang; Damian M. Lyons; Daniel Leeds

Landmarks can be used as a reference to enable people or robots to localize themselves or to navigate in their environment. Automatic definition and extraction of appropriate landmarks from the environment has proven to be a challenging task when pre-defined landmarks are not present. We propose a novel computational model for automatic landmark detection from a single image without any pre-defined landmark database. The hypothesis is that if an object looks abnormal due to its atypical scene context (what we call surprise saliency), it may then be considered a good landmark because it is unique and easy to spot by different viewers (or the same viewer at different times). We leverage state-of-the-art algorithms based on convolutional neural networks to recognize scenes and objects. For each detected object, a surprise saliency score, a fusion of scene and object information, is calculated to determine whether it is a good landmark. To evaluate the performance of the proposed model, we collected a landmark image dataset consisting of landmark images, as defined by surprise saliency above, and non-landmark images. The experimental results show that our model achieves good performance in automatic landmark detection and automatic landmark image classification.
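One plausible reading of the score fusion is sketched below: an object is surprising when it is unlikely given the recognized scene. The classifier posteriors and the object-scene co-occurrence table are hypothetical placeholders, not the paper's trained components.

```python
# Toy surprise-saliency score fusing scene and object CNN posteriors.
import numpy as np

def surprise_saliency(p_scene, p_object, cooccurrence):
    # p_scene: (n_scenes,) scene-classifier posterior for the whole image.
    # p_object: (n_objects,) detector posterior for one detected object.
    # cooccurrence[s, o]: P(object o appears | scene s), estimated offline.
    p_obj_given_scene = p_scene @ cooccurrence       # marginalize over scenes
    expectedness = float(p_object @ p_obj_given_scene)
    return -np.log(expectedness + 1e-9)  # low expectedness => high surprise

# An object would be flagged as a landmark when its surprise score
# exceeds a chosen threshold.
```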


Journal of Vision | 2016

Semantic object grouping in the visual cortex seen through MVPA

Daniel Leeds; David Shutov

Acknowledgments: Fordham University Faculty Research Grants; data shared by the Carnegie Mellon University labs of Michael Tarr (Leeds et al., 2013) and Tom Mitchell (Sudre et al., 2012).

Semantic encoding in the perception of visual objects:
• The hierarchy of semantic information encoded in the brain is unclear.
• Huth (2012) proposed a subset of categories to predict voxel-level responses to semantic properties; Sudre (2012) proposed a larger set of semantic properties to predict MEG activity in broader cortical regions.
• We study the semantic properties of Sudre (2012), using representational dissimilarity analysis (Kriegeskorte 2008) and more fine-grained BOLD MVPA.
• We identify more spatially localized ROIs in mid-level vision associated with a subset of the studied semantic representations, partially consistent with the Sudre (2012) fMRI study.
• Participants were shown photos of 60 real-world objects, six times each, under passive viewing.


Journal of Vision | 2014

Real-time fMRI search for the visual components of object perception

Daniel Leeds; John A. Pyles; Michael J. Tarr

Acknowledgments: NSF IGERT, R.K. Mellon Foundation, Pennsylvania Department of Health's Commonwealth Universal Research Enhancement Program, NIH EUREKA Award #1R01MH084195A01, and the Temporal Dynamics of Learning Center at UCSD (NSF Science of Learning Center #SMA-1041755).

Cortical perception of complex visual properties:
• The visual features encoded by mid- and high-level cortical visual regions are not obvious.
• Only a very limited number of stimuli can be shown in a neuroimaging study, compared to the diversity of potentially cortically relevant features.
• We used real-time fMRI to explore cortical responses to specific features within restricted visual feature spaces for complex real-world or novel objects.


Journal of Vision | 2013

Comparing visual representations across human fMRI and computational vision

Daniel Leeds; Darren Seibert; John A. Pyles; Michael J. Tarr


Archive | 2010

Proprioceptive Localization for Mobile Manipulators

Mehmet Remzi Dogar; Vishal Hemrajani; Daniel Leeds; Breelyn Kane; Siddhartha S. Srinivasa

Collaboration


Dive into Daniel Leeds's collaborations.

Top Co-Authors

Michael J. Tarr (Carnegie Mellon University)
John A. Pyles (Carnegie Mellon University)
Darren Seibert (Massachusetts Institute of Technology)
John V. Guttag (Massachusetts Institute of Technology)
Dorothy Curtis (Massachusetts Institute of Technology)