Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Joseph F. Murray is active.

Publication


Featured research published by Joseph F. Murray.


Neural Computation | 2003

Dictionary learning algorithms for sparse representation

Kenneth Kreutz-Delgado; Joseph F. Murray; Bhaskar D. Rao; Kjersti Engan; Te-Won Lee; Terrence J. Sejnowski

Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial 25 words or less), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error).
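The alternating optimization the abstract describes (sparse coding with a FOCUSS-style reweighted solver, followed by a dictionary update with column normalization) can be sketched roughly as follows. This is a minimal illustration, not the paper's exact MAP update rules: the regularized reweighting, the simple least-squares dictionary step, and all function names are assumptions of this sketch.

```python
import numpy as np

def focuss(A, y, p=0.5, lam=1e-3, iters=10):
    """Sparse solution of A x ~= y via iterative reweighting (FOCUSS-style)."""
    m, _ = A.shape
    # Minimum-norm initialization.
    x = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), y)
    for _ in range(iters):
        W = np.diag(np.abs(x) ** (1 - p / 2))   # reweighting from current x
        AW = A @ W
        x = W @ (AW.T @ np.linalg.solve(AW @ AW.T + lam * np.eye(m), y))
    return x

def learn_dictionary(X, n_atoms, outer_iters=5, seed=0):
    """Alternate FOCUSS coding of each column of X with a least-squares
    dictionary update; atom columns are renormalized on every pass."""
    rng = np.random.default_rng(seed)
    m, N = X.shape
    A = rng.standard_normal((m, n_atoms))
    A /= np.linalg.norm(A, axis=0)
    S = None
    for _ in range(outer_iters):
        S = np.column_stack([focuss(A, X[:, i]) for i in range(N)])
        A = X @ np.linalg.pinv(S)               # illustrative LS update
        A /= np.linalg.norm(A, axis=0) + 1e-12  # column normalization
    return A, S
```

The column normalization is what keeps the dictionary/coefficient scale ambiguity from blowing up during learning.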


Neural Computation | 2010

Convolutional networks can learn to generate affinity graphs for image segmentation

Srinivas C. Turaga; Joseph F. Murray; Viren Jain; Fabian Roth; Moritz Helmstaedter; Kevin L. Briggman; Winfried Denk; H. Sebastian Seung

Many image segmentation algorithms first generate an affinity graph and then partition it. We present a machine learning approach to computing an affinity graph using a convolutional network (CN) trained using ground truth provided by human experts. The CN affinity graph can be paired with any standard partitioning algorithm and improves segmentation accuracy significantly compared to standard hand-designed affinity functions. We apply our algorithm to the challenging 3D segmentation problem of reconstructing neuronal processes from volumetric electron microscopy (EM) and show that we are able to learn a good affinity graph directly from the raw EM images. Further, we show that our affinity graph improves the segmentation accuracy of both simple and sophisticated graph partitioning algorithms. In contrast to previous work, we do not rely on prior knowledge in the form of hand-designed image features or image preprocessing. Thus, we expect our algorithm to generalize effectively to arbitrary image types.
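The "pair the affinity graph with any standard partitioning algorithm" step can be illustrated with the simplest partitioner: threshold the edges and take connected components with union-find. The affinity maps here stand in for a trained CN's outputs; the layout conventions (`aff_x` links horizontal neighbors, `aff_y` vertical ones) are assumptions of this sketch, not the paper's data format.

```python
import numpy as np

def segment_from_affinities(aff_x, aff_y, thresh=0.5):
    """Partition an h-by-w pixel grid given affinity maps:
    aff_x[i, j] links (i, j)-(i, j+1); aff_y[i, j] links (i, j)-(i+1, j).
    Edges above `thresh` are merged via union-find (connected components)."""
    h, w = aff_y.shape[0] + 1, aff_x.shape[1] + 1
    parent = list(range(h * w))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for i in range(h):                      # horizontal edges
        for j in range(w - 1):
            if aff_x[i, j] > thresh:
                union(i * w + j, i * w + j + 1)
    for i in range(h - 1):                  # vertical edges
        for j in range(w):
            if aff_y[i, j] > thresh:
                union(i * w + j, (i + 1) * w + j)
    return np.array([find(k) for k in range(h * w)]).reshape(h, w)
```

More sophisticated partitioners (normalized cuts, watershed) can consume the same affinity maps, which is the modularity the abstract emphasizes.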


International Conference on Computer Vision | 2007

Supervised Learning of Image Restoration with Convolutional Networks

Viren Jain; Joseph F. Murray; Fabian Roth; Srinivas C. Turaga; V. Zhigulin; Kevin L. Briggman; Moritz Helmstaedter; Winfried Denk; H.S. Seung

Convolutional networks have achieved a great deal of success in high-level vision problems such as object recognition. Here we show that they can also be used as a general method for low-level image processing. As an example of our approach, convolutional networks are trained using gradient learning to solve the problem of restoring noisy or degraded images. For our training data, we have used electron microscopic images of neural circuitry with ground truth restorations provided by human experts. On this dataset, Markov random field (MRF), conditional random field (CRF), and anisotropic diffusion algorithms perform about the same as simple thresholding, but superior performance is obtained with a convolutional network containing over 34,000 adjustable parameters. When restored by this convolutional network, the images are clean enough to be used for segmentation, whereas the other approaches fail in this respect. We do not believe that convolutional networks are fundamentally superior to MRFs as a representation for image processing algorithms. On the contrary, the two approaches are closely related. But in practice, it is possible to train complex convolutional networks, while even simple MRF models are hindered by problems with Bayesian learning and inference procedures. Our results suggest that high model complexity is the single most important factor for good performance, and this is possible with convolutional networks.


IEEE Transactions on Reliability | 2002

Improved disk-drive failure warnings

Gordon F. Hughes; Joseph F. Murray; Kenneth Kreutz-Delgado; Charles Elkan

Improved methods are proposed for disk-drive failure prediction. The SMART (self-monitoring and reporting technology) failure-prediction system is currently implemented in disk drives. Its purpose is to predict the near-term failure of an individual hard disk drive and issue a backup warning to prevent data loss. Two experimental tests of SMART show only moderate accuracy at low false-alarm rates. (A rate of 0.2% of total drives per year implies that 20% of drive returns would be good drives, relative to an ≈1% annual drive failure rate.) This requirement for very low false-alarm rates is well known in medical diagnostic tests for rare diseases, and methodology used there suggests ways to improve SMART. Two improved SMART algorithms are proposed. They use the SMART internal drive-attribute measurements in present drives. The present warning algorithm based on maximum error thresholds is replaced by distribution-free statistical hypothesis tests. These improved algorithms are computationally simple enough to be implemented in drive microprocessor firmware code. They require only integer sort operations to put several hundred attribute values in rank order. Some tens of these ranks are added up, and the SMART warning is issued if the sum exceeds a prestored limit. These new algorithms were tested on 3744 drives of 2 models. They gave 3-4 times higher correct prediction accuracy than error thresholds on will-fail drives, at a 0.2% false-alarm rate. The highest accuracies achievable are modest (40%-60%). Care was taken to test will-fail drive prediction accuracy on data independent of the algorithm design data. Additional work is needed to verify and apply these algorithms in actual drive design. They can also be useful in drive failure analysis engineering. It might be possible to screen drives in manufacturing using SMART attributes. Marginal drives might be detected before substantial final test time is invested in them, thereby decreasing manufacturing cost and possibly decreasing overall field failure rates.
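The rank-sum warning described above (rank several hundred attribute values with an integer sort, add up some of the ranks, compare against a prestored limit) can be sketched as a Wilcoxon-style test. This is an illustrative reduction, not the paper's firmware algorithm; the choice of limit and whether high or low ranks indicate impending failure depend on the attribute.

```python
import numpy as np

def smart_warning(reference, recent, limit):
    """Distribution-free rank-sum check: pool recent attribute samples with a
    healthy reference set, rank everything with an integer sort, and warn if
    the rank sum of the recent samples exceeds a prestored limit.
    (Direction is an assumption here: larger attribute values = worse.)"""
    pooled = np.concatenate([reference, recent])
    ranks = pooled.argsort().argsort() + 1        # 1-based integer ranks
    rank_sum = int(ranks[len(reference):].sum())  # ranks of the recent samples
    return rank_sum, rank_sum > limit
```

In the paper's setting the limit would be prestored in firmware, chosen offline for a target false-alarm rate.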


Asilomar Conference on Signals, Systems and Computers | 2001

An improved FOCUSS-based learning algorithm for solving sparse linear inverse problems

Joseph F. Murray; Kenneth Kreutz-Delgado

We develop an improved algorithm for solving blind sparse linear inverse problems where both the dictionary (possibly overcomplete) and the sources are unknown. The algorithm is derived in the Bayesian framework by the maximum a posteriori method, with the choice of prior distribution restricted to the class of concave/Schur-concave functions, which has been shown previously to be a sufficient condition for sparse solutions. This formulation leads to a constrained and regularized minimization problem which can be solved in part using the FOCUSS (focal underdetermined system solver) algorithm for vector selection. We introduce three key improvements in the algorithm: an efficient way of adjusting the regularization parameter; column normalization that restricts the learned dictionary; reinitialization to escape from local optima. Experiments were performed using synthetic data with matrix sizes up to 64×128; the algorithm solves the blind identification problem, recovering both the dictionary and the sparse sources. The improved algorithm is much more accurate than the original FOCUSS-dictionary learning algorithm when using large matrices. We also test our algorithm on natural images, and show that a learned overcomplete representation can encode the data more efficiently than a complete basis at the same level of accuracy.


Signal Processing Systems | 2006

Learning Sparse Overcomplete Codes for Images

Joseph F. Murray; Kenneth Kreutz-Delgado

Images can be coded accurately using a sparse set of vectors from a learned overcomplete dictionary, with potential applications in image compression and feature selection for pattern recognition. We present a survey of algorithms that perform dictionary learning and sparse coding and make three contributions. First, we compare our overcomplete dictionary learning algorithm (FOCUSS-CNDL) with overcomplete independent component analysis (ICA). Second, noting that once a dictionary has been learned in a given domain the problem becomes one of choosing the vectors to form an accurate, sparse representation, we compare a recently developed algorithm (sparse Bayesian learning with adjustable variance Gaussians, SBL-AVG) to well known methods of subset selection: matching pursuit and FOCUSS. Third, noting that in some cases it may be necessary to find a non-negative sparse coding, we present a modified version of the FOCUSS algorithm that can find such non-negative codings. Efficient parallel implementations in VLSI could make these algorithms more practical for many applications.
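Of the subset-selection methods compared above, matching pursuit is the simplest to sketch: greedily pick the dictionary atom most correlated with the current residual and subtract its contribution. A minimal version, assuming unit-norm atoms (SBL-AVG and FOCUSS follow very different update rules and are not shown):

```python
import numpy as np

def matching_pursuit(D, y, n_nonzero=5):
    """Greedy subset selection against dictionary D: at each step pick the
    atom most correlated with the residual and peel off its contribution.
    Assumes the columns of D have unit norm."""
    x = np.zeros(D.shape[1])
    r = y.astype(float).copy()
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(D.T @ r)))   # best-matching atom
        coef = float(D[:, k] @ r)
        x[k] += coef
        r -= coef * D[:, k]                   # update residual
    return x, r
```

With an overcomplete learned dictionary, the same loop applies unchanged; only the contents of `D` differ.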


ACM Transactions on Storage | 2005

Reliability and security of RAID storage systems and D2D archives using SATA disk drives

Gordon F. Hughes; Joseph F. Murray

Information storage reliability and security is addressed by using personal computer disk drives in enterprise-class nearline and archival storage systems. The low cost of these serial ATA (SATA) PC drives is a tradeoff against drive-reliability design and demonstration test levels, which are higher in the more expensive SCSI and Fibre Channel drives. This article discusses this tradeoff: fewer higher-capacity SATA drives are needed for a given system storage capacity, which further reduces cost but allows higher drive failure rates, while additional storage-system redundancy and drive failure prediction can maintain system data integrity using less reliable drives. RAID stripe failure probability is calculated using typical ATA and SCSI drive failure rates, for single- and double-parity data reconstruction failure and for failure due to unrecoverable drive block errors. Reliability improvement from drive failure prediction is also calculated and can be significant. Today's SATA drive specifications for unrecoverable block errors appear to allow stripe reconstruction failure, and additional in-drive parity blocks are suggested as a solution. The possibility of using low-cost disks for data backup and archiving is discussed, replacing higher-cost magnetic tape. This requires significantly better RAID stripe failure probability, and suitable drive technology alternatives are discussed. The failure rate of nonoperating drives is estimated using failure analysis results from ≈4000 drives. Nonoperating RAID stripe failure rates are thereby estimated. User data security needs to be assured in addition to reliability, and to extend past the point where physical control of drives is lost, such as when drives are removed from systems for data vaulting, repair, sale, or discard. Today, over a third of resold drives contain unerased user data. Security is proposed via the existing SATA drive secure-erase command, via the existing SATA drive password commands, or by data encryption. Finally, backup and archival disk storage is compared to magnetic tape, a technology with a proven reliability record over the full half-century of digital data storage. In contrast to disk systems, tape archives are not vulnerable to tape-transport failure modes; only failure modes in the archived tapes and reels themselves will make data unrecoverable.
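The flavor of RAID stripe failure calculation the abstract mentions can be illustrated with a rough single-parity rebuild model: data is lost if a second drive fails during the rebuild window, or if an unrecoverable read error occurs while reading the surviving drives end to end. The model structure and parameter names below are illustrative assumptions, not the paper's exact calculation.

```python
def stripe_rebuild_failure(n_drives, afr, rebuild_hours, capacity_bytes, uer):
    """Rough single-parity RAID model. After one drive fails, the stripe is
    lost if a second drive fails during the rebuild window (afr = annual
    failure rate per drive) or an unrecoverable read error (uer = errors per
    bit read) occurs while reading the n-1 surviving drives in full."""
    hours_per_year = 8760.0
    p_second = 1 - (1 - afr * rebuild_hours / hours_per_year) ** (n_drives - 1)
    bits_read = (n_drives - 1) * capacity_bytes * 8
    p_ure = 1 - (1 - uer) ** bits_read
    # Stripe survives only if neither event occurs.
    return 1 - (1 - p_second) * (1 - p_ure)
```

Even this toy model reproduces the abstract's qualitative point: at specified SATA unrecoverable-error rates, the error term can dominate and make single-parity reconstruction failure plausible, motivating extra parity.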


Neural Computation | 2007

Visual Recognition and Inference Using Dynamic Overcomplete Sparse Learning

Joseph F. Murray; Kenneth Kreutz-Delgado

We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.


IEEE Signal Processing Society Workshop on Machine Learning for Signal Processing | 2004

Sparse image coding using learned overcomplete dictionaries

Joseph F. Murray; Kenneth Kreutz-Delgado

Images can be coded accurately using a sparse set of vectors from an overcomplete dictionary, with potential applications in image compression and feature selection for pattern recognition. We discuss algorithms that perform sparse coding and make three contributions. First, we compare our overcomplete dictionary learning algorithm (FOCUSS-CNDL) with overcomplete independent component analysis (ICA). Second, noting that once a dictionary has been learned in a given domain the problem becomes one of choosing the vectors to form an accurate, sparse representation, we compare a recently developed algorithm (sparse Bayesian learning with adjustable variance Gaussians) to well known methods of subset selection: matching pursuit and FOCUSS. Third, noting that in some cases it may be necessary to find a non-negative sparse coding, we present a modified version of the FOCUSS algorithm that can find such non-negative codings.


Asilomar Conference on Signals, Systems and Computers | 2010

Parameterized deformation sparse coding via tree-structured parameter search

Brandon Burdge; Kenneth Kreutz-Delgado; Joseph F. Murray

Representing transformation invariances in data is known to be valuable in many domains. We consider a method by which prior knowledge about the structure of such invariances can be exploited, using a novel algorithm for sparse coding across a learned dictionary of atoms combined with a parameterized deformation function that captures invariant structure. We demonstrate its value both for reconstructing signals and for improved unsupervised grouping based on invariant sparse representations.

Collaboration


Dive into Joseph F. Murray's collaborations.

Top Co-Authors

Brandon Burdge

University of California

Fabian Roth

Massachusetts Institute of Technology

Srinivas C. Turaga

Massachusetts Institute of Technology

Viren Jain

Massachusetts Institute of Technology

Bhaskar D. Rao

University of California
