Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Javier Bernal is active.

Publication


Featured research published by Javier Bernal.


Cytometry Part A | 2011

Comparison of Segmentation Algorithms for Fluorescence Microscopy Images of Cells

Alden A. Dima; John T. Elliott; James J. Filliben; Michael Halter; Adele P. Peskin; Javier Bernal; Marcin Kociolek; Mary Brady; Hai C. Tang; Anne L. Plant

The analysis of fluorescence microscopy images of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in segmentation results was observed, due solely to differences in imaging conditions or the application of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree to which a cell object is underestimated or overestimated. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with the imaging conditions that determine the sharpness of cell edges and with the geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability.
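The paper's bivariate similarity index is not reproduced here, but its key idea, scoring underestimation and overestimation of a cell object separately rather than with a single overlap number, can be sketched with a hypothetical pair of fractions computed from binary masks:

```python
import numpy as np

def under_over_indices(truth, estimate):
    """Return an illustrative (miss, spill) pair for two boolean masks.

    miss  = fraction of the true object NOT covered by the estimate
            (underestimation);
    spill = fraction of the estimated object lying OUTSIDE the true
            object (overestimation).
    These are stand-ins for the paper's bivariate similarity index,
    not its actual definition.
    """
    truth = np.asarray(truth, dtype=bool)
    estimate = np.asarray(estimate, dtype=bool)
    miss = np.count_nonzero(truth & ~estimate) / np.count_nonzero(truth)
    spill = np.count_nonzero(estimate & ~truth) / np.count_nonzero(estimate)
    return miss, spill

# Toy 1-D "masks": true object covers indices 2..7, estimate covers 4..9.
truth = np.zeros(12, dtype=bool); truth[2:8] = True
est = np.zeros(12, dtype=bool); est[4:10] = True
print(under_over_indices(truth, est))  # both fractions are 2/6 here
```

A perfect segmentation gives (0, 0); a shrunken mask raises the first component, a bloated mask the second, so the two failure modes stay distinguishable.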


Computers & Geosciences | 2000

LINDENS: a program for lineament length and density analysis

Antonio M. Casas; Angel L. Cortés; Adolfo Maestro; M.Asunción Soriano; Andrés Riaguas; Javier Bernal

Analysis of lineaments from satellite images normally includes the determination of their orientation and density. The spatial variation in the orientation and/or number of lineaments must be obtained by means of a network of cells, the lineaments included in each cell being analysed separately. The program presented in this work, LINDENS, allows the density of lineaments (number of lineaments per km2 and length of lineaments per km2) to be estimated. It also provides a tool for classifying the lineaments contained in different cells, so that their orientation can be represented in frequency histograms and/or rose diagrams. The input file must contain the planar coordinates of the beginning and end of each lineament. The density analysis is done by creating a network of square cells, and counting the number of lineaments that are contained within each cell, that have one of their ends within the cell or that cross-cut the cell boundary. The lengths of lineaments are then calculated. To obtain a representative density map the cell size must be fixed according to: (1) the average lineament length; (2) the distance between the lineaments; and (3) the boundaries of zones with low densities due to lithology or outcrop features. An example from the Neogene Duero Basin (Northern Spain) is provided to test the reliability of the density maps obtained with different cell sizes.
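The cell-counting step described above can be sketched as follows. This is a simplified illustration, not the LINDENS program: a lineament is credited to every cell containing one of its endpoints, and the boundary-crossing case handled by LINDENS is omitted for brevity.

```python
import math
from collections import defaultdict

def lineament_density(segments, cell_size):
    """Count lineaments per square cell by endpoint membership.

    segments: list of (x1, y1, x2, y2) planar coordinates of the
    beginning and end of each lineament, as in the program's input file.
    Returns {(i, j): count} with i = floor(x / cell_size), j likewise.
    """
    counts = defaultdict(set)
    for k, (x1, y1, x2, y2) in enumerate(segments):
        for x, y in ((x1, y1), (x2, y2)):
            cell = (math.floor(x / cell_size), math.floor(y / cell_size))
            counts[cell].add(k)          # a set avoids double-counting
    return {cell: len(ids) for cell, ids in counts.items()}

segs = [(0.5, 0.5, 1.5, 0.5),   # ends in cells (0, 0) and (1, 0)
        (0.2, 0.8, 0.9, 0.1)]   # entirely inside cell (0, 0)
print(lineament_density(segs, cell_size=1.0))
```

Dividing each count by the cell area (here 1 km² if coordinates are in km) gives the lineaments-per-km² density map; the paper's point about cell size is that this map only becomes representative once `cell_size` is chosen relative to average lineament length and spacing.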


17th International Symposium on Automation and Robotics in Construction | 2000

Field Demonstration of Laser Scanning for Excavation Measurement

Geraldine S. Cheok; Robert R. Lipman; Christopher Witzgall; Javier Bernal; William C. Stone

The use of a scanning laser to measure terrain changes due to excavation at a construction site is described. The objective at this phase of the project is to develop the tools necessary to measure terrain changes in real-time. This paper focuses on adaptations required to extend previously developed scanning procedures and post-processing algorithms for an indoor laboratory environment to a large outdoor area such as a construction site. The challenges encountered, techniques that worked or didn’t work, and lessons learned are discussed.


Information Sciences | 2015

A dual representation simulated annealing algorithm for the bandwidth minimization problem on graphs

Jose Torres-Jimenez; Idelfonso Izquierdo-Marquez; Alberto Garcia-Robledo; Aldo Gonzalez-Gomez; Javier Bernal; Raghu N. Kacker

Highlights: a dual representation with a neighborhood composed of three perturbation operators; tuning of DRSA using a full factorial design; a benchmark of 113 instances, with 31 improved and the remaining 82 matched; a Wilcoxon signed-rank test comparing DRSA with GRASP-PR, SA, and VNS.

The bandwidth minimization problem on graphs (BMPG) consists of labeling the vertices of a graph with the integers from 1 to n (n is the number of vertices) such that the maximum absolute difference between labels of adjacent vertices is as small as possible. In this work we develop a DRSA (Dual Representation Simulated Annealing) algorithm to solve BMPG. The main novelty of DRSA is an internal dual representation of the problem used in conjunction with a neighborhood function composed of three perturbation operators. The evaluation function of DRSA is able to discriminate among solutions of equal bandwidth by taking into account all absolute differences between labels of adjacent vertices. For better performance, the parameters of DRSA and the probabilities for selecting the perturbation operators were tuned by extensive experimentation carried out using a full factorial design. The benchmark for the proposed algorithm consists of 113 instances of the Harwell-Boeing sparse matrix collection; the results of DRSA include 31 new upper bounds and matches of the 82 remaining best-known solutions (22 of which are optimal). We used the Wilcoxon signed-rank test to compare the best solutions produced by DRSA against those produced by three state-of-the-art methods: greedy randomized adaptive search procedure with path relinking, simulated annealing, and variable neighborhood search; according to these comparisons, the quality of the solutions obtained with DRSA is significantly better than that obtained with the other three algorithms.
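The objective being minimized, and the tie-breaking idea of the evaluation function described above, are simple enough to state in a few lines. The tie-breaking cost below is an illustrative reading of "taking into account all absolute differences", not the paper's exact formula:

```python
def bandwidth(edges, label):
    """Maximum |label[u] - label[v]| over all edges: the quantity BMPG
    minimizes. `label` maps each vertex to an integer in 1..n."""
    return max(abs(label[u] - label[v]) for u, v in edges)

def tie_breaking_cost(edges, label):
    """(bandwidth, sum of all absolute label differences): compared
    lexicographically, the second component discriminates among
    labelings of equal bandwidth, in the spirit of DRSA's evaluation
    function (illustrative, not the published formula)."""
    diffs = [abs(label[u] - label[v]) for u, v in edges]
    return max(diffs), sum(diffs)

# Path graph 0-1-2-3 labeled in order: every edge difference is 1,
# which is optimal for a path.
edges = [(0, 1), (1, 2), (2, 3)]
label = {0: 1, 1: 2, 2: 3, 3: 4}
print(bandwidth(edges, label))          # 1
print(tie_breaking_cost(edges, label))  # (1, 3)
```

A simulated-annealing search would repeatedly perturb `label` (e.g. by swapping two vertices' labels, one plausible perturbation operator) and accept or reject moves by comparing such costs.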


Applied Mathematics Letters | 2005

A numerical method for mass spectral data analysis

Anthony J. Kearsley; William E. Wallace; Javier Bernal; Charles M. Guttman

The new generation of mass spectrometers produces an astonishing amount of high-quality data in a brief period of time, leading to inevitable data analysis bottlenecks. Automated data analysis algorithms are required for rapid and repeatable processing of mass spectra containing hundreds of peaks, the part of the spectra containing information. New data processing algorithms must work with minimal user input, both to save operator time and to eliminate inevitable operator bias. Toward this end an accurate mathematical algorithm is presented that automatically locates and calculates the area beneath peaks. The promising numerical performance of this algorithm applied to raw data is presented.
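The locate-and-integrate step can be sketched generically: find contiguous runs of the signal above a threshold and integrate each run with the trapezoid rule. This is a stand-in for the paper's algorithm, which is not reproduced here (in particular its baseline handling and accuracy analysis are omitted):

```python
import numpy as np

def peak_areas(x, y, threshold):
    """Return the trapezoid-rule area of each contiguous run where
    y exceeds threshold. A generic sketch of automated peak location
    and area calculation for a sampled spectrum (x, y)."""
    areas, start = [], None
    for i, flag in enumerate(y > threshold):
        if flag and start is None:
            start = i                    # a peak region begins
        elif not flag and start is not None:
            xs, ys = x[start:i], y[start:i]
            areas.append(0.5 * np.sum((ys[1:] + ys[:-1]) * (xs[1:] - xs[:-1])))
            start = None                 # the peak region ends
    if start is not None:                # peak running to the last sample
        xs, ys = x[start:], y[start:]
        areas.append(0.5 * np.sum((ys[1:] + ys[:-1]) * (xs[1:] - xs[:-1])))
    return areas

x = np.linspace(0.0, 10.0, 1001)
y = np.exp(-0.5 * ((x - 3.0) / 0.1) ** 2)   # one synthetic Gaussian peak
print(peak_areas(x, y, threshold=0.01))     # one area, near 0.1 * sqrt(2*pi)
```

Because the procedure needs only a threshold, it runs without per-spectrum operator input, which is the point the abstract makes about removing operator bias.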


Computer Vision and Pattern Recognition | 2015

A fast algorithm for elastic shape distances between closed planar curves

Günay Doğan; Javier Bernal; Charles Hagwood

Effective computational tools for shape analysis are needed in many areas of science and engineering. We address this and propose a new fast iterative algorithm to compute the elastic geodesic distance between shapes of closed planar curves. The original algorithm for this has cubic time complexity with respect to the number of nodes per curve. Hence it is not suitable for large shape data sets. We aim for large-scale shape analysis and thus propose an iterative algorithm based on the original one but with quadratic time complexity. In practice, we observe subquadratic, almost linear running times, and that our algorithm scales very well with large numbers of nodes. The key to our algorithm is the decoupling of the optimization for the starting point and rotation from that of the reparametrization, and the development of fast dynamic programming and iterative nonlinear constrained optimization algorithms that work in tandem to compute optimal reparametrizations fast.
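The decoupling mentioned above, separating the starting-point/rotation search from the reparametrization, can be illustrated for the rigid part alone. The sketch below searches cyclic shifts of the starting point and solves for the optimal rotation in closed form (orthogonal Procrustes); the elastic reparametrization step, which is the paper's main subject, is not shown:

```python
import numpy as np

def align_closed_curves(P, Q):
    """For each cyclic shift of Q's starting point, find the optimal
    rotation in closed form and keep the best (distance, shift) pair.
    P, Q: (N, 2) arrays of matched samples of centered closed curves."""
    best_d, best_shift = np.inf, 0
    for s in range(len(Q)):
        Qs = np.roll(Q, s, axis=0)
        U, _, Vt = np.linalg.svd(Qs.T @ P)      # Procrustes via SVD
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                # enforce a proper rotation
            R = Vt.T @ np.diag([1.0, -1.0]) @ U.T
        d = np.linalg.norm(P - Qs @ R.T)
        if d < best_d:
            best_d, best_shift = d, s
    return best_d, best_shift

t = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
P = np.column_stack([2.0 * np.cos(t), np.sin(t)])   # sampled ellipse
theta = 0.7                                         # arbitrary rotation
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
Q = np.roll(P @ Rot.T, 5, axis=0)                   # rotated, shifted start
print(align_closed_curves(P, Q))                    # distance near zero
```

Each shift costs one small SVD, so this part is cheap; the expensive part in practice is the reparametrization, which is exactly where the paper's quadratic-time iteration improves on the cubic original.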


IEEE Transactions on Medical Imaging | 2012

Evaluation of Segmentation Algorithms on Cell Populations Using CDF Curves

Charles Hagwood; Javier Bernal; Michael Halter; John T. Elliott

Cell segmentation is a critical step in the analysis pipeline for most imaging cytometry experiments and evaluating the performance of segmentation algorithms is important for aiding the selection of segmentation algorithms. Four popular algorithms are evaluated based on their cell segmentation performance. Because segmentation involves the classification of pixels belonging to regions within the cell or belonging to background, these algorithms are evaluated based on their total misclassification error. Misclassification error is particularly relevant in the analysis of quantitative descriptors of cell morphology involving pixel counts, such as projected area, aspect ratio and diameter. Since the cumulative distribution function captures completely the stochastic properties of a population of misclassification errors it is used to compare segmentation performance.
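One simple way to compare two populations of misclassification errors through their CDF curves is the largest vertical gap between the empirical CDFs (the Kolmogorov-Smirnov statistic). This is an illustrative summary of the CDF-based comparison, not the paper's full evaluation procedure:

```python
import numpy as np

def max_cdf_gap(a, b):
    """Largest vertical gap between the empirical CDFs of two samples
    of per-cell misclassification errors: 0 for identical samples,
    up to 1 for fully separated ones."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    grid = np.union1d(a, b)                         # all jump locations
    Fa = np.searchsorted(a, grid, side="right") / len(a)
    Fb = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(Fa - Fb))

errs_a = [0.02, 0.03, 0.05, 0.08]   # algorithm A's per-cell errors
errs_b = [0.04, 0.06, 0.09, 0.12]   # algorithm B's per-cell errors
print(max_cdf_gap(errs_a, errs_b))  # 0.5
```

Because the empirical CDF captures the whole error distribution, comparisons like this see differences in spread and tails that a comparison of mean errors alone would miss.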


IEEE Transactions on Medical Imaging | 2013

Testing Equality of Cell Populations Based on Shape and Geodesic Distance

Charles Hagwood; Javier Bernal; Michael Halter; John T. Elliott; Tegan Brennan

Image cytometry has emerged as a valuable in vitro screening tool and advances in automated microscopy have made it possible to readily analyze large cellular populations of image data. The purpose of this paper is to illustrate the viability of using cell shape to test equality of cell populations based on image data. Shape space theory is reviewed, from which differences between shapes can be quantified in terms of geodesic distance. Several multivariate nonparametric statistical hypothesis tests are adapted to test equality of cell populations. It is illustrated that geodesic distance can be a better feature than cell spread area and roundness in distinguishing between cell populations. Tests based on geodesic distance are able to detect natural perturbations of cells, whereas Kolmogorov-Smirnov tests based on area and roundness are not.
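A minimal example of the kind of nonparametric two-sample test being adapted is a permutation test on a scalar feature, here imagined as each cell's geodesic distance to a reference shape. This is a generic sketch under that assumption, not one of the paper's specific multivariate tests:

```python
import random

def permutation_test(a, b, trials=2000, seed=0):
    """Two-sample permutation test on the difference of means of a
    scalar feature (e.g. geodesic distance to a reference shape).
    Returns an approximate p-value for the null hypothesis that the
    two cell populations share one distribution."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)                     # relabel cells at random
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(a) - sum(pb) / len(b)) >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)            # add-one p-value estimate

# Two clearly separated populations of a shape feature:
print(permutation_test([0.1] * 10, [0.9] * 10))  # well below 0.05
```

The appeal of such tests in this setting is that they assume nothing about the feature's distribution, which matters because geodesic distances between shapes rarely follow a convenient parametric form.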


Computer Vision and Pattern Recognition | 2016

Fast Dynamic Programming for Elastic Registration of Curves

Javier Bernal; Günay Doğan; Charles Hagwood

Curve registration problems in data analysis and computer vision can often be reduced to the problem of matching two functions defined on an interval. Dynamic Programming (DP) is an effective approach to solve this problem. In this paper, we propose a DP algorithm that runs in O(N) time to compute optimal diffeomorphisms for elastic registration of curves with N nodes. This algorithm contrasts favorably with other DP algorithms used for this problem: the commonly used algorithm of quadratic time complexity, and the algorithm that guarantees a globally optimal solution with O(N^4) time complexity. Key to our computational efficiency is the savings achieved by reducing our search space, focusing on thin strips around graphs of estimates of the optimal diffeomorphism. Estimates and strips are obtained with a multigrid approach: an optimal diffeomorphism obtained from a lower resolution grid using DP is progressively projected to grids of higher resolution until full resolution is attained. Additionally, our DP algorithm is designed so that it can handle nonuniformly discretized curves. This enables us to realize further savings in computations, since in the case of complicated curves requiring large numbers of nodes for a high-fidelity representation, we can distribute curve nodes adaptively, focusing nodes in parts of high variation. We demonstrate the effectiveness of our DP algorithm on several registration problems in elastic shape analysis and functional data analysis.
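The idea of restricting DP to a thin strip can be illustrated with a simpler relative of the paper's algorithm: a monotone alignment of two sampled functions, with the search confined to a band |i - j| <= band around the diagonal. The band plays the role of the thin strips described above; the paper's elastic matching cost and its multigrid estimation of the strip are not reproduced:

```python
import numpy as np

def banded_dp_alignment(f, g, band):
    """Minimal-cost monotone alignment of two equal-length sampled
    functions by DP with squared mismatch cost, restricted to the
    strip |i - j| <= band. Cells outside the strip stay at infinity,
    so only O(N * band) cells are ever filled."""
    n, m = len(f), len(g)
    D = np.full((n, m), np.inf)
    D[0, 0] = (f[0] - g[0]) ** 2
    for i in range(n):
        for j in range(max(0, i - band), min(m, i + band + 1)):
            if i == 0 and j == 0:
                continue
            best = np.inf                # best predecessor in the strip
            for pi, pj in ((i - 1, j), (i, j - 1), (i - 1, j - 1)):
                if pi >= 0 and pj >= 0 and D[pi, pj] < best:
                    best = D[pi, pj]
            D[i, j] = best + (f[i] - g[j]) ** 2
    return D[n - 1, m - 1]

# g lags f by one sample; a narrow band already finds the cost-1 path.
print(banded_dp_alignment([0, 1, 2, 3], [0, 0, 1, 2], band=1))  # 1.0
```

With a fixed band width the table has O(N) cells rather than O(N^2), which is the mechanism behind the linear running time claimed above; the harder part, solved in the paper, is obtaining a strip that reliably contains the optimal path.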


Journal of Research of the National Institute of Standards and Technology | 2004

Integer Representation of Decimal Numbers for Exact Computations

Javier Bernal; Christoph J. Witzgall

A scheme is presented and software is documented for representing as integers input decimal numbers that have been stored in a computer as double precision floating point numbers, and for carrying out multiplications, additions and subtractions based on these numbers in an exact manner. The input decimal numbers must not have more than nine digits to the left of the decimal point. The decimal fractions of their floating point representations are first rounded off at a prespecified location, no more than nine digits past the decimal point. In addition, for each input number, the total number of digits from its leftmost digit to the round-off location must not exceed fourteen.
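The core idea, round each input at a fixed decimal place and work with the resulting scaled integers, can be shown in a few lines. This sketch relies on Python's arbitrary-precision integers rather than the digit-limited representation the documented software builds, so the nine- and fourteen-digit limits above do not apply to it:

```python
def to_scaled_int(x, frac_digits):
    """Round x at frac_digits places after the decimal point and return
    the integer round(x * 10**frac_digits). Sums and differences of
    such integers are exact; a product of two carries scale
    10**(2 * frac_digits) and must be interpreted accordingly."""
    return round(x * 10 ** frac_digits)

# 0.1 + 0.2 != 0.3 in binary floating point, but the scaled integers agree:
a, b = to_scaled_int(0.1, 9), to_scaled_int(0.2, 9)
print(0.1 + 0.2 == 0.3)                   # False
print(a + b == to_scaled_int(0.3, 9))     # True
```

This is the standard trick behind exact geometric predicates: once inputs are integers, addition, subtraction and multiplication introduce no rounding error at all.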

Collaboration


Dive into Javier Bernal's collaborations.

Top Co-Authors

Charles Hagwood
National Institute of Standards and Technology

Geraldine S. Cheok
National Institute of Standards and Technology

Anthony J. Kearsley
National Institute of Standards and Technology

Charles M. Guttman
National Institute of Standards and Technology

Christoph J. Witzgall
National Institute of Standards and Technology

Günay Doğan
National Institute of Standards and Technology

John T. Elliott
National Institute of Standards and Technology

Michael Halter
National Institute of Standards and Technology

Robert R. Lipman
National Institute of Standards and Technology

William C. Stone
National Institute of Standards and Technology