Publications


Featured research published by José-Jesús Fernández.


Bioinformatics | 2011

Fast tomographic reconstruction on multicore computers

J. I. Agulleiro; José-Jesús Fernández

Tomo3D implements a multithreaded, vectorized approach to tomographic reconstruction that takes full advantage of the computing power of modern multicore computers. Full-resolution tomograms are generated at high speed on standard computers with no special system requirements. Tomo3D implements the most common reconstruction methods, namely weighted back-projection (WBP) and the simultaneous iterative reconstruction technique (SIRT). It proves competitive with current graphics processing unit (GPU) solutions in terms of processing time, on the order of a few seconds with WBP or minutes with SIRT. The program is compatible with standard packages, which allows easy integration into the electron tomography workflow.
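
As a rough illustration of the SIRT scheme named above, here is a minimal NumPy sketch of the simultaneous iterative update on a toy linear system. The projection operator A below is a random stand-in; Tomo3D's actual multithreaded, vectorized implementation is not reproduced here.

```python
# Minimal SIRT sketch (illustrative only). A is a toy stand-in for the
# tomographic projector; Tomo3D's real code is multithreaded/vectorized C++.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_rays = 64, 96               # hypothetical sizes
x_true = rng.random(n_vox)           # "object" to recover
A = rng.random((n_rays, n_vox))      # stand-in projection operator
b = A @ x_true                       # simulated projection data

# SIRT update: x <- x + C A^T R (b - A x), with R and C the inverse
# row sums and column sums of A, respectively.
R = 1.0 / A.sum(axis=1)              # per-ray normalization
C = 1.0 / A.sum(axis=0)              # per-voxel normalization

x = np.zeros(n_vox)
for _ in range(200):
    x += C * (A.T @ (R * (b - A @ x)))

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```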


Journal of Structural Biology | 2008

Sharpening high resolution information in single particle electron cryomicroscopy

José-Jesús Fernández; Daniel Luque; José R. Castón; José L. Carrascosa

Advances in single particle electron cryomicroscopy have made it possible to routinely elucidate the structure of biological specimens at subnanometer resolution. At this resolution, secondary structure elements are discernible by their signatures. However, identification and interpretation of high resolution structural features are hindered by the contrast loss caused by experimental and computational factors. This contrast loss is traditionally modeled by a Gaussian decay of structure factors with a temperature factor, or B-factor. Standard restoration procedures usually sharpen the experimental maps either by applying a Gaussian function with an inverse ad hoc B-factor, or according to the amplitude decay of a reference structure. EM-BFACTOR is a program designed to facilitate the use of the method for objective B-factor determination and contrast restoration introduced by Rosenthal and Henderson [Rosenthal, P.B., Henderson, R., 2003. Optimal determination of particle orientation, absolute hand, and contrast loss in single-particle electron cryomicroscopy. J. Mol. Biol. 333, 721-745]. The program has been developed to interact with the most common packages for single particle electron cryomicroscopy. The sharpening method has been further investigated using EM-BFACTOR, with the conclusion that it helps to unravel the high resolution molecular features concealed in experimental density maps, thereby making them better suited for interpretation. Therefore, the method may facilitate the analysis of experimental data in high resolution single particle electron cryomicroscopy.
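
The Gaussian decay model mentioned above lends itself to a compact illustration. Below is a minimal NumPy sketch that sharpens a density map by multiplying its Fourier amplitudes by exp(+B s^2 / 4), the inverse of the modeled exp(-B s^2 / 4) falloff. The sharpen_map helper, voxel size and B value are hypothetical, and EM-BFACTOR's objective estimation of B is not shown.

```python
# Sketch of B-factor sharpening of a cubic density map. The B value and voxel
# size are illustrative inputs; EM-BFACTOR estimates B objectively instead.
import numpy as np

def sharpen_map(density, voxel_size, b_factor):
    """Multiply Fourier amplitudes by exp(+B * s^2 / 4), s in 1/Angstrom."""
    n = density.shape[0]
    freqs = np.fft.fftfreq(n, d=voxel_size)          # cycles per Angstrom
    sx, sy, sz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    s2 = sx**2 + sy**2 + sz**2
    f = np.fft.fftn(density)
    f *= np.exp(b_factor * s2 / 4.0)                 # inverse of the exp(-B s^2 / 4) decay
    return np.real(np.fft.ifftn(f))

# Toy usage: 64^3 voxels at 1.2 A/voxel, B = 200 A^2 (illustrative values)
toy_map = np.random.default_rng(1).standard_normal((64, 64, 64))
sharpened = sharpen_map(toy_map, voxel_size=1.2, b_factor=200.0)
```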


Computer and Information Technology | 2010

Improving the Performance of the Sparse Matrix Vector Product with GPUs

Francisco Vázquez; Gloria Ortega; José-Jesús Fernández; Ester M. Garzón

Sparse matrices are involved in linear systems, eigensystems and partial differential equations from a wide spectrum of scientific and engineering disciplines. Hence, the sparse matrix vector product (SpMV) is considered a key operation in engineering and scientific computing, and its optimization is highly relevant for these applications. However, the irregular computation involved in SpMV prevents optimal exploitation of computational architectures when the sparse matrices are very large. Graphics Processing Units (GPUs) have recently emerged as platforms that yield outstanding acceleration factors, and SpMV implementations for GPUs have already appeared on the scene. This work proposes and evaluates new implementations of SpMV for GPUs, called ELLR-T. They are based on the ELLPACK-R format, which allows storage of the sparse matrix in a regular manner. A comparative evaluation against a variety of previously proposed storage formats has been carried out on a representative set of test matrices. The results show that: (1) SpMV is highly accelerated on GPUs; (2) the performance strongly depends on the specific pattern of the matrix; and (3) the ELLR-T implementations achieve higher overall performance. Consequently, the new ELLR-T implementations of SpMV described in this paper can help exploit GPUs, because they achieve high performance and can easily be integrated into engineering and scientific computing applications.
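
The key idea behind ELLPACK-R is to store the matrix as dense, padded value and column arrays plus an explicit per-row length array. As an illustration, a minimal host-side Python sketch of that layout follows; the to_ellpack_r helper and the SciPy test matrix are hypothetical, and ELLR-T itself is a GPU kernel that is not reproduced here.

```python
# Sketch of the ELLPACK-R layout: pad every row to the maximum row length and
# keep an explicit per-row length array (rl), which lets GPU threads skip the
# padding. Illustrative host-side code only.
import numpy as np
from scipy.sparse import random as sparse_random

def to_ellpack_r(csr):
    n_rows = csr.shape[0]
    rl = np.diff(csr.indptr)                 # nonzeros per row
    max_rl = rl.max()
    values = np.zeros((n_rows, max_rl))
    columns = np.zeros((n_rows, max_rl), dtype=np.int64)
    for i in range(n_rows):
        start, end = csr.indptr[i], csr.indptr[i + 1]
        values[i, :rl[i]] = csr.data[start:end]
        columns[i, :rl[i]] = csr.indices[start:end]
    return values, columns, rl

# Toy sparse matrix, 1000 x 1000 with 1% nonzeros
A = sparse_random(1000, 1000, density=0.01, format="csr", random_state=0)
values, columns, rl = to_ellpack_r(A)
```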


Journal of Structural Biology | 2002

High-performance electron tomography of complex biological specimens.

José-Jesús Fernández; Albert Lawrence; Javier Roca; Inmaculada García; Mark H. Ellisman; J.M. Carazo

We have evaluated reconstruction methods using smooth basis functions in the electron tomography of complex biological specimens. In particular, we have investigated series expansion methods, with special emphasis on parallel computation. Among the methods investigated, the component averaging techniques have proven to be most efficient and have generally shown fast convergence rates. The use of smooth basis functions provides the reconstruction algorithms with an implicit regularization mechanism, very appropriate for noisy conditions. Furthermore, we have applied high-performance computing (HPC) techniques to address the computational requirements demanded by the reconstruction of large volumes. One of the standard techniques in parallel computing, domain decomposition, has yielded an effective computational algorithm that hides the latencies due to interprocessor communication. We present comparisons with weighted back-projection (WBP), one of the standard reconstruction methods, in terms of computational demand and reconstruction quality under noisy conditions. The iterative techniques yield better results, according to objective measures of quality, than WBP after very few iterations. As a consequence, the combination of efficient iterative algorithms and HPC techniques has proven to be well suited to the reconstruction of large biological specimens in electron tomography, yielding solutions in reasonable computation times.
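
As a sketch of the family of simultaneous iterative methods evaluated here, the following NumPy code applies a component averaging (CAV)-style update to a toy system. The projector, the relaxation parameter and the dense test matrix are illustrative stand-ins; the paper's blob basis functions and parallel implementation are not reproduced.

```python
# Hedged sketch of a component averaging (CAV)-style simultaneous update:
# each equation's residual is weighted by sum_j s_j * a_ij^2, where s_j is
# the number of nonzero entries in column j of A.
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((120, 80))            # toy system standing in for the projector
x_true = rng.random(80)
b = A @ x_true

s = np.count_nonzero(A, axis=0)      # nonzeros per column
denom = (A**2 * s).sum(axis=1)       # per-equation weights

x = np.zeros(80)
lam = 1.0                            # relaxation parameter (illustrative)
for _ in range(100):
    residual = b - A @ x
    x += lam * A.T @ (residual / denom)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```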


Concurrency and Computation: Practice and Experience | 2011

A new approach for sparse matrix vector product on NVIDIA GPUs

Francisco Vázquez; José-Jesús Fernández; Ester M. Garzón

The sparse matrix vector product (SpMV) is a key operation in engineering and scientific computing and, hence, it has been subject to intense research for a long time. The irregular computations involved in SpMV make its optimization challenging. Therefore, enormous effort has been devoted to devising data formats that store the sparse matrix with the ultimate aim of maximizing performance. Graphics Processing Units (GPUs) have recently emerged as platforms that yield outstanding acceleration factors, and SpMV implementations for NVIDIA GPUs have already appeared on the scene. This work proposes and evaluates a new implementation of SpMV for NVIDIA GPUs based on a new format, ELLPACK-R, which allows storage of the sparse matrix in a regular manner. A comparative evaluation against a variety of previously proposed storage formats has been carried out on a representative set of test matrices. The results show that, although the performance strongly depends on the specific pattern of the matrix, the implementation based on ELLPACK-R achieves higher overall performance. Moreover, a comparison with standard state-of-the-art superscalar processors reveals that significant speedup factors are achieved with GPUs.
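
To make the access pattern concrete, the following sketch performs the SpMV over ELLPACK-R arrays in plain Python, with one outer loop iteration standing in for the per-row GPU thread described in the paper; the tiny hand-built matrix is purely illustrative.

```python
# Sketch of the ELLPACK-R SpMV: one loop iteration per row mirrors the work a
# single GPU thread would do, bounded by the per-row length rl[i] so padding
# is never touched. Pure-Python illustration of the data access pattern only.
import numpy as np

# Tiny hand-built ELLPACK-R matrix (rows padded to the longest row):
# row 0: a[0,0]=4, a[0,2]=1 ; row 1: a[1,1]=5 ; row 2: a[2,0]=2, a[2,1]=3
values  = np.array([[4.0, 1.0],
                    [5.0, 0.0],
                    [2.0, 3.0]])
columns = np.array([[0, 2],
                    [1, 0],
                    [0, 1]])
rl = np.array([2, 1, 2])             # nonzeros actually stored in each row

def spmv_ellpack_r(values, columns, rl, x):
    y = np.zeros(values.shape[0])
    for i in range(values.shape[0]):         # on a GPU: one thread per row
        for k in range(rl[i]):               # loop bounded by the row length
            y[i] += values[i, k] * x[columns[i, k]]
    return y

x = np.array([1.0, 2.0, 3.0])
print(spmv_ellpack_r(values, columns, rl, x))   # expected: [7., 10., 8.]
```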


Micron | 2012

Computational methods for electron tomography

José-Jesús Fernández

Electron tomography (ET) has emerged as a powerful technique to address fundamental questions in molecular and cellular biology. It makes possible the visualization of the molecular architecture of complex viruses, organelles and cells at a resolution of a few nanometres. In the last decade ET has enabled major breakthroughs that have provided exciting insights into a wide range of biological processes. In ET the biological sample is imaged with an electron microscope, and a series of images is taken of the sample at different views. Prior to imaging, the sample has to be specially prepared to withstand the conditions within the microscope. Subsequently, those images are processed and combined to yield the three-dimensional reconstruction, or tomogram. Afterwards, a number of computational steps are necessary to facilitate the interpretation of the tomogram, such as noise reduction, segmentation and analysis of subvolumes. As the computational demands are huge at some of the stages, high performance computing (HPC) techniques are used to make the problem tractable in a reasonable time. This article comprehensively reviews the methods, technologies and tools involved in the different computational stages behind structural studies by ET, from image acquisition to interpretation of tomograms. The HPC techniques usually employed to cope with the computational demands are also briefly described.


Ultramicroscopy | 1997

A spectral estimation approach to contrast transfer function detection in electron microscopy

José-Jesús Fernández; José R. Sanjurjo; J.M. Carazo

In this work we approach the task of estimating the contrast transfer function (CTF) of a transmission electron microscope by applying mathematical tools from the field of spectral estimation in multidimensional signal processing, such as periodogram averaging and autoregressive modelling. We show that the clarity and precision with which the CTF can be detected using these approaches are far better than with any conventional method based on the Fourier transform amplitude alone. We also present a unified signal-processing framework in which recent developments in CTF detection can be studied, helping to understand their respective benefits and problems.
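
Of the two spectral-estimation tools named above, periodogram averaging is the easier to illustrate: the sketch below averages the power spectra of overlapping micrograph patches, Welch-style. The patch size, overlap and random test image are illustrative choices, and the autoregressive modelling part is not shown.

```python
# Minimal sketch of periodogram averaging over micrograph patches, one of the
# spectral-estimation tools discussed above (AR modelling not shown).
import numpy as np

def averaged_periodogram(micrograph, patch=256, step=128):
    """Average |FFT|^2 over overlapping patches (Welch-style)."""
    acc = np.zeros((patch, patch))
    count = 0
    for i in range(0, micrograph.shape[0] - patch + 1, step):
        for j in range(0, micrograph.shape[1] - patch + 1, step):
            tile = micrograph[i:i + patch, j:j + patch]
            tile = tile - tile.mean()                  # remove the DC component
            acc += np.abs(np.fft.fftshift(np.fft.fft2(tile)))**2
            count += 1
    return acc / count

# Toy usage on a random "micrograph"
toy_micrograph = np.random.default_rng(3).standard_normal((1024, 1024))
psd = averaged_periodogram(toy_micrograph)
```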


Journal of Parallel and Distributed Computing | 2004

Three-dimensional reconstruction of cellular structures by electron microscope tomography and parallel computing

José-Jesús Fernández; José María Carazo; Inmaculada García

Electron microscope tomography has emerged as the leading technique for structure determination of cellular components with a resolution of a few nanometers, opening up exciting perspectives for visualizing the molecular architecture of the cytoplasm. This work describes and analyzes the parallelization of tomographic reconstruction algorithms for their application in electron microscope tomography of cellular structures. Efficient iterative algorithms that are characterized by a fast convergence rate have been used to tackle the image reconstruction problem. The use of smooth basis functions provides the reconstruction algorithms with an implicit regularization mechanism, very appropriate for highly noisy conditions such as those present in high-resolution electron tomographic studies. Parallel computing techniques have been applied so as to face the computational requirements demanded by the reconstruction of large volumes. An efficient domain decomposition scheme has been devised that leads to a parallel approach capable of hiding interprocessor communication latency. The combination of efficient iterative algorithms and parallel computing techniques has proved to be well suited to the reconstruction of large biological specimens in electron tomography, yielding solutions in reasonable computation times. This work concludes that parallel computing will be key to affording high-resolution structure determination of cells, so that locating molecular signatures in their native cellular context can become a reality.
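
The slab-style domain decomposition described above can be sketched as follows: slices along the tilt axis are split into slabs and distributed over worker processes. This is only a schematic illustration using Python multiprocessing with a placeholder per-slab reconstruction; the actual implementation uses message passing and hides communication latency.

```python
# Hedged sketch of slab-based domain decomposition: with a single tilt axis,
# each slab of slices can be reconstructed almost independently, so slabs are
# distributed over workers. The per-slab "reconstruction" below is a crude
# placeholder, not the iterative method used in the paper.
import numpy as np
from multiprocessing import Pool

def reconstruct_slab(slab_sinograms):
    # Placeholder for the real per-slab iterative reconstruction.
    return [sino.mean(axis=0) for sino in slab_sinograms]

def parallel_reconstruction(sinograms, n_workers=4):
    slabs = np.array_split(sinograms, n_workers)      # decompose along the tilt axis
    with Pool(n_workers) as pool:
        slab_results = pool.map(reconstruct_slab, slabs)
    return [slice_ for slab in slab_results for slice_ in slab]

if __name__ == "__main__":
    # Toy data: 64 slices, each with a 90-angle x 128-pixel sinogram
    data = np.random.default_rng(4).random((64, 90, 128))
    volume_slices = parallel_reconstruction(data)
    print(len(volume_slices), volume_slices[0].shape)
```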


Ultramicroscopy | 2003

A method for estimating the CTF in electron microscopy based on ARMA models and parameter adjustment

Javier A. Velázquez-Muriel; Carlos Oscar S. Sorzano; José-Jesús Fernández; J.M. Carazo

In this work, a powerful parametric spectral estimation technique, two-dimensional autoregressive moving average (ARMA) modelling, has been applied to contrast transfer function (CTF) detection in electron microscopy. Parametric techniques such as autoregressive (AR) and ARMA models allow a more exact determination of the CTF than traditional methods based only on the Fourier transform of the complete image, or of parts of it, followed by averaging (periodogram averaging). Previous work revealed that AR models can be used to improve CTF estimation and the detection of its zeros. ARMA models reduce the model order and the computing time and, more interestingly, achieve increased accuracy. ARMA models are generated from electron microscopy (EM) images, and a stepwise search algorithm is then used to fit all the parameters of a theoretical CTF model to the previously calculated ARMA model. Furthermore, this adjustment is truly two-dimensional, allowing astigmatic images to be properly treated. Finally, an individual CTF can be assigned to every point of the micrograph by means of an interpolation at the functional level, provided that a CTF has been estimated in each one of a set of local areas. The user need only provide a few a priori parameters describing the experimental conditions of the micrographs, turning this technique into an automatic and very powerful tool for CTF determination prior to CTF correction in 3D-EM. The programs developed for the above tasks have been integrated into the X-Windows-based Microscopy Image Processing Package (Xmipp) and are fully accessible at www.biocomp.cnb.uam.es.
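
The parameter-adjustment idea can be illustrated with a much simplified one-dimensional version: evaluate a standard CTF parameterization and grid-search the defocus whose zero pattern best matches an estimated radial power spectrum. The constants (300 kV wavelength, Cs, amplitude contrast) and the correlation-based score below are illustrative assumptions; the paper's method fits a full 2D astigmatic model to an ARMA-derived spectrum.

```python
# Hedged sketch of CTF parameter adjustment: pick the defocus whose model best
# matches an observed radial PSD. Constants and the fitting score are
# illustrative; this is a 1D simplification of the 2D method described above.
import numpy as np

def ctf_1d(s, defocus_A, cs_A=2.7e7, wavelength_A=0.0197, amp_contrast=0.07):
    """One common CTF parameterization (s in 1/Angstrom, defocus and Cs in Angstrom)."""
    chi = np.pi * wavelength_A * defocus_A * s**2 \
          - 0.5 * np.pi * cs_A * wavelength_A**3 * s**4
    return -(np.sqrt(1 - amp_contrast**2) * np.sin(chi) + amp_contrast * np.cos(chi))

def fit_defocus(radial_psd, s, candidates_A):
    """Grid search: correlate CTF^2 against the observed radial PSD."""
    scores = [np.corrcoef(ctf_1d(s, d)**2, radial_psd)[0, 1] for d in candidates_A]
    return candidates_A[int(np.argmax(scores))]

# Toy usage: simulate a PSD at 15000 A underfocus and recover it
s = np.linspace(0.002, 0.2, 400)                     # spatial frequencies (1/A)
observed = ctf_1d(s, 15000.0)**2 + 0.05 * np.random.default_rng(5).random(400)
print(fit_defocus(observed, s, np.arange(5000.0, 30000.0, 250.0)))
```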


Journal of Structural Biology | 2008

High performance computing in structural determination by electron cryomicroscopy.

José-Jesús Fernández

Computational advances have significantly contributed to the current role of electron cryomicroscopy (cryoEM) in structural biology. The need for computational power is constantly growing with the increasing complexity of algorithms and the amount of data needed to push the resolution limits. High performance computing (HPC) is becoming paramount in cryoEM to cope with those computational needs. Since the nineties, different HPC strategies have been proposed for specific problems in cryoEM and, in fact, some of them are already available in common software packages. Nevertheless, the literature is scattered across the areas of computer science and structural biology. In this communication, the HPC approaches devised for the computation-intensive tasks in cryoEM (single particles and tomography) are retrospectively reviewed and the future trends are discussed. Moreover, the HPC capabilities available in the most common cryoEM packages are surveyed, as evidence of the importance of HPC in addressing the future challenges.

Collaboration


Dive into José-Jesús Fernández's collaborations.

Top Co-Authors

Jose-Roman Bilbao-Castro
Spanish National Research Council

Roberto Marabini
Autonomous University of Madrid

J.M. Carazo
Spanish National Research Council

Carlos Oscar S. Sorzano
Spanish National Research Council