Nicolas Gac
Centre national de la recherche scientifique
Publications
Featured research published by Nicolas Gac.
Journal of Biological Chemistry | 2002
Giovanni Maga; Giuseppe Villani; Kristijan Ramadan; Igor Shevelev; Nicolas Gac; Luis Blanco; Giuseppina Blanca; Silvio Spadari; Ulrich Hübscher
Proliferating cell nuclear antigen (PCNA) has been shown to interact with a variety of DNA polymerases (pol) such as pol δ, pol ε, pol ι, pol κ, pol η, and pol β. Here we show that PCNA directly interacts with the newly discovered pol λ cloned from human cells. This interaction stabilizes the binding of pol λ to the primer template, thus increasing its affinity for the hydroxyl primer and its processivity in DNA synthesis. However, no effect of PCNA was detected on the rate of nucleotide incorporation or discrimination efficiency by pol λ. PCNA was found to stimulate efficient synthesis by pol λ across an abasic (AP) site. When compared with pol δ, human pol λ showed the ability to incorporate a nucleotide in front of the lesion. Addition of PCNA led to efficient elongation past the AP site by pol λ but not by pol δ. However, when tested on a template containing a bulky DNA lesion, such as the major cisplatin Pt-d(GpG) adduct, PCNA could not allow translesion synthesis by pol λ. Our results suggest that the complex between PCNA and pol λ may play an important role in the bypass of abasic sites in human cells.
EURASIP Journal on Embedded Systems | 2008
Nicolas Gac; Stéphane Mancini; Michel Desvignes; Dominique Houzet
Back-projection (BP) is a costly computational step in tomography image reconstruction such as positron emission tomography (PET). To reduce the computation time, this paper presents a pipelined, prefetched, and parallelized architecture for PET BP (3PA-PET). The key feature of this architecture is its original memory access strategy, which masks the high latency of the external memory. Indeed, the pattern of memory references to the acquired data stalls the processing unit. The memory access bottleneck is overcome by an efficient use of the intrinsic temporal and spatial locality of the BP algorithm. A loop reordering allows an efficient use of general-purpose processor caches for software implementations, as well as of the 3D predictive and adaptive cache (3D-AP cache) for hardware implementations. Parallel hardware pipelines are also efficient thanks to a hierarchical 3D-AP cache: each pipeline performs a memory reference in about one clock cycle, reaching a computational throughput close to 100%. The 3PA-PET architecture is prototyped on a system on programmable chip (SoPC) to validate the system and measure its performance. Execution times are compared with those of a desktop PC, a workstation, and a graphics processing unit (GPU).
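The loop reordering mentioned in the abstract can be illustrated with a toy pixel-driven back-projection. The sketch below is not the 3PA-PET code: sizes, the synthetic sinogram, and the nearest-neighbour binning are all illustrative assumptions. It shows that iterating with the angle in the outer loop keeps a single sinogram row hot in cache, whereas a pixel-outer order strides across all rows, while both orders compute the same image.

```python
import math

N_ANG, N_BINS, N_PIX = 8, 16, 8    # toy problem sizes (illustrative only)
# synthetic sinogram: sino[angle][bin]
sino = [[(a * N_BINS + b) % 7 for b in range(N_BINS)] for a in range(N_ANG)]
angles = [math.pi * a / N_ANG for a in range(N_ANG)]

def bin_index(x, y, theta):
    """Nearest detector bin hit by pixel (x, y) at projection angle theta."""
    t = (x - N_PIX / 2) * math.cos(theta) + (y - N_PIX / 2) * math.sin(theta)
    return min(N_BINS - 1, max(0, int(round(t + N_BINS / 2))))

def bp_angle_outer():
    # angle-outer order: one sinogram row is reused for every pixel (good locality)
    img = [[0.0] * N_PIX for _ in range(N_PIX)]
    for a, theta in enumerate(angles):
        row = sino[a]                      # this row stays in cache
        for y in range(N_PIX):
            for x in range(N_PIX):
                img[y][x] += row[bin_index(x, y, theta)]
    return img

def bp_pixel_outer():
    # pixel-outer order: every pixel touches all sinogram rows (poor locality)
    img = [[0.0] * N_PIX for _ in range(N_PIX)]
    for y in range(N_PIX):
        for x in range(N_PIX):
            for a, theta in enumerate(angles):
                img[y][x] += sino[a][bin_index(x, y, theta)]
    return img

assert bp_angle_outer() == bp_pixel_outer()  # same image, different access pattern
```

Per pixel, both orders accumulate contributions in the same angle order, so the results match exactly; only the memory traffic differs.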
acm symposium on applied computing | 2006
Nicolas Gac; Stéphane Mancini; Michel Desvignes
Reducing image reconstruction time is needed to spread the use of PET in research and routine clinical practice. To this end, this article presents a hardware/software architecture for the acceleration of 3D backprojection built upon an efficient 2D backprojection. This architecture has been designed to provide a high level of parallelism thanks to an efficient management of the memory accesses, which would otherwise be strongly slowed by the external memory. The reconstruction system is embedded in a SoPC (System on Programmable Chip) platform, the new generation of reconfigurable circuits. The originality of this architecture comes from the design of a 2D Adaptive and Predictive Cache (2D-AP Cache), which has proved to be an efficient way to overcome the memory access bottleneck. Thanks to a hierarchical use of this cache, several backprojection operators can run in parallel, notably accelerating the reconstruction process. This 2D reconstruction system will next be used to speed up 3D image reconstruction.
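The idea behind an adaptive and predictive cache, namely watching the trajectory of memory references and prefetching the next block before the operator asks for it, can be mimicked with a toy simulation. Everything below (block size, single-block demand cache, the synthetic access trajectory) is an illustrative assumption, not the actual 2D-AP Cache hardware design:

```python
B = 4                                   # a cache block (tile) covers B x B pixels

# Toy access trajectory: one step in x per access, one step in y every 5
# accesses, the kind of regular pattern a backprojection operator produces.
trail, x, y = [], 0, 0
for i in range(64):
    trail.append((x, y))
    x += 1
    if i % 5 == 4:
        y += 1

def block_of(x, y):
    return (x // B, y // B)

def misses(prefetch):
    """Count external-memory fetches, with or without trajectory prediction."""
    cached, last, n_miss = set(), None, 0
    for (px, py) in trail:
        blk = block_of(px, py)
        if blk not in cached:
            n_miss += 1                 # block must be fetched from external memory
        if blk != last:
            cached = {blk}              # tiny cache: the current demand block...
            if prefetch and last is not None:
                dx, dy = blk[0] - last[0], blk[1] - last[1]
                # adaptive predictor: assume the trajectory keeps its direction
                cached.add((blk[0] + dx, blk[1] + dy))
            last = blk
    return n_miss

assert misses(prefetch=True) < misses(prefetch=False)
```

Because the back-projection access pattern is highly regular, most block transitions repeat the previous direction, so the predicted block is already cached when the operator reaches it; only direction changes still miss.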
Journal of Biological Chemistry | 2011
Giuseppe Villani; Ulrich Hübscher; Nadege Gironis; Sinikka Parkkinen; Helmut Pospiech; Igor Shevelev; Giulia di Cicco; Enni Markkanen; Juhani E. Syväoja; Nicolas Gac
DNA polymerase (pol) ϵ is thought to be the leading strand replicase in eukaryotes, whereas pols λ and β are believed to be mainly involved in re-synthesis steps of DNA repair. DNA elongation by the human pol ϵ is halted by an abasic site (apurinic/apyrimidinic (AP) site). In this study, we present in vitro evidence that human pols λ, β, and η can perform translesion synthesis (TLS) of an AP site in the presence of pol ϵ, likely by extending the 3′-OH ends created at the lesion by the arrested pol ϵ. However, in the case of pols λ and β, this TLS requires the presence of a DNA gap downstream from the product synthesized by pol ϵ, and the optimal gap for efficient TLS differs between the two polymerases. The presence of gaps did not affect the TLS capacity of human pol η. Characterization of the reaction products showed that pol β inserted dAMP opposite the AP site, whereas gap-filling synthesis by pol λ resulted in single or double deletions opposite the lesion. The synthesis up to the AP site by pol ϵ and the subsequent TLS by pols λ and β are not influenced by the human processivity factor proliferating cell nuclear antigen or the human single-stranded DNA-binding protein replication protein A. The bypass capacity of pol λ at the AP site is greatly reduced when a truncated form of the enzyme, which has lost the BRCA1 C-terminal and proline-rich domains, is used. Collectively, our in vitro results support the existence of a mechanism of gap-directed TLS at an AP site involving a switch between the replicative pol ϵ and the repair pols λ and β.
Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2014) | 2015
Li Wang; Nicolas Gac; Ali Mohammad-Djafari
In order to improve the quality of 3D X-ray tomography reconstruction for Non Destructive Testing (NDT), we investigate in this paper hierarchical Bayesian methods. In NDT, useful prior information on the volume, such as the limited number of materials or the presence of homogeneous areas, can be included in the iterative reconstruction algorithms. In hierarchical Bayesian methods, not only is the volume estimated thanks to the prior model of the volume, but also the hyperparameters of this prior. This additional complexity results in an increasing computational cost when the methods are applied to large volumes (from 512³ to 8192³ voxels). To reduce it, the hierarchical Bayesian methods investigated in this paper rely on algorithm acceleration by Variational Bayesian Approximation (VBA) [1] and on hardware acceleration, with the projection and back-projection operators parallelized on many-core processors such as GPUs [2]. In this paper, we consider a Student-t prior on the gradient of the image imple...
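The effect of a Student-t prior on the image gradient can be sketched on a 1D toy signal. The loop below is an IRLS-style simplification of the full Variational Bayesian iterations (the actual algorithm also updates the hyperparameters and runs the operators on GPU); the signal, noise level, and constants ν, λ, δ are all illustrative choices:

```python
import random

random.seed(0)
# piecewise-constant ground truth with one edge, plus Gaussian noise
clean = [0.0] * 32 + [4.0] * 32
y = [c + random.gauss(0.0, 0.4) for c in clean]

nu, lam, delta = 3.0, 4.0, 0.3            # dof, regularization, gradient scale
x = y[:]                                  # initialize the estimate with the data
for _ in range(50):
    # E-step-like weights: the heavy-tailed Student-t downweights large
    # gradients, so edges are smoothed much less than noise
    w = [0.0] + [(nu + 1.0) / (nu + ((x[i] - x[i - 1]) / delta) ** 2)
                 for i in range(1, len(x))]
    # one Gauss-Seidel sweep on the resulting weighted quadratic objective
    for i in range(len(x)):
        num, den = y[i], 1.0
        if i > 0:
            num += lam * w[i] * x[i - 1]
            den += lam * w[i]
        if i + 1 < len(x):
            num += lam * w[i + 1] * x[i + 1]
            den += lam * w[i + 1]
        x[i] = num / den

mse = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)
assert mse(x, clean) < mse(y, clean)      # denoised, with the edge largely preserved
```

The weight update is the standard Gaussian-scale-mixture view of the Student-t: within flat regions the weights stay near 1 and the quadratic term smooths, while across the jump the weight collapses and the edge survives.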
Fundamenta Informaticae | 2017
Li Wang; Ali Mohammad-Djafari; Nicolas Gac
X-ray Computed Tomography (CT) has become a hot topic in both medical and industrial applications in recent decades. Reconstruction from a limited number of projections is a significant research domain. In this paper, we propose to solve the X-ray CT reconstruction problem by using the Bayesian approach with a hierarchical structured prior model based on the multilevel Haar transformation. In the proposed model, the multilevel Haar transformation is used as the sparse representation of a piecewise continuous image, and a generalized Student-t distribution is used to enforce its sparsity. The simulation results compare the performance of the proposed method with some state-of-the-art methods.
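Why the multilevel Haar transformation is a sparse representation of a piecewise-continuous image is easy to see in 1D: detail coefficients vanish everywhere except across the jumps, which is exactly what a sparsity-enforcing prior can exploit. A minimal orthonormal Haar transform (illustrative 1D sketch, not the paper's 2D implementation):

```python
import math

def haar_forward(signal):
    """Multilevel orthonormal Haar transform; len(signal) must be a power of two."""
    s = list(signal)
    coeffs = []
    while len(s) > 1:
        approx = [(s[2*i] + s[2*i+1]) / math.sqrt(2) for i in range(len(s) // 2)]
        detail = [(s[2*i] - s[2*i+1]) / math.sqrt(2) for i in range(len(s) // 2)]
        coeffs = detail + coeffs          # prepend so coarse levels come first
        s = approx
    return s + coeffs                     # [approximation, details coarse -> fine]

def haar_inverse(coeffs):
    s = [coeffs[0]]
    pos = 1
    while pos < len(coeffs):
        detail = coeffs[pos:pos + len(s)]
        pos += len(s)
        s = [v for a, d in zip(s, detail)
               for v in ((a + d) / math.sqrt(2), (a - d) / math.sqrt(2))]
    return s

# piecewise-constant signal with one jump, off a dyadic boundary
signal = [1.0] * 5 + [5.0] * 11
c = haar_forward(signal)
nonzero = sum(1 for v in c if abs(v) > 1e-9)
assert nonzero == 5       # only the coefficients spanning the jump survive
recon = haar_inverse(c)
assert all(abs(r - s) < 1e-9 for r, s in zip(recon, signal))
```

Of 16 coefficients, only the global approximation and one detail per level touching the jump are nonzero; a prior that pushes the rest to zero therefore regularizes the reconstruction without blurring the discontinuity.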
International Conference on Acoustics, Speech, and Signal Processing | 2016
Li Wang; Ali Mohammad-Djafari; Nicolas Gac; Mircea Dumitru
In order to improve the quality of X-ray Computed Tomography (CT) reconstruction for Non Destructive Testing (NDT), we propose hierarchical prior modeling with a Bayesian approach. In this paper we present a new hierarchical structure for the inverse problem of CT, using a multivariate Student-t prior which enforces sparsity and preserves edges. This model can be adapted to piecewise continuous image reconstruction problems. We demonstrate the feasibility of this method by comparing it with some state-of-the-art methods. In this paper, we show simulation results in 2D, where the image is the middle slice of the Shepp-Logan object, but the algorithms are adapted to large data sizes, which is one of the principal difficulties in the 3D CT reconstruction problem.
International Conference on Acoustics, Speech, and Signal Processing | 2017
Li Wang; Ali Mohammad-Djafari; Nicolas Gac
3D X-ray Computed Tomography (CT) is used in many domains. In medical imaging and industrial Non Destructive Testing (NDT) applications, this technique is of great interest. In these applications, we very often need not only to reconstruct the image but also to detect the contours between the homogeneous regions of the piecewise continuous image. Generally, contours are obtained by post-processing the reconstructed image. In this paper, we propose a method to estimate the image and its contours simultaneously. To do so, we use the Bayesian approach with a prior model in which the relationship between the image and its contours is captured by a hierarchical Markovian model, together with a sparsity-enforcing prior model for the contours. The proposed method can be used for reconstructions when the image is piecewise continuous. The simulation results are compared with some state-of-the-art methods and show the efficiency of simultaneous reconstruction and edge detection with the proposed method.
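The idea of estimating the image and its contours jointly, rather than detecting edges afterwards, can be sketched in 1D with an explicit contour variable (a "line process"). This is a toy alternation, not the paper's hierarchical Markovian model: the threshold stands in for the sparsity-enforcing contour prior, and the signal and constants are illustrative:

```python
import random

random.seed(1)
# piecewise-constant ground truth with two contours, plus noise
clean = [0.0] * 20 + [3.0] * 20 + [1.0] * 20
y = [c + random.gauss(0.0, 0.3) for c in clean]

lam, tau = 3.0, 1.0                 # smoothing strength, contour threshold
x = y[:]
q = [0] * len(x)                    # q[i] = 1: contour between pixels i-1 and i
for _ in range(30):
    # contour update: declare an edge wherever the current gradient is large
    q = [0] + [1 if abs(x[i] - x[i - 1]) > tau else 0 for i in range(1, len(x))]
    # image update: smooth only across non-contour neighbours
    for i in range(len(x)):
        num, den = y[i], 1.0
        if i > 0 and q[i] == 0:
            num += lam * x[i - 1]; den += lam
        if i + 1 < len(x) and q[i + 1] == 0:
            num += lam * x[i + 1]; den += lam
        x[i] = num / den

assert q[20] == 1 and q[40] == 1    # both true contours are localized
err  = sum((a - b) ** 2 for a, b in zip(x, clean))
err0 = sum((a - b) ** 2 for a, b in zip(y, clean))
assert err < err0                   # and the image is denoised at the same time
```

Each pass refines both unknowns: the current image estimate proposes contours, and the contours in turn gate the smoothing, so edges are never blurred and then re-detected.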
Bayesian Inference and Maximum Entropy Methods in Science and Engineering: Proceedings of the 36th International Workshop (MaxEnt 2016) | 2017
Li Wang; Ali Mohammad-Djafari; Nicolas Gac
In recent decades, X-ray Computed Tomography (CT) image reconstruction has been extensively developed in both the medical and industrial domains. In this paper, we propose using the Bayesian inference approach with a new hierarchical prior model. In the proposed model, a generalised Student-t distribution is used to enforce the sparsity of the Haar transformation of images. Comparisons with some state-of-the-art methods are presented. It is shown that with the proposed model the sparsity of the representation is enforced, so that the edges of images are preserved. Simulation results are also provided to demonstrate the effectiveness of the new hierarchical model for reconstruction with fewer projections.
37th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering | 2017
Mircea Dumitru; Li Wang; Ali Mohammad-Djafari; Nicolas Gac
The Bayesian approach is considered for inverse problems with a typical forward model accounting for errors and a priori sparse solutions. Solutions with sparse structure are enforced using heavy-tailed prior distributions. The main interest of this paper is the particular case of such priors expressed via normal variance mixtures with conjugate laws for the mixing distribution, namely the Student-t distribution. Iterative algorithms are derived via posterior mean estimation. The mixing distribution parameters appear in the updating equations and are also used for the initialization. For the choice of the mixing distribution parameters, three model selection strategies are considered: (i) parameters approximating the mixing distribution with the Jeffreys law, i.e., keeping the mixing distribution well defined but as close as possible to the Jeffreys prior; (ii) based on the prior distribution form, fixing the parameters corresponding to the form inducing the sparsest solution; and (iii) based on the sparsity mechanism, fixing the hyperparameters using the statistical measures of the mixing and prior distributions. For each model selection strategy, the theoretical advantages and drawbacks are discussed, and the corresponding simulations are reported for a 1D direct sparsity application in a biomedical context. We show that the third strategy provides the best parameter selection for this context.
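The normal variance mixture construction behind these priors can be checked empirically: drawing the variance from an inverse-gamma law and then sampling a zero-mean Gaussian with that variance yields Student-t samples. A small Monte Carlo sketch (the degrees of freedom ν and the sample count are arbitrary choices):

```python
import random

random.seed(42)
nu = 8.0              # degrees of freedom (> 2 so the variance is finite)
n = 200_000

samples = []
for _ in range(n):
    # v ~ InvGamma(nu/2, nu/2): reciprocal of Gamma(shape=nu/2, scale=2/nu)
    v = 1.0 / random.gammavariate(nu / 2.0, 2.0 / nu)
    samples.append(random.gauss(0.0, 1.0) * v ** 0.5)   # x | v ~ N(0, v)

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
# a Student-t with nu dof has mean 0 and variance nu / (nu - 2) = 4/3
assert abs(mean) < 0.05
assert abs(var - nu / (nu - 2.0)) < 0.1
```

The conjugacy exploited by the iterative algorithms comes from exactly this structure: conditionally on the mixing variance v, the prior is Gaussian, so the posterior updates stay in closed form.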