
Publication


Featured research published by Amin Ardeshirdavani.


Science Translational Medicine | 2014

Effective diagnosis of genetic disease by computational phenotype analysis of the disease-associated genome

Tomasz Zemojtel; Sebastian Köhler; Luisa Mackenroth; Marten Jäger; Jochen Hecht; Peter Krawitz; Luitgard Graul-Neumann; Sandra C. Doelken; Nadja Ehmke; Malte Spielmann; Nancy Christine Øien; Michal R. Schweiger; Ulrike Krüger; Götz Frommer; Björn Fischer; Uwe Kornak; Ricarda Flöttmann; Amin Ardeshirdavani; Yves Moreau; Suzanna E. Lewis; Melissa Haendel; Damian Smedley; Denise Horn; Stefan Mundlos; Peter N. Robinson

Patients with genetic diseases of unknown cause can be rapidly diagnosed by bioinformatic analysis of disease-associated DNA sequences and phenotype.

Efficient Diagnosis of Genetic Disease: We know which genes are mutated in almost 3000 inherited human diseases and have good descriptions of how these mutations affect the human phenotype. Now, Zemojtel et al. have coupled this knowledge with rapid sequencing of these genes in a group of 40 patients with undiagnosed genetic diseases. Bioinformatic matching of the patients’ clinical characteristics and their disease gene sequences to databases of current genetic and phenotype knowledge enabled the authors to successfully diagnose almost 30% of the patients. The process required only about 2 hours of a geneticist’s time. Zemojtel et al. have made their tools available to the community, enabling a fast, straightforward process by which clinicians and patients can easily identify the genetic basis of inherited disease in certain people.

Less than half of patients with suspected genetic disease receive a molecular diagnosis. We have therefore integrated next-generation sequencing (NGS), bioinformatics, and clinical data into an effective diagnostic workflow. We used variants in the 2741 established Mendelian disease genes [the disease-associated genome (DAG)] to develop a targeted enrichment DAG panel (7.1 Mb), which achieves a coverage of 20-fold or better for 98% of bases. Furthermore, we established a computational method [Phenotypic Interpretation of eXomes (PhenIX)] that evaluated and ranked variants based on pathogenicity and on the semantic similarity of the patient’s phenotype, described by Human Phenotype Ontology (HPO) terms, to those of 3991 Mendelian diseases. In computer simulations, ranking genes based on the variant score alone put the true gene in first place less than 5% of the time; PhenIX placed the correct gene in first place more than 86% of the time. In a retrospective test of PhenIX on 52 patients with previously identified mutations and known diagnoses, the correct gene achieved a mean rank of 2.1. In a prospective study on 40 individuals without a diagnosis, PhenIX analysis enabled a diagnosis in 11 cases (28%, at a mean rank of 2.4). Thus, NGS of the DAG followed by phenotype-driven bioinformatic analysis allows quick and effective differential diagnostics in medical genetics.
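
To illustrate the core idea of combining a variant score with HPO-based phenotype matching, here is a minimal sketch. It is not the published PhenIX algorithm: PhenIX uses calibrated pathogenicity scores and semantic similarity over the ontology, whereas this sketch substitutes a plain Jaccard overlap of term sets, an arbitrary 50/50 weighting, and invented gene and HPO identifiers.

```python
# Minimal sketch of phenotype-aware gene ranking in the spirit of PhenIX.
# NOTE: the real tool uses HPO semantic similarity and calibrated variant
# pathogenicity; Jaccard overlap and made-up scores stand in for both here.

def jaccard(a: set, b: set) -> float:
    """Crude stand-in for HPO semantic similarity."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_genes(patient_terms, candidates):
    """candidates: gene -> (variant_score in [0,1], disease HPO term set)."""
    scored = {
        gene: 0.5 * variant_score + 0.5 * jaccard(patient_terms, disease_terms)
        for gene, (variant_score, disease_terms) in candidates.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: two candidate genes, one matching the phenotype well.
patient = {"HP:0001250", "HP:0001263", "HP:0000252"}
candidates = {
    "GENE_A": (0.90, {"HP:0001250", "HP:0000252", "HP:0001263"}),
    "GENE_B": (0.95, {"HP:0000365"}),
}
print(rank_genes(patient, candidates))  # GENE_A outranks GENE_B despite a lower variant score
```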


Nature Methods | 2013

eXtasy: variant prioritization by genomic data fusion

Alejandro Sifrim; Dusan Popovic; Léon-Charles Tranchevent; Amin Ardeshirdavani; Ryo Sakai; Peter Konings; Joris Vermeesch; Jan Aerts; Bart De Moor; Yves Moreau

Massively parallel sequencing greatly facilitates the discovery of novel disease genes causing Mendelian and oligogenic disorders. However, many mutations are present in any individual genome, and identifying which ones are disease causing remains a largely open problem. We introduce eXtasy, an approach to prioritize nonsynonymous single-nucleotide variants (nSNVs) that substantially improves prediction of disease-causing variants in exome sequencing data by integrating variant impact prediction, haploinsufficiency prediction and phenotype-specific gene prioritization.
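
The abstract describes fusing three kinds of evidence per variant. As a rough illustration of supervised data fusion over such features, the sketch below trains a random forest on synthetic data; the choice of classifier, the features, and the labels are assumptions for illustration, not eXtasy's actual model or training set.

```python
# Minimal sketch of supervised data fusion for nSNV prioritization.
# Assumption: a random forest stands in for the fusion model; the feature
# columns and labels below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.random(n),   # variant impact prediction (deleteriousness score)
    rng.random(n),   # haploinsufficiency prediction for the host gene
    rng.random(n),   # phenotype-specific gene prioritization score
])
y = (X.mean(axis=1) + 0.1 * rng.standard_normal(n) > 0.55).astype(int)  # toy labels

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# Rank variants by predicted probability of being disease causing.
scores = model.predict_proba(X[:10])[:, 1]
print(np.argsort(-scores))
```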


Nucleic Acids Research | 2016

Candidate gene prioritization with Endeavour

Léon-Charles Tranchevent; Amin Ardeshirdavani; Sarah Elshal; Daniel Alcaide; Jan Aerts; Didier Auboeuf; Yves Moreau

Genomic studies and high-throughput experiments often produce large lists of candidate genes among which only a small fraction are truly relevant to the disease, phenotype or biological process of interest. Gene prioritization tackles this problem by ranking the candidate genes: candidates are profiled across multiple genomic data sources, and this heterogeneous information is integrated into a global ranking. We describe an extended version of our gene prioritization method, Endeavour, now available for six species and integrating 75 data sources. The performance (Area Under the Curve) of Endeavour on cross-validation benchmarks using ‘gold standard’ gene sets varies from 88% (for human phenotypes) to 95% (for worm gene function). In addition, we have also validated our approach using a time-stamped benchmark derived from the Human Phenotype Ontology, which provides a setting close to prospective validation. With this benchmark, using 3854 novel gene–phenotype associations, we observe a performance of 82%. Altogether, our results indicate that this extended version of Endeavour efficiently prioritizes candidate genes. The Endeavour web server is freely available at https://endeavour.esat.kuleuven.be/.
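
The sketch below illustrates the two-step pattern the abstract describes: rank candidates per data source, then fuse the per-source rankings into a global ranking. Endeavour's published fusion uses order statistics; the mean rank ratio used here is a deliberate simplification, and the genes and scores are invented.

```python
# Minimal sketch of per-source ranking followed by rank fusion, in the spirit
# of Endeavour. Assumption: mean rank ratio replaces the published
# order-statistics fusion; gene names and similarity scores are hypothetical.
import numpy as np

genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]
# Similarity of each candidate to the training genes, one row per data source
# (e.g. expression, interactions, literature); higher is better.
source_scores = np.array([
    [0.9, 0.2, 0.6, 0.4],
    [0.7, 0.1, 0.8, 0.3],
    [0.8, 0.4, 0.5, 0.2],
])

n = len(genes)
ranks = source_scores.argsort(axis=1).argsort(axis=1)  # 0 = lowest score per source
rank_ratios = (n - ranks) / n                          # 1/N for the top gene, 1 for the worst
fused = rank_ratios.mean(axis=0)                       # simplified fusion step

for gene, score in sorted(zip(genes, fused), key=lambda kv: kv[1]):
    print(gene, round(score, 2))                       # lower fused ratio = higher priority
```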


Nucleic Acids Research | 2016

Beegle: from literature mining to disease-gene discovery

Sarah Elshal; Léon-Charles Tranchevent; Alejandro Sifrim; Amin Ardeshirdavani; Jesse Davis; Yves Moreau

Disease-gene identification is a challenging process that has multiple applications within functional genomics and personalized medicine. Typically, this process involves both finding genes known to be associated with the disease (through literature search) and carrying out preliminary experiments or screens (e.g. linkage or association studies, copy number analyses, expression profiling) to determine a set of promising candidates for experimental validation. This requires extensive time and monetary resources. We describe Beegle, an online search and discovery engine that attempts to simplify this process by automating the typical approaches. It starts by mining the literature to quickly extract a set of genes known to be linked with a given query, then it integrates the learning methodology of Endeavour (a gene prioritization tool) to train a genomic model and rank a set of candidate genes to generate novel hypotheses. In a realistic evaluation setup, Beegle has an average recall of 84% in the top 100 returned genes as a search engine, which improves the discovery engine by 12.6% in the top 5% prioritized genes. Beegle is publicly available at http://beegle.esat.kuleuven.be/.
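
The 84% figure quoted above is a recall-at-k measure over the top 100 returned genes. The snippet below shows how such a metric is computed; the gene identifiers and the known-gene set are invented for illustration.

```python
# Minimal sketch of the recall-at-k evaluation quoted above (recall in the
# top 100 returned genes). Gene identifiers are hypothetical.
def recall_at_k(returned_genes, relevant_genes, k=100):
    """Fraction of known disease genes recovered in the top k results."""
    top_k = set(returned_genes[:k])
    return len(top_k & set(relevant_genes)) / len(relevant_genes)

ranked = ["GENE_%d" % i for i in range(1, 201)]   # search-engine output, best first
known = {"GENE_3", "GENE_42", "GENE_150"}         # genes truly linked to the query
print(recall_at_k(ranked, known, k=100))          # 2 of 3 in the top 100 -> 0.67
```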


Genome Medicine | 2014

NGS-Logistics: federated analysis of NGS sequence variants across multiple locations

Amin Ardeshirdavani; Erika Souche; Luc Dehaspe; Jeroen Van Houdt; Joris Vermeesch; Yves Moreau

As many personal genomes are being sequenced, collaborative analysis of those genomes has become essential. However, analysis of personal genomic data raises important privacy and confidentiality issues. We propose a methodology for federated analysis of sequence variants from personal genomes. Specific base-pair positions and/or regions are queried for samples to which the user has access, but also for the whole population. The summary statistics returned do not breach data confidentiality but allow further exploration of the data; researchers can negotiate access to relevant samples through pseudonymous identifiers. This approach minimizes the impact on data confidentiality while enabling powerful data analysis by gaining access to important rare samples. Our methodology is implemented in an open source tool called NGS-Logistics, freely available at https://ngsl.esat.kuleuven.be.
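
As a rough illustration of the query pattern described above, the sketch below aggregates carrier counts across centers and only exposes pseudonymous sample identifiers for the centers the user is authorized for. The center names, data model, and access rules are assumptions for illustration; they are not the NGS-Logistics implementation.

```python
# Minimal sketch of a federated variant query in the spirit of NGS-Logistics.
# Assumptions: each center exposes per-position carrier sets keyed by
# pseudonymous IDs; only aggregate counts leave centers the user cannot
# access. All data below is invented.

CENTERS = {
    "center_1": {("chr7", 117559590): {"PSN-001", "PSN-002"}},
    "center_2": {("chr7", 117559590): {"PSN-101"}},
}
USER_ACCESS = {"center_1"}  # centers whose samples this user may see

def federated_query(chrom, pos, user_access):
    total_carriers = 0
    accessible_ids = []
    for center, positions in CENTERS.items():
        carriers = positions.get((chrom, pos), set())
        total_carriers += len(carriers)               # aggregate statistic only
        if center in user_access:
            accessible_ids.extend(sorted(carriers))   # pseudonymous IDs for follow-up
    return {"carrier_count": total_carriers, "accessible_samples": accessible_ids}

print(federated_query("chr7", 117559590, USER_ACCESS))
# {'carrier_count': 3, 'accessible_samples': ['PSN-001', 'PSN-002']}
```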


Nucleic Acids Research | 2015

Galahad: a web server for drug effect analysis from gene expression

Griet Laenen; Amin Ardeshirdavani; Yves Moreau; Lieven Thorrez

Galahad (https://galahad.esat.kuleuven.be) is a web-based application for analysis of drug effects. It provides an intuitive interface to be used by anybody interested in leveraging microarray data to gain insights into the pharmacological effects of a drug, mainly identification of candidate targets, elucidation of mode of action and understanding of off-target effects. The core of Galahad is a network-based analysis method of gene expression. As an input, Galahad takes raw Affymetrix human microarray data from treatment versus control experiments and provides quality control and data exploration tools, as well as computation of differential expression. Alternatively, differential expression values can be uploaded directly. Using these differential expression values, drug target prioritization and both pathway and disease enrichment can be calculated and visualized. Drug target prioritization is based on the integration of the gene expression data with a functional protein association network. The web site is free and open to all and there is no login requirement.
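
The core of Galahad is network-based integration of differential expression with a functional protein association network. The sketch below shows the general flavor of such an approach, up-weighting genes whose network neighbours respond strongly; the single neighbour-averaging step, the toy network, and the expression values are simplifications and assumptions, not the published Galahad method.

```python
# Minimal sketch of network-based target prioritization: smooth differential
# expression over a protein association network. Assumption: one round of
# neighbour averaging stands in for the published method; genes, edges and
# expression values are hypothetical.
import numpy as np

genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]
adjacency = np.array([          # functional protein association network
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
diff_expr = np.array([0.1, 2.0, 1.5, 0.2])    # |log fold change| per gene

row_norm = adjacency / adjacency.sum(axis=1, keepdims=True)
alpha = 0.5                                    # weight of own signal vs. network signal
score = alpha * diff_expr + (1 - alpha) * row_norm @ diff_expr

for g, s in sorted(zip(genes, score), key=lambda kv: -kv[1]):
    print(g, round(s, 2))                      # candidate targets, best first
```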


Bioinformatics | 2018

pBRIT: gene prioritization by correlating functional and phenotypic annotations through integrative data fusion

Ajay Anand Kumar; Lut Van Laer; Maaike Alaerts; Amin Ardeshirdavani; Yves Moreau; Kris Laukens; Bart Loeys; Geert Vandeweyer

Motivation: Computational gene prioritization can aid in disease gene identification. Here, we propose pBRIT (prioritization using Bayesian Ridge regression and Information Theoretic model), a novel adaptive and scalable prioritization tool, integrating Pubmed abstracts, Gene Ontology, Sequence similarities, Mammalian and Human Phenotype Ontology, Pathway, Interactions, Disease Ontology, Gene Association database and Human Genome Epidemiology database into the prediction model. We explore and address effects of sparsity and inter-feature dependencies within annotation sources, and the impact of bias towards specific annotations.

Results: pBRIT models feature dependencies and sparsity by an information-theoretic (data-driven) approach and applies intermediate-integration-based data fusion. Following the hypothesis that genes underlying similar diseases will share functional and phenotype characteristics, it incorporates Bayesian Ridge regression to learn a linear mapping between functional and phenotype annotations. Genes are prioritized on phenotypic concordance to the training genes. We evaluated pBRIT against nine existing methods, and on over 2000 HPO-gene associations retrieved after construction of pBRIT data sources. We achieve maximum AUC scores ranging from 0.92 to 0.96 against benchmark datasets and of 0.80 against the time-stamped HPO entries, indicating good performance with high sensitivity and specificity. Our model shows stable performance with regard to changes in the underlying annotation data, is fast and scalable for implementation in routine pipelines.

Availability and implementation: http://biomina.be/apps/pbrit/; https://bitbucket.org/medgenua/pbrit.
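
Since the abstract names Bayesian Ridge regression as the mapping from functional to phenotype annotations, the sketch below shows that regression-then-rank pattern with scikit-learn's BayesianRidge. It collapses pBRIT's full phenotype-annotation space to a single scalar concordance target and omits the information-theoretic feature weighting; all data is synthetic.

```python
# Minimal sketch: learn a mapping from functional-annotation features to a
# phenotype-concordance score with Bayesian Ridge regression, then rank
# candidates by predicted concordance. Assumption: a single scalar target and
# synthetic data replace pBRIT's multi-source annotation model.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(1)
n_train, n_features = 100, 20
X_train = rng.random((n_train, n_features))          # functional annotations of training genes
true_w = rng.standard_normal(n_features)
y_train = X_train @ true_w + 0.1 * rng.standard_normal(n_train)  # phenotype concordance

model = BayesianRidge().fit(X_train, y_train)

X_candidates = rng.random((5, n_features))           # candidate genes to prioritize
pred, std = model.predict(X_candidates, return_std=True)
ranking = np.argsort(-pred)                          # highest predicted concordance first
print(ranking, np.round(pred[ranking], 2))
```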


BMC Bioinformatics | 2015

NGS-Logistics: data infrastructure for efficient analysis of NGS sequence variants across multiple centers

Amin Ardeshirdavani; Erika Souche; Luc Dehaspe; Jeroen Van Houdt; Joris Vermeesch; Yves Moreau

Background: Next-Generation Sequencing (NGS) is a key tool in genomics, in particular in research and diagnostics of human Mendelian, oligogenic, and complex disorders [1]. Multiple projects now aim at mapping human genetic variation on a large scale, such as the 1000 Genomes Project and the UK’s 100,000 Genomes Project. Meanwhile, with the dramatic decrease in price and turnaround time, large amounts of human sequencing data have been generated over the past decade [2]. As of January 2014, about 2,555 sequencers were spread over 920 centers across the world [3]. As a result, about 100,000 human exomes have been sequenced so far [4]. Crucially, the speed at which NGS data is produced greatly surpasses Moore’s law [5] and challenges our ability to conveniently store, exchange, and analyze this data.

Data pre-processing is needed to extract reliable information from sequencing data and can be divided into two major steps: primary analysis (image analysis and base calling) and secondary analysis. When looking for variation in the human genome, secondary analysis consists of aligning/mapping the reads against the reference genome and scanning the alignment for variation. Both raw data and mapped reads are large files occupying significant disk storage space. The collection of files resulting from the analysis of a single whole-genome study can take up to 50 GB of disk space. This raises significant issues in terms of computing, data storage, and transfer, with off-site data transfer currently being a key bottleneck.

Moreover, the analysis of NGS data raises the major challenge of how to reconcile federated analysis of personal genomic data with the confidentiality needed to protect privacy. In many situations, the analysis of data from a single study alone will be much less powerful than if it can be correlated with other studies. In particular, when investigating a mutation of interest, it is extremely useful to obtain data about other patients or controls sharing similar mutations. However, personal genome data (whole-genome, exome, transcriptome data, etc.) is sensitive personal data. Confidentiality of this data must be guaranteed at all times, and only duly authorized researchers should access such personal data.
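
To make the transfer bottleneck concrete, here is a back-of-envelope calculation. The ~50 GB per whole-genome study comes from the text above; the cohort size and the 1 Gbit/s effective link speed are assumptions chosen purely for illustration.

```python
# Back-of-envelope illustration of the storage/transfer bottleneck described
# above. Only the 50 GB per whole-genome study comes from the text; cohort
# size and link speed are assumed values.
GB_PER_GENOME = 50
N_GENOMES = 1_000              # assumed cohort size
LINK_GBIT_PER_S = 1            # assumed effective bandwidth

total_gb = GB_PER_GENOME * N_GENOMES
transfer_seconds = total_gb * 8 / LINK_GBIT_PER_S
print(f"{total_gb:,} GB total, ~{transfer_seconds / 3600:.0f} h to move off-site")
# 50,000 GB total, ~111 h to move off-site
```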


IACR Cryptology ePrint Archive | 2017

Privacy-Preserving Genome-Wide Association Study is Practical.

Charlotte Bonte; Eleftheria Makri; Amin Ardeshirdavani; Jaak Simm; Yves Moreau; Frederik Vercauteren


Abstract book | 2016

Towards a Belgian reference set

Erika Souche; Amin Ardeshirdavani; Yves Moreau; Gert Matthijs; Joris Vermeesch

Collaboration


Dive into Amin Ardeshirdavani's collaborations.

Top Co-Authors

Yves Moreau (Katholieke Universiteit Leuven)
Joris Vermeesch (Katholieke Universiteit Leuven)
Erika Souche (Katholieke Universiteit Leuven)
Jeroen Van Houdt (Katholieke Universiteit Leuven)
Luc Dehaspe (Katholieke Universiteit Leuven)
Griet Laenen (Katholieke Universiteit Leuven)
Jan Aerts (Katholieke Universiteit Leuven)
Lieven Thorrez (Katholieke Universiteit Leuven)