
Publication


Featured research published by Yassene Mohammed.


Biomedical Engineering Online | 2005

A finite element method model to simulate laser interstitial thermo therapy in anatomical inhomogeneous regions

Yassene Mohammed; Janko F Verhey

Background: Laser Interstitial ThermoTherapy (LITT) is a well-established surgical method. Its use is so far limited to homogeneous tissues, e.g. the liver, partly because existing treatment-planning models cannot accurately calculate the damage zone. Treatment planning in inhomogeneous tissues, especially in regions near major vessels, still poses a challenge. To extend the application of LITT to a wider range of anatomical regions, new simulation methods are needed. The model described in this article enables efficient simulation for predicting damaged tissue as a basis for a future laser-surgical planning system. We previously described the dependency of the model on geometry; this paper, including two video files, focuses on the methodological, physical, and mathematical background of the model.

Methods: In contrast to previous simulation attempts, our model is based on the finite element method (FEM). We propose the use of LITT in sensitive areas such as the neck region to treat lymph-node tumours of 0.5 cm - 2 cm in diameter near the carotid artery. The model calculates the light distribution using the diffusion approximation of transport theory; the temperature rise using the bioheat equation, including the effect of microperfusion in tissue, to determine the extent of thermal damage; and the dependency of thermal and optical properties on temperature and injury. Injury is estimated using a damage integral. To check the model, we performed a first in vitro experiment on porcine muscle tissue.

Results: We derived the geometry from 3D ultrasound data and show, for this geometry, the energy distribution, the heat elevation, and the damage zone, and we compare the simulation with the in vitro experiment. The calculation shows an error of 5% along the x-axis parallel to the blood vessel.

Conclusions: The proposed FEM technique can overcome limitations of other methods and enables efficient simulation for predicting the damage zone induced by LITT. Our calculations clearly show that major vessels would not be damaged. The area/volume of the damaged zone calculated from the simulation agrees well with the in vitro experiment, and the deviation is small; one of its main causes is the lack of accurate values for the tissue optical properties. This needs to be validated in further experiments.
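The physical core of such a model is standard: heat transport via Pennes' bioheat equation (with a perfusion sink and a laser source term) and injury via an Arrhenius damage integral. In generic textbook notation, not necessarily the paper's exact symbols:

```latex
% Pennes bioheat equation with perfusion sink and laser source:
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot (k \nabla T)
  - \rho_b c_b \omega_b \,(T - T_a)
  + Q_{\mathrm{laser}}

% Arrhenius damage integral; \Omega = 1 is commonly taken as the
% threshold for irreversible thermal damage:
\Omega(t) = \int_0^t A \exp\!\left(-\frac{E_a}{R\,T(\tau)}\right) \mathrm{d}\tau
```

Here $\rho, c, k$ are tissue density, specific heat, and conductivity, $\omega_b$ the blood perfusion rate, $T_a$ the arterial temperature, and $A, E_a$ the Arrhenius frequency factor and activation energy.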


Journal of Proteomics | 2014

PeptidePicker: a scientific workflow with web interface for selecting appropriate peptides for targeted proteomics experiments.

Yassene Mohammed; Dominik Domanski; Angela M. Jackson; Derek Smith; André M. Deelder; Magnus Palmblad; Christoph H. Borchers

One challenge in Multiple Reaction Monitoring (MRM)-based proteomics is to select the most appropriate surrogate peptides to represent a target protein. We present here a software package that automatically generates the most appropriate surrogate peptides for an LC/MRM-MS analysis. Our method integrates information about the proteins, their tryptic peptides, and the suitability of these peptides for MRM that is available online in UniProtKB, NCBI's dbSNP, ExPASy, PeptideAtlas, PRIDE, and GPMDB. The scoring algorithm reflects our knowledge in choosing the best candidate peptides for MRM, based on the uniqueness of the peptide in the targeted proteome, its physicochemical properties, and whether it has previously been observed. The modularity of the workflow allows further extension and additional selection criteria to be incorporated. We have developed a simple Web interface where the researcher provides the protein accession number, the subject organism, and peptide-specific options. Currently, the software is designed for the human and mouse proteomes, but additional species can easily be added. Our software improved peptide selection by eliminating human error, considering multiple data sources and all isoforms of the protein, and resulted in faster peptide selection: approximately 50 proteins per hour compared to 8 per day.

Biological significance: Compiling a list of optimal surrogate peptides for target proteins to be analyzed by LC/MRM-MS has been a cumbersome process, in which expert researchers retrieved information from different online repositories and used their own reasoning to find the most appropriate peptides. Our scientific workflow automates this process by integrating information from different data sources, including UniProt, Global Proteome Machine, NCBI's dbSNP, and PeptideAtlas, simulating the researcher's reasoning, and incorporating their knowledge of how to select the best proteotypic peptides for an MRM analysis. The developed software can help to standardize the selection of peptides, eliminate human error, and increase productivity.
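The selection criteria above (uniqueness, physicochemical suitability, prior observation) can be sketched as a simple scoring function. This is a hypothetical illustration in the spirit of PeptidePicker; the weights, length window, and residue rules are illustrative assumptions, not the published algorithm:

```python
# Hypothetical surrogate-peptide scoring sketch. Weights and criteria are
# illustrative, not PeptidePicker's actual scoring scheme.

def score_peptide(peptide, proteome_unique, times_observed):
    """Return a 0..1 suitability score for an MRM surrogate peptide."""
    score = 0.0
    # Uniqueness in the targeted proteome is the most important criterion.
    if proteome_unique:
        score += 0.4
    # Favor MRM-friendly lengths (roughly 7-20 residues; an assumed window).
    if 7 <= len(peptide) <= 20:
        score += 0.2
    # Penalize residues prone to chemical modification (Met, Cys).
    if not any(residue in peptide for residue in "MC"):
        score += 0.2
    # Prior observation in repositories (e.g. PeptideAtlas, GPMDB) adds confidence.
    score += min(times_observed, 5) / 5 * 0.2
    return score

# Toy candidates: (unique in proteome?, times observed in repositories)
candidates = {"LVNELTEFAK": (True, 12), "MCAEYR": (True, 0)}
ranked = sorted(candidates, key=lambda p: -score_peptide(p, *candidates[p]))
```

The real workflow pulls these inputs from the online resources named above; here they are supplied by hand.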


Journal of Proteome Research | 2015

Quest for Missing Proteins: Update 2015 on Chromosome-Centric Human Proteome Project.

Peter Horvatovich; Emma Lundberg; Yu-Ju Chen; Ting-Yi Sung; Fuchu He; Edouard C. Nice; Robert J. A. Goode; Simon Yu; Shoba Ranganathan; Mark S. Baker; Gilberto B. Domont; Erika Velasquez; Dong Li; Siqi Liu; Quanhui Wang; Qing-Yu He; Rajasree Menon; Yuanfang Guan; Fernando J. Corrales; Victor Segura; J. Ignacio Casal; Alberto Pascual-Montano; Juan Pablo Albar; Manuel Fuentes; María González-González; Paula Díez; Nieves Ibarrola; Rosa Ma Dégano; Yassene Mohammed; Christoph H. Borchers

This paper summarizes the recent activities of the Chromosome-Centric Human Proteome Project (C-HPP) consortium, which develops new technologies to identify yet-to-be-annotated proteins (termed missing proteins) in biological samples that lack sufficient experimental evidence at the protein level for confident protein identification. The C-HPP also aims to identify new protein forms that may be caused by genetic variability, post-translational modifications, and alternative splicing. Proteogenomic data integration forms the basis of the C-HPP's activities; therefore, we have summarized some of the key approaches and their roles in the project. We present new analytical technologies that expand the accessible chemical space and lower detection limits, coupled with bioinformatics tools and publicly available resources that can be used to improve data analysis or support the development of analytical assays. Most of this paper's content has been compiled from posters, slides, and discussions presented in the series of C-HPP workshops held during 2014. All data (posters, presentations) used are available at the C-HPP Wiki (http://c-hpp.webhosting.rug.nl/) and in the Supporting Information.


Journal of Proteome Research | 2012

Cloud Parallel Processing of Tandem Mass Spectrometry Based Proteomics Data

Yassene Mohammed; Ekaterina Mostovenko; Alex A. Henneman; Rob J. Marissen; André M. Deelder; Magnus Palmblad

Data analysis in mass spectrometry-based proteomics struggles to keep pace with advances in instrumentation and the increasing rate of data acquisition. Analyzing these data involves multiple steps requiring diverse software, using different algorithms and data formats. The speed and performance of mass spectral search engines are continuously improving, though not necessarily fast enough to meet the challenges posed by the volume of acquired data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing the identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing the resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power, and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database and spectral library searches using our data decomposition programs with X!Tandem and SpectraST.
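The decompose/search/recompose idea can be sketched in a few lines. This is a minimal illustration of the pattern, not the paper's mzXML/pepXML tooling; the round-robin split, chunk count, and the stand-in `mock_search` function are assumptions:

```python
# Data-decomposition sketch: split spectra into chunks, run each chunk through
# an unmodified "search engine" independently (here a stand-in function, in
# practice e.g. X!Tandem on one mzXML piece), then merge the results.

def decompose(spectra, n_chunks):
    """Split spectra round-robin into n_chunks lists."""
    chunks = [[] for _ in range(n_chunks)]
    for i, spectrum in enumerate(spectra):
        chunks[i % n_chunks].append(spectrum)
    return chunks

def mock_search(chunk):
    """Stand-in for one search-engine run on one decomposed chunk."""
    return [f"id_{s}" for s in chunk]

def recompose(results_per_chunk):
    """Merge per-chunk identification results back into one list."""
    merged = []
    for results in results_per_chunk:
        merged.extend(results)
    return merged

spectra = [f"spec{i}" for i in range(10)]
identifications = recompose([mock_search(c) for c in decompose(spectra, 3)])
```

Because each chunk is searched independently, the per-chunk calls can be dispatched to separate cloud nodes; the merge step is the only synchronization point.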


Journal of Proteome Research | 2015

Qualis-SIS: Automated Standard Curve Generation and Quality Assessment for Multiplexed Targeted Quantitative Proteomic Experiments with Labeled Standards

Yassene Mohammed; Andrew J. Percy; Andrew G. Chambers; Christoph H. Borchers

Multiplexed targeted quantitative proteomics typically utilizes multiple reaction monitoring and allows the optimized quantification of a large number of proteins. One challenge, however, is the large amount of data that needs to be reviewed, analyzed, and interpreted. Different vendors provide software for their instruments, which determines the recorded responses of the heavy and endogenous peptides and performs the response-curve integration. Bringing multiplexed data together and generating standard curves is often an off-line step accomplished, for example, with spreadsheet software. This can be laborious, as it requires iteratively determining the concentration levels that meet the required accuracy and precision criteria. We present here a computer program, Qualis-SIS, that generates standard curves from multiplexed MRM experiments and determines analyte concentrations in biological samples. Multiple level-removal algorithms and acceptance criteria for concentration levels are implemented. When used to apply the standard curve to new samples, the software flags each measurement according to its quality. From the user's perspective, the data processing is instantaneous due to the reactive paradigm used, and the user can download the results of the stepwise calculations for further processing, if necessary. This allows for more consistent data analysis and can dramatically accelerate the downstream data analysis.
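The iterative level-removal idea can be sketched as: fit a calibration line, back-calculate each level's concentration, drop the worst level if it exceeds an accuracy tolerance, and refit. This is an illustrative sketch, not the Qualis-SIS implementation; the 1/x² weighting and the ±20% tolerance are common calibration conventions assumed here for the example:

```python
# Standard-curve sketch with iterative level removal (illustrative, not the
# published Qualis-SIS algorithm).

def wfit(levels):
    """Weighted least-squares line fit with 1/x^2 weights, a common choice
    for calibration curves spanning several orders of magnitude."""
    ws = [1.0 / c ** 2 for c, _ in levels]
    xs = [c for c, _ in levels]
    ys = [r for _, r in levels]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    sxx = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    sxy = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def standard_curve(levels, tolerance=0.20):
    """levels: list of (nominal_concentration, measured_response).
    Iteratively drop the level with the worst back-calculated accuracy
    until all remaining levels are within +/-tolerance, then return the fit."""
    levels = list(levels)
    while len(levels) > 2:
        slope, intercept = wfit(levels)
        errs = [abs((r - intercept) / slope - c) / c for c, r in levels]
        worst = max(range(len(levels)), key=errs.__getitem__)
        if errs[worst] <= tolerance:
            break
        levels.pop(worst)
    return wfit(levels), levels

# Synthetic example: true response ~ 2x, with a badly saturated top level.
data = [(1, 2.0), (2, 4.1), (5, 9.9), (10, 20.2), (50, 70.0)]
(slope, intercept), kept = standard_curve(data)
```

On this synthetic data the saturated 50-unit level fails the accuracy check and is removed, and the refit recovers a slope near 2.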


Future Generation Computer Systems | 2014

A new optimization phase for scientific workflow management systems

Sonja Holl; Olav Zimmermann; Magnus Palmblad; Yassene Mohammed; Martin Hofmann-Apitius

Scientific workflows have emerged as an important tool for combining computational power with data analysis in all scientific domains of e-science. They help scientists design and execute complex in silico experiments. However, with increasing complexity it becomes more and more infeasible to optimize scientific workflows by trial and error. To address this issue, this paper describes the design of a new optimization phase integrated into the established scientific workflow life cycle. We have also developed a flexible optimization application programming interface (API) and have integrated it into a scientific workflow management system. A sample plugin for parameter optimization based on genetic algorithms illustrates how the API enables rapid implementation of concrete workflow optimization methods. Finally, a use case from the area of structural bioinformatics validates how the optimization approach facilitates the setup, execution, and monitoring of workflow parameter optimization in high-performance computing e-science environments.
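A genetic-algorithm parameter optimizer of the kind such a plugin could host can be sketched briefly. The workflow here is a stand-in fitness function, and the population size, mutation rate, and crossover scheme are illustrative choices, not values from the paper:

```python
# Toy genetic-algorithm parameter optimization: maximize a fitness function
# over real-valued parameters within given bounds. Illustrative only.
import random

def optimize(fitness, bounds, pop_size=20, generations=30, mut_rate=0.2, seed=42):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: keep the better half as survivors.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # averaging crossover
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mut_rate:               # Gaussian mutation
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Example: a made-up "workflow score" peaked at parameters (3, -1).
best = optimize(lambda p: -((p[0] - 3) ** 2 + (p[1] + 1) ** 2),
                bounds=[(-10, 10), (-10, 10)])
```

In a real plugin, evaluating `fitness` means executing the workflow with the candidate parameters and scoring its output, which is why the API-level integration with the workflow engine matters.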


Bioanalysis | 2015

A standardized kit for automated quantitative assessment of candidate protein biomarkers in human plasma

Andrew J. Percy; Yassene Mohammed; Juncong Yang; Christoph H. Borchers

Background: An increasingly popular mass spectrometry-based quantitative approach for health-related research in the biomedical field involves the use of stable isotope-labeled standards (SIS) and multiple/selected reaction monitoring (MRM/SRM). To improve inter-laboratory precision and enable more widespread use of this absolute quantitative technique in disease-biomarker assessment studies, methods must be standardized.

Results/methodology: Using this MRM-with-SIS-peptide approach, we developed an automated method (encompassing sample preparation, processing, and analysis) for quantifying 76 candidate protein markers (spanning >4 orders of magnitude in concentration) in neat human plasma.

Discussion/conclusion: The assembled biomarker assessment kit - the BAK-76 - contains the essential materials (SIS mixes), methods (for acquisition and analysis), and tools (Qualis-SIS software) for performing biomarker discovery or verification studies in a rapid and standardized manner.


international conference on e-science | 2012

Fast confidential search for bio-medical data using Bloom filters and Homomorphic Cryptography

Henning Perl; Yassene Mohammed; Michael P. Brenner; Matthew Smith

Data protection is a challenge when outsourcing medical analysis, especially when dealing with patient-related data. While transfer channels can be secured using encryption mechanisms, protecting the data during analysis is difficult, as analysis usually involves processing steps on the plain data. A common use case in bioinformatics is a scientist searching for a biological sequence of amino acids or DNA nucleotides in a library or database of sequences to identify similarities. Most such search algorithms are optimized for speed, with little or no consideration for data protection. Fast algorithms are especially necessary because of the immense search space represented, for instance, by the genome or proteome of complex organisms. We propose a new secure exact term search algorithm based on Bloom filters. Our algorithm preserves data privacy by using obfuscated Bloom filters while maintaining the performance needed for real-life applications. The results can then be further aggregated using homomorphic cryptography to allow exact-match searching. The proposed system facilitates outsourcing exact term search of sensitive data to on-demand resources in a way that conforms to best practices of data protection.
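The Bloom-filter half of the scheme can be sketched directly: the data owner inserts fixed-length subsequences into the filter, and membership can then be tested without access to the underlying sequences. The obfuscation and homomorphic-aggregation steps of the paper are not reproduced here, and the filter size, hash construction, and subsequence length are illustrative assumptions:

```python
# Minimal Bloom filter for exact term search over sequence subsequences.
# Illustrative only; not the paper's obfuscated construction.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 16, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, term):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{term}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, term):
        for p in self._positions(term):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, term):
        # May return false positives, never false negatives.
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(term))

# Index all length-6 subsequences of a toy protein sequence, then query.
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
bf = BloomFilter()
for i in range(len(seq) - 5):
    bf.add(seq[i:i + 6])
```

With far fewer inserted terms than bits, the false-positive probability is negligible; the privacy properties of the full scheme come from the obfuscation and homomorphic layers on top of this structure.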


Proteomics | 2017

Multiplexed MRM‐based assays for the quantitation of proteins in mouse plasma and heart tissue

Andrew J. Percy; Sarah A. Michaud; Armando Jardim; Nicholas J. Sinclair; Suping Zhang; Yassene Mohammed; Andrea L. Palmer; Darryl B. Hardie; Juncong Yang; André LeBlanc; Christoph H. Borchers

The mouse is the most commonly used laboratory animal, with more than 14 million mice used for research each year in North America alone. The number and diversity of mouse models is increasing rapidly through genetic engineering strategies, but detailed characterization of these models is still challenging because most phenotypic information is derived from time-consuming histological and biochemical analyses. To expand the biochemist's toolkit, we generated a set of targeted proteomic assays for mouse plasma and heart tissue, utilizing bottom-up LC/MRM-MS with isotope-labeled peptides as internal standards. Protein quantitation was performed using reverse standard curves, with LC-MS platform and curve performance evaluated by quality control standards. The assays comprising the final panel (101 peptides for 81 proteins in plasma; 227 peptides for 159 proteins in heart tissue) have been rigorously developed under a fit-for-purpose approach and utilize stable-isotope-labeled peptides for every analyte to provide high-quality, precise relative quantitation. In addition, the peptides have been tested and shown to be interference-free, and the assay is highly multiplexed, with reproducibly determined protein concentrations spanning >4 orders of magnitude. The developed assays have been used in a small pilot study to demonstrate their application to molecular phenotyping and biomarker discovery/verification studies.


Proteomics | 2017

PeptideTracker: A knowledgebase for collecting and storing information on protein concentrations in biological tissues

Yassene Mohammed; Pallab Bhowmick; Derek Smith; Dominik Domanski; Angela M. Jackson; Sarah Michaud; Sebastian Malchow; Andrew J. Percy; Andrew G. Chambers; Andrea L. Palmer; Suping Zhang; Albert Sickmann; Christoph H. Borchers

Targeted proteomics using multiple reaction monitoring (MRM) has become the method of choice for the rapid, precise, and reproducible quantitation of proteins in complex biological matrices. MRM-based targeted proteomics allows protein profiling in complex biological samples and accurate comparisons between samples [1-6], typically using a bottom-up LC-MS/MS approach [7]. While relative quantitation approaches report fold-change differences between the samples being compared, absolute methods measure the concentrations [8], ideally based on isotopically labeled internal standard peptides that are analogues of the endogenous proteotypic target peptides [9]. The use of stable-isotope-labeled standard (SIS) peptides is a key factor in the precision and reproducibility of absolute quantitation experiments using LC/MRM-MS, but the results may still not be accurate. Experimentally derived concentrations for specific tissues or biofluids can vary as a function of the digestion protocol, or even of which peptides are selected to represent a given protein [10-12].

Collaboration


Dive into Yassene Mohammed's collaborations.

Top Co-Authors

Magnus Palmblad

Leiden University Medical Center

Ulrich Sax

University of Göttingen

Fred Viezens

University of Göttingen