Rainer Schrader
University of Cologne
Publication
Featured research published by Rainer Schrader.
NeuroImage | 2004
Jiri Cizek; Karl Herholz; Stefan Vollmar; Rainer Schrader; Johannes C. Klein; Wolf-Dieter Heiss
In recent years, mutual information has proved to be an excellent criterion for registration of intra-individual images from different modalities. Multi-resolution coarse-to-fine optimization has been proposed to speed up the registration process. The aim of our work was to further improve registration speed without compromising robustness or accuracy. We present and evaluate two procedures for co-registration of positron emission tomography (PET) and magnetic resonance (MR) images of the human brain that combine a multi-resolution approach with an automatic segmentation of the input image volumes into areas of interest and background. We show that an acceleration factor of 10 can be achieved for clinical data and that suitable preprocessing can improve the robustness of registration. Emphasis was placed on creating an automatic registration system that can be used routinely in a clinical environment. For this purpose, an easy-to-use graphical user interface has been developed. It allows physicians with no special knowledge of the registration algorithm to perform a fast and reliable alignment of images. Registration progress is presented on the fly as a fusion of the images, enabling visual checks during registration.
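For orientation, the sketch below shows the mutual-information criterion that drives this kind of registration, computed from the joint intensity histogram of two volumes. It is an illustration only, not the authors' implementation, and the array names are placeholders.

```python
# Minimal sketch of image-to-image mutual information from a joint histogram.
import numpy as np

def mutual_information(fixed, moving, bins=64):
    """Mutual information between two equally shaped intensity volumes."""
    joint_hist, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint_hist / joint_hist.sum()          # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of the fixed image
    py = pxy.sum(axis=0, keepdims=True)          # marginal of the moving image
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Usage idea: evaluate MI for candidate rigid transforms of the PET volume and
# keep the transform that maximizes it, optionally restricted to a segmented
# foreground mask to mimic the background removal described above.
```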
Journal of Cheminformatics | 2009
Syed Asad Rahman; Matthew Bashton; Gemma L. Holliday; Rainer Schrader; Janet M. Thornton
BACKGROUND Finding one small molecule (query) in a large target library is a challenging task in computational chemistry. Although several heuristic approaches are available using fragment-based chemical similarity searches, they fail to identify exact atom-bond equivalence between the query and target molecules and thus cannot be applied to complex chemical similarity searches, such as searching a complete or partial metabolic pathway. In this paper we present a new Maximum Common Subgraph (MCS) tool, SMSD (Small Molecule Subgraph Detector), to overcome the issues with current heuristic approaches to small molecule similarity searches. The MCS search implemented in SMSD incorporates chemical knowledge (atom type matching with bond-sensitive and bond-insensitive information) while searching for molecular similarity. We also propose a novel method by which solutions obtained by each MCS run can be ranked using chemical filters such as stereochemistry, bond energy, etc. RESULTS In order to benchmark and test the tool, we performed 50,000 pair-wise comparisons between KEGG ligands and PDB HET group atoms. In both cases SMSD was shown to be more efficient than the widely used MCS module implemented in the Chemistry Development Kit (CDK) in generating MCS solutions from our test cases. CONCLUSION Presently this tool can be applied to various areas of bioinformatics and chemoinformatics for finding exhaustive MCS matches. For example, it can be used to analyse metabolic networks by mapping the atoms between reactants and products involved in reactions. It can also be used to perform MCS/substructure searches on small molecules reported by metabolome experiments, as well as in the screening of drug-like compounds with similar substructures. Thus, we present a robust tool that can be used for multiple applications, including the discovery of new drug molecules. This tool is freely available on http://www.ebi.ac.uk/thornton-srv/software/SMSD/
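SMSD itself is a Java library. As a rough illustration of what a bond-sensitive MCS query looks like, the sketch below uses RDKit's generic MCS routine on two example molecules; the molecules and settings are assumptions for illustration, not part of the paper.

```python
# Illustrative only: a bond/atom-type-sensitive maximum common subgraph query
# using RDKit's generic MCS routine (not SMSD).
from rdkit import Chem
from rdkit.Chem import rdFMCS

query  = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")   # aspirin (example query)
target = Chem.MolFromSmiles("OC(=O)c1ccccc1O")          # salicylic acid (example target)

# matchValences and an exact bond-order comparison make the search sensitive to
# atom and bond types, in the spirit of the chemically aware matching above.
result = rdFMCS.FindMCS([query, target],
                        bondCompare=rdFMCS.BondCompare.CompareOrderExact,
                        matchValences=True)
print(result.numAtoms, result.numBonds, result.smartsString)
```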
Bioinformatics | 2005
Syed Asad Rahman; P. Advani; R. Schunk; Rainer Schrader; Dietmar Schomburg
MOTIVATION Pathway Hunter Tool (PHT) is a fast, robust and user-friendly tool to analyse shortest paths in metabolic pathways. The user can perform shortest-path analysis for one or more organisms or can build virtual organisms (networks) using enzymes. Using PHT, the user can also calculate the average shortest path (Jungnickel, 2002, Graphs, Networks and Algorithms, Springer-Verlag, Berlin), the average alternate path and the top 10 hubs in the metabolic network. A comparative study of metabolic connectivity and of the cross-talk between metabolic pathways among various sequenced genomes is also possible. RESULTS A new algorithm for finding the biochemically valid connectivity between metabolites in a metabolic network was developed and implemented. A predefined manual assignment of side metabolites (such as ATP, ADP, water, CO2, etc.) and main metabolites is not necessary, as the new concept uses chemical structure information (global and local similarity) between metabolites to identify the shortest path.
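The idea of letting chemical similarity, rather than a manual cofactor blacklist, steer the shortest-path search can be sketched as follows; the graph, metabolites and weights are invented for illustration and do not reproduce PHT.

```python
# Toy sketch: reaction edges are weighted by a chemical dissimilarity score,
# so paths through side metabolites like ADP become unattractive without any
# manual assignment of side vs. main metabolites. Weights are made up.
import networkx as nx

G = nx.DiGraph()
# add_edge(u, v, weight = 1 - structural_similarity(u, v))
G.add_edge("glucose", "glucose-6-phosphate", weight=0.2)
G.add_edge("glucose", "ADP", weight=0.95)                 # cofactor: high cost
G.add_edge("glucose-6-phosphate", "fructose-6-phosphate", weight=0.15)
G.add_edge("ADP", "fructose-6-phosphate", weight=0.9)

path = nx.shortest_path(G, "glucose", "fructose-6-phosphate", weight="weight")
print(path)   # ['glucose', 'glucose-6-phosphate', 'fructose-6-phosphate']
```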
Nonlinear Programming 4: Proceedings of the Nonlinear Programming Symposium 4, Conducted by the Computer Sciences Department at the University of Wisconsin–Madison, July 14–16, 1980 | 1981
Bernhard Korte; Rainer Schrader
We characterize those combinatorial optimization problems which can be solved approximately by polynomially bounded algorithms. Using slight modifications of the algorithms of Sahni and of Ibarra and Kim for the knapsack problem, we prove that there is no fast approximation scheme unless their algorithmic ideas apply. Hence we show that these algorithms are not only the origin but also prototypes for all polynomial or fully polynomial approximation schemes.
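As a reminder of the algorithmic ideas referred to, the sketch below shows the classical profit-scaling approximation scheme for the knapsack problem in the style of Ibarra and Kim. It is a textbook illustration, not code from the paper.

```python
# Profit-scaling FPTAS for 0/1 knapsack: round profits down by a factor that
# depends on eps, then run exact dynamic programming over the scaled profits.
def knapsack_fptas(profits, weights, capacity, eps):
    """(1 - eps)-approximation for 0/1 knapsack via profit scaling."""
    n = len(profits)
    scale = eps * max(profits) / n               # scaling factor K
    scaled = [int(p / scale) for p in profits]   # rounded-down scaled profits
    dp = {0: 0}                                  # dp[v] = min weight reaching scaled profit v
    for p, w in zip(scaled, weights):
        for v, wt in list(dp.items()):           # snapshot: each item used at most once
            if wt + w <= capacity and dp.get(v + p, capacity + 1) > wt + w:
                dp[v + p] = wt + w
    return max(dp) * scale                       # approximate optimal profit

print(knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.1))  # ~220
```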
Bioinformatics | 2001
Eva Bolten; Alexander Schliep; Sebastian Schneckener; Dietmar Schomburg; Rainer Schrader
MOTIVATION It is widely believed that for two proteins A and B a sequence identity above some threshold implies structural similarity due to a common evolutionary ancestor. Since this is only a sufficient, but not a necessary, condition for structural similarity, the question remains what other criteria can be used to identify remote homologues. Transitivity refers to the concept of deducing a structural similarity between proteins A and C from the existence of a third protein B, such that A and B as well as B and C are homologues, as ascertained if the sequence identity between A and B as well as that between B and C is above the aforementioned threshold. It is not fully understood whether transitivity always holds and whether it can be extended ad infinitum. RESULTS We developed a graph-based clustering approach in which transitivity plays a crucial role. We determined all pair-wise similarities for the sequences in the SwissProt database using the Smith-Waterman local alignment algorithm. These data were transformed into a directed graph, where protein sequences constitute vertices. A directed edge was drawn from vertex A to vertex B if the sequences A and B showed a similarity, scaled with respect to the self-similarity of A, above a fixed threshold. Transitivity was important in the clustering process, as intermediate sequences were used, although limited by the requirement of having directed paths in both directions between proteins linked over such sequences. The length dependency of the scaling of the alignment scores, implied by the self-similarity, appears to be an effective criterion to avoid clustering errors due to multi-domain proteins. To deal with the resulting large graphs we have developed an efficient library. Methods include the novel graph-based clustering algorithm capable of handling multi-domain proteins and cluster comparison algorithms. The Structural Classification of Proteins (SCOP) was used as an evaluation data set for our method, yielding a 24% improvement over pair-wise comparisons in terms of detecting remote homologues. AVAILABILITY The software is available to academic users on request from the authors. CONTACT [email protected]; [email protected]; [email protected]; [email protected]; [email protected]. SUPPLEMENTARY INFORMATION http://www.zaik.uni-koeln.de/~schliep/ProtClust.html.
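A minimal sketch of the clustering idea, with invented scores: a directed edge requires the similarity scaled by the source's self-similarity to exceed a threshold, and clusters are taken as groups connected by directed paths in both directions (strongly connected components). This is an illustration of the principle, not the authors' library.

```python
# Directed similarity graph with self-similarity scaling; clusters as SCCs.
import networkx as nx

scores = {("A", "B"): 310, ("B", "A"): 310, ("B", "C"): 280, ("C", "B"): 280,
          ("A", "C"): 40,  ("C", "A"): 40}        # invented alignment scores
self_score = {"A": 400, "B": 350, "C": 300}        # invented self-similarities
threshold = 0.5

G = nx.DiGraph()
G.add_nodes_from(self_score)
for (u, v), s in scores.items():
    if s / self_score[u] >= threshold:             # scale by self-similarity of u
        G.add_edge(u, v)

clusters = list(nx.strongly_connected_components(G))
print(clusters)   # A and C end up in one cluster via the intermediate sequence B
```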
Bioinformatics | 2006
Lars Kaderali; Thomas Zander; Ulrich Faigle; Jürgen Wolf; Joachim L. Schultze; Rainer Schrader
MOTIVATION DNA microarrays allow the simultaneous measurement of thousands of gene expression levels in any given patient sample. Gene expression data have been shown to correlate with survival in several cancers; however, analysis of the data is difficult, since typically at most a few hundred patients are available, resulting in severely underdetermined regression or classification models. Several approaches exist to classify patients into different risk classes, but relatively little has been done with respect to the prediction of actual survival times. We introduce CASPAR, a novel method to predict true survival times for the individual patient based on microarray measurements. CASPAR is based on a multivariate Cox regression model that is embedded in a Bayesian framework. A hierarchical prior distribution on the regression parameters is specifically designed to deal with the high-dimensionality (large number of genes) and low sample size settings that are typical of microarray measurements. This enables CASPAR to automatically select small, most informative subsets of genes for prediction. RESULTS The validity of the method is demonstrated on two publicly available datasets, on diffuse large B-cell lymphoma (DLBCL) and on adenocarcinoma of the lung. The method successfully identifies long-term and short-term survivors with high sensitivity and specificity. We compare our method with two alternative methods from the literature, demonstrating superior results of our approach. In addition, we show that CASPAR can further refine predictions made using clinical scoring systems such as the International Prognostic Index (IPI) for DLBCL and clinical staging for lung cancer, thus providing an additional tool for the clinician. An analysis of the genes identified confirms previously published results and, furthermore, identifies new candidate genes correlated with survival.
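As a rough illustration of the core ingredient, not of CASPAR itself, the sketch below fits a Cox partial likelihood with an L1 penalty standing in for the hierarchical sparsity prior, on synthetic "many genes, few patients" data. All names and data are placeholders.

```python
# Penalized Cox proportional-hazards sketch on synthetic survival data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 40, 100                                   # few samples, many genes
X = rng.standard_normal((n, p))                  # fake expression matrix
times = rng.exponential(1.0, n)                  # fake survival times
events = rng.integers(0, 2, n)                   # 1 = event observed, 0 = censored

def penalized_neg_log_partial_likelihood(beta, lam=1.0):
    eta = X @ beta
    order = np.argsort(-times)                   # descending time -> growing risk sets
    log_risk = np.logaddexp.accumulate(eta[order])
    ll = np.sum((eta[order] - log_risk) * events[order])
    return -ll + lam * np.sum(np.abs(beta))      # L1 penalty mimics a sparse prior

res = minimize(penalized_neg_log_partial_likelihood, np.zeros(p), method="L-BFGS-B")
print("non-negligible coefficients:", int(np.sum(np.abs(res.x) > 1e-3)))
```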
Information Processing Letters | 1988
Ulrich Faigle; Rainer Schrader
Simulated annealing is a randomized optimization algorithm which accepts deteriorations of the objective function with a probability depending on a control parameter t in each iteration. Under general assumptions we show that the equilibrium distributions with respect to the parameter levels converge to the distribution which selects a global optimum with probability one. This result may be taken as an explanation for the observed good performance of simulated annealing implementations which prefer many iterations on relatively few parameter levels to relatively few iterations at many parameter levels.
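A generic simulated-annealing skeleton matching this description, spending many iterations at each of relatively few temperature levels; the example problem, neighbourhood and cooling schedule are illustrative choices, not taken from the paper.

```python
# Generic simulated annealing: accept deteriorations with probability exp(-delta/t).
import math, random

def simulated_annealing(cost, neighbor, x0, levels=20, iters_per_level=500,
                        t0=10.0, alpha=0.7, seed=0):
    random.seed(seed)
    x, t = x0, t0
    for _ in range(levels):                      # relatively few parameter levels
        for _ in range(iters_per_level):         # many iterations per level
            y = neighbor(x)
            delta = cost(y) - cost(x)
            # accept improvements always, deteriorations with prob exp(-delta / t)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x = y
        t *= alpha                               # geometric cooling (illustrative)
    return x

# Example: minimize a bumpy one-dimensional function.
f = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
print(round(simulated_annealing(f, step, x0=-10.0), 2))
```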
Mathematics of Operations Research | 2000
Andreas Nolte; Rainer Schrader
Simulated Annealing has proven to be a very successful heuristic for various combinatorial optimization problems. It is a randomized algorithm that attempts to find the global optimum with high probability by local exchanges. In this paper we give a new proof of the convergence of Simulated Annealing by applying results about rapidly mixing Markov chains. With this proof technique it is possible to obtain better bounds for the finite time behavior of Simulated Annealing than previously known.
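The convergence statement can be illustrated numerically: the equilibrium (Boltzmann) distribution over a finite state space concentrates on the global optima as the temperature parameter tends to zero. The costs below are arbitrary.

```python
# Boltzmann equilibrium distribution over a toy state space at falling temperature.
import numpy as np

costs = np.array([3.0, 1.0, 1.0, 5.0, 2.0])      # two global optima (cost 1.0)

def boltzmann(t):
    w = np.exp(-costs / t)
    return w / w.sum()

for t in (10.0, 1.0, 0.1, 0.01):
    print(t, np.round(boltzmann(t), 3))
# As t -> 0, the probability mass splits evenly over the two minimum-cost states.
```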
Annals of Operations Research | 2001
A. Erdmann; Andreas Nolte; A. Noltemeier; Rainer Schrader
Since opening a new flight connection or closing an existing flight has a great impact on the revenues of an airline, the generation of the flight schedule is one of the fundamental problems in airline planning processes. In this paper we concentrate on a special case of the problem which arises at charter companies. In contrast to airlines operating on regular schedules, the market for charter airlines is well known, and the schedule is allowed to change completely from period to period. Thus, precise adjustments to the demands of the market have great potential for minimizing operating costs. We present a capacitated network design model and propose a combined branch-and-cut approach to solve this airline schedule generation problem. To tighten the linear relaxation bound, we add cutting planes which adjust the number of aircraft and the spill of passengers to the demand on each itinerary. For real-world problems from a large European charter airline we obtain solutions within a few percent of optimality, with running times on the order of minutes on a standard personal computer for most of the data sets.
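A toy capacitated network design model in the spirit of the paper can be written down in a few lines; the legs, costs and demands below are invented, and the real model with its cutting planes is far richer.

```python
# Toy schedule-generation MIP: binary variables open flight legs, integer
# variables route passengers, and seat capacities couple the two.
import pulp

legs = ["CGN-PMI", "CGN-AYT", "PMI-AYT"]
fixed_cost = {"CGN-PMI": 900, "CGN-AYT": 1100, "PMI-AYT": 400}   # invented costs
seat_cap = 180
demand = {"CGN-PMI": 150, "CGN-AYT": 120}                        # itineraries to serve

prob = pulp.LpProblem("charter_schedule", pulp.LpMinimize)
open_leg = pulp.LpVariable.dicts("open", legs, cat="Binary")
pax = pulp.LpVariable.dicts("pax", legs, lowBound=0, cat="Integer")

prob += pulp.lpSum(fixed_cost[l] * open_leg[l] for l in legs)    # operating cost
for l in legs:
    prob += pax[l] <= seat_cap * open_leg[l]                     # capacity coupling
for itin, d in demand.items():
    prob += pax[itin] >= d                                       # serve the demand

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({l: int(open_leg[l].value()) for l in legs})
```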
SIAM Journal on Computing | 1986
Ulrich Faigle; László Lovász; Rainer Schrader; György Turán
Linial and Saks [2] have shown that