Amir Madany Mamlouk
University of Lübeck
Publications
Featured research published by Amir Madany Mamlouk.
Neurocomputing | 2003
Amir Madany Mamlouk; Christine Chee-Ruiter; Ulrich G. Hofmann; James M. Bower
In this paper we describe an effort to project an olfactory perception database onto the nearest high-dimensional Euclidean space using multidimensional scaling. This yields an independent Euclidean interpretation of odor perception, whether this space is metric or not. Self-organizing maps were then applied to produce two-dimensional maps of the Euclidean approximation of olfactory perception space. These maps provide new knowledge about the complexity and, potentially, the functionality of the sense of smell from the point of view of human odor perception. This report is based on a recent thesis by Madany Mamlouk, Quantifying olfactory perception, at the University of Lübeck, Germany.
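For illustration, here is a minimal sketch of the two-stage procedure the abstract describes: embedding a (here randomly generated) odor-dissimilarity matrix with metric MDS and then training a small self-organizing map on the embedding. The grid size, training schedule, and data are placeholder choices, not those used in the study.

```python
# Minimal sketch: embed a pairwise dissimilarity matrix with metric MDS,
# then train a small self-organizing map (SOM) on the embedded points.
# The dissimilarity matrix is random placeholder data, not the odor database
# used in the paper.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Placeholder: symmetric dissimilarities between 50 hypothetical odor percepts.
X_raw = rng.random((50, 10))
D = np.linalg.norm(X_raw[:, None, :] - X_raw[None, :, :], axis=-1)

# Step 1: project onto a high-dimensional Euclidean space (here 10 dimensions).
mds = MDS(n_components=10, dissimilarity="precomputed", random_state=0)
X = mds.fit_transform(D)

# Step 2: a tiny SOM producing a 2-D map of the Euclidean approximation.
grid_h, grid_w, dim = 8, 8, X.shape[1]
weights = rng.normal(size=(grid_h, grid_w, dim))
grid = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

n_epochs = 20
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs)               # learning rate decays linearly
    sigma = max(0.5, 3.0 * (1 - epoch / n_epochs))  # neighborhood radius shrinks
    for x in rng.permutation(X):
        # Best-matching unit on the 2-D grid.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Gaussian neighborhood update around the winner.
        grid_dist2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)

print("SOM codebook shape:", weights.shape)  # (8, 8, 10): a 2-D map of the embedding
```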
PLOS ONE | 2011
Daniel H. Rapoport; Tim Becker; Amir Madany Mamlouk; Simone Schicktanz; Charli Kruse
Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automate this process have resulted in ever-improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they lacked validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea of automatically inspecting the tracking results and accepting only those of high trustworthiness, while rejecting all others. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e., records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters with high reliability and statistical significance. These include the distributions of life/cycle times and cell areas, as well as the symmetry of cell divisions and motion analyses. The new algorithm thus allows for the quantification and parameterization of cell culture with unprecedented accuracy. To evaluate our validation algorithm, two large reference data sets were manually created. These data sets comprise more than 320,000 unstained adult pancreatic stem cells from rat, including 2592 mitotic events. The reference data sets specify every cell position and shape, and assign each cell to the correct branch of its genealogic tree. We provide these reference data sets for free use by others as a benchmark for the future improvement of automated tracking methods.
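A much-simplified sketch of the validation idea: a candidate path is accepted only if it is spatiotemporally contiguous and complete from mitosis to mitosis; otherwise the whole path is rejected. The `Detection` structure, thresholds, and mitosis flags below are hypothetical stand-ins for the paper's actual detection pipeline.

```python
# Simplified illustration of path validation: accept a candidate cell path only
# if it is spatiotemporally contiguous and runs from one mitosis to the next;
# otherwise reject the whole path. The thresholds and event flags are
# hypothetical, not the criteria used in the paper.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    frame: int
    x: float
    y: float
    is_mitosis: bool = False  # True if this detection marks a division event

def validate_path(path: List[Detection], max_step: float = 25.0) -> bool:
    """Return True only if the path looks like a complete, contiguous record
    of one cell from its birth (mitosis) to its own division (mitosis)."""
    if len(path) < 2:
        return False
    # The unit of acceptance is the complete path: it must start and end at mitosis.
    if not (path[0].is_mitosis and path[-1].is_mitosis):
        return False
    for prev, curr in zip(path, path[1:]):
        # Temporal contiguity: consecutive detections are exactly one frame apart.
        if curr.frame != prev.frame + 1:
            return False
        # Spatial contiguity: cells do not jump farther than max_step pixels.
        if ((curr.x - prev.x) ** 2 + (curr.y - prev.y) ** 2) ** 0.5 > max_step:
            return False
    return True

# Usage: keep only validated paths; everything else is discarded, trading
# completeness for reliability, as described in the abstract.
paths = [
    [Detection(0, 10, 10, True), Detection(1, 14, 12), Detection(2, 18, 15, True)],
    [Detection(0, 50, 50, True), Detection(3, 80, 90, True)],  # frame gap -> rejected
]
accepted = [p for p in paths if validate_path(p)]
print(f"accepted {len(accepted)} of {len(paths)} paths")
```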
Neurocomputing | 2004
Amir Madany Mamlouk; Thomas Martinetz
In recent works, large databases of stimuli and their corresponding olfactory perceptions have been analyzed to gain insight into the organization of olfactory perception. Maps of these perceptions have provided evidence that the olfactory perception space is high-dimensional. Based on these results, the question of the dimensionality of olfactory perception space can be asked from a new perspective. In this paper the problem of dimensionality is approached more rigorously, and upper bounds on the dimensionality of the olfactory perception space are estimated.
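Purely as an illustration of how an upper bound on dimensionality can be read off embedded data, the snippet below counts the principal components needed to explain most of the variance of a placeholder point cloud; this generic PCA criterion is not necessarily the bounding argument used in the paper.

```python
# Illustrative only: one simple way to bound the dimensionality of an embedded
# perception space is to count how many principal components are needed to
# explain most of the variance. The data are a random placeholder.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Placeholder point cloud standing in for a Euclidean embedding of odor percepts.
X = rng.normal(size=(200, 30)) @ rng.normal(size=(30, 30))

pca = PCA().fit(X)
cum_var = np.cumsum(pca.explained_variance_ratio_)
upper_bound = int(np.searchsorted(cum_var, 0.95)) + 1  # components for 95% variance
print(f"Upper-bound estimate: {upper_bound} dimensions")
```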
International Conference on Artificial Neural Networks | 2008
Sascha Klement; Amir Madany Mamlouk; Thomas Martinetz
For given two-class data, a Support Vector Machine (SVM) learns a classifier that tries to achieve good generalisation by maximising the minimal margin between the two classes. The performance can be evaluated using cross-validation testing strategies. But in the case of low-sample-size data, high dimensionality might lead to strong side effects that can significantly bias the estimated performance of the classifier. On simulated data, we illustrate the effects of high dimensionality for cross-validation of both hard- and soft-margin SVMs. Based on theoretical proofs for the limit of infinite dimensionality, we derive heuristics that can easily be used to validate whether or not given data sets are subject to these constraints.
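The side effect can be reproduced with a few lines of simulation: cross-validating a (nearly) hard-margin linear SVM on pure-noise data with few samples and many features yields accuracy estimates that can deviate markedly from the 50% chance level. The sample size, dimensionality, and choice of leave-one-out cross-validation below are illustrative assumptions.

```python
# Simulated demonstration of the effect discussed above: with few samples and
# very many features, cross-validated SVM accuracy on pure-noise data can
# deviate substantially from the 50% chance level, so a single estimate may be
# badly biased. All sizes are illustrative choices.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, LeaveOneOut

rng = np.random.default_rng(0)
n_samples, n_features, n_repeats = 20, 5000, 20

scores = []
for _ in range(n_repeats):
    X = rng.normal(size=(n_samples, n_features))   # pure noise features
    y = np.array([0, 1] * (n_samples // 2))        # labels carry no signal
    clf = SVC(kernel="linear", C=1e6)              # ~hard-margin linear SVM
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    scores.append(acc)

scores = np.array(scores)
print(f"LOO-CV accuracy on noise: mean={scores.mean():.2f}, "
      f"min={scores.min():.2f}, max={scores.max():.2f}  (chance = 0.50)")
```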
Joint Pattern Recognition Symposium | 2003
Amir Madany Mamlouk; Jan T. Kim; Erhardt Barth; Michael Brauckmann; Thomas Martinetz
If a simple and fast solution for one-class classification is required, the most common approach is to assume a Gaussian distribution for the patterns of the single class. Bayesian classification then leads to a simple template matching. In this paper we show for two very different applications that the classification performance can be improved significantly if a more uniform subgaussian instead of a Gaussian class distribution is assumed. One application is face detection, the other is the detection of transcription factor binding sites on a genome. As for the Gaussian, the distance from a template, i.e., the distribution center, determines a pattern’s class assignment. However, depending on the distribution assumed, maximum likelihood learning leads to different templates from the training data. These new templates lead to significant improvements of the classification performance.
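A minimal sketch of the template-matching idea, assuming synthetic single-class data: under a Gaussian assumption the maximum-likelihood template is the component-wise mean, while under an increasingly uniform subgaussian assumption it moves toward the component-wise midrange (used here as the limiting case). The data and acceptance threshold are placeholders.

```python
# Minimal sketch of distribution-dependent template matching. Under a Gaussian
# assumption the maximum-likelihood template is the component-wise mean; under
# an increasingly uniform (subgaussian) assumption it moves toward the
# component-wise midrange (taken here as the limiting case of a generalized
# Gaussian). Data, threshold, and this simplification are illustrative.
import numpy as np

rng = np.random.default_rng(0)
# Placeholder "single class" training patterns (e.g. vectorized image patches).
train = rng.uniform(-1.0, 1.0, size=(500, 64))

template_gauss = train.mean(axis=0)                            # ML center, Gaussian
template_sub = 0.5 * (train.min(axis=0) + train.max(axis=0))   # midrange limit

def classify(pattern, template, threshold):
    """Accept a pattern as a class member if it lies close enough to the template."""
    return np.linalg.norm(pattern - template) <= threshold

# Threshold chosen so that 95% of the training patterns are accepted.
threshold = np.percentile(np.linalg.norm(train - template_sub, axis=1), 95)
probe = rng.uniform(-1.0, 1.0, size=64)
print("accepted:", classify(probe, template_sub, threshold))
```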
Brazilian Symposium on Computer Graphics and Image Processing | 2006
Thomas Martinetz; Amir Madany Mamlouk; Cicero Mota
The incremental Badoiu-Clarkson algorithm finds the smallest ball enclosing n points in d dimensions with at least O(1/√t) precision after t iteration steps. The extremely simple incremental step of the algorithm makes it very attractive both for theoreticians and practitioners. A simplified proof of this convergence is given. This proof allows us to show that the precision increases, in fact, even as O(u/t) with the number of iteration steps. Computer experiments, but not yet a proof, suggest that u, which depends only on the data instance, is actually bounded by min{√(2d), √(2n)}. If this holds, then the algorithm finds the smallest enclosing ball with ε precision in at most O(nd√(d_m)/ε) time, with d_m = min{d, n}.
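The incremental step itself fits in a few lines; the sketch below assumes random data and a fixed iteration budget and does not attempt to verify the convergence constants quoted in the abstract.

```python
# Sketch of the incremental Badoiu-Clarkson step described above: repeatedly
# move the current center a shrinking fraction of the way toward the farthest
# point. Data and iteration count are arbitrary illustrative choices.
import numpy as np

def badoiu_clarkson(points: np.ndarray, n_iter: int = 1000) -> np.ndarray:
    """Approximate the center of the smallest ball enclosing `points` (n x d)."""
    center = points[0].copy()
    for t in range(1, n_iter + 1):
        # The farthest point from the current center gives the update direction.
        farthest = points[np.argmax(np.linalg.norm(points - center, axis=1))]
        # Extremely simple incremental step with step size 1/(t+1).
        center += (farthest - center) / (t + 1)
    return center

rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 10))
c = badoiu_clarkson(pts)
print("approx. enclosing radius:", np.linalg.norm(pts - c, axis=1).max())
```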
International Conference on Artificial Neural Networks | 2014
Norman Scheel; Catie Chang; Amir Madany Mamlouk
Recently, a new technique called multiband imaging was introduced that allows extremely low repetition times for functional magnetic resonance imaging (fMRI). As these ultra-fast imaging scans can increase the Nyquist rate by an order of magnitude, there are many new effects that have to be accounted for. As more frequencies can now be sampled directly, we want to analyze especially those that are due to physiological noise, such as cardiac and respiratory signals. Here, we adapted RETROICOR [4] to handle multiband fMRI data. We show the importance of physiological noise regression for standard temporal resolution fMRI and compare it to the high temporal resolution case. Our results show that, especially for multiband fMRI scans, it is of the utmost importance to apply physiological noise regression, as residuals of these noises are clearly detectable in non-noise independent components if no prior physiological noise regression has been applied.
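As a rough illustration of RETROICOR-style correction, the sketch below builds a low-order Fourier basis from cardiac and respiratory phases and regresses it out of a synthetic voxel time series with ordinary least squares; the phases, TR, and signal are made up, and the multiband-specific adaptations from the paper are not reproduced.

```python
# Sketch of RETROICOR-style regression: build a low-order Fourier basis from
# cardiac and respiratory phase at each scan time and regress it out of a
# voxel time series with ordinary least squares. Phases here are synthetic.
import numpy as np

def retroicor_regressors(cardiac_phase, resp_phase, order=2):
    """Fourier expansion of the physiological phases (Glover-style design matrix)."""
    cols = [np.ones_like(cardiac_phase)]
    for m in range(1, order + 1):
        for phase in (cardiac_phase, resp_phase):
            cols.append(np.cos(m * phase))
            cols.append(np.sin(m * phase))
    return np.column_stack(cols)

def remove_physio_noise(voxel_ts, design):
    """Subtract the least-squares fit of the physiological regressors."""
    beta, *_ = np.linalg.lstsq(design, voxel_ts, rcond=None)
    return voxel_ts - design @ beta

rng = np.random.default_rng(0)
n_scans = 600                                 # short TR -> many volumes
t = np.arange(n_scans) * 0.5                  # e.g. TR = 0.5 s (assumed multiband TR)
cardiac = (2 * np.pi * 1.0 * t) % (2 * np.pi)   # ~60 bpm, synthetic phase
resp = (2 * np.pi * 0.25 * t) % (2 * np.pi)     # ~15 breaths/min, synthetic phase

X = retroicor_regressors(cardiac, resp)
voxel = 0.8 * np.cos(cardiac) + 0.3 * np.sin(resp) + rng.normal(scale=0.2, size=n_scans)
cleaned = remove_physio_noise(voxel, X)
print("variance before/after:", voxel.var().round(3), cleaned.var().round(3))
```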
International Conference on Artificial Neural Networks | 2012
Henry Schütze; Thomas Martinetz; Silke Anders; Amir Madany Mamlouk
Modern functional brain imaging methods (e.g. functional magnetic resonance imaging, fMRI) produce large amounts of data. To adequately describe the underlying neural processes, data analysis methods are required that are capable of mapping changes of high-dimensional spatio-temporal patterns over time. In this paper, we introduce Multivariate Principal Subspace Entropy (MPSE), a multivariate entropy approach that estimates the spatio-temporal complexity of fMRI time series. In a temporally sliding window, MPSE measures the differential entropy of an assumed multivariate Gaussian density, with parameters that are estimated from low-dimensional principal subspace projections of the fMRI images. First, we apply MPSE to simulated time series to test how reliably it can differentiate between state phases that differ only in their intrinsic dimensionality. Second, we apply MPSE to real-world fMRI data of subjects who were scanned during an emotional task. Our findings suggest that MPSE might be a valid descriptor of the spatio-temporal complexity of brain states.
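A compact sketch of the MPSE computation as described above, assuming synthetic data in place of fMRI images: within each sliding window the data are projected onto a k-dimensional principal subspace and the differential entropy of a fitted Gaussian, 0.5·log((2πe)^k·det Σ), is evaluated. Window length and subspace dimension are arbitrary choices here.

```python
# Sketch of the MPSE idea: sliding-window principal-subspace projection followed
# by the differential entropy of a fitted multivariate Gaussian,
# H = 0.5 * (k * log(2*pi*e) + log det Sigma). Data and parameters are synthetic.
import numpy as np
from sklearn.decomposition import PCA

def mpse(timeseries: np.ndarray, window: int = 30, k: int = 5) -> np.ndarray:
    """Sliding-window principal-subspace entropy of a (time x voxels) array."""
    entropies = []
    for start in range(timeseries.shape[0] - window + 1):
        segment = timeseries[start:start + window]
        # Project the window onto its k-dimensional principal subspace.
        proj = PCA(n_components=k).fit_transform(segment)
        cov = np.cov(proj, rowvar=False) + 1e-9 * np.eye(k)  # small regularization
        sign, logdet = np.linalg.slogdet(cov)
        entropies.append(0.5 * (k * np.log(2 * np.pi * np.e) + logdet))
    return np.array(entropies)

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 400))   # synthetic stand-in for fMRI time series
print(mpse(data)[:5])
```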
Bildverarbeitung für die Medizin | 2011
Tim Becker; Daniel H. Rapoport; Amir Madany Mamlouk
Reliable analysis of adult stem cell populations in in vitro experiments still poses a problem on the way to fully understanding the regulatory mechanisms of these cultures. However, it is essential for the use of cultivated endogenous cells in stem cell therapies. One crucial feature of automated analysis is clearly the robust detection of mitotic events. In this work, we use the fully labeled stem cell benchmark data set CeTReS I to evaluate different approaches to mitosis detection: a purely timeline-based approach; a feature-based motility detector; and a detector based on changes in cell morphology, for which we also propose an adaptive version. We demonstrate that the approach based on morphological change outperforms the static detectors. However, the set of optimal features changes over time, and thus it is not surprising that a feature set adapted to the system's confluency shows the best performance.
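A deliberately simplified illustration of morphology-based mitosis detection: flag frames where a cell's area and circularity change abruptly, since cells typically round up before dividing. The features, thresholds, and synthetic track below are hypothetical and are not the detectors evaluated on CeTReS I.

```python
# Simplified illustration of morphology-based mitosis detection: flag frames
# where a cell's shape features (here area and circularity) change abruptly.
# Features and thresholds are hypothetical choices for illustration.
import numpy as np

def detect_mitosis(area: np.ndarray, circularity: np.ndarray,
                   d_area: float = 0.3, d_circ: float = 0.2) -> np.ndarray:
    """Return frame indices where both the relative area change and the
    circularity increase exceed their thresholds."""
    rel_area_change = np.abs(np.diff(area)) / area[:-1]
    circ_change = np.diff(circularity)
    candidates = (rel_area_change > d_area) & (circ_change > d_circ)
    return np.where(candidates)[0] + 1  # index of the frame after the jump

# Synthetic single-cell track: the cell rounds up and shrinks at frame 5.
area = np.array([100, 102, 99, 101, 100, 60, 62, 61], dtype=float)
circ = np.array([0.60, 0.60, 0.62, 0.61, 0.60, 0.95, 0.90, 0.90])
print("mitosis candidates at frames:", detect_mitosis(area, circ))
```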
International Conference on Artificial Neural Networks | 2010
Ingrid Brænne; Kai Labusch; Amir Madany Mamlouk
Genome-wide association (GWA) studies provide large amounts of high-dimensional data. GWA studies aim to identify variables that increase the risk for a given phenotype. Univariate examinations have provided some insights, but it appears that most diseases are affected by interactions of multiple factors, which can only be identified through a multivariate analysis. However, multivariate analysis of the discrete, high-dimensional and low-sample-size GWA data is made more difficult by the presence of random effects and nonspecific coupling. In this work, we investigate the suitability of three standard techniques (p-values, SVM, PCA) for analyzing GWA data on several simulated datasets. We compare these standard techniques against a sparse coding approach and demonstrate that sparse coding clearly outperforms the other approaches and can identify interacting factors in far higher-dimensional datasets than the other three approaches.
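To indicate the kind of sparse feature selection involved, the sketch below applies a generic sparse recovery step (orthogonal matching pursuit) to simulated genotype data with an interaction-driven phenotype; this is a stand-in under stated assumptions, not the specific sparse-coding method evaluated in the paper.

```python
# Illustrative sketch only: a generic sparse recovery step (orthogonal matching
# pursuit) on simulated genotype data, showing the kind of feature selection a
# sparse-coding approach performs. Data, sizes, and causal SNPs are made up.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_samples, n_snps = 100, 2000
genotypes = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)  # 0/1/2 alleles

# Phenotype driven by an interaction of two SNPs plus noise (hypothetical pair).
causal = (7, 42)
phenotype = (genotypes[:, causal[0]] * genotypes[:, causal[1]]
             + rng.normal(scale=0.5, size=n_samples))

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(genotypes, phenotype)
selected = np.flatnonzero(omp.coef_)
print("selected SNP indices:", selected, " causal pair:", causal)
```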