Annahita Oswald
Ludwig Maximilian University of Munich
Publications
Featured research published by Annahita Oswald.
NeuroImage | 2010
Claudia Plant; Stefan J. Teipel; Annahita Oswald; Christian Böhm; Thomas Meindl; Janaina Mourão-Miranda; Arun W. Bokde; Harald Hampel; Michael Ewers
Subjects with mild cognitive impairment (MCI) are at increased risk of developing Alzheimer's disease (AD). Voxel-based MRI studies have demonstrated that widely distributed cortical and subcortical brain areas show atrophic changes in MCI, preceding the onset of AD-type dementia. Here we developed a novel data mining framework in combination with three different classifiers, including support vector machine (SVM), Bayes statistics, and voting feature intervals (VFI), to derive a quantitative index of pattern matching for the prediction of the conversion from MCI to AD. MRI was collected in 32 AD patients, 24 MCI subjects and 18 healthy controls (HC). Nine of the 24 MCI subjects converted to AD after an average follow-up interval of 2.5 years. Using feature selection algorithms, brain regions showing the highest accuracy for the discrimination between AD and HC were identified, reaching a classification accuracy of up to 92%. The extracted AD clusters were used as a search region to extract those brain areas that are predictive of conversion to AD within MCI subjects. The most predictive brain areas included the anterior cingulate gyrus and orbitofrontal cortex. The best prediction accuracy, cross-validated via train-and-test, was 75% for the prediction of the conversion from MCI to AD. The present results suggest that novel multivariate methods of pattern matching reach a clinically relevant accuracy for the a priori prediction of the progression from MCI to AD.
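The general scheme described above — feature selection followed by a cross-validated classifier — can be sketched as follows. This is an illustrative reconstruction, not the study's actual code: the synthetic data, the choice of `SelectKBest` with 20 features, and the linear-kernel SVM are all assumptions made for the example.

```python
# Hypothetical sketch of a train-and-test scheme: univariate feature
# selection on voxel features, then an SVM predicting conversion.
# Data dimensions and parameters are illustrative, not the study's.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(24, 500))      # 24 subjects, 500 voxel features (made up)
y = np.array([1] * 9 + [0] * 15)    # 9 converters, 15 non-converters

# Feature selection sits inside the pipeline so it is re-fit on each
# training fold -- selecting features on the full set would leak labels.
clf = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),
    ("svm", SVC(kernel="linear")),
])
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

Keeping the selection step inside the pipeline is the important design point: it ensures the reported accuracy is genuinely out-of-sample.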
PLOS ONE | 2013
Martin Dyrba; Michael Ewers; Martin Wegrzyn; Ingo Kilimann; Claudia Plant; Annahita Oswald; Thomas Meindl; Michela Pievani; Arun L.W. Bokde; Andreas Fellgiebel; Massimo Filippi; Harald Hampel; Stefan Klöppel; Karlheinz Hauenstein; Thomas Kirste; Stefan J. Teipel
Diffusion tensor imaging (DTI) based assessment of white matter fiber tract integrity can support the diagnosis of Alzheimer's disease (AD). The use of DTI as a biomarker, however, depends on its applicability in a multicenter setting, accounting for effects of different MRI scanners. We applied multivariate machine learning (ML) to a large multicenter sample from the recently created framework of the European DTI study on Dementia (EDSD). We hypothesized that ML approaches may compensate for effects of multicenter acquisition. We included a sample of 137 patients with clinically probable AD (MMSE 20.6±5.3) and 143 healthy elderly controls, scanned in nine different scanners. For diagnostic classification we used the DTI indices fractional anisotropy (FA) and mean diffusivity (MD) and, for comparison, gray matter and white matter density maps from anatomical MRI. Data were classified using a Support Vector Machine (SVM) and a Naïve Bayes (NB) classifier. We used two cross-validation approaches: (i) test and training samples randomly drawn from the entire data set (pooled cross-validation) and (ii) data from each scanner as test set, and the data from the remaining scanners as training set (scanner-specific cross-validation). In the pooled cross-validation, SVM achieved an accuracy of 80% for FA and 83% for MD. Accuracies for NB were significantly lower, ranging between 68% and 75%. Removing variance components arising from scanners using principal component analysis did not significantly change the classification results for either classifier. For the scanner-specific cross-validation, the classification accuracy was reduced for both SVM and NB. After mean correction, classification accuracy reached a level comparable to the results obtained from the pooled cross-validation.
Our findings support the notion that machine learning classification allows robust classification of DTI data sets arising from multiple scanners, even if a new data set comes from a scanner that was not part of the training sample.
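The scanner-specific cross-validation described above amounts to a leave-one-scanner-out loop with a per-scanner mean correction. The sketch below illustrates that idea on synthetic data; the scanner offsets, feature counts, and the exact form of the mean correction are assumptions for the example, not the EDSD pipeline.

```python
# Illustrative leave-one-scanner-out cross-validation: each scanner's
# data is held out in turn, after removing each scanner's own feature
# means so the classifier sees scanner-independent contrasts.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_scanners, per_scanner, n_feat = 4, 30, 50
scanner = np.repeat(np.arange(n_scanners), per_scanner)
y = np.tile([0, 1], n_scanners * per_scanner // 2)
X = rng.normal(size=(len(y), n_feat)) + y[:, None] * 0.8   # class effect
X += rng.normal(size=(n_scanners, n_feat))[scanner] * 2.0  # scanner offsets

# Mean correction: subtract each scanner's own feature means.
# This needs no diagnostic labels, so it is legal on the test scanner too.
means = np.array([X[scanner == s].mean(axis=0) for s in range(n_scanners)])
Xc = X - means[scanner]

accs = []
for s in range(n_scanners):
    train, test = scanner != s, scanner == s
    clf = SVC(kernel="linear").fit(Xc[train], y[train])
    accs.append(clf.score(Xc[test], y[test]))
print(f"mean leave-one-scanner-out accuracy: {np.mean(accs):.2f}")
```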
mining and learning with graphs | 2010
Bianca Wackersreuther; Peter Wackersreuther; Annahita Oswald; Christian Böhm; Karsten M. Borgwardt
In many application domains, graphs are utilized to model entities and their relationships, and graph mining is important to detect patterns within these relationships. While the majority of existing data mining techniques deal with static graphs that do not change over time, recent years have witnessed the advent of an increasing number of time series of graphs. In this paper, we define a novel framework to perform frequent subgraph discovery in dynamic networks. In particular, we consider dynamic graphs with edge insertions and edge deletions over time. Existing subgraph mining algorithms can be easily integrated into our framework to make them handle dynamic graphs. Finally, an extensive experimental evaluation on a large real-world case study confirms the practical feasibility of our approach.
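The core notion — counting in how many snapshots of an evolving graph a subgraph occurs — can be illustrated on a toy example. This is a hedged sketch of the general idea, not the paper's framework: the snapshots are made up, and the brute-force enumeration stands in for a real subgraph miner.

```python
# Frequent edge-set counting over a dynamic graph: the graph evolves
# via edge insertions and deletions, and we count in how many snapshots
# each edge set occurs (its support).
from collections import Counter
from itertools import combinations

# Snapshots of an evolving graph, as frozensets of undirected edges.
snapshots = [
    frozenset({("a", "b"), ("b", "c")}),
    frozenset({("a", "b"), ("b", "c"), ("c", "d")}),  # insertion of (c, d)
    frozenset({("a", "b"), ("c", "d")}),              # deletion of (b, c)
]

support = Counter()
for snap in snapshots:
    for size in range(1, len(snap) + 1):
        for sub in combinations(sorted(snap), size):
            support[frozenset(sub)] += 1

min_support = 2
frequent = {s for s, c in support.items() if c >= min_support}
print(sorted(len(s) for s in frequent))
```

Real subgraph miners avoid this exponential enumeration by pruning with the anti-monotonicity of support: a subgraph can only be frequent if all of its subgraphs are.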
conference on information and knowledge management | 2009
Christian Böhm; Frank Fiedler; Annahita Oswald; Claudia Plant; Bianca Wackersreuther
The ability to deal with uncertain information is becoming increasingly important for modern database applications. Whereas a conventional (certain) object is usually represented by a vector from a multidimensional feature space, an uncertain object is represented by a multivariate probability density function (PDF). This PDF can be defined either discretely (e.g. by a histogram) or continuously in parametric form (e.g. by a Gaussian Mixture Model). For a database of uncertain objects, users expect similar data analysis techniques as for a conventional database of certain objects. An important analysis technique for certain objects is the skyline operator, which finds maximal or minimal vectors with respect to any possible attribute weighting. In this paper, we propose the concept of probabilistic skylines, an extension of the skyline operator for uncertain objects. In addition, we propose efficient and effective methods for determining the probabilistic skyline of uncertain objects which are defined by a PDF in parametric form (e.g. a Gaussian function or a Gaussian Mixture Model). To further accelerate the search, we elaborate how the computation of the probabilistic skyline can be supported by an index structure for uncertain objects. An extensive experimental evaluation demonstrates both the effectiveness and the efficiency of our technique.
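The notion of a probabilistic skyline can be made concrete with a small Monte-Carlo sketch: each uncertain object is a Gaussian, and we estimate the probability that an instantiation of it is dominated by no instantiation of any other object. The object positions, the spherical-Gaussian assumption, and the sampling scheme are illustrative only; the paper's actual methods work directly on the parametric PDFs.

```python
# Monte-Carlo estimate of skyline probabilities for uncertain objects,
# each modeled as a spherical Gaussian in 2-D (minimization in both
# attributes). Each of the n trials draws one "possible world".
import numpy as np

rng = np.random.default_rng(2)
means = np.array([[1.0, 1.0], [2.0, 2.0], [1.0, 3.0]])
sigma, n = 0.2, 2000

def dominates(a, b):
    """a dominates b if a is <= in every attribute and < in at least one."""
    return np.all(a <= b, axis=-1) & np.any(a < b, axis=-1)

samples = means[:, None, :] + rng.normal(scale=sigma, size=(len(means), n, 2))
sky_prob = []
for i in range(len(means)):
    dominated = np.zeros(n, dtype=bool)
    for j in range(len(means)):
        if j != i:
            dominated |= dominates(samples[j], samples[i])
    sky_prob.append(1.0 - dominated.mean())
print([round(p, 2) for p in sky_prob])
```

The object at (1, 1) is almost never dominated, the one at (2, 2) almost always is, and the one at (1, 3) ends up with an intermediate skyline probability — exactly the graded answer that makes the probabilistic skyline richer than the certain one.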
Datenschutz und Datensicherheit - DuD | 2011
Christoph Busch; Ulrike Korte; Sebastian Abt; Christian Böhm; Ines Färber; Sergej Fries; Johannes Merkle; Claudia Nickel; Alexander Nouak; Alexander Opel; Annahita Oswald; Thomas Seidl; Bianca Wackersreuther; Peter Wackersreuther; Xuebing Zhou
Biometric systems are technically mature and today achieve recognition performance that was out of reach only 10 years ago. However, widespread adoption of biometric authentication methods has been slowed by concerns about the necessary protection of reference data. Secure and privacy-friendly processing of biometric data becomes possible when template protection schemes are used. These schemes were examined in a scientific study (BioKeyS-Pilot-DB Teil 2) of the German Federal Office for Information Security (BSI). This article reports on the results of the project. It shows how mechanisms for protecting biometric data can be combined with additional information, e.g. passwords, and how the schemes can also be applied in identification systems.
medical image computing and computer assisted intervention | 2012
Martin Dyrba; Michael Ewers; Martin Wegrzyn; Ingo Kilimann; Claudia Plant; Annahita Oswald; Thomas Meindl; Michela Pievani; Arun L.W. Bokde; Andreas Fellgiebel; Massimo Filippi; Harald Hampel; Stefan Klöppel; Karlheinz Hauenstein; Thomas Kirste; Stefan J. Teipel
Diffusion tensor imaging (DTI) allows assessing neuronal fiber tract integrity in vivo to support the diagnosis of Alzheimer's disease (AD). It is an open research question to what extent combinations of different neuroimaging techniques increase the detection of AD. In this study we examined different methods to combine DTI data and structural T1-weighted magnetic resonance imaging (MRI) data. Further, we applied machine learning techniques for automated detection of AD. We used a sample of 137 patients with clinically probable AD (MMSE 20.6±5.3) and 143 healthy elderly controls, scanned in nine different scanners, obtained from the recently created framework of the European DTI study on Dementia (EDSD). For diagnostic classification we used the DTI derived indices fractional anisotropy (FA) and mean diffusivity (MD) as well as grey matter density (GMD) and white matter density (WMD) maps from anatomical MRI. We performed voxel-based classification using a Support Vector Machine (SVM) classifier with tenfold cross-validation. We compared the results from each single modality with those from different approaches to combine the modalities. For our sample, combining modalities did not increase the detection rates of AD. An accuracy of approximately 89% was reached for GMD data alone and for multimodal classification when GMD was included. This high accuracy remained stable across each of the approaches. As our sample consisted of mildly to moderately affected patients, cortical atrophy may be far advanced, so that the decline in structural network connectivity derived from DTI may not add information relevant for the SVM classification. This may be different for predementia stages of AD. Further research will focus on multimodal detection of AD in predementia stages, e.g. in amnestic mild cognitive impairment (aMCI), and on evaluating the classification performance when adding other modalities, e.g. functional MRI or FDG-PET.
european conference on machine learning | 2010
Christian Böhm; Frank Fiedler; Annahita Oswald; Claudia Plant; Bianca Wackersreuther; Peter Wackersreuther
Hierarchical clustering methods are widely used in various scientific domains such as molecular biology, medicine, economy, etc. Despite the maturity of the research field of hierarchical clustering, we have identified the following four goals which are not yet fully satisfied by previous methods: First, to guide the hierarchical clustering algorithm to identify only meaningful and valid clusters. Second, to represent each cluster in the hierarchy by an intuitive description, e.g. by a probability density function. Third, to consistently handle outliers. And finally, to avoid difficult parameter settings. With ITCH, we propose a novel clustering method that is built on a hierarchical variant of the information-theoretic principle of Minimum Description Length (MDL), referred to as hMDL. Interpreting the hierarchical cluster structure as a statistical model of the data set, it can be used for effective data compression by Huffman coding. Thus, the achievable compression rate induces a natural objective function for clustering, which automatically satisfies all four of the goals mentioned above.
knowledge discovery and data mining | 2010
Christian Böhm; Sebastian Goebl; Annahita Oswald; Claudia Plant; Michael Plavinski; Bianca Wackersreuther
Integrative mining of heterogeneous data is one of the major challenges for data mining in the next decade. We address the problem of integrative clustering of data with mixed type attributes. Most existing solutions suffer from one or both of the following drawbacks: Either they require input parameters which are difficult to estimate, and/or they do not adequately support mixed type attributes. Our technique INTEGRATE is a novel clustering approach that truly integrates the information provided by heterogeneous numerical and categorical attributes. Originating from information theory, the Minimum Description Length (MDL) principle allows a unified view on numerical and categorical information and thus naturally balances the influence of both sources of information in clustering. Moreover, supported by the MDL principle, parameter-free clustering can be performed, which enhances the usability of INTEGRATE on real world data. Extensive experiments demonstrate the effectiveness of INTEGRATE in exploiting numerical and categorical information for clustering. As an efficient iterative algorithm, INTEGRATE is scalable to large data sets.
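The MDL idea behind this kind of parameter-free mixed-type clustering can be sketched as follows: the coding cost of a clustering is the bits needed for the numerical attributes (under a Gaussian code) plus the bits for the categorical attributes (under an entropy code), and the partition with the lowest total cost wins. This is an illustration of the principle, not the INTEGRATE algorithm itself, and it omits the cost of encoding the model parameters.

```python
# MDL-style coding cost for one cluster of (numeric, categorical) rows:
# Gaussian code length for the numeric part, entropy code length for
# the categorical part. Lower total cost = better clustering.
import math

def numeric_cost(values):
    """Bits to encode values under a Gaussian fitted to them."""
    n = len(values)
    mu = sum(values) / n
    var = sum((v - mu) ** 2 for v in values) / n + 1e-9
    return sum(0.5 * math.log2(2 * math.pi * var) +
               ((v - mu) ** 2 / (2 * var)) / math.log(2) for v in values)

def categorical_cost(values):
    """Bits to encode values under their empirical distribution."""
    n = len(values)
    counts = {v: values.count(v) for v in set(values)}
    return -sum(c * math.log2(c / n) for c in counts.values())

def cost(cluster):
    nums, cats = [r[0] for r in cluster], [r[1] for r in cluster]
    return numeric_cost(nums) + categorical_cost(cats)

data = [(1.0, "a"), (1.2, "a"), (0.9, "a"),
        (5.0, "b"), (5.3, "b"), (4.8, "b")]
one_cluster = cost(data)
two_clusters = cost(data[:3]) + cost(data[3:])
print(two_clusters < one_cluster)
```

Splitting the data into its two natural groups shrinks both the numeric variance and the categorical entropy, so the two-cluster coding cost comes out lower — no distance threshold or cluster count had to be supplied.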
knowledge discovery and data mining | 2010
Christian Böhm; Annahita Oswald; Claudia Plant; Michael Plavinski; Bianca Wackersreuther
The skyline operator is a well-established database primitive which is traditionally applied to compute only a single skyline. In this paper we use multiple skylines themselves as objects for data exploration and data mining. We define a novel similarity measure for comparing different skylines, called SkyDist. SkyDist can be used for complex analysis tasks such as clustering, classification, outlier detection, etc. We propose two different algorithms for computing SkyDist, based on Monte-Carlo sampling and on the plane sweep paradigm. In an extensive experimental evaluation, we demonstrate the efficiency and usefulness of SkyDist for a number of applications and data mining methods.
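A Monte-Carlo approach to comparing skylines can be illustrated by measuring the volume of the region dominated by one skyline but not the other. This is a sketch in the spirit of SkyDist, not its actual definition: the two skylines, the unit-square sampling domain, and the symmetric-difference measure are assumptions made for the example.

```python
# Monte-Carlo estimate of a distance between two 2-D skylines: sample
# the unit square and measure the volume where the dominated regions
# of the two skylines disagree (minimization in both attributes).
import numpy as np

rng = np.random.default_rng(3)

def dominated_region(skyline, pts):
    """True where a point is dominated by some skyline point."""
    return np.any(np.all(skyline[:, None, :] <= pts[None, :, :], axis=-1),
                  axis=0)

sky_a = np.array([[0.1, 0.8], [0.4, 0.4], [0.8, 0.1]])  # made-up skyline A
sky_b = np.array([[0.2, 0.9], [0.5, 0.5], [0.9, 0.2]])  # made-up skyline B

pts = rng.uniform(size=(20000, 2))
da, db = dominated_region(sky_a, pts), dominated_region(sky_b, pts)
dist = np.mean(da ^ db)  # volume of the symmetric difference
print(f"estimated distance: {dist:.3f}")
```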
database and expert systems applications | 2009
Annahita Oswald; Bianca Wackersreuther
Idiopathic chronic pain disorders constitute a large, clinically important health care problem that urgently needs deeper pathophysiological insight. Understanding which brain compartments are involved in such diseases is therefore a very interesting research topic in neurological medicine. In this paper, we apply an efficient algorithm for motif discovery to time series data of somatoform patients and healthy controls. We find groups of brain compartments that occur frequently within the brain networks and are characteristic of patients with somatoform disorder.
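Motif discovery in a time series — finding subsequences that recur often — can be illustrated with a minimal sketch. This is not the paper's algorithm: the signal, the three-symbol discretization, and the fixed window length are all assumptions for the example.

```python
# Minimal time-series motif discovery: discretize a signal into symbols
# (a SAX-like step) and count recurring fixed-length windows; the most
# frequent window is a candidate motif.
from collections import Counter

signal = [0.1, 0.5, 0.9, 0.5, 0.1, 0.5, 0.9, 0.5, 0.1, 0.2, 0.8, 0.5]
symbols = ["l" if v < 1/3 else "m" if v < 2/3 else "h" for v in signal]

w = 3  # window length
windows = ["".join(symbols[i:i + w]) for i in range(len(symbols) - w + 1)]
motif, count = Counter(windows).most_common(1)[0]
print(motif, count)
```

The repeating low-high oscillation at the start of the signal shows up as a window occurring more than once, which is what qualifies it as a motif candidate.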