Publication


Featured research published by Bassam Mokbel.


Neurocomputing | 2011

Relational generative topographic mapping

Andrej Gisbrecht; Bassam Mokbel; Barbara Hammer

The generative topographic mapping (GTM) has been proposed as a statistical model to represent high-dimensional data by a distribution induced by a sparse lattice of points in a low-dimensional latent space, such that visualization, compression, and data inspection become possible. The formulation in terms of a generative statistical model has the benefit that relevant parameters of the model can be determined automatically based on an expectation maximization scheme. Further, the model offers great flexibility, such as a direct out-of-sample extension and the possibility to obtain different degrees of granularity of the visualization without the need for additional training. The original GTM is restricted to data points in a given Euclidean vector space. Often, however, data are not explicitly embedded in a Euclidean vector space; rather, only pairwise dissimilarities of the data can be computed, i.e. the relations between data points are given rather than the data vectors themselves. We propose a method which extends the GTM to relational data and which allows us to achieve a sparse representation, in latent space, of data characterized by pairwise dissimilarities. The method, relational GTM, is demonstrated on several benchmarks.
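
The key idea behind such relational extensions can be sketched as follows: prototypes are represented implicitly as convex combinations of data points, so that dissimilarities between data points and prototypes can be computed from the pairwise dissimilarity matrix alone. The NumPy sketch below is an illustration of this standard relational identity, not the authors' implementation; `D` is assumed to be a symmetric matrix of pairwise dissimilarities.

```python
import numpy as np

def relational_prototype_dissimilarities(D, alpha):
    """Dissimilarities between data points and implicit prototypes.

    D     : (n, n) symmetric matrix of pairwise dissimilarities d(x_i, x_j)
    alpha : (k, n) coefficient matrix; each row sums to 1 and defines a
            prototype as a convex combination of the data points.

    Returns an (n, k) matrix whose entry (i, k) corresponds to
        d(x_i, w_k) = [D alpha_k]_i - 0.5 * alpha_k^T D alpha_k.
    """
    cross = D @ alpha.T                                            # (n, k): [D alpha_k]_i
    self_terms = 0.5 * np.einsum('ki,ij,kj->k', alpha, D, alpha)   # 0.5 * alpha_k^T D alpha_k
    return cross - self_terms                                      # broadcast over prototypes
```

In relational GTM, dissimilarities of this form take the place of squared Euclidean distances inside the EM scheme, so responsibilities and prototype coefficients can be updated without any explicit vectorial embedding.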


International Journal of Neural Systems | 2012

Linear Time Relational Prototype Based Learning

Andrej Gisbrecht; Bassam Mokbel; Frank-Michael Schleif; Xibin Zhu; Barbara Hammer

Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike their Euclidean counterparts, these techniques have quadratic time complexity due to the underlying quadratic-size dissimilarity matrix. Thus, they become infeasible already for medium-sized data sets. The contribution of this article is twofold: on the one hand, we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ); on the other hand, we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and to an unsupervised counterpart, the relational generative topographic mapping (GTM). In this way, methods with linear time and space complexity result. We evaluate the techniques on three examples from the biomedical domain.
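
The Nyström technique approximates the full dissimilarity or kernel matrix from a small set of landmark columns, which is what reduces the quadratic cost to linear in the number of data points. A minimal NumPy sketch follows; the random landmark choice and the use of a pseudo-inverse are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def nystroem_approximation(K, m, rng=np.random.default_rng(0)):
    """Low-rank Nystroem approximation of a symmetric (n, n) matrix K
    using m randomly chosen landmark columns.

    Returns factors (C, W_pinv) such that K is approximated by
    C @ W_pinv @ C.T, which never has to be formed explicitly:
    a product with a vector then costs only O(n * m).
    """
    n = K.shape[0]
    landmarks = rng.choice(n, size=m, replace=False)
    C = K[:, landmarks]                     # (n, m) sampled columns
    W = K[np.ix_(landmarks, landmarks)]     # (m, m) intersection block
    W_pinv = np.linalg.pinv(W)              # pseudo-inverse for numerical stability
    return C, W_pinv

# Usage: approximate K @ v without touching the full n x n matrix again
# C, W_pinv = nystroem_approximation(K, m=100)
# Kv_approx = C @ (W_pinv @ (C.T @ v))
```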


GbRPR '09 Proceedings of the 7th IAPR-TC-15 International Workshop on Graph-Based Representations in Pattern Recognition | 2009

Graph-Based Representation of Symbolic Musical Data

Bassam Mokbel; Alexander Hasenfuss; Barbara Hammer

In this work, we present a novel approach that utilizes a graph-based representation of symbolic musical data in the context of automatic topographic mapping. Melodic progressions are represented as graph structures, providing a dissimilarity measure which complies with the invariances in the human perception of melodies. That way, music collections can be processed by non-Euclidean variants of Neural Gas or Self-Organizing Maps for clustering, classification, or topographic mapping for visualization. We demonstrate the performance of the technique on several datasets of classical music.
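
One of the perceptual invariances mentioned above is invariance to transposition: a melody shifted by a constant number of semitones is heard as the same melody. The sketch below is not the paper's graph construction; it only illustrates, under that assumption, how comparing pitch intervals rather than absolute pitches yields a transposition-invariant dissimilarity.

```python
def interval_profile(pitches):
    """Represent a melody by successive pitch intervals (in semitones),
    which makes the representation invariant to transposition."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def melody_dissimilarity(p1, p2):
    """Naive dissimilarity: mismatch count on interval profiles padded to
    equal length. A real measure would align the graph representations."""
    i1, i2 = interval_profile(p1), interval_profile(p2)
    length = max(len(i1), len(i2))
    i1 += [0] * (length - len(i1))
    i2 += [0] * (length - len(i2))
    return sum(a != b for a, b in zip(i1, i2))

# The same melody transposed up by five semitones has dissimilarity 0:
# melody_dissimilarity([60, 62, 64, 65], [65, 67, 69, 70])  ->  0
```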


Neurocomputing | 2015

Metric learning for sequences in relational LVQ

Bassam Mokbel; Benjamin Paaßen; Frank-Michael Schleif; Barbara Hammer

Metric learning constitutes a well-investigated field for vectorial data with successful applications, e.g. in computer vision, information retrieval, or bioinformatics. One particularly promising approach is offered by low-rank metric adaptation integrated into modern variants of learning vector quantization (LVQ). This technique is scalable with respect to both data dimensionality and the number of data points, and it can be accompanied by strong guarantees of learning theory. Recent extensions of LVQ to general (dis-)similarity data have paved the way towards LVQ classifiers for non-vectorial, possibly discrete, structured objects such as sequences, which are addressed by classical alignment in bioinformatics applications. In this context, the choice of metric parameters plays a crucial role for the result, just as it does in the vectorial setting. In this contribution, we propose a metric learning scheme which allows for an autonomous learning of parameters (such as the underlying scoring matrix in sequence alignments) according to a given discriminative task in relational LVQ. Besides facilitating the often crucial and problematic choice of the scoring parameters in applications, this extension offers an increased interpretability of the results by pointing out structural invariances for the given task.
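
The scoring matrix that such a metric learning scheme adapts enters the classical global alignment recursion. Below is a minimal sketch of a parameterized alignment (Needleman-Wunsch style dynamic programming); the symbol set, the dictionary-based scoring, and the constant gap cost are illustrative assumptions, and the gradient-based adaptation of `scoring` within relational LVQ is not shown.

```python
import numpy as np

def alignment_dissimilarity(seq_a, seq_b, scoring, gap=1.0):
    """Global alignment cost between two symbol sequences.

    scoring[(a, b)] is the substitution cost for symbols a and b; these are
    the parameters that a metric learning scheme could adapt so that the
    resulting dissimilarities discriminate the given classes well.
    """
    n, m = len(seq_a), len(seq_b)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = gap * np.arange(n + 1)
    D[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(
                D[i - 1, j - 1] + scoring[(seq_a[i - 1], seq_b[j - 1])],  # substitution / match
                D[i - 1, j] + gap,                                        # gap in seq_b
                D[i, j - 1] + gap,                                        # gap in seq_a
            )
    return D[n, m]
```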


17th International Conference on Information Visualisation | 2013

Nonlinear Dimensionality Reduction for Cluster Identification in Metagenomic Samples

Andrej Gisbrecht; Barbara Hammer; Bassam Mokbel; Alexander Sczyrba

We investigate the potential of modern nonlinear dimensionality reduction techniques for interactive cluster detection in bioinformatics applications. We demonstrate that recent non-parametric techniques such as t-distributed stochastic neighbor embedding (t-SNE) allow a cluster identification which is superior to direct clustering of the original data or to cluster detection based on classical parametric dimensionality reduction approaches. Non-parametric approaches, however, display quadratic complexity, which makes them unsuitable for interactive use. As a speedup, we propose kernel t-SNE, a fast parametric counterpart of t-SNE.
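
As an illustration of the workflow described above (nonlinear dimensionality reduction followed by cluster identification), here is a brief scikit-learn sketch; the synthetic dataset, parameter values, and the use of k-means on the embedding are assumptions for demonstration, not the paper's metagenomic pipeline.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Synthetic high-dimensional data with cluster structure.
X, y_true = make_blobs(n_samples=500, n_features=50, centers=5, random_state=0)

# Cluster directly in the original space ...
labels_direct = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# ... versus clustering in a nonlinear 2D t-SNE embedding.
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X)
labels_tsne = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_2d)

print("ARI, direct clustering:", adjusted_rand_score(y_true, labels_direct))
print("ARI, clustering on t-SNE embedding:", adjusted_rand_score(y_true, labels_tsne))
```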


The International Journal of Learning | 2014

Example-based feedback provision using structured solution spaces

Sebastian Gross; Bassam Mokbel; Benjamin Paassen; Barbara Hammer; Niels Pinkwart

Intelligent tutoring systems (ITSs) typically rely on a formalised model of the underlying domain knowledge in order to provide feedback to learners adaptively to their needs. This approach implies two general drawbacks: the formalisation of a domain-specific model usually requires a huge effort, and in some domains it is not possible at all. In this paper, we propose feedback provision strategies in the absence of a formalised domain model, motivated by example-based learning approaches. We demonstrate the feasibility and effectiveness of these strategies in several studies with experts and students. We discuss how, in a set of solutions, appropriate examples can be automatically identified and assigned to given student solutions via machine learning techniques in conjunction with an underlying dissimilarity metric. The plausibility of such an automatic selection is evaluated in an expert survey, while possible choices for domain-agnostic dissimilarity measures are tested on real solution sets of Java programs. The quantitative evidence suggests that the proposed feedback strategies and automatic example assignment are viable in principle; further user studies in large-scale learning environments are the subject of future research.
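
In its simplest form, the automatic example assignment mentioned above reduces to selecting the least dissimilar stored solution for a given student attempt. A hedged sketch; the dissimilarity function itself (e.g. a structural comparison of Java programs) is deliberately left abstract.

```python
def select_feedback_example(student_solution, example_pool, dissimilarity):
    """Return the stored example solution closest to the student's attempt.

    example_pool  : list of previously stored solutions (expert or peer)
    dissimilarity : callable(sol_a, sol_b) -> float, a domain-agnostic
                    structure metric used to compare solutions
    """
    return min(example_pool, key=lambda ex: dissimilarity(student_solution, ex))
```

In the studies, such a nearest-example step is combined with machine learning on top of the dissimilarity metric; the sketch shows only the final selection.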


International Symposium on Neural Networks | 2012

Linear basis-function t-SNE for fast nonlinear dimensionality reduction

Andrej Gisbrecht; Bassam Mokbel; Barbara Hammer

t-distributed stochastic neighbor embedding (t-SNE) constitutes a nonlinear dimensionality reduction technique which is particularly suited to visualize high-dimensional data sets with intrinsic nonlinear structure. A major drawback, however, consists in its quadratic complexity, which makes the technique infeasible for large data sets or online application in an interactive framework. In addition, since the technique is non-parametric, it offers no direct way to extend the mapping to novel data points. In this contribution, we propose an extension of t-SNE to an explicit mapping. In the limit, it reduces to standard non-parametric t-SNE, while offering a feasible nonlinear embedding function for other parameter choices. We evaluate the performance of the technique when trained on only a small subset of the given data. It turns out that its generalization ability is good when evaluated with the standard quality curve. Further, in many cases, it obtains a quality which approximates the quality of t-SNE trained on the full data set, even though only 10% of the data are used for training. This opens the way towards efficient nonlinear dimensionality reduction techniques as required in interactive settings.
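
An explicit mapping of this kind can be sketched as a linear combination of basis functions whose coefficients are fit so that training points land on their precomputed (non-parametric) t-SNE coordinates. The Gaussian basis functions, their centres, and the plain least-squares fit below are illustrative assumptions rather than the exact formulation in the paper.

```python
import numpy as np

def fit_basis_function_map(X_train, Y_tsne, centers, sigma):
    """Fit coefficients A of an explicit map y(x) = A^T phi(x), where phi
    consists of normalized Gaussian basis functions around given centers,
    by least squares against precomputed t-SNE coordinates Y_tsne."""
    Phi = _features(X_train, centers, sigma)            # (n, m)
    A, *_ = np.linalg.lstsq(Phi, Y_tsne, rcond=None)    # (m, 2)
    return A

def apply_basis_function_map(X_new, centers, sigma, A):
    """Embed previously unseen points with the learned explicit mapping."""
    return _features(X_new, centers, sigma) @ A

def _features(X, centers, sigma):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # squared distances to centers
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)                     # normalized responsibilities
```

Because the mapping is parametric, novel points can be embedded in constant time per point, and training the t-SNE part on a small subsample becomes feasible.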


Intelligent Data Analysis | 2011

Prototype-based classification of dissimilarity data

Barbara Hammer; Bassam Mokbel; Frank-Michael Schleif; Xibin Zhu

Unlike many black-box algorithms in machine learning, prototype-based models offer an intuitive interface to given data sets, since prototypes can directly be inspected by experts in the field. Most techniques rely on Euclidean vectors, such that their suitability for complex scenarios is limited. Recently, several unsupervised approaches have successfully been extended to general, possibly non-Euclidean data characterized by pairwise dissimilarities. In this paper, we briefly review a general approach to extend unsupervised prototype-based techniques to dissimilarities, and we transfer this approach to supervised prototype-based classification for general dissimilarity data. In particular, a new supervised prototype-based classification technique for dissimilarity data is proposed.
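
For the supervised case, classification only needs the dissimilarities between a data point and the implicit prototypes (which can be obtained from the pairwise dissimilarity matrix alone, as sketched above for relational GTM) together with the prototype labels; a GLVQ-style cost term is shown for orientation. This is a hedged illustration, not the paper's exact algorithm.

```python
import numpy as np

def classify(dissim_to_prototypes, prototype_labels):
    """Assign each data point the label of its closest (implicit) prototype.

    dissim_to_prototypes : (n, k) matrix of dissimilarities d(x_i, w_k)
    prototype_labels     : (k,) array of class labels
    """
    return prototype_labels[np.argmin(dissim_to_prototypes, axis=1)]

def glvq_costs(d_plus, d_minus):
    """GLVQ-style cost per point from the distance to the closest prototype
    of the correct class (d_plus) and of a wrong class (d_minus); values lie
    in (-1, 1), negative meaning a correct classification with a margin."""
    return (d_plus - d_minus) / (d_plus + d_minus)
```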


Neurocomputing | 2016

Adaptive structure metrics for automated feedback provision in intelligent tutoring systems

Benjamin Paaßen; Bassam Mokbel; Barbara Hammer

Typical intelligent tutoring systems rely on detailed domain knowledge which is hard to obtain and difficult to encode. As a data-driven alternative to explicit domain knowledge, one can present learners with feedback based on similar existing solutions from a set of stored examples. At the heart of such a data-driven approach is the notion of similarity. We present a general-purpose framework to construct structure metrics on sequential data and to adapt those metrics using machine learning techniques. We demonstrate that metric adaptation improves the classification of wrong versus correct learner attempts in a simulated data set from sports training, and the classification of the underlying learner strategy in a real Java programming dataset.
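
In such a framework, each sequence element typically carries several features, and the local comparison inside the alignment is a weighted combination of per-feature differences; the weights, together with the scoring parameters, are the quantities a learning scheme can adapt. A hedged sketch of the local comparison only; the feature interpretation and the weighting form are illustrative assumptions.

```python
import numpy as np

def local_dissimilarity(elem_a, elem_b, weights):
    """Weighted comparison of two sequence elements given as feature vectors.

    elem_a, elem_b : 1D feature vectors of equal length (e.g. syntactic and
                     lexical features of a line of code, or joint angles in
                     a motion frame)
    weights        : non-negative relevance weights, normalized to sum to 1;
                     these are the metric parameters a learner can adapt
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(w @ np.abs(np.asarray(elem_a) - np.asarray(elem_b)))
```

Plugging such a local term into the alignment recursion sketched earlier yields a structure metric on whole sequences whose parameters can be tuned for the classification task at hand.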


Intelligent Tutoring Systems | 2014

How to Select an Example? A Comparison of Selection Strategies in Example-Based Learning

Sebastian Gross; Bassam Mokbel; Barbara Hammer; Niels Pinkwart

In this paper, we investigate an Intelligent Tutoring System (ITS) for Java programming that implements an example-based learning approach. The approach does not require an explicit formalization of the domain knowledge but automatically identifies appropriate examples from a data set consisting of learners’ solution attempts and sample solution steps created by experts. In a field experiment conducted in an introductory course on Java programming, we examined four strategies for selecting appropriate examples for feedback provision and analyzed how learners’ solution attempts changed depending on the selection strategy. The results indicate that solutions created by experts are more beneficial for supporting learning than solution attempts of other learners, and that examples modeling steps of problem solving are more appropriate for absolute beginners than complete sample solutions.

Collaboration


Dive into Bassam Mokbel's collaborations.

Top Co-Authors

Niels Pinkwart, Humboldt University of Berlin
Sebastian Gross, Clausthal University of Technology
Wouter Lueks, University of Groningen
Alexander Hasenfuss, Clausthal University of Technology