Publications

Featured research published by Marwan Torki.


Empirical Methods in Natural Language Processing | 2014

Al-Bayan: An Arabic Question Answering System for the Holy Quran

Heba Abdelnasser; Maha Ragab; Reham Mohamed; Alaa Mohamed; Bassant Farouk; Nagwa M. El-Makky; Marwan Torki

Recently, Question Answering (QA) has been one of the main focuses of natural language processing research. However, Arabic Question Answering is still not in the mainstream. The challenges of the Arabic language and the lack of resources have made it difficult to build Arabic QA systems with high accuracy. While low accuracy may be acceptable for general-purpose systems, it is critical in some fields, such as religious affairs. Therefore, there is a need for specialized, accurate systems that target these critical fields. In this paper, we propose Al-Bayan, a new Arabic QA system specialized for the Holy Quran. The system accepts an Arabic question about the Quran, retrieves the most relevant Quran verses, and then extracts the passage that contains the answer from the Quran and its interpretation books (Tafseer). Evaluation results on a collected dataset show that the overall system achieves 85% accuracy using the top-3 results.
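The verse-retrieval step can be illustrated with plain TF-IDF ranking against the question. This is a hedged toy sketch only: the actual Al-Bayan system works on Arabic text with Quran-specific processing and Tafseer books, and the function name and English toy tokens here are hypothetical.

```python
import math
from collections import Counter

def tfidf_retrieve(question, verses, top_k=3):
    """Rank verses by TF-IDF cosine similarity to the question (toy sketch)."""
    docs = [v.split() for v in verses]
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))      # document frequency
    idf = {t: math.log(n / df[t]) for t in df}

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    def cos(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(x * x for x in a.values()))
        nb = math.sqrt(sum(x * x for x in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(question.split())
    ranked = sorted(range(n), key=lambda i: cos(q, vec(docs[i])), reverse=True)
    return ranked[:top_k]
```

A real system would return the top-k verses (here, their indices) to a downstream answer-extraction stage, mirroring the retrieve-then-extract pipeline described in the abstract.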


Workshop on Applications of Computer Vision | 2015

Real-Time Multi-scale Action Detection from 3D Skeleton Data

Amr Sharaf; Marwan Torki; Mohamed E. Hussein; Motaz El-Saban

In this paper we introduce a real-time system for action detection. The system uses a small set of robust features extracted from 3D skeleton data. Features are effectively described based on the probability distribution of skeleton data. The descriptor computes a pyramid of sample covariance matrices and mean vectors to encode the relationship between the features. For handling the intra-class variations of actions, such as action temporal scale variations, the descriptor is computed using different window scales for each action. Discriminative elements of the descriptor are mined using feature selection. The system achieves accurate detection results on difficult unsegmented sequences. Experiments on MSRC-12 and G3D datasets show that the proposed system outperforms the state-of-the-art in detection accuracy with very low latency. To the best of our knowledge, we are the first to propose using multi-scale description in action detection from 3D skeleton data.
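The pyramid of sample covariance matrices and mean vectors can be sketched roughly as below. Assumptions are flagged: `frames` is a T×d array of per-frame skeleton features, and the window layout and function name are illustrative, not the paper's exact descriptor.

```python
import numpy as np

def pyramid_cov_descriptor(frames, levels=2):
    """Sketch: concatenate mean vectors and sample covariance matrices
    computed over a temporal pyramid (whole sequence, then halves, ...)."""
    parts = []
    T, d = frames.shape
    for lvl in range(levels):
        n_win = 2 ** lvl                       # 1, 2, 4, ... windows per level
        for w in range(n_win):
            seg = frames[w * T // n_win:(w + 1) * T // n_win]
            mu = seg.mean(axis=0)              # mean vector of the window
            cov = np.cov(seg, rowvar=False)    # sample covariance of the window
            parts.append(mu)
            parts.append(cov[np.triu_indices(d)])  # covariance is symmetric
    return np.concatenate(parts)
```

Each window contributes d mean entries plus d(d+1)/2 covariance entries, so the descriptor length grows with the number of pyramid levels.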


Workshop on Applications of Computer Vision | 2016

Linear-time online action detection from 3D skeletal data using bags of gesturelets

Moustafa Meshry; Mohamed E. Hussein; Marwan Torki

The sliding window is one direct way to extend a successful recognition system to handle the more challenging detection problem. While action recognition decides only whether or not an action is present in a pre-segmented video sequence, action detection identifies the time interval where the action occurred in an unsegmented video stream. Sliding window approaches can, however, be slow, as they maximize a classifier score over all possible sub-intervals. Even though newer schemes utilize dynamic programming to speed up the search for the optimal sub-interval, they require offline processing of the whole video sequence. In this paper, we propose a novel approach for online action detection based on 3D skeleton sequences extracted from depth data. It identifies the sub-interval with the maximum classifier score in linear time. Furthermore, it is suitable for real-time applications with low latency.
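Finding the maximum-score sub-interval in linear time is in the spirit of the classic maximum-sum subarray scan (Kadane's algorithm). The sketch below illustrates that general idea over per-frame classifier scores; it is not the paper's exact method, and the function name is hypothetical.

```python
def best_interval(scores):
    """Kadane-style linear scan: return (start, end, total) of the
    contiguous sub-interval with maximum total score."""
    best_sum, best = float("-inf"), (0, 0)
    cur_sum, cur_start = 0.0, 0
    for i, s in enumerate(scores):
        if cur_sum <= 0:            # restart the interval at frame i
            cur_sum, cur_start = s, i
        else:                       # extend the current interval
            cur_sum += s
        if cur_sum > best_sum:
            best_sum, best = cur_sum, (cur_start, i)
    return best[0], best[1], best_sum
```

Because each frame is visited once, the scan is O(T) in the number of frames, which is what makes such a search attractive for low-latency online detection.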


North American Chapter of the Association for Computational Linguistics | 2015

Al-Bayan: A Knowledge-based System for Arabic Answer Selection

Reham Mohamed; Maha Ragab; Heba Abdelnasser; Nagwa M. El-Makky; Marwan Torki

This paper describes the Al-Bayan team's participation in SemEval-2015 Task 3, Subtask A. Task 3 targets semantic solutions for answer selection in community question answering systems. We propose a knowledge-based solution for answer selection of Arabic questions, specialized for Islamic sciences. We build a Semantic Interpreter to evaluate the semantic similarity between Arabic questions and answers using our Quranic ontology of concepts. Using supervised learning, we classify the candidate answers according to their relevance to users' questions. Results show that our system achieves 74.53% accuracy, which is comparable to the other participating systems.
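Concept-level similarity of the kind a semantic interpreter computes can be sketched in miniature: map words to ontology concepts, then score the overlap of the two concept sets. This is a hedged toy sketch; the actual Semantic Interpreter and Quranic ontology are far richer, and `concept_map`, the function name, and the English toy data are hypothetical.

```python
def concept_similarity(text_a, text_b, concept_map):
    """Sketch: represent each text as the set of ontology concepts its
    words map to, then score similarity by Jaccard overlap."""
    ca = {c for w in text_a.split() for c in concept_map.get(w, [])}
    cb = {c for w in text_b.split() for c in concept_map.get(w, [])}
    if not ca or not cb:
        return 0.0
    return len(ca & cb) / len(ca | cb)
```

Such similarity scores would then serve as features for a supervised classifier that ranks candidate answers by relevance, as the abstract describes.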


Workshop on Applications of Computer Vision | 2016

A multi-modal feature fusion framework for kinect-based facial expression recognition using Dual Kernel Discriminant Analysis (DKDA)

Sherin Aly; A. Lynn Abbott; Marwan Torki

We present a multi-modal feature fusion framework for Kinect-based Facial Expression Recognition (FER). The framework extracts and pre-processes 2D and 3D features separately. The types of 2D and 3D features are selected to maximize the accuracy of the system, with Histogram of Oriented Gradients (HOG) features for 2D data and statistically selected angles for 3D data giving the best performance. The sets of 2D and 3D features are reduced and later combined using a novel Dual Kernel Discriminant Analysis (DKDA) approach. Final classification is done using SVMs. The framework is benchmarked on a public Kinect-based FER dataset, which includes data for 32 subjects (in both frontal and non-frontal poses and at two expression intensities) and 6 basic expressions (plus neutral), namely: happiness, sadness, anger, disgust, fear, and surprise. Results show that the proposed combination of 2D and 3D features outperforms simpler existing combinations, as well as systems that use either 2D or 3D features only. The proposed system also outperforms Linear Discriminant Analysis (LDA)-transformed and traditional Kernel Discriminant Analysis (KDA)-transformed systems, with an average accuracy improvement of 10%. It also outperforms the state of the art by more than 13% in frontal poses.
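A common way to preserve per-modality structure in a kernel method is to build a separate kernel for each modality and combine them before the discriminant step. The sketch below illustrates only that general idea with summed RBF kernels; the actual DKDA formulation is given in the paper, and the function name and bandwidth parameters here are hypothetical.

```python
import numpy as np

def dual_kernel(X2d, X3d, gamma2d=1.0, gamma3d=1.0):
    """Sketch: combine per-modality RBF kernels by summation, so each
    modality keeps its own bandwidth before a kernel classifier."""
    def rbf(X, gamma):
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-gamma * sq)
    return rbf(X2d, gamma2d) + rbf(X3d, gamma3d)
```

The combined Gram matrix can then feed a kernel discriminant analysis or an SVM, keeping the 2D (HOG) and 3D (angle) modalities on separate scales.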


International Conference on Image Processing | 2015

Seeded Laplacian: An interactive image segmentation approach using eigenfunctions

Ahmed Taha; Marwan Torki

In this paper, we cast scribble-based interactive image segmentation as a semi-supervised learning problem. Our novel approach alleviates the need to solve an expensive generalized eigenvector problem by approximating the eigenvectors using more efficiently computed eigenfunctions. The smoothness operator defined on feature densities at the limit n → ∞ recovers the exact eigenvectors of the graph Laplacian, where n is the number of nodes in the graph. In our experiments, scribble annotation is applied: users label a few pixels as foreground and background to guide the foreground/background segmentation. Experiments are carried out on standard datasets containing a wide variety of natural images. We achieve better qualitative and quantitative results compared to state-of-the-art algorithms.
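For intuition, the graph-Laplacian formulation underlying such seeded segmentation can be written as a harmonic label-propagation problem: fix the seed labels and solve for the unlabeled nodes. This is a standard semi-supervised sketch of the exact formulation, not the paper's eigenfunction approximation, and the function name is hypothetical.

```python
import numpy as np

def propagate_labels(W, seeds):
    """Sketch: harmonic propagation on a graph with affinity matrix W.
    seeds maps node index -> label (0 = Bg, 1 = Fg); unlabeled nodes get
    the harmonic solution of  L_uu f_u = -L_ul f_l."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian
    labeled = sorted(seeds)
    unlabeled = [i for i in range(n) if i not in seeds]
    f_l = np.array([seeds[i] for i in labeled], dtype=float)
    f_u = np.linalg.solve(L[np.ix_(unlabeled, unlabeled)],
                          -L[np.ix_(unlabeled, labeled)] @ f_l)
    f = np.empty(n)
    f[labeled], f[unlabeled] = f_l, f_u
    return f                                       # threshold at 0.5 for Fg/Bg
```

The eigenfunction trick in the paper avoids forming and solving this system over all pixels, which is what makes the approach efficient on full-size images.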


International Conference on Pattern Recognition | 2014

Spatial-Visual Label Propagation for Local Feature Classification

Tarek El-Gaaly; Marwan Torki; Ahmed M. Elgammal

In this paper, we present a novel approach that integrates feature similarity and spatial consistency of local features to localize an object of interest in an image. The goal is to achieve coherent and accurate labeling of feature points in a simple and effective way. We introduce our Spatial-Visual Label Propagation algorithm to infer the labels of local features in a test image from known labels. This is done in a transductive manner to provide spatial and feature smoothing over the learned labels. We show the value of our novel approach in a diverse set of experiments, with improvements over previous methods and baseline classifiers.
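Transductive label propagation of this flavor is often realized as an iteration that diffuses labels over a normalized affinity graph while clamping the known labels. The sketch below shows that generic scheme; the paper's specific affinity construction (blending spatial and visual similarity into W) and the function name are assumptions for illustration.

```python
import numpy as np

def spatial_visual_propagate(W, labels, known, iters=50):
    """Sketch: transductive propagation f <- D^-1 W f with known labels
    clamped each iteration. W is an affinity matrix that would blend
    feature (visual) and spatial similarities; `known` is a boolean mask."""
    P = W / W.sum(axis=1, keepdims=True)   # row-normalized transition matrix
    f = labels.astype(float).copy()
    for _ in range(iters):
        f = P @ f                          # diffuse labels to neighbors
        f[known] = labels[known]           # clamp the seed labels
    return f
```

At convergence each unlabeled feature's score is a neighborhood-weighted average of its neighbors, which is what yields the spatially and visually smooth labeling described above.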


British Machine Vision Conference | 2015

Multi-Modality Feature Transform: An Interactive Image Segmentation Approach.

Moustafa Meshry; Ahmed Taha; Marwan Torki

In this paper, we tackle the interactive image segmentation problem. Unlike the regular image segmentation problem, the user provides additional constraints that guide the segmentation process. In some algorithms, like [1, 4], the user provides scribbles on foreground/background (Fg/Bg) regions. In other algorithms, like [6, 8], the user is required to provide a bounding box or an enclosing contour to surround the Fg object, while pixels outside it are constrained to be Bg. In our problem, we consider scribbles as the form of user-provided annotation.

Introducing suitable features in the scribble-based Fg/Bg segmentation problem is crucial. In many cases, the object of interest has different regions with different color modalities. The same applies to a nonuniform background. Fg/Bg color modalities can even overlap when the appearance is solely modeled using color spaces like RGB or Lab. Therefore, in this paper, we purposefully discriminate Fg scribbles from Bg scribbles for a better representation. This is achieved by learning a discriminative embedding space from user-provided scribbles. The transformation between the original features and the embedded features is calculated. This transformation is used to project unlabeled features onto the same embedding space. The transformed features are then used in a supervised classification manner to solve the Fg/Bg segmentation problem. We further refine the results using a self-learning strategy, by expanding scribbles and recomputing the embedding and transformations.

Figure 1 illustrates the motivation for this paper. Color features usually cannot capture different modalities available in the scribbles and successfully distinguish Fg from Bg at the same time. As we can see in Figure 1(b), the RGB color space will eventually mix Fg/Bg scribbles. On the other hand, Figure 1(c) shows that a well-defined embedding space can clearly distinguish between Fg and Bg scribbles, while preserving different color modalities within each scribble.
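The learn-then-project pattern described above can be illustrated with a plain Fisher-discriminant direction fitted on scribble features and reused for unlabeled pixels. This is only a one-dimensional sketch of the general idea; the paper's actual multi-modality embedding is different, and the function name and toy data are hypothetical.

```python
import numpy as np

def scribble_embedding(fg, bg):
    """Sketch: learn a Fisher-discriminant direction w from Fg/Bg scribble
    features (rows of fg, bg); project unlabeled features with the same w."""
    mu_f, mu_b = fg.mean(axis=0), bg.mean(axis=0)
    Sw = np.cov(fg, rowvar=False) + np.cov(bg, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu_f - mu_b)
    return w / np.linalg.norm(w)

# An unlabeled feature x is classified by which projected scribble mean
# (fg.mean(0) @ w vs. bg.mean(0) @ w) its projection x @ w is closer to.
```

Re-estimating w after expanding the scribbles would mirror the self-learning refinement mentioned in the abstract.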


International Joint Conference on Artificial Intelligence | 2013

Human action recognition using a temporal hierarchy of covariance descriptors on 3D joint locations

Mohamed E. Hussein; Marwan Torki; Mohammad Abdelaziz Gowayyed; Motaz El-Saban


International Joint Conference on Artificial Intelligence | 2013

Histogram of oriented displacements (HOD): describing trajectories of human joints for action recognition

Mohammad Abdelaziz Gowayyed; Marwan Torki; Mohammed Elsayed Hussein; Motaz El-Saban
