
Publications


Featured research published by Motaz El-Saban.


Mobile and Ubiquitous Multimedia | 2009

Mobicast: a system for collaborative event casting using mobile phones

Ayman Kaheel; Motaz El-Saban; Mahmoud Refaat; Mostafa Ezz

Lately, the use of mobile live video streaming for video sharing has been growing steadily. However, because of the limited video-capturing capabilities of mobile phones, these streams end up having either a low resolution or a small field of view. On the other hand, the ubiquity of video-capture-capable mobile phones makes it relatively likely that more than one user will be recording the same scene from different views. In this paper we introduce a system for mobile live video streaming, named Mobicast, that enables collaboration between multiple users streaming the same event from their mobile phones in order to provide end viewers with a better collective viewing experience of the event. We describe the architectural components of the system that can be used to enhance the viewing experience in different ways. Thereafter, we describe the details of an implementation of the system that enhances the viewing experience by stitching the incoming mobile video streams into a panoramic view in real-time. We performed a number of experiments, using both real-usage data and synthetically generated data, to verify that the system fulfills its promise of enhancing the viewing experience.
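As a rough illustration of the alignment step at the heart of such a stitching pipeline, the sketch below estimates the translation between two overlapping frames by phase correlation. This is a simplified stand-in, not Mobicast's actual algorithm, which handles full projective alignment; the synthetic frames are fabricated test data:

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the (dy, dx) translation of frame_a relative to frame_b
    via phase correlation: the peak of the normalized cross-power
    spectrum's inverse FFT marks the shift."""
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-9          # keep only phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak indices to signed shifts.
    h, w = frame_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Two synthetic "frames": b is a circularly shifted copy of a.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (5, -3), axis=(0, 1))
shift = estimate_shift(b, a)
```

In a real mosaic, the recovered offset (or, more generally, a homography) determines where each incoming stream's frame lands in the composite panorama.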


International Conference on Image Processing | 2010

Fast stitching of videos captured from freely moving devices by exploiting temporal redundancy

Motaz El-Saban; Mostafa Izz; Ayman Kaheel

We investigate the problem of efficient panoramic video construction based on time-synchronized input video streams. No additional constraints are imposed regarding the motion of the capturing video cameras. The presented work is, to the best of our knowledge, the first attempt to construct in real-time a panoramic video stream from input video streams captured by freely moving cameras. The main contribution is an efficient panoramic video construction algorithm that exploits temporal information to avoid solving the stitching problem fully on a frame-by-frame basis. We provide a detailed experimental evaluation of different methodologies that employ previous frames' stitching results, such as tracking interest points using optical flow and using areas of overlap to limit the search space for interest points. Our results clearly indicate that making use of temporal information in video stitching can achieve a significant reduction in execution time while providing comparable effectiveness.
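The key idea, reusing the previous frame's stitch, can be sketched as follows: once interest points have been tracked into the current frame (e.g. by optical flow), the inter-stream homography can be re-estimated directly from those correspondences instead of re-detecting and re-matching features from scratch. The DLT estimator and the point data below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography mapping
    src -> dst from point correspondences (Nx2 arrays, N >= 4).
    The solution is the null vector of the stacked constraint matrix."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical points "tracked" from the previous stitch, and their
# images under a known ground-truth homography (synthetic check).
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
H_true = np.array([[1.0, 0.1, 2.0],
                   [0.0, 1.0, -1.0],
                   [0.0, 0.0, 1.0]])
pts = np.c_[src, np.ones(len(src))] @ H_true.T
dst = pts[:, :2] / pts[:, 2:]
H = homography_from_points(src, dst)
```

Skipping detection and matching in this way is what yields the reported speedup: only the cheap tracking and the small linear solve run per frame.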


Workshop on Applications of Computer Vision | 2011

Multi-view human action recognition system employing 2DPCA

Mohamed A. Naiel; Moataz M. Abdelwahab; Motaz El-Saban

A novel algorithm for view-invariant human action recognition is presented. This approach is based on Two-Dimensional Principal Component Analysis (2DPCA) applied directly to the Motion Energy Image (MEI) or the Motion History Image (MHI) in both the spatial domain and the transform domain. This method reduces the computational complexity by a factor of at least 66, achieving the highest recognition accuracy per camera while maintaining minimum storage requirements, compared with the most recent reports in the field. Experimental results on the Weizmann action and the INRIA IXMAS datasets confirm the excellent properties of the proposed algorithm, showing its robustness and ability to work with a small number of training sequences. The dramatic reduction in computational complexity promotes its use in real-time applications.
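What distinguishes 2DPCA from classical PCA is that the image covariance matrix is built directly from the 2D images, with no vectorisation, so the eigenproblem is only n-by-n for m-by-n images. A minimal sketch, using random arrays as stand-ins for MHI templates (not the paper's data or full pipeline):

```python
import numpy as np

def twodpca_projection(images, d):
    """2DPCA: form the image covariance matrix directly from the 2D
    images and return the top-d projection axes (columns)."""
    mean = np.mean(images, axis=0)
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    # eigh returns eigenvalues in ascending order; take the largest d.
    vals, vecs = np.linalg.eigh(G)
    return vecs[:, ::-1][:, :d]

rng = np.random.default_rng(1)
# Stand-ins for Motion History Images: 10 images of size 32x24.
mhis = rng.random((10, 32, 24))
X = twodpca_projection(mhis, d=3)
features = [A @ X for A in mhis]   # each feature is 32x3 instead of 32x24
```

Projecting each template onto a few axes in this way is the source of the large complexity and storage reduction the abstract reports.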


ACM Multimedia | 2009

Stitching videos streamed by mobile phones in real-time

Motaz El-Saban; Mahmoud Refaat; Ayman Kaheel; Ahmed Abdul-Hamid

User-generated videos captured with mobile phone cameras are becoming more and more ubiquitous, allowing people to share live content with remote parties, possibly in real-time. However, with limited mobile phone capabilities, these videos are usually of small resolution, resulting in a small field of view for acceptable quality. Fortunately, with the proliferation of video-capture-enabled mobile phones, there is a high chance that one or more persons will be shooting the same scene from different views. In this demonstration, we show an end-to-end system which receives video streams coming from different mobile phones, time-synchronizes the streams, and produces a single composite mosaic video, all in real-time. The proposed system operates without coordination between users. The system has been tested under various capturing conditions, such as indoor, outdoor, day, and night conditions.


International Conference on Image Processing | 2011

Improved optimal seam selection blending for fast video stitching of videos captured from freely moving devices

Motaz El-Saban; Mostafa Izz; Ayman Kaheel; Mahmoud Refaat

We investigate the problem of stitching time-synchronized video streams captured by freely moving devices. Recently, it was shown that using frame-to-frame correlation can greatly enhance the efficiency and effectiveness of video stitching algorithms [19]. In this paper, we address some of the shortcomings of [19], namely the simple blending approach, which causes almost a third of the stitching errors, and the fact that the stitching algorithm is only tested on a frame-by-frame basis, which does not realistically mimic the user's perception of the output quality as a complete video. We propose a modified blending technique based on optimal seam selection and experimentally validate its superiority using precision, recall, and F1 measures on a frame-by-frame basis, while maintaining low computational complexity. Furthermore, we validate that the performance gains measured on a frame-by-frame basis are also evident when the stitched video output is evaluated as a single unit.
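Optimal seam selection is commonly cast as a shortest-path problem in the overlap region: instead of averaging the two frames, the blend cuts along a path where they disagree least. The dynamic-programming sketch below illustrates that idea on a toy difference image; it is a generic formulation, not necessarily the exact cost or connectivity used in the paper:

```python
import numpy as np

def optimal_seam(diff):
    """Find, in an overlap-region difference image, the top-to-bottom
    8-connected path of minimum accumulated difference. Cutting along
    such a seam hides misalignments better than simple averaging."""
    h, w = diff.shape
    cost = diff.astype(float).copy()
    for r in range(1, h):
        left = np.r_[np.inf, cost[r - 1, :-1]]     # come from upper-left
        right = np.r_[cost[r - 1, 1:], np.inf]     # come from upper-right
        cost[r] += np.minimum(cost[r - 1], np.minimum(left, right))
    # Backtrack from the cheapest bottom cell.
    seam = [int(np.argmin(cost[-1]))]
    for r in range(h - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam.append(lo + int(np.argmin(cost[r, lo:hi])))
    return seam[::-1]   # seam[r] = column of the cut in row r

# Toy overlap: the frames agree perfectly along column 2.
diff = np.ones((5, 6))
diff[:, 2] = 0.0
seam = optimal_seam(diff)
```

Pixels left of the seam are then taken from one stream and pixels right of it from the other, so visible ghosting is confined to a path where the frames already agree.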


International Conference on Multimedia and Expo | 2011

Seamless annotation and enrichment of mobile captured video streams in real-time

Motaz El-Saban; Xin-Jing Wang; Noran Hasan; Mahmoud Bassiouny; Mahmoud Refaat

Mobile phones are becoming more and more ubiquitous, with a large number of these devices having image/video capturing capabilities, connectivity, and rich built-in sensors. This has encouraged the common user to capture more image/video content than ever before. However, this has created two interrelated problems: 1) while capturing a scene, the user may want to get more information about it in order to make a decision (e.g., a purchase decision) without painful textual input on the mobile device, while accounting for the multiple meanings associated with a single image, as an image is worth a thousand words; and 2) captured videos cannot be easily searched afterwards, and hence are forgotten, due to the lack of proper indexing techniques. In this paper, we present a system that addresses these two problems through a single solution: providing users with automatically generated tags for their currently captured videos in real-time. The user can select/deselect from the automatic tags, so the tags can serve as visual query suggestions that help bridge the gap to the user's query intent. The same set of tags is stored with the video to enable easy content access afterwards.


Workshop on Applications of Computer Vision | 2011

Object matching using feature aggregation over a frame sequence

Mahmoud Bassiouny; Motaz El-Saban

Object instance matching is a cornerstone component in many computer vision applications such as image search, augmented reality, and unsupervised tagging. The common flow in these applications is to take an input image and match it against a database of previously enrolled images of objects of interest. This is usually difficult, as one needs to capture an image corresponding to an object view already present in the database, especially in the case of 3D objects with high curvature, where light reflection, viewpoint change, and partial occlusion can significantly alter the appearance of the captured image. Rather than relying on having numerous views of each object in the database, we propose an alternative method: capturing a short video sequence scanning an object and utilizing information from multiple frames to improve the chance of a successful match in the database. The matching step combines local features from a number of frames and incrementally forms a point cloud describing the object. We conduct experiments on a database of different object types, showing promising matching results on both a privately collected set of videos and videos freely available on the Web, such as those on YouTube. An increase in accuracy of up to 20% over the single-frame matching baseline is shown to be possible.
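The benefit of pooling features over frames can be illustrated with a toy nearest-neighbour matcher: descriptors from extra frames add correspondences that a single clutter-only frame would miss. The 8-dimensional random descriptors and the threshold below are fabricated for illustration; they stand in for real local features (e.g. SIFT-like descriptors), not the paper's actual setup:

```python
import numpy as np

def match_aggregated(frame_descs, db_descs, thresh=0.4):
    """Aggregate local descriptors from several frames of a short video
    and count how many find a close match among the database object's
    descriptors (brute-force nearest neighbour, Euclidean distance)."""
    query = np.vstack(frame_descs)        # pool features across frames
    d = np.linalg.norm(query[:, None, :] - db_descs[None, :, :], axis=2)
    return int(np.sum(d.min(axis=1) < thresh))

rng = np.random.default_rng(2)
db = rng.random((50, 8))                          # enrolled descriptors
good = db[:10] + rng.normal(0, 0.01, (10, 8))     # re-observations of db features
noise = rng.random((30, 8)) + 2.0                 # unrelated clutter
single = match_aggregated([noise[:15]], db)       # one frame, clutter only
multi = match_aggregated([noise[:15], good], db)  # a second frame adds matches
```

Here the clutter-only frame yields no matches, while adding one more frame that actually sees the object contributes ten, mirroring how multi-frame aggregation raises the chance of a successful database hit.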


International Midwest Symposium on Circuits and Systems | 2011

Simultaneous Human detection and action recognition employing 2DPCA-HOG

Mohamed A. Naiel; Moataz M. Abdelwahab; Motaz El-Saban; Wasfy B. Mikhael

In this paper, a novel algorithm for human detection and action recognition in videos is presented. The algorithm is based on Two-Dimensional Principal Component Analysis (2DPCA) applied to Histograms of Oriented Gradients (HOG). Because human detection and action recognition are performed simultaneously by the same algorithm, the computational complexity is greatly reduced. Experimental results on public datasets confirm these excellent properties compared with the most recent methods.
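The HOG descriptor that 2DPCA then compresses can be sketched in a few lines: per-cell histograms of gradient orientation weighted by gradient magnitude. This minimal version omits block normalisation and the paper's specific parameters; the cell size and bin count below are common defaults assumed for illustration:

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Minimal Histogram of Oriented Gradients: for each cell, a
    histogram of unsigned gradient orientation weighted by magnitude,
    returned as an (rows, cols, bins) grid."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation
    h, w = img.shape
    bin_idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    H = np.zeros((h // cell, w // cell, bins))
    for r in range(h // cell):
        for c in range(w // cell):
            sl = (slice(r * cell, (r + 1) * cell),
                  slice(c * cell, (c + 1) * cell))
            H[r, c] = np.bincount(bin_idx[sl].ravel(),
                                  weights=mag[sl].ravel(),
                                  minlength=bins)
    return H

# Synthetic patch with a pure horizontal intensity ramp: all gradient
# energy should land in the first (0-degree) orientation bin.
img = np.tile(np.arange(32, dtype=float), (32, 1))
H = hog_cells(img)
```

The resulting per-cell grid is exactly the kind of 2D feature map that 2DPCA can reduce without first flattening it into a vector.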


International Conference on Multimedia and Expo | 2011

Active feedback for enhancing the construction of panoramic live mobile video streams

Mahmoud Refaat; Motaz El-Saban; Ayman Kaheel

Constructing a panoramic video out of multiple incoming live mobile video streams is a challenging problem. This problem involves multiple users live streaming the same scene from different angles, using their mobile phones, with the objective of constructing a panoramic video of the scene. The main challenge in this problem is the lack of coordination between the streaming users, resulting in too much, too little, or no overlap between incoming streams. To add to the challenge, the streaming users are generally free to move, which means that the amounts of overlap between the different streams are dynamically changing. In this paper, we propose a method for automatically coordinating between the streaming users, such that the quality of the resulting panoramic video is enhanced. The method works by analyzing the incoming video streams, and automatically providing active feedback to the streaming users. We investigate different methods for generating the active feedback and presenting it to the streaming users resulting in an improved panoramic video output compared to the case where no feedback is utilized.
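One simple way to generate such feedback is to measure, in the mosaic's common coordinate frame, how much each stream's footprint overlaps a reference stream and to suggest a correction when the overlap leaves a workable range. The rectangle model, thresholds, and messages below are illustrative assumptions, not the paper's actual feedback policy:

```python
def overlap_feedback(ref, other, lo=0.15, hi=0.5):
    """Given two frame footprints as (x, y, w, h) rectangles in a common
    mosaic coordinate frame, compute the fraction of the second frame
    covered by the first and suggest a correction for that user."""
    ax, ay, aw, ah = ref
    bx, by, bw, bh = other
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))   # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))   # intersection height
    ratio = (ix * iy) / (bw * bh)
    if ratio < lo:
        return "move closer to the other stream"
    if ratio > hi:
        return "too much overlap: pan away"
    return "ok"

# Three hypothetical relative placements of a second 100x100 frame.
far = overlap_feedback((0, 0, 100, 100), (90, 0, 100, 100))   # 10% overlap
near = overlap_feedback((0, 0, 100, 100), (30, 0, 100, 100))  # 70% overlap
good = overlap_feedback((0, 0, 100, 100), (70, 0, 100, 100))  # 30% overlap
```

Because the users keep moving, such a check would run continuously on the incoming streams, with the feedback pushed back to each phone.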


International Conference on Image Processing | 2011

Higher order potentials with superpixel neighbourhood (HSN) for semantic image segmentation

Mostafa S. Ibrahim; Motaz El-Saban

Among the approaches that have proven successful for semantic image segmentation is formulating an energy minimization over a conditional random field (CRF) defined on image pixels. Recently, higher-order potentials (cliques of size greater than 2) over superpixels have been incorporated into the CRF energy function, yielding promising results. These potentials encourage pixels within the same superpixel to take the same label by penalizing inconsistent labeling within the superpixel. While some of the earlier attempts modeled higher-order potentials without considering the conditional dependencies between superpixels, others modeled these dependencies at the cost of oversimplified models at higher levels. In this paper, we propose incorporating superpixel neighborhood information within the higher-order potential, hence modeling dependencies between superpixels without oversimplifying or constraining the model. Results show that the proposed method achieves state-of-the-art results on the challenging PASCAL VOC 2007 dataset.
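The consistency term such potentials implement can be sketched as a truncated penalty on within-superpixel disagreement, in the spirit of robust P^n potentials; this is the baseline the paper extends with superpixel-neighbourhood information, not its full model, and the parameter values are illustrative:

```python
import numpy as np

def robust_pn_potential(labels, superpixels, gamma=1.0, trunc=0.3):
    """Higher-order consistency energy: each superpixel pays a penalty
    growing with the fraction of its pixels that disagree with the
    superpixel's majority label, truncated at gamma so that a genuinely
    mixed superpixel is not penalised without bound."""
    total = 0.0
    for sp in np.unique(superpixels):
        lab = labels[superpixels == sp]
        counts = np.bincount(lab)
        frac_disagree = 1.0 - counts.max() / lab.size
        total += gamma * min(frac_disagree / trunc, 1.0)
    return total

seg = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1]])           # two superpixels
consistent = np.array([[2, 2, 3, 3],
                       [2, 2, 3, 3]])    # labels agree within superpixels
noisy = np.array([[2, 2, 3, 3],
                  [2, 5, 3, 3]])         # one disagreeing pixel
energy_clean = robust_pn_potential(consistent, seg)
energy_noisy = robust_pn_potential(noisy, seg)
```

A consistent labeling incurs zero energy while the single disagreeing pixel adds a positive penalty, which is exactly the pressure that drives pixels in a superpixel toward a common label during inference.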

Collaboration


Dive into Motaz El-Saban's collaborations.

Top Co-Authors

Wasfy B. Mikhael

University of Central Florida
