Publication


Featured research published by Mohamed Omran.


Computer Vision and Pattern Recognition | 2016

The Cityscapes Dataset for Semantic Urban Scene Understanding

Marius Cordts; Mohamed Omran; Sebastian Ramos; Timo Rehfeld; Markus Enzweiler; Rodrigo Benenson; Uwe Franke; Stefan Roth; Bernt Schiele

Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes consists of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5 000 of these images have high-quality pixel-level annotations; 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.
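
The pixel-level semantic labeling track is scored by per-class intersection-over-union (IoU). Below is a minimal NumPy sketch of that metric only, not the official Cityscapes evaluation scripts; it assumes predicted and ground-truth label maps that share the same integer class ids, and the helper names are illustrative:

    import numpy as np

    def confusion_matrix(pred, gt, num_classes):
        """Accumulate a num_classes x num_classes confusion matrix
        from integer label maps of identical shape."""
        mask = (gt >= 0) & (gt < num_classes)  # ignore void/unlabeled pixels
        idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
        return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

    def per_class_iou(conf):
        """IoU_c = TP_c / (TP_c + FP_c + FN_c) for every class c."""
        tp = np.diag(conf)
        fp = conf.sum(axis=0) - tp
        fn = conf.sum(axis=1) - tp
        return tp / np.maximum(tp + fp + fn, 1)

    # Toy example: 19 evaluation classes, random label maps standing in for real ones.
    gt = np.random.randint(0, 19, size=(1024, 2048))
    pred = np.random.randint(0, 19, size=(1024, 2048))
    ious = per_class_iou(confusion_matrix(pred, gt, num_classes=19))
    print("mean IoU: %.3f" % ious.mean())

The instance-level labeling track additionally scores individual object instances, which this sketch does not cover.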


European Conference on Computer Vision | 2014

Ten Years of Pedestrian Detection, What Have We Learned?

Rodrigo Benenson; Mohamed Omran; Jan Hendrik Hosang; Bernt Schiele

Paper-by-paper results make it easy to miss the forest for the trees. We analyse the remarkable progress of the last decade by discussing the main ideas explored in the 40+ detectors currently present in the Caltech pedestrian detection benchmark. We observe that there exist three families of approaches, all currently reaching similar detection quality. Based on our analysis, we study the complementarity of the most promising ideas by combining multiple published strategies. The resulting decision forest detector achieves the best known performance on the challenging Caltech-USA dataset.


Computer Vision and Pattern Recognition | 2015

Taking a deeper look at pedestrians

Jan Hendrik Hosang; Mohamed Omran; Rodrigo Benenson; Bernt Schiele

In this paper we study the use of convolutional neural networks (convnets) for the task of pedestrian detection. Despite their recent diverse successes, convnets have historically underperformed compared to other pedestrian detectors. We deliberately omit explicitly modelling the problem in the network (e.g. parts or occlusion modelling) and show that we can reach competitive performance without bells and whistles. In a wide range of experiments we analyse small and big convnets, their architectural choices, parameters, and the influence of different training data, including pre-training on surrogate tasks. We present the best convnet detectors on the Caltech and KITTI datasets. On Caltech our convnets reach top performance both for the Caltech1x and Caltech10x training setups. Using additional data at training time, our strongest convnet model is competitive even with detectors that use additional data (optical flow) at test time.
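
For a sense of the basic setup, the toy sketch below scores fixed-size image crops as pedestrian versus background with a small convnet. The architecture, crop size, and class layout are illustrative placeholders, not the models evaluated in the paper; it assumes PyTorch is available:

    import torch
    import torch.nn as nn

    class SmallPedestrianNet(nn.Module):
        """Toy binary classifier for 128x64 crops (pedestrian vs. background)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),    # -> 64x32
                nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),   # -> 32x16
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 16x8
            )
            self.classifier = nn.Linear(128 * 16 * 8, 2)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    # Score a batch of candidate windows (e.g. produced by a proposal method).
    net = SmallPedestrianNet()
    crops = torch.randn(8, 3, 128, 64)        # 8 crops, RGB, 128x64 pixels
    scores = net(crops).softmax(dim=1)[:, 1]  # per-crop pedestrian score (untrained here)
    print(scores.shape)                       # torch.Size([8])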


Computer Vision and Pattern Recognition | 2017

Joint Graph Decomposition & Node Labeling: Problem, Algorithms, Applications

Evgeny Levinkov; Jonas Uhrig; Siyu Tang; Mohamed Omran; Eldar Insafutdinov; Alexander Kirillov; Carsten Rother; Thomas Brox; Bernt Schiele; Bjoern Andres

We state a combinatorial optimization problem whose feasible solutions define both a decomposition and a node labeling of a given graph. This problem offers a common mathematical abstraction of seemingly unrelated computer vision tasks, including instance-separating semantic segmentation, articulated human body pose estimation and multiple object tracking. Conceptually, it generalizes the unconstrained integer quadratic program and the minimum cost lifted multicut problem, both of which are NP-hard. In order to find feasible solutions efficiently, we define two local search algorithms that converge monotonically to a local optimum, offering a feasible solution at any time. To demonstrate the effectiveness of these algorithms in tackling computer vision tasks, we apply them to instances of the problem that we construct from published data, using published algorithms. We report state-of-the-art application-specific accuracy in the three above-mentioned applications.
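
To illustrate the "monotone local search, feasible at any time" idea on a much simpler problem, here is a toy greedy relabeling loop for node labeling alone; it ignores the decomposition part entirely and is not one of the two algorithms proposed in the paper:

    import itertools
    import numpy as np

    def greedy_node_relabeling(unary, edges, pairwise):
        """Toy local search: start from the unary-optimal labeling and repeatedly
        apply the single-node label change that lowers the objective. Every
        intermediate labeling is feasible, and the objective decreases strictly,
        so the loop terminates in a local optimum."""
        labels = unary.argmin(axis=1)  # feasible starting point

        def energy(lab):
            e = unary[np.arange(len(lab)), lab].sum()
            e += sum(pairwise[lab[u], lab[v]] for u, v in edges)
            return e

        current = energy(labels)
        improved = True
        while improved:
            improved = False
            for v, new_label in itertools.product(range(len(labels)), range(unary.shape[1])):
                if new_label == labels[v]:
                    continue
                candidate = labels.copy()
                candidate[v] = new_label
                cand_energy = energy(candidate)
                if cand_energy < current:  # accept only strict improvements
                    labels, current, improved = candidate, cand_energy, True
        return labels, current

    # Toy instance: 4 nodes, 3 labels, a small cycle graph with random costs.
    rng = np.random.default_rng(0)
    unary = rng.random((4, 3))
    pairwise = rng.random((3, 3))
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(greedy_node_relabeling(unary, edges, pairwise))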


IEEE Transactions on Medical Imaging | 2015

Detecting Surgical Tools by Modelling Local Appearance and Global Shape

David Bouget; Rodrigo Benenson; Mohamed Omran; Laurent Riffaud; Bernt Schiele; Pierre Jannin

Detecting tools in surgical videos is an important ingredient for context-aware computer-assisted surgical systems. To this end, we present a new surgical tool detection dataset and a method for joint tool detection and pose estimation in 2D images. Our two-stage pipeline is data-driven and relaxes strong assumptions made by previous works regarding the geometry, number, and position of tools in the image. The first stage classifies each pixel based on local appearance only, while the second stage evaluates a tool-specific shape template to enforce global shape. Both local appearance and global shape are learned from training data. Our method is validated on a new surgical tool dataset of 2 476 images from neurosurgical microscopes, which is made freely available. It improves over existing datasets in size, diversity and detail of annotation. We show that our method significantly improves over competitive baselines from the computer vision field. We achieve 15% detection miss rate at 10⁻¹ false positives per image (for the suction tube) over our surgical tool dataset. Results indicate that performing semantic labelling as an intermediate task is key for high quality detection.
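
A heavily simplified sketch of the two-stage idea, assuming SciPy is available: the first stage is stood in for by an arbitrary per-pixel tool probability map, and the second stage scores image positions by cross-correlating that map with a binary shape template. The template shape and image sizes below are made up for illustration:

    import numpy as np
    from scipy.signal import correlate2d

    def shape_template_scores(prob_map, template):
        """Second-stage scoring sketch: cross-correlate a per-pixel tool
        probability map (first-stage output) with a binary shape template,
        so high scores require the right pixels in the right global layout."""
        return correlate2d(prob_map, template, mode="same")

    # Toy first-stage output and a crude elongated template (e.g. a suction tube).
    prob_map = np.random.rand(120, 160)  # stand-in for the pixel classifier output
    template = np.zeros((31, 7))
    template[:, 2:5] = 1.0
    scores = shape_template_scores(prob_map, template)
    y, x = np.unravel_index(scores.argmax(), scores.shape)
    print("best template position:", (y, x))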


Computer Vision and Pattern Recognition | 2016

Weakly Supervised Object Boundaries

Anna Khoreva; Rodrigo Benenson; Mohamed Omran; Matthias Hein; Bernt Schiele

State-of-the-art learning-based boundary detection methods require extensive training data. Since labelling object boundaries is one of the most expensive types of annotation, there is a need to relax the requirement for carefully annotated images, both to make training more affordable and to extend the amount of training data. In this paper we propose a technique to generate weakly supervised annotations and show that bounding box annotations alone suffice to reach high-quality object boundaries without using any object-specific boundary annotations. With the proposed weak supervision techniques we achieve the top performance on the object boundary detection task, outperforming the current fully supervised state-of-the-art methods by a large margin.
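
One crude way to picture box-level supervision for boundaries is to keep generic edge responses only near the annotated box outlines; this is a toy NumPy sketch of that idea, not the specific weak-annotation recipes evaluated in the paper:

    import numpy as np

    def weak_boundaries_from_boxes(edge_map, boxes, band=3):
        """Keep generic edge responses only inside a thin band around each
        annotated bounding box outline; everything else is treated as non-boundary.
        edge_map: HxW array of edge probabilities, boxes: list of (x0, y0, x1, y1)."""
        h, w = edge_map.shape
        mask = np.zeros((h, w), dtype=bool)
        for x0, y0, x1, y1 in boxes:
            outer = np.zeros((h, w), dtype=bool)
            inner = np.zeros((h, w), dtype=bool)
            outer[max(y0 - band, 0):y1 + band, max(x0 - band, 0):x1 + band] = True
            inner[y0 + band:y1 - band, x0 + band:x1 - band] = True
            mask |= outer & ~inner  # thin band around the box outline
        return edge_map * mask

    # Toy usage: random edge map, one box annotation.
    edges = np.random.rand(100, 100)
    weak_labels = weak_boundaries_from_boxes(edges, [(20, 30, 70, 80)])
    print(weak_labels.shape, (weak_labels > 0).sum())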


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018

Towards Reaching Human Performance in Pedestrian Detection

Shanshan Zhang; Rodrigo Benenson; Mohamed Omran; Jan Hendrik Hosang; Bernt Schiele

Encouraged by the recent progress in pedestrian detection, we investigate the gap between current state-of-the-art methods and the "perfect single frame detector". We enable our analysis by creating a human baseline for pedestrian detection (over the Caltech pedestrian dataset). After manually clustering the frequent errors of a top detector, we characterise both localisation and background-versus-foreground errors. To address localisation errors we study the impact of training annotation noise on the detector performance, and show that we can improve results even with a small portion of sanitised training data. To address background-versus-foreground discrimination, we study convnets for pedestrian detection, and discuss which factors affect their performance. Beyond our in-depth analysis, we report top performance on the Caltech pedestrian dataset, and provide a new sanitised set of training and test annotations.
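
The Caltech benchmark summarises a detector by its log-average miss rate over the false-positives-per-image (FPPI) range [10⁻², 10⁰]. Below is a small NumPy sketch of that summary metric, assuming a precomputed miss-rate-versus-FPPI curve; the toy curve at the end is made up, and the fallback value for unreachable FPPI points is a sketch choice:

    import numpy as np

    def log_average_miss_rate(miss_rates, fppi):
        """Caltech-style summary: average the miss rate at nine FPPI values
        evenly spaced in log space between 10^-2 and 10^0, reading off the
        lowest achievable miss rate at or below each reference FPPI."""
        refs = np.logspace(-2.0, 0.0, num=9)
        sampled = []
        for ref in refs:
            valid = fppi <= ref
            # If the curve never reaches this FPPI, count the worst case (miss rate 1.0).
            sampled.append(miss_rates[valid].min() if valid.any() else 1.0)
        # Geometric mean, since the averaging is done in log space.
        return np.exp(np.mean(np.log(np.maximum(sampled, 1e-10))))

    # Toy curve: miss rate dropping as the allowed false positives per image grow.
    fppi = np.logspace(-3, 1, num=50)
    miss = np.clip(0.6 - 0.15 * np.log10(fppi + 1e-3), 0.05, 1.0)
    print("log-average miss rate: %.3f" % log_average_miss_rate(miss, fppi))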


Computer Vision and Pattern Recognition | 2016

How Far are We from Solving Pedestrian Detection?

Shanshan Zhang; Rodrigo Benenson; Mohamed Omran; Jan Hendrik Hosang; Bernt Schiele


Computer Vision and Pattern Recognition | 2015

The Cityscapes Dataset

Marius Cordts; Mohamed Omran; Sebastian Ramos; Timo Scharwächter; Markus Enzweiler; Rodrigo Benenson; Uwe Franke; Stefan Roth; Bernt Schiele


International Conference on 3D Vision | 2018

Neural Body Fitting: Unifying Deep Learning and Model Based Human Pose and Shape Estimation

Mohamed Omran; Christoph Lassner; Gerard Pons-Moll; Peter V. Gehler; Bernt Schiele

Collaboration


Dive into Mohamed Omran's collaborations.

Top Co-Authors

Stefan Roth
Technische Universität Darmstadt

Sebastian Ramos
Autonomous University of Barcelona

Alexander Kirillov
Dresden University of Technology