Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daniel C. Asmar is active.

Publication


Featured research published by Daniel C. Asmar.


computer vision and pattern recognition | 2006

Tree Trunks as Landmarks for Outdoor Vision SLAM

Daniel C. Asmar; John S. Zelek; Samer M. Abdallah

Simultaneous Localization and Mapping (SLAM) of robots is the process of building a map of the robot's milieu while simultaneously localizing the robot inside that map. Cameras have recently been proposed as a replacement for laser range finders for the purpose of detecting and localizing landmarks around the navigating robot. Vision SLAM is either Interest Point (IP) based, where landmarks are image saliencies, or object-based, where real objects are used as landmarks. The contribution of this paper is two-pronged: first, it details an approach based on Perceptual Organization (PO) to detect and track trees in a sequence of images, thereby promoting the use of a camera as a viable exteroceptive sensor for object-based SLAM; second, it demonstrates the superiority of the suggested PO system over two appearance-based algorithms in segmenting trees from difficult settings. Experiments conducted on a database of 873 images containing approximately 2008 tree trunks show that the proposed system correctly classifies trees at 81% with a false positive rate of 30%.
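The reported 81% / 30% figures are standard detection-rate statistics. As a minimal sketch of how such numbers are computed from raw counts: the database size is from the abstract, but the per-category counts below are hypothetical placeholders chosen only to reproduce the reported rates.

```python
# Sketch: computing the detection statistics reported above from raw counts.
# The per-category counts are hypothetical, chosen to match the 81% / 30%.

def detection_rates(true_pos, false_neg, false_pos, true_neg):
    """Return (true-positive rate, false-positive rate)."""
    tpr = true_pos / (true_pos + false_neg)   # fraction of real trunks found
    fpr = false_pos / (false_pos + true_neg)  # fraction of non-trunks flagged
    return tpr, fpr

tpr, fpr = detection_rates(true_pos=81, false_neg=19, false_pos=30, true_neg=70)
print(f"TPR = {tpr:.0%}, FPR = {fpr:.0%}")  # TPR = 81%, FPR = 30%
```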


Numerical Heat Transfer Part B-fundamentals | 2004

A COMPARATIVE ASSESSMENT WITHIN A MULTIGRID ENVIRONMENT OF SEGREGATED PRESSURE-BASED ALGORITHMS FOR FLUID FLOW AT ALL SPEEDS

M. Darwish; Daniel C. Asmar; F. Moukalled

This article deals with the evaluation of six segregated high-resolution pressure-based algorithms, which extend the SIMPLE, SIMPLEC, PISO, SIMPLEX, SIMPLEST, and PRIME algorithms, originally developed for incompressible flow, to compressible flow simulations. The algorithms are implemented within a single grid, a prolongation grid, and a full multigrid method and their performance assessed by solving problems in the subsonic, transonic, supersonic, and hypersonic regimes. This study clearly demonstrates that all algorithms are capable of predicting fluid flow at all speeds and qualify as efficient smoothers in multigrid calculations. In terms of CPU efficiency, there is no global and consistent superiority of any algorithm over the others, even though PRIME and SIMPLEST are generally the most expensive for inviscid flow problems. Moreover, these two algorithms are found to be very unstable in most of the cases tested, requiring considerable upwind bleeding (up to 50%) of the high-resolution scheme to promote convergence. The most stable algorithms are SIMPLEC and SIMPLEX. Moreover, the reduction in computational effort associated with the prolongation grid method reveals the importance of initial guess in segregated solvers. The most efficient method is found to be the full multigrid method, which resulted in a convergence acceleration ratio, in comparison with the single grid method, as high as 18.4.
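The abstract states that all six algorithms "qualify as efficient smoothers in multigrid calculations." As a generic illustration of what a smoother does, and emphatically not the pressure-based algorithms evaluated in the article, here is damped Jacobi relaxation on a 1D Poisson problem, showing how a few sweeps rapidly damp a high-frequency error component (the property multigrid relies on).

```python
# Generic illustration of the "smoother" role in multigrid: damped Jacobi
# on -u'' = f with zero boundary values. Not the article's algorithms.
import math

def damped_jacobi(u, f, h, omega=2/3, sweeps=5):
    """A few damped-Jacobi sweeps on the 3-point Laplacian."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            # Jacobi update for -u'' = f, relaxed by omega.
            new[i] = (1 - omega) * u[i] + omega * 0.5 * (u[i-1] + u[i+1] + h * h * f[i])
        u = new
    return u

n = 65
h = 1.0 / (n - 1)
f = [0.0] * n                  # homogeneous problem: exact solution is zero
u = [math.sin(16 * math.pi * i * h) for i in range(n)]  # high-frequency error
u[0] = u[-1] = 0.0
before = max(abs(v) for v in u)
after = max(abs(v) for v in damped_jacobi(u, f, h))
print(before, after)           # the oscillatory error shrinks quickly
```

Five sweeps reduce this oscillatory mode by roughly a factor of three; smooth (low-frequency) modes decay far more slowly, which is why multigrid hands them to coarser grids.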


international conference on robotics and automation | 2006

Towards benchmarks for vision SLAM algorithms

Samer M. Abdallah; Daniel C. Asmar; John S. Zelek

SLAM in an outdoor environment using natural landmarks stands as the holy grail of SLAM algorithms. Segmenting landmarks from background clutter in such environments is difficult, and vision, rather than laser, has a higher potential to perform such tasks due to the higher bandwidth of information it carries. There is a need to establish a benchmark upon which emerging vision SLAM algorithms can be assessed and compared. Towards this objective, this paper proposes the infrastructure for such a benchmark and discusses the issues involved in compiling it. Ego-motion information is extracted via a strap-down inertial measurement unit (IMU). Synchronized Global Positioning System (GPS), IMU, and surrounding images of an outdoor park environment are compiled into a database. The IMU data is tested on an inertial navigation system (INS) dead-reckoning algorithm. The adequacy of the stereo image database is validated by extracting disparity maps of each stereo image in the database. IMU simulations show the necessity for visual SLAM to improve pose estimation. The complete data set, including GPS, IMU, and stereo images, is available for download.
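The core weakness that the INS dead-reckoning test exposes can be sketched in a few lines: position comes from double-integrating acceleration, so even a small constant accelerometer bias grows quadratically. This is a simplified 1D illustration with an illustrative bias value, not the benchmark's actual INS algorithm.

```python
# Sketch of 1D INS dead reckoning: double integration of accelerometer
# samples. A constant bias (hypothetical value) drifts quadratically.

def dead_reckon(accels, dt):
    """Integrate acceleration samples into a position estimate (1D)."""
    vel, pos = 0.0, 0.0
    for a in accels:
        vel += a * dt          # first integration: velocity
        pos += vel * dt        # second integration: position
    return pos

dt = 0.01                      # 100 Hz IMU
bias = 0.05                    # m/s^2 accelerometer bias (illustrative)
samples = [bias] * 1000        # 10 s of a stationary robot
drift = dead_reckon(samples, dt)
print(f"position drift after 10 s: {drift:.2f} m")  # about 2.5 m
```

A stationary robot "moves" 2.5 m in ten seconds, which is exactly the kind of unbounded error that visual landmark observations are meant to correct.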


Journal of Field Robotics | 2007

A benchmark for outdoor vision SLAM systems

Samer M. Abdallah; Daniel C. Asmar; John S. Zelek



international conference on robotics and automation | 2011

A hybrid ankle/hip preemptive falling scheme for humanoid robots

Bassam Jalgha; Daniel C. Asmar; Imad H. Elhajj

If we are to one day rely on robots as assistive devices, they should be capable of mitigating the impact of random disturbances and avoiding falls. Humans are surprisingly apt at remaining on their feet when pushed; they rely on reflexes such as bending the ankles and/or the hips, or on taking a step if the magnitude of the disturbance is relatively large. This paper presents a fall avoidance scheme that is capable of applying both ankle and hip strategies on a humanoid robot. While both strategies serve the same purpose, the hip strategy can absorb larger disturbances but has a higher energy overhead and should be avoided when it is not necessary. Our system is capable of detecting, at the onset of a disturbance, whether an ankle or hip strategy is more appropriate. The decision is taken based on a “decision surface” that is delimited by threshold values of the robot's state variables. The control is based on the intuitive Virtual Model Control (VMC) approach. The system is tested on a simulated robot developed under Gazebo. Results show successful fall avoidance with an ability to choose the optimum fall avoidance strategy.
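The threshold-delimited decision surface described above can be sketched as a simple state-variable check: inside the thresholds the cheap ankle strategy suffices, outside them the hip strategy is selected. The state variables and threshold values here are hypothetical placeholders; the paper derives its surface from the robot's dynamics.

```python
# Sketch of a threshold-based "decision surface" for fall avoidance.
# Variables and limits are illustrative, not the paper's values.

def choose_strategy(lean_angle, angular_velocity,
                    angle_limit=0.1, velocity_limit=0.5):
    """Return 'ankle' inside the thresholds, 'hip' outside them."""
    if abs(lean_angle) < angle_limit and abs(angular_velocity) < velocity_limit:
        return "ankle"   # small disturbance: low-cost ankle torque suffices
    return "hip"         # large disturbance: bend the hips to absorb it

print(choose_strategy(0.05, 0.2))   # small push  -> ankle
print(choose_strategy(0.20, 0.8))   # strong push -> hip
```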


robotics and biomimetics | 2011

Inertial-vision sensor fusion for pedestrian localization

Dima Chdid; Raja Oueis; Hiam Khoury; Daniel C. Asmar; Imad H. Elhajj

The localization of an ambulatory individual, a.k.a. a pedestrian, is a quickly developing domain with the potential to permeate a variety of applications, as knowledge of an individual's location within an environment becomes ever more useful. In order to automate the localization task, positioning modules are prime candidates for inclusion in a system. Such modules are expected to reduce both the effort and time incurred during the localization process while improving the accuracy and organization of the exchanged data. Building on and combining recent developments in the fields of step detection using inertial measurement units and structure from motion using a camera rig, the work presented in this paper is an implementation of a pedestrian localization system targeted specifically at infrastructure-less indoor localization. The inertial measurement unit and camera rig are respectively attached to the mobile user's foot and waist, and the collected data is processed by the localization module to obtain a current position. The focus of this paper is the implementation and preliminary testing of this localization module's components.
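The step-detection component mentioned above can be sketched, under simple assumptions, as peak thresholding on the foot-mounted accelerometer magnitude: each upward crossing of a threshold counts as one step. The threshold and the synthetic signal are illustrative only.

```python
# Sketch of IMU-based step detection by threshold crossing.
# Threshold and signal are illustrative, not from the paper.

def count_steps(accel_magnitudes, threshold=12.0):
    """Count upward crossings of the threshold (one crossing per step)."""
    steps = 0
    above = False
    for a in accel_magnitudes:
        if a > threshold and not above:
            steps += 1
        above = a > threshold
    return steps

# Synthetic signal: gravity baseline (~9.8 m/s^2) with three impact spikes.
signal = [9.8, 9.9, 14.0, 10.1, 9.8, 13.5, 9.7, 9.8, 15.2, 9.9]
print(count_steps(signal))  # 3
```

Real detectors add filtering and a refractory period so that one impact does not register as several steps, but the crossing logic is the core idea.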


systems, man and cybernetics | 2004

SmartSLAM: localization and mapping across multi-environments

Daniel C. Asmar; John S. Zelek; Samer M. Abdallah

In the absence of absolute localization tools such as GPS, a robot can still successfully navigate by conducting simultaneous localization and mapping (SLAM). All SLAM algorithms to date can only be applied in one environment at a time. In this paper we propose to extend SLAM to multi-environments. In SmartSLAM, the robot first classifies its entourage using environment recognition code and then performs SLAM using landmarks that are appropriate for its surrounding milieu. One thousand images of various indoor and outdoor environments were collected and used as training data for a three-layered feedforward backpropagation neural network. This neural network was then tested on two sets of query images of indoor environments and another two sets of outdoor environments, yielding 83% and 95% correct classification rates for the indoor images and 80% and 79% success rates for the outdoor images.
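The classifier described above is a three-layer feedforward network. A minimal sketch of such a network's forward pass, with toy layer sizes and random weights rather than the trained network from the paper, looks like this:

```python
# Sketch of a three-layer feedforward network forward pass (input -> hidden
# -> output, sigmoid activations). Sizes and weights are toy values.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer with sigmoid activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
def rand_layer(n_in, n_out):
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

w1, b1 = rand_layer(4, 3)    # toy image-feature input -> hidden layer
w2, b2 = rand_layer(3, 2)    # hidden layer -> indoor/outdoor scores

features = [0.2, 0.7, 0.1, 0.9]
scores = layer(layer(features, w1, b1), w2, b2)
label = "indoor" if scores[0] > scores[1] else "outdoor"
print(scores, label)
```

The paper's network is trained with backpropagation on one thousand labeled images; only the inference structure is sketched here.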


intelligent robots and systems | 2015

Ground segmentation and occupancy grid generation using probability fields

Ali Harakeh; Daniel C. Asmar; Elie A. Shammas

This paper proposes a novel technique for segmenting the ground plane and at the same time estimating the occupancy probability of each point in a scene. Using a stereo camera rig, our system first calculates a disparity map and transforms it to a v-disparity map, which is then filtered and processed to generate a corresponding probability field. The probability field generated is then used for precise segmentation of ground planes as well as for the generation of occupancy grids. Unlike what is proposed in the prior art, our system requires minimal initialization and is independent of the stereo sensor characteristics as well as the parameters of the disparity algorithm. More importantly, our technique does not require any prior assumption about the terrain visual characteristics. Experimental results using sequences of images from two different data sets are presented to validate the proposed methods.
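The v-disparity transform at the core of the method can be sketched as a per-row histogram of the disparity map: ground pixels, whose disparity decreases smoothly with height in the image, line up along a slanted line in the (row, disparity) plane. The tiny disparity map below is synthetic.

```python
# Sketch of the v-disparity construction: for each image row v, histogram
# the disparity values in that row. Synthetic data, illustrative only.

def v_disparity(disp_map, max_disp):
    """Rows of the result are image rows; columns are disparity bins."""
    hist = [[0] * (max_disp + 1) for _ in disp_map]
    for v, row in enumerate(disp_map):
        for d in row:
            hist[v][d] += 1
    return hist

# Disparity grows toward the bottom rows, as it does for a flat ground plane.
disp_map = [[1, 1, 1],
            [2, 2, 2],
            [3, 3, 3]]
for row in v_disparity(disp_map, max_disp=3):
    print(row)
```

In the output, the nonzero bins march diagonally across the histogram, which is the linear ground signature the paper filters and converts into a probability field.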


Scopus | 2009

A Simple Momentum Controller for Humanoid Push Recovery

Bassam Jalgha; Daniel C. Asmar

While working in a dynamic environment, humanoid robots are subject to unknown forces and disturbances, putting them at risk of falling down and damaging themselves. One mechanism by which humans avoid falling under similar conditions is the human momentum reflex. Although such systems have been devised, the processing requirements are too high to be implemented on small humanoids having microcontroller processing capabilities. This paper presents a simplified momentum controller for fall avoidance. The system is tested on a simulated robot developed under Gazebo as well as on a real humanoid. Results show successful fall avoidance.
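A momentum controller of the simplified kind described above can be sketched, under stated assumptions, as a torque proportional to the body's angular momentum with actuator saturation. The gain, limit, and point-mass view are illustrative placeholders, not the paper's controller.

```python
# Sketch of a momentum-damping control law: ankle torque opposes the body's
# angular momentum, clamped to the actuator limit. Values are illustrative.

def momentum_controller(angular_momentum, gain=5.0, torque_limit=10.0):
    """Ankle torque that drives angular momentum back toward zero."""
    torque = -gain * angular_momentum
    return max(-torque_limit, min(torque_limit, torque))  # saturation

print(momentum_controller(0.4))    # mild push  -> restoring torque -2.0
print(momentum_controller(-3.0))   # large push -> saturates at 10.0
```

The cheap arithmetic here is the point: a proportional law with a clamp fits comfortably on a microcontroller.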


systems, man and cybernetics | 2003

A robot's spatial perception communicated via human touch

John S. Zelek; Daniel C. Asmar

A robot perceives space in order to navigate, map, search, and investigate its surroundings. One application area where humans interact with robots is search and rescue. The robot may have unique capabilities such as seeing outside the visible spectrum and being able to navigate in tight and hazardous spaces. In such an operation, the human may also be an effective search agent. Thus, it would be beneficial if the robot's spatial perception were conveyed to the human via a secondary peripheral modality. We have worked on developing a visual-to-tactile substitution device for people who are visually impaired or blind. We propose to apply this same technique to the remote robot so that it can convey its spatial perception to the human operator or co-worker.

Collaboration


Dive into Daniel C. Asmar's collaborations.

Top Co-Authors

Elie A. Shammas, American University of Beirut
Imad H. Elhajj, American University of Beirut
Georges Younes, American University of Beirut
Noel Maalouf, American University of Beirut
Salah Bazzi, American University of Beirut
Samer M. Abdallah, American University of Beirut
Bassam Jalgha, American University of Beirut
Chadi Mansour, American University of Beirut