Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hadi Aliakbarpour is active.

Publication


Featured research published by Hadi Aliakbarpour.


international conference on information fusion | 2010

Probabilistic LMA-based classification of human behaviour understanding using Power Spectrum technique

Kamrad Khoshhal; Hadi Aliakbarpour; João Quintas; Paulo Drews; Jorge Dias

This paper proposes a new approach for Power Spectrum (PS)-based feature extraction applied to probabilistic Laban Movement Analysis (LMA) for human behaviour understanding. A Bayesian network is presented to understand human action and behaviour based on 3D spatial data, using the LMA concept, a well-known human movement descriptor. The classification process has two steps. The first step estimates the LMA parameters, which describe the human motion situation, from a set of low-level features. With these parameters, different human actions and behaviours can then be classified. As an example, 3D acceleration data from six body parts are used to obtain LMA parameters and recognize performed actions. A new approach based on the PS technique is applied to extract features from signal data, such as acceleration, in order to derive some of the LMA parameters. A number of actions are defined, and a Bayesian network is used in the learning and classification process. The experimental results show that the proposed method is able to classify the actions.
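The power-spectrum feature-extraction step described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the window length, number of frequency bands, and the toy "slow vs. fast motion" signals are all assumptions made for demonstration.

```python
import numpy as np

def power_spectrum_features(accel, n_bands=4):
    """Split the power spectrum of a 1-D acceleration signal into
    coarse frequency bands and return the energy in each band
    (an illustrative feature vector, not the paper's exact features)."""
    ps = np.abs(np.fft.rfft(accel)) ** 2       # power spectrum of the window
    bands = np.array_split(ps, n_bands)        # coarse frequency bands
    return np.array([b.sum() for b in bands])

# Toy example: a slow "sustained" motion vs. a fast "sudden" one.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
slow = np.sin(2 * np.pi * 2 * t)    # energy concentrated in the lowest band
fast = np.sin(2 * np.pi * 40 * t)   # energy in a higher band

f_slow = power_spectrum_features(slow)
f_fast = power_spectrum_features(fast)
```

Band-energy vectors of this kind can then be fed to a probabilistic classifier (e.g. a Bayesian network over LMA parameters) to separate motion qualities.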


international conference on information fusion | 2010

Human silhouette volume reconstruction using a gravity-based virtual camera network

Hadi Aliakbarpour; Jorge Dias

This article presents a method to perform Shape From Silhouette (SFS) of a human based on gravity sensing. A network of cameras observes the scene; the extrinsic parameters among the cameras are initially unknown. An IMU is rigidly coupled to each camera in order to provide gravity and magnetic data. By fusing the data of each camera with its coupled IMU, a downward-looking virtual camera can be considered for every camera in the network. The extrinsic parameters among the virtual cameras are then estimated using the heights of two 3D points with respect to one camera in the network. Registered 2D points on each camera's image plane are reprojected onto its virtual camera image plane using the concept of the infinite homography. Such a virtual image plane is horizontal, with a normal parallel to the gravity vector. The 2D points from the virtual image planes are back-projected into 3D space to form conic volumes of the observed object. The silhouette volume of the object is obtained from the intersection of the conic volumes from all cameras. The experimental results validate both the feasibility and the effectiveness of the proposed method.
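The camera-to-virtual-camera mapping described above can be sketched with the infinite homography H = K R K⁻¹, which rewarps the image of a rotated camera sharing the same centre. The intrinsics K, the IMU gravity reading, and the particular construction of the rotation from the gravity direction are illustrative assumptions.

```python
import numpy as np

def downward_rotation(gravity_cam):
    """Rotation mapping the camera frame to a virtual frame whose
    z-axis (optical axis) is aligned with the measured gravity vector."""
    z = gravity_cam / np.linalg.norm(gravity_cam)
    # Pick any axis not parallel to z to complete an orthonormal basis.
    a = np.array([1.0, 0, 0]) if abs(z[0]) < 0.9 else np.array([0.0, 1, 0])
    x = np.cross(a, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])     # rows = virtual axes in the camera frame

def infinite_homography(K, R):
    """H = K R K^-1 maps image points of the real camera onto the
    image plane of the rotated (virtual) camera sharing its centre."""
    return K @ R @ np.linalg.inv(K)

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
g = np.array([0.1, 0.2, 9.7])          # toy IMU gravity reading (camera frame)
H = infinite_homography(K, downward_rotation(g))

# Sanity check: the image of the gravity direction maps to the
# virtual camera's principal point, since gravity becomes its optical axis.
p = K @ (g / g[2])
q = H @ p
q = q / q[2]
```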


computer vision and pattern recognition | 2016

Semantic Depth Map Fusion for Moving Vehicle Detection in Aerial Video

Mahdieh Poostchi; Hadi Aliakbarpour; Raphael Viguier; Filiz Bunyak; Kannappan Palaniappan

Wide area motion imagery from an aerial platform offers a compelling advantage in providing a global picture of traffic flows for transportation and urban planning that is complementary to the information from a network of ground-based sensors and instrumented vehicles. We propose an automatic moving vehicle detection system for wide area aerial video based on semantic fusion of motion information with projected building footprint information to significantly reduce the false alarm rate in urban scenes with many tall structures. Motion detections are obtained using the flux tensor and combined with a scene-level depth mask that identifies tall structures using height information derived from a dense 3D point cloud, estimated using multiview stereo from the same source imagery or a prior model. The trace of the flux tensor provides robust spatio-temporal information on moving edges, including the motion of tall structures caused by parallax effects. The parallax-induced motions are filtered out by incorporating building depth maps obtained from dense urban 3D point clouds. Using a level-set based geodesic active contours framework, the coarse thresholded depth masks of tall structures are evolved until they stop at the actual building boundaries. Experiments are carried out on a cropped 2k × 2k region of interest over 200 frames of Albuquerque urban aerial imagery. An average precision of 83% and a recall of 76% are obtained using an object-level detection performance evaluation method.
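The trace of the flux tensor used for motion detection above is, in essence, the squared temporal derivative of the spatial image gradient, accumulated over a temporal window. The unsmoothed sketch below illustrates that quantity on a toy moving-square sequence; the real detector also applies spatial and temporal smoothing, which is omitted here.

```python
import numpy as np

def flux_trace(frames):
    """Trace of the flux tensor: temporal derivative of the spatial
    gradient, squared and summed over the window (simplified sketch,
    no Gaussian smoothing)."""
    f = np.asarray(frames, dtype=float)        # (T, H, W) frame stack
    gy, gx = np.gradient(f, axis=(1, 2))       # spatial gradients per frame
    gxt = np.gradient(gx, axis=0)              # d/dt of Ix
    gyt = np.gradient(gy, axis=0)              # d/dt of Iy
    return (gxt ** 2 + gyt ** 2).sum(axis=0)   # per-pixel motion energy

# A small bright square moving right produces energy along its path,
# while the static background stays at zero.
frames = np.zeros((5, 16, 16))
for t in range(5):
    frames[t, 6:10, 2 + 2 * t: 6 + 2 * t] = 1.0

energy = flux_trace(frames)
```

Thresholding this energy map yields the moving-edge detections that are then fused with the building depth masks to suppress parallax.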


computer vision and pattern recognition | 2012

Parameterizing interpersonal behaviour with Laban movement analysis — A Bayesian approach

Kamrad Khoshhal Roudposhti; Luís Picado Santos; Hadi Aliakbarpour; Jorge Dias

In this paper we propose a probabilistic model to parameterize human interactive behaviour from human motion. To support the model taxonomy, we use Laban Movement Analysis (LMA), proposed by Rudolph Laban [11], to characterize human non-verbal communication. In interpersonal communication, body motion carries a lot of meaningful information, useful for analysing group dynamic behaviours in a wide range of social scenarios (e.g. behaviour analysis of human interpersonal activities and surveillance systems). Taking advantage of the interpretation of social signals defined by Alex Pentland [19] and the descriptive body movement analysis proposed by Laban, we identified characteristics allowing both works to complement each other. To explore group dynamics, we attempt to show the existing connections between Pentland's descriptions of Interpersonal Behaviours (IBs) and the LMA parameters for human body part motions. Those relations are the keys to characterizing interpersonal communication. Given the uncertainty of the phenomenon, a Bayesian methodology is applied. The results present LMA parameters as reliable indicators for IBs, allowing us to generalize the model.


digital image computing: techniques and applications | 2010

IMU-Aided 3D Reconstruction Based on Multiple Virtual Planes

Hadi Aliakbarpour; Jorge Dias

This paper proposes a novel approach for fast 3D reconstruction of an object inside a scene by using Inertial Measurement Unit (IMU) data. A network of cameras is used to observe the scene. For each camera within the network, a virtual camera is considered by using the concept of the infinite homography. Such a virtual camera is downward-looking, with its optical axis parallel to the gravity vector. A set of virtual horizontal 3D planes is then considered for the 3D reconstruction. The intersection of these virtual parallel 3D planes with the object is computed using the concept of homography and by applying a 2D Bayesian occupancy grid for each plane. The experimental results validate both the feasibility and the effectiveness of the proposed method.
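The per-plane fusion step can be sketched as a Bayesian occupancy grid: each camera's silhouette, once warped onto a horizontal virtual plane by that plane's homography, votes for each cell being occupied or free. The warping itself is omitted below, and the foreground/background likelihoods `p_fg`/`p_bg` are illustrative values, not the paper's sensor model.

```python
import numpy as np

def fuse_plane_occupancy(warped_silhouettes, p_fg=0.9, p_bg=0.1):
    """Bayesian 2-D occupancy grid for one horizontal virtual plane.
    Each input is a binary silhouette already warped onto the plane by
    its camera's plane homography (warping omitted in this sketch)."""
    odds = np.ones_like(warped_silhouettes[0], dtype=float)  # uniform prior
    for s in warped_silhouettes:
        # Likelihood ratio of 'occupied' vs 'free' given this observation.
        lr = np.where(s > 0, p_fg / p_bg, (1 - p_fg) / (1 - p_bg))
        odds *= lr
    return odds / (1.0 + odds)      # posterior occupancy probability

# Two cameras agree on a 2x2 occupied patch; each also has one spurious cell.
a = np.zeros((6, 6)); a[2:4, 2:4] = 1; a[0, 0] = 1
b = np.zeros((6, 6)); b[2:4, 2:4] = 1; b[5, 5] = 1
occ = fuse_plane_occupancy([a, b])
```

Cells where the cameras agree get high posterior occupancy, single-camera detections stay at 0.5, and empty cells drop close to zero; stacking such grids over the set of planes approximates the object's volume.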


Proceedings of SPIE | 2013

Geometric exploration of virtual planes in a fusion-based 3D data registration framework

Hadi Aliakbarpour; Kannappan Palaniappan; Jorge Dias

Three-dimensional reconstruction of objects, particularly buildings, within an aerial scene is still a challenging computer vision task and an important component of Geospatial Information Systems. In this paper we present a new homography-based approach for 3D urban reconstruction based on virtual planes. A hybrid sensor consisting of three elements, a camera, an inertial (orientation) sensor (IS), and a GPS (Global Positioning System) location device, mounted on an airborne platform can be used for wide area scene reconstruction. The heterogeneous data coming from these three sensors are fused using projective transformations, or homographies. Due to inaccuracies in the sensor observations, the estimated homography transforms between inertial and virtual 3D planes have measurement uncertainties. The modeling of such uncertainties for the virtual plane reconstruction method is described in this paper. A preliminary set of results using simulation data demonstrates the feasibility of the proposed approach.
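As a small illustration of homography uncertainty handling, the sketch below propagates a 2-D point covariance through a fixed homography to first order, via the Jacobian of the perspective division. This is a generic textbook construction, not the paper's model (which treats the homography parameters themselves as uncertain); the matrix `H` and the input covariance are toy values.

```python
import numpy as np

def propagate_point_uncertainty(H, x, cov_x):
    """First-order propagation of a 2-D point covariance through a
    homography H: y = proj(H [x; 1]), cov_y = J cov_x J^T."""
    u = H @ np.array([x[0], x[1], 1.0])
    w = u[2]
    # Jacobian of the perspective division (u0/w, u1/w) w.r.t. x.
    J = np.array([
        [(H[0, 0] * w - H[2, 0] * u[0]) / w**2,
         (H[0, 1] * w - H[2, 1] * u[0]) / w**2],
        [(H[1, 0] * w - H[2, 0] * u[1]) / w**2,
         (H[1, 1] * w - H[2, 1] * u[1]) / w**2],
    ])
    return u[:2] / w, J @ cov_x @ J.T

H = np.array([[1.0, 0.1, 5.0],
              [0.0, 1.2, -3.0],
              [0.0, 0.001, 1.0]])     # toy homography
y, cov_y = propagate_point_uncertainty(H, np.array([100.0, 50.0]), np.eye(2))
```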


international conference on industrial informatics | 2011

Inertial-visual fusion for camera network calibration

Hadi Aliakbarpour; Jorge Dias

This paper proposes a novel technique to calibrate a network of cameras by fusing inertial and visual data. The network contains a set of still cameras (the structure) and one or more mobile agent cameras. Each camera within the network is assumed to be rigidly coupled with an Inertial Sensor (IS). By fusing the inertial and visual data, a virtual camera can be considered alongside each camera in the network, using the concept of the infinite homography. This virtual camera is downward-looking; its optical axis is parallel to the gravity vector and its image plane is horizontal. Taking advantage of the defined virtual cameras, the transformations between cameras are estimated by knowing just the heights of two arbitrary points with respect to one camera within the structure network. The proposed approach is notably fast and requires minimal human interaction. Another novelty of this method is its applicability to dynamically moving cameras (robots), calibrating the cameras and consequently localizing the robots, as long as the two marked points are visible to them.


doctoral conference on computing, electrical and industrial systems | 2011

HMM-Based Abnormal Behaviour Detection Using Heterogeneous Sensor Network

Hadi Aliakbarpour; Kamrad Khoshhal; João Quintas; Kamel Mekhnacha; Julien Ros; Maria Andersson; Jorge Dias

This paper proposes an HMM-based approach for detecting abnormal situations in simulated ATM (Automated Teller Machine) scenarios, using a network of heterogeneous sensors. The applied sensor network comprises cameras and microphone arrays. The idea is to use such a sensor network to detect the normality or abnormality of a scene, in terms of whether a robbery is happening or not. The normal or abnormal event detection is performed in two stages. Firstly, a set of low-level features (LLFs) is obtained by applying three different classifiers (called here low-level classifiers) in parallel on the input data. The low-level classifiers are Laban Movement Analysis (LMA), crowd analysis, and audio analysis. The obtained LLFs are then fed to a concurrent Hidden Markov Model to classify the state of the system (called here high-level classification). The attained experimental results validate the applicability and effectiveness of using a heterogeneous sensor network to detect abnormal events in security applications.


doctoral conference on computing, electrical and industrial systems | 2011

LMA-Based Human Behaviour Analysis Using HMM

Kamrad Khoshhal; Hadi Aliakbarpour; Kamel Mekhnacha; Julien Ros; João Quintas; Jorge Dias

In this paper a new body-motion-based Human Behaviour Analysis (HBA) approach is proposed for event classification. Here, the events of interest are normal and abnormal behaviours in an Automated Teller Machine (ATM) scenario. The concept of Laban Movement Analysis (LMA), a well-known human movement analysis system, is used to define and extract sufficient features. A two-phase probabilistic approach has been applied to model the system's state. Firstly, a Bayesian network is used to estimate LMA-based human movement parameters. The sequence of the obtained LMA parameters is then used as the input of the second phase. In the second phase, the Hidden Markov Model (HMM), a well-known approach for dealing with time-sequential data, is used in the context of the ATM scenario. The achieved results prove the eligibility and efficiency of the proposed method for surveillance applications.
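The second phase above, decoding a sequence of discretized LMA observations into hidden normal/abnormal states, can be sketched with a standard log-domain Viterbi decoder. The two-state model, its transition/emission probabilities, and the observation symbols below are toy values, not the parameters learned for the ATM scenario.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete-observation HMM
    (log-domain Viterbi decoding)."""
    T, N = len(obs), len(pi)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]        # initial log scores
    back = np.zeros((T, N), dtype=int)          # backpointers
    for t in range(1, T):
        scores = delta[:, None] + logA          # (from-state, to-state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy model. States: 0 = normal, 1 = abnormal; obs: 0 = calm, 1 = agitated.
pi = np.array([0.9, 0.1])
A = np.array([[0.95, 0.05], [0.10, 0.90]])      # state transitions
B = np.array([[0.9, 0.1], [0.2, 0.8]])          # emission probabilities
states = viterbi([0, 0, 1, 1, 1], pi, A, B)     # -> [0, 0, 1, 1, 1]
```

A run of "agitated" observations flips the decoded state to abnormal, which is the kind of state change the surveillance system reports.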


international workshop on robot motion and control | 2013

Image-based servoing of non-holonomic vehicles using non-central catadioptric cameras

Hadi Aliakbarpour; Omar Tahri; Helder Araújo

Novel contributions to image-based control of a mobile robot using a general catadioptric camera model are presented in this paper. Visual servoing applications using catadioptric cameras have essentially used central cameras and the corresponding unified projection model; so far, more general models have been used in only a few cases. In this paper we address the problem of visual servoing using the so-called radial model. The radial model can be applied to many camera configurations, and in particular to non-central catadioptric systems with mirrors that are symmetric around an axis coinciding with the optical axis. In this case, we show that the radial model can be used with a non-central catadioptric camera to allow effective image-based visual servoing (IBVS) of a mobile robot. Two sets of experiments are carried out: in one set, an IMU (Inertial Measurement Unit) is used to measure the relative rotation of the robot, and in the other, visual features alone are used. The achieved results validate both the applicability and the effectiveness of the proposed method for image-based control of a non-holonomic robot.
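At its core, IBVS of the kind discussed above drives the feature error to zero with the classic control law v = -λ L⁺ (s - s*). The sketch below shows that law on a trivial one-dimensional system; the interaction matrix `L`, the gain, and the first-order simulation are textbook assumptions, not the paper's radial-model derivation.

```python
import numpy as np

def ibvs_velocity(L, s, s_star, lam=0.5):
    """Classic IBVS control law: v = -lambda * pinv(L) @ (s - s*)."""
    e = s - s_star                       # feature error in image space
    return -lam * np.linalg.pinv(L) @ e

# Toy 1-D example: one feature driven by one velocity component (L = [1]),
# so the feature error decays geometrically under the control law.
L = np.array([[1.0]])
s, s_star = np.array([2.0]), np.array([0.0])
for _ in range(20):
    v = ibvs_velocity(L, s, s_star)
    s = s + L @ v        # first-order simulation of the feature motion
```

With gain 0.5 the error halves at each step, so after 20 iterations the feature has essentially converged to its desired value.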

Collaboration


Dive into Hadi Aliakbarpour's collaboration.

Top Co-Authors
