
Publication


Featured research published by Abbas Bigdeli.


EURASIP Journal on Image and Video Processing | 2011

Face recognition from still images to video sequences: a local-feature-based framework

Shaokang Chen; Sandra Mau; Mehrtash Tafazzoli Harandi; Conrad Sanderson; Abbas Bigdeli; Brian C. Lovell

Although automatic face recognition has shown success on high-quality images captured under controlled conditions, video-based recognition has yet to attain similar levels of performance. This paper describes recent advances in a project to trial and develop advanced surveillance systems for public safety, and proposes a local facial-feature-based framework for both still-image and video-based face recognition. The evaluation is performed on a still-image dataset (LFW) and a video-sequence dataset (MOBIO), comparing four methods that operate on features: feature averaging (Avg-Feature), the Mutual Subspace Method (MSM), Manifold-to-Manifold Distance (MMD), and the Affine Hull Method (AHM), together with four methods that operate on distances, over three different features. The experimental results show that the Multi-region Histogram (MRH) feature is more discriminative for face recognition than Local Binary Patterns (LBP) and raw pixel intensities. When only a small number of images is available per person, feature averaging is more reliable than MSM, MMD, and AHM, and is much faster. Our proposed framework, averaging MRH features, is therefore well suited to CCTV surveillance systems with constraints on the number of images and the speed of processing.
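The feature-averaging strategy the abstract favours can be sketched in a few lines. This is an illustrative nearest-neighbour matcher on synthetic feature vectors, not the paper's MRH implementation; identities, dimensions, and noise levels here are invented for the example.

```python
import numpy as np

def average_feature(features):
    """Average a set of per-frame feature vectors into one template."""
    return np.mean(features, axis=0)

def match(probe_features, gallery):
    """Return the gallery identity whose template is closest (L1 distance)
    to the averaged probe features."""
    probe = average_feature(probe_features)
    dists = {name: np.abs(probe - tmpl).sum() for name, tmpl in gallery.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(0)
# Two synthetic identities, each enrolled from a few noisy feature vectors.
base_a, base_b = rng.normal(size=64), rng.normal(size=64)
gallery = {
    "A": average_feature(base_a + 0.1 * rng.normal(size=(5, 64))),
    "B": average_feature(base_b + 0.1 * rng.normal(size=(5, 64))),
}
probe = base_a + 0.1 * rng.normal(size=(3, 64))
print(match(probe, gallery))  # prints "A"
```

Averaging collapses a face track into a single template, which is why it stays cheap as the number of frames grows, in line with the speed constraint the abstract mentions.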


International Conference on Distributed Smart Cameras | 2008

A high resolution smart camera with GigE Vision extension for surveillance applications

Ehsan Norouznezhad; Abbas Bigdeli; Adam Postula; Brian C. Lovell

Intelligent video surveillance is currently a hot topic in computer vision research. Its goal is to process the video captured from a monitored area, extract specific information, and take appropriate action based on that information. Due to the high computational complexity of vision tasks and the real-time nature of these systems, current software-based intelligent video surveillance systems are unable to perform sophisticated operations. Smart cameras are a key component of future intelligent surveillance systems: they use embedded processing to offload computationally intensive vision tasks from the host processing computers and substantially reduce the required communication bandwidth and data flow over the network. This paper reports on the design of a high-resolution smart camera with a GigE Vision extension for automated video surveillance systems. The features of the new camera interface standard, GigE Vision, are introduced, and its suitability for video surveillance systems is described. The surveillance framework for which the GigE Vision extension has been developed is presented, together with a brief overview of the proposed smart camera.


International Conference on Embedded Software and Systems | 2007

Face Detection on Embedded Systems

Abbas Bigdeli; Colin Sim; Morteza Biglari-Abhari; Brian C. Lovell

In recent years, automated face detection and recognition (FDR) has gained significant attention from the commercial and research sectors. This paper presents an embedded face detection solution aimed at addressing the real-time image processing requirements of a wide range of applications. As face detection is a computationally intensive task, an embedded solution opens opportunities for discrete, economical devices that could be applied to and integrated into a vast range of applications. This work focuses on the use of FPGAs as the embedded prototyping technology, with the thread of execution carried out on an embedded soft-core processor. Custom instructions are used as a means of software/hardware partitioning, moving the computational bottlenecks into hardware. A speedup by a factor of 110 was achieved by employing custom instructions and software optimizations.


International Conference on Control, Automation, Robotics and Vision | 2010

Obstacle-free range determination for rail track maintenance vehicles

Frederic D. Maire; Abbas Bigdeli

Maintenance trains travel in convoy. In Australia, only the first train of the convoy pays attention to the track signalization (the other convoy vehicles simply follow the preceding vehicle). Because of human error, collisions can happen between the maintenance vehicles. Although an anti-collision system based on a laser distance meter is already in operation, its range is limited by the curvature of the tracks. In this paper, we introduce a vision-based anti-collision system. The two main ideas are (1) to warp the camera image, through a projective transform, into an image in which the rails are parallel, and (2) to track the two rail curves simultaneously by evaluating small parallel segments. The performance of the system is demonstrated on an image dataset.
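The first idea, warping the image so the rails become parallel, amounts to applying a homography. The sketch below estimates a 3x3 projective transform from four point correspondences (the classic direct linear transform) and applies it to points on a converging rail; the coordinates are made up for illustration and this is not the paper's calibration procedure.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 projective transform mapping four src points to
    four dst points, via the direct linear transform (SVD nullspace)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    return vt[-1].reshape(3, 3)

def warp(H, pts):
    """Apply a homography to 2D points, with homogeneous normalisation."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:]

# In the camera image the rails converge towards a vanishing point; map the
# trapezoid they span onto a rectangle so the warped rails are parallel.
src = [(40, 100), (60, 100), (0, 0), (100, 0)]   # near and far rail ends
dst = [(0, 100), (100, 100), (0, 0), (100, 0)]
H = homography(src, dst)
left_rail = np.array([(40, 100), (20, 50), (0, 0)], float)
warped = warp(H, left_rail)   # all x-coordinates land on the line x = 0
```

After the warp, the left rail lies on the vertical line x = 0 and the right on x = 100, which is what makes the paper's parallel-segment tracking possible.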


International Conference on Distributed Smart Cameras | 2009

Face detection system design for real time high resolution smart camera

Yasir Mohd Mustafah; Abbas Bigdeli; Amelia Wong Azman; Brian C. Lovell

Recognizing faces in a crowd in real time is a key feature that would significantly enhance intelligent surveillance systems. Using a smart camera to extract faces for recognition greatly reduces the computational load on the main processing unit: freed from the high data rates of the video stream, it can be designed solely for face recognition. The challenge is that, with the increasing speed and resolution of camera sensors, a fast and robust face detection system is required for real-time operation. In this paper we report on a multiple-stage face detection system designed for implementation on an FPGA-based high-resolution smart camera. The system consists of filter stages that greatly reduce the region of interest in the video image, followed by a face detection stage that accurately locates the faces. The filter-stage algorithm is designed to be very fast so that it can run in real time, while the face detection stage is accelerated using a hardware/software co-design technique.
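The filter-then-detect structure can be illustrated with a toy first stage: a cheap per-block test that discards most of the frame before any expensive detector runs. The threshold and block size below are invented, and a mean-intensity test is only a stand-in for the paper's actual filter stages.

```python
import numpy as np

def candidate_blocks(frame, block=16, thresh=120):
    """Stage 1: keep only blocks whose mean intensity is plausible for a
    face, shrinking the region the detector stage must scan."""
    H, W = frame.shape
    out = []
    for y in range(0, H, block):
        for x in range(0, W, block):
            if frame[y:y + block, x:x + block].mean() > thresh:
                out.append((y, x))
    return out

frame = np.zeros((128, 128), np.uint8)
frame[32:64, 48:80] = 200                 # one bright "face-like" region
cands = candidate_blocks(frame)
print(len(cands), "of", (128 // 16) ** 2, "blocks reach the detector stage")
# prints: 4 of 64 blocks reach the detector stage
```

Only 4 of 64 blocks survive the filter here, which is the kind of region-of-interest reduction that lets the slower detection stage keep up with a high-resolution sensor.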


International Conference on Distributed Smart Cameras | 2007

An Automated Face Recognition System for Intelligence Surveillance: Smart Camera Recognizing Faces in the Crowd

Yasir Mohd Mustafah; Amelia Wong Azman; Abbas Bigdeli; Brian C. Lovell

Smart cameras are rapidly finding their way into intelligent surveillance systems. Recognizing faces in a crowd in real time is one of the key features that will significantly enhance such systems. The main challenge is that the high volumes of data generated by high-resolution sensors are computationally impossible for mainstream computers to process. In our proposed technique, the smart camera extracts all the faces from the full-resolution frame and sends the pixel information from these face areas to the main processing unit as an auxiliary video stream, potentially achieving a massive reduction in data rate. Face recognition software running on the main processing unit then performs the required pattern recognition.
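A back-of-envelope calculation shows why sending only face crops cuts the data rate so sharply. All the figures below (sensor resolution, crop size, face count, frame rate) are assumptions for illustration; the paper does not state them.

```python
# Assumed figures, for illustration only.
frame = 2048 * 1536 * 1          # 3.1 MP mono sensor, bytes per frame
faces = 10 * 64 * 64             # ten 64x64 face crops, bytes per frame
fps = 25
full_rate = frame * fps          # bytes/s of the raw full-resolution stream
face_rate = faces * fps          # bytes/s of the auxiliary face stream
print(full_rate / face_rate)     # prints 76.8
```

Even with ten faces in every frame, the auxiliary stream is roughly 77 times smaller than the raw video under these assumptions, which is the "massive data rate reduction" the abstract refers to.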


International Conference on Innovations in Information Technology | 2006

See Me, Teach Me: Facial Expression and Gesture Recognition for Intelligent Tutoring Systems

Abdolhossein Sarrafzadeh; Samuel Alexander; Farhad Dadgostar; Chao Fan; Abbas Bigdeli

Many software systems would perform significantly better if they could adapt to the emotional state of the user: if intelligent tutoring systems, ATMs, and ticketing machines could recognise when users are confused, frustrated, or angry, they could provide remedial help and so improve the service. This paper presents research leading to the development of Easy with Eve, an affective tutoring system (ATS) for mathematics. The system detects student emotion, adapts to students, and displays emotion via a lifelike agent called Eve. Eve is guided by a case-based system that uses data generated by an observational study. This paper presents the observational study, the case-based method, and the ATS.


European Conference on Computer Vision | 2012

Directional space-time oriented gradients for 3d visual pattern analysis

Ehsan Norouznezhad; Mehrtash Tafazzoli Harandi; Abbas Bigdeli; Mahsa Baktash; Adam Postula; Brian C. Lovell

Various visual tasks, such as the recognition of human actions, gestures, and facial expressions, and the classification of dynamic textures, require the modelling and representation of spatio-temporal information. In this paper, we propose representing space-time patterns using directional spatio-temporal oriented gradients. In the proposed approach, a 3D video patch is represented by a histogram of oriented gradients over nine symmetric spatio-temporal planes. Video comparison is achieved through a positive-definite similarity kernel learnt by multiple kernel learning. A rich spatio-temporal descriptor with a simple trade-off between discriminatory power and invariance properties is thereby obtained. To evaluate the proposed approach, we consider three challenging visual recognition tasks, namely the classification of dynamic textures, human gestures, and human actions. Our evaluations indicate that the proposed approach attains significant improvements in recognition accuracy over state-of-the-art methods such as LBP-TOP, 3D-SIFT, HOG3D, tensor canonical correlation analysis, and dynamical fractal analysis.
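The core idea, per-plane histograms of oriented gradients over a 3D video patch, can be sketched as follows. For brevity this uses only the three axis-aligned planes (xy, xt, yt) rather than the paper's nine symmetric planes, and the bin count and patch size are arbitrary choices.

```python
import numpy as np

def plane_hog(patch, axes, bins=8):
    """Histogram of gradient orientations of a 3D (t, y, x) patch projected
    onto one spatio-temporal plane, identified by its two axes."""
    grads = np.gradient(patch.astype(float))
    g1, g2 = grads[axes[0]], grads[axes[1]]
    ang = np.arctan2(g2, g1) % np.pi              # unsigned orientation
    mag = np.hypot(g1, g2)                        # gradient magnitude
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)             # L1-normalised

def descriptor(patch):
    """Concatenate the per-plane histograms; here only the xy, xt and yt
    planes, a simplification of the paper's nine-plane scheme."""
    planes = [(1, 2), (0, 2), (0, 1)]
    return np.concatenate([plane_hog(patch, p) for p in planes])

rng = np.random.default_rng(1)
patch = rng.random((8, 16, 16))   # a (t, y, x) video patch
d = descriptor(patch)
print(d.shape)                    # prints (24,)
```

Each plane contributes one orientation histogram, so the descriptor grows linearly with the number of planes; the paper's nine-plane version would yield a 72-dimensional vector at these settings.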


International Conference on Distributed Smart Cameras | 2009

An efficient background estimation algorithm for embedded smart cameras

Vikas Reddy; Conrad Sanderson; Brian C. Lovell; Abbas Bigdeli

Segmentation of foreground objects of interest from an image sequence is an important task in most smart cameras. Background subtraction is a popular and efficient technique used for segmentation. The method assumes that a background model of the scene under analysis is known; however, in many practical circumstances it is unavailable and must be estimated from cluttered image sequences. With embedded systems as the target platform, in this paper we propose a sequential technique for background estimation under such conditions, with low computational and memory requirements. The first stage is similar to that of the recently proposed agglomerative clustering background estimation method, where image sequences are analysed on a block-by-block basis. For each block location, a representative set is maintained containing the distinct blocks observed along its temporal line. The novelty lies in iteratively filling in background areas by selecting the most appropriate candidate block according to the combined frequency responses of extended versions of the candidate block and its neighbourhood. It is assumed that the most appropriate block yields the smoothest response, indirectly enforcing the spatial continuity of structures within the scene. Experiments on real-life surveillance videos demonstrate the advantages of the proposed method.
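The block-based idea can be demonstrated with a much cruder selection rule: per block location, keep the block observed most often across the sequence. This mode heuristic is a stand-in for the paper's frequency-response smoothness criterion, and the scene below is synthetic.

```python
import numpy as np

def estimate_background(frames, block=8):
    """For each block location, pick the most frequently observed block
    across the sequence (a crude stand-in for the paper's smoothness
    criterion over candidate blocks)."""
    T, H, W = frames.shape
    bg = np.zeros((H, W), frames.dtype)
    for y in range(0, H, block):
        for x in range(0, W, block):
            blocks = frames[:, y:y + block, x:x + block].reshape(T, -1)
            uniq, counts = np.unique(blocks, axis=0, return_counts=True)
            bg[y:y + block, x:x + block] = uniq[counts.argmax()].reshape(block, block)
    return bg

# A static background with a transient foreground object in a few frames.
bg_true = np.full((16, 16), 100, np.uint8)
frames = np.stack([bg_true.copy() for _ in range(10)])
frames[0:3, 4:12, 4:12] = 200            # object visible in 3 of 10 frames
print((estimate_background(frames) == bg_true).all())   # prints True
```

Because the object occludes each block in only a minority of frames, the per-block mode recovers the clean background; the paper's criterion handles the harder case where no single candidate dominates.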


International Conference on Distributed Smart Cameras | 2010

Object tracking on FPGA-based smart cameras using local oriented energy and phase features

Ehsan Norouznezhad; Abbas Bigdeli; Adam Postula; Brian C. Lovell

This paper presents the use of local oriented energy and phase features for real-time object tracking on smart cameras. In our proposed system, local energy features are used as the spatial feature set representing the target region, while local phase information is used to estimate the motion pattern of the target region, which in turn drives the displacement of the search area. Local energy and phase features are extracted by filtering the incoming images with a bank of complex Gabor filters. The effectiveness of the chosen feature set is tested using a mean-shift tracker. Our experiments show that the proposed system can significantly enhance the performance of the tracker in the presence of photometric variations and geometric transformations. The real-time implementation of the system is also described: to achieve the desired performance, a hardware/software co-design approach is pursued, and apart from the mean-shift vector calculation, all blocks are implemented in hardware. The system was synthesized onto a Xilinx Virtex-5 XC5VSX50T using a Xilinx ML506 development board, and the implementation results are presented.
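Extracting energy and phase with a complex Gabor filter can be sketched as below: the magnitude of the complex response gives local energy and its angle gives local phase. The kernel size, wavelength, and sigma are arbitrary demo values, and FFT-based convolution here is just a convenient software substitute for the paper's hardware filter bank.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Complex 2D Gabor kernel: an oriented complex sinusoid under a
    Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    gauss = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return gauss * np.exp(2j * np.pi * xr / wavelength)

def energy_phase(image, kernel):
    """Local energy (magnitude) and phase (angle) of the complex filter
    response, computed via FFT-based full convolution."""
    s = [image.shape[0] + kernel.shape[0] - 1,
         image.shape[1] + kernel.shape[1] - 1]
    resp = np.fft.ifft2(np.fft.fft2(image, s) * np.fft.fft2(kernel, s))
    return np.abs(resp), np.angle(resp)

# A horizontal grating; a Gabor tuned to its wavelength and orientation
# responds strongly, a 90-degree-rotated one barely responds.
img = np.sin(2 * np.pi * np.arange(32) / 8)[None, :] * np.ones((32, 1))
k = gabor_kernel(9, 8.0, 0.0, 3.0)
energy, phase = energy_phase(img, k)
```

The orientation selectivity of the energy channel is what makes it a robust spatial feature, while the phase channel varies smoothly with small shifts of the pattern, which is why the paper uses it to estimate motion.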

Collaboration


Dive into Abbas Bigdeli's collaboration.

Top Co-Authors

Amelia Wong Azman (International Islamic University Malaysia)
Yasir Mohd Mustafah (International Islamic University Malaysia)
Shaokang Chen (University of Queensland)
Adam Postula (University of Queensland)
Ting Shan (University of Queensland)