Fabrice Meriaudeau
Universiti Teknologi Petronas
Publications
Featured research published by Fabrice Meriaudeau.
ACM Multimedia | 2016
Anastasia Pampouchidou; Olympia Simantiraki; Amir Fazlollahi; Matthew Pediaditis; Dimitris Manousos; Alexandros Roniotis; Giorgos A. Giannakakis; Fabrice Meriaudeau; Panagiotis G. Simos; Kostas Marias; Fan Yang; Manolis Tsiknakis
Depression is a major cause of disability worldwide. This paper reports the results of our participation in the depression sub-challenge of the sixth Audio/Visual Emotion Challenge (AVEC 2016), which was designed to compare feature modalities (audio, visual, and interview-transcript-based) in gender-based and gender-independent modes using a variety of classification algorithms. In our approach, both high- and low-level features were assessed in each modality. Audio features were extracted from the low-level descriptors provided by the challenge organizers. Several visual features were extracted and assessed, including dynamic characteristics of facial elements (using Landmark Motion History Histograms and Landmark Motion Magnitude), global head motion, and eye blinks. These features were combined with statistically derived features from pre-extracted features (emotions, action units, gaze, and pose). Both speech rate and word-level semantic content were also evaluated. Classification results are reported for four classification schemes: i) gender-based models for each individual modality, ii) a feature fusion model, iii) a decision fusion model, and iv) a posterior probability classification model. Among the proposed approaches, the one using statistical descriptors of low-level audio features outperformed the reference classification accuracy, achieving F1-scores of 0.59 for identifying depressed and 0.87 for identifying non-depressed individuals on the development set, and 0.52/0.81, respectively, on the test set.
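As a loose illustration of the best-performing configuration (statistical descriptors of low-level audio features fed to a conventional classifier), here is a minimal Python sketch. The random data, descriptor set, and linear SVM are stand-ins for illustration, not the actual challenge submission.

```python
# Minimal sketch: summarize frame-level low-level descriptors (LLDs) into
# fixed-length per-recording statistics, then classify. Data is hypothetical.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score

def statistical_descriptors(lld_frames):
    """Collapse a (frames x descriptors) LLD matrix into one feature vector
    of per-descriptor statistics (mean, std, min, max, range)."""
    stats = [lld_frames.mean(axis=0),
             lld_frames.std(axis=0),
             lld_frames.min(axis=0),
             lld_frames.max(axis=0),
             lld_frames.max(axis=0) - lld_frames.min(axis=0)]
    return np.concatenate(stats)

rng = np.random.default_rng(0)
# Stand-in data: 40 recordings, 500 frames, 23 LLDs each.
X = np.array([statistical_descriptors(rng.normal(size=(500, 23)))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)          # 1 = depressed, 0 = not depressed

clf = SVC(kernel="linear").fit(X[:30], y[:30])
pred = clf.predict(X[30:])
print("F1 (depressed):", f1_score(y[30:], pred, pos_label=1))
```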
Biomedical Signal Processing and Control | 2017
Mohamed Abul Hassan; Aamir Saeed Malik; David Fofi; N. M. Saad; Babak Karasfi; Yasir Salih Ali; Fabrice Meriaudeau
Photoplethysmography and ballistocardiography are two principles used to measure human heart rate from facial videos. Heart rate estimation is essential for determining the physiological and pathological state of a person. This paper presents a critical review of digital-camera-based methods for estimating heart rate from facial skin. The review examines the principles and theory behind photoplethysmography and ballistocardiography, the significance of the methods, and their contributions to overcoming challenges such as poor signal strength, illumination variance, and motion variance. Experiments were conducted to validate the state-of-the-art methods on a challenging, publicly available database of 27 subjects covering a range of skin tones from pearl white, fair, and olive to black. The results were computed using statistical measures including mean error, standard deviation, root mean square error, Pearson correlation coefficient, and Bland-Altman analysis. The experiments demonstrated the reliability of the state-of-the-art methods and indicated directions for improvement in situations involving illumination and motion variance.
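The validation statistics named above are standard agreement measures; a short sketch of how they are computed for heart rate agreement (with made-up reference and estimate values) follows.

```python
# Agreement statistics between camera-estimated and reference heart rates.
import numpy as np
from scipy.stats import pearsonr

reference = np.array([72.0, 80.5, 65.2, 90.1, 74.8, 68.3])   # e.g. ECG-derived
estimated = np.array([70.9, 82.0, 66.5, 88.7, 75.9, 67.1])   # camera-derived

diff = estimated - reference
mean_error = diff.mean()
std_error = diff.std(ddof=1)
rmse = np.sqrt((diff ** 2).mean())
r, _ = pearsonr(reference, estimated)

# Bland-Altman analysis: bias plus 95% limits of agreement.
loa_low = mean_error - 1.96 * std_error
loa_high = mean_error + 1.96 * std_error
print(f"ME={mean_error:.2f}  SD={std_error:.2f}  RMSE={rmse:.2f}  r={r:.2f}")
print(f"Bland-Altman limits: [{loa_low:.2f}, {loa_high:.2f}] bpm")
```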
IEEE Access | 2017
Atif Anwer; Syed Saad Azhar Ali; Amjad Khan; Fabrice Meriaudeau
Commercial RGB-D cameras offer fast, accurate, and cost-effective 3-D scanning in a single package. These economical depth cameras provide several advantages over conventional depth sensors, such as sonars and lidars, in specific usage scenarios. In this paper, we analyze the performance of the Kinect v2 time-of-flight camera while fully submerged underwater in a customized waterproof housing. Camera calibration has been performed for the Kinect's RGB and NIR cameras, and the effect of calibration on the generated 3-D mesh is discussed in detail. To overcome the refraction of light caused by the sensor housing and the water, we propose a time-of-flight correction method and a fast, accurate, and intuitive refraction correction method that can be applied to the acquired depth images during 3-D mesh generation. Experimental results show that the Kinect v2 can acquire point cloud data up to 650 mm underwater. The reconstruction results have been analyzed qualitatively and quantitatively, and confirm that 3-D reconstruction of submerged objects at small distances is possible without any external NIR light source. The proposed algorithms generate a 3-D mesh with a mean error of ±6 mm at a frame rate of nearly 10 fps. We also acquired a large dataset of RGB, IR, and depth data from the submerged Kinect v2, covering a wide variety of objects scanned underwater; it is publicly available, along with the Kinect waterproof housing design and the correction filter code. The research targets small-scale research activities and economical underwater 3-D scanning; applications such as coral reef mapping and underwater SLAM in shallow waters for ROVs could benefit from these results.
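As a rough illustration of what such corrections involve, the sketch below applies a first-order speed-of-light and Snell's-law adjustment to a depth map under a flat-port assumption. The intrinsics and the correction geometry are simplified assumptions for illustration; the paper's actual ToF and refraction corrections may differ.

```python
import numpy as np

N_WATER = 1.33  # refractive index of water; air ~ 1.0

def correct_underwater_depth(depth_mm, fx, fy, cx, cy):
    """First-order correction of a ToF depth map captured underwater.

    Models two effects: (1) light travels slower in water, so the
    time-of-flight range is overestimated by roughly a factor of n;
    (2) rays bend at the flat housing port (Snell's law)."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    rng_mm = depth_mm / N_WATER                       # (1) speed-of-light fix
    theta_air = np.arctan(np.hypot((u - cx) / fx, (v - cy) / fy))
    theta_wat = np.arcsin(np.clip(np.sin(theta_air) / N_WATER, -1.0, 1.0))
    return rng_mm * np.cos(theta_wat)                 # (2) re-project on axis

# Usage with hypothetical Kinect v2-like intrinsics (512x424 depth frame):
depth = np.full((424, 512), 600.0)                    # mm, dummy frame
fixed = correct_underwater_depth(depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0)
```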
IEEE Sensors Journal | 2017
Mohamed Abul Hassan; Aamir Saeed Malik; David Fofi; N. M. Saad; Yasir Salih Ali; Fabrice Meriaudeau
Video-based heart rate measurement is a rapidly growing application in remote health monitoring. Such methods operate mainly by estimating photoplethysmography or ballistocardiography signals, i.e., by estimating microscopic color changes in the face or microscopic rigid motion of the head and facial skin. However, robustness to artifacts caused by illumination variance and subject motion remains the main challenge. We present a video-based heart rate measurement framework that addresses these problems using the principle of ballistocardiography. We propose a ballistocardiography model based on Newton's third law and the dynamics of harmonic oscillation, and formulate a framework around it to measure the rigid involuntary head motion caused by the ejection of blood from the heart. The framework estimates the motion of multivariate feature points to measure the heart rate autonomously. We evaluated the proposed framework, along with existing video-based heart rate measurement methods, on three databases: the MAHNOB-HCI database, a human-computer interaction database, and a driver health monitoring database. Our framework outperformed existing methods, reporting a low mean error of 4.34 bpm with a standard deviation of 3.14 bpm, a root mean square error of 5.29, and a high Pearson correlation coefficient of 0.91. It also operated robustly on the human-computer interaction and driver health monitoring databases, overcoming issues related to illumination and motion variance.
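A rough sketch of the spectral step of such a BCG pipeline follows: band-pass the vertical trajectories of tracked feature points to the cardiac band and read the rate off the dominant peak. The trajectories here are simulated, and the tracking stage (e.g. Lucas-Kanade optical flow via cv2.calcOpticalFlowPyrLK) is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FPS = 30.0
t = np.arange(0, 20, 1 / FPS)                         # 20 s clip
hr_hz = 72 / 60.0                                     # ground truth: 72 bpm
# Simulated vertical trajectories of 5 tracked points: small cardiac
# ballistic motion plus larger slow voluntary head motion and noise.
traj = (0.1 * np.sin(2 * np.pi * hr_hz * t)
        + 2.0 * np.sin(2 * np.pi * 0.2 * t)
        + 0.05 * np.random.default_rng(1).normal(size=(5, t.size)))

# Band-pass to the plausible cardiac band (0.75-3.0 Hz), then pool points.
b, a = butter(4, [0.75 / (FPS / 2), 3.0 / (FPS / 2)], btype="band")
filtered = filtfilt(b, a, traj, axis=1).mean(axis=0)

spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, 1 / FPS)
print("Estimated HR: %.1f bpm" % (freqs[spectrum.argmax()] * 60))
```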
Computers in Biology and Medicine | 2017
Ravi Kamble; Manesh Kokare; Girish Deshmukh; Fawnizu Azmadi Hussin; Fabrice Meriaudeau
Accurate detection of diabetic retinopathy (DR) depends largely on the identification of retinal landmarks such as the optic disc and fovea. Existing methods suffer from limited accuracy and high computational complexity. To address this, the paper presents a novel approach for fast and accurate localization of the optic disc (OD) and fovea using one-dimensional scanned intensity profile analysis. The proposed method effectively uses both time- and frequency-domain information for OD localization: the final OD center is located using signal peak-valley detection in the time domain and discontinuity detection in the frequency domain. The fovea center is then located, with the help of the detected OD location, using signal valley analysis. Experiments were conducted on the MESSIDOR dataset, where the OD was successfully located in 1197 of 1200 images (99.75%) and the fovea in 1196 of 1200 images (99.66%), with an average computation time of 0.52 s. A large-scale evaluation was also carried out on nine publicly available databases. Compared with other state-of-the-art methods, the proposed method localizes the OD and fovea together both quickly and accurately.
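A much-simplified sketch of the 1-D intensity-profile idea is shown below: scan rows of the green channel and keep the most prominent bright peak as the OD estimate. The frequency-domain discontinuity analysis and the fovea valley analysis from the paper are omitted, and all thresholds are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

def locate_od(green_channel, step=8):
    """Return an (x, y) estimate of the optic disc center from row profiles."""
    best = (0, 0, -np.inf)                        # (x, y, peak prominence)
    for row in range(0, green_channel.shape[0], step):
        profile = green_channel[row].astype(float)
        peaks, props = find_peaks(profile, prominence=20)
        if peaks.size:
            i = props["prominences"].argmax()
            if props["prominences"][i] > best[2]:
                best = (peaks[i], row, props["prominences"][i])
    return best[0], best[1]

# Hypothetical usage: a dummy fundus-like image with a bright disc at (300, 150).
yy, xx = np.ogrid[:480, :640]
img = 200 * np.exp(-((xx - 300) ** 2 + (yy - 150) ** 2) / (2 * 30 ** 2))
print(locate_od(img))   # ~ (300, 152), depending on the scan step
```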
International Symposium on Robotics | 2016
Amjad Khan; Syed Saad Azhar Ali; Aamir Saeed Malik; Atif Anwer; Nur Afande Ali Hussain; Fabrice Meriaudeau
For everyday inspection jobs in the offshore oil and gas industry, human divers are being replaced by underwater vehicles. This paper proposes visual-feedback-based control of an autonomous underwater vehicle for pipeline inspection. Hydrodynamic disturbances in the water severely affect the vehicle's movement, degrading its performance. Under such disturbances, the heading of the autonomous underwater vehicle is controlled using visual feedback so that it tracks the pipeline during inspection. The proposed method does not require expensive position feedback devices such as an underwater acoustic positioning system; by using the vehicle's built-in camera and a few image processing techniques, a simpler, easier, and lower-cost solution is obtained. A performance evaluation of the proposed technique on sample underwater images is also presented.
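One plausible shape of such a vision-based heading loop, sketched with standard OpenCV calls: detect the pipeline as the dominant line, then feed its lateral offset and tilt to a proportional controller. The gains and thresholds are placeholders, not values from the paper.

```python
import cv2
import numpy as np

KP_OFFSET, KP_ANGLE = 0.005, 0.8   # hypothetical controller gains

def heading_command(frame_bgr):
    """Return a yaw-rate command that steers the AUV back over the pipeline."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=20)
    if lines is None:
        return 0.0                                   # no pipeline: hold heading
    # Keep the longest detected segment as the pipeline hypothesis.
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    offset = (x1 + x2) / 2 - frame_bgr.shape[1] / 2  # lateral error (pixels)
    angle = np.arctan2(x2 - x1, y2 - y1)             # tilt from image vertical
    angle = (angle + np.pi / 2) % np.pi - np.pi / 2  # wrap to (-pi/2, pi/2]
    return -(KP_OFFSET * offset + KP_ANGLE * angle)
```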
International Conference on Intelligent and Advanced Systems | 2016
Atif Anwer; Syed Saad Azhar Ali; Fabrice Meriaudeau
In this paper, we examine the feasibility of reconstructing the surface of an underwater object, or an underwater 3D scene, using an economical RGB-D sensor such as the Microsoft Kinect. Reconstructing the 3D surface of an underwater object is challenging because of the degraded quality of underwater images. This degradation has several causes, e.g., non-uniform illumination of light on object surfaces, scattering, and absorption effects; particles and impurities in the water add Gaussian noise to the captured underwater optical images. Using a depth sensor as a cost-effective alternative, we aim to show that underwater 3D scene reconstruction is possible with a slight trade-off in accuracy but a major saving in cost. The acquired depth data is processed by applying real-time mesh-generation techniques to the resulting point cloud. The experimental results aim to show that the proposed method accurately reconstructs the 3D surface of underwater objects from captured underwater depth images.
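A sketch of the proposed processing path, using Open3D for brevity: lift a depth frame to a point cloud, then mesh it. The intrinsics are hypothetical Kinect-like values, and ball pivoting stands in for whichever real-time meshing technique is ultimately used.

```python
import numpy as np
import open3d as o3d

# Dummy 512x424 depth frame at 600 mm (Kinect depth maps are uint16, in mm).
depth = o3d.geometry.Image(np.full((424, 512), 600, dtype=np.uint16))
intrinsics = o3d.camera.PinholeCameraIntrinsic(
    width=512, height=424, fx=365.0, fy=365.0, cx=256.0, cy=212.0)

# Back-project the depth image into a 3D point cloud (mm -> m).
pcd = o3d.geometry.PointCloud.create_from_depth_image(
    depth, intrinsics, depth_scale=1000.0)
pcd.estimate_normals()

# Ball pivoting is one fast mesh-from-points option (Poisson is another).
radii = o3d.utility.DoubleVector([0.01, 0.02])
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)
```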
2016 IEEE International Conference on Underwater System Technology: Theory and Applications (USYS) | 2016
Atif Anwer; Syed Saad Azhar Ali; Amjad Khan; Fabrice Meriaudeau
This paper presents preliminary work on using a commercial time-of-flight depth camera for real-time 3D scene reconstruction of underwater objects. Typical RGB stereo imaging for 3D capture suffers from blur and haziness due to water turbidity, in addition to a critical dependence on natural or artificial light sources. We propose a method for repurposing the low-cost Microsoft Kinect™ time-of-flight camera for the underwater environment, enabling dense depth data acquisition that can be processed in real time. Our motivation is the device's ease of use and low cost for high-quality real-time scene reconstruction compared with multi-view stereo cameras, albeit at a smaller range. Preliminary results of depth data acquisition and surface reconstruction in an underwater environment are also presented. The novelty of our work is the use of the Kinect depth camera for real-time 3D mesh reconstruction; our main objective is to develop an economical and compact solution for underwater 3D mapping.
International Colloquium on Signal Processing and Its Applications | 2015
Goh Chuan Meng; A. Shahzad; N. M. Saad; Aamir Saeed Malik; Fabrice Meriaudeau
Advanced biomedical engineering technologies are continuously changing medical practice and improving patient care. In this paper, we describe the concept used to prototype a device for needle insertion navigation during intravenous catheterization via near-infrared (NIR) imaging. A vein locator prototype combining the NIR imaging technique with augmented reality (AR) technology was developed in this work for use during intravenous catheterization. The challenges faced during development included the calibration of composite images in the see-through display. In this prototype, the Vuzix STAR 1200XL eyewear system serves as the head-mounted display, and the imaging video is supplied by an IR CCD camera. Additionally, we selected the optimal illumination, using NIR LEDs with wavelengths of 830 nm and 850 nm, to obtain the best-contrast NIR venous images for different skin tones.
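For a flavor of the image side of such a system, the sketch below enhances a NIR venous image (veins appear dark at 830-850 nm because deoxygenated blood absorbs NIR light) and binarizes the vein pattern for overlay in a see-through display. The filenames and parameter values are placeholders.

```python
import cv2

# Placeholder input: a grayscale NIR frame of a forearm at ~850 nm.
nir = cv2.imread("forearm_nir_850nm.png", cv2.IMREAD_GRAYSCALE)
assert nir is not None, "supply a real NIR image"

# Local contrast enhancement makes the dark vein pattern stand out.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(nir)

# Binarize the dark vein pattern for projection into the AR display.
veins = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                              cv2.THRESH_BINARY_INV, blockSize=31, C=5)
cv2.imwrite("vein_overlay.png", veins)
```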
COMPAY/OMIA@MICCAI | 2018
Oscar Perdomo; Vincent Andrearczyk; Fabrice Meriaudeau; Henning Müller; Fabio González
Glaucoma is an ophthalmic disease involving damage to the optic nerve; it is asymptomatic in its early stages and, left untreated, can lead to vision limitation and blindness. Eye fundus images are widely accepted by medical personnel for examining the morphology and texture of the optic nerve head and the physiological cup, but glaucoma diagnosis remains subjective, with no clear consensus among experts. This paper presents a multi-stage deep learning model for glaucoma diagnosis based on a curriculum learning strategy, in which a model is sequentially trained to solve incrementally difficult tasks. Our proposed model comprises the following stages: segmentation of the optic disc and physiological cup, prediction of morphometric features from the segmentations, and prediction of disease level (healthy, suspicious, or glaucoma). The experimental evaluation shows that the proposed method outperforms conventional convolutional deep learning models from the state of the art on the RIM-ONE-v1 and DRISHTI-GS1 datasets, with an accuracy of 89.4% and an AUC of 0.82, respectively.
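A schematic PyTorch sketch of the curriculum idea follows: train the segmentation stage first, then freeze it while fitting the morphometric stage, then freeze both while fitting the grader. The toy architectures are placeholders, not the paper's networks, and the training loops are omitted.

```python
import torch
import torch.nn as nn

# Stage 1: toy segmenter producing disc + cup mask channels.
segmenter = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 2, 1))
# Stage 2: toy regressor from masks to a few morphometric features.
morpho = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                       nn.Linear(2 * 8 * 8, 4))
# Stage 3: grader over the three disease levels.
grader = nn.Linear(4, 3)                               # healthy/suspicious/glaucoma

def stage(params, frozen=()):
    """Freeze earlier stages, return an optimizer for the current one."""
    for module in frozen:
        for p in module.parameters():
            p.requires_grad_(False)
    return torch.optim.Adam(params, lr=1e-3)

opt1 = stage(segmenter.parameters())                    # fit on mask labels
opt2 = stage(morpho.parameters(), frozen=(segmenter,))  # fit on morphometry
opt3 = stage(grader.parameters(), frozen=(segmenter, morpho))  # fit on grades

x = torch.randn(1, 3, 64, 64)                           # dummy fundus image
logits = grader(morpho(segmenter(x)))                   # end-to-end forward pass
```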