Corneliu Florea
Politehnica University of Bucharest
Publications
Featured research published by Corneliu Florea.
advanced concepts for intelligent vision systems | 2008
Constantin Vertan; Alina Oprea; Corneliu Florea; Laura Florea
The paper presents a new [pseudo-] Logarithmic Model for Image Processing (LIP), which allows the computation of gray-level addition, subtraction and multiplication with scalars within a fixed gray-level range [0; D] without the use of clipping. The implementation of Laplacian edge detection techniques under the proposed model yields superior performance in biomedical applications as compared with the classical operations (performed either as real-axis operations or under classical LIP models).
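The abstract does not reproduce the paper's pseudo-logarithmic operators; as an illustration of the kind of range-preserving arithmetic involved, the sketch below implements the classical LIP-style addition, subtraction and scalar multiplication over an assumed range [0, D) with D = 255.

```python
import numpy as np

D = 255.0  # assumed upper bound of the gray-level range [0, D)

def lip_add(f, g, D=D):
    """Classical LIP addition: the result stays inside [0, D) without clipping."""
    return f + g - (f * g) / D

def lip_sub(f, g, D=D):
    """Classical LIP subtraction (defined for g < D)."""
    return D * (f - g) / (D - g)

def lip_scalar_mul(lam, f, D=D):
    """Classical LIP multiplication of a gray tone by a real scalar."""
    return D - D * (1.0 - f / D) ** lam

# Adding two bright gray levels stays below D, with no clipping needed.
print(lip_add(200.0, 180.0))   # ~238.8, not 380
```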
european conference on computer vision | 2014
Corneliu Florea; Laura Florea; Constantin Vertan
Automatic monitoring for the assessment of pain can significantly improve the psychological comfort of patients. Recently introduced databases with expert annotation opened the way for pain intensity estimation from facial analysis. In this contribution, pivotal face elements are identified using Histograms of Topographical features (HoT), which are a generalization of the topographical primal sketch. In order to improve both the discrimination between different pain intensity values and the generalization with respect to the monitored persons, we transfer the data representation from the emotion-oriented Cohn-Kanade database to the UNBC McMaster Shoulder Pain database.
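The abstract does not detail the transfer mechanism; the sketch below only illustrates the general idea of learning a representation on one database and reusing it when training a pain-intensity estimator on another. The feature matrices, the PCA projection and the SVR regressor are all placeholder assumptions, not the paper's HoT pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR

# Placeholder feature matrices (HoT extraction not shown).
X_src = np.random.rand(500, 1024)        # features from the source (emotion) database
X_tgt = np.random.rand(300, 1024)        # features from the target (pain) database
y_tgt = np.random.randint(0, 16, 300)    # pain intensity labels for the target data

rep = PCA(n_components=64).fit(X_src)    # representation learned on the source data
reg = SVR().fit(rep.transform(X_tgt), y_tgt)   # estimator trained in that representation
pred = reg.predict(rep.transform(X_tgt[:5]))
```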
british machine vision conference | 2013
Laura Florea; Corneliu Florea; Ruxandra Vrânceanu; Constantin Vertan
We investigate the possibility of estimating the cognitive process used by a person when addressing a mental challenge by following the Eye Accessing Cue (EAC) model from Neuro-Linguistic Programming (NLP) theory [1]. This model, shown in figure 1, describes the eye movements that are not used for visual tasks (non-visual movements) and suggests that the direction of gaze, in such a case, can be an indicator of the internal representational system used by a person facing a given query. The actual EAC is identified by distinguishing the relative position of the iris with respect to the eye socket (lid edge). Our approach is to determine the four limits of the eye socket (the inner and outer corners, the upper and lower lids) and the iris center, and to subsequently analyze the identified region. The entire method flowchart is presented in figure 2.

The schematics of the method used to independently locate each eye landmark is described in figure 3. Given the face square found by the Viola-Jones algorithm [4] and the eye centers given by the method from [3], we fuse information related to position, normalized luminance, template matching and shape constraining. For position and luminance, we construct priors over the training database, while for template matching we describe a patch by the concatenation of integral and edge projections on the horizontal and vertical directions. The score of how likely a patch is to be centered on the true landmark position is given by a Multi-Layer Perceptron. For the shape constraint, inspired by the CLM [2], we construct the probability density function in the eigenspace of the shapes in the training set. By ordering the landmarks according to a prior confidence (e.g. eye outer corners are more reliable than upper and lower eye boundaries) and by keeping all points fixed except the current least reliable one, we build the likelihood of various candidate landmark positions. This information is fused with the previous stages and we iteratively improve the landmark position. The final landmark position is taken as the weighted center of mass of the convex combination between the initial stages and the shape likelihood.

To study the specifics of gaze direction we introduce the Eye-Chimera database, which comprises 1172 frontal face images, grouped according to the 7 gaze directions, with a set of 5 points marked for each eye: the iris center and 4 points delimiting the bounding box.

Recognizing individual EACs. The recognition of the EAC case (gaze direction) is done by identifying the position of the iris center inside the eye socket, complemented by information on the interior of the eye-delimited shape. The interior of the eye quadrilateral shape is described by integral projections normalized to 32 samples. For the actual recognition we train a random forest that takes the EAC feature (landmark positions and integral features) as input. We consider two types of recognition situations: three cases (looking
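As one concrete piece of the pipeline above, the sketch below shows how a patch descriptor built from integral and edge projections along the horizontal and vertical directions might look. The patch size and the resampling of each profile to 32 values are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def integral_projections(patch, n=32):
    """Row/column mean-intensity profiles, each resampled to n points."""
    rows = patch.mean(axis=1)
    cols = patch.mean(axis=0)
    def resample(v, n):
        return np.interp(np.linspace(0, len(v) - 1, n), np.arange(len(v)), v)
    return np.concatenate([resample(rows, n), resample(cols, n)])

def edge_projections(patch, n=32):
    """Same profiles, computed on the gradient magnitude of the patch."""
    gy, gx = np.gradient(patch.astype(float))
    return integral_projections(np.abs(gx) + np.abs(gy), n)

# Descriptor for one candidate eye-landmark patch (placeholder data).
patch = np.random.rand(40, 60)
descriptor = np.concatenate([integral_projections(patch), edge_projections(patch)])
```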
international conference on consumer electronics | 2008
Felix Albu; Corneliu Florea; Adrian Zamfir; Alexandru Drimbarean
Two new global motion estimation methods are proposed. The first one, called sign projection (SP), is obtained by modifying the integral projection estimation method using two thresholds on pixel values. The second method, called binary incrementation (BI), is obtained by using only one threshold to generate binary vectors from two images. It is shown that the proposed approaches provide motion estimation accuracy similar to the integral projection (IP) and phase correlation (PC) methods, while having reduced numerical complexity and memory requirements, leading to shorter processing times as well as lower power consumption. The technique is particularly suitable for implementation in consumer devices such as digital video cameras.
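A minimal sketch of the baseline integral-projection (IP) global motion estimation that the paper compares against: project each frame onto rows and columns, then search for the 1-D shifts that best align the projections. The SP and BI variants described above would threshold or binarise the frames before projecting; the search range and error measure below are assumptions.

```python
import numpy as np

def projections(frame):
    """Integral projections: sums of pixel values along rows and columns."""
    return frame.sum(axis=1), frame.sum(axis=0)

def best_shift(p_ref, p_cur, max_shift=16):
    """1-D shift minimising the mean absolute difference of the projections
    (assumes the projections are longer than max_shift)."""
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = p_ref[max(0, s):len(p_ref) + min(0, s)]
        b = p_cur[max(0, -s):len(p_cur) + min(0, -s)]
        err = np.abs(a - b).mean()
        if err < best_err:
            best, best_err = s, err
    return best

def global_motion(ref, cur, max_shift=16):
    """Estimate the (vertical, horizontal) global shift between two frames."""
    ry, rx = projections(ref)
    cy, cx = projections(cur)
    return best_shift(ry, cy, max_shift), best_shift(rx, cx, max_shift)
```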
advanced concepts for intelligent vision systems | 2007
Corneliu Florea; Constantin Vertan; Laura Florea
Digital capture of radiographic film with a consumer digital still camera significantly decreases the dynamic range and, hence, the visibility of details. We propose a method that boosts the dynamic range of the processed X-ray image based on the fusion of a set of digital images acquired under different exposure values. The fusion is controlled by fuzzy-like confidence information, and the luminance range is oversampled by using logarithmic image processing operators.
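A hedged sketch of the overall idea of confidence-weighted exposure fusion: each capture contributes more where its pixels are well exposed. The Gaussian-shaped "fuzzy-like" weight and the plain weighted average are illustrative stand-ins; the paper's actual confidence function and its LIP-based luminance accumulation are not given in the abstract.

```python
import numpy as np

def exposure_confidence(img, mid=0.5, sigma=0.2):
    """Fuzzy-like weight: pixels near mid-gray are considered well exposed."""
    return np.exp(-((img - mid) ** 2) / (2 * sigma ** 2))

def fuse_exposures(stack):
    """Confidence-weighted average of a stack of normalised exposures."""
    stack = np.asarray(stack, dtype=float)        # shape (n, H, W), values in [0, 1]
    w = exposure_confidence(stack)
    return (w * stack).sum(axis=0) / (w.sum(axis=0) + 1e-8)
```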
International Journal of Applied Mathematics and Computer Science | 2013
Corneliu Florea; Laura Florea
While most state-of-the-art image processing techniques were built under so-called classical linear image processing, an alternative that presents superior behavior for specific applications comes in the form of Logarithmic Type Image Processing (LTIP). This refers to mathematical models constructed for the representation and processing of gray-tone images. In this paper we describe a general mathematical framework that allows extensions of these models by various means while preserving their mathematical properties. We propose a parametric extension of LTIP models and discuss its similarities with the human visual system. The usability of the proposed extension is verified for an application of contrast-based auto-focus in extreme lighting conditions. The closure property of the described models facilitates superior behavior when compared with state-of-the-art methods.
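A small sketch of a contrast-based focus measure computed with a logarithmic-type difference instead of a linear one, in the spirit of the auto-focus application above. The classical LIP subtraction is used here as a stand-in, since the abstract does not specify the parametric LTIP operators.

```python
import numpy as np

D = 255.0  # assumed upper bound of the gray-level range

def log_sub(f, g, D=D):
    """Logarithmic-type subtraction (classical LIP form, used as a stand-in)."""
    return D * (f - g) / (D - g + 1e-8)

def focus_measure(img):
    """Contrast-based focus value: mean magnitude of LIP-style differences
    between horizontally adjacent pixels."""
    img = img.astype(float)
    return np.abs(log_sub(img[:, 1:], img[:, :-1])).mean()

# Auto-focus then amounts to picking the capture with the largest measure:
# best_frame = max(frames, key=focus_measure)
```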
scandinavian conference on image analysis | 2013
Razvan George Condorovici; Corneliu Florea; Ruxandra Vrânceanu; Constantin Vertan
This paper presents an automatic system for the recognition of artistic genre in digital representations of paintings. The solution comes as part of the recent extensive effort to develop image processing solutions that facilitate a better understanding of art. As art addresses human perception, the extracted features are perceptually inspired. While the 3D Color Histogram and Gabor Filter Energy have been used before for art description, frameworks extracted using anchoring theory are novel in this field. The paper investigates the use of 7 classifiers; the resulting performance, evaluated on a database containing more than 3400 paintings from 6 different genres, outperforms the reported state of the art.
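For the two features with standard definitions, one plausible parameterisation is sketched below: a joint 8x8x8 RGB histogram and the mean response energy of a small Gabor filter bank (via scikit-image). The bin count, frequencies and orientations are assumptions; the anchoring-theory features are not reproduced here.

```python
import numpy as np
from skimage.filters import gabor

def color_histogram_3d(img_rgb, bins=8):
    """Joint RGB histogram (bins**3 values), normalised to sum to 1."""
    pixels = img_rgb.reshape(-1, 3).astype(float)
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
    return (hist / hist.sum()).ravel()

def gabor_energy(img_gray, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """Mean filter-response energy over a small bank of Gabor filters."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(img_gray, frequency=f, theta=k * np.pi / n_orient)
            feats.append(np.mean(real ** 2 + imag ** 2))
    return np.array(feats)
```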
Journal of Visual Communication and Image Representation | 2015
Răzvan George Condorovici; Corneliu Florea; Constantin Vertan
Highlights: We introduce a new perceptual system for painting recognition. Each perceptual category is addressed with a dedicated feature. Painting dominant shapes are described using the anchoring theory. Color palette is represented through the newly introduced Dominant Color Volume.

We propose a framework for the automatic recognition of artistic genre in digital representations of paintings. As we aim to contribute to a better understanding of art by humans, we extensively mimic low-level and medium-level human perception by relying on perceptually inspired features. While Gabor filter energy has been used for art description, Dominant Color Volume (DCV) and frameworks extracted using anchoring theory are novel in this field. To perform the actual genre recognition, we rely on a late fusion scheme based on combining Multi-Layer Perceptron (MLP) classified data with Support Vector Machines (SVM). The performance is evaluated on an extended database containing more than 4000 paintings from 8 different genres, outperforming the reported state of the art.
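The late fusion step can be illustrated with a minimal sketch: train an MLP and an SVM separately and combine their class probabilities. The equal-weight averaging and the placeholder features below are assumptions; they do not reproduce the paper's fusion rule or descriptors.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Placeholder features and labels for 8 genres.
X_train = np.random.rand(200, 64)
y_train = np.random.randint(0, 8, 200)
X_test = np.random.rand(20, 64)

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_train, y_train)
svm = SVC(probability=True).fit(X_train, y_train)

# Late fusion: average the per-class probabilities of the two classifiers.
proba = 0.5 * mlp.predict_proba(X_test) + 0.5 * svm.predict_proba(X_test)
pred = proba.argmax(axis=1)
```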
computer analysis of images and patterns | 2013
Ruxandra Vrânceanu; Corneliu Florea; Laura Florea; Constantin Vertan
This paper investigates the recognition of the Eye Accessing Cues (EACs) used in Neuro-Linguistic Programming (NLP) and shows how computer vision techniques can be used for understanding the meaning of non-visual gaze directions. Any specific EAC is identified by the relative position of the iris within the eye bounding box, which is determined from modified versions of the classical integral projections. The eye cues are inferred via a logistic classifier from features extracted within the eye bounding box. The proposed solution is shown to outperform other classical approaches in terms of detection rate.
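A rough sketch, under assumptions, of how the iris position inside the eye bounding box could be read off integral projections (the iris being the darkest region) and passed to a logistic classifier; the paper's modified projections and exact features are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iris_offset(eye_patch):
    """Rough iris location from integral projections: take the minima of the
    row/column intensity profiles and express them relative to the eye
    bounding box, as values in [0, 1]."""
    rows = eye_patch.mean(axis=1)
    cols = eye_patch.mean(axis=0)
    return np.array([rows.argmin() / (len(rows) - 1),
                     cols.argmin() / (len(cols) - 1)])

# Assuming precomputed eye_patches and gaze_labels arrays:
# feats = np.stack([iris_offset(p) for p in eye_patches])
# clf = LogisticRegression(max_iter=1000).fit(feats, gaze_labels)
```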
international conference on consumer electronics | 2011
Corneliu Florea; Adrian Capata; Mihai Ciuc; Peter Corcoran
Influenced by the widespread adoption of HDTV, consumer imaging devices have begun to feature full HD (high-definition) video capture. With this high-resolution capability, small imperfections in human faces are captured in full HD video. When used for portrait imaging, the resulting video is frequently unsatisfactory to the user. Manufacturers of imaging devices require practical, real-time solutions that can mitigate facial details emphasized by the HD resolution, while preserving the overall quality of the portrait images. The high bandwidth and processing requirements of full HD video make this a challenging task. In this paper, a practical algorithm, now incorporated in a number of consumer devices, is explained. Major challenges and their solutions are presented. Attention is given to real-time optimizations of the algorithm, which is designed with a view to its suitability for partial or full hardware implementation.