Catalina Cocianu
Bucharest University of Economic Studies
Publications
Featured research published by Catalina Cocianu.
2009 First International Conference on Advances in Satellite and Space Communications | 2009
Luminita State; Catalina Cocianu; Corina Sararu; Panayiotis Vlamos
The increasing use of location-based services has raised many issues of decision support and resource allocation. A crucial problem is how to answer Group k-Nearest Neighbour (GkNN) queries. A typical example of a GkNN query is finding one or more nearest meeting places for a group of people. Existing methods mostly rely on a centralised base station. However, mobile P2P systems offer many benefits, including self-organization, fault tolerance and load balancing. In this study, we propose and evaluate a novel P2P algorithm for GkNN queries, in which the mobile query objects and the static objects of interest belong to two different categories. The algorithm is evaluated in the MiXiM simulation framework on both real and synthetic datasets. The results show the practical feasibility of the P2P approach for solving GkNN queries in mobile networks (a brute-force statement of the query itself is sketched below).

The Discrete Fourier Transform (DFT) can be viewed as the Fourier transform of a periodic, regularly sampled signal. The Non-Uniform Discrete Fourier Transform (NuDFT) generalizes the DFT to data that may not be regularly sampled in the spatial or temporal dimensions. This flexibility is beneficial in situations where regular sensor placement cannot be guaranteed, or where prior knowledge of the informational content allows better sampling patterns than a regular one. The NuDFT is used in applications such as Synthetic Aperture Radar (SAR), Computed Tomography (CT), and Magnetic Resonance Imaging (MRI). Direct calculation of the NuDFT is time consuming, so in practice the Non-uniform Fast Fourier Transform (NuFFT) is used. The key step in computing the NuFFT is to interpolate the non-uniformly sampled data onto a uniform grid and then apply the Fast Fourier Transform. This interpolation process, called re-gridding or data translation, is known to be the most time consuming step (over 90% of the overall NuFFT computation time) [1]. FPGAs have been shown in prior work such as [1] to be a power-efficient way to perform this re-gridding. We propose a novel memory-efficient FPGA-based technique that groups the source points together in on-chip memory, thereby reducing the number of memory accesses. The proposed architecture exhibits high performance for the re-gridding process: a speed-up of over 7.5× was achieved compared with an existing FPGA-based technique for a target grid of size 256 × 256. The basic re-gridding procedure updates all target points within a specified distance of a source point using an interpolation kernel function. In this paper, we refer to this distance as the interpolation threshold; its value is expressed in terms of the number of target points. Our architecture divides the 2-dimensional (2D) uniform target grid T into smaller 2D sub-grids called tiles. A block-memory-based FIFO is associated with each tile, and the source points that affect a tile are grouped into the corresponding FIFO. FIFOs are read one at a time, and the tile corresponding to the FIFO being read is fetched from external memory into the device (a software sketch of this tiling scheme is given below). The performance of the proposed architecture is evaluated by simulation, counting the number of clock cycles required. Using a clock frequency of 50 MHz, chosen to be below the achieved maximum frequency of 60.16 MHz, the computation time for the translation process is calculated. Based on this computed time, throughput is reported in frames per second (fps).

Principal Component Analysis (PCA) is a well-known statistical method for feature extraction that has been widely used in a broad range of image processing applications. The multiresolution support provides a suitable framework for noise filtering and image restoration by noise suppression: statistically significant wavelet coefficients are determined and used to specify the multiresolution support. In the third section, we introduce the Generalized Multiresolution Noise Removal and Noise Feature Principal Component Analysis algorithms. Generalized Multiresolution Noise Removal extends the Multiresolution Noise Removal algorithm to general uncorrelated Gaussian noise, while Noise Feature Principal Component Analysis restores an image through a noise decorrelation process. A comparative analysis of the performance of the two algorithms is carried out experimentally against the standard Adaptive Mean Variance Restoration and Minimum Mean Squared Error algorithms. In the fourth section, we propose the Compression Shrinkage Principal Component Analysis algorithm and its model-free version as Shrinkage-PCA-based methods for noise removal and image restoration. Concluding remarks are provided in the final section of the paper.
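The GkNN query itself can be stated compactly: given a group Q of query points and a set P of static objects of interest, find the k objects minimizing an aggregate distance (here, the sum) to all group members. The following is a minimal, centralised brute-force sketch in Python, given only to fix the query semantics; the paper's contribution is a distributed P2P evaluation scheme, which this sketch does not attempt to reproduce, and all names are illustrative.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def group_knn(group: List[Point], objects: List[Point], k: int) -> List[Point]:
    """Centralised brute-force GkNN: return the k objects of interest
    minimizing the sum of Euclidean distances to all group members."""
    def aggregate_distance(obj: Point) -> float:
        return sum(math.dist(obj, q) for q in group)
    return sorted(objects, key=aggregate_distance)[:k]

# Example: three people looking for the two best meeting places.
people = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
places = [(2.0, 1.0), (5.0, 5.0), (0.0, 2.0), (3.0, 2.0)]
print(group_knn(people, places, k=2))
```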
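To make the tiling idea concrete, the sketch below emulates in software what the FPGA architecture does in hardware: each source point is binned into the FIFO of every tile whose target points lie within the interpolation threshold, and the tiles are then processed one at a time. This is a minimal sketch under assumed parameters (a truncated Gaussian interpolation kernel and illustrative grid and tile sizes); the memory organisation, kernel and fixed-point details of the actual hardware are not modelled.

```python
import numpy as np
from collections import defaultdict

GRID = 256          # target grid is GRID x GRID (matching the paper's example)
TILE = 32           # tile edge length (illustrative)
THRESHOLD = 3       # interpolation threshold, in target-grid points

def regrid(sources):
    """sources: list of (x, y, value) with 0 <= x, y < GRID (non-uniform samples).
    Returns the interpolated uniform target grid."""
    # Phase 1: group each source point into the FIFO of every tile it affects.
    fifos = defaultdict(list)
    tiles_per_row = GRID // TILE
    for x, y, v in sources:
        tx0 = max(int(x - THRESHOLD) // TILE, 0)
        tx1 = min(int(x + THRESHOLD) // TILE, tiles_per_row - 1)
        ty0 = max(int(y - THRESHOLD) // TILE, 0)
        ty1 = min(int(y + THRESHOLD) // TILE, tiles_per_row - 1)
        for tx in range(tx0, tx1 + 1):
            for ty in range(ty0, ty1 + 1):
                fifos[(tx, ty)].append((x, y, v))
    # Phase 2: read FIFOs one at a time; only the matching tile is "fetched".
    target = np.zeros((GRID, GRID))
    for (tx, ty), fifo in fifos.items():
        xs = np.arange(tx * TILE, (tx + 1) * TILE)
        ys = np.arange(ty * TILE, (ty + 1) * TILE)
        gx, gy = np.meshgrid(xs, ys, indexing="ij")
        for x, y, v in fifo:
            d2 = (gx - x) ** 2 + (gy - y) ** 2
            kernel = np.exp(-d2 / 2.0) * (d2 <= THRESHOLD ** 2)  # truncated Gaussian
            target[tx * TILE:(tx + 1) * TILE, ty * TILE:(ty + 1) * TILE] += v * kernel
    return target
```

Grouping by tile means each external-memory tile is fetched exactly once, which is the source of the memory-access savings the abstract describes.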
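The multiresolution support mentioned in the last abstract can be illustrated by hard thresholding: wavelet coefficients whose magnitude exceeds k·σ at each scale are deemed statistically significant and retained. The sketch below is a minimal 1D illustration using PyWavelets, not the paper's Generalized Multiresolution Noise Removal algorithm; the robust σ estimate, the factor k = 3, and the db2 wavelet are assumptions.

```python
import numpy as np
import pywt

def multiresolution_denoise(signal, wavelet="db2", level=4, k=3.0):
    """Keep only statistically significant wavelet coefficients
    (|c| > k * sigma per scale) and reconstruct the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    denoised = [coeffs[0]]                               # keep the coarse approximation
    for detail in coeffs[1:]:
        sigma = np.median(np.abs(detail)) / 0.6745       # robust per-scale noise estimate
        denoised.append(pywt.threshold(detail, k * sigma, mode="hard"))
    return pywt.waverec(denoised, wavelet)

# Example: a noisy step signal.
rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 128)
noisy = clean + 0.1 * rng.standard_normal(clean.size)
restored = multiresolution_denoise(noisy)
```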
International Conference on Information Technology: Coding and Computing | 2002
Catalina Cocianu; Luminita State; Panayiotis Vlamos
The effectiveness of restoration techniques mainly depends on the accuracy of the image model. One of the most popular degradation models assumes that image blur can be modeled as a superposition with an impulse response H, possibly space variant, whose output is subject to additive noise. Our research aimed at using statistical concepts and tools to develop a new class of image restoration algorithms. We report several variants of a heuristic scatter-matrix-based algorithm (HSBA), the algorithm HBA that uses the Bhattacharyya coefficient for image restoration, a heuristic regression-based algorithm for image restoration, and new approaches to image restoration based on the innovation algorithm. The LMS-type algorithm AMVR is also presented. A comparative study of the quality and efficiency of the presented noise removal algorithms is reported.
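The degradation model referred to above is commonly written g = H*f + n (blur followed by additive noise). As a point of reference for the adaptive local mean/variance idea behind LMS-type filters such as AMVR, the sketch below implements the classical Lee-style local-statistics filter; it is a stand-in under stated assumptions, not the AMVR algorithm from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_mean_variance_filter(g, noise_var, window=7):
    """Lee-type local-statistics filter for additive noise:
    f_hat = mean + gain * (g - mean), gain = max(var - noise_var, 0) / var."""
    mean = uniform_filter(g, size=window)
    mean_sq = uniform_filter(g ** 2, size=window)
    var = np.maximum(mean_sq - mean ** 2, 1e-12)   # local variance, kept positive
    gain = np.maximum(var - noise_var, 0.0) / var  # shrink flat regions toward the mean
    return mean + gain * (g - mean)

# Example: restore an image corrupted by additive Gaussian noise.
rng = np.random.default_rng(1)
f = np.zeros((64, 64)); f[16:48, 16:48] = 1.0
g = f + 0.2 * rng.standard_normal(f.shape)
f_hat = adaptive_mean_variance_filter(g, noise_var=0.04)
```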
Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing | 2012
Luminita State; Catalina Cocianu; Marinela Mircea
The paper reports new gradient ascent-type variants for learning SVMs. The theoretical development is presented in the third section of the paper. The performance of the proposed variants, in terms of recognition accuracy and generalization capacity, is evaluated experimentally, and the results are presented and discussed in the final part of the paper.
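Gradient ascent learning of an SVM typically maximizes the dual objective W(α) = Σᵢ αᵢ − ½ Σᵢ Σⱼ αᵢαⱼyᵢyⱼK(xᵢ, xⱼ) subject to 0 ≤ αᵢ ≤ C. The sketch below is a generic kernel-adatron-style projected gradient ascent, given only as a baseline for what "gradient ascent type" means here; the paper's specific variants are not reproduced. The RBF kernel, step size, and the omission of the bias term (hence of the equality constraint) are assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def svm_dual_gradient_ascent(X, y, C=1.0, lr=0.01, epochs=200):
    """Maximize the SVM dual by projected gradient ascent.
    y must be in {-1, +1}. Returns the dual variables alpha."""
    K = rbf_kernel(X)
    Q = (y[:, None] * y[None, :]) * K                 # Q_ij = y_i y_j K(x_i, x_j)
    alpha = np.zeros(len(y))
    for _ in range(epochs):
        grad = 1.0 - Q @ alpha                        # gradient of the dual objective
        alpha = np.clip(alpha + lr * grad, 0.0, C)    # project onto the box [0, C]
    return alpha

def decision_function(alpha, X, y, X_new, gamma=0.5):
    K = np.exp(-gamma * np.sum((X_new[:, None, :] - X[None, :, :]) ** 2, axis=2))
    return K @ (alpha * y)
```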
International Conference on Enterprise Information Systems | 2018
Catalina Cocianu; Alexandru Stan
This paper focuses on the development of an image registration methodology for digital signature recognition. We consider two perturbation models, namely a rigid transformation and a mixture of shear and rigid deformation. The proposed methodology involves three stages. In the first stage, both the acquired image and the stored one are binarized to reduce the computational effort. Then an evolution strategy (ES) is applied to register the resulting binary images. The quality of each chromosome in a population is evaluated by a mutual-information-based fitness function. In order to speed up the computation of fitness values, we propose a computation strategy based on the binary representation of the images and the sparsity of the image matrices. Finally, we evaluate the registration capabilities of the proposed methodology by means of quantitative measures as well as qualitative indicators. The experimental results and some conclusions concerning the capabilities of the various methods derived from the proposed methodology are reported in the final section of the paper.
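As a point of reference for the registration loop, the sketch below pairs a minimal (μ+λ) evolution strategy over rigid-transform parameters (rotation angle, translations) with a histogram-based mutual information fitness. It is a simplified stand-in under stated assumptions (SciPy's rotate/shift, a 32-bin joint histogram, a fixed mutation scale), not the paper's methodology, and it omits the sparsity-based speed-up.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))

def apply_rigid(img, theta, tx, ty):
    """Rotate by theta degrees, then translate by (tx, ty)."""
    return shift(rotate(img, theta, reshape=False, order=1), (tx, ty), order=1)

def es_register(fixed, moving, mu=5, lam=20, generations=50, sigma=2.0):
    """(mu + lambda) ES over (theta, tx, ty), maximizing mutual information."""
    pop = [np.random.uniform(-10, 10, size=3) for _ in range(mu)]
    for _ in range(generations):
        children = [p + sigma * np.random.standard_normal(3)
                    for p in pop for _ in range(lam // mu)]
        scored = sorted(pop + children, reverse=True,
                        key=lambda c: mutual_information(fixed, apply_rigid(moving, *c)))
        pop = scored[:mu]
    return pop[0]   # best (theta, tx, ty)
```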
International Conference on System Theory, Control and Computing | 2014
Luminita State; Catalina Cocianu; Marinela Mircea
The aim of the paper is to report a new genetic-computation-based method for designing a nonlinear soft-margin SVM that yields significant improvements in discriminating between two classes. The SVM is designed in a supervised way; in general, the samples coming from the two classes are not linearly separable. The experimental analysis, reported in the fourth section of the paper, was performed on artificially generated data as well as on the Ripley and MONK's datasets. The tests showed real improvements in both recognition rate and generalization capacity without significantly increasing the computational complexity.
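One common way genetic computation is used to design a nonlinear soft-margin SVM is to evolve its hyperparameters (the penalty C and an RBF kernel width γ) against a cross-validation fitness. The sketch below implements that generic scheme with scikit-learn; it is an assumed reading of "genetic design", not necessarily the encoding or operators used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(genome, X, y):
    """Genome encodes (log2 C, log2 gamma); fitness is 5-fold CV accuracy."""
    C, gamma = 2.0 ** genome[0], 2.0 ** genome[1]
    model = SVC(C=C, gamma=gamma, kernel="rbf")
    return cross_val_score(model, X, y, cv=5).mean()

def evolve_svm(X, y, pop_size=20, generations=30, sigma=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    pop = rng.uniform(-5, 5, size=(pop_size, 2))             # initial (log2 C, log2 gamma)
    for _ in range(generations):
        scores = np.array([fitness(g, X, y) for g in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # truncation selection
        children = parents + sigma * rng.standard_normal(parents.shape)  # Gaussian mutation
        pop = np.vstack([parents, children])
    best = max(pop, key=lambda g: fitness(g, X, y))
    return 2.0 ** best[0], 2.0 ** best[1]                    # (C, gamma)
```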
Applied Mechanics and Materials | 2011
Luminita State; Catalina Cocianu; Panayiotis Vlamos
Training an SVM amounts to solving a linearly constrained quadratic programming (QP) problem in a number of variables equal to the number of data points, and this optimization problem becomes challenging when the number of data points exceeds a few thousand. Because the computational complexity of existing algorithms grows extremely large with thousands of support vectors, making the SVM QP problem intractable, several decomposition algorithms that make no assumptions on the expected number of support vectors have been proposed instead. In this paper, we propose a heuristic gradient-type learning algorithm for training an SVM on linearly separable data, and analyze its performance in terms of accuracy and efficiency. To evaluate the efficiency of our learning method, several tests were performed against Platt's SMO method, and the conclusions are formulated in the final section of the paper.
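For linearly separable data, a gradient-type alternative to solving the QP directly is stochastic sub-gradient descent on the regularized hinge loss (the Pegasos scheme). The sketch below shows that generic approach as an example of the kind of method SMO is usually compared against; it is not the heuristic algorithm proposed in the paper, and the bias term is dropped for brevity.

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=100, rng=None):
    """Stochastic sub-gradient descent on (lam/2)||w||^2 + mean hinge loss.
    y in {-1, +1}. Returns the weight vector w."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                     # decreasing step size
            if y[i] * (w @ X[i]) < 1:                 # margin violated: hinge active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                                     # only the regularizer acts
                w = (1 - eta * lam) * w
    return w

# Example: linearly separable points.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])
w = pegasos_svm(X, y)
print(np.sign(X @ w))   # should match y
```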
Research Challenges in Information Science | 2008
Luminita State; Catalina Cocianu; Panayiotis Vlamos
The aim of the research reported in the paper was twofold: to propose a new approach to cluster analysis and to investigate its performance when combined with dimensionality reduction schemes. The search for optimal clusters approximating the unknown classes aims at homogeneous groups, where homogeneity is defined in terms of the 'typicality' of components with respect to the current skeleton. Our method is described in the third section of the paper. The compression scheme is defined in terms of the principal directions of the available data cloud. The final section presents tests comparing the performance of our method with the standard k-means clustering technique, applied both in the initial space and to compressed data.
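The dimensionality reduction scheme referred to above projects the data onto its leading principal directions before clustering. The sketch below shows that generic PCA-then-k-means pipeline in NumPy as a frame of reference for the comparison described; the typicality-based clustering itself is not reproduced, and the number of retained directions is an assumption.

```python
import numpy as np

def pca_compress(X, n_components=2):
    """Project data onto the leading principal directions of the cloud."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k=3, iters=50, rng=None):
    """Standard Lloyd's k-means, used here as the baseline technique."""
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Cluster in the compressed space, then compare with clustering the raw data.
X = np.vstack([np.random.default_rng(s).normal(loc=3 * s, size=(50, 10)) for s in range(3)])
labels_compressed, _ = kmeans(pca_compress(X, n_components=2), k=3)
labels_raw, _ = kmeans(X, k=3)
```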
International Conference on Software and Data Technologies | 2006
Luminita State; Catalina Cocianu; Panayiotis Vlamos; Viorica Stefanescu
International Conference on Informatics in Control, Automation and Robotics | 2018
Catalina Cocianu; Luminita State; Panayiotis Vlamos; Viorica Stefanescu
Symbolic and Numeric Algorithms for Scientific Computing | 2007
Luminita State; Catalina Cocianu; Panayiotis Vlamos; Doru Constantin