
Publication


Featured research published by Razvan Andonie.


Foundations of Computer Science | 2001

A class of loop self-scheduling for heterogeneous clusters

Anthony T. Chronopoulos; Razvan Andonie; Manuel Benche; Daniel Grosu

Distributed computing systems are a viable and less expensive alternative to parallel computers. However, a serious difficulty in concurrent programming of a distributed system is scheduling and load balancing when the system consists of heterogeneous computers. Distributed scheduling schemes suitable for parallel loops with independent iterations on heterogeneous computer clusters have been designed in the past. In this work we consider a class of self-scheduling schemes for parallel loops with independent iterations which have been applied to multiprocessor systems, and we extend these schemes to heterogeneous distributed systems. We present tests showing that the distributed versions of these schemes maintain load-balanced execution on heterogeneous systems.
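The self-scheduling idea, giving each requesting processor a chunk of the remaining iterations sized by its relative speed, can be sketched as below. The speeds, the round-robin request order, and the guided-style chunk rule are illustrative assumptions, not the exact schemes evaluated in the paper.

```python
def weighted_chunks(total_iters, speeds, min_chunk=1):
    """Assign chunks of a parallel loop to processors, sizing each chunk
    by the requesting processor's share of the total speed."""
    remaining, start = total_iters, 0
    schedule = []  # (processor_id, first_iter, last_iter)
    total_speed = sum(speeds)
    p = 0
    while remaining > 0:
        # guided-style rule: a speed-weighted fraction of the remaining work
        size = max(min_chunk, int(remaining * speeds[p] / total_speed))
        size = min(size, remaining)
        schedule.append((p, start, start + size - 1))
        start += size
        remaining -= size
        p = (p + 1) % len(speeds)  # next processor requests work
    return schedule
```

With speeds `[3, 1]`, the faster processor's first chunk is several times larger than the slower one's, which is the load-balancing effect the schemes aim for.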


International Symposium on Signals, Circuits and Systems | 2003

Differential implementations of threshold logic gates

Valeriu Beiu; José M. Quintana; María J. Avedillo; Razvan Andonie

This paper reviews differential implementations of threshold logic gates, detailing two classes of solutions: capacitive (switched capacitor and floating gate), and conductance/current.


Parallel, Distributed and Network-Based Processing | 2012

Clustering Superpeers in P2P Networks by Growing Neural Gas

Mihai Dumitrescu; Razvan Andonie

A challenging problem in peer-to-peer (P2P) networks is the management of super-peers: how to dynamically adapt the network topology (the number and locations of super-peers) in accordance with network changes. The super-peers are cluster centers which dynamically adapt their number and location. We introduce a self-organizing super-peer overlay that suits the communication requirements of a P2P system. Our approach is based on the Growing Neural Gas clustering algorithm. The proposed framework may be suitable for disseminating network services in dynamic and large-scale networks, where large numbers of data items and services need to be replicated, moved, and deleted in a decentralized manner. In our experiments, performed on the ProtoPeer simulator, the proposed algorithm adapts well to variable network load and churn.


Neural Networks | 2013

Bayesian ARTMAP for regression

Lucian Sasu; Razvan Andonie

Bayesian ARTMAP (BA) is a recently introduced neural architecture which combines Fuzzy ARTMAP competitive learning with Bayesian learning. Training is generally performed online, in a single epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA has been used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one approximator in the set of BAR approximators situated at minimum distance from it (best approximation); (iii) we experimentally compare the online-trained BAR with several neural models on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, for both theoretical and practical reasons.
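The prediction step, mixing per-category output estimates by Bayesian posterior weight, can be sketched as follows. The isotropic Gaussian categories, their parameters, and the single output value per category are illustrative placeholders; the full BAR training procedure is not shown.

```python
import math

def bar_predict(x, categories):
    """Posterior-weighted prediction over Gaussian categories.
    Each category is a dict: {'mean': [...], 'var': float, 'prior': float, 'y': float}."""
    k = len(x)
    weights = []
    for c in categories:
        # isotropic Gaussian likelihood of x under this category
        d2 = sum((xi - mi) ** 2 for xi, mi in zip(x, c['mean']))
        lik = math.exp(-d2 / (2 * c['var'])) / (2 * math.pi * c['var']) ** (k / 2)
        weights.append(c['prior'] * lik)
    z = sum(weights)
    # mix the per-category output estimates by posterior probability
    return sum(w * c['y'] for w, c in zip(weights, categories)) / z
```

Near a category's mean the prediction approaches that category's output; between categories it interpolates smoothly, which is the behaviour that makes the architecture usable for regression.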


Computational Intelligence in Bioinformatics and Computational Biology | 2012

Molecular distance geometry optimization using geometric build-up and evolutionary techniques on GPU

Levente Fabry-Asztalos; Istvan Lorentz; Razvan Andonie

We present a combination of methods addressing the molecular distance geometry problem, implemented on a graphics processing unit. First, we use geometric build-up with depth-first graph traversal; next, we refine the solution by simulated annealing (SA). For an exact but sparse distance matrix, the build-up method reconstructs the 3D structures with a root-mean-square error (RMSE) on the order of 0.1 Å. Small and medium structures (up to 10,000 atoms) are computed in less than 10 seconds. For the largest structures (up to 100,000 atoms), the build-up RMSE is 2.2 Å and the execution time is about 540 seconds. The performance of our approach depends largely on the graph structure. The SA step improves the accuracy of the solution at the expense of computational overhead.
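The core geometric build-up step, placing a new atom from exact distances to four already-placed, non-coplanar atoms, can be sketched by linearizing the sphere equations; the graph traversal, SA refinement, and GPU implementation are omitted.

```python
import numpy as np

def place_atom(anchors, dists):
    """Solve |x - p_i|^2 = d_i^2 for x given four non-coplanar anchors.
    Subtracting the first equation from the others removes the |x|^2 term,
    leaving a 3x3 linear system."""
    p = np.asarray(anchors, dtype=float)   # shape (4, 3)
    d = np.asarray(dists, dtype=float)     # shape (4,)
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    return np.linalg.solve(A, b)
```

Repeating this step atom by atom, in the order given by the graph traversal, reconstructs the full 3D structure when the distance data are exact; noisy or missing distances are what the SA refinement is for.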


International Symposium on Neural Networks | 2009

Fuzzy ARTMAP rule extraction in computational chemistry

Razvan Andonie; Levente Fabry-Asztalos; Bogdan Crivat; Sarah Abdul-Wahid; Badi' Abdul-Wahid

We focus on extracting rules from a trained FAMR model. The FAMR is a Fuzzy ARTMAP (FAM) incremental learning system used for classification, probability estimation, and function approximation. The set of generated rules is post-processed to improve its generalization capability. Our method is suitable for small training sets. We compare our method with another neuro-fuzzy algorithm and with two standard decision tree algorithms: CART trees and Microsoft Decision Trees. Our goal is to improve the efficiency of drug discovery by providing medicinal chemists with a predictive tool for the bioactivity of HIV-1 protease inhibitors.


International Conference on Adaptive and Intelligent Systems | 2009

A Multi-class Incremental and Decremental SVM Approach Using Adaptive Directed Acyclic Graphs

Honorius Galmeanu; Razvan Andonie

Multi-class approaches for SVMs are based on compositions of binary SVM classifiers. Because of the many binary classifiers involved, this approach is known to be computationally expensive for large training sets. We improve time efficiency by concurrently using two strategies: incremental training and reduction of the trained binary SVMs. We present the exact migration conditions for the binary SVMs during incremental training, and we rewrite these conditions for the case when the regularization parameter is optimized. The results are applied to a multi-class incremental/decremental SVM based on the Adaptive Directed Acyclic Graph. The regularization parameter is optimized online, rather than by retraining the SVM on all input samples for each value of the parameter.
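Directed-acyclic-graph evaluation over pairwise binary classifiers can be sketched as a list-elimination walk: each comparison removes one candidate class. The nearest-centroid `binary` callable in the usage below is a stand-in for the incrementally trained binary SVMs of the paper.

```python
def ddag_classify(x, classes, binary):
    """binary(a, b, x) returns the winning class among a and b.
    Walks the decision DAG, eliminating one candidate class per comparison."""
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        winner = binary(a, b, x)
        # the losing class can no longer be the answer; drop it
        if winner == a:
            remaining.pop()       # b eliminated
        else:
            remaining.pop(0)      # a eliminated
    return remaining[0]
```

For k classes this evaluates only k - 1 of the k(k - 1)/2 binary classifiers per query, which is the efficiency the DAG structure buys.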


International Conference on Optimization of Electrical and Electronic Equipment | 2008

Incremental / decremental SVM for function approximation

H. Galmeanu; Razvan Andonie

Training a support vector regression (SVR) reduces to migrating vectors in and out of the support set while modifying the associated thresholds. This paper gives a complete overview of the boundary conditions implied by vector migration through this process. The process is similar to training an SVM, although incrementing/decrementing vectors into/out of the solution does not coincide with increasing/decreasing the associated threshold. The analysis details the incremental and decremental procedures used to train the SVR; vectors with duplicate contributions are also considered. Particular attention is given to the migration of vectors among sets when the regularization parameter C is decreased. Finally, experimental data show that this parameter can be varied on a large scale, from complete training (overfitting) to a calibrated value, to tune the approximation performance of the regression.


Concurrency and Computation: Practice and Experience | 2006

An efficient concurrent implementation of a neural network algorithm

Razvan Andonie; Anthony T. Chronopoulos; Daniel Grosu; H. Galmeanu

The focus of this study is the efficient implementation of the neural network backpropagation algorithm on a network of computers (NOC) for concurrent execution. We assume a distributed system with heterogeneous computers, with the neural network replicated on each computer. We propose an architecture model with efficient pattern allocation that takes into account the speed of the processors and overlaps communication with computation. The training pattern set is distributed among the heterogeneous processors, with the mapping fixed during the learning process. We provide a heuristic pattern allocation algorithm that minimizes the execution time of backpropagation learning. Under the condition that each processor performs a task directly proportional to its speed, this allocation algorithm has polynomial-time complexity. We implemented our model on a dedicated network of heterogeneous computers, using Sejnowski's NetTalk benchmark for testing.
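Speed-proportional pattern allocation can be sketched as a largest-remainder split. This is an illustrative stand-in for the paper's heuristic, which additionally accounts for overlapping communication with computation.

```python
def allocate_patterns(n_patterns, speeds):
    """Split a training set among heterogeneous processors so that each
    share is proportional to processor speed (largest-remainder rounding)."""
    total = sum(speeds)
    shares = [n_patterns * s / total for s in speeds]
    counts = [int(sh) for sh in shares]
    leftover = n_patterns - sum(counts)
    # hand the leftover patterns to the largest fractional remainders
    order = sorted(range(len(speeds)),
                   key=lambda i: shares[i] - counts[i], reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return counts
```

With the mapping fixed before learning starts, each processor then iterates over its own partition in every epoch, exchanging only weight updates.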


Computer Software and Applications Conference | 2016

Big Holes in Big Data: A Monte Carlo Algorithm for Detecting Large Hyper-Rectangles in High Dimensional Data

Joseph Lemley; Filip Jagodzinski; Razvan Andonie

We present the first algorithm for finding holes in high-dimensional data that runs in polynomial time with respect to the number of dimensions; previous algorithms are exponential. Finding large empty rectangles or boxes in a set of points in 2D and 3D space has been well studied, and efficient algorithms exist to identify the empty regions in these low-dimensional spaces. Unfortunately, such efficiency is lacking in higher dimensions, where the problem has been shown to be NP-complete when the dimensionality is part of the input. Applications of algorithms that find large empty spaces include big data analysis, recommender systems, automated knowledge discovery, and query optimization. Our Monte Carlo-based algorithm discovers interesting maximal empty hyper-rectangles in cases where dimensionality and input size would otherwise make analysis impractical; its run time is polynomial in the size of the input and the number of dimensions. We apply the algorithm to a 39-dimensional data set of protein structures and discover interesting properties that, we believe, could not be inferred otherwise.
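The Monte Carlo flavour of the search, randomized seeding plus greedy per-dimension shrinking, can be sketched as below. The shrink rule and parameters are illustrative assumptions, not the paper's exact algorithm for maximal empty hyper-rectangles.

```python
import random

def grow_empty_box(points, dims, tries=200, seed=0):
    """Monte Carlo search for a large empty axis-aligned box in [0, 1]^dims:
    seed a random point, then shrink the unit box around it just enough to
    exclude every data point, cutting along the cheapest dimension."""
    rng = random.Random(seed)
    best, best_vol = None, 0.0
    for _ in range(tries):
        c = [rng.random() for _ in range(dims)]
        lo, hi = [0.0] * dims, [1.0] * dims
        for p in points:
            if not all(lo[k] < p[k] < hi[k] for k in range(dims)):
                continue  # point already outside the current box
            # cut along the dimension that loses the least width,
            # always keeping the seed point c inside the box
            def loss(k):
                return hi[k] - p[k] if p[k] > c[k] else p[k] - lo[k]
            k = min((k for k in range(dims) if p[k] != c[k]), key=loss)
            if p[k] > c[k]:
                hi[k] = p[k]
            else:
                lo[k] = p[k]
        vol = 1.0
        for k in range(dims):
            vol *= hi[k] - lo[k]
        if vol > best_vol:
            best_vol, best = vol, (lo[:], hi[:])
    return best, best_vol
```

Each trial costs time linear in the number of points times the number of dimensions, so the whole search stays polynomial even when the dimensionality is large, which is the regime the paper targets.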

Collaboration


An overview of Razvan Andonie's collaborations.

Top Co-Authors

Joseph Lemley, Central Washington University
Yvonne Chueh, Central Washington University
Anthony T. Chronopoulos, University of Texas at San Antonio
James L. Schwing, Central Washington University
Sarah Abdul-Wahid, Central Washington University
H. Galmeanu, Transylvania University