Publication


Featured research published by Ömer Eğecioğlu.


Theoretical Computer Science | 2009

Asynchronous spiking neural P systems

Matteo Cavaliere; Oscar H. Ibarra; Gheorghe Păun; Ömer Eğecioğlu; Mihai Ionescu; Sara Woodworth

We consider here spiking neural P systems with a non-synchronized (i.e., asynchronous) use of rules: in any step, a neuron may or may not apply one of the rules enabled by the number of spikes it contains (further spikes can arrive, thus changing the rules enabled in the next step). Because the time between two firings of the output neuron is now irrelevant, the result of a computation is the number of spikes sent out by the system, not the distance between certain spikes leaving the system. The additional non-determinism that non-synchronization introduces into the functioning of the system is proved not to decrease the computing power when extended rules are used (a rule can produce several spikes). That is, we again obtain equivalence with Turing machines (interpreted as generators of sets of (vectors of) numbers). However, this problem remains open for standard spiking neural P systems, whose rules can produce only one spike. On the other hand, we prove that asynchronous systems with extended rules in which each neuron is either bounded or unbounded are not computationally complete. For these systems, the configuration-reachability, membership (in terms of generated vectors), emptiness, infiniteness, and disjointness problems are shown to be decidable. However, containment and equivalence are undecidable.
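
A minimal sketch of the asynchronous firing mode described above, for a toy two-neuron system with extended rules. The exact-count trigger condition, the fire/idle coin flip, and the topology are illustrative simplifications, not the paper's formal definitions:

```python
import random

class Rule:
    """Extended rule: if the neuron holds exactly `trigger` spikes (a simple
    stand-in for the regular-expression condition in SN P systems), it may
    consume `consume` spikes and emit `produce` spikes to every neighbor."""
    def __init__(self, trigger, consume, produce):
        self.trigger, self.consume, self.produce = trigger, consume, produce

def step(spikes, rules, synapses, rng):
    """One asynchronous step: every enabled neuron independently chooses to
    fire or stay idle, so arriving spikes may change what is enabled later."""
    outgoing = {n: 0 for n in spikes}
    for n, rs in rules.items():
        enabled = [r for r in rs if spikes[n] == r.trigger]
        if enabled and rng.random() < 0.5:      # may also *not* apply a rule
            r = rng.choice(enabled)             # nondeterministic rule choice
            spikes[n] -= r.consume
            outgoing[n] = r.produce
    emitted = 0
    for n, p in outgoing.items():
        for m in synapses.get(n, []):
            if m == "out":
                emitted += p                    # spikes sent to the environment
            else:
                spikes[m] += p
    return emitted

rng = random.Random(0)
spikes = {"a": 2, "b": 0}
rules = {"a": [Rule(2, 2, 1)], "b": [Rule(1, 1, 2)]}
synapses = {"a": ["b"], "b": ["out"]}
# The result of the computation is the number of spikes sent out, not any
# timing information, exactly because steps are not synchronized.
print(sum(step(spikes, rules, synapses, rng) for _ in range(20)))
```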


Bioinformatics | 2001

A new approach to sequence comparison: normalized sequence alignment.

Abdullah N. Arslan; Ömer Eğecioğlu; Pavel A. Pevzner

The Smith-Waterman algorithm for local sequence alignment is one of the most important techniques in computational molecular biology. This ingenious dynamic programming approach was designed to reveal the highly conserved fragments by discarding poorly conserved initial and terminal segments. However, the existing notion of local similarity has a serious flaw: it does not discard poorly conserved intermediate segments. The Smith-Waterman algorithm finds the local alignment with maximal score, but it is unable to find the local alignment with maximum degree of similarity (e.g. maximal percentage of matches). Moreover, there is still no efficient algorithm that answers the following natural question: do two sequences share a (sufficiently long) fragment with more than 70% similarity? As a result, local alignment sometimes produces a mosaic of well-conserved fragments artificially connected by poorly conserved or even unrelated fragments. This may lead to problems in comparison of long genomic sequences and comparative gene prediction, as recently pointed out by Zhang et al. (Bioinformatics, 15, 1012-1019, 1999). In this paper we propose a new sequence comparison algorithm (normalized local alignment) that reports the regions with maximum degree of similarity. The algorithm is based on fractional programming and its running time is O(n^2 log n). In practice, normalized local alignment is only 3-5 times slower than the standard Smith-Waterman algorithm.
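
A hedged sketch of the fractional-programming idea: a Dinkelbach-style iteration repeatedly solves a standard Smith-Waterman DP whose per-character scores are shifted by the current ratio lam, until lam stabilizes at the maximum normalized score score(A)/(len(A)+L). The scoring parameters, the length offset L, and the bookkeeping that charges a substitution two consumed characters are illustrative simplifications, not the paper's exact scheme:

```python
import numpy as np

def parametric_sw(x, y, lam, match=2.0, mismatch=-1.0, gap=-1.0):
    """Smith-Waterman with every consumed character charged lam, so the DP
    maximizes  score(A) - lam * len(A)  over all local alignments A."""
    n, m = len(x), len(y)
    S = np.zeros((n + 1, m + 1))
    length = np.zeros((n + 1, m + 1), dtype=int)   # characters consumed by A
    best, best_len = 0.0, 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if x[i - 1] == y[j - 1] else mismatch
            cands = [
                (0.0, 0),                                           # restart
                (S[i - 1, j - 1] + s - 2 * lam, length[i - 1, j - 1] + 2),
                (S[i - 1, j] + gap - lam, length[i - 1, j] + 1),
                (S[i, j - 1] + gap - lam, length[i, j - 1] + 1),
            ]
            S[i, j], length[i, j] = max(cands)
            if S[i, j] > best:
                best, best_len = S[i, j], length[i, j]
    return best, best_len

def normalized_score(x, y, L=10, iters=20, tol=1e-9):
    """Dinkelbach iteration: lam converges to max score(A) / (len(A) + L)."""
    lam = 0.0
    for _ in range(iters):
        v, alen = parametric_sw(x, y, lam)
        new_lam = (v + lam * alen) / (alen + L)  # recover raw score, re-normalize
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return lam

print(normalized_score("ACGTACGTGG", "TTACGTCGTA"))
```

Each Dinkelbach round costs one plain Smith-Waterman pass, which is consistent with the observed 3-5x slowdown over a single unnormalized alignment.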


International Conference on Data Engineering | 2010

Anonymizing weighted social network graphs

Sudipto Das; Ömer Eğecioğlu; Amr El Abbadi

The increasing popularity of social networks has initiated a fertile research area in information extraction and data mining. Although such analysis can facilitate better understanding of sociological, behavioral, and other interesting phenomena, there is a growing concern about personal privacy being breached, thereby requiring effective anonymization techniques. In this paper, we consider edge weight anonymization in social graphs. Our approach builds a linear programming (LP) model which preserves properties of the graph that are expressible as linear functions of the edge weights. Such properties form the foundations of many important graph-theoretic algorithms such as shortest paths, k-nearest neighbors, minimum spanning tree, etc. Off-the-shelf LP solvers can then be used to find solutions to the resulting model where the computed solution constitutes the weights in the anonymized graph. As a proof of concept, we choose the shortest paths problem, and experimentally evaluate the proposed techniques using real social network data sets.
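
A minimal sketch of the pipeline with a deliberately simplified constraint family: here the LP merely preserves the total ordering of the edge weights, one set of linear constraints such a model could contain (the paper's actual model encodes shortest-path properties). The toy graph, the separation gap, and the random objective are all illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Toy graph: edge -> original (sensitive) weight.
edges = {("a", "b"): 1.0, ("b", "c"): 2.0, ("a", "c"): 4.0, ("c", "d"): 3.0}
names = list(edges)
idx = {e: i for i, e in enumerate(names)}

# Linear property to preserve (illustrative): the total ordering of the edge
# weights.  For consecutive edges in sorted order require w'[e1] + gap <= w'[e2],
# i.e. the LP row  w'[e1] - w'[e2] <= -gap.
order = sorted(names, key=lambda e: edges[e])
gap = 0.1
A_ub, b_ub = [], []
for e1, e2 in zip(order, order[1:]):
    row = np.zeros(len(names))
    row[idx[e1]], row[idx[e2]] = 1.0, -1.0
    A_ub.append(row)
    b_ub.append(-gap)

# A random objective picks some vertex of the feasible polytope, so the
# published weights need not resemble the originals at all.
rng = np.random.default_rng(7)
res = linprog(rng.uniform(-1.0, 1.0, len(names)),
              A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0.1, 10.0)] * len(names))
assert res.success
print(dict(zip(names, res.x)))   # anonymized weights, ordering preserved
```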


Proceedings of the Third Forum on Research and Technology Advances in Digital Libraries | 1996

Scalability issues for high performance digital libraries on the World Wide Web

Daniel Andresen; Tao Yang; Ömer Eğecioğlu; Oscar H. Ibarra; Terence R. Smith

We investigate scalability issues involved in developing high performance digital library systems. Our observations and solutions are based on our experience with the Alexandria Digital Library (ADL) testbed under development at UCSB. The current ADL system provides online browsing and processing of digitized maps and other geospatially mapped data via the World Wide Web (WWW). A primary activity of the ADL system involves computation and disk I/O for accessing compressed multiresolution images with hierarchical data structures, along with other duties such as supporting database queries and on-the-fly HTML page generation. Providing multiresolution image browsing services can reduce network traffic but imposes some additional cost at the server. We discuss the necessity of having a multiprocessor DL server to match potentially huge demands in simultaneous access requests from the Internet. We have developed a distributed scheduling system for processing DL requests, which actively monitors the usage of CPUs, I/O channels and the interconnection network to effectively distribute work across processing units and exploit task and I/O parallelism. We present an experimental study of the performance of our scheme in addressing the scalability issues arising in ADL wavelet processing and file retrieval. Our results indicate that the system delivers good performance on these types of tasks.
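
The dispatch policy can be sketched as a greedy, load-aware scheduler: route each request to the processing unit whose monitored load plus the request's estimated demand is cheapest. The node names, the two load metrics, and the 0.6/0.4 weighting are illustrative assumptions, not the ADL system's actual scheduler:

```python
class Node:
    """A processing unit with monitored CPU and I/O load (0.0 = idle)."""
    def __init__(self, name):
        self.name, self.cpu, self.io = name, 0.0, 0.0

    def cost(self, req):
        # Projected load if this node takes the request; the 0.6/0.4
        # weighting of CPU versus I/O is an illustrative choice.
        return 0.6 * (self.cpu + req["cpu"]) + 0.4 * (self.io + req["io"])

def schedule(nodes, requests):
    """Greedy dispatch: each request goes to the node with the lowest
    projected cost, spreading work across the units."""
    plan = []
    for req in requests:
        node = min(nodes, key=lambda n: n.cost(req))
        node.cpu += req["cpu"]
        node.io += req["io"]
        plan.append((req["id"], node.name))
    return plan

nodes = [Node("n1"), Node("n2"), Node("n3")]
reqs = [{"id": i, "cpu": 0.2 + 0.1 * (i % 3), "io": 0.3} for i in range(6)]
print(schedule(nodes, reqs))   # requests spread across the three nodes
```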


International Conference on Data Engineering | 2007

DeltaSky: Optimal Maintenance of Skyline Deletions without Exclusive Dominance Region Generation

Ping Wu; Divyakant Agrawal; Ömer Eğecioğlu; Amr El Abbadi

This paper addresses the problem of efficiently maintaining a materialized skyline view in response to skyline removals. While there has been significant progress on skyline query computation, an equally important but largely unanswered issue is incremental maintenance under skyline deletions. Previous work suggested the use of the so-called exclusive dominance region (EDR) to achieve optimal I/O performance for deletion maintenance. However, the shape of an EDR becomes extremely complex in higher dimensions, and algorithms for its computation have not been developed. We derive a systematic way to decompose a d-dimensional EDR into a collection of hyper-rectangles. We show that the number of such hyper-rectangles is O(m^d), where m is the current skyline result size. We then propose a novel algorithm, DeltaSky, which determines whether an intermediate R-tree MBR intersects the EDR without explicitly calculating the EDR itself. This reduces the worst-case complexity of the EDR intersection check from O(m^d) to O(md). Thus DeltaSky helps the branch-and-bound skyline algorithm achieve I/O optimality for deletion maintenance by finding only the skyline points that newly appear after the deletion. We discuss implementation issues and show that DeltaSky can be efficiently implemented using one extra B-Tree. Moreover, we propose two optimization techniques which further reduce the average cost in practice. Extensive experiments demonstrate that DeltaSky achieves orders of magnitude performance gain over alternative solutions.
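
DeltaSky itself relies on R-tree machinery, but the maintenance invariant it exploits is compact: after a skyline point p is removed, the newly promoted points are exactly the non-skyline points dominated by p and by no surviving skyline point. A brute-force sketch of that invariant (no EDR, no index; smaller-is-better dominance):

```python
def dominates(a, b):
    """a dominates b: a is <= b in every dimension and < in at least one
    (smaller-is-better convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def delete_from_skyline(points, sky, p):
    """Incremental maintenance: only points dominated by the deleted skyline
    point p can be promoted, and a point is promoted iff no surviving
    skyline point dominates it."""
    points.remove(p)
    rest = [s for s in sky if s != p]
    promoted = [q for q in points
                if dominates(p, q) and not any(dominates(s, q) for s in rest)]
    return rest + promoted

pts = [(1, 5), (2, 2), (3, 1), (4, 4), (2, 3)]
sky = skyline(pts)                                # [(1, 5), (2, 2), (3, 1)]
print(delete_from_skyline(pts[:], sky, (2, 2)))   # (2, 3) is newly promoted
```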


IEEE Transactions on Knowledge and Data Engineering | 2004

Dimensionality reduction and similarity computation by inner-product approximations

Ömer Eğecioğlu; Hakan Ferhatosmanoglu; Umit Y. Ogras

As databases increasingly integrate different types of information such as multimedia, spatial, time-series, and scientific data, it becomes necessary to support efficient retrieval of multidimensional data. Both the dimensionality and the amount of data that needs to be processed are increasing rapidly. Reducing the dimension of the feature vectors to enhance the performance of the underlying technique is a popular solution to the infamous curse of dimensionality. A reduction technique has good distance-measure quality when the similarity distance between two feature vectors is closely approximated by some notion of distance between the two lower-dimensional transformed vectors. Thus, it is desirable to develop techniques that result in accurate approximations to the original similarity distance. We investigate dimensionality reduction techniques that directly target minimizing the errors made in the approximations. In particular, we develop dynamic techniques for efficient and accurate approximation of similarity evaluations between high-dimensional vectors based on inner-product approximations. The inner product is itself used as a distance measure in a wide range of applications, such as document databases. A first-order approximation to the inner product is obtained from the Cauchy-Schwarz inequality. We extend this idea to higher-order power symmetric functions of the multidimensional points. We show how to compute fixed coefficients that work as universal weights based on the moments of the probability density function of the data set. We also develop a dynamic model to compute the universal coefficients for data sets whose distribution is not known. Our experiments on synthetic and real data sets show that the similarity between two objects in high-dimensional space can be accurately approximated by a significantly lower-dimensional representation.
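
A hedged sketch of the power-sum idea: represent each vector by its first few power sums p_k(x) = sum_i x_i^k, approximate the inner product <x, y> by a weighted combination of the products p_k(x) * p_k(y), and fit the weights from a sample when the data distribution is unknown. The least-squares fit below stands in for the paper's moment-based derivation of universal coefficients; the sample sizes and kmax are illustrative choices:

```python
import numpy as np

def power_sums(X, kmax=3):
    """Reduce each row of X (n x d) to its first kmax power sums
       p_k(x) = sum_i x_i**k, a much lower-dimensional representation."""
    return np.stack([(X ** k).sum(axis=1) for k in range(1, kmax + 1)], axis=1)

def fit_weights(X, kmax=3, pairs=500, seed=0):
    """Least-squares fit of coefficients c_k so that
       <x, y> ~= sum_k c_k * p_k(x) * p_k(y)  on sampled pairs; this stands
       in for the paper's moment-based universal weights."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), pairs)
    j = rng.integers(0, len(X), pairs)
    P = power_sums(X, kmax)
    features = P[i] * P[j]                        # products p_k(x) p_k(y)
    targets = np.einsum("nd,nd->n", X[i], X[j])   # exact inner products
    c, *_ = np.linalg.lstsq(features, targets, rcond=None)
    return c

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (2000, 64))             # high-dimensional data
c = fit_weights(X)
P = power_sums(X)
print((c * P[10] * P[20]).sum(), X[10] @ X[20])   # approximation vs. exact
```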


Journal of Non-Newtonian Fluid Mechanics | 1997

Smoothed particle hydrodynamics techniques for the solution of kinetic theory problems Part 1: Method

Charu V. Chaubal; Ashok Srinivasan; Ömer Eğecioğlu; L. G. Leal

The smoothed particle hydrodynamics (SPH) technique has been applied to a problem in kinetic theory, namely, the dynamics of liquid crystalline polymers (LCPs). It is a Lagrangian solution method developed for fluid flow calculations; its adaptation to kinetic theory is outlined. The Lagrangian formulation of the Doi theory for LCPs is first described, and the problem is presented in the general framework of nonparametric density estimation. The implementation of the SPH technique in this specific problem is given, highlighting particular aspects of our implementation of SPH, including the form of the kernel function and the use of an adaptive kernel. We then present results which demonstrate convergence and other details of the solution method, and also make comparisons with other solution techniques and discuss other potential applications.
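
The nonparametric density-estimation core can be sketched directly: particles carry weights, and the distribution is recovered by smoothing them with a kernel whose width adapts to the local particle spacing. The Gaussian kernel and the local-spacing adaptive rule below are illustrative stand-ins, not the paper's exact forms:

```python
import numpy as np

def sph_density(x_eval, particles, weights, h):
    """Smoothed-particle estimate of a 1-D density:
       f(x) ~= sum_j w_j * W(x - x_j, h_j)   with a Gaussian kernel W."""
    diffs = x_eval[:, None] - particles[None, :]
    W = np.exp(-0.5 * (diffs / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return W @ weights

def adaptive_widths(particles, alpha=5.0):
    """Adaptive kernel: wider where particles are sparse.  Local spacing of
    the sorted particles is an illustrative stand-in for the paper's rule."""
    xs = np.sort(particles)
    spacing = np.gradient(xs)
    ranks = np.argsort(np.argsort(particles))
    return alpha * spacing[ranks]

rng = np.random.default_rng(3)
particles = rng.normal(0.0, 1.0, 200)          # Lagrangian "fluid" particles
weights = np.full(200, 1.0 / 200)              # equal mass per particle
h = adaptive_widths(particles)
x = np.linspace(-3.0, 3.0, 7)
print(sph_density(x, particles, weights, h))   # should approximate N(0, 1)
```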


IEEE Transactions on Knowledge and Data Engineering | 2012

Anónimos: An LP-Based Approach for Anonymizing Weighted Social Network Graphs

Sudipto Das; Ömer Eğecioğlu; Amr El Abbadi

The increasing popularity of social networks has initiated a fertile research area in information extraction and data mining. Anonymization of these social graphs is important to facilitate publishing these data sets for analysis by external entities. Prior work has concentrated mostly on node identity anonymization and structural anonymization. However, with the growing interest in analyzing social networks as weighted networks, edge weight anonymization is also gaining importance. We present Anónimos, a Linear Programming-based technique for anonymization of edge weights that preserves linear properties of graphs. Such properties form the foundation of many important graph-theoretic algorithms such as the shortest paths problem, k-nearest neighbors, minimum cost spanning tree, and maximizing information spread. As a proof of concept, we apply Anónimos to the shortest paths problem and its extensions, prove its correctness, analyze its complexity, and experimentally evaluate it using real social network data sets. Our experiments demonstrate that Anónimos anonymizes the weights, improves k-anonymity of the weights, and also scrambles the relative ordering of the edges sorted by weights, thereby providing robust and effective anonymization of the sensitive edge weights. We also demonstrate the composability of different models generated using Anónimos, a property that allows a single anonymized graph to preserve multiple linear properties.
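
Complementing the LP sketch given for the ICDE 2010 entry above, here is a hedged check of the preserved property: run Dijkstra under the original and the anonymized weights and compare the ordering of nodes by distance from a source. Comparing distance ranks is a simplified proxy for the paper's actual shortest-path preservation; the graph and both weight assignments are illustrative:

```python
import heapq

def dijkstra(nodes, edges, w, src):
    """Plain Dijkstra over an undirected graph; edges maps (u, v) -> edge id
    and w maps edge id -> weight."""
    adj = {n: [] for n in nodes}
    for (u, v), eid in edges.items():
        adj[u].append((v, eid))
        adj[v].append((u, eid))
    dist = {n: float("inf") for n in nodes}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, eid in adj[u]:
            nd = d + w[eid]
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

nodes = ["a", "b", "c", "d"]
edges = {("a", "b"): 0, ("b", "c"): 1, ("a", "c"): 2, ("c", "d"): 3}
original = {0: 1.0, 1: 2.0, 2: 4.0, 3: 3.0}
anonymized = {0: 0.5, 1: 1.2, 2: 9.0, 3: 2.0}  # e.g. one LP solution

def rank_from(w):
    """Nodes ordered by shortest-path distance from 'a' under weights w."""
    d = dijkstra(nodes, edges, w, "a")
    return sorted(nodes, key=lambda n: d[n])

print(rank_from(original) == rank_from(anonymized))  # True: property preserved
```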


Conference on Information and Knowledge Management | 2000

Dimensionality reduction and similarity computation by inner product approximations

Ömer Eğecioğlu; Hakan Ferhatosmanoglu

As databases increasingly integrate different types of information such as multimedia, spatial, time-series, and scientific data, it becomes necessary to support efficient retrieval of multidimensional data. Both the dimensionality and the amount of data that needs to be processed are increasing rapidly. Reducing the dimension of the feature vectors to enhance the performance of the underlying technique is a popular solution to the infamous curse of dimensionality. A reduction technique has good distance-measure quality when the similarity distance between two feature vectors is closely approximated by some notion of distance between the two lower-dimensional transformed vectors. Thus, it is desirable to develop techniques that result in accurate approximations to the original similarity distance. We investigate dimensionality reduction techniques that directly target minimizing the errors made in the approximations. In particular, we develop dynamic techniques for efficient and accurate approximation of similarity evaluations between high-dimensional vectors based on inner-product approximations. The inner product is itself used as a distance measure in a wide range of applications, such as document databases. A first-order approximation to the inner product is obtained from the Cauchy-Schwarz inequality. We extend this idea to higher-order power symmetric functions of the multidimensional points. We show how to compute fixed coefficients that work as universal weights based on the moments of the probability density function of the data set. We also develop a dynamic model to compute the universal coefficients for data sets whose distribution is not known. Our experiments on synthetic and real data sets show that the similarity between two objects in high-dimensional space can be accurately approximated by a significantly lower-dimensional representation.


Information Processing Letters | 1997

Billiard quorums on the grid

Divyakant Agrawal; Ömer Eğecioğlu; Amr El Abbadi

Maekawa considered a simple but suboptimal grid-based quorum generation scheme in which N sites in a network are logically organized in the form of a √N × √N grid, and the quorum sets are row-column pairs. Even though the quorum size 2√N of the grid scheme is twice as large as the optimal-size quorums based on finite projective planes, it has the advantage of being simple and geometrically evident. In this paper we construct grid-based quorums which use a modified grid, and paths that resemble billiard ball paths instead of the horizontal and vertical line segments of rows and columns in the grid scheme. The size of these quorums is √2·√N. The construction and its properties are geometrically evident as in the case of Maekawa's grid, and the quorum sets can be generated efficiently.
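
An illustrative simulation of the geometric idea: trace a diagonal path that reflects off the top and bottom of a rectangular grid and collect the cells it visits. The grid dimensions and starting rows are toy choices, and the intersection shown holds only for this toy setting; the paper's actual construction fixes the grid shape so that the path length comes out to √2·√N and proves the pairwise-intersection property in general:

```python
def billiard_path(rows, cols, start_row=0):
    """Collect the cells visited by a diagonal path that reflects off the
    top and bottom rows of a (rows x cols) grid, like a billiard ball."""
    cells, r, dr = [], start_row, 1
    for c in range(cols):
        cells.append((r, c))
        if r + dr < 0 or r + dr >= rows:
            dr = -dr                     # reflect at the grid boundary
        r += dr
    return cells

# Two reflecting paths with different starting rows: in this toy grid they
# share cells, which is the pairwise-intersection property a quorum needs.
q1 = billiard_path(4, 12, start_row=0)
q2 = billiard_path(4, 12, start_row=2)
print(len(q1), sorted(set(q1) & set(q2)))
```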

Collaboration


Dive into Ömer Eğecioğlu's collaboration.

Top Co-Authors

Amr El Abbadi
University of California

Tao Yang
University of California