
Publication


Featured research published by Panos K. Chrysanthis.


Personal, Indoor and Mobile Radio Communications | 2002

On indoor position location with wireless LANs

Phongsak Prasithsangaree; Prashant Krishnamurthy; Panos K. Chrysanthis

Location-aware services are becoming attractive with the deployment of next-generation wireless networks and broadband multimedia wireless networks, especially in indoor and campus areas. To provide location-aware services, the position of a user must be obtained accurately. While it is possible to deploy additional infrastructure for this purpose, using the existing communications infrastructure is preferred for cost reasons. Because of technical restrictions, location fingerprinting schemes are the most promising, yet they call for a systematic study of their performance tradeoffs and deployment issues. In this paper we present some experimental results towards such a systematic study and discuss some issues related to the indoor positioning problem.
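
The paper itself does not include code, but the fingerprinting idea it studies can be illustrated with a minimal Python sketch (the radio map, the three access points, and all values below are hypothetical): an observed received-signal-strength vector is matched against pre-recorded fingerprints by nearest-neighbor Euclidean distance.

    import math

    # Hypothetical radio map: location -> mean RSS (dBm) from three access points.
    radio_map = {
        "room_101": (-45.0, -60.0, -72.0),
        "room_102": (-58.0, -49.0, -70.0),
        "hallway":  (-55.0, -55.0, -61.0),
    }

    def locate(observed):
        """Return the fingerprint location closest to the observed RSS vector."""
        def distance(fingerprint):
            return math.sqrt(sum((o - f) ** 2 for o, f in zip(observed, fingerprint)))
        return min(radio_map, key=lambda loc: distance(radio_map[loc]))

    print(locate((-47.0, -59.0, -71.0)))  # -> "room_101"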


ACM Transactions on Database Systems | 1994

Synthesis of extended transaction models using ACTA

Panos K. Chrysanthis; Krithi Ramamritham

ACTA is a comprehensive transaction framework that facilitates the formal description of properties of extended transaction models. Specifically, using ACTA, one can specify and reason about (1) the effects of transactions on objects and (2) the interactions between transactions. This article presents ACTA as a tool for the synthesis of extended transaction models, one which supports the development and analysis of new extended transaction models in a systematic manner. Here, this is demonstrated by deriving new transaction definitions (1) by modifying the specifications of existing transaction models, (2) by combining the specifications of existing models, and (3) by starting from first principles. To exemplify the first, new models are synthesized from atomic transactions and join transactions. To illustrate the second, we synthesize a model that combines aspects of the nested- and split-transaction models. We demonstrate the third by deriving the specification of an open-nested-transaction model from high-level requirements.
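
As a loose illustration only (this is not ACTA's formal notation), the following Python sketch paraphrases two of the inter-transaction dependencies ACTA lets one specify and reason about; the event-history format and helper names are hypothetical.

    # Minimal sketch, loosely paraphrasing two ACTA-style dependencies over a
    # history of (event, transaction) pairs. Illustrative only.

    def commit_dependency_holds(history, ti, tj):
        """If both ti and tj commit, ti's commit must come after tj's commit."""
        try:
            ci = history.index(("commit", ti))
            cj = history.index(("commit", tj))
        except ValueError:
            return True  # one of them never commits, so the condition is vacuous
        return ci > cj

    def abort_dependency_holds(history, ti, tj):
        """If tj aborts, ti must abort as well."""
        if ("abort", tj) in history:
            return ("abort", ti) in history
        return True

    history = [("commit", "t2"), ("commit", "t1")]
    print(commit_dependency_holds(history, "t1", "t2"))  # True: t1 commits after t2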


Data Engineering for Wireless and Mobile Access | 2003

TiNA: a scheme for temporal coherency-aware in-network aggregation

Mohamed A. Sharaf; Jonathan Beaver; Alexandros Labrinidis; Panos K. Chrysanthis

This paper presents TiNA, a scheme for minimizing energy consumption in sensor networks by exploiting end-user tolerance to temporal coherency. TiNA utilizes temporal coherency tolerances both to reduce the amount of information transmitted by individual nodes (communication cost dominates power usage in sensor networks) and to improve the quality of data when not all sensor readings can be propagated up the network within a given time constraint. TiNA was evaluated against a traditional in-network aggregation scheme with respect to power savings as well as the quality of data for aggregate queries. Preliminary results show that TiNA can reduce power consumption by up to 50% without any loss in the quality of data.
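
A minimal sketch of the core suppression idea, assuming a single numeric sensor value and a user-supplied tolerance (the class and method names are hypothetical, not the paper's implementation): a node reports a new reading only when it differs from the last reported value by more than the tolerance.

    class TinaStyleNode:
        """Report a reading only if it differs from the last reported value
        by more than the end-user's temporal coherency tolerance (tct)."""

        def __init__(self, tct):
            self.tct = tct
            self.last_reported = None

        def maybe_report(self, reading):
            if self.last_reported is None or abs(reading - self.last_reported) > self.tct:
                self.last_reported = reading
                return reading      # transmit up the aggregation tree
            return None             # suppress: parent reuses the cached value

    node = TinaStyleNode(tct=0.5)
    print([node.maybe_report(r) for r in (20.0, 20.2, 20.9, 21.1)])
    # -> [20.0, None, 20.9, None]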


IEEE Workshop on Advances in Parallel and Distributed Systems | 1993

Transaction processing in a mobile computing environment

Panos K. Chrysanthis

Distributed systems are expected to support mobile computations executed over a computer network of fixed and mobile hosts. The authors examine the requirements for structuring such mobile computations that access shared data in a database, argue that open-nesting can better facilitate these requirements, and propose an Open-Nested Transaction model in a mobile environment using the notion of Reporting Transactions and Co-Transactions.


Symposium on Reliable Distributed Systems | 1995

Supporting semantics-based transaction processing in mobile database applications

Gary D. Walborn; Panos K. Chrysanthis

Advances in computer and telecommunication technologies have made mobile computing a reality. However, greater mobility implies a more tenuous network connection and a higher rate of disconnection. In order to tolerate disconnections as well as to reduce the delays and cost of wireless communication, it is necessary to support autonomous mobile operations on data shared by stationary hosts. This would allow the part of a computation executing on a mobile host to continue executing while the mobile host is not connected to the network. In this paper, we examine whether object semantics can be exploited to facilitate autonomous and disconnected operation in mobile database applications. We define the class of fragmentable objects which may be split among a number of sites, operated upon independently at each site, and then recombined in a semantically consistent fashion. A number of objects with such characteristics are presented and an implementation of fragmentable stacks is shown and discussed.
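
As a loose illustration of a fragmentable object (a toy sketch, not the implementation discussed in the paper), a stack can be split so a disconnected mobile host works autonomously on its own fragment, and the leftover items are recombined on reconnection.

    class FragmentableStack:
        """Toy fragmentable stack: split off a fragment for disconnected use,
        operate on it independently, then merge the remainder back."""

        def __init__(self, items):
            self.items = list(items)

        def split(self, n):
            """Hand the top n items to another site as an independent fragment."""
            fragment, self.items = self.items[-n:], self.items[:-n]
            return FragmentableStack(fragment)

        def merge(self, fragment):
            """Recombine whatever the fragment did not consume."""
            self.items.extend(fragment.items)

    server = FragmentableStack(["a", "b", "c", "d"])
    mobile = server.split(2)        # mobile host takes "c", "d" while disconnected
    mobile.items.pop()              # consumes "d" autonomously
    server.merge(mobile)            # on reconnection the server holds "a", "b", "c"
    print(server.items)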


International Conference on Distributed Computing Systems | 1999

Scalable processing of read-only transactions in broadcast push

Evaggelia Pitoura; Panos K. Chrysanthis

Recently, push-based delivery has attracted considerable attention as a means of disseminating information to large client populations in both wired and wireless settings. We address the problem of ensuring the consistency and currency of client read-only transactions in the presence of updates. To this end, additional control information is broadcast. A suite of methods is proposed that vary in the complexity and volume of the control information transmitted and subsequently differ in response times, degrees of concurrency, and space and processing overheads. The proposed methods are combined with caching to improve query latency. The relative advantages of each method are demonstrated through both simulation results and qualitative arguments. Read-only transactions are processed locally at the client without contacting the server and thus the proposed approaches are scalable, i.e., their performance is independent of the number of clients.
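
One simple flavor of such broadcast control information, sketched here purely for illustration (not a specific method from the paper), is a per-cycle invalidation set of updated items that each client intersects with the read set of its read-only transaction.

    def still_consistent(read_set, invalidation_report):
        """Client-side check: the read-only transaction remains valid only if
        none of the items it has read appear in the broadcast invalidation set."""
        return read_set.isdisjoint(invalidation_report)

    read_set = {"stock:IBM", "stock:ACME"}
    print(still_consistent(read_set, {"stock:XYZ"}))   # True: keep going
    print(still_consistent(read_set, {"stock:ACME"}))  # False: restart the transaction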


Very Large Data Bases | 1996

A taxonomy of correctness criteria in database applications

Krithi Ramamritham; Panos K. Chrysanthis

Whereas serializability captures database consistency requirements and transaction correctness properties via a single notion, recent research has attempted to come up with correctness criteria that view these two types of requirements independently. The search for more flexible correctness criteria is partly motivated by the introduction of new transaction models that extend the traditional atomic transaction model. These extensions came about because the atomic transaction model in conjunction with serializability is found to be very constraining when used in advanced applications (e.g., design databases) that function in distributed, cooperative, and heterogeneous environments. In this article we develop a taxonomy of various correctness criteria that focuses on the different dimensions of database consistency requirements and transaction correctness properties. This taxonomy allows us to categorize correctness criteria that have been proposed in the literature. To help in this categorization, we have applied a uniform specification technique, based on ACTA, to express the various criteria. Such a categorization helps shed light on the similarities and differences between different criteria and places them in perspective.


ACM Transactions on Database Systems | 2008

Algorithms and metrics for processing multiple heterogeneous continuous queries

Mohamed A. Sharaf; Panos K. Chrysanthis; Alexandros Labrinidis; Kirk Pruhs

The emergence of monitoring applications has precipitated the need for Data Stream Management Systems (DSMSs), which constantly monitor incoming data feeds (through registered continuous queries), in order to detect events of interest. In this article, we examine the problem of how to schedule multiple Continuous Queries (CQs) in a DSMS to optimize different Quality of Service (QoS) metrics. We show that, unlike traditional online systems, scheduling policies in DSMSs that optimize for average response time will be different from policies that optimize for average slowdown, which is a more appropriate metric to use in the presence of a heterogeneous workload. Towards this, we propose policies to optimize for the average-case performance for both metrics. Additionally, we propose a hybrid scheduling policy that strikes a fine balance between performance and fairness, by looking at both the average- and worst-case performance, for both metrics. We also show how our policies can be adaptive enough to handle the inherent dynamic nature of monitoring applications. Furthermore, we discuss how our policies can be efficiently implemented and extended to exploit sharing in optimized multi-query plans and multi-stream CQs. Finally, we experimentally show using real data that our policies consistently outperform currently used ones.
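
For intuition about why slowdown is the more appropriate metric under a heterogeneous workload, slowdown is commonly defined as a query's response time divided by its ideal (isolated) processing time; the numbers below are hypothetical.

    def slowdown(response_time, ideal_processing_time):
        """Slowdown = response time / ideal (isolated) processing time."""
        return response_time / ideal_processing_time

    # Same response time, very different slowdowns:
    print(slowdown(10.0, 1.0))   # 10.0 -> a cheap query delayed tenfold
    print(slowdown(10.0, 10.0))  # 1.0  -> an expensive query served without delay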


IEEE Transactions on Computers | 2002

Multiversion data broadcast

Evaggelia Pitoura; Panos K. Chrysanthis

Recently, broadcasting has attracted considerable attention as a means of disseminating information to large client populations in both wired and wireless settings. In this paper, we consider broadcasting multiple versions of data items to increase the concurrency of client transactions in the presence of updates. We introduce various techniques for organizing multiple versions on the broadcast channel. Performance results show that the overhead of supporting multiple versions can be kept low while providing a considerable increase in concurrency. Besides increasing the concurrency of client transactions, multiversion broadcast provides clients with the possibility of accessing multiple server states in a single broadcast cycle. Furthermore, multiversioning increases the tolerance of client transactions to disconnections from the broadcast channel.
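
A loose sketch of the idea, assuming a hypothetical broadcast layout in which each item carries its last few versions tagged with the server state that produced them (this is not one of the organizations evaluated in the paper): a client transaction reads the newest version no newer than the state it started from.

    # Hypothetical broadcast content: item -> list of (version, value), oldest first.
    broadcast_cycle = {
        "x": [(7, 100), (8, 120)],
        "y": [(7, 5),   (8, 9)],
    }

    def read(item, start_version):
        """Return the newest version of item not newer than the transaction's
        starting server state, if it is still on the channel."""
        candidates = [v for v in broadcast_cycle[item] if v[0] <= start_version]
        return candidates[-1] if candidates else None

    # A transaction that started at server state 7 still sees a consistent snapshot.
    print(read("x", 7), read("y", 7))  # (7, 100) (7, 5)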


International Conference on Management of Data | 2008

Distributed databases and peer-to-peer databases: past and present

Angela Bonifati; Panos K. Chrysanthis; Aris M. Ouksel; Kai-Uwe Sattler

The need for large-scale data sharing between autonomous and possibly heterogeneous decentralized systems on the Web gave rise to the concept of P2P database systems. Decentralized databases are, however, not new. Whereas a definition for a P2P database system can be readily provided, a comparison with the more established decentralized models, commonly referred to as distributed, federated, and multi-databases, is more likely to provide better insight into this new P2P data management technology. Thus, in this paper, by distinguishing between db-centric and P2P-centric features, we examine features common to these database systems as well as other ad hoc features that solely characterize P2P databases. We also provide a non-exhaustive taxonomy of the most prominent research efforts toward the realization of full-fledged P2P databases.

Collaboration


Dive into Panos K. Chrysanthis's collaborations.

Top Co-Authors

Kirk Pruhs, University of Pittsburgh
Krithi Ramamritham, Indian Institute of Technology Bombay
Panayiotis Andreou, University of Central Lancashire