Publication


Featured research published by Aditya N. Saharia.


Journal of Organizational Computing and Electronic Commerce | 1995

Outsourcing information system functions: An organization economics perspective

Paul Alpar; Aditya N. Saharia

Outsourcing of information systems functions has become a frequently chosen alternative for providing information systems services. This is true across many industries and all firm sizes. Practitioners have developed a number of guidelines relating to outsourcing. While many of these guidelines seem plausible, their underlying economic reasons are often not identified because they are not based on any theory. We analyze outsourcing of information systems functions using the transaction cost economics framework. The framework allows us to incorporate production as well as coordination costs in evaluating the outsourcing option.


IEEE Transactions on Knowledge and Data Engineering | 1994

Estimating block accesses in database organizations

George Diehr; Aditya N. Saharia

The exact expression for the expected number of disk accesses required to retrieve a given number of records, called the Yao function, requires iterative computations. Several authors have developed approximations to the Yao function, all of which have substantial errors in some situations. We derive and evaluate simple upper and lower bounds that never differ by more than a small fraction of a disk access.
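
The bounds themselves are not reproduced in the abstract, but the exact Yao function being approximated is standard and can be computed directly. A minimal Python sketch, with the classical Cardenas closed-form approximation shown for contrast (the values of n, m, and k are illustrative):

```python
from math import comb

def yao_exact(n: int, m: int, k: int) -> float:
    """Expected number of blocks accessed when k of n records are retrieved,
    with the n records spread uniformly over m blocks of b = n/m records.
    The exact form needs an iterative product, as the abstract notes."""
    b = n // m                       # records per block (assumes m divides n)
    p = 1.0                          # P(a fixed block contributes no record)
    for i in range(k):
        p *= (n - b - i) / (n - i)
    return m * (1.0 - p)

def cardenas_approx(n: int, m: int, k: int) -> float:
    """Classical closed-form approximation; can err when blocks are small."""
    return m * (1.0 - (1.0 - 1.0 / m) ** k)

n, m, k = 10_000, 100, 250
print(f"exact Yao        : {yao_exact(n, m, k):.4f}")
print(f"binomial check   : {m * (1 - comb(n - n // m, k) / comb(n, k)):.4f}")
print(f"Cardenas approx. : {cardenas_approx(n, m, k):.4f}")
```

The iterative product and the binomial-coefficient form agree exactly; the approximations the paper bounds trade that iteration for closed forms.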


Decision Support Systems | 1995

Approximate dependencies in database systems

Aditya N. Saharia; Terence M. Barron

Functional dependencies are the most commonly used approach for capturing real-world integrity constraints which are to be reflected in a database. There are, however, many useful kinds of constraints, especially approximate ones, that cannot be represented correctly by functional dependencies and therefore are enforced via programs which update the database, if they are enforced at all. This tends to make such constraints invisible, since they are not an explicit part of the database, increasing maintenance problems and the likelihood of inconsistencies. We propose a new approach, cluster dependencies, as a way to enforce approximate dependencies. By treating equality as a fuzzy concept and defining appropriate similarity measures, it is possible to represent a broad range of approximate constraints directly in the database by storing and accessing cluster definitions. We discuss different interpretations of cluster dependencies and describe the additional data structures needed to enforce them. We also contrast them with an existing approach, fuzzy functional dependencies, which are much more limited in the kinds of approximate constraints they can represent.
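
The abstract does not spell out the paper's data structures, so the following is only an illustrative Python sketch of the core idea: a similarity measure makes equality fuzzy, and stored cluster definitions are consulted on update to enforce an approximate dependency. The attribute names, similarity measure, and threshold are all assumptions for illustration:

```python
def similarity(a: str, b: str) -> float:
    """Toy similarity measure: fraction of matching characters by position."""
    matches = sum(1 for x, y in zip(a.lower(), b.lower()) if x == y)
    return matches / max(len(a), len(b))

class ClusterDependency:
    """Enforces an approximate dependency X ~> Y: rows whose X values are
    'similar enough' (fuzzily equal) must agree on Y."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.clusters: list[tuple[str, str]] = []  # stored cluster definitions

    def insert(self, x: str, y: str) -> None:
        for rep, required_y in self.clusters:
            if similarity(x, rep) >= self.threshold:   # fuzzy equality on X
                if y != required_y:
                    raise ValueError(
                        f"cluster dependency violated: {x!r} ~ {rep!r} "
                        f"but {y!r} != {required_y!r}")
                return
        self.clusters.append((x, y))   # x starts a new cluster

cd = ClusterDependency()
cd.insert("New York", "ZONE-1")
cd.insert("New york", "ZONE-1")     # fuzzily equal city, same zone: accepted
# cd.insert("New yorK", "ZONE-3")   # would raise: approximate constraint violated
```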


European Journal of Operational Research | 1996

An appointment-based service center with guaranteed service

Yair M. Babad; Maqbool Dada; Aditya N. Saharia

Customers are scheduled to arrive at periodic scheduling intervals to receive service from a single-server system. A customer must start receiving service within a given departure interval; if this is not the case, the system will pay a penalty and/or transfer the customer to another facility at the system's cost. A complete transient and steady-state analysis of the system is given, the optimal scheduling and departure intervals are determined, and the virtual work in the system is analyzed.
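
The paper's analysis is analytical, but the mechanics of the system are easy to mimic in a short simulation. The sketch below is a hypothetical Monte Carlo rendering of the setup: arrivals every T time units, a single server, and a transfer whenever service cannot start within the departure interval D. The parameter values and the exponential service-time law are illustrative assumptions, not taken from the paper:

```python
import random

def simulate(T=1.0, D=2.5, mean_service=0.9, n_customers=100_000, seed=42):
    """Fraction of scheduled customers transferred because service could
    not start within the departure interval D of their arrival."""
    rng = random.Random(seed)
    server_free_at = 0.0
    transferred = 0
    for i in range(n_customers):
        arrival = i * T                          # scheduled arrival epoch
        start = max(arrival, server_free_at)     # wait until the server frees
        if start - arrival > D:                  # missed the departure interval
            transferred += 1                     # penalty / transfer elsewhere
            continue                             # customer leaves the queue
        server_free_at = start + rng.expovariate(1.0 / mean_service)
    return transferred / n_customers

print(f"fraction transferred: {simulate():.4f}")
```

Sweeping T and D in such a simulation traces the same trade-off the paper optimizes analytically: tighter schedules raise utilization but increase transfers.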


IEEE Transactions on Knowledge and Data Engineering | 1998

A decision model for choosing the optimal level of storage in temporal databases

Debabrata Dey; Terence M. Barron; Aditya N. Saharia

A database allows its users to reduce uncertainty about the world. However, not all properties of all objects can always be stored in a database. As a result, the user may have to use probabilistic inference rules to estimate the data required for his decisions. A decision based on such estimated data may not be perfect. The authors call the costs associated with such suboptimal decisions the cost of incomplete information. This cost can be reduced by expanding the database to contain more information; such expansion will increase the data-related costs because of more data collection, manipulation, storage, and retrieval. A database designer must then consider the trade-off between the cost of incomplete information and the data-related costs, and choose a design that minimizes the overall cost to the organization. In temporal databases, the sheer volume of the data involved makes such a trade-off at design time all the more important. The authors develop probabilistic inference rules that allow one to infer missing values in the spatial as well as the temporal dimension. They then use this framework to develop guidelines for designing and reorganizing temporal databases, which explicitly include a trade-off between the incomplete-information and the data-related costs.
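
The trade-off described can be illustrated with a deliberately simplified one-dimensional decision model: total cost is data-related cost plus the expected cost of incomplete information, and the designer picks the storage level that minimizes the sum. The cost functions and numbers below are illustrative assumptions, not the paper's model:

```python
def total_cost(snapshots_per_year: int,
               cost_per_snapshot: float = 10.0,
               decision_cost_scale: float = 5000.0) -> float:
    """Assumed cost model: data-related cost grows linearly with the number
    of stored temporal snapshots, while the cost of incomplete information
    (decisions made from inferred values) shrinks as history gets denser."""
    data_cost = snapshots_per_year * cost_per_snapshot
    incomplete_info_cost = decision_cost_scale / snapshots_per_year
    return data_cost + incomplete_info_cost

best = min(range(1, 366), key=total_cost)
print(f"optimal snapshots/year: {best}, total cost: {total_cost(best):.1f}")
# With these assumed costs the optimum lands near sqrt(5000/10) ~ 22 per year.
```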


Decision Support Systems | 1995

Data requirements in statistical decision support systems: formulation and some results in choosing summaries

Terry Barron; Aditya N. Saharia

The problem of determining data requirements in cases where statistical query answers are desired is studied. Specifically, we consider the value of storing aggregate data that can be used to speed up answering such queries, but at the potential costs of incomplete information due to either estimation error or staleness, as well as increased costs of update. We formulate the overall optimization problem for design, and decompose it into several subproblems that can be separately addressed. Two of these subproblems are the choice of update method, and choice of aggregates. Qualitative results are given regarding the selection of update policy, and design heuristics, based on numerical experiments, are given for single-attribute Legendre polynomial aggregates. Multivariate Legendre aggregates are also discussed, and suggestions for future research are given.
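
As a rough illustration of what a single-attribute Legendre aggregate can do, the sketch below stores a handful of Legendre moments of an attribute scaled to [-1, 1] and answers a range-count query from them instead of from the raw data. The degree, data distribution, and query range are illustrative assumptions; the paper's own estimators and error analysis are not reproduced here:

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)
data = np.clip(rng.normal(0.2, 0.3, 20_000), -1, 1)   # attribute scaled to [-1, 1]

degree = 8
# Stored "aggregates": Legendre moments c_j = (2j+1)/2 * mean(P_j(x)),
# i.e. the orthogonal-series coefficients of the attribute's density.
coeffs = np.array([(2 * j + 1) / 2 * L.legval(data, np.eye(degree + 1)[j]).mean()
                   for j in range(degree + 1)])

def approx_count(a: float, b: float) -> float:
    """Estimated number of records with a <= x <= b, from stored moments only."""
    antideriv = L.legint(coeffs)                       # integrate the series
    mass = L.legval(b, antideriv) - L.legval(a, antideriv)
    return len(data) * mass

a, b = -0.1, 0.5
exact = np.count_nonzero((data >= a) & (data <= b))
print(f"approx: {approx_count(a, b):.0f}   exact: {exact}")
```

The stored summary is just degree + 1 numbers, which is the point: queries are answered cheaply at the cost of estimation error and staleness until the aggregates are updated.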


Information Systems Research | 1990

A Refresh Scheme for Remote Snapshots

Aditya N. Saharia; George Diehr

This article presents a scheme called “Difference Table” for maintaining database snapshots that are stored at sites remote from a central database and refreshed only upon user request. Database snapshots are currently in widespread use: a subset of the central database is extracted, transmitted to a local workstation, and utilized for decision support. The Difference Table method checks each update to a central database table against the definition of the snapshot. If the update is relevant, its effect is stored in a difference table. On receiving a refresh request, the contents of the difference table are transmitted to the remote site, where they update the snapshot. The Difference Table scheme allows a selective refresh of the snapshot, in the sense that only the changes to a snapshot since the last refresh are transmitted. We discuss the additional database tables and processes required to support the Difference Table scheme. Performance measures are developed, and both quantitative and qualitative comparisons are made to alternative methods such as full regeneration and the approach used by System R*. By most criteria and in many environments, the Difference Table scheme is preferable to these alternatives. It also has several attractive side benefits which are not available in alternative methods.
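
A minimal sketch of the Difference Table idea, using assumed names and an in-memory stand-in for the tables: each central update is checked against the snapshot's defining predicate, relevant effects accumulate in a difference table, and a refresh ships (and clears) only those accumulated changes:

```python
class DifferenceTableSnapshot:
    def __init__(self, predicate):
        self.predicate = predicate      # the snapshot's defining predicate
        self.diff = {}                  # key -> ('upsert', row) or ('delete', None)

    def on_central_update(self, key, row):
        # Check each central update against the snapshot definition and
        # record its effect in the difference table.
        if self.predicate(row):
            self.diff[key] = ('upsert', row)
        else:
            # Row left (or never entered) the snapshot's scope; the delete
            # is a no-op at the remote site if the row was never there.
            self.diff[key] = ('delete', None)

    def refresh(self, remote_snapshot):
        # Selective refresh: ship only changes since the last refresh.
        for key, (op, row) in self.diff.items():
            if op == 'upsert':
                remote_snapshot[key] = row
            else:
                remote_snapshot.pop(key, None)
        self.diff.clear()
        return remote_snapshot

snap = DifferenceTableSnapshot(lambda row: row['region'] == 'WEST')
snap.on_central_update(1, {'region': 'WEST', 'sales': 10})
snap.on_central_update(2, {'region': 'EAST', 'sales': 7})   # outside the snapshot
remote = snap.refresh({})
print(remote)   # {1: {'region': 'WEST', 'sales': 10}}
```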


Journal of Management Information Systems | 1990

Maintaining remote decision support databases

George Diehr; Aditya N. Saharia; David Chao

This research describes and analyzes schemes for managing decision support databases that are extracted from a central database and “downloaded” to personal workstations. Unlike a (true) distributed database system, where updates are propagated to maintain consistency, these remote “snapshots” are updated only periodically (“refreshed”) upon command of the remote workstation user. This approach to data management has many of the same advantages over a centralized database as a distributed database (e.g., reduced communication costs, improved response time for retrievals, and reduction in contention), but it avoids the high overhead for concurrency control associated with updating in a distributed database. The added cost is reduced data consistency. The schemes analyzed include full regeneration, the scheme used by System R*, and two new schemes. One new scheme, called modified regeneration, is a variation on simple full regeneration of the snapshot, but transmits only relevant changes to the snapshot…
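
For contrast with the Difference Table sketch above, a hypothetical rendering of modified regeneration as described: the center recomputes the snapshot in full, but diffs it against the previous version and transmits only the changed rows. Names and data are illustrative:

```python
def modified_regeneration(old: dict, new: dict):
    """Recompute the snapshot centrally (new), diff against the previous
    version (old), and return only the delta to ship: (upserts, deleted keys)."""
    upserts = {k: v for k, v in new.items() if old.get(k) != v}
    deletes = [k for k in old if k not in new]
    return upserts, deletes

old = {1: ('WEST', 10), 2: ('WEST', 7)}
new = {1: ('WEST', 12), 3: ('WEST', 4)}      # row 1 changed, 2 dropped, 3 added
print(modified_regeneration(old, new))       # ({1: ('WEST', 12), 3: ('WEST', 4)}, [2])
```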


Information Systems Research | 1990

Optimal Information Structures for the Seller of a Search Good

Terence M. Barron; Aditya N. Saharia

This paper examines an information system design problem faced by the seller of a search good who sells his product in a competitive market to well-informed consumers. The formulation results in a nonlinear optimization problem having a special structure which can be exploited in solving the first-order conditions. Closed-form solutions and comparative statics results are given in the case of a uniformly distributed attribute, and we provide a numerical example in the case of a normally distributed attribute.



Collaboration


Dive into Aditya N. Saharia's collaborations.

Top Co-Authors

George Diehr
California State University San Marcos

Terence M. Barron
College of Business Administration

Debabrata Dey
University of Washington

Terry Barron
University of Rochester

Yair M. Babad
University of Illinois at Chicago