Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Adir Even is active.

Publication


Featured research published by Adir Even.


ACM SIGMIS Database | 2007

Utility-driven assessment of data quality

Adir Even; Ganesan Shankaranarayanan

Data consumers assess quality within specific business contexts or decision tasks. The same data resource may have an acceptable level of quality for some contexts but this quality may be unacceptable for other contexts. However, existing data quality metrics are mostly derived impartially, disconnected from the specific contextual characteristics. This study argues for the need to revise data quality metrics and measurement techniques to incorporate and better reflect contextual assessment. It contributes to that end by developing new metrics for assessing data quality along commonly used dimensions - completeness, validity, accuracy, and currency. The metrics are driven by data utility, a conceptual measure of the business value that is associated with the data within a specific usage context. The suggested data quality measurement framework uses utility as a scaling factor for calculating quality measurements at different levels of data hierarchy. Examples are used to demonstrate the use of utility-driven assessment in real-world data management scenarios, and the broader implications for data management are discussed.
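The abstract stops short of formulas; the following Python sketch is only a hypothetical illustration of the utility-as-scaling-factor idea, comparing a plain record-count completeness measure with a utility-weighted one. The records, attributes, and utility weights are invented for the example and are not taken from the paper.

```python
# Hypothetical sketch of utility-weighted completeness, loosely inspired by the
# idea of using utility as a scaling factor (not the paper's actual formulas).

records = [
    # (record_id, utility_weight, {attribute: value or None})
    ("r1", 0.9, {"email": "a@x.com", "phone": None}),
    ("r2", 0.1, {"email": None,      "phone": "555-0100"}),
    ("r3", 0.5, {"email": "c@x.com", "phone": "555-0101"}),
]

def simple_completeness(records, attribute):
    """Impartial completeness: share of records with a non-missing value."""
    present = sum(1 for _, _, attrs in records if attrs[attribute] is not None)
    return present / len(records)

def utility_weighted_completeness(records, attribute):
    """Contextual completeness: utility-weighted share of non-missing values."""
    total_utility = sum(w for _, w, _ in records)
    present_utility = sum(w for _, w, attrs in records if attrs[attribute] is not None)
    return present_utility / total_utility

for attr in ("email", "phone"):
    print(attr,
          round(simple_completeness(records, attr), 2),
          round(utility_weighted_completeness(records, attr), 2))
```

The same weighting could, in principle, be aggregated at other levels of the data hierarchy (attribute, record, table), which is the spirit of the framework described in the abstract.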


Communications of the ACM | 2006

The metadata enigma

Ganesan Shankaranarayanan; Adir Even

Metadata promises too much value as a business management tool to dismiss its implementation and maintenance effort as the equivalent of Sisyphean torture.


IEEE Transactions on Knowledge and Data Engineering | 2007

Economics-Driven Data Management: An Application to the Design of Tabular Data Sets

Adir Even; Ganesan Shankaranarayanan; Paul D. Berger

Organizational data repositories are recognized as critical resources for supporting a large variety of decision tasks and for enhancing business capabilities. As investments in data resources increase, there is also a growing concern about the economic aspects of data resources. While the technical aspects of data management are well examined, the contribution of data management to economic performance is not. Current design and implementation methodologies for data management are driven primarily by technical and functional requirements, without considering the relevant economic factors sufficiently. To address this gap, this study proposes a framework for optimizing data management design and maintenance decisions. The framework assumes that certain design characteristics of data repositories and data manufacturing processes significantly affect the utility of the data resources and the costs associated with implementing them. Modeling these effects helps identify design alternatives that maximize net-benefit, defined as the difference between utility and cost. The framework for the economic assessment of design alternatives is demonstrated for the optimal design of a large data set.
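As a minimal sketch of the net-benefit framing (utility minus cost across competing design alternatives), under invented utility and cost figures rather than the paper's model, one could compare a few candidate designs of a tabular data set as follows.

```python
# Hypothetical design alternatives for a tabular data set, with made-up
# utility and cost estimates; net benefit = utility - cost.

alternatives = {
    "all attributes, full history":      {"utility": 100.0, "cost": 70.0},
    "key attributes, full history":      {"utility": 85.0,  "cost": 40.0},
    "key attributes, last 2 years only": {"utility": 60.0,  "cost": 15.0},
}

def net_benefit(entry):
    return entry["utility"] - entry["cost"]

best = max(alternatives, key=lambda name: net_benefit(alternatives[name]))
for name, entry in alternatives.items():
    print(f"{name}: net benefit = {net_benefit(entry):.1f}")
print("chosen design:", best)
```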


Decision Support Systems | 2010

Evaluating a model for cost-effective data quality management in a real-world CRM setting

Adir Even; Ganesan Shankaranarayanan; Paul D. Berger

Managing data resources at high quality is usually viewed as axiomatic. However, we suggest that, since the process of improving data quality should attempt to maximize economic benefits as well, high data quality is not necessarily economically optimal. We demonstrate this argument by evaluating a microeconomic model that links the handling of data quality defects, such as outdated data and missing values, to economic outcomes: utility, cost, and net-benefit. The evaluation is set in the context of Customer Relationship Management (CRM) and uses large samples from a real-world data resource used for managing alumni relations. Within this context, our evaluation shows that all model parameters can be measured, and that all model-related assumptions are, largely, well supported. The evaluation confirms the assumption that the optimal quality level, in terms of maximizing net-benefits, is not necessarily the highest possible. Further, the evaluation process contributes some important insights for revising current data acquisition and maintenance policies.
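The claim that the optimal quality level need not be the highest possible can be illustrated with a toy calculation. The concave utility curve and convex cost curve below are assumptions chosen for illustration only, not the paper's estimated parameters.

```python
# Toy illustration: with diminishing-returns utility and steeply rising
# correction costs, net benefit peaks below perfect quality.

def utility(q):          # assumed concave utility of quality level q in [0, 1]
    return 100 * q ** 0.5

def cost(q):             # assumed convex cost of reaching quality level q
    return 120 * q ** 2

levels = [i / 100 for i in range(101)]
best_q = max(levels, key=lambda q: utility(q) - cost(q))
print(f"optimal quality level ~ {best_q:.2f}, "
      f"net benefit = {utility(best_q) - cost(best_q):.1f}")
print(f"at q = 1.00, net benefit = {utility(1.0) - cost(1.0):.1f}")
```

Under these assumed curves the net benefit peaks at an interior quality level, while pushing quality to 1.0 actually destroys value, which is the economic intuition behind the abstract's conclusion.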


Journal of Data and Information Quality | 2009

Dual Assessment of Data Quality in Customer Databases

Adir Even; Ganesan Shankaranarayanan

Quantitative assessment of data quality is critical for identifying the presence of data defects and the extent of the damage due to these defects. Quantitative assessment can help define realistic quality improvement targets, track progress, evaluate the impacts of different solutions, and prioritize improvement efforts accordingly. This study describes a methodology for quantitatively assessing both impartial and contextual data quality in large datasets. Impartial assessment measures the extent to which a dataset is defective, independent of the context in which that dataset is used. Contextual assessment, as defined in this study, measures the extent to which the presence of defects reduces a dataset’s utility, the benefits gained by using that dataset in a specific context. The dual assessment methodology is demonstrated in the context of Customer Relationship Management (CRM), using large data samples from real-world datasets. The results from comparing the two assessments offer important insights for directing quality maintenance efforts and prioritizing quality improvement solutions for this dataset. The study describes the steps and the computation involved in the dual-assessment methodology and discusses the implications for applying the methodology in other business contexts and data environments.
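A hypothetical sketch of the dual-assessment idea: for the same samples, an impartial defect rate and a contextual, utility-weighted damage measure can rank attributes differently, which is what makes comparing the two useful for prioritizing improvements. All weights and defect flags below are invented.

```python
# Hypothetical comparison of impartial vs. contextual (utility-weighted)
# assessment for two attributes; numbers are made up for illustration.

# Each sample: (utility_weight, {attribute: is_defective})
samples = [
    (0.9, {"billing_address": False, "fax_number": True}),
    (0.8, {"billing_address": False, "fax_number": True}),
    (0.1, {"billing_address": True,  "fax_number": False}),
    (0.1, {"billing_address": True,  "fax_number": False}),
    (0.1, {"billing_address": True,  "fax_number": False}),
]

def impartial_defect_rate(attr):
    """Share of sampled records that are defective, regardless of context."""
    return sum(1 for _, defects in samples if defects[attr]) / len(samples)

def contextual_damage(attr):
    """Utility-weighted share of records whose defects reduce usable value."""
    total = sum(w for w, _ in samples)
    return sum(w for w, defects in samples if defects[attr]) / total

for attr in ("billing_address", "fax_number"):
    print(attr,
          f"impartial defect rate = {impartial_defect_rate(attr):.2f}",
          f"utility-weighted damage = {contextual_damage(attr):.2f}")
```

In this made-up sample, billing_address has more defects overall, but fax_number defects sit on the high-utility records, so the two assessments point maintenance effort in different directions.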


Hawaii International Conference on System Sciences | 2006

Enhancing Decision Making with Process Metadata: Theoretical Framework, Research Tool, and Exploratory Examination

Adir Even; Ganesan Shankaranarayanan; Stephanie Watts

The quality of the data used in a decision task has important implications for the decision outcome. Recent research suggests that data quality perception is context-dependent. This study examines process metadata, which describes how a particular data set was created and delivered, as a supporting aid for contextual quality assessment. We first develop a model for understanding the effects of process metadata on the decision outcome when it is provided together with intrinsic quality measurements. We then describe a research tool developed to assess the effect of process metadata. An exploratory test using this tool suggests that both data quality perceptions and the associated process metadata have beneficial effects on outcomes, when mediated by decision process efficiency. The model developed in this study and the preliminary empirical results highlight the value of embedding quality metadata within computer-supported decision environments.


Information & Management | 2017

Business intelligence and organizational learning

Lior Fink; Nir Yogev; Adir Even

Highlights: This study develops and tests a research model of BI value creation. The model incorporates both general-IT and specific-BI value creation mechanisms. We initially assess the model with qualitative data collected in three organizations. We then test the hypotheses with cross-sectional data collected from managers. The findings demonstrate the value creation processes unique to BI resources.

With the aim of bridging the gap between well-established research on information technology (IT) value creation and the emergent study of business intelligence (BI), this study develops and tests a model of BI value creation that is firmly anchored in both streams of research. The analysis draws on the resource-based view and on conceptualizations of organizational learning to hypothesize about the paths by which BI assets and BI capabilities create business value. The research model is first assessed in an exploratory analysis of data collected through interviews in three firms and then tested in a confirmatory analysis of data collected through a survey.


International Journal of Information Quality | 2007

Utility-driven configuration of data quality in data repositories

Adir Even; Ganesan Shankaranarayanan

The economic benefits of data quality have rarely been researched and quantified. Does the business benefit gained justify the high cost of data quality improvement? Understanding these costs and benefits can direct and improve the implementation and maintenance of data repositories. Here we evaluate the effects of targeted quality levels in data repositories: targeting higher completeness and/or accuracy increases the utility of the data, but involves higher costs. Modelling these utility-cost effects helps assess quality-configuration tradeoffs for optimising the economic performance of data repositories. Using the model, we examine different configurations and demonstrate the effect of economics-driven evaluation on data management decisions.
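To make the configuration tradeoff concrete, a toy grid search over completeness and accuracy targets is sketched below. The utility and cost functions are illustrative assumptions, not the paper's model, but they show how the economically best configuration can sit well below the maximal targets.

```python
# Hypothetical grid evaluation of (completeness, accuracy) target configurations;
# the utility and cost functions below are illustrative assumptions only.

def utility(completeness, accuracy):
    # assumed: utility grows with both targets, with diminishing returns
    return 80 * completeness ** 0.5 + 60 * accuracy ** 0.5

def cost(completeness, accuracy):
    # assumed: cost rises steeply as either target approaches 1.0
    return 30 * completeness ** 2 + 90 * accuracy ** 3

grid = [i / 10 for i in range(11)]
best = max(
    ((c, a) for c in grid for a in grid),
    key=lambda targets: utility(*targets) - cost(*targets),
)
c, a = best
print(f"best configuration: completeness target {c:.1f}, accuracy target {a:.1f}, "
      f"net benefit {utility(c, a) - cost(c, a):.1f}")
```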


Journal of Computer Information Systems | 2015

Utility Cost Perspectives in Data Quality Management

Adir Even; Ganesan Shankaranarayanan

The growing costs of managing data demand a closer examination of the associated cost-benefit tradeoffs. As a step towards developing an economic perspective of data management, specifically data quality management, this study describes a value-driven model of data products and the processes that produce them. The contribution to benefit (utility) is associated with the use of data products, and costs are attributed to the different data processing stages. Utility/cost tradeoffs are thus linked to design and administrative decisions at the different processing stages. By modeling and quantifying the economic impact of these decisions, this study shows how economically superior data quality management policies may be developed. To illustrate this, the study uses the model to develop a data quality management policy for online error correction. The results indicate that decisions that consider economic tradeoffs can be very different compared with decisions that are driven by technical and functional requirements only.
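An online error-correction policy of this kind can be pictured, in a highly simplified and hypothetical form, as a threshold rule: fix a detected defect only when its expected utility gain exceeds the correction cost. The sketch below uses invented figures and is not the paper's actual policy model.

```python
# Hypothetical threshold policy for online error correction: fix a detected
# defect only if the expected utility gain exceeds the correction cost.
# All figures are illustrative assumptions, not the paper's model.

from dataclasses import dataclass

@dataclass
class Defect:
    record_id: str
    expected_utility_gain: float  # value recovered if the defect is fixed
    correction_cost: float        # cost of fixing it at data-entry time

def should_correct(defect: Defect) -> bool:
    return defect.expected_utility_gain > defect.correction_cost

queue = [
    Defect("r1", expected_utility_gain=8.0, correction_cost=2.0),
    Defect("r2", expected_utility_gain=0.5, correction_cost=2.0),
    Defect("r3", expected_utility_gain=3.0, correction_cost=2.5),
]

for d in queue:
    action = "correct" if should_correct(d) else "defer"
    print(d.record_id, action)
```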


International Journal of Business | 2013

How Business Intelligence Creates Value: An Empirical Investigation

Nir Yogev; Adir Even; Lior Fink

This study examines the business value associated with business intelligence (BI) systems, based on the premise that business value is largely contingent on system type and its unique contribution. The study adopts a process-oriented approach to evaluating the value contribution of BI, arguing that it stems from improvements in business processes. The study develops and tests a research model that explains the unique mechanisms through which BI creates business value. The model draws on the resource-based view to identify key assets and capabilities that determine the impact of BI on business processes and, consequently, on organizational performance. Analysis of data collected from 159 managers and IT/BI experts, using structural equation modeling (SEM) techniques, shows that BI largely contributes to business value by improving both operational and strategic business processes.

Collaboration


Dive into Adir Even's collaborations.

Top Co-Authors

Yoav Kolodner, Ben-Gurion University of the Negev

Lior Fink, Ben-Gurion University of the Negev

Yisrael Parmet, Ben-Gurion University of the Negev

Alisa Wechsler, Ben-Gurion University of the Negev

Nir Yogev, Ben-Gurion University of the Negev

Elad Moskovitz, Ben-Gurion University of the Negev

Marina Vugalter, Ben-Gurion University of the Negev