Sabah Currim
University of Arizona
Publication
Featured research published by Sabah Currim.
Extending Database Technology | 2004
Faiz Currim; Sabah Currim; Curtis E. Dyreson; Richard T. Snodgrass
The W3C XML Schema recommendation defines the structure and data types for XML documents. XML Schema lacks explicit support for time-varying XML documents. Users have to resort to ad hoc, non-standard mechanisms to create schemas for time-varying XML documents. This paper presents a data model and architecture, called τXSchema, for creating a temporal schema from a non-temporal (snapshot) schema, a temporal annotation, and a physical annotation. The annotations specify which portion(s) of an XML document can vary over time, how the document can change, and where timestamps should be placed. The advantage of using annotations to denote the time-varying aspects is that logical and physical data independence for temporal schemas can be achieved while remaining fully compatible with both existing XML Schema documents and the XML Schema recommendation.
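To make the three-part design concrete, here is a minimal sketch of how a snapshot schema, a temporal annotation, and a physical annotation might be combined; the names and annotation structure below are illustrative placeholders, not the actual τXSchema vocabulary.

```python
# Illustrative only: invented structures, not the real τXSchema annotation syntax.
SNAPSHOT_SCHEMA = "company.xsd"  # a conventional, non-temporal XML Schema

temporal_annotation = {
    # which parts of the document may vary over time, and how
    "/company/employee": {"time": "valid", "content": "varying"},
    "/company/employee/salary": {"time": "valid", "content": "varying"},
}

physical_annotation = {
    # where timestamps are physically placed, and at what granularity
    "/company/employee": {"timestamp": "attribute", "granularity": "day"},
}

def build_temporal_schema(snapshot, temporal, physical):
    """Combine the three inputs into one logical temporal-schema description."""
    return {"snapshot": snapshot, "temporal": temporal, "physical": physical}

print(build_temporal_schema(SNAPSHOT_SCHEMA, temporal_annotation, physical_annotation))
```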
Data and Knowledge Engineering | 2008
Richard T. Snodgrass; Curtis E. Dyreson; Faiz Currim; Sabah Currim; Shailesh Joshi
The W3C XML Schema recommendation defines the structure and data types for XML documents, but lacks explicit support for time-varying XML documents or for a time-varying schema. In previous work we introduced τXSchema, which is an infrastructure and suite of tools to support the creation and validation of time-varying documents, without requiring any changes to XML Schema. In this paper we extend τXSchema to support versioning of the schema itself. We introduce the concept of a bundle, which is an XML document that references a base (non-temporal) schema, temporal annotations describing how the document can change, and physical annotations describing where timestamps are placed. When the schema is versioned, the base schema and the temporal and physical schemas can themselves be time-varying documents, each with their own (possibly versioned) schemas. We describe how the validator can be extended to validate documents in this seemingly precarious situation of data that changes over time, while its schema and even its representation are also changing.
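A minimal sketch of the bundle idea, assuming invented field names rather than the actual bundle format: the bundle references a versioned base schema and its annotations, and a reader picks the schema version in force at a given time.

```python
# Illustrative sketch of a "bundle" tying together a (possibly versioned)
# base schema with its temporal and physical annotations; names are made up.
bundle = {
    "base_schema": [
        {"version": 1, "valid_from": "2004-01-01", "href": "company_v1.xsd"},
        {"version": 2, "valid_from": "2006-07-01", "href": "company_v2.xsd"},
    ],
    "temporal_annotations": {"href": "company_temporal.xml"},
    "physical_annotations": {"href": "company_physical.xml"},
}

def schema_version_at(bundle, date):
    """Pick the base-schema version in force on a given ISO date."""
    versions = [v for v in bundle["base_schema"] if v["valid_from"] <= date]
    return max(versions, key=lambda v: v["valid_from"]) if versions else None

print(schema_version_at(bundle, "2005-03-15")["href"])  # -> company_v1.xsd
```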
Data and Knowledge Engineering | 2012
Faiz Currim; Sabah Currim; Curtis E. Dyreson; Richard T. Snodgrass; Stephen W. Thomas; Rui Zhang
If past versions of XML documents are retained, what of the various integrity constraints defined in XML Schema on those documents? This paper describes how to interpret such constraints as sequenced constraints, applicable at each point in time. We also consider how to add new variants that apply across time, so-called nonsequenced constraints. Our approach supports temporal documents that vary over both valid and transaction time, whose schema can vary over transaction time. We do this by replacing the schema with a (possibly time-varying) temporal schema and replacing the document with a temporal document, both of which are upward compatible with conventional XML and with conventional tools like XMLLINT, which we have extended to support the temporal constraints introduced here.
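The sequenced/nonsequenced distinction can be illustrated with a toy example (the data and constraints below are invented for illustration): a sequenced constraint is checked at each instant separately, while a nonsequenced constraint relates values across instants.

```python
# Toy versioned data: one employee's salary observed at successive instants.
history = [
    ("2012-01", {"id": 7, "salary": 50000}),
    ("2012-02", {"id": 7, "salary": 52000}),
    ("2012-03", {"id": 7, "salary": 51000}),   # decrease across time
]

def sequenced_positive_salary(history):
    """Sequenced: at each instant taken separately, salary must be positive (holds here)."""
    return all(snap["salary"] > 0 for _, snap in history)

def nonsequenced_nondecreasing_salary(history):
    """Nonsequenced: a constraint across instants - salary never decreases (fails here)."""
    salaries = [snap["salary"] for _, snap in history]
    return all(a <= b for a, b in zip(salaries, salaries[1:]))

print(sequenced_positive_salary(history), nonsequenced_nondecreasing_salary(history))
```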
International Conference on Conceptual Modeling | 2006
Curtis E. Dyreson; Richard T. Snodgrass; Faiz Currim; Sabah Currim
When web servers publish data formatted in XML, only the current state of the data is (generally) published. But data evolves over time as it is updated. Capturing that evolution is vital to recovering past versions, tracking changes, and evaluating temporal queries. This paper presents a system to build a temporal data collection, which records the history of each published datum rather than just its current state. The key to exchanging temporal data is providing a temporal schema to mediate the interaction between the publisher and the reader. The schema describes how to construct a temporal data collection by “gluing” individual states into an integrated history.
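A rough sketch of the gluing idea on invented data: successive published snapshots of an item are coalesced into versions, each stamped with the period over which it held.

```python
# Successive published states of one datum; identical consecutive states are glued.
snapshots = [
    ("2006-01-01", {"price": 10}),
    ("2006-02-01", {"price": 10}),   # unchanged: extends the previous version
    ("2006-03-01", {"price": 12}),   # changed: starts a new version
]

def glue(snapshots, end="9999-12-31"):
    """Coalesce consecutive identical states into versions with [from, to) periods."""
    history = []
    for i, (time, state) in enumerate(snapshots):
        next_time = snapshots[i + 1][0] if i + 1 < len(snapshots) else end
        if history and history[-1]["state"] == state:
            history[-1]["to"] = next_time          # same state: extend the period
        else:
            history.append({"state": state, "from": time, "to": next_time})
    return history

print(glue(snapshots))
# [{'state': {'price': 10}, 'from': '2006-01-01', 'to': '2006-03-01'},
#  {'state': {'price': 12}, 'from': '2006-03-01', 'to': '9999-12-31'}]
```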
International Conference on Management of Data | 2013
Sabah Currim; Richard T. Snodgrass; Young-Kyoon Suh; Rui Zhang; Matthew Wong Johnson; Cheng Yi
It is surprisingly hard to obtain accurate and precise measurements of the time spent executing a query. We review relevant process and overall measures obtainable from the Linux kernel and introduce a structural causal model relating these measures. A thorough correlational analysis provides strong support for this model. Using this model, we developed a timing protocol that (1) performs sanity checks to ensure the validity of the data, (2) drops some query executions via clearly motivated predicates, (3) drops some entire queries at a cardinality, again via clearly motivated predicates, (4) computes, for each query that remains, a single measured time by a carefully justified formula over the underlying measures of its remaining executions, and (5) performs post-analysis sanity checks. The resulting query time measurement procedure, termed the Tucson Protocol, applies to proprietary and open-source DBMSes.
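The flavor of such a protocol can be sketched as a small filtering-and-aggregation pipeline; the predicates, thresholds, and final formula below are placeholders, not the actual Tucson Protocol.

```python
import statistics

def measured_time(executions):
    """executions: per-execution dicts of process/overall measures, in seconds."""
    # (1) sanity check: elapsed time should never be less than CPU time consumed
    valid = [e for e in executions if e["elapsed"] >= e["user"] + e["system"]]
    # (2) drop executions perturbed by other activity, via a simple predicate
    quiet = [e for e in valid if e["involuntary_switches"] < 100]
    if len(quiet) < 3:
        return None            # (3) too few clean executions: drop the whole query
    # (4) combine the remaining executions into a single measured time
    return statistics.median(e["elapsed"] for e in quiet)

runs = [
    {"elapsed": 1.02, "user": 0.60, "system": 0.30, "involuntary_switches": 4},
    {"elapsed": 1.05, "user": 0.61, "system": 0.31, "involuntary_switches": 7},
    {"elapsed": 3.90, "user": 0.60, "system": 0.30, "involuntary_switches": 950},  # perturbed
    {"elapsed": 1.01, "user": 0.59, "system": 0.29, "involuntary_switches": 5},
]
print(measured_time(runs))  # median of the three quiet runs -> 1.02
```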
Data and Knowledge Engineering | 2007
Curtis E. Dyreson; Richard T. Snodgrass; Faiz Currim; Sabah Currim; Shailesh Joshi
In aspect-oriented programming (AOP) a cross-cutting concern is implemented in an aspect. An aspect weaver blends code from the aspect into a program's code at programmer-specified cut points, yielding an aspect-enhanced program. In this paper, we apply some of the concepts from the AOP paradigm to data. Like code, data also has cross-cutting concerns such as versioning, security, privacy, and reliability. We propose modeling a cross-cutting data concern as a schema aspect. A schema aspect describes the structure of the metadata in the cross-cutting concern, identifies the types of data elements that can be wrapped with metadata, i.e., the cut points, and provides some simple constraints on the use of the metadata. Several schema aspects can be applied to a single data collection, though in this paper we focus on just two aspects: a reliability aspect and a temporal aspect. We show how to weave the schema for these two aspects together with the schema for the data into a single, unified schema that we call a schema tapestry. The tapestry guides the construction, interpretation, and validation of an aspect-enhanced data collection.
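A minimal sketch of weaving, using invented schema and aspect structures: each aspect names its cut points and the metadata to wrap around them, and the weaver merges them into one tapestry.

```python
# Illustrative sketch of weaving two schema aspects (temporal and reliability)
# into a base schema at declared cut points; names and structure are invented.
base_schema = {"book": ["title", "price"], "author": ["name"]}

temporal_aspect = {"cut_points": ["book"], "metadata": ["valid_from", "valid_to"]}
reliability_aspect = {"cut_points": ["book", "author"], "metadata": ["source", "confidence"]}

def weave(schema, *aspects):
    """Wrap each cut-point element with the aspect's metadata fields."""
    tapestry = {elem: list(fields) for elem, fields in schema.items()}
    for aspect in aspects:
        for elem in aspect["cut_points"]:
            tapestry[elem] = tapestry[elem] + aspect["metadata"]
    return tapestry

print(weave(base_schema, temporal_aspect, reliability_aspect))
```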
The Reference Librarian | 2011
Sabah Currim
Experiential learning helps students achieve higher levels of expertise in people skills and critical thinking, skills that libraries would like their future employees to have. However, experiential learning is time consuming and hard to incorporate in a classroom setting. The Internet Public Library (ipl2) has developed a methodology that allows instructors to easily incorporate experiential learning into the classroom; ipl2 offers course modules for instructors that include content such as syllabi, lectures, and assignments. In addition, ipl2 provides the technology, a suitable environment, and experienced staff support to help instructors get started quickly in offering their students an experiential learning experience.
Information Systems | 2014
Sabah Currim; Sudha Ram; Alexandra Durcikova; Faiz Currim
Conceptual data modeling is a critical but difficult part of database development. Little research has attempted to find the underlying causes of the cognitive challenges or errors made during this stage. This paper describes a Modeling Expertise Framework (MEF) that uses modeler expertise to predict errors, based on the revised Bloom's taxonomy (RBT). The utility of RBT lies in providing a classification of cognitive processes that can be applied to knowledge activities such as conceptual modeling. We employ the MEF to map conceptual modeling tasks to different levels of cognitive complexity and to classify current modeler expertise levels. An experimental exercise confirms our predictions of errors. Our work provides insight into why novices can handle entity classes and identify binary relationships with some ease, but find other components, such as ternary relationships, difficult. We discuss implications for data modeling training at the novice and intermediate levels, which can be extended to other areas of Information Systems education and training.
Information Systems | 2017
Young-Kyoon Suh; Richard T. Snodgrass; Sabah Currim
Modern DBMSes are designed to support many transactions running simultaneously. DBMS thrashing is indicated by the existence of a sharp drop in transaction throughput. Thrashing behavior in DBMSes is a serious concern to database administrators (DBAs) as well as to DBMS implementers. From an engineering perspective, therefore, it is of critical importance to understand the causal factors of DBMS thrashing. However, understanding the origin of thrashing in modern DBMSes is challenging, due to many factors that may interact with each other. This article aims to better understand the thrashing phenomenon across multiple DBMSes. We identify some of the underlying causes of DBMS thrashing. We then propose a novel structural causal model to explicate the relationships between various factors contributing to DBMS thrashing. Our model derives a number of specific hypotheses to be subsequently tested across DBMSes, providing empirical support for this model as well as important engineering implications for improvements in transaction processing.
Highlights:
We propose a novel structural causal model to explain the origins of DBMS thrashing.
We present a space of hypotheses that can refine and support the proposed model.
We extend a recent DBMS-centric research infrastructure for thrashing experiments.
We propose a novel analysis protocol that can be applied to the obtained thrashing data.
We suggest engineering implications to DBMS developers and DBAs for better performance.
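As a toy illustration of the throughput-drop signature (not the article's analysis protocol), one can scan throughput as the multiprogramming level grows and flag the first sharp drop:

```python
# Toy detector for the "sharp drop in transaction throughput" that signals thrashing:
# report the first multiprogramming level (MPL) at which throughput falls by more
# than a chosen fraction relative to the previous level. Threshold is arbitrary.
def thrashing_point(throughput_by_mpl, drop=0.5):
    """throughput_by_mpl: list of (mpl, transactions_per_second), sorted by mpl."""
    for (m0, t0), (m1, t1) in zip(throughput_by_mpl, throughput_by_mpl[1:]):
        if t0 > 0 and t1 < (1 - drop) * t0:
            return m1              # thrashing sets in at this MPL
    return None

samples = [(100, 950), (200, 1800), (300, 1900), (400, 600), (500, 200)]
print(thrashing_point(samples))    # -> 400
```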
ACM Transactions on Database Systems | 2017
Sabah Currim; Richard T. Snodgrass; Young-Kyoon Suh; Rui Zhang