Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Faiz Currim is active.

Publication


Featured research published by Faiz Currim.


Extending Database Technology | 2004

A Tale of Two Schemas: Creating a Temporal XML Schema from a Snapshot Schema with τXSchema

Faiz Currim; Sabah Currim; Curtis E. Dyreson; Richard T. Snodgrass

The W3C XML Schema recommendation defines the structure and data types for XML documents. XML Schema lacks explicit support for time-varying XML documents. Users have to resort to ad hoc, non-standard mechanisms to create schemas for time-varying XML documents. This paper presents a data model and architecture, called τXSchema, for creating a temporal schema from a non-temporal (snapshot) schema, a temporal annotation, and a physical annotation. The annotations specify which portion(s) of an XML document can vary over time, how the document can change, and where timestamps should be placed. The advantage of using annotations to denote the time-varying aspects is that logical and physical data independence for temporal schemas can be achieved while remaining fully compatible with both existing XML Schema documents and the XML Schema recommendation.
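
The three-part design is easy to picture in code. Below is a minimal sketch in Python (the class and file names are hypothetical, not the actual τXSchema toolchain): a temporal schema only references its snapshot schema and its two annotations, which is what lets each part evolve independently.

    from dataclasses import dataclass

    @dataclass
    class TemporalSchema:
        """Illustrative stand-in for a temporal schema: it merely references
        its three component documents, so each can change independently
        (logical and physical data independence)."""
        snapshot_schema: str      # conventional XML Schema, e.g. a .xsd URI
        temporal_annotation: str  # which elements vary over time, and how
        physical_annotation: str  # where timestamps are placed

    schema = TemporalSchema(
        snapshot_schema="book.xsd",
        temporal_annotation="book-temporal.xml",
        physical_annotation="book-physical.xml",
    )
    print(schema)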


Data and Knowledge Engineering | 2008

Validating quicksand: Temporal schema versioning in τXSchema

Richard T. Snodgrass; Curtis E. Dyreson; Faiz Currim; Sabah Currim; Shailesh Joshi

The W3C XML Schema recommendation defines the structure and data types for XML documents, but lacks explicit support for time-varying XML documents or for a time-varying schema. In previous work we introduced τXSchema, which is an infrastructure and suite of tools to support the creation and validation of time-varying documents, without requiring any changes to XML Schema. In this paper we extend τXSchema to support versioning of the schema itself. We introduce the concept of a bundle, which is an XML document that references a base (non-temporal) schema, temporal annotations describing how the document can change, and physical annotations describing where timestamps are placed. When the schema is versioned, the base schema and temporal and physical schemas can themselves be time-varying documents, each with their own (possibly versioned) schemas. We describe how the validator can be extended to validate documents in this seemingly precarious situation of data that changes over time, while its schema and even its representation are also changing.
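
The bundle can be sketched as a plain data structure: each component is a sequence of versions, and validating a slice of a document first resolves which component version was in effect at that instant. A minimal illustration (hypothetical layout, not the τXSchema implementation):

    from dataclasses import dataclass, field

    @dataclass
    class SchemaVersion:
        """One version of a component schema, valid over [begin, end)."""
        uri: str
        begin: str  # ISO dates, for illustration
        end: str

    @dataclass
    class Bundle:
        """Illustrative bundle: references to a (possibly versioned) base
        schema plus temporal and physical annotations."""
        base: list[SchemaVersion] = field(default_factory=list)
        temporal: list[SchemaVersion] = field(default_factory=list)
        physical: list[SchemaVersion] = field(default_factory=list)

        @staticmethod
        def version_at(versions: list[SchemaVersion], instant: str) -> str:
            """Pick the component version in effect at an instant, so a
            document slice is validated against the right schema."""
            for v in versions:
                if v.begin <= instant < v.end:
                    return v.uri
            raise LookupError(f"no version in effect at {instant}")

    bundle = Bundle(base=[SchemaVersion("book-v1.xsd", "2004-01-01", "2006-01-01"),
                          SchemaVersion("book-v2.xsd", "2006-01-01", "9999-12-31")])
    print(Bundle.version_at(bundle.base, "2005-06-15"))  # book-v1.xsd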


Data and Knowledge Engineering | 2012

Adding Temporal Constraints to XML Schema

Faiz Currim; Sabah Currim; Curtis E. Dyreson; Richard T. Snodgrass; Stephen W. Thomas; Rui Zhang

If past versions of XML documents are retained, what of the various integrity constraints defined in XML Schema on those documents? This paper describes how to interpret such constraints as sequenced constraints, applicable at each point in time. We also consider how to add new variants that apply across time, so-called nonsequenced constraints. Our approach supports temporal documents that vary over both valid and transaction time, whose schema can vary over transaction time. We do this by replacing the schema with a (possibly time-varying) temporal schema and replacing the document with a temporal document, both of which are upward compatible with conventional XML and with conventional tools like XMLLINT, which we have extended to support the temporal constraints introduced here.
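
The sequenced/nonsequenced distinction is easy to illustrate. In the sketch below (a hypothetical data layout, not the paper's XML encoding), a sequenced key constraint is checked snapshot by snapshot, while a nonsequenced property ranges over the history as a whole:

    history = [
        ("2010", [("isbn-1", "Title A"), ("isbn-2", "Title B")]),
        ("2011", [("isbn-1", "Title A, 2nd ed."), ("isbn-2", "Title B")]),
    ]

    def sequenced_key_ok(history):
        """Sequenced reading of a key constraint: uniqueness must hold
        within every snapshot, i.e. at each point in time."""
        for _, snapshot in history:
            keys = [k for k, _ in snapshot]
            if len(keys) != len(set(keys)):
                return False
        return True

    def nonsequenced_change_count(history, key):
        """Nonsequenced reading: a property of the whole history, e.g.
        how many times an element's value changed across versions."""
        values = [v for _, snap in history for k, v in snap if k == key]
        return sum(1 for a, b in zip(values, values[1:]) if a != b)

    print(sequenced_key_ok(history))                     # True
    print(nonsequenced_change_count(history, "isbn-1"))  # 1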


International Conference on Conceptual Modeling | 2006

Schema-mediated exchange of temporal XML data

Curtis E. Dyreson; Richard T. Snodgrass; Faiz Currim; Sabah Currim

When web servers publish data formatted in XML, only the current state of the data is (generally) published. But data evolves over time as it is updated. Capturing that evolution is vital to recovering past versions, tracking changes, and evaluating temporal queries. This paper presents a system to build a temporal data collection, which records the history of each published datum rather than just its current state. The key to exchanging temporal data is providing a temporal schema to mediate the interaction between the publisher and the reader. The schema describes how to construct a temporal data collection by “gluing” individual states into an integrated history.
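
Gluing can be sketched as a fold over successive published states: items with the same key are coalesced into versions, and a version's period is extended for as long as its value stays the same. An illustrative sketch (hypothetical layout, not the paper's encoding):

    def glue(snapshots, key):
        """Fold a sequence of (instant, items) states into one history per
        key, coalescing runs where the item is unchanged."""
        history = {}
        for instant, items in snapshots:
            for item in items:
                versions = history.setdefault(item[key], [])
                if versions and versions[-1]["value"] == item:
                    versions[-1]["end"] = instant  # value unchanged: extend
                else:
                    versions.append({"begin": instant, "end": instant, "value": item})
        return history

    snapshots = [
        ("2020-01", [{"id": 1, "price": 10}]),
        ("2020-02", [{"id": 1, "price": 10}]),
        ("2020-03", [{"id": 1, "price": 12}]),
    ]
    print(glue(snapshots, "id")[1])
    # two versions: price 10 over 2020-01..2020-02, then price 12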


Information Systems Research | 2012

Modeling Spatial and Temporal Set-Based Constraints During Conceptual Database Design

Faiz Currim; Sudha Ram

From a database perspective, business constraints provide an accurate picture of the real world being modeled and help enforce data integrity. Typically, rules are gathered during requirements analysis and embedded in code during the implementation phase. We propose that the rules be explicitly modeled during conceptual design, and develop a framework for understanding and classifying spatiotemporal set-based (cardinality) constraints and an associated syntax. The constraint semantics are formally specified using first-order logic. Modeling rules in conceptual design ensures they are visible to designers and users and not buried in application code. The rules can then be semiautomatically translated into logical design triggers yielding productivity gains. Following the principles of design science research, we evaluate the framework's expressiveness and utility with a case study.
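
To give a flavor of the first-order style of specification, here is a hypothetical sequenced maximum-cardinality constraint (an invented example, not one from the paper): at every instant, each ambulance is stationed in at most one district.

    % Hypothetical sequenced max-cardinality constraint in first-order logic
    \forall t\, \forall a\, \forall d_1\, \forall d_2\;
      \bigl( \mathit{StationedIn}(a, d_1, t) \land \mathit{StationedIn}(a, d_2, t)
             \rightarrow d_1 = d_2 \bigr)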


Data and Knowledge Engineering | 2007

Weaving temporal and reliability aspects into a schema tapestry

Curtis E. Dyreson; Richard T. Snodgrass; Faiz Currim; Sabah Currim; Shailesh Joshi

In aspect-oriented programming (AOP) a cross-cutting concern is implemented in an aspect. An aspect weaver blends code from the aspect into a program's code at programmer-specified cut points, yielding an aspect-enhanced program. In this paper, we apply some of the concepts from the AOP paradigm to data. Like code, data also has cross-cutting concerns such as versioning, security, privacy, and reliability. We propose modeling a cross-cutting data concern as a schema aspect. A schema aspect describes the structure of the metadata in the cross-cutting concern, identifies the types of data elements that can be wrapped with metadata, i.e., the cut points, and provides some simple constraints on the use of the metadata. Several schema aspects can be applied to a single data collection, though in this paper we focus on just two aspects: a reliability aspect and a temporal aspect. We show how to weave the schema for these two aspects together with the schema for the data into a single, unified schema that we call a schema tapestry. The tapestry guides the construction, interpretation, and validation of an aspect-enhanced data collection.
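
Weaving can be pictured as merging each aspect's metadata fields into the base schema at its declared cut points. A toy sketch (hypothetical structures, not the paper's XML machinery):

    def weave(base_schema, aspects):
        """Wrap the element types named as cut points with each aspect's
        metadata fields, yielding a unified 'tapestry'."""
        tapestry = {elem: dict(fields) for elem, fields in base_schema.items()}
        for aspect in aspects:
            for elem in aspect["cut_points"]:
                if elem in tapestry:
                    tapestry[elem].update(aspect["metadata"])
        return tapestry

    base = {"price": {"type": "decimal"}, "title": {"type": "string"}}
    temporal = {"cut_points": ["price"],
                "metadata": {"begin": "date", "end": "date"}}
    reliability = {"cut_points": ["price"],
                   "metadata": {"source": "string", "confidence": "float"}}

    # price now carries timestamps and reliability metadata; title is untouched
    print(weave(base, [temporal, reliability]))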


International Conference on Conceptual Modeling | 2010

The CARD system

Faiz Currim; Nicholas Neidig; Alankar Kampoowale; Girish Mhatre

We describe a CASE tool (the CARD system) that allows users to represent and translate ER schemas, along with more advanced cardinality constraints (such as participation, co-occurrence and projection [1]). The CARD system supports previous research that proposes representing constraints at the conceptual design phase [1], and builds upon work presenting a framework for establishing completeness of cardinality and the associated SQL translation [2]. From a teaching perspective, instructors can choose to focus student efforts on data modeling and design, and leave the time-consuming and error-prone aspect of SQL script generation to the CARD system. Graduate-level classes can take advantage of support for more advanced constraints.
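
To give a flavor of the SQL-generation step, here is a sketch of the kind of trigger such a tool might emit for a participation (minimum-cardinality) constraint. The SQL is SQLite-flavored and the table and column names are hypothetical, not CARD's actual output:

    def participation_trigger(child, fk, parent, minimum=1):
        """Emit a trigger enforcing that every parent row keeps at least
        `minimum` referencing child rows when child rows are deleted. A
        real tool would also cover inserts/updates and other dialects."""
        return f"""
    CREATE TRIGGER {child}_{parent}_participation
    AFTER DELETE ON {child}
    BEGIN
      SELECT RAISE(ABORT, 'participation constraint violated')
      WHERE (SELECT COUNT(*) FROM {child} WHERE {fk} = OLD.{fk}) < {minimum};
    END;"""

    print(participation_trigger("enrollment", "course_id", "course"))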


Information Systems | 2013

A maintenance centric approach to the view selection problem

Ray Hylock; Faiz Currim

The View Selection Problem is an optimization problem designed to enhance query performance through the pre-computation and storage of selected views, given resource constraints. Ensuring that the materialized views can be updated within a reasonable time frame has become a chief concern for recent models. However, these methods are crafted simply to fit a solution within a feasible range, not to minimize the resource-intensive maintenance process. In this paper, we present two advances, in model formulation and in solution generation, that reduce maintenance costs. Our proposed model, the Minimum-Maintenance View Selection Problem, combines previous techniques to both minimize and constrain update costs. We also define a series of maintenance-time-reducing principles for solution generation, embodied in a constructor heuristic. The model and constructor heuristic are evaluated against state-of-the-art heuristics on an existing clinical data warehouse. Our analysis shows that our model produces the lowest-cost solution relative to extant models, and that algorithms seeded with our constructor heuristic outperform all other methods tested.
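
For contrast with the paper's approach, a deliberately simple greedy constructor shows what such a heuristic does: materialize views by query benefit per unit of maintenance cost until a maintenance budget is exhausted. This sketch is an illustration under invented data, not the paper's algorithm:

    def constructor(views, maintenance_budget):
        """Greedy seed solution: rank views by benefit per unit of
        maintenance cost and add them while the budget allows."""
        chosen, spent = [], 0.0
        ranked = sorted(views, key=lambda v: v["benefit"] / v["maint"],
                        reverse=True)
        for v in ranked:
            if spent + v["maint"] <= maintenance_budget:
                chosen.append(v["name"])
                spent += v["maint"]
        return chosen, spent

    views = [
        {"name": "v_daily_census", "benefit": 90.0, "maint": 10.0},
        {"name": "v_lab_rollup",   "benefit": 60.0, "maint": 30.0},
        {"name": "v_rx_summary",   "benefit": 25.0, "maint": 5.0},
    ]
    print(constructor(views, maintenance_budget=20.0))
    # (['v_daily_census', 'v_rx_summary'], 15.0)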


Proceedings of the 1st ACM International Workshop on Medical-Grade Wireless Networks | 2009

Privacy policy enforcement for health information data access

Faiz Currim; Eunjin Jung; Xin Xiao; Insoon Jo

Wireless technology is steadily improving the access and cost-effectiveness of healthcare data management. With the growth in information access comes the challenge of maintaining patient record privacy and security. Our work develops an algorithm to evaluate ad hoc user queries against database policies. We consider an efficient evaluation algorithm, defined at the schema level, based on a classification of attributes in the policy and the query (both of which can be written in SQL). Our algorithm can also be used for policy integration, and it scales well with the query sizes typical of mobile devices.
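
The schema-level idea can be sketched as a classification pass that runs before any data is touched: every attribute a query references is checked against the policy. The structures below are hypothetical, not the paper's algorithm:

    def classify_query(query_attrs, policy):
        """Classify each attribute the query touches as denied,
        conditional (e.g. allowed only for the treating physician),
        or permitted under the applicable policy."""
        decision = {}
        for attr in query_attrs:
            if attr in policy["deny"]:
                decision[attr] = "deny"
            elif attr in policy["conditional"]:
                decision[attr] = "conditional"
            else:
                decision[attr] = "permit"
        return decision

    policy = {"deny": {"patient.ssn"}, "conditional": {"patient.diagnosis"}}
    query_attrs = ["patient.name", "patient.diagnosis", "patient.ssn"]
    print(classify_query(query_attrs, policy))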


International Conference on Digital Health | 2016

Feature Importance and Predictive Modeling for Multi-source Healthcare Data with Missing Values

Karthik Srinivasan; Faiz Currim; Sudha Ram; Casey Lindberg; Esther M. Sternberg; Perry Skeath; Bijan Najafi; Javad Razjouyan; Hyoki Lee; Colin Foe-Parker; Nicole Goebel; Reuben Herzl; Matthias R. Mehl; Brian Gilligan; Judith Heerwagen; Kevin Kampschroer; Kelli Canada

With rapid development of sensor technologies and the internet of things, research in the area of connected health is increasing in importance and complexity with wide-reaching impacts for public health. As data sources such as mobile (wearable) sensors get cheaper, smaller, and smarter, important research questions can be answered by combining information from multiple data sources. However, integration of multiple heterogeneous data streams often results in a dataset with several empty cells or missing values. The challenge is to use such sparsely populated integrated datasets without compromising model performance. Naïve approaches for dataset modification such as discarding observations or ad-hoc replacement of missing values often lead to misleading results. In this paper, we discuss and evaluate current best-practices for modeling such data with missing values and then propose an ensemble-learning based sparse-data modeling framework. We develop a predictive model using this framework and compare it with existing models using a study in a healthcare setting. Instead of generating a single score on variable/feature importance, our framework enables the user to understand the importance of a variable based on the existing data values and their localized impact on the outcome.
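
One way to realize such a framework, shown here as an assumption-laden sketch rather than the paper's method, is to fit one sub-model per pattern of observed features, so that no observation is discarded and no missing value is imputed:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_pattern_ensemble(X, y):
        """Fit one sub-model per missingness pattern, each trained only on
        the rows with that pattern and the columns observed in it."""
        models = {}
        patterns = [tuple(row) for row in ~np.isnan(X)]
        for pattern in set(patterns):
            rows = [i for i, p in enumerate(patterns) if p == pattern]
            cols = [j for j, seen in enumerate(pattern) if seen]
            if len(set(y[rows])) > 1:  # need both classes to fit
                models[pattern] = LogisticRegression().fit(
                    X[np.ix_(rows, cols)], y[rows])
        return models

    def predict_one(models, x):
        """Route a new observation to the sub-model for its pattern."""
        pattern = tuple(~np.isnan(x))
        cols = [j for j, seen in enumerate(pattern) if seen]
        model = models.get(pattern)
        return None if model is None else int(model.predict(
            x[cols].reshape(1, -1))[0])

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    X[rng.random(200) < 0.3, 2] = np.nan  # third sensor often missing
    models = fit_pattern_ensemble(X, y)
    print(predict_one(models, np.array([0.5, 0.5, np.nan])))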

Collaboration


Dive into Faiz Currim's collaborations.

Top Co-Authors

Sudha Ram (University of Arizona)
Curtis E. Dyreson (Washington State University)
Yun Wang (University of Arizona)
Bijan Najafi (Baylor College of Medicine)
Hyoki Lee (Baylor College of Medicine)
Javad Razjouyan (Baylor College of Medicine)