Publication


Featured research published by David Chiu.


International Conference on Supercomputing | 2010

Compiler and runtime support for enabling generalized reduction computations on heterogeneous parallel configurations

Vignesh T. Ravi; Wenjing Ma; David Chiu; Gagan Agrawal

A trend that has materialized, and has attracted much attention, is the increasing heterogeneity of computing platforms. Presently, it is very common for a desktop or notebook computer to come equipped with both a multi-core CPU and a GPU. Capitalizing on the maximum computational power of such architectures (i.e., by simultaneously exploiting both the multi-core CPU and the GPU) starting from a high-level API is a critical challenge. We believe it would be highly desirable to support a simple way for programmers to realize the full potential of today's heterogeneous machines. This paper describes a compiler and runtime framework that can map a class of applications, namely those characterized by generalized reductions, to a system with a multi-core CPU and a GPU. Starting with simple C functions with added annotations, we automatically generate the middleware API code for the multi-core CPU, as well as CUDA code, to exploit the GPU simultaneously. The runtime system provides efficient schemes for dynamically partitioning the work between the CPU cores and the GPU. Our experimental results from two applications, k-means clustering and Principal Component Analysis (PCA), show that, by effectively harnessing the heterogeneous architecture, we can achieve significantly higher performance than using only the GPU or only the multi-core CPU. In k-means, the heterogeneous version with 8 CPU cores and a GPU achieved a speedup of about 32.09x relative to the 1-thread CPU version. Compared to the faster of the CPU-only and GPU-only executions, we achieved a performance gain of about 60%. In PCA, the heterogeneous version attained a speedup of 10.4x relative to the 1-thread CPU version. Compared to the faster of the CPU-only and GPU-only versions, we achieved a performance gain of about 63.8%.
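The generalized-reduction pattern and the dynamic CPU/GPU work partitioning the abstract describes can be illustrated with a minimal sketch. This is not the paper's system: the two threads below merely stand in for the CPU cores and the GPU, each pulling chunks from a shared queue and keeping a private partial result that is combined at the end; all names are illustrative.

```python
import threading
import queue

def partition_reduce(data, chunk_size, reduce_fn, combine_fn, init):
    """Dynamically partition `data` into chunks pulled by competing
    workers (two threads standing in for CPU cores and GPU); each
    worker accumulates a private partial result, and the partials are
    combined at the end -- the generalized-reduction pattern."""
    chunks = queue.Queue()
    for i in range(0, len(data), chunk_size):
        chunks.put(data[i:i + chunk_size])

    partials = []
    lock = threading.Lock()

    def worker():
        acc = init
        while True:
            try:
                chunk = chunks.get_nowait()   # dynamic: faster device grabs more chunks
            except queue.Empty:
                break
            for item in chunk:
                acc = reduce_fn(acc, item)
        with lock:
            partials.append(acc)

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    result = init
    for p in partials:                        # merge per-device partial results
        result = combine_fn(result, p)
    return result

# Usage: a global sum as the reduction.
total = partition_reduce(list(range(1000)), 64,
                         reduce_fn=lambda a, x: a + x,
                         combine_fn=lambda a, b: a + b,
                         init=0)
# total == 499500
```

Because each worker only combines into its own accumulator, no synchronization is needed inside the reduction loop; the only merge happens once per worker at the end, which is what makes the pattern map well to both multi-core and GPU backends.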


International Conference on Web Services | 2009

A Dynamic Approach toward QoS-Aware Service Workflow Composition

David Chiu; Sagar Deshpande; Gagan Agrawal; Rongxing Li

Web service-based workflow management systems have garnered considerable attention for automating and scheduling dependent operations. Such systems often support user preferences, e.g., time of completion, but with the rebirth of distributed computing via the grid and cloud, new challenges abound: multiple disparate data sources, networks, and nodes, and the potential for moving very large datasets. In this paper, we present a framework for integrating QoS support in a service workflow composition system. The relationship between workflow execution time and accuracy is exploited through an automatic workflow composition scheme. The algorithm, equipped with a framework for defining cost models on service completion times and error propagation, composes service workflows that can adapt to users' QoS preferences.


Grid Computing | 2008

Cost and accuracy sensitive dynamic workflow composition over grid environments

David Chiu; Sagar Deshpande; Gagan Agrawal; Rongxing Li

A myriad of recent activities can be seen toward dynamic workflow composition for processing complex and data-intensive problems. Meanwhile, the simultaneous emergence of the grid has marked a compelling movement toward making datasets and services available for ubiquitous access. This development presents new challenges for workflow systems, including heterogeneous data repositories and high processing and access times. But beside these problems lie opportunities for exploration: the grid's magnitude offers many paths toward deriving essentially the same information, albeit with varying execution times and errors. We discuss a framework for incorporating QoS in a dynamic workflow composition system in a geospatial context. Specific contributions include a novel workflow composition algorithm that employs QoS-aware a priori pruning and an accuracy adjustment scheme to flexibly adapt workflows to given time restrictions. A performance evaluation of our system suggests that our pruning mechanism makes workflow composition significantly more efficient and that our accuracy adjustment scheme adapts gracefully to time and network limitations.
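The a priori pruning idea can be sketched in a few lines: while enumerating candidate workflows stage by stage, any partial plan whose best-case remaining time already breaks the deadline is cut before its extensions are ever explored. This is a toy reconstruction, not the paper's algorithm; the stages, services, and time estimates below are hypothetical.

```python
def compose(stages, deadline):
    """Enumerate service workflows stage by stage, pruning a priori any
    partial plan whose best-case completion time exceeds the deadline.
    `stages` is a list of stages, each a list of (service, est_time)
    alternatives drawn from a hypothetical cost model."""
    # min_rest[i]: best-case time still needed from stage i onward.
    min_rest = [0] * (len(stages) + 1)
    for i in range(len(stages) - 1, -1, -1):
        min_rest[i] = min_rest[i + 1] + min(t for _, t in stages[i])

    plans = []

    def extend(i, plan, elapsed):
        if elapsed + min_rest[i] > deadline:
            return                          # a priori pruning: no extension can succeed
        if i == len(stages):
            plans.append((plan, elapsed))
            return
        for name, t in stages[i]:
            extend(i + 1, plan + [name], elapsed + t)

    extend(0, [], 0)
    return plans

stages = [[("fetch-fast", 2), ("fetch-accurate", 5)],
          [("interp-coarse", 1), ("interp-fine", 4)]]
feasible = compose(stages, deadline=6)
# Only plans finishing within 6 time units survive, e.g.
# (["fetch-fast", "interp-coarse"], 3); the 9-unit plan is never completed.
```

The pruning test uses a lower bound (`min_rest`), so it can only discard plans that are provably infeasible; every workflow meeting the deadline is still returned.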


ACM Crossroads Student Magazine | 2010

Elasticity in the cloud

David Chiu

Take a second to consider all the essential services and utilities we consume and pay for on a usage basis: water, gas, electricity. For decades, some have suggested that computing be treated under the same model as most other utilities. The case could certainly be made. For instance, a company that supports its own computing infrastructure may suffer the costs of equipment, labor, maintenance, and mounting energy bills. It would be more cost-effective if the company paid a third-party provider for its storage and processing requirements based on time and usage. While this made perfect sense from the client's perspective, the overhead of becoming a computing-as-a-utility provider was prohibitive until recently.

Through advancements in virtualization and the ability to leverage existing supercomputing capacity, utility computing is finally being realized. Known to most as cloud computing, leaders such as Amazon Elastic Compute Cloud (EC2), Azure, Cloudera, and Google's App Engine have already begun offering utility computing to the mainstream.

A simple but interesting property of utility models is elasticity, that is, the ability to stretch and contract services directly according to the consumer's needs. Elasticity has become an essential expectation of all utility providers. When's the last time you plugged in a toaster oven and worried about it not working because the power company might have run out of power? Sure, it's one more device that sucks up power, but you're willing to eat the cost. Likewise, if you switch to a more efficient refrigerator, you would expect the provider to charge you less on your next billing cycle. What elasticity means to cloud users is that they should design their applications to scale their resource requirements up and down whenever possible. However, this is not as easy as plugging or unplugging a toaster oven.
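The scale-up-and-down behavior the article describes can be made concrete with a toy threshold rule, assuming a hypothetical per-instance capacity; real autoscalers add smoothing, cooldowns, and cost models, but the elasticity principle is just this:

```python
import math

def desired_instances(load, per_instance_capacity, lo=1, hi=20):
    """Toy elasticity rule: provision just enough instances to serve the
    current load, clamped to fleet limits, and release them as load
    falls -- pay-per-use in both directions. Capacities are illustrative."""
    return max(lo, min(hi, math.ceil(load / per_instance_capacity)))

trace = [40, 180, 900, 300, 10]            # requests/sec over time
fleet = [desired_instances(l, 100) for l in trace]
# fleet == [1, 2, 9, 3, 1]: the fleet stretches for the spike, then contracts.
```

The hard part, as the article notes, is that applications must be written so that adding or removing an instance actually changes throughput and cost proportionally.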


Statistical and Scientific Database Management | 2009

Enabling Ad Hoc Queries over Low-Level Scientific Data Sets

David Chiu; Gagan Agrawal

Technological success has ushered in massive amounts of data for scientific analysis. To enable effective utilization of these data sets for all classes of users, supporting intuitive data access and manipulation interfaces is crucial. This paper describes an autonomous scientific workflow system that enables high-level, natural-language queries over low-level data sets. Our technique involves a combination of natural language processing, metadata indexing, and a semantically aware workflow composition engine that dynamically constructs workflows for answering queries based on service and data availability. A specific contribution of this work is a metadata registration scheme that allows for a unified index of heterogeneous metadata formats and service annotations. Our approach thus avoids requiring a standardized format for storing all data sets or the implementation of a federated, mediator-based querying framework. We have evaluated our system using a case study from the geospatial domain to show functional results. Our evaluation supports the potential benefits that our approach can offer to scientific workflow systems and other domain-specific, data-intensive applications.
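The unified-metadata-index idea can be sketched simply: heterogeneous metadata records, whatever their original format, are flattened into keywords, and a natural-language query is answered by intersecting the keyword postings. This is an illustration of the concept only; the field names, records, and matching scheme below are hypothetical, not the paper's registration scheme.

```python
def build_index(entries):
    """Flatten heterogeneous metadata records (any dict of fields) into
    a keyword -> {entry names} inverted index, so data sets and service
    annotations can be matched uniformly."""
    index = {}
    for name, meta in entries.items():
        words = set()
        for value in meta.values():
            words.update(str(value).lower().split())
        for w in words:
            index.setdefault(w, set()).add(name)
    return index

def answer(query, index):
    """Match a free-text query by intersecting postings for each known keyword."""
    hits = [index[w] for w in query.lower().split() if w in index]
    if not hits:
        return set()
    out = hits[0]
    for h in hits[1:]:
        out = out & h
    return out

entries = {
    "DEM-LakeErie": {"theme": "elevation terrain", "region": "Lake Erie"},
    "shoreline-svc": {"theme": "shoreline extraction", "region": "Lake Erie"},
}
idx = build_index(entries)
answer("shoreline erie", idx)   # -> {"shoreline-svc"}
```

Because every record is reduced to the same keyword representation at registration time, no global schema or mediator is needed at query time, which is the point the abstract makes.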


Cluster Computing and the Grid | 2009

Hierarchical Caches for Grid Workflows

David Chiu; Gagan Agrawal

From personal software to advanced systems, caching mechanisms have steadfastly been a ubiquitous means of reducing workloads. It is no surprise, then, that under the grid and cluster paradigms, middleware and other large-scale applications often seek caching solutions. Among these distributed applications, scientific workflow management systems have gained ground toward mitigating the often painstaking process of composing sequences of scientific data sets and services to derive virtual data. In the past, workflow managers have relied on low-level system caches for reuse support. But in distributed, query-intensive environments, where high volumes of intermediate virtual data can potentially be stored anywhere on the grid, a novel cache structure is needed to efficiently facilitate workflow planning. In this paper, we describe an approach to combat the challenges of maintaining large, fast virtual data caches for workflow composition. A hierarchical structure is proposed for indexing scientific data with spatiotemporal annotations across grid nodes. Our experimental results show that our hierarchical index is scalable and outperforms a centralized indexing scheme by an exponential factor in query-intensive environments.
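A hierarchical spatiotemporal index of the kind described can be sketched in two levels: a small root index routes a coarse spatial cell to the grid node responsible for it, and each node keeps its own fine-grained index of cached entries. This is a minimal reconstruction of the idea, not the paper's data structure; node names, cell sizes, and locations are illustrative.

```python
class HierarchicalIndex:
    """Two-level sketch of a hierarchical virtual-data index: the root
    maps a coarse spatial cell to a grid node; each node indexes its own
    cached (cell, time) -> data-location entries."""

    def __init__(self, cell_size=10.0):
        self.cell_size = cell_size
        self.root = {}          # coarse cell -> responsible node
        self.node_index = {}    # node -> {(cell, time): location}

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def register(self, node, x, y, t, location):
        cell = self._cell(x, y)
        self.root[cell] = node
        self.node_index.setdefault(node, {})[(cell, t)] = location

    def lookup(self, x, y, t):
        cell = self._cell(x, y)
        node = self.root.get(cell)            # one hop at the root...
        if node is None:
            return None                        # cache miss: no node owns this region
        return self.node_index[node].get((cell, t))  # ...one hop at the node

idx = HierarchicalIndex()
idx.register("node-a", 12.5, 3.0, "2008-06", "gridftp://node-a/dem_42.img")
idx.lookup(14.0, 7.0, "2008-06")   # -> "gridftp://node-a/dem_42.img"
idx.lookup(95.0, 7.0, "2008-06")   # -> None (cache miss)
```

The scalability argument is visible even in the sketch: the root only grows with the number of coarse regions, while the per-entry state lives at the nodes, so no single index has to absorb every cached item.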


Advances in Geographic Information Systems | 2008

Composing geoinformatics workflows with user preferences

David Chiu; Sagar Deshpande; Gagan Agrawal; Rongxing Li

With the advent of the data grid came a novel distributed scientific computing paradigm known as service-oriented science. Among the plethora of systems included under this framework are scientific workflow management systems, which enable large-scale process scheduling and execution. To ensure quality of service, these systems typically seek to minimize workflow execution time as well as costs for slices of data grid access. The geospatial domain, among other sciences, involves yet another optimization factor, the accuracy of results. The relationship between execution time and workflow accuracy can often be exploited to offer more flexibility in handling user preferences. We present a system which meets user constraints through a dynamic adjustment of the accuracy of workflow results.


Conference on Decision and Control | 2012

Reconciling Cost and Performance Objectives for Elastic Web Caches

Farhana Kabir; David Chiu

Web and service applications are generally I/O bound and follow a Zipf-like request distribution, ushering in potential for significant latency reduction by caching and reusing results. However, such web caches require manual resource allocation, and when deployed in the cloud, costs may further complicate the provisioning process. We propose a fully autonomous, self-scaling, and cost-aware cloud cache with the objective of accelerating data-intensive applications. Our system, which is distributed over multiple cloud nodes, intelligently provisions resources at runtime based on users' cost and performance expectations, while abstracting from the user the various low-level decisions regarding efficient cloud resource management and data placement within the cloud. Our prediction model lends the system the capability to auto-configure the optimal resource requirement, automatically scaling itself up (or down) to accommodate demand peaks while staying within cost constraints and fulfilling performance expectations. Our evaluation shows a 5.5x speedup for a typical web workload while staying under cost constraints.
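The Zipf-like premise is what makes a modest cache pay off, and it is easy to demonstrate: if requests hit item ranks with probability proportional to 1/rank, a cache holding only the hottest few percent of items absorbs a majority of the traffic. The simulation below illustrates that premise only; the item counts and cache size are arbitrary, and this is not the paper's system.

```python
import random

def zipf_hit_rate(n_items=1000, cache_size=50, n_requests=20000, s=1.0, seed=7):
    """Draw request ranks with probability ~ 1/rank**s (Zipf-like) and
    measure the hit rate of caching just the `cache_size` hottest items."""
    rng = random.Random(seed)
    weights = [1.0 / (r ** s) for r in range(1, n_items + 1)]
    hot = set(range(cache_size))        # top-ranked items held in the cache
    hits = 0
    for _ in range(n_requests):
        item = rng.choices(range(n_items), weights=weights)[0]
        hits += item in hot
    return hits / n_requests

rate = zipf_hit_rate()
# With s=1, the 50 hottest of 1,000 items absorb well over half of all requests.
```

Analytically, the expected hit rate is H(50)/H(1000) of the harmonic numbers, roughly 60%, which is why an elastic cache can trade a small, tunable amount of cloud spend for a large latency reduction.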


Concurrency and Computation: Practice and Experience | 2012

Compiler and runtime support for enabling reduction computations on heterogeneous systems

Vignesh T. Ravi; Wenjing Ma; David Chiu; Gagan Agrawal

A trend that has materialized, and has attracted much attention, is the increasing heterogeneity of computing platforms. Presently, it is very common for a desktop or notebook computer to come equipped with both a multi-core CPU and a graphics processing unit (GPU). Capitalizing on the maximum computational power of such architectures (i.e., by simultaneously exploiting both the multi-core CPU and the GPU), starting from a high-level API, is a critical challenge. We believe it would be highly desirable to support a simple way for programmers to realize the full potential of today's heterogeneous machines. This paper describes a compiler and runtime framework that can map a class of applications, namely those characterized by generalized reductions, to a system with a multi-core CPU and a GPU. Starting with simple C functions with added annotations, we automatically generate the middleware API code for the multi-core CPU, as well as CUDA code, to exploit the GPU simultaneously. The runtime system provides efficient schemes for dynamically partitioning the work between the CPU cores and the GPU. Our experimental results from two applications, k-means clustering and principal component analysis, show that, by effectively harnessing the heterogeneous architecture, we can achieve significantly higher performance compared with using only the GPU or only the multi-core CPU. In k-means clustering, the heterogeneous version with eight CPU cores and a GPU achieved a speedup of about 32.09x relative to the one-thread CPU version. When compared with the faster of the CPU-only and GPU-only executions, we were able to achieve a performance gain of about 60%. In principal component analysis, the heterogeneous version attained a speedup of 10.4x relative to the one-thread CPU version. When compared with the faster of the CPU-only and GPU-only versions, the heterogeneous version achieved a performance gain of about 63.8%.


ACM Crossroads Student Magazine | 2010

Profile: Hiroshi Ishii, Tangible Bits

David Chiu

Hiroshi Ishii sees the world differently. The Massachusetts Institute of Technology professor of media arts and sciences, widely regarded as the pioneer of tangible user interfaces (TUIs), is changing the way we interact with our surroundings by integrating computing and physical objects. Specifically, within his Tangible Media Group at MIT, Ishii and his students are looking for ways to tie physical objects to digital information in a vision they call Tangible Bits. Their vision, which departs from the pervasive "painted bits" of current graphical user interfaces, is led by the observation that humans have developed a lifetime of intuition manipulating objects in the physical world. By complementing physical objects with digital information, we can improve and augment the way we perform tasks.

Before joining the MIT Media Lab in 1995, Ishii worked for NTT Human Interface Labs in Japan, where he led a research group in developing two critical projects: TeamWorkStation and ClearBoard. TeamWorkStation, designed in 1990, provided real-time sharing of drawing space between geographically disparate collaborators, enabled through a translucent video overlay of the collaborators' workspaces. ClearBoard, developed in 1992, allowed vis-à-vis interaction between two collaborators and, for the first time, supported gaze awareness (so that the partner's focus of attention was communicated) over a large, clear screen for drawing. These seminal efforts have since been succeeded by a cornucopia of interface projects under Ishii's lead.

Ishii, who received his PhD in computer engineering in 1992 from Hokkaido University in Japan, recalls the circumstances that led him to his current work. "My father was a programmer of the IBM 360 mainframe when I was a kid, [which] is why I chose computer science." He added that his initial "shock" when he first saw the Xerox Alto (hailed as the first computer with a GUI) back in 1973 was what prompted his interest in HCI.

Years later, Ishii is now a leader in tangible user interface research and development. In 2006, Ishii was elected by ACM SIGCHI into the prestigious CHI Academy for his significant contributions to the field. Certainly, Ishii's success did not come without some initial roadblocks. One of the great challenges he has faced is discovering "compelling applications" to convince people of his group's vision at well-established HCI conferences, which have traditionally focused more on user-centered designs. Another ongoing challenge is that tangible user interfaces often require proprietary, non-standard hardware platforms, but Ishii says he is optimistic about their acceptance in the future. The growing number of researchers, designers, and artists contributing to the field share his optimism. In fact, Ishii points to the success of the International Conference on Tangible, Embedded, and Embodied Interaction (TEI) series, most recently held in January 2010 at the MIT Media Lab, as an encouraging sign for the community.

With these challenges being addressed and novel high-level concepts coming to fruition, Ishii is prepared to invoke the next big idea. He believes that in the next five to ten years, we can expect to see an integration of manipulatory and ambulatory interfaces, as well as "a departure from a table [interface] to an entire room, building, and city." As tangible user interfaces continue to emerge and mature, we can surely expect Ishii to lead this movement.

Collaboration


Explore David Chiu's collaborations.

Top Co-Authors

Farhana Kabir (Washington State University)

Travis Hall (Washington State University)