Publication


Featured research published by Ke-Thia Yao.


International Journal of Industrial and Systems Engineering | 2006

Mega-scale fabrication by Contour Crafting

Behrokh Khoshnevis; Dooil Hwang; Ke-Thia Yao; Zhenghao Yeh

Contour Crafting is a mega-scale layered fabrication process that builds large-scale three-dimensional parts by depositing paste materials layer by layer at unprecedented speed and with superior surface quality. This paper presents an overview of related research activities and the progress aimed at extending the technology to the construction of residential housing units and civil structures.


International World Wide Web Conference | 2002

Dynamic coordination of information management services for processing dynamic web content

In-Young Ko; Ke-Thia Yao; Robert Neches

Dynamic Web content provides us with time-sensitive and continuously changing data. To glean up-to-date information, users need to regularly browse, collect and analyze this Web content. Without proper tool support, this information management task is tedious, time-consuming and error-prone, especially when the quantity of dynamic Web content is large, when many information management services are needed to analyze it, and when the underlying services and networks are not completely reliable. This paper describes a multi-level, lifecycle (design-time and run-time) coordination mechanism that enables rapid, efficient development and execution of information management applications, which are especially useful for processing dynamic Web content. Such a coordination mechanism brings dynamism to coordinating independent, distributed information management services. Dynamic parallelism spawns and merges multiple execution service branches based on available data, and dynamic run-time reconfiguration coordinates service execution to overcome faulty services and bottlenecks. These features make information management applications more efficient in handling content and format changes in Web resources, and enable the applications to evolve and adapt to process dynamic Web content.
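The two coordination features named in the abstract can be sketched briefly. Below is a minimal Python sketch of dynamic parallelism (one execution branch per available data item, merged on completion) and run-time reconfiguration (failing over from a faulty service to an alternate one). All service names and signatures are hypothetical illustrations, not the paper's API.

```python
# Minimal sketch of dynamic parallelism + run-time reconfiguration.
# Service names and signatures are invented for illustration.
import asyncio

async def process_item(item, services):
    """Try each registered service in turn, reconfiguring on failure."""
    for service in services:
        try:
            return await service(item)
        except Exception:
            continue  # faulty service: fall through to the next one
    raise RuntimeError(f"all services failed for {item!r}")

async def coordinate(items, services):
    # Dynamic parallelism: spawn one branch per item that is available now...
    branches = [process_item(item, services) for item in items]
    # ...and merge the branches as they complete.
    return await asyncio.gather(*branches)

# Example: two interchangeable "extractor" services, the first unreliable.
async def flaky_extractor(item):
    raise ConnectionError("service down")

async def backup_extractor(item):
    return {"source": item, "summary": item.upper()}

if __name__ == "__main__":
    results = asyncio.run(
        coordinate(["page-a", "page-b"], [flaky_extractor, backup_extractor])
    )
    print(results)
```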


ACM International Conference on Digital Libraries | 2000

Asynchronous information space analysis architecture using content and structure-based service brokering

Ke-Thia Yao; In-Young Ko; Ragy Eleish; Robert Neches

Our project focuses on the rapid formation and utilization of custom collections of information for groups focused on high-paced tasks. Assembling such collections, as well as organizing and analyzing the documents within them, is a complex and sophisticated task. It requires understanding what information management services and tools are provided by the system, when they are appropriate to use, and how those services can be composed to perform more complex analyses. This paper describes the architecture of a prototype implementation of the information analysis management system that we have developed. The architecture uses metadata to describe collections of documents in terms of both their content and their structure. This metadata allows the system to dynamically determine, in a content-sensitive manner, the set of appropriate analysis services. To facilitate the invocation of those services, the architecture also provides an asynchronous and transparent service access mechanism.
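As a rough illustration of content- and structure-based brokering (assumed metadata fields and hypothetical service names, not the prototype's actual interface), the sketch below has each analysis service declare a predicate over collection metadata, and a broker return the services whose predicates the collection satisfies.

```python
# Hypothetical content/structure-based service broker: each service
# declares what a collection must look like; the broker matches metadata.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Service:
    name: str
    applies_to: Callable[[dict], bool]  # predicate over collection metadata

SERVICES = [
    Service("geo-clustering", lambda m: "latlong" in m["fields"]),
    Service("citation-graph", lambda m: m["structure"] == "linked"),
    Service("keyword-summary", lambda m: m["doc_count"] > 0),
]

def broker(metadata: dict) -> list[str]:
    """Return the names of services appropriate for this collection."""
    return [s.name for s in SERVICES if s.applies_to(metadata)]

collection = {"doc_count": 120, "structure": "linked", "fields": ["title", "url"]}
print(broker(collection))  # ['citation-graph', 'keyword-summary']
```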


ICPP Workshops on Collaboration and Mobile Computing | 1999

Synchronous and asynchronous collaborative information space analysis tools

Ke-Thia Yao; Robert Neches; In-Young Ko; Ragy Eleish; Sameer Abhinkar

The DASHER Project at USC/ISI has focused upon helping organizations with rapid-response mission requirements by providing information analysis tools that help make sense of sets of data sources in an Intranet or Internet: characterizing them, partitioning them, and sorting and filtering them. This paper focuses on a subset of these tools that help individuals in the organization to collaboratively, both synchronously and asynchronously, form task-oriented information repositories. Also, the paper discusses planned extensions of the tools to support collaboration in a mobile computing environment.


The New Review of Hypermedia and Multimedia | 2002

GeoWorlds: integrating GIS and digital libraries for situation understanding and management

Robert Neches; Ke-Thia Yao; In-Young Ko; Alejandro Bugacov; Vished Kumar; Ragy Eleish

Helping organizations to marshal, analyze, discuss, and act on all of the available information about a situation playing out over space and time is a critical problem. GeoWorlds (http://www.isi.edu/geoworlds) is a component-based information management system that addresses this issue. It brings together information analysis, retrieval and collaboration tools and integrates digital library, geographic information systems (GIS), and remote sensor data management technologies. It provides three key services: 1) rapidly assembling a custom repository of geographic information about a region, 2) bi-directionally linking it to collections of document-based information from the World-Wide Web, and 3) monitoring real-time sensor data for information that might change conclusions or decisions formed on the basis of this rich information set. The GeoWorlds framework enables synchronous and asynchronous collaboration over finding, filtering, organizing and visualizing the needed information.


International Conference on Data Mining | 2011

Semi-supervised Failure Prediction for Oil Production Wells

Yintao Liu; Ke-Thia Yao; Shuping Liu; Cauligi S. Raghavendra; Oluwafemi Opeyemi Balogun; Lanre Olabinjo

In the petroleum industry, multivariate time series data is commonly used to monitor the performance of assets, among which wells' artificial lift systems are key assets that bring oil up to the surface. Failures frequently occur among these artificial lift systems, and they can greatly increase operational expense due to loss of production and the cost of repairs (also known as workovers). Predicting these failures before they occur can dramatically improve operational performance, such as by adjusting operating parameters to forestall failures or by scheduling maintenance to reduce unplanned repairs and minimize downtime. The artificial lift failure prediction problem poses interesting challenges to data mining algorithms because of many real-world data issues, such as noise, missing data, delayed failure event logs, and large variability among normally functioning artificial lift units. This paper presents the Smart Engineering Apprentice (SEA) framework, which incorporates a robust feature extraction algorithm with clustering and semi-supervised learning techniques to enable learning of failure/normal patterns from noisy and poorly labeled multivariate time series, while achieving high recall and precision for failures on a real-world dataset.
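For a sense of the pipeline shape (a sketch only, not the SEA framework itself), the example below extracts simple window statistics from a synthetic multivariate series and uses scikit-learn's LabelSpreading to propagate a handful of failure/normal labels to the unlabeled windows. The feature set and data are invented for illustration.

```python
# Sketch of the pipeline shape: window features + semi-supervised labeling.
# Synthetic data; the window statistics stand in for the paper's
# (unspecified here) robust feature extraction.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)

def window_features(series, width=24):
    """Mean and std per channel over fixed windows."""
    n = len(series) // width
    windows = series[: n * width].reshape(n, width, series.shape[1])
    return np.hstack([windows.mean(axis=1), windows.std(axis=1)])

# Synthetic 2-channel sensor stream; later samples drift toward "failure".
normal = rng.normal(0.0, 1.0, size=(2400, 2))
failing = rng.normal(2.5, 1.5, size=(2400, 2))
X = window_features(np.vstack([normal, failing]))

# Mostly unlabeled (-1), with a few known normal (0) / failure (1) windows.
y = np.full(len(X), -1)
y[:5] = 0
y[-5:] = 1

model = LabelSpreading(kernel="rbf", gamma=0.5).fit(X, y)
print("predicted failure windows:", int((model.transduction_ == 1).sum()))
```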


Winter Simulation Conference | 2005

Enabling 1,000,000-entity simulations on distributed Linux clusters

Gene Wagenbreth; Ke-Thia Yao; Dan M. Davis; Robert F. Lucas; Thomas D. Gottschalk

The Information Sciences Institute and Caltech are enabling USJFCOM and the Institute for Defense Analyses to conduct entity-level simulation experiments using hundreds of distributed computer nodes on Linux clusters as a vehicle for simulating millions of JSAF entities. This paper describes our experience with the design and implementation of the code that increased scalability, thereby enabling two orders of magnitude of growth and the effective use of DoD high-end computers. A typical JSAF experiment generates several terabytes of logged data, which is queried in near-real-time and for months afterward. The amount of logged data and the desired database query performance mandated the redesign of the original logger system's monolithic database, making it distributed and incorporating several advanced concepts. System procedures and practices were established to reliably execute the global-scale simulations, effectively operate the distributed computers, efficiently process and store terabytes of data, and provide straightforward access to the data by analysts.
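The database redesign can be suggested in miniature: the sketch below hash-partitions log records across several in-memory "nodes" and fans analytic queries out to every partition, merging the results. This is only the general shape of moving from a monolithic to a distributed logger store; node count, schema, and function names are invented.

```python
# Invented miniature of a hash-partitioned logger store, not the JSAF design.
from collections import defaultdict

N_PARTITIONS = 4
partitions = [defaultdict(list) for _ in range(N_PARTITIONS)]

def log_event(entity_id: int, event: dict) -> None:
    """Route a record to its partition by hashing the entity id."""
    partitions[hash(entity_id) % N_PARTITIONS][entity_id].append(event)

def query(entity_id: int) -> list:
    """Point queries touch exactly one partition."""
    return partitions[hash(entity_id) % N_PARTITIONS][entity_id]

def scan(predicate) -> list:
    """Analytic queries fan out to every partition and merge the results."""
    return [e for p in partitions for events in p.values()
            for e in events if predicate(e)]

log_event(42, {"t": 0, "kind": "move"})
log_event(42, {"t": 1, "kind": "detect"})
log_event(7, {"t": 1, "kind": "move"})
print(query(42))
print(len(scan(lambda e: e["kind"] == "move")))
```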


International Conference on Data Mining | 2013

Weighted Task Regularization for Multitask Learning

Yintao Liu; Anqi Wu; Dong Guo; Ke-Thia Yao; Cauligi S. Raghavendra

Multitask learning has been proven to be more effective than traditional single-task learning on many real-world problems by simultaneously transferring knowledge among different tasks, which may individually suffer from limited labeled data. However, to build a reliable multitask learning model, nontrivial effort to establish the relatedness between different tasks is critical. When the number of tasks is not large, the learning outcome may suffer if there exist outlier tasks that inappropriately bias the majority. Rather than identifying or discarding such outlier tasks, we present a weighted regularized multitask learning framework, based on regularized multitask learning, which uses statistical metrics such as the Kullback-Leibler divergence to assign weights prior to the regularization process; this robustly reduces the impact of outlier tasks and results in better learned models for all tasks. We then show that this formulation can be solved in dual form, like optimizing a standard support vector machine with varied kernels. We perform experiments using both a synthetic dataset and a real-world dataset from the petroleum industry, which show that our methodology outperforms existing methods.
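The weighting step can be illustrated with a toy computation. The sketch below estimates each task's divergence from the others with a closed-form Gaussian KL divergence and down-weights outlier tasks; both the Gaussian assumption and the inverse-divergence weighting rule are illustrative stand-ins, not the paper's exact formulation.

```python
# Toy KL-based task weighting: outlier tasks receive small weights
# before the shared regularization step. Illustrative only.
import numpy as np

def gaussian_kl(mu0, var0, mu1, var1):
    """KL(N(mu0, var0) || N(mu1, var1)) for 1-D Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def task_weights(tasks):
    """Weight each task by the inverse of its mean divergence from the rest."""
    stats = [(t.mean(), t.var() + 1e-9) for t in tasks]
    div = np.array([
        np.mean([gaussian_kl(*stats[i], *stats[j])
                 for j in range(len(tasks)) if j != i])
        for i in range(len(tasks))
    ])
    w = 1.0 / (1.0 + div)  # large divergence (outlier) -> small weight
    return w / w.sum()

rng = np.random.default_rng(1)
tasks = [rng.normal(0, 1, 200), rng.normal(0.2, 1, 200), rng.normal(5, 3, 200)]
print(task_weights(tasks))  # the third (outlier) task gets the lowest weight
```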


AI EDAM: Artificial Intelligence for Engineering Design, Analysis and Manufacturing | 1997

Multilevel modelling for engineering design optimization

Thomas Ellman; John Eric Keane; Mark Schwabacher; Ke-Thia Yao

Physical systems can be modelled at many levels of approximation. The right model depends on the problem to be solved. In many cases, a combination of models will be more effective than a single model. Our research investigates this idea in the context of engineering design optimization. We present a family of strategies that use multiple models for unconstrained optimization of engineering designs. The strategies are useful when multiple approximations of an objective function can be implemented by compositional modelling techniques. We show how a compositional modelling library can be used to construct a variety of locally calibratable approximation schemes that can be incorporated into the optimization strategies. We analyze the optimization strategies and approximation schemes to formulate and prove sufficient conditions for correctness and convergence. We also report experimental tests of our methods in the domain of sailing yacht design. Our results demonstrate dramatic reductions in the CPU time required for optimization, on the problems we tested, with no significant loss in design quality.
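One locally calibratable scheme of the kind the abstract mentions can be sketched as follows: optimize a cheap approximation that is corrected, at each iteration, to match the expensive model's value and slope at the current design point. The objective functions below are toy stand-ins, not a yacht-design model.

```python
# Toy multilevel optimization: a coarse model with a first-order additive
# correction calibrated against the fine model. Objectives are invented.
import numpy as np
from scipy.optimize import minimize

def fine(x):     # expensive, accurate model (toy stand-in)
    return (x[0] - 3.0) ** 2 + 0.1 * np.sin(5.0 * x[0])

def coarse(x):   # cheap approximation with systematic error
    return (x[0] - 3.0) ** 2 + 0.4

def corrected(z, xk, h=1e-4):
    """Coarse model plus an additive correction matching the fine model's
    value and slope at the calibration point xk."""
    d0 = fine(xk) - coarse(xk)
    d1 = ((fine(xk + h) - coarse(xk + h)) -
          (fine(xk - h) - coarse(xk - h))) / (2.0 * h)
    return coarse(z) + d0 + d1 * (z[0] - xk[0])

x = np.array([0.0])
for _ in range(5):
    # optimize the cheap corrected surrogate, then recalibrate at the result
    x = minimize(lambda z, xc=x: corrected(z, xc), x, method="Nelder-Mead").x

print("design:", x, "fine objective:", fine(x))
```

Because the corrected surrogate agrees with the fine model to first order at each calibration point, fixed points of this loop are stationary points of the expensive model, which is the usual consistency argument for such schemes.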


IEEE International Conference on High Performance Computing, Data, and Analytics | 2016

A study of complex deep learning networks on high performance, neuromorphic, and quantum computers

Thomas E. Potok; Catherine D. Schuman; Steven R. Young; Robert M. Patton; Federico M. Spedalieri; Jeremy Liu; Ke-Thia Yao; Garrett S. Rose; Gangotree Chakma

Current deep learning models use highly optimized convolutional neural networks (CNNs) trained on large graphical processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of a deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to the input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware. This represents a new capability that is not feasible with current von Neumann architectures. It potentially enables the ability to solve very complicated problems that are unsolvable with current computing technologies.
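As a rough, self-contained illustration of what intra-layer connections add (an illustration only, not the paper's models), the NumPy sketch below contrasts a standard feedforward layer with one whose units also receive lateral input from their neighbors. The lateral case must be settled iteratively, which hints at why such topologies map more naturally onto annealing and neuromorphic hardware. All sizes and weights are arbitrary.

```python
# Illustration: a feedforward layer vs. one with intra-layer (lateral)
# connections, which makes the layer recurrent. Arbitrary random weights.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)            # input activations

W = rng.normal(size=(8, 16))       # inter-layer weights (standard case)
L = rng.normal(size=(8, 8))        # intra-layer (lateral) weights
np.fill_diagonal(L, 0.0)           # no self-connections

h = np.tanh(W @ x)                 # conventional feedforward layer

# With lateral connections each unit depends on its neighbors, so the
# activations must be settled iteratively rather than computed in one pass.
h_lat = h.copy()
for _ in range(20):
    h_lat = np.tanh(W @ x + L @ h_lat)

print("feedforward:", np.round(h, 2))
print("with lateral connections:", np.round(h_lat, 2))
```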

Collaboration


Dive into Ke-Thia Yao's collaborations.

Top Co-Authors

Cauligi S. Raghavendra (University of Southern California)
Robert Neches (Information Sciences Institute)
Yintao Liu (University of Southern California)
Shuping Liu (University of Southern California)
Iraj Ershaghi (University of Southern California)
Ragy Eleish (University of Southern California)
Dong Guo (University of Southern California)