Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Chris Matthews is active.

Publication


Featured research published by Chris Matthews.


international conference on cloud computing | 2010

Dynamic Resource Allocation in Computing Clouds Using Distributed Multiple Criteria Decision Analysis

Yagiz Onat Yazir; Chris Matthews; Roozbeh Farahbod; Stephen W. Neville; Adel Guitouni; Sudhakar Ganti; Yvonne Coady

In computing clouds, it is desirable to avoid wasting resources as a result of under-utilization and to avoid lengthy response times as a result of over-utilization. In this paper, we propose a new approach for dynamic autonomous resource management in computing clouds. The main contribution of this work is two-fold. First, we adopt a distributed architecture where resource management is decomposed into independent tasks, each of which is performed by Autonomous Node Agents that are tightly coupled with the physical machines in a data center. Second, the Autonomous Node Agents carry out configurations in parallel through Multiple Criteria Decision Analysis using the PROMETHEE method. Simulation results show that the proposed approach is promising in terms of scalability, feasibility and flexibility.
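For illustration, the ranking step behind this decision analysis can be sketched in a few lines of Python. The following is a minimal PROMETHEE II outline with made-up hosts, criteria, and weights; it is not the paper's Autonomous Node Agent implementation.

def preference(d, p=1.0):
    """Linear preference function: 0 for d <= 0, rising to 1 at threshold p."""
    return 0.0 if d <= 0 else min(d / p, 1.0)

def promethee_ii(alternatives, weights):
    """Rank alternatives (lists of criterion scores) by PROMETHEE II net flow."""
    n = len(alternatives)
    phi_plus = [0.0] * n   # how strongly each alternative dominates the others
    phi_minus = [0.0] * n  # how strongly it is dominated
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Weighted aggregated preference of alternative i over alternative j.
            pi_ij = sum(w * preference(a - b)
                        for w, a, b in zip(weights, alternatives[i], alternatives[j]))
            phi_plus[i] += pi_ij / (n - 1)
            phi_minus[j] += pi_ij / (n - 1)
    net_flow = [p - m for p, m in zip(phi_plus, phi_minus)]
    return sorted(range(n), key=lambda i: net_flow[i], reverse=True)

# Hypothetical example: three candidate hosts scored on free CPU, free memory,
# and (inverted) load, with criterion weights summing to 1.
hosts = [[0.6, 0.7, 0.5], [0.9, 0.4, 0.8], [0.3, 0.9, 0.6]]
weights = [0.5, 0.3, 0.2]
print(promethee_ii(hosts, weights))  # host indices, most preferred first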


international conference on cloud computing | 2012

Maitland: Lighter-Weight VM Introspection to Support Cyber-security in the Cloud

Chris Benninger; Stephen W. Neville; Yagiz Onat Yazir; Chris Matthews; Yvonne Coady

Despite defensive advances, malicious software (malware) remains an ever-present cyber-security threat. Cloud environments are far from malware immune, in that: i) they innately support the execution of remotely supplied code, and ii) escaping their virtual machine (VM) confines has proven relatively easy to achieve in practice. The growing interest in clouds by industries and governments is also creating a core need to be able to formally address cloud security and privacy issues. VM introspection provides one of the core cyber-security tools for analyzing the run-time behaviors of code. Traditionally, introspection approaches have required close integration with the underlying hypervisors and substantial re-engineering when OS updates and patches are applied. Such heavy-weight introspection techniques, therefore, are too invasive to fit well within modern commercial clouds. Instead, lighter-weight introspection techniques are required that provide the same levels of within-VM observability but without the tight hypervisor and OS patch-level integration. This work introduces Maitland as a prototype proof-of-concept implementation of a lighter-weight introspection tool, which exploits paravirtualization to meet these end-goals. The work assesses Maitland's performance, highlights its use to perform packer-independent malware detection, and assesses whether, with further optimizations, Maitland could provide a viable approach for introspection in commercial clouds.
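As a rough illustration of the kind of packer-independent check that introspection enables, the sketch below flags guest pages that are executed after being written at run time, a common sign of code unpacking itself. The page-granular event stream is a hypothetical simplification; this is not Maitland's paravirtualized implementation.

def find_unpacked_pages(events):
    """events: iterable of (kind, page) tuples, where kind is 'write' or 'exec'."""
    written = set()
    suspicious = set()
    for kind, page in events:
        if kind == "write":
            written.add(page)
        elif kind == "exec" and page in written:
            # Code is running from a page that was modified at run time.
            suspicious.add(page)
    return suspicious

# Example trace: page 0x7f000 is written and later executed, so it is flagged.
trace = [("write", 0x7F000), ("exec", 0x40100), ("exec", 0x7F000)]
print([hex(p) for p in find_unpacked_pages(trace)])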


Proceedings of the 2012 workshop on Cloud services, federation, and the 8th open cirrus summit | 2012

GENICloud and transcloud

Andy C. Bavier; Yvonne Coady; Tony Mack; Chris Matthews; Joe Mambretti; Rick McGeer; Paul Mueller; Alex C. Snoeren; Marco Yuen

In this paper, we argue that federation of cloud systems requires a standard API for users to create, manage, and destroy virtual objects, and a standard naming scheme for virtual objects. We introduce an existing API for this purpose, the Slice-Based Federation Architecture, and demonstrate that it can be implemented on a number of existing cloud management systems. We introduce a simple naming scheme for virtual objects, and discuss its implementation.
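To make the shape of such a federation API concrete, here is a small hypothetical sketch: hierarchical names identify virtual objects across administrative domains, and a uniform create/status/destroy interface fronts each cloud's native manager. The class, method, and name-format choices are illustrative assumptions, not the Slice-Based Federation Architecture calls themselves.

from dataclasses import dataclass

@dataclass(frozen=True)
class ObjectName:
    authority: str   # administrative domain, e.g. "genicloud.example"
    slice_name: str  # user-visible grouping of resources
    object_id: str   # individual VM, volume, etc.

    def __str__(self):
        return f"{self.authority}:{self.slice_name}:{self.object_id}"

class FederatedCloud:
    """Uniform front end over a single cloud's native management system."""
    def __init__(self, authority):
        self.authority = authority
        self._objects = {}

    def create(self, slice_name, object_id, spec):
        name = ObjectName(self.authority, slice_name, object_id)
        self._objects[name] = {"spec": spec, "state": "running"}
        return name

    def status(self, name):
        return self._objects[name]["state"]

    def destroy(self, name):
        del self._objects[name]

# The same client code can then target any participating cloud.
cloud = FederatedCloud("genicloud.example")
vm = cloud.create("demo-slice", "vm0", {"image": "debian", "cpus": 1})
print(vm, cloud.status(vm))
cloud.destroy(vm)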


international conference on cloud computing | 2009

Virtualized recomposition: Cloudy or clear?

Chris Matthews; Yvonne Coady

Virtualization provides a coarse-grained isolation mechanism that results in large systems, with full operating systems and a complete software stack as their foundation. Though much of this foundation is not strictly necessary, the programmatic burden of building systems at a finer-granularity, on a smaller foundation, has previously been shown to be prohibitive. The aim of this work is to revisit this tension, and present an alternative, lightweight and composable approach to virtualization that we call MacroComponents—software components that run in isolation from the rest of the system, but without the full foundations of their more traditionally virtualized counterparts. We argue that this approach will provide a more scalable and sustainable approach for composing robust services in cloud environments, both in terms of dynamic system properties and software engineering qualities.


acm conference on systems, programming, languages and applications: software for humanity | 2011

Et (smart) phone home

Leandro Collares; Chris Matthews; Justin Cappos; Yvonne Coady; Rick McGeer

Most home users are not able to troubleshoot advanced network issues themselves. Spending hours on the phone with an ISP's customer representative is a common way to solve this problem. With the advent of mobile devices with both Wi-Fi and cellular radios, troubleshooters at the ISP have a new back door into a malfunctioning residential network. However, placing full trust in an ISP is a poor choice for a home user. In this paper we present Extra Technician (ET), a system designed to provide ISPs and others with an environment to troubleshoot home networking in a remote, safe and flexible manner.


technical symposium on computer science education | 2014

Taking a walk on the wild side: teaching cloud computing on distributed research testbeds

Yanyan Zhuang; Chris Matthews; Stephen Tredger; Steven R. Ness; Jesse Short-Gershman; Li Ji; Niko Rebenich; Andrew French; Josh Erickson; Kyliah Clarkson; Yvonne Coady; Rick McGeer

Distributed platforms are now a de facto standard in modern software and application development. Although the ACM/IEEE Curriculum 2013 introduces Parallel and Distributed Computing as a first-class knowledge area for the first time, the right level of abstraction at which to teach these concepts is still an important question that needs to be explored. This work presents our findings in teaching cloud computing by exposing upper-level students to testbeds in use by the distributed systems research community. The possibility of giving students practical and relevant experience was explored in the context of new course assignment objectives. Furthermore, students were able to contribute significantly to a pilot class project with medium-scale computation based on satellite data. However, the software engineering challenges in these environments proved to be daunting. In particular, these challenges were exacerbated by a lack of debugging support relative to the environments students were more familiar with, requiring development practices that outstripped typical course experiences. Our proposed set of experiments and project provide a basis for an evaluation of the trade-offs of teaching cloud and distributed systems on the wild side. We hope that these findings provide insight into some of the possibilities to consider when preparing the next generation of computer scientists to engage with software practices and paradigms that are already fundamental in today's highly distributed systems.


advanced information networking and applications | 2009

Quantifying Artifacts of Virtualization: A Framework for Micro-Benchmarks

Chris Matthews; Yvonne Coady; Stephen W. Neville

One of the novel benefits of virtualization is its ability to emulate many hosts with a single physical machine. This approach is often used to support at-scale testing for large-scale distributed systems. To better understand the precise ways in which virtual machines differ from their physical counterparts, we have started to quantify some of the timing artifacts that appear to be common to two modern approaches to virtualization. Here we present several systematic experiments that highlight four timing artifacts, and begin to decipher their origins within virtual machine implementations. These micro-benchmarks serve as a means to better understand the mappings that exist between virtualized and real-world testing infrastructure. Our goal is to develop a reusable framework for micro-benchmarks that can be customized to quantify artifacts associated with specific cluster configurations and workloads. This type of quantification can then be used to better anticipate behavioral characteristics at-scale in real settings.
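One simple micro-benchmark in this spirit samples a high-resolution clock in a tight loop and records unusually large gaps between successive reads, which in a guest often correspond to the virtual machine being descheduled by the hypervisor. The sketch below is a generic illustration with arbitrary thresholds, not one of the paper's benchmarks.

import time

def clock_gap_benchmark(samples=1_000_000, threshold_us=100.0):
    """Return the gaps (in microseconds) between successive clock reads
    that exceed threshold_us."""
    gaps = []
    prev = time.perf_counter()
    for _ in range(samples):
        now = time.perf_counter()
        gap_us = (now - prev) * 1e6
        if gap_us > threshold_us:
            gaps.append(gap_us)
        prev = now
    return gaps

if __name__ == "__main__":
    gaps = clock_gap_benchmark()
    if gaps:
        print(f"{len(gaps)} gaps over 100 us; largest = {max(gaps):.1f} us")
    else:
        print("no large gaps observed")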


2013 Second GENI Research and Educational Experiment Workshop | 2013

Building Green Systems with Green Students: An Educational Experiment with GENI Infrastructure

Stephen Tredger; Yanyan Zhuang; Chris Matthews; Jesse Short-Gershman; Yvonne Coady; Rick McGeer

Experimentation in system-oriented courses is often challenging, due to the raw and complex nature of the underlying infrastructure. In this work, we present our findings in teaching cloud computing to upper-level and graduate students with GENI testbeds that are in use by the distributed systems community. The possibility of giving students practical and relevant experience was explored in the context of new course assignment objectives. Furthermore, students were able to explore systems concepts using GENI testbeds, and to contribute to a collaborative class-wide project with medium-scale computation using satellite data. Our proposed set of experiments and course project provide a basis for an evaluation of the trade-offs of teaching cloud and distributed systems. However, the software engineering challenges in these environments proved to be daunting. The amount of installation, configuration, and maintenance their experiments required was more than students anticipated. The challenges the students faced drove them towards more traditional local development rather than attempting to work on the testbeds we presented. We hope that our findings provide insight into some of the possibilities to consider when preparing the next generation of computer scientists to engage with software practices and paradigms that are already fundamental in today's highly distributed systems.


2012 Seventh International Conference on P2P, Parallel, Grid, Cloud and Internet Computing | 2012

Distributed Systems in the Wild: The Theoretical Foundations and Experimental Perspectives

Yanyan Zhuang; Stephen Tredger; Chris Matthews; Rick McGeer; Yvonne Coady

Modernizing experimentation in system-oriented courses such as computer networks and distributed systems is often challenging due to the raw and complex nature of infrastructure testing. In these practical courses, students not only need access to network layers and system kernels, but they often need to reason about consistency issues associated with the distributed nature of these experiments. This paper outlines the pros and cons of redesigning a traditional distributed systems course to incorporate modern experimental facilities for deploying distributed systems, such as Emulab, Seattle and PlanetLab. The possibility of giving students practical and relevant experience coupled with theoretical foundations is explored by considering traditional learning outcomes in the context of new course assignment objectives. A proposed set of experiments, along with their potential pitfalls and shortcomings, provides a basis for an evaluation of the trade-offs of studying distributed systems in the wild.


international conference on cloud computing and services science | 2011

TRANSCLOUD - Design Considerations for a High-performance Cloud Architecture Across Multiple Administrative Domains

Andy C. Bavier; Marco Yuen; Jessica Blaine; Rick McGeer; Alvin Au Young; Yvonne Coady; Chris Matthews; Christopher Pearson; Alex C. Snoeren; Joe Mambretti

Collaboration


Dive into Chris Matthews's collaborations.

Top Co-Authors

Rick McGeer

University of Victoria
