Publication


Featured research published by Amy W. Apon.


international conference on cluster computing | 2009

Accelerating SIFT on parallel architectures

Seth Warn; Wesley Emeneker; Jackson Cothren; Amy W. Apon

SIFT is a widely used algorithm that extracts features from images; using it to extract information from hundreds of terabytes of aerial and satellite photographs requires parallelization to be feasible. We explore accelerating an existing serial SIFT implementation with OpenMP parallelization and GPU execution.
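
The paper's implementation uses OpenMP and GPU kernels; as a minimal, hedged sketch of the same coarse-grained idea in Python, the loop over independent images can be distributed across worker processes. Here extract_sift_features and the *.tif directory layout are hypothetical stand-ins, not the authors' code.

```python
# Illustrative sketch only: per-image data parallelism analogous to an
# OpenMP "parallel for" over inputs. Not the paper's implementation.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def extract_sift_features(image_path: Path) -> list:
    """Hypothetical stand-in for the serial SIFT feature extractor."""
    # ... detect keypoints and compute descriptors for one image ...
    return []

def extract_all(image_dir: str, workers: int = 8) -> dict:
    """Run the serial extractor on many images in parallel.

    Each image is independent, so the work distributes with no shared
    state; throughput scales with the number of workers until I/O or
    memory bandwidth becomes the bottleneck.
    """
    paths = sorted(Path(image_dir).glob("*.tif"))  # assumed layout
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(zip(paths, pool.map(extract_sift_features, paths)))
```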


measurement and modeling of computer systems | 1993

The KSR1: experimentation and modeling of poststore

Emilia Rosti; Evgenia Smirni; Thomas D. Wagner; Amy W. Apon; Lawrence W. Dowdy

Kendall Square Research introduced the KSR1 system in 1991. The architecture is based on a ring of rings of 64-bit microprocessors. It is a distributed, shared memory system and is scalable. The memory structure is unique and is the key to understanding the system. Different levels of caching eliminate physical memory addressing and lead to the ALLCACHE™ scheme. Since requested data may be found in any of several caches, the initial access time is variable. Once pulled into the local (sub)cache, subsequent access times are fixed and minimal. Thus, the KSR1 is a Cache-Only Memory Architecture (COMA) system.

This paper describes experimentation and an analytic model of the KSR1. The focus is on the poststore programmer option. With the poststore option, the programmer can elect to broadcast the updated value of a variable to all processors that might have a copy. This may save time for threads on other processors, but delays the broadcasting thread and places additional traffic on the ring. The specific issue addressed is to determine under what conditions poststore is beneficial. The analytic model and the experimental observations are in good agreement. They indicate that the decision to use poststore depends both on the application and the current system load.
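
As a hedged illustration of the trade-off the abstract describes (this is not the paper's analytic model), poststore pays off roughly when the remote misses it avoids outweigh the delay it imposes:

```latex
% Illustrative cost comparison only; the paper develops a full
% analytic model. Poststore is beneficial roughly when
%   r       = number of processors that will re-read the variable
%   t_miss  = cost of the remote (sub)cache miss each of them avoids
%   t_bcast = delay on the writing thread plus added ring traffic
% satisfy:
\[
  r \cdot t_{\mathrm{miss}} \;>\; t_{\mathrm{bcast}}
\]
% This matches the paper's conclusion qualitatively: the benefit
% depends on the application (the size of r) and on system load
% (which inflates t_bcast).
```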


international conference on parallel and distributed systems | 2007

LORM: Supporting low-overhead P2P-based range-query and multi-attribute resource management in grids

Haiying Shen; Amy W. Apon; Cheng Zhong Xu

Resource management is critical to the usability and accessibility of grid computing systems. Conventional approaches to grid resource discovery are either centralized or hierarchical, and these prove to be inefficient as the size of the grid system increases. The peer-to-peer (P2P) paradigm has been applied to grid systems as a mechanism for providing scalable range-query and multi-attribute resource management. However, most current P2P-based resource management approaches support multi-attribute range queries at a high cost: they either depend on multiple P2P networks, with each P2P network responsible for a single attribute, or they keep the resource information of all attributes in a single node. This paper presents LORM, a low-overhead, range-query, multi-attribute P2P-based resource management approach. Unlike other P2P-based approaches, it relies on a single P2P network and allocates resource information to different nodes based on resource attributes and values. Moreover, it handles the large-scale and dynamic characteristics of grid resources well. Simulation results demonstrate the efficiency of LORM in comparison with other resource management approaches in terms of resource management overhead and resource discovery efficiency.
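
A minimal sketch of the single-overlay idea: hash the attribute to pick a region of the identifier space, then offset within it by the order-preserved value, so one DHT indexes all attributes and a range query visits a contiguous run of nodes. The placement scheme below is an assumption for illustration, not LORM's actual algorithm.

```python
# Illustrative single-overlay, multi-attribute placement sketch.
# The scheme below is assumed for illustration, not LORM's algorithm.
import bisect
import hashlib

RING = 2**32
NODE_IDS = sorted(int(hashlib.sha1(f"node{i}".encode()).hexdigest(), 16) % RING
                  for i in range(64))  # hypothetical 64-node overlay

def locate(attribute: str, value: float, lo: float, hi: float) -> int:
    """Map one (attribute, value) pair to a node of the single overlay.

    The attribute hash picks a base point on the ring; the normalized
    value adds an order-preserving offset, so nearby values land on
    nearby nodes and a range query only walks a contiguous segment.
    """
    base = int(hashlib.sha1(attribute.encode()).hexdigest(), 16) % RING
    offset = int((value - lo) / (hi - lo) * 2**20)
    key = (base + offset) % RING
    idx = bisect.bisect_left(NODE_IDS, key) % len(NODE_IDS)
    return NODE_IDS[idx]

# A query for cpu_mhz in [1000, 2000] visits the nodes between
# locate("cpu_mhz", 1000, 0, 4000) and locate("cpu_mhz", 2000, 0, 4000).
```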


cluster computing and the grid | 2001

Cluster computing in the classroom: topics, guidelines, and experiences

Amy W. Apon; Rajkumar Buyya; Hai Jin; Jens Mache

With the progress of research on cluster computing, more and more universities have begun to offer various courses covering cluster computing. A wide variety of content can be taught in these courses. Because of this, a difficulty that arises is the selection of appropriate course material. The selection is complicated by the fact that some content in cluster computing is also covered by other courses such as operating systems, networking, or computer architecture. In addition, the background of students enrolled in cluster computing courses varies. These aspects of cluster computing make the development of good course material difficult. Combining our experiences teaching cluster computing at several universities in the USA and Australia and conducting tutorials at many international conferences worldwide, we present prospective topics in cluster computing along with a wide variety of information sources (books, software, and materials on the Web) from which instructors can choose. The course material described includes system architecture, parallel programming, algorithms, and applications. We share our experiences in teaching cluster computing and the topics we have chosen depending on course objectives.


international conference on cluster computing | 2004

Implementation and design analysis of a network messaging module using virtual interface architecture

Gregory Amerson; Amy W. Apon

The buffered message interface (BMI) of PVFSv2 is a low-level network abstraction that allows PVFSv2 to operate on any protocol that has BMI support. This work presents a BMI module that supports VIA over an early release version of InfiniBand and also over Myrinet. The baseline bandwidth and latency of the implementation were compared to those of the existing BMI modules; the new module achieves significantly higher performance than the TCP module, but slightly lower than the GM module. Experimental results comparing a completion-queue version with a notify version, and immediate versus rendezvous messages, are useful to implementors of network messaging modules.
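
A hedged sketch of the immediate-versus-rendezvous distinction the abstract evaluates; the 8 KB threshold, message tags, and Conn stub are illustrative assumptions, not the module's actual protocol.

```python
# Illustrative eager (immediate) vs. rendezvous send decision.
# Threshold, tags, and transport stub are assumptions for this sketch.
EAGER_LIMIT = 8 * 1024  # hypothetical cutoff, in bytes

class Conn:
    """Hypothetical transport stub standing in for a VIA endpoint."""
    def post(self, tag: bytes, body: bytes) -> None:
        print(f"post {tag!r} ({len(body)} bytes)")
    def wait(self, tag: bytes) -> None:
        print(f"wait {tag!r}")

def send(conn: Conn, payload: bytes) -> None:
    if len(payload) <= EAGER_LIMIT:
        # Immediate/eager: ship the data at once; the receiver must
        # already have a buffer posted, so this suits small messages.
        conn.post(b"EAGER", payload)
    else:
        # Rendezvous: advertise the size, wait for the receiver to pin
        # a matching buffer, then transfer. The handshake adds latency
        # but avoids extra copies for large messages.
        conn.post(b"RTS", len(payload).to_bytes(8, "big"))
        conn.wait(b"CTS")
        conn.post(b"DATA", payload)

send(Conn(), b"x" * 100)       # takes the eager path
send(Conn(), b"x" * 100_000)   # takes the rendezvous path
```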


IEEE Transactions on Education | 2004

Cluster computing in the classroom and integration with computing curricula 2001

Amy W. Apon; Jens Mache; Rajkumar Buyya; Hai Jin

With the progress of research on cluster computing, many universities have begun to offer various courses covering cluster computing. A wide variety of content can be taught in these courses. Because of this variation, a difficulty that arises is the selection of appropriate course material. The selection is complicated because some content in cluster computing may also be covered by other courses in the undergraduate curriculum, and the background of students enrolled in cluster computing courses varies. These aspects of cluster computing make the development of good course material difficult. Combining experiences in teaching cluster computing at universities in the United States and Australia, this paper presents prospective topics in cluster computing and a wide variety of information sources from which instructors can choose. The course material is described in relation to the knowledge units of the Joint IEEE Computer Society and the Association for Computing Machinery (ACM) Computing Curricula 2001 and includes system architecture, parallel programming, algorithms, and applications. Instructors can select units in each of the topical areas and develop their own syllabi to meet course objectives. The authors share their experiences in teaching cluster computing and the topics chosen, depending on course objectives.


ieee international conference on high performance computing data and analytics | 2001

Network Technologies

Amy W. Apon; Mark Baker

A broad and growing range of possibilities is available to designers of a cluster when choosing an interconnection technology. As the price of network hardware in a cluster can vary from almost free to several thousands of dollars per computing node, the decision is not a minor one in determining the overall price of the cluster. Many very effective clusters have been built from inexpensive products that are typically found in local-area networks. However, some recent network products specifically designed for cluster communication have a price that is comparable with the cost of a workstation. The choice of network technology depends on a number of factors, including price, performance, and compatibility with other cluster hardware and system software as well as communication characteristics of applications that will use the cluster.

Performance of a network is generally measured in terms of latency and bandwidth. Latency is the time to send data from one computer to another and includes the overhead for the software to construct the message as well as the time to transfer the bits from one computer to another. Bandwidth is the number of bits per second that can be transmitted over the interconnection hardware. Ideally, applications that are written for a cluster will have a minimum amount of communication. However, if an application sends a large number of small messages, then its performance will be affected by the latency of the network, and if an application sends large messages, then its performance will be affected by the bandwidth of the network. In general, applications perform best when the latency of the network is low and the bandwidth is high. Achieving low latency and high bandwidth requires efficient communication protocols that minimize communication software overhead and fast hardware.

Compatibility of network hardware with other cluster hardware and system software is a major factor in the selection of network hardware. From the user's perspective, the network should be interoperable with the selected end node hardware and operating system and should be capable of efficiently supporting the communication protocols that are necessary for the middleware or application.

Section 2 of this article gives a brief history of cluster communication protocols, leading to the development of two important standards for cluster communication. Section 3 of this article gives an overview of many common cluster interconnection products. A brief description of each technology is given, along with a comparison of products on the basis of price, performance, and support for standard communication protocols.
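
The latency and bandwidth discussion above corresponds to the standard first-order cost model for message transfer, stated here for concreteness:

```latex
% First-order model of the time to send an n-byte message:
%   \alpha = latency (software overhead plus time of flight)
%   \beta  = seconds per byte, i.e. the reciprocal of bandwidth
\[
  T(n) = \alpha + \beta\,n
\]
% Many small messages are dominated by \alpha (latency-bound); large
% messages are dominated by \beta n (bandwidth-bound), matching the
% discussion above.
```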


international conference on big data | 2015

Automotive big data: Applications, workloads and infrastructures

Andre Luckow; Ken Kennedy; Fabian Manhardt; Emil Djerekarov; Bennie Vorster; Amy W. Apon

Data is increasingly affecting the automotive industry, from vehicle development, to manufacturing and service processes, to online services centered around the connected vehicle. Connected, mobile, and Internet of Things devices and machines generate immense amounts of sensor data. The ability to process and analyze this data to extract insights and knowledge that enable intelligent services, new ways of understanding business problems, and improvements to processes and decisions is a critical capability. Hadoop is a scalable platform for compute and storage and has emerged as the de facto standard for Big Data processing at Internet companies and in the scientific community. However, there is a lack of understanding of how, and for which use cases, these new Hadoop capabilities can be efficiently used to augment automotive applications and systems. This paper surveys use cases and applications for deploying Hadoop in the automotive industry. Over the years, a rich ecosystem has emerged around Hadoop, comprising tools for parallel, in-memory, and stream processing (most notably MapReduce and Spark), SQL and NoSQL engines (Hive, HBase), and machine learning (Mahout, MLlib). It is critical to develop an understanding of automotive applications and their characteristics and requirements for data discovery, integration, exploration, and analytics. We then map these requirements to a confined technical architecture consisting of core Hadoop services and libraries for data ingest, processing, and analytics. The objective of this paper is to address questions such as: What applications and datasets are suitable for Hadoop? How can a diverse set of frameworks and tools be managed on a multi-tenant Hadoop cluster? How do these tools integrate with existing relational data management systems? How can enterprise security requirements be addressed? What are the performance characteristics of these tools for real-world automotive applications? To address the last question, we utilize a standard benchmark (TPCx-HS) and two application benchmarks (SQL and machine learning) that operate on a dataset of multiple terabytes and billions of rows.
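
As a hypothetical sketch of the kind of connected-vehicle aggregation such a stack supports, the PySpark job below summarizes one sensor signal per vehicle per day; the paths, schema, and column names are invented for illustration and are not the paper's datasets.

```python
# Hypothetical PySpark sketch of a connected-vehicle aggregation.
# Paths, schema, and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("vehicle-sensor-agg").getOrCreate()

# Billions of rows of (vehicle_id, signal, value, ts) sensor readings.
readings = spark.read.parquet("hdfs:///telemetry/readings")

# Per-vehicle daily summary of one signal, e.g. battery voltage.
summary = (readings
           .filter(F.col("signal") == "battery_voltage")
           .groupBy("vehicle_id", F.to_date("ts").alias("day"))
           .agg(F.avg("value").alias("avg_v"),
                F.min("value").alias("min_v")))

summary.write.mode("overwrite").parquet("hdfs:///telemetry/daily_voltage")
```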


IEEE Transactions on Education | 2007

Teaching Grid Computing: Topics, Exercises, and Experiences

Jens Mache; Amy W. Apon

Grid protocols and technologies are being adopted in a wide variety of academic, government, and industrial environments, and a growing body of research-oriented literature in grid computing is being compiled. However, there is a need for educational material that is suitable for classroom use. This paper describes topics, exercises, and experiences of teaching grid computing at two different universities. Course material in grid computing can be grouped into several knowledge areas. The focus in this paper is on grid programming, i.e., developing grid-enabled services using the Globus toolkit. Assessment data shows that, with preparatory material and hands-on exercises, undergraduate computer science students can master grid programming. Topics and exercises for security, network programming, Web services, and grid programming are described. Recommendations include using stand-alone containers on individual student computers and following a first, confidence-building grid programming exercise with at least one more elaborate grid programming exercise.


international conference on big data | 2014

Synthetic data generation for the internet of things

Jason W. Anderson; K. E. Kennedy; Linh Bao Ngo; Andre Luckow; Amy W. Apon

The concept of the Internet of Things (IoT) is rapidly moving from a vision to being pervasive in our everyday lives. This can be observed in the integration of connected sensors from a multitude of devices such as mobile phones, healthcare equipment, and vehicles. There is a need for the development of infrastructure support and analytical tools to handle IoT data, which are naturally big and complex. However, research on IoT data can be constrained by concerns about the release of privately owned data. In this paper, we present the design and implementation results of a synthetic IoT data generation framework. The framework enables research on synthetic data that exhibit the complex characteristics of original data without compromising proprietary information and personal privacy.
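
One simple way to make the idea concrete (a hedged sketch, not the framework's actual design) is to fit per-sensor summary statistics on real readings and sample synthetic records from them, so aggregate characteristics survive while no original value is released:

```python
# Illustrative synthetic-data sketch: fit per-sensor statistics, then
# sample new records from them. Not the paper's actual framework.
import random
import statistics

def fit_profile(readings: dict) -> dict:
    """Reduce each sensor's real readings to (mean, stdev)."""
    return {sensor: (statistics.mean(vals), statistics.stdev(vals))
            for sensor, vals in readings.items()}

def generate(profile: dict, n: int) -> list:
    """Emit n synthetic records matching the fitted statistics.

    Only the aggregate shape of the data survives; no original
    reading, and hence no private value, is ever released.
    """
    return [{s: random.gauss(mu, sigma) for s, (mu, sigma) in profile.items()}
            for _ in range(n)]

real = {"engine_temp_c": [88.1, 90.4, 89.7, 91.2],
        "speed_kmh": [52.0, 61.3, 57.8, 49.5]}
synthetic = generate(fit_profile(real), n=1000)
```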

Collaboration


Top co-author: Baochuan Lu, University of Arkansas