Featured Research

Performance

BenchCouncil's View on Benchmarking AI and Other Emerging Workloads

This paper outlines BenchCouncil's view on the challenges, rules, and vision of benchmarking modern workloads such as Big Data, AI or machine learning, and Internet Services. We summarize the challenges of benchmarking modern workloads as FIDSS (Fragmented, Isolated, Dynamic, Service-based, and Stochastic), and propose the PRDAERS benchmarking rules: the benchmarks should be specified in a paper-and-pencil manner, relevant, diverse, containing different levels of abstraction, specifying the evaluation metrics and methodology, repeatable, and scalable. We believe that proposing simple yet elegant abstractions that achieve both efficiency and general-purpose applicability is the ultimate goal of benchmarking, though it may not be a pressing one. In light of this vision, we briefly discuss BenchCouncil's related projects.

Performance

Benchmarking Big Data Systems: State-of-the-Art and Future Directions

The great prosperity of big data systems such as Hadoop in recent years has made benchmarking these systems crucial for both the research and industry communities. The complexity, diversity, and rapid evolution of big data systems give rise to new challenges in how we design generators to produce data with the 4V properties (i.e., volume, velocity, variety, and veracity), as well as how we implement application-specific yet still comprehensive workloads. However, most existing big data benchmarks can be described as attempts to solve specific problems in benchmarking systems. This article investigates the state of the art in benchmarking big data systems, along with the future challenges that must be addressed to realize a successful and efficient benchmark.

Performance

Benchmarking Data Analysis and Machine Learning Applications on the Intel KNL Many-Core Processor

Knights Landing (KNL) is the code name for the second-generation Intel Xeon Phi product family. KNL has generated significant interest in the data analysis and machine learning communities because its new many-core architecture targets both of these workloads. The KNL many-core vector processor design enables it to exploit much higher levels of parallelism. At the Lincoln Laboratory Supercomputing Center (LLSC), the majority of users run data analysis applications such as MATLAB and Octave. More recently, machine learning applications, such as the UC Berkeley Caffe deep learning framework, have become increasingly important to LLSC users. Thus, the performance of these applications on KNL systems is of high interest to LLSC users and the broader data analysis and machine learning communities. Our data analysis benchmarks of these applications on the Intel KNL processor indicate that single-core double-precision generalized matrix multiply (DGEMM) performance on KNL systems has improved by ~3.5x compared to prior Intel Xeon technologies. Our data analysis applications also achieved ~60% of the theoretical peak performance. In addition, a performance comparison of a machine learning application, Caffe, between two different Intel CPUs, the Xeon E5 v3 and the Xeon Phi 7210, demonstrated a 2.7x improvement on a KNL node.
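
As a rough illustration of the kind of measurement described above, the sketch below times a single-core double-precision matrix multiply with NumPy and reports the achieved fraction of an assumed theoretical peak; the matrix size and peak figure are placeholders, not values from the paper.

```python
# Minimal single-core DGEMM timing sketch (not the paper's benchmark code).
# For a true single-core number, pin the BLAS to one thread first
# (e.g., OMP_NUM_THREADS=1).  Matrix size and peak are illustrative.
import time
import numpy as np

N = 4096                      # assumed matrix dimension
PEAK_GFLOPS = 44.8            # assumed single-core double-precision peak

a = np.random.rand(N, N)
b = np.random.rand(N, N)

best = float("inf")
for _ in range(3):            # take the best of a few runs
    t0 = time.perf_counter()
    c = a @ b                 # double-precision GEMM via the BLAS backing NumPy
    best = min(best, time.perf_counter() - t0)

gflops = 2.0 * N**3 / best / 1e9
print(f"DGEMM: {gflops:.1f} GFLOP/s, {100 * gflops / PEAK_GFLOPS:.1f}% of assumed peak")
```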

Performance

Benchmarking TinyML Systems: Challenges and Direction

Recent advancements in ultra-low-power machine learning (TinyML) hardware promise to unlock an entirely new class of smart applications. However, continued progress is limited by the lack of a widely accepted benchmark for these systems. Benchmarking allows us to measure, and thereby systematically compare, evaluate, and improve, the performance of systems, and is therefore fundamental to a field reaching maturity. In this position paper, we present the current landscape of TinyML and discuss the challenges and direction toward developing a fair and useful hardware benchmark for TinyML workloads. Furthermore, we present our four benchmarks and discuss our selection methodology. Our viewpoints reflect the collective thoughts of the TinyMLPerf working group, which comprises more than 30 organizations.

Performance

Benchmarking the computing resources at the Instituto de Astrofísica de Canarias

The aim of this study is to characterize the computing resources used by researchers at the "Instituto de Astrofísica de Canarias" (IAC). Since there is a huge demand for computing time and we use tools such as HTCondor to implement High Throughput Computing (HTC) across all available PCs, it is essential for us to assess, in a quantitative way and using objective parameters, the performance of our computing nodes. To achieve this, we ran a set of benchmark tests on a number of different desktop and laptop PC models among those used in our institution. In particular, we ran the "Polyhedron Fortran Benchmarks" suite using three different compilers: the GNU Fortran Compiler, the Intel Fortran Compiler, and the PGI Fortran Compiler; execution times were then normalized to the reference values published by Polyhedron. The same tests were run multiple times on the same PC, and on 3 to 5 PCs of the same model whenever possible, to check the repeatability and consistency of the results. We found that, in general, execution times for a given PC model are consistent to within an uncertainty of about 10%, and show a gain in CPU speed of a factor of about 3 between the oldest PCs used at the IAC (7-8 years old) and the newest ones.
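
A minimal sketch of the normalization and repeatability check described above: measured execution times are divided by published reference values, and the run-to-run spread is reported. The benchmark names, reference times, and measurements below are placeholders, not IAC data.

```python
# Sketch of the normalization and repeatability check described above.
# Reference times and measurements are placeholder numbers, not IAC data.
import statistics

reference = {"ac": 8.6, "air": 5.9, "aermod": 17.3}   # published reference times (illustrative)
measured = {                                           # repeated runs on one PC model (illustrative)
    "ac": [9.1, 9.3, 9.0],
    "air": [6.2, 6.1, 6.4],
    "aermod": [18.0, 18.5, 17.8],
}

for test, runs in measured.items():
    mean = statistics.mean(runs)
    spread = (max(runs) - min(runs)) / mean * 100      # repeatability as % spread across runs
    normalized = mean / reference[test]                 # > 1 means slower than the reference machine
    print(f"{test:8s} normalized={normalized:.2f} spread={spread:.1f}%")
```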

Performance

BestConfig: Tapping the Performance Potential of Systems via Automatic Configuration Tuning

An ever-increasing number of configuration parameters are exposed to system users, yet many users apply one configuration setting across different workloads, leaving the performance potential of their systems untapped. A good configuration setting can greatly improve the performance of a deployed system under certain workloads, but with tens or hundreds of parameters, deciding which configuration setting leads to the best performance becomes a highly costly task. While such a task requires strong expertise in both the system and the application, users commonly lack such expertise. To help users tap the performance potential of their systems, we present BestConfig, a system for automatically finding a best configuration setting within a resource limit for a deployed system under a given application workload. BestConfig is designed with an extensible architecture to automate configuration tuning for general systems. To tune system configurations within a resource limit, we propose the divide-and-diverge sampling method and the recursive bound-and-search algorithm. Solely by adjusting configurations, BestConfig can improve the throughput of Tomcat by 75%, that of Cassandra by 63%, and that of MySQL by 430%, and it can reduce the running time of a Hive join job by about 50% and that of a Spark join job by about 80%.
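
The sketch below illustrates a divide-and-diverge style sampler in the spirit of BestConfig (not its actual implementation): each parameter range is split into k intervals, one value is drawn from every interval, and the draws are shuffled per parameter so the k configurations diverge across the space. In BestConfig, the best of these samples would then seed a recursive bound-and-search around its neighborhood; the parameter names and ranges here are invented for illustration.

```python
# Rough divide-and-diverge style sampling sketch (in the spirit of BestConfig,
# not its actual implementation): split each parameter range into k intervals,
# draw one value per interval, then shuffle per parameter so the k
# configurations spread across the whole space.
import random

def divide_and_diverge(param_ranges, k):
    """param_ranges: {name: (low, high)}; returns k diverse configurations."""
    columns = {}
    for name, (low, high) in param_ranges.items():
        width = (high - low) / k
        samples = [low + (i + random.random()) * width for i in range(k)]
        random.shuffle(samples)        # diverge: decouple interval choices across parameters
        columns[name] = samples
    return [{name: columns[name][i] for name in param_ranges} for i in range(k)]

# Toy configuration space; in BestConfig the best of these samples would seed
# a recursive bound-and-search around its neighborhood.
configs = divide_and_diverge({"heap_mb": (256, 4096), "threads": (1, 64)}, k=6)
for c in configs:
    print({name: round(value, 1) for name, value in c.items()})
```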

Performance

Beyond Safety Drivers: Staffing a Teleoperations System for Autonomous Vehicles

Driverless vehicles promise a host of societal benefits, including dramatically improved safety, increased accessibility, greater productivity, and a higher quality of life. As this new technology approaches widespread deployment, both industry and government are making provisions for teleoperations systems, in which remote human agents provide assistance to driverless vehicles. This assistance can involve real-time remote operation and even ahead-of-time input via human-in-the-loop artificial intelligence systems. In this paper, we address the problem of staffing such a remote support center. Our analysis focuses on the tradeoffs between the total number of remote agents, the reliability of the remote support system, and the resulting safety of the driverless vehicles. By establishing a novel connection between queues with large batch arrivals and storage processes, we determine the probability of the system exceeding its service capacity. This connection drives our staffing methodology. We also develop a numerical method to compute the exact staffing level needed to achieve various performance measures. This moment-generating-function-based technique may be of independent interest, and our overall staffing analysis may be of use in other applications that combine human expertise and automated systems.
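
As a rough, simulation-based stand-in for the exact analysis described above (not the paper's moment-generating-function method), the sketch below estimates how often demand from batch arrivals of support requests exceeds a given number of remote agents; all rates, batch sizes, and staffing levels are illustrative.

```python
# Monte Carlo sketch of the staffing question: with batch arrivals of support
# requests and a fixed number of remote agents, what fraction of time does
# demand exceed capacity?  This is a simulation stand-in for the paper's exact
# analysis; every parameter below is an illustrative assumption.
import random

def exceedance_probability(agents, batch_rate, max_batch, service_rate, horizon=50_000.0):
    """Fraction of time the number of open requests exceeds the number of agents."""
    in_system, t, exceeded = 0, 0.0, 0.0
    while t < horizon:
        total_rate = batch_rate + in_system * service_rate
        dt = random.expovariate(total_rate)
        if in_system > agents:
            exceeded += dt                               # time during which requests outnumber agents
        t += dt
        if random.random() < batch_rate / total_rate:
            in_system += random.randint(1, max_batch)    # a batch of vehicles requests help
        else:
            in_system -= 1                               # an agent finishes a request
    return exceeded / t

print(exceedance_probability(agents=60, batch_rate=1.0, max_batch=8, service_rate=0.1))
```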

Performance

Beyond Scaling: Calculable Error Bounds of the Power-of-Two-Choices Mean-Field Model in Heavy-Traffic

This paper provides a recipe for deriving calculable approximation errors of mean-field models in heavy traffic, with a focus on the well-known load balancing algorithm power-of-two-choices (Po2). The recipe combines Stein's method for linearized mean-field models with State Space Concentration (SSC) based on geometric tail bounds. In particular, we divide the state space into two regions: a neighborhood near the mean-field equilibrium and its complement. We first use a tail bound to show that the steady-state probability of being outside the neighborhood is small. Then, we use a linearized mean-field model and Stein's method to characterize the generator difference, which provides the dominant term of the approximation error. From the dominant term, we obtain an asymptotically tight bound and a nonasymptotic upper bound, both of which are calculable, rather than order-wise scaling results like most results in the literature. Finally, we compare the theoretical bounds with numerical evaluations to show the effectiveness of our results. Notably, the simulation results show that both bounds are valid even for small systems, such as a system with only ten servers.
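
To make the setting concrete, the sketch below simulates a small power-of-two-choices system and compares its steady-state tail probabilities against the classical Po2 mean-field fixed point, in which the fraction of servers with at least i jobs is λ^(2^i − 1). It illustrates the kind of finite-size error the paper's bounds quantify; it is a toy experiment with illustrative parameters, not the paper's method.

```python
# Compare a small power-of-two-choices simulation against the classical
# mean-field fixed point P(queue >= i) = lam**(2**i - 1).  Toy illustration
# of the finite-size error the paper bounds; parameters are illustrative.
import random

def simulate_po2(n_servers=10, lam=0.9, events=100_000, warmup=10_000, max_q=8):
    queues = [0] * n_servers
    tail_time = [0.0] * max_q            # time-weighted count of servers with >= i jobs
    measured = 0.0
    for step in range(events):
        busy = sum(q > 0 for q in queues)
        total_rate = lam * n_servers + busy
        dt = random.expovariate(total_rate)
        if step >= warmup:
            measured += dt
            for i in range(1, max_q):
                tail_time[i] += dt * sum(q >= i for q in queues)
        if random.random() < lam * n_servers / total_rate:
            a, b = random.randrange(n_servers), random.randrange(n_servers)
            queues[min(a, b, key=lambda j: queues[j])] += 1    # join the shorter of two sampled servers
        else:
            queues[random.choice([j for j, q in enumerate(queues) if q > 0])] -= 1  # a departure
    return [c / (measured * n_servers) for c in tail_time]

tails = simulate_po2()
for i in range(1, 5):
    print(f"P(queue >= {i}): simulated={tails[i]:.3f}  mean-field={0.9 ** (2 ** i - 1):.3f}")
```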

Performance

Blind GB-PANDAS: A Blind Throughput-Optimal Load Balancing Algorithm for Affinity Scheduling

Dynamic affinity load balancing of multi-type tasks on multi-skilled servers, when the service rate of each task type on each server is known and can differ from server to server, has been an open problem for over three decades. The goal is to assign tasks to servers in real time so that the system is stable, meaning that queue lengths do not diverge to infinity in steady state (throughput optimality), and so that the mean task completion time is minimized (delay optimality). The fluid model planning, Max-Weight, and c-μ-rule algorithms have theoretical guarantees of optimality in some respects for the affinity problem, but they consider a complicated queueing structure and require the task arrival rates, the service rates of tasks on servers, or both. In many cases, discussed in the introduction, both the task arrival rates and the service rates of different task types on different servers are unknown. In this work, we propose the Blind GB-PANDAS algorithm, which is completely blind to task arrival rates and service rates. Blind GB-PANDAS uses an exploration-exploitation approach for load balancing. We prove that Blind GB-PANDAS is throughput optimal under arbitrary and unknown distributions for the service times of different task types on different servers and unknown task arrival rates. Blind GB-PANDAS aims to route an incoming task to the server with the minimum weighted workload, but since the service rates are unknown, such routing is not guaranteed, which makes the throughput-optimality analysis more complicated than in the case where service rates are known. Our extensive experimental results reveal that Blind GB-PANDAS significantly outperforms existing methods in terms of mean task completion time at high loads.
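
The toy sketch below illustrates the exploration-exploitation routing idea in a much-simplified form (it is not the GB-PANDAS algorithm itself): with a small probability, a task is routed at random so that per-(task type, server) service-time estimates keep improving; otherwise it is sent to the server with the smallest estimated weighted workload. All class and parameter names are invented for illustration.

```python
# Toy epsilon-greedy stand-in for the exploration-exploitation routing idea:
# explore occasionally to learn unknown service rates, otherwise exploit by
# routing to the server with the smallest estimated workload.  Not the
# GB-PANDAS algorithm from the paper.
import random
from collections import defaultdict

class BlindRouter:
    def __init__(self, n_servers, epsilon=0.05):
        self.n = n_servers
        self.epsilon = epsilon
        self.workload = [0.0] * n_servers        # estimated outstanding work on each server
        self.obs = defaultdict(list)             # (task_type, server) -> observed service times

    def est_service_time(self, task_type, server):
        samples = self.obs[(task_type, server)]
        return sum(samples) / len(samples) if samples else 1.0   # default guess before any data

    def route(self, task_type):
        if random.random() < self.epsilon:       # explore: keep learning every (type, server) pair
            server = random.randrange(self.n)
        else:                                    # exploit: minimize estimated workload after assignment
            server = min(range(self.n),
                         key=lambda s: self.workload[s] + self.est_service_time(task_type, s))
        self.workload[server] += self.est_service_time(task_type, server)
        return server

    def record(self, task_type, server, service_time):
        self.obs[(task_type, server)].append(service_time)       # refine the service-time estimate
        self.workload[server] = max(0.0, self.workload[server] - service_time)
```

The point of the sketch is only the split between learning the unknown rates and routing by estimated weighted workload; the actual algorithm's queueing structure and optimality analysis are far more involved.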

Performance

Bolt: Accelerated Data Mining with Fast Vector Compression

Vectors of data are at the heart of machine learning and data mining. Recently, vector quantization methods have shown great promise in reducing both the time and space costs of operating on vectors. We introduce a vector quantization algorithm that can compress vectors over 12x faster than existing techniques while also accelerating approximate vector operations such as distance and dot product computations by up to 10x. Because it can encode over 2GB of vectors per second, it makes vector quantization cheap enough to employ in many more circumstances. For example, using our technique to compute approximate dot products in a nested loop can multiply matrices faster than a state-of-the-art BLAS implementation, even when our algorithm must first compress the matrices. In addition to showing the above speedups, we demonstrate that our approach can accelerate nearest neighbor search and maximum inner product search by over 100x compared to floating point operations and up to 10x compared to other vector quantization methods. Our approximate Euclidean distance and dot product computations are not only faster than those of related algorithms with slower encodings, but also faster than Hamming distance computations, which have direct hardware support on the tested platforms. We also assess the errors of our algorithm's approximate distances and dot products, and find that it is competitive with existing, slower vector quantization algorithms.
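
For readers unfamiliar with the underlying idea, the sketch below shows the generic product-quantization technique that lookup-table-based methods such as Bolt accelerate: vectors are split into subvectors, each subvector is quantized against a small codebook, and approximate distances are answered from per-subspace lookup tables. It is not Bolt's implementation, and all sizes are illustrative.

```python
# Minimal product-quantization sketch of the lookup-table idea that Bolt
# accelerates (smaller codes and vectorized lookups in the real system);
# this is the generic technique, not Bolt's implementation.
import numpy as np

def train_codebooks(data, n_subspaces=4, n_codes=16, iters=10):
    books = []
    for sub in np.split(data, n_subspaces, axis=1):              # tiny k-means per subspace
        centers = sub[np.random.choice(len(sub), n_codes, replace=False)]
        for _ in range(iters):
            assign = np.argmin(((sub[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for c in range(n_codes):
                if np.any(assign == c):
                    centers[c] = sub[assign == c].mean(axis=0)
        books.append(centers)
    return books

def encode(x, books):
    parts = np.split(x, len(books))
    return [int(np.argmin(((b - p) ** 2).sum(-1))) for p, b in zip(parts, books)]

def approx_sq_dist(query, code, books):
    parts = np.split(query, len(books))
    tables = [((b - p) ** 2).sum(-1) for p, b in zip(parts, books)]   # per-subspace lookup tables
    return sum(table[c] for table, c in zip(tables, code))

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 32)).astype(np.float32)
books = train_codebooks(data)
code = encode(data[0], books)
query = rng.standard_normal(32).astype(np.float32)
print(approx_sq_dist(query, code, books), ((query - data[0]) ** 2).sum())   # approximate vs exact
```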

