Publication


Featured research published by Arnold L. Rosenberg.


International Journal of Foundations of Computer Science | 2014

On Scheduling Series-Parallel DAGs to Maximize AREA

Gennaro Cordasco; Arnold L. Rosenberg

The AREA of a schedule for executing DAGs is the average number of DAG-chores that are eligible for execution at each step of the computation. AREA maximization is a new optimization goal for sched...


IEEE Transactions on Parallel and Distributed Systems | 2015

An AREA-Oriented Heuristic for Scheduling DAGs on Volatile Computing Platforms

Gennaro Cordasco; Rosario De Chiara; Arnold L. Rosenberg

Many modern computing platforms (notably clouds and desktop grids) exhibit dynamic heterogeneity: the availability and computing power of their constituent resources can change unexpectedly and dynamically, even in the midst of a computation. We introduce a new quality metric, AREA, for schedules that execute computations having interdependent constituent chores (jobs, tasks, etc.) on such platforms. AREA measures the average number of chores that a schedule renders eligible for execution at each step of a computation. Even though the definition of AREA does not mention any properties of host platforms (such as volatility), intuition suggests that rendering chores eligible at a faster rate will have a benign impact on the performance of volatile platforms. We report on simulation experiments that support this intuition. Earlier work has derived the basic properties of the AREA metric and has shown how to efficiently craft AREA-maximizing (A-M) schedules for several classes of significant computations. Even though A-M schedules exist for every computation, it is not always known how to derive such schedules efficiently. In response, the current study develops an efficient algorithm that produces AREA-Oriented (A-O) schedules, which aim to efficiently approximate the AREAs of A-M schedules for arbitrary computations. The simulation experiments reported here suggest that, in common with A-M schedules, A-O schedules complete computations on volatile heterogeneous platforms faster than a variety of heuristics, ranging from lightweight ones to computationally intensive ones, albeit not to the same degree as A-M schedules do. Our experiments suggest that schedules having larger AREAs have smaller completion times, but no proof of that yet exists.
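For readers who want the AREA metric in executable form, here is a minimal Python sketch (not the authors' code); it assumes AREA is the mean number of eligible-but-unexecuted chores observed after each prefix of a one-chore-per-step schedule, following the informal definition in the abstract above.

```python
# Minimal sketch (not the authors' code): the AREA of a schedule, taken here
# as the mean number of eligible-but-unexecuted nodes observed after each
# prefix of a one-node-per-step schedule, per the abstract's informal wording.
from collections import defaultdict

def area(schedule, edges):
    """schedule: nodes in execution order (a topological order of the DAG).
    edges: iterable of (u, v) pairs meaning u must finish before v."""
    preds = defaultdict(set)
    for u, v in edges:
        preds[v].add(u)
    executed, total_eligible = set(), 0
    for node in schedule:
        executed.add(node)
        # A node is eligible once all of its predecessors have executed.
        eligible = [w for w in schedule
                    if w not in executed and preds[w] <= executed]
        total_eligible += len(eligible)
    return total_eligible / len(schedule)

# Fork-join DAG a -> {b, c} -> d; this order yields AREA = (2+1+1+0)/4 = 1.0.
print(area(["a", "b", "c", "d"],
           [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]))
```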


IEEE International Conference on High Performance Computing, Data, and Analytics | 2017

Scheduling DAG-based workflows on single cloud instances

Arnold L. Rosenberg

The problem of achieving high performance cost-effectively in cloud computing is challenging when workflows have Directed Acyclic Graph (DAG)-structured inter-task dependencies. We study this problem within single cloud instances and provide empirical evidence that the static Area-Oriented DAG-Scheduling (AO) paradigm, which predetermines the order for executing a DAG's tasks, provides both high performance and cost effectiveness. AO produces schedules in a platform-oblivious manner; it ignores the performance characteristics of the platform's resources and focuses only on the dependency structure of the workflow. Specifically, AO's schedules strive to enhance the rate of rendering tasks eligible for execution. Using an archive of diverse DAG-structured workflows, we experimentally compare AO with a variety of competing DAG-schedulers: (a) the static locally optimal DAG-scheduler (LO), which, like AO, is static and platform-oblivious but chooses its DAG-ordering based on tasks' outdegrees; and (b) five dynamic versions of static schedulers (including AO and LO), each of which can violate its parent static scheduler's prescribed task order to avoid stalling. Our results provide evidence of AO's superiority to LO and of its essential equivalence to dynamic-AO: neither competitor yields higher performance at a lower cost than AO does. Two aspects of these results are notable. First, AO is platform-oblivious, whereas dynamic-AO is intensely platform-sensitive; one would expect platform sensitivity to enhance performance. Second, AO outperforms LO by an order of magnitude while also incurring lower costs; one would not expect such a performance gap.
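To make the outdegree-based ordering concrete, the sketch below is one plausible reading of the abstract's description of LO, not the paper's algorithm (the exact ordering and tie-breaking rules may differ): among the currently eligible tasks, greedily execute one with the largest outdegree.

```python
# Illustrative sketch only (a plausible reading of LO as described in the
# abstract, not the paper's algorithm): outdegree-greedy static ordering.
from collections import defaultdict

def lo_order(nodes, edges):
    succs, preds = defaultdict(set), defaultdict(set)
    for u, v in edges:
        succs[u].add(v)
        preds[v].add(u)
    executed, order = set(), []
    while len(order) < len(nodes):
        eligible = [w for w in nodes
                    if w not in executed and preds[w] <= executed]
        # Greedily execute the eligible task with the most successors.
        nxt = max(eligible, key=lambda w: len(succs[w]))
        executed.add(nxt)
        order.append(nxt)
    return order

# Example: a -> {b, c}, b -> d; b has outdegree 1, so it is picked before c.
print(lo_order(["a", "b", "c", "d"], [("a", "b"), ("a", "c"), ("b", "d")]))
```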


Concurrency and Computation: Practice and Experience | 2015

On constructing DAG-schedules with large areas

Scott T. Roche; Arnold L. Rosenberg; Rajmohan Rajaraman

The Area of a schedule Σ for a directed acyclic graph (DAG) G is a quality metric that measures the rate at which Σ renders G's nodes eligible for execution. Specifically, AREA(Σ) is the average number of nodes of G that are eligible for execution as Σ executes G node by node. Extensive simulations suggest that, for many distributions of processor availability and power, DAG-schedules having larger Areas execute DAGs faster on platforms that are dynamically heterogeneous: the platform's processors change power and availability status in unpredictable ways and at unpredictable times. (Clouds and desktop grids exemplify such platforms.) While Area-maximal schedules can provably be found for every DAG, efficient generators of such schedules are known only for families of well-structured DAGs. Our first result shows that the problem of crafting Area-maximal schedules for general DAGs is NP-complete, hence likely computationally intractable. We also provide an efficient algorithm that approximates the optimal Area to within a factor of 1/(2n), where n is the number of tasks in the DAG, a factor that is likely interesting only for small DAGs. The lack of efficient Area-maximizing schedulers for general DAGs has instigated the development of several heuristics for producing DAG-schedules that have large Areas. We propose a novel polynomial-time heuristic that produces schedules having quite large Areas; the heuristic is based on the Sidney decomposition of a DAG. (1) Simulations on DAGs having random structure yield the following results. The SIDNEY heuristic produces schedules whose Areas: (a) are at least 85% of maximal; and (b) are at least 1.25 times greater than those produced by previously known heuristics. (2) Simulations on DAGs having the structure of random LEGO® DAGs (as formulated in earlier studies) indicate that the schedules produced by the SIDNEY heuristic have Areas that are at least 1.5 times greater than those produced by previously known heuristics. The ‘85%’ result is obtained from formulating the Area-maximization problem as a linear program (LP); the Areas of DAG-schedules produced by the SIDNEY heuristic are at least 85% of the Area value produced by the (unrounded) LP. (3) The reported results on random DAGs are essentially matched by a second heuristic, which produces DAG-schedules by rounding the results of the LP formulation.


Technical Symposium on Computer Science Education | 2014

NSF/IEEE-TCPP curriculum initiative on parallel and distributed computing: core topics for undergraduates (abstract only)

Sushil K. Prasad; Almadena Yu. Chtchelkanova; Anshul Gupta; Arnold L. Rosenberg; Alan Sussman

Parallelism pervades all aspects of modern computing, from in-home devices such as cell phones to large-scale supercomputers. Recognizing this - and motivated by the premise that every undergraduate student in a computer-related field should be prepared to cope with parallel computing - a working group sponsored by NSF and IEEE/TCPP, and interacting with the ACM CS2013 initiative, has developed guidelines for assimilating parallel and distributed computing (PDC) into the core undergraduate curriculum. Over 100 Early-Adopter institutions worldwide are currently modifying their computer-related curricula in response to the guidelines. Additionally, the CDER Center for Curriculum Development and Educational Resources, which grew out of the working group, is currently assembling a book of contributed essays on how to teach PDC topics in lower-level CS/CE courses, to fill the serious lack of textual material for students and instructors. This session is intended: (i) to report on the current state of this initiative; (ii) to bring together authors of book chapters and Early Adopters and other interested parties for discussions on ongoing activities and needs; (iii) to discuss the initiative and collect direct feedback from the community.


European Conference on Parallel Processing | 2014

On Constructing DAG-Schedules with Large AREAs

Scott T. Roche; Arnold L. Rosenberg; Rajmohan Rajaraman

The Area of a schedule Σ for a dag G measures the rate at which Σ renders G's nodes eligible for execution. Specifically, AREA(Σ) is the average number of nodes that are eligible for execution as Σ executes G node by node. Extensive simulations suggest that, for many distributions of processor availability and power, schedules having larger Areas execute dags faster on platforms that are dynamically heterogeneous: their processors change power and availability status in unpredictable ways and at unpredictable times. While Area-maximal schedules exist for every dag, efficient generators of such schedules are known only for well-structured dags. We prove that the general problem of crafting Area-maximal schedules is NP-complete, hence likely computationally intractable. This situation motivates the development of heuristics for producing dag-schedules that have large Areas. We build on the Sidney decomposition of a dag to develop a polynomial-time heuristic, Sidney, whose schedules have quite large Areas. (1) Simulations on dags having random structure indicate that Sidney’s schedules have Areas: (a) at least 85% of maximal; (b) at least 1.25 times larger than those produced by previous heuristics. (2) Simulations on dags having the structure of random “LEGO®” dags indicate that Sidney’s schedules have Areas that are at least 1.5 times larger than those produced by previous heuristics. The “85%” result emerges from an LP-based formulation of the Area-maximization problem. (3) Our results on random dags are roughly matched by a second heuristic that emerges directly from the LP formulation.


The Computer Journal | 2014

Region Management by Finite-State Robots

Arnold L. Rosenberg

Advancing technologies have enabled simple mobile robots that collaborate to perform complex tasks. Understanding how to achieve such collaboration with simpler robots leverages these advances, potentially allowing more robots for a given cost and/or decreasing the cost of deploying a fixed number of robots. This paper is a step toward understanding the algorithmic strengths and weaknesses of robots that are identical mobile finite-state machines (FSMs), FSMs being the avatars of simple, yet non-trivial, discrete control structures. We study the ability of (teams of) FSMs to identify and search varied-size quadrants of square (i.e., n×n) meshes of tiles, such meshes being the avatars of simple tessellated geographically constrained environments. Each team must be able to accomplish its assigned tasks in arbitrarily large meshes, i.e., for arbitrarily large values of n. Partitions of a mesh into quadrants are specified via pairs of rational numbers 〈φ, ψ〉, where 0 < φ, ψ < 1, chosen from a fixed, finite repertoire of such pairs. The quadrants specified by a pair 〈φ, ψ〉 are delimited by a horizontal line and a vertical line that cross at the anchor mesh-tile v = 〈⌊φ(n−1)⌋, ⌊ψ(n−1)⌋〉. The following results are established.


European Conference on Parallel Processing | 2016

Scheduling DAGs Opportunistically: The Dream and the Reality Circa 2016

Arnold L. Rosenberg

A broad-brush tour of a platform-oblivious approach to scheduling dag-structured computations on platforms whose resources can change dynamically, both in availability and efficiency. The main focus is on the IC-scheduling and Area-oriented scheduling paradigms: the motivation, the dream, the implementation, and initial work on evaluation.


Archive | 2015

Algorithmic Insights into Finite-State Robots

Arnold L. Rosenberg

Modern technology has enabled the deployment of small computers that can act as the “brains” of mobile robots. Multiple advantages accrue if one can deploy simpler computers rather than more sophisticated ones: for a fixed cost, one can deploy more computers, hence benefit from more concurrent computing and/or more fault-tolerant design, both major issues with assemblages of mobile “intelligent” robots. This chapter explores the capabilities and limitations of computers that execute simply structured finite-state programs. The robots of interest operate within constrained physical settings such as warehouses or laboratories; they operate on tessellated “floors” within such settings, which we view formally as meshes of tiles. The major message of the chapter is that teams of (identical) robots whose “intellects” are powered by finite-state programs are capable of more sophisticated algorithmics than one might expect, even when the robots must operate: (a) without the aid of centralized control and (b) using algorithms that are scalable, in the sense that they work in meshes/“floors” of arbitrary sizes. A significant enabler of the robots’ algorithmic sophistication is their ability to use their host mesh’s edges (i.e., the walls of the warehouses or laboratories) when orchestrating their activities. The capabilities of our “finite-state robots” are illustrated via a variety of algorithmic problems that involve path planning and exploration, in addition to the rearranging of labeled objects.


International Symposium on Computer and Information Sciences | 2013

Finite-State Robots in the Land of Rationalia

Arnold L. Rosenberg

Advancing technologies have enabled simple mobile robots that collaborate to perform complex tasks. Understanding how to achieve such collaboration with simpler robots leverages these advances, potentially allowing more robots for a given cost and/or decreasing the cost of deploying a fixed number of robots. This paper is a step toward understanding the algorithmic strengths and weaknesses of robots that are identical mobile finite-state machines (FSMs), FSMs being the avatar of “simple” digital computers. We study the ability of (teams of) FSMs to identify and search within varied-size quadrants of square (n × n) meshes of tiles, such meshes being the avatars of tessellated geographically constrained environments. Each team must accomplish its assigned tasks scalably, i.e., in arbitrarily large meshes (equivalently, for arbitrarily large values of n). Each subdivision of a mesh into quadrants is specified via a pair of fractions 〈φ, ψ〉, where 0 < φ, ψ < 1, chosen from a fixed, finite repertoire of such pairs. The quadrants specified by the pair 〈φ, ψ〉 are delimited by a horizontal line and a vertical line that cross at anchor mesh-tile v^(φ,ψ) = 〈⌊φ(n−1)⌋, ⌊ψ(n−1)⌋〉. The current results:
• A single FSM cannot identify tile v^(φ,ψ) in meshes of arbitrary sizes, even for a single pair 〈φ, ψ〉, except when v^(φ,ψ) resides on a mesh-edge.
• A pair of identical FSMs can identify tiles v^(φ_i,ψ_i) in meshes of arbitrary sizes, for arbitrary fixed finite sets of k pairs {〈φ_i, ψ_i〉 : 1 ≤ i ≤ k}. The pair can sweep each of the resulting quadrants in turn.
• Single FSMs can always verify (for all pairs and meshes) that all of the tiles of each quadrant are labeled in a way that is unique to that quadrant. This process parallelizes linearly for teams of FSMs.
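As a concrete illustration of the anchor formula above, the following Python sketch (hypothetical helper names, not code from the paper) computes the anchor mesh-tile determined by a pair of fractions in an n × n mesh.

```python
# Minimal sketch (hypothetical helper, not code from the paper): the anchor
# mesh-tile determined by a pair of fractions <phi, psi> in an n x n mesh,
# following the formula v = <floor(phi*(n-1)), floor(psi*(n-1))> above.
from fractions import Fraction
from math import floor

def anchor_tile(phi: Fraction, psi: Fraction, n: int) -> tuple[int, int]:
    assert 0 < phi < 1 and 0 < psi < 1 and n >= 2
    return (floor(phi * (n - 1)), floor(psi * (n - 1)))

# Example: the pair <1/2, 1/3> in a 9 x 9 mesh anchors at tile (4, 2).
print(anchor_tile(Fraction(1, 2), Fraction(1, 3), 9))
```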

Collaboration


Dive into Arnold L. Rosenberg's collaborations.

Top Co-Authors
Charles C. Weems (University of Massachusetts Amherst)
Gennaro Cordasco (Seconda Università degli Studi di Napoli)
Jeremy Benson (University of New Mexico)
Trilce Estrada (University of New Mexico)