Mary Mehrnoosh Eshaghian
New Jersey Institute of Technology
Publications
Featured research published by Mary Mehrnoosh Eshaghian.
Proceedings Sixth Heterogeneous Computing Workshop (HCW'97) | 1997
Mary Mehrnoosh Eshaghian; Ying-Chieh Wu
In this paper, a generic technique for mapping heterogeneous task graphs onto heterogeneous system graphs is presented. The task and system graphs studied in this paper have nonuniform computation and communication weights associated with the nodes and the edges. Two clustering algorithms are proposed which can be used to obtain a multilayer clustered graph called a Spec graph from a given task graph and a multilayer clustered graph called a Rep graph from a given system graph. We present a mapping algorithm which produces a suboptimal matching of a given Spec graph containing M task modules onto a Rep graph of N processors in O(MP) time, where P = max(M, N). Our experimental results indicate that our mapping algorithm is the fastest one and generates results which are better than, or similar to, those of other leading techniques which work only for restricted task or system graphs.
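A minimal sketch of the layer-wise matching idea follows; the data layout and the greedy pairing rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch: match one layer of a clustered task graph (Spec)
# onto one layer of a clustered system graph (Rep) by pairing the heaviest
# task clusters with the most powerful processor clusters.

def map_layer(task_clusters, system_clusters):
    """Greedily pair task clusters with system clusters by weight.

    task_clusters:   list of (cluster_id, computation_weight)
    system_clusters: list of (cluster_id, processing_power)
    Returns a dict: task cluster id -> system cluster id.
    """
    tasks = sorted(task_clusters, key=lambda c: c[1], reverse=True)
    procs = sorted(system_clusters, key=lambda c: c[1], reverse=True)
    mapping = {}
    for i, (tid, _) in enumerate(tasks):
        # Wrap around when there are more task clusters than system clusters,
        # so several task clusters may share one processor cluster.
        pid, _ = procs[i % len(procs)]
        mapping[tid] = pid
    return mapping

# Example: three task clusters onto two processor clusters.
print(map_layer([("t0", 9.0), ("t1", 4.0), ("t2", 2.5)],
                [("p0", 10.0), ("p1", 6.0)]))
```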
International Journal of High Speed Computing | 1994
Mary Mehrnoosh Eshaghian; Muhammad Shaaban
We present a novel parallel programming model called Cluster-M. This model facilitates the efficient design of highly parallel portable software. The two main components of this model are Cluster-M Specifications and Cluster-M Representations. A Cluster-M Specification consists of a number of clustering levels emphasizing the computation and communication requirements of a parallel solution to a given problem. A Cluster-M Representation, on the other hand, represents a multi-layered partitioning of a system graph corresponding to the topology of the target architecture. An algorithm for generating Cluster-M Representations is given. A set of basic constructs essential for writing Cluster-M Specifications using PCN is presented in this paper. Cluster-M Specifications are mapped onto the Representations using a proposed mapping methodology. Using Cluster-M, a single piece of software can be ported among various parallel computing systems.
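A sketch of how a Cluster-M Specification might be organized as ordered clustering levels is shown below; the field names and the toy reduction example are assumptions made for exposition (the paper expresses Specifications in PCN).

```python
# Sketch of a Cluster-M Specification as a list of clustering levels.
# Each level groups computation steps that can proceed concurrently and
# records which clusters must communicate before the next level.

from dataclasses import dataclass, field

@dataclass
class SpecLevel:
    clusters: list                               # e.g. [["a", "b"], ["c"]] -- concurrent groups
    merges: list = field(default_factory=list)   # pairs of clusters that exchange data

@dataclass
class ClusterMSpec:
    levels: list                                 # ordered list of SpecLevel

# A toy specification: four values reduced pairwise over two levels,
# as in a parallel summation.
spec = ClusterMSpec(levels=[
    SpecLevel(clusters=[["x0"], ["x1"], ["x2"], ["x3"]],
              merges=[("x0", "x1"), ("x2", "x3")]),
    SpecLevel(clusters=[["x0", "x1"], ["x2", "x3"]],
              merges=[(("x0", "x1"), ("x2", "x3"))]),
])
print(len(spec.levels), "clustering levels")
```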
international parallel processing symposium | 1992
Mary Mehrnoosh Eshaghian
We present a novel parallel programming model called Cluster-M. This model provides an environment for efficiently designing highly parallel, machine-independent software. Cluster-M is a generic model consisting of a collection of clusters such that each cluster represents a set of processors having a similar level of interconnectivity. For every multiprocessor architecture there exists a Cluster-M representation. This leads to a mapping mechanism which allows a single piece of software to be ported among various parallel computing systems.
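A sketch of deriving a multi-layer Cluster-M representation from a system graph by repeatedly merging directly connected clusters is given below; the merge rule and data structures are illustrative assumptions rather than the paper's algorithm.

```python
# Sketch: build clustering levels of a machine by merging directly connected
# clusters at each step until the whole system forms a single cluster.

def cluster_levels(adjacency):
    """adjacency: dict node -> set of neighbour nodes (undirected)."""
    clusters = [frozenset([n]) for n in adjacency]
    levels = [clusters]
    while len(clusters) > 1:
        merged, used = [], set()
        for c in clusters:
            if c in used:
                continue
            group = set(c)
            for d in clusters:
                if d is not c and d not in used and any(
                        v in adjacency[u] for u in c for v in d):
                    group |= d
                    used.add(d)
            used.add(c)
            merged.append(frozenset(group))
        if len(merged) == len(clusters):   # nothing merged (disconnected graph)
            break
        clusters = merged
        levels.append(clusters)
    return levels

# A 4-node ring: 0-1-2-3-0.
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
for i, level in enumerate(cluster_levels(ring)):
    print("level", i, [sorted(c) for c in level])
```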
parallel computing | 1997
Mary Mehrnoosh Eshaghian; Eugen Schenfeld
Optical interconnections have the potential of becoming an appealing alternative to electrical interconnections. For long and medium range distances (e.g., local area networks and telecommunication), optical technology (fibers) is the technology of choice, offering better performance and lower costs than electrical wires. There is a trend for optics to replace electronics for shorter distances and larger connectivity applications. This special issue follows recent emerging activity in this field as presented at recent conferences [1–3]. Optics has several important advantages over electrical interconnections. Optical interconnections are insensitive to radio wave interference effects, are free from transmission line capacitive loading, do not have geometrical planar constraints, and can be reconfigurable (circuit-switched). In an earlier publication [4], the computational limits in using optical communication technology in VLSI parallel processing systems were studied. Using a computational model called OMC, it was shown that communication-intensive problems can be solved efficiently because of the third dimension of connectivity gained through using free space optical interconnects. Since the mid-80s, there have been many contributions to the field of parallel computing with optical interconnects. Various forms of interconnections such as free-space and fiber optics have been considered for designing parallel architectures and algorithms [1–3]. With advances made in optical technology, it is evident that photonics will play an important role in parallel computation. This special issue presents a collection of the most recent contributions in this field. The papers have been grouped into five areas: optoelectronic architectures, routing, efficiency, memory, and applications. In each area there are two to three papers, totaling eight regular articles and four research notes.
Journal of Parallel and Distributed Computing | 1995
Song Chen; Mary Mehrnoosh Eshaghian; Richard F. Freund; Jerry L. Potter; Ying-Chieh Wu
In this paper, we evaluate two different programming paradigms for heterogeneous computing, Cluster-M and Heterogeneous Associative Computing (HAsC). These paradigms can efficiently support heterogeneous networks by preserving a level of abstraction without containing any architectural details. The paradigms are architecturally independent and scalable for various network and problem sizes. Cluster-M can be applied to both coarse-grained and fine-grained networks. Cluster-M provides an environment for porting heterogeneous tasks onto the machines in a heterogeneous suite such that resource utilization is maximized and the overall execution time is minimized. HAsC models a heterogeneous network as a coarse-grained associative computer. It is designed to optimize the execution of problems where the program size is small compared with the amount of data processed. Unlike other existing heterogeneous orchestration tools which are MIMD based, HAsC is for data-parallel SIMD associative computing. Ease of programming and execution speed are the primary goals of HAsC. We evaluate how these two paradigms can be used together to provide an efficient scheme for heterogeneous programming. Finally, their scalability issues are discussed.
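The mapping goal stated above, maximizing resource utilization while minimizing overall execution time, can be illustrated with a simple min-completion-time greedy heuristic; this is an expository stand-in, not Cluster-M's or HAsC's actual method, and the cost model is an assumption.

```python
# Illustrative sketch: assign tasks to machines of a heterogeneous suite so
# that the overall completion time (makespan) stays small.

def greedy_map(task_costs, machine_speeds):
    """task_costs: work per task; machine_speeds: relative speed per machine."""
    finish = [0.0] * len(machine_speeds)           # current load per machine
    assignment = []
    for work in sorted(task_costs, reverse=True):  # largest tasks first
        # Pick the machine that would finish this task earliest.
        best = min(range(len(machine_speeds)),
                   key=lambda m: finish[m] + work / machine_speeds[m])
        finish[best] += work / machine_speeds[best]
        assignment.append(best)
    return assignment, max(finish)

# Five tasks on one fast and two slow machines.
print(greedy_map([8, 6, 5, 3, 2], [2.0, 1.0, 1.0]))
```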
Concurrency and Computation: Practice and Experience | 1995
Song Chen; Mary Mehrnoosh Eshaghian
The paper presents a generic technique for mapping parallel algorithms onto parallel architectures. The proposed technique is a fast recursive mapping algorithm which is a component of the Cluster-M programming tool. The other components of Cluster-M are the Specification module and the Representation module. In the Specification module, for a given task specified by a high-level machine-independent program, a clustered task graph called Spec graph is generated. In the Representation module, for a given architecture or computing organization, a clustered system graph called Rep graph is generated. Given a task (or system) graph, a Spec (or Rep) graph can be generated using one of the clustering algorithms presented in the paper. The clustering is done only once for a given task graph (system graph) independent of any system graphs (task graphs). It is a machine-independent (application-independent) clustering, and therefore it is not repeated for different mappings. The Cluster-M mapping algorithm presented produces a sub-optimal matching of a given Spec graph containing M task modules, onto a Rep graph of N processors, in O(MN) time. This generic algorithm is suitable for both the allocation problem and the scheduling problem. Its performance is compared to other leading techniques. We show that Cluster-M produces better or similar results in significantly less time and using fewer or an equal number of processors as compared to the other known methods.
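A sketch of the one-time, machine-independent clustering step follows: heavily communicating task nodes are merged first, producing one clustering layer per merge round. This is a simplified stand-in for the paper's clustering algorithms, with assumed data structures.

```python
# Sketch: cluster a task graph by merging its heaviest communication edges
# first, recording the grouping after each merge as one clustering layer.

def cluster_task_graph(comm):
    """comm: dict {(u, v): weight} of undirected communication edges."""
    parent = {n: n for edge in comm for n in edge}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    layers = []
    for (u, v), _w in sorted(comm.items(), key=lambda kv: kv[1], reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            groups = {}
            for n in parent:
                groups.setdefault(find(n), []).append(n)
            layers.append([sorted(g) for g in groups.values()])
    return layers

# A chain a-b-c-d with edge weights 10, 4, 7.
print(cluster_task_graph({("a", "b"): 10, ("b", "c"): 4, ("c", "d"): 7}))
```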
Journal of Parallel and Distributed Computing | 1994
Mary Mehrnoosh Eshaghian; Sing Lee; Muhammad Shaaban
In this paper, we present an overview of the impact of optical technology on parallel image computing. We study a few efficient and simple optical organizations for a set of preprocessing tasks such as texture analysis, histogramming, edge detection, dilation, and contraction. Based on a generic parallel model of computation with optical interconnects called OMC, we then discuss a set of parallel architectures and algorithms for fine grain intermediate vision processing. These include optimal solutions to problems such as connectivity and proximity using massively parallel optical arrays. In conclusion, we concentrate on higher level image understanding issues such as feature extraction and pattern recognition.
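One of the fine-grain preprocessing tasks mentioned, binary dilation, can be sketched in a data-parallel style where each pixel conceptually acts as its own processor reading its 4-neighbourhood; the array layout is an assumption, and the optical OMC model itself is not modelled here.

```python
# Sketch of binary dilation written so that, conceptually, every pixel
# computes an OR-reduction over its neighbourhood simultaneously.

def dilate(image):
    """image: list of rows of 0/1 values; returns the dilated image."""
    h, w = len(image), len(image[0])

    def neighbourhood(y, x):
        yield image[y][x]
        if y > 0:     yield image[y - 1][x]
        if y < h - 1: yield image[y + 1][x]
        if x > 0:     yield image[y][x - 1]
        if x < w - 1: yield image[y][x + 1]

    return [[max(neighbourhood(y, x)) for x in range(w)] for y in range(h)]

img = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
print(dilate(img))   # the single foreground pixel grows into a plus shape
```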
international parallel processing symposium | 1993
Mary Mehrnoosh Eshaghian; Muhammad Shaaban
Cluster-M is a new parallel programming paradigm for designing portable software. The two main components of this paradigm are Cluster-M specifications and Cluster-M representations. Cluster-M specifications are high-level, machine-independent parallel code which is mapped onto Cluster-M representations, i.e., system graphs representing the topologies of the underlying architectures. An algorithm for generating Cluster-M representations is presented. Also, a set of high-level constructs essential for writing Cluster-M specifications is shown. Using these components, an efficient methodology is proposed for mapping parallel algorithms onto architectures.
Proceedings Heterogeneous Computing Workshop | 1994
Song Chen; Mary Mehrnoosh Eshaghian; Richard F. Freund; Jerry L. Potter; Ying-Chieh Wu
We study the heterogeneous use of two programming paradigms for heterogeneous computing called Cluster-M and HAsC. Both paradigms can efficiently support heterogeneous networks by preserving a level of abstraction which does not include any architecture mapping details. Furthermore, they are both machine independent and hence are scalable. Unlike almost all existing heterogeneous orchestration tools, which are MIMD based, HAsC is based on the fundamental concepts of SIMD associative computing. HAsC models a heterogeneous network as a coarse-grained associative computer and is designed to optimize the execution of problems with large ratios of computations to instructions. Ease of programming and execution speed, not the utilization of idle resources, are the primary goals of HAsC. On the other hand, Cluster-M is a generic technique that can be applied to both coarse-grained as well as fine-grained networks. Cluster-M provides an environment for porting various tasks onto the machines in a heterogeneous suite such that resource utilization is maximized and the overall execution time is minimized. We illustrate how these two paradigms can be used together to provide an efficient medium for heterogeneous programming. Finally, their scalability is discussed.
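The associative-computing idea behind HAsC, selecting records by content across all cells and then operating on every responder at once, can be sketched as follows; the record layout and field names are illustrative assumptions.

```python
# Sketch of content-addressable (associative) selection: every "cell" holds
# one record and compares against a broadcast pattern in parallel; the same
# operation is then applied to all responders at once.

cells = [
    {"site": "A", "load": 0.9, "kind": "mesh"},
    {"site": "B", "load": 0.2, "kind": "hypercube"},
    {"site": "C", "load": 0.4, "kind": "mesh"},
]

# Step 1: broadcast a search pattern; conceptually all cells compare at once.
responders = [c for c in cells if c["kind"] == "mesh" and c["load"] < 0.5]

# Step 2: perform the same operation on all responders simultaneously.
for c in responders:
    c["assigned"] = True

print(responders)   # only site C matches and is assigned work
```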
1993 Computer Architectures for Machine Perception | 1993
Mary Mehrnoosh Eshaghian; J.G. Nash; Muhammad Shaaban; David B. Shu
The authors present a set of heterogeneous algorithms for computer vision tasks using the Image Understanding Architecture (IUA). The full-scale IUA, developed jointly by Hughes Research Labs and the University of Massachusetts at Amherst, is a multiple-level heterogeneous architecture. Each level is constructed to perform the tasks most suitable to its mode of processing. The lowest level, called CAAPP, is an SIMD bit-serial mesh. The second level is an MIMD organization of numerically powerful digital signal processing chips. At the top level there is a smaller number of MIMD general-purpose processors. The authors propose a set of algorithms utilizing multiple levels of this organization concurrently. The problems studied include Hough transform line detection, finding geometric properties of images, and high-level image understanding tasks such as object matching.
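The Hough-transform line detection step can be sketched in its standard (rho, theta) accumulator form; on the IUA the fine-grain CAAPP level would update the accumulator for all edge pixels in parallel, whereas this sequential version only illustrates the computation itself.

```python
# Sketch of Hough-transform line detection: each edge point votes for every
# line (theta, rho) passing through it; the best-voted line is returned.

import math

def hough_lines(edge_points, width, height, n_theta=180):
    max_rho = int(math.hypot(width, height))
    # accumulator[theta_index][rho + max_rho] counts votes for each line.
    acc = [[0] * (2 * max_rho + 1) for _ in range(n_theta)]
    for (x, y) in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[t][rho + max_rho] += 1
    # Return the (theta, rho) pair with the most votes.
    t, r = max(((t, r) for t in range(n_theta) for r in range(len(acc[t]))),
               key=lambda tr: acc[tr[0]][tr[1]])
    return math.pi * t / n_theta, r - max_rho

# Points on the vertical line x = 2 yield theta = 0, rho = 2.
print(hough_lines([(2, 0), (2, 1), (2, 2), (2, 3)], width=8, height=8))
```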