
Publications

Featured research published by Gita Alaghband.


Parallel Computing | 1989

Parallel pivoting combined with parallel reduction and fill-in control

Gita Alaghband

Parallel algorithms for the triangularization of large, sparse, unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins, and checks for numerical stability, all done in parallel with the work distributed over the active processes. The parallel pivoting technique uses a compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz numbers of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix; it is applied dynamically as the decomposition proceeds.
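The pivot-selection idea lends itself to a compact illustration. The Python sketch below is purely illustrative, not the paper's implementation: it uses a simplified compatibility test (pivots share no row or column) and greedily visits candidates in order of their Markowitz numbers to limit fill-in. All function and parameter names are assumptions for exposition.

```python
# Illustrative sketch: greedily build a set of mutually compatible pivots,
# ordered by Markowitz number to limit fill-in. The paper's compatibility
# relation and parallel implementation are richer than this.

def markowitz(nonzeros_in_row, nonzeros_in_col):
    """Markowitz number of a candidate pivot a[i][j]: an upper bound
    on the fill-in its elimination step can create."""
    return (nonzeros_in_row - 1) * (nonzeros_in_col - 1)

def select_compatible_pivots(candidates, row_counts, col_counts):
    """candidates: iterable of (i, j) positions that pass a numerical
    stability test. Returns pivots that share no row or column, so their
    elimination steps can proceed in parallel."""
    ranked = sorted(candidates,
                    key=lambda ij: markowitz(row_counts[ij[0]], col_counts[ij[1]]))
    used_rows, used_cols, pivot_set = set(), set(), []
    for i, j in ranked:
        if i not in used_rows and j not in used_cols:  # simplified compatibility check
            pivot_set.append((i, j))
            used_rows.add(i)
            used_cols.add(j)
    return pivot_set

# Example: (0, 0) and (2, 1) are compatible; (0, 1) shares row 0 and is skipped.
print(select_compatible_pivots([(0, 0), (0, 1), (2, 1)],
                               row_counts={0: 3, 2: 2},
                               col_counts={0: 2, 1: 4}))  # [(0, 0), (2, 1)]
```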


Parallel Computing | 1995

Parallel sparse matrix solution and performance

Gita Alaghband

A parallel solution to large sparse systems of linear equations is presented. The solution method is based on a parallel pivoting technique for LU decomposition on a shared-memory MIMD multiprocessor. Each application of the algorithm to the matrix generates several pivots for reducing the matrix in parallel. During parallel pivoting steps only symmetric permutations are possible; unsymmetric permutation for numerical stability, however, is possible during single pivoting steps. We report on switching between parallel and single pivoting steps to assure numerical stability. Once the matrix is decomposed, the parallel pivoting information is used to solve structurally identical matrices repeatedly. The algorithms, their implementation, and the performance of the solution methods on actual multiprocessors are presented. Based on the resulting triangular matrix structure, two algorithms for back substitution are presented and their performance is compared.
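As a point of reference for the back-substitution phase mentioned above, here is the standard sequential algorithm in Python. This is a dense, illustrative baseline only; the paper's two algorithms exploit the sparse triangular structure produced by the parallel decomposition and are not reproduced here.

```python
# Sequential back substitution for an upper-triangular system U x = y.
# Dense and illustrative only.

def back_substitute(U, y):
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # x[i] depends only on the already-computed entries x[i+1..n-1].
        s = y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / U[i][i]
    return x

# Example: solve a 3x3 upper-triangular system.
U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 2.0],
     [0.0, 0.0, 4.0]]
print(back_substitute(U, [9.0, 13.0, 8.0]))  # [2.0, 3.0, 2.0]
```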


Proceedings of the 2013 International Workshop on Data-Intensive Scalable Computing Systems | 2013

Novel parallel method for mining frequent patterns on multi-core shared memory systems

Lan Vu; Gita Alaghband

Frequent pattern mining is an important problem in data mining with many practical applications. Current parallel methods for mining frequent patterns perform unstably across different database types and under-utilize the benefits of multi-core shared-memory machines. We present ShaFEM, a novel parallel frequent pattern mining method, to address these issues. Our method dynamically adapts to the data characteristics to perform efficiently on both sparse and dense databases. Its lock-free parallel mining approach minimizes synchronization needs and maximizes data independence to enhance scalability. Its structure lends itself well to dynamic job scheduling, resulting in a well-balanced load on new multi-core shared-memory architectures. We evaluate ShaFEM on a 12-core multi-socket server and find that our method runs 2.1--5.8 times faster than the state-of-the-art parallel method. For some test cases, ShaFEM saves 4.9 days and 12.8 hours of execution time over the compared method.
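The adapt-then-mine structure can be sketched as follows. This is a minimal illustration assuming an invented density heuristic, an invented threshold, and placeholder miners; ShaFEM's actual strategies and lock-free data structures are described in the paper.

```python
# Hedged sketch of the adaptive idea: each worker picks a mining strategy
# per data subset based on how dense that subset looks. Shows structure
# only; Python threads will not actually speed up CPU-bound mining.

from concurrent.futures import ThreadPoolExecutor

def density(subset, num_items):
    """Fraction of possible item occurrences actually present (illustrative)."""
    return sum(len(t) for t in subset) / (len(subset) * num_items)

def mine_subset(subset, num_items, sparse_miner, dense_miner, threshold=0.5):
    miner = dense_miner if density(subset, num_items) >= threshold else sparse_miner
    return miner(subset)  # each worker mines independently: no locks needed

def mine_all(subsets, num_items, sparse_miner, dense_miner):
    with ThreadPoolExecutor() as pool:  # dynamic scheduling balances the load
        futures = [pool.submit(mine_subset, s, num_items, sparse_miner, dense_miner)
                   for s in subsets]
        return [f.result() for f in futures]

# Example with stand-in miners: first subset is sparse, second is dense.
sparse = lambda s: ("sparse_strategy", len(s))
dense = lambda s: ("dense_strategy", len(s))
print(mine_all([[{1}, {2}], [{1, 2, 3}, {1, 2, 3}]], 3, sparse, dense))
```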


Parallel Computing | 2014

Novel parallel method for association rule mining on multi-core shared memory systems

Lan Vu; Gita Alaghband

Highlights: ShaFEM is a novel association rule mining method for multi-core shared memory systems. It self-adapts to the data characteristics to run fast on both sparse and dense databases, uses two mining strategies and dynamically switches between them, and applies a new lock-free solution with a new data structure named the XFP-tree. ShaFEM is up to 5.8 times faster and uses up to 7.1 times less memory than the compared method.

Association rule mining (ARM) is an important task in data mining with many practical applications. Current methods for association rule mining have shown unstable performance across different database types and under-utilize the benefits of multi-core shared-memory machines. In this paper, we address these issues by presenting a novel parallel method for finding frequent patterns, the most computationally intensive phase of ARM. Our proposed method, named ShaFEM, combines two mining strategies and applies the most appropriate one to each data subset of the database to efficiently adapt to the data characteristics and run fast on both sparse and dense databases. In addition, our new lock-free design minimizes synchronization needs and maximizes data independence to enhance scalability. The new structure lends itself well to dynamic job scheduling, resulting in a well-balanced load on new multi-core shared-memory architectures. We have evaluated ShaFEM on 12-core multi-socket servers and found that our method runs up to 5.8 times faster and consumes up to 7.1 times less memory than the state-of-the-art parallel method. For some test cases, ShaFEM can save up to 4.9 days of execution time over the compared method.
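The lock-free aspect rests on data independence: workers write only to their own structures and merge results afterwards. Below is a minimal sketch of that pattern using frequency counting only; the XFP-tree construction itself is not reproduced, and all names here are illustrative.

```python
# Illustrative "lock-free via data independence" pattern: each worker counts
# item frequencies over its own partition (no shared writes), and partial
# counts are merged once, serially.

from collections import Counter
from concurrent.futures import ProcessPoolExecutor

def count_partition(transactions):
    counts = Counter()
    for t in transactions:
        counts.update(t)
    return counts

def frequent_items(partitions, min_support):
    with ProcessPoolExecutor() as pool:
        partials = pool.map(count_partition, partitions)  # independent work
    total = Counter()
    for c in partials:
        total.update(c)                                   # merge once, serially
    return {item for item, n in total.items() if n >= min_support}

if __name__ == "__main__":
    parts = [[{"a", "b"}, {"a", "c"}], [{"a"}, {"b", "c"}]]
    print(frequent_items(parts, min_support=2))  # {'a', 'b', 'c'}
```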


ACM Symposium on Parallel Algorithms and Architectures | 2006

Introducing the Hydra parallel programming system

Franklin E. Powers; Gita Alaghband

Hydra PPS is a collection of annotations, classes, a runtime, and a compiler designed to provide Java programmers with a fairly simple method of producing programs for Symmetric Multiprocessing (SMP) architectures. This paper introduces the basics of the new system, including the basic constructs of the programming language and the relationship between the Java VM, the compiler, the runtime, and the parallel program. Hydra exploits parallelism when the underlying architecture supports it and runs as a normal sequential Java program when it does not. Parallelism in Hydra is expressed through events; the system is easy to use, and programs run efficiently on parallel architectures.
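Hydra itself is a Java extension, so the snippet below is only a loose Python analogue of its contract: parallelism expressed through events, with a transparent fall-back to sequential execution when the host offers no parallel support. The EventSystem class and all its methods are invented for illustration, not Hydra's API.

```python
# Loose analogue of the event-based contract described above (invented names).

import os
from concurrent.futures import ThreadPoolExecutor

class EventSystem:
    def __init__(self):
        self.handlers = []
        self.parallel = (os.cpu_count() or 1) > 1  # sequential fall-back path

    def on_event(self, handler):
        self.handlers.append(handler)
        return handler

    def fire(self, payload):
        if self.parallel:
            with ThreadPoolExecutor() as pool:
                return list(pool.map(lambda h: h(payload), self.handlers))
        return [h(payload) for h in self.handlers]  # plain sequential run

events = EventSystem()

@events.on_event
def square(x):
    return x * x

@events.on_event
def cube(x):
    return x ** 3

print(events.fire(4))  # [16, 64] either way; only the execution mode differs
```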


International Journal of Electronics | 1997

Numerical modelling and characterization of high-frequency high-power high-temperature GaN/SiC heterostructure bipolar transistors

Hamid Z. Fardi; Gita Alaghband; Jacques I. Pankove

Device modelling is used in the characterization of GaN/SiC heterostructure bipolar transistors (HBTs) operating at high power, high frequency, and high temperature. The differential DC current gain was simulated to be constant over several orders of magnitude of emitter current and to decrease significantly with increasing temperature: the current gain fell from a maximum of 300 000 at room temperature to about 200 at 300°C. These simulated results are in agreement with experimental data obtained by others. Simulated results show a maximum cut-off frequency of 6 GHz for the actual device at a current density of 3000 A cm⁻². It is shown that high-temperature device modelling is essential in the design and optimization of GaN/SiC HBTs for high-power, high-frequency, high-temperature operation.


Scientific Programming | 1994

Overview of the Force scientific parallel language

Gita Alaghband; Harry F. Jordan

The Force, a parallel programming language designed for large-scale shared-memory multiprocessors, is presented. The language provides a number of parallel constructs as extensions to ordinary Fortran and is implemented as a two-level macro preprocessor to support portability across shared-memory multiprocessors. The global parallelism model on which the Force is based makes for a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that port to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed, and several programming examples illustrating parallel programming approaches with the Force are presented.
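The Force is Fortran-based, so the following Python sketch only mimics the behavior of one of its signature looping constructs, a self-scheduled parallel loop in which processes claim the next unclaimed iteration from a shared counter. All names here are illustrative.

```python
# Illustrative self-scheduled parallel loop: workers repeatedly claim the
# next iteration index under a lock, so each iteration runs exactly once.

import threading

def self_scheduled_loop(n_iterations, body, n_workers=4):
    lock = threading.Lock()
    next_i = [0]

    def claim():
        with lock:             # stands in for the Force's generic synchronization
            i = next_i[0]
            next_i[0] += 1
            return i

    def worker():
        while True:
            i = claim()
            if i >= n_iterations:
                return
            body(i)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Example: fill an array in parallel, one element per claimed iteration.
out = [0] * 10
self_scheduled_loop(10, lambda i: out.__setitem__(i, i * i))
print(out)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```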


Concurrency and Computation: Practice and Experience | 2008

The Hydra Parallel Programming System

Franklin E. Powers; Gita Alaghband

The Hydra Parallel Programming System, a new parallel language extension to Java, and its supporting software are described. It is a fairly simple yet powerful language designed to address a number of areas that have not received much attention. One of these areas is the recompilation of parallel programs at runtime, which allows a parallel program to adapt to the architecture it is executing on. The first version of this software system focuses on smaller Symmetric Multiprocessing and compatible architectures, which are becoming more common; the large community of Java programmers targeting this class of machines has few parallel programming options. Hydra programs run as sequential Java, without any modification, on machines that lack parallel support or an implemented Hydra runtime system. This paper describes the language, compares it with other languages (specifically with JOMP, an OpenMP implementation for Java), briefly discusses compiling and executing Hydra programs, presents sample benchmarks and their performance on three platforms, and concludes with a discussion of issues and future directions for Hydra.
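The runtime-adaptation idea can be suggested in a few lines of Python, though only as a loose analogue: Hydra performs actual recompilation of Java parallel programs, which is far beyond this sketch. Here the program inspects the host once and then commits to a parallel or sequential execution path; all names are invented.

```python
# Loose analogue of adapting to the host at run time rather than build time.

import os
from concurrent.futures import ProcessPoolExecutor

def choose_map():
    """Return a map() implementation suited to the current host."""
    cores = os.cpu_count() or 1
    if cores > 1:
        def parallel_map(fn, items):
            with ProcessPoolExecutor(max_workers=cores) as pool:
                return list(pool.map(fn, items))
        return parallel_map
    return lambda fn, items: list(map(fn, items))  # sequential fall-back

def work(x):
    return x * x

if __name__ == "__main__":
    run_map = choose_map()          # decided once, on this machine
    print(run_map(work, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```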


2014 World Congress on Computer Applications and Information Systems (WCCAIS) | 2014

An efficient approach for mining association rules from sparse and dense databases

Lan Vu; Gita Alaghband

Association rule mining (ARM) is an important task in data mining. The task is computationally intensive and requires large amounts of memory. Many existing methods for ARM perform efficiently on either sparse or dense data, but not both. We address this issue by presenting a new approach to ARM that runs fast on both sparse and dense databases: it detects the characteristics of data subsets in the database and applies a combination of two mining strategies, one for sparse data subsets and the other for dense ones. Two algorithms based on this approach, FEM and DFEM, are introduced in this paper. FEM applies a fixed threshold as the condition for switching between the two mining strategies, while DFEM adapts this threshold dynamically at runtime to best fit the characteristics of the database during the mining process, especially when the minimum support threshold is low. Additionally, we present optimization techniques for the proposed algorithms to speed up the mining process, reduce memory usage, and optimize I/O cost. We also analyze the performance of FEM and DFEM in depth and compare them with several existing algorithms. The experimental results show that FEM and DFEM achieve a significant improvement in execution time and consume less memory than many popular ARM algorithms, including the well-known Apriori, FP-growth, and Eclat, on both sparse and dense databases.
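The fixed-versus-dynamic threshold distinction between FEM and DFEM can be sketched directly. Note that the density estimate and the adaptation rule below are invented placeholders for illustration, not the authors' actual formulas.

```python
# Hedged sketch of the switching condition: FEM uses a fixed switch point,
# DFEM drifts the switch point toward the densities actually observed.

def fem_choose(density, fixed_threshold=0.1):
    """FEM: the sparse/dense switch point never changes."""
    return "dense_strategy" if density >= fixed_threshold else "sparse_strategy"

class DfemChooser:
    """DFEM analogue: nudge the threshold toward observed densities, so
    low minimum-support runs still split the work sensibly (placeholder rule)."""
    def __init__(self, threshold=0.1, rate=0.05):
        self.threshold = threshold
        self.rate = rate

    def choose(self, density):
        strategy = "dense_strategy" if density >= self.threshold else "sparse_strategy"
        self.threshold += self.rate * (density - self.threshold)  # invented update
        return strategy

chooser = DfemChooser()
for d in (0.02, 0.03, 0.4, 0.5):
    print(chooser.choose(d), round(chooser.threshold, 3))
```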


collaboration technologies and systems | 2012

High performance frequent pattern mining on multi-core cluster

Lan Vu; Gita Alaghband

Mining frequent patterns is a fundamental data mining task with numerous practical applications, such as consumer market-basket analysis, web mining, and network intrusion detection. When the database is large, executing this mining task on a personal computer is non-trivial because of the huge computation time and memory consumption. In our previous research, we proposed a novel algorithm named FEM, which is more efficient than well-known algorithms like Apriori, Eclat, or FP-growth at discovering frequent patterns in both dense and sparse databases. However, applying FEM to applications with large-scale databases requires new parallel algorithms based on FEM that deploy this mining task on high-performance computer systems. In this paper, we present a new method named PFEM that parallelizes the FEM algorithm for a cluster of multi-core machines. Our method lets each machine in the cluster execute an independent mining workload to improve scalability. Computations within a multi-core machine use a shared-memory model to reduce communication overhead and maintain load balance. By combining distributed-memory and shared-memory computational models, PFEM adapts well to large computer systems with many multi-core machines.
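The two-level structure of PFEM, independent per-node workloads with shared-memory workers inside each node, can be outlined as follows. This is simulated in a single process for illustration; the real system distributes the node-level shares across cluster machines, and all names here are assumptions.

```python
# Two-level sketch: split the pattern search space into independent per-node
# shares, then mine each share with shared-memory workers.

from concurrent.futures import ThreadPoolExecutor

def partition_workload(frequent_items, n_nodes):
    """Assign each frequent item's sub-search-space to a node, round-robin,
    so nodes need no communication while mining."""
    shares = [[] for _ in range(n_nodes)]
    for k, item in enumerate(frequent_items):
        shares[k % n_nodes].append(item)
    return shares

def node_mine(share, mine_item, n_threads=4):
    """Within a node: shared-memory workers process the node's items."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return [r for rs in pool.map(mine_item, share) for r in rs]

# Example with a stand-in miner that just tags its item.
shares = partition_workload(["a", "b", "c", "d", "e"], n_nodes=2)
results = [node_mine(s, lambda item: [f"patterns-from-{item}"]) for s in shares]
print(shares, results)
```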

Collaboration


Dive into Gita Alaghband's collaboration network.

Top Co-Authors

Lan Vu, University of Colorado Denver
Hamid Z. Fardi, University of Colorado Denver
David Gnabasik, University of Colorado Denver
Donald W. Mathis, University of Colorado Boulder
William J. Wolfe, University of Colorado Denver
Franklin E. Powers, University of Colorado Denver
Harry F. Jordan, University of Colorado Boulder
Alan Baxter, University of Colorado Denver
Bernardo Rodriguez, University of Colorado Boulder
C. Anderson, University of Colorado Denver