
Publications


Featured research published by Sudip S. Dosanjh.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2010

Exascale computing technology challenges

John Shalf; Sudip S. Dosanjh; John Morrison

High Performance Computing architectures are expected to change dramatically in the next decade as power and cooling constraints limit increases in microprocessor clock speeds. Consequently, computer companies are dramatically increasing on-chip parallelism to improve performance. The traditional doubling of clock speeds every 18-24 months is being replaced by a doubling of cores or other parallelism mechanisms. During the next decade the amount of parallelism on a single microprocessor will rival the number of nodes in early massively parallel supercomputers that were built in the 1980s. Applications and algorithms will need to change and adapt as node architectures evolve. In particular, they will need to manage locality to achieve performance. A key element of the strategy as we move forward is the co-design of applications, architectures and programming environments. There is an unprecedented opportunity for application and algorithm developers to influence the direction of future architectures so that they meet DOE mission needs. This article will describe the technology challenges on the road to exascale, their underlying causes, and their effect on the future of HPC system design.
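The abstract's projection can be checked with a quick back-of-the-envelope calculation. A minimal sketch of the doubling argument (the starting core count and dates are illustrative assumptions, not figures from the paper):

```python
# Projected growth in on-chip parallelism if core counts double
# every 18-24 months over a decade. All starting values are
# illustrative assumptions, not data from the article.
def projected_cores(start_cores, years, doubling_period_years):
    """Exponential growth: start_cores * 2**(years / doubling_period)."""
    return start_cores * 2 ** (years / doubling_period_years)

# Starting from a hypothetical 8-core chip in 2010:
low  = projected_cores(8, 10, 2.0)   # doubling every 24 months -> 256 cores
high = projected_cores(8, 10, 1.5)   # doubling every 18 months -> ~813 cores
print(low, high)
```

Even under these rough assumptions, a decade of doubling puts a single chip in the range of several hundred cores, comparable to the node counts of the roughly 1000-node massively parallel machines of the late 1980s that the abstract alludes to.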


IEEE International Conference on High Performance Computing, Data, and Analytics | 2009

IESP Exascale Challenge: Co-Design of Architectures and Algorithms

Al Geist; Sudip S. Dosanjh

There is a large gap between the peak performance of supercomputers and the actual performance realized by today's algorithms. This architecture-algorithm performance gap will get even wider with the increase in computing power being driven by a rapid escalation in the number of cores incorporated into a single chip rather than increases in the clock rate. In order to improve the effectiveness of peta- and exascale systems we need to have a paradigm shift where architectures and algorithms are co-designed.


International Journal of Distributed Systems and Technologies | 2010

On the Path to Exascale

Brian W. Barrett; Ron Brightwell; Sudip S. Dosanjh; Al Geist; Scott Hemmert; Michael A. Heroux; Doug Kothe; Richard C. Murphy; Jeff Nichols; Ron A. Oldfield; Arun Rodrigues; Jeffrey S. Vetter; Ken Alvin

There is considerable interest in achieving a 1000-fold increase in supercomputing power in the next decade, but the challenges are formidable. In this paper, the authors discuss some of the driving science and security applications that require Exascale computing: a million trillion operations per second. Key architectural challenges include power, memory, interconnection networks and resilience. The paper summarizes ongoing research aimed at overcoming these hurdles. Topics of interest are architecture-aware and scalable algorithms, system simulation, 3D integration, new approaches to system-directed resilience and new benchmarks. Although significant progress is being made, a broader international program is needed.
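The magnitudes in the abstract are consistent with each other, which a trivial sanity check confirms:

```python
# Exascale arithmetic: 10^18 operations per second is both
# "a million trillion" and a 1000-fold jump from petascale (10^15).
peta = 1e15                     # petascale: 10^15 ops/s
exa = 1e18                      # exascale:  10^18 ops/s
million_trillion = 1e6 * 1e12   # a million trillion

assert exa == million_trillion
assert exa / peta == 1000.0
```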


International Journal of Distributed Systems and Technologies | 2010

The Red Storm Architecture and Early Experiences with Multi-Core Processors

Ron Brightwell; William J. Camp; Sudip S. Dosanjh; Suzanne M. Kelly; John M. Levesque; Paul Lin; Vinod Tipparaju; James L. Tomkins

The Red Storm architecture, which was conceived by Sandia National Laboratories and implemented by Cray, Inc., has become the basis for the most successful line of commercial supercomputers in history. The success of the Red Storm architecture is due largely to the ability to effectively and efficiently solve a wide range of science and engineering problems. The Cray XT series of machines that embody the Red Storm architecture have allowed for unprecedented scaling and performance of parallel applications spanning many areas of scientific computing. This paper describes the fundamental characteristics of the architecture and its implementation that have enabled this success, even through successive generations of hardware and software.


International Conference on Hardware/Software Codesign and System Synthesis | 2010

Hardware/software co-design for high performance computing: challenges and opportunities

X. Sharon Hu; Richard C. Murphy; Sudip S. Dosanjh; Kunle Olukotun; Stephen W. Poole

This special session aims to introduce to the hardware/software codesign community the challenges and opportunities in designing high performance computing (HPC) systems. Though embedded system design and HPC system design have traditionally been considered two separate areas of research, they in fact share many common features, especially as CMOS devices continue along their scaling trends and the HPC community hits hard power and energy limits. Understanding the similarities and differences between the design practices adopted in the two areas will help bridge the two communities and lead to design tool developments benefiting both communities.


International Journal of Heat and Fluid Flow | 1989

Melting and refreezing of porous media

Sudip S. Dosanjh

During severe nuclear reactor accidents like the one at Three Mile Island, the fuel rods can fragment and thus convert the reactor core into a large rubble bed composed primarily of UO2 and ZrO2 particles. In the present study a one-dimensional model is developed for the melting and refreezing of such a bed. The analysis includes mass conservation equations for the species of interest (UO2 and ZrO2); a momentum equation that represents a balance among drag, capillary and gravity forces; an energy equation that incorporates the effects of convection by the melt, radiation and conduction through the bed, and internal heat generation; and a UO2-ZrO2 phase diagram. A few key results are that (1) capillary forces are only important in beds composed of particles smaller than a few millimeters in diameter, and in such beds melt relocates both upward and downward until it freezes, forming crusted regions above and below the melt zone; (2) as melt flows downward and freezes, a flow blockage forms near the bottom of the bed, and the location of this blockage is determined by the bottom thermal boundary layer thickness; (3) the maximum thickness of the lower crust increases linearly with the height of the bed; and (4) deviations from initially uniform composition profiles occur because ZrO2 is preferentially melted, and these deviations decrease as the initial ZrO2 concentration is increased.
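Result (1) can be rationalized with an order-of-magnitude estimate: capillary forces rival gravity only for particles smaller than the capillary length l_c = sqrt(sigma / (rho * g)). A minimal sketch, where the melt property values are illustrative assumptions and not data from the study:

```python
import math

# Order-of-magnitude check of result (1): capillary forces matter only
# for particle diameters below the capillary length of the melt.
# Property values are rough illustrative assumptions for a UO2-rich
# melt, not figures taken from the paper.
sigma = 0.45   # surface tension of the melt, N/m (assumed)
rho = 8700.0   # melt density, kg/m^3 (assumed)
g = 9.81       # gravitational acceleration, m/s^2

l_c = math.sqrt(sigma / (rho * g))   # capillary length, m
print(f"capillary length ~ {l_c * 1e3:.1f} mm")
```

With these assumed properties the capillary length comes out at a couple of millimeters, consistent with the abstract's statement that capillary effects matter only for particles "smaller than a few millimeters in diameter."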


EuroMPI'11: Proceedings of the 18th European MPI Users' Group Conference on Recent Advances in the Message Passing Interface | 2011

Achieving exascale computing through hardware/software co-design

Sudip S. Dosanjh; Richard F. Barrett; Michael A. Heroux; Arun Rodrigues

Several recent studies discuss potential Exascale architectures, identify key technical challenges and describe research that is beginning to address several of these challenges [1,2]. Co-design is a key element of the U.S. Department of Energy’s strategy to achieve Exascale computing [3]. Architecture research is needed but will not, by itself, meet the energy, memory, parallelism, locality and resilience hurdles facing the HPC community; system software and algorithmic innovation is needed as well. Since both architectures and software are expected to evolve significantly, there is potential to use the co-design methodology that has been developed by the embedded computing community. A new co-design methodology for high performance computing is needed.


Hawaii International Conference on System Sciences | 2002

Developing a flexible system-modeling environment for engineers

David R. Gardner; Joseph Pete Castro; Paul N Demmie; Mark A. Gonzales; Gary L. Hennigan; Michael F. Young; Sudip S. Dosanjh

We are developing a module-oriented, multiphysics, mixed-fidelity system simulation environment that will enable engineers to rapidly analyze the performance of a system and to optimize its design. In the environment, physical components of the system are represented by software components and are linked by ports that transfer and transform data between them. The model fidelity in a composite module may be specified independently; e.g., one composite module may have a parametric model and another may have a three-dimensional finite-element model. In a prototype of the environment, users can specify thermal radiation models for each system component, embed electrical circuits in each component, and set the external conditions for the system. During the simulation, users can monitor the thermal and electrical behavior of the system. The latest software design for the environment promises greater flexibility in extending the environment for analyzing and optimizing a variety of complex systems.
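The component-and-port idea described above can be sketched in a few lines. This is not the authors' actual framework; all class and method names are hypothetical, illustrating only the pattern of modules linked by ports that transform data in flight:

```python
# Minimal sketch of modules linked by ports that transfer and
# transform data between them. Names are hypothetical, not taken
# from the environment described in the paper.
class Port:
    def __init__(self, transform=lambda x: x):
        self.transform = transform   # optional unit/format conversion
        self.target = None

    def connect(self, module):
        self.target = module

    def send(self, data):
        # Transform data in flight, then deliver to the connected module.
        if self.target is not None:
            self.target.receive(self.transform(data))

class Module:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, data):
        self.inbox.append(data)

# Two modules of possibly different fidelity exchange a temperature;
# the port converts Celsius to Kelvin between them.
heater = Module("heater")
sensor = Module("sensor")
out = Port(transform=lambda t_c: t_c + 273.15)
out.connect(sensor)
out.send(25.0)
print(sensor.inbox)
```

The key design point is that the port, not the modules, owns the data transformation, so a parametric model and a finite-element model can be wired together without either knowing the other's internal representation.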


International Conference on Computer-Aided Design | 2012

Toward codesign in high performance computing systems

Richard F. Barrett; Sudip S. Dosanjh; Michael A. Heroux; Xiaobo Sharon Hu; Steven G. Parker; John Shalf

Preparations for exascale computing have led to the realization that computing environments will be significantly different from those that provide petascale capabilities. This change is driven by energy constraints, which have compelled hardware architects to design systems that will require a significant re-thinking of how application algorithms are selected and implemented. The “codesign” principle may offer a common basis for application and system developers as well as architects to work synergistically towards achieving exascale computing. This paper aims to introduce to the embedded system design community the unique challenges and opportunities as well as exciting developments in exascale HPC system codesign. Given the success of adopting codesign practices in the embedded system design area, this effort should be mutually beneficial to both communities.


ICNAAM 2010: International Conference of Numerical Analysis and Applied Mathematics 2010 | 2010

Co‐design for High Performance Computing

Arun Rodrigues; Sudip S. Dosanjh; Scott Hemmert

Co-design has been identified as a key strategy for achieving Exascale computing in this decade. This paper describes the need for co-design in High Performance Computing, related research in embedded computing, and the development of hardware/software co-simulation methods.

Collaboration


Top co-authors of Sudip S. Dosanjh:

- Michael A. Heroux (Sandia National Laboratories)
- Arun Rodrigues (Sandia National Laboratories)
- Al Geist (Oak Ridge National Laboratory)
- David E. Womble (Sandia National Laboratories)
- James L. Tomkins (Sandia National Laboratories)
- John Shalf (Lawrence Berkeley National Laboratory)
- Richard C. Murphy (Sandia National Laboratories)
- Richard F. Barrett (Sandia National Laboratories)
- Ron A. Oldfield (Sandia National Laboratories)
- Ron Brightwell (Sandia National Laboratories)