
Publication


Featured research published by Iwona Sakrejda.


Scientific Cloud Computing | 2011

Magellan: experiences from a science cloud

Lavanya Ramakrishnan; Piotr T. Zbiegel; Scott Campbell; Rick Bradshaw; Richard Shane Canon; Susan Coghlan; Iwona Sakrejda; Narayan Desai; Tina Declerck; Anping Liu

Cloud resources promise to be an avenue to address new categories of scientific applications, including data-intensive science applications, on-demand/surge computing, and applications that require customized software environments. However, there is limited understanding of how to operate and use clouds for scientific applications. Magellan, a project funded through the Department of Energy's (DOE) Advanced Scientific Computing Research (ASCR) program, is investigating the use of cloud computing for science at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). In this paper, we detail the experiences to date at both sites and identify the gaps and open challenges from both the resource-provider and the application perspective.


Measurement and Modeling of Computer Systems | 2012

Evaluating Interconnect and Virtualization Performance for High Performance Computing

Lavanya Ramakrishnan; Shane Canon; Krishna Muriki; Iwona Sakrejda; Nicholas J. Wright

Scientists are increasingly considering cloud computing platforms to satisfy their computational needs. Previous work has shown that virtualized cloud environments can have a significant performance impact. However, there is still a limited understanding of the nature of the overheads and of the types of applications that might do well in these environments. In this paper we detail benchmarking results that characterize the virtualization overhead and its impact on performance. We also examine the performance of various interconnect technologies with a view to understanding the performance impact of these choices. Our results show that virtualization can have a significant impact upon performance, with at least a 60% performance penalty. We also show that less capable interconnect technologies can have a significant impact upon the performance of typical HPC applications. We also evaluate the performance of the Amazon Cluster Compute instance and show that it performs approximately equivalently to a 10G Ethernet cluster at low core counts.
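
The overhead figures above come from running the same benchmark on bare-metal and virtualized platforms and comparing wall-clock times. As a rough illustration of that arithmetic, here is a minimal Python sketch; the configuration names and runtimes below are invented for the example, not the paper's measurements.

# Illustrative only: relative slowdown of a virtualized run against a
# bare-metal baseline. All runtimes are hypothetical.

def overhead_pct(baseline_s: float, measured_s: float) -> float:
    """Performance penalty relative to the baseline, in percent."""
    return 100.0 * (measured_s - baseline_s) / baseline_s

runs = {
    "bare metal + InfiniBand": 100.0,      # hypothetical wall-clock seconds
    "virtualized + 10G Ethernet": 160.0,   # a 60% penalty, as an example
}

baseline = runs["bare metal + InfiniBand"]
for config, seconds in runs.items():
    print(f"{config}: {seconds:.0f}s ({overhead_pct(baseline, seconds):+.0f}% vs. baseline)")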


Physics Letters B | 1992

Strangeness production at mid-rapidity in S + Pb collisions at 200 GeV/c per nucleon

E. Andersen; Peter D. Barnes; R. Blaes; M. Cherney; B. De La Cruz; G.E. Diebold; B. Dulny; C. Fernandez; G. Franklin; C. Garabatos; J.A. Garzon; W.M. Geist; D. Greiner; C. Gruhn; M. Hafidouni; J. Hrubec; Peter Graham Jones; E.G. Judd; J. P. M. Kuipers; M. Ladrem; P. Ladron de Guevara; G. Lovhoiden; J. MacNaughton; J. Mosquera; Z. Natkaniec; John Nelson; G. Neuhofer; W.C. Ogle; C. Pérez de los Heros; M. Plo

Abstract Experimental evidence is presented for a source of unusually high strangeness content located at mid-rapidity in 200 GeV/c per nucleon collisions of ³²S projectiles with a Pb target. The enhancement is not seen in p+Pb reactions measured in the same apparatus at the same energy. This source becomes dominant for central collisions.


IEEE International Conference on High Performance Computing, Data and Analytics | 2011

Evaluating interconnect and virtualization performance for high performance computing

Lavanya Ramakrishnan; Richard Shane Canon; Krishna Muriki; Iwona Sakrejda; Nicholas J. Wright

Scientists are increasingly considering cloud computing platforms to satisfy their computational needs. Previous work has shown that virtualized cloud environments can have a significant performance impact. However, there is still a limited understanding of the overheads and of the types of applications that might do well in these environments. In this paper, we detail benchmarking results that characterize the virtualization overhead and examine the performance of various interconnect technologies. Our results show that virtualization and less capable interconnect technologies can have a significant impact upon the performance of typical HPC applications. We also evaluate the performance of the Amazon Cluster Compute instances and show that they perform approximately equivalently to a 10G Ethernet cluster at low core counts.


Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | 1989

A TPC in the context of heavy-ion collisions

C. Garabatos; P. Álvarez de Lara; E. Andersen; P.D. Barnes; R. Blaes; Helmut Braun; J.-M. Brom; B. Castaño; M. Cherney; M. Cohler; B. De La Cruz; G.E. Diebold; C. Fernandez; G. B. Franklin; J.A. Garzon; W.M. Geist; D.E. Greiner; C.R. Gruhn; M. Hafidouni; M. Heiden; C.P. de los Heros; J. Hubrec; D. Huss; J.L. Jacquot; P.G. Jones; J.P.M. Kuipers; P. Ladron de Guevara; D. Liko; G. Lovhoiden; J. MacNaughton

Abstract The design of a TPC for a high-multiplicity environment is described. The sense plane is composed of a matrix with 40 rows of short anode wires. The end-cap is completed with rod and wire electrodes to establish the amplification region and to minimize space charge effects. The digital readout of more than 6000 channels is performed by a combination of purpose-built comparators and FASTBUS TDCs. The performance of the chamber is briefly described, and some ideas to improve the hit efficiency and resolution in a new end-cap design are discussed.
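
The TDC-based readout described above digitizes drift times; the coordinate along the drift direction then follows from the drift velocity. A minimal Python sketch of that standard TPC relation is below; the drift velocity and TDC bin width are generic assumptions, not NA36 calibration values.

# Generic TPC drift-time-to-position conversion; constants are assumed,
# not taken from the paper.

DRIFT_VELOCITY_CM_PER_US = 5.0   # typical of common drift gases (assumed)
TDC_BIN_NS = 10.0                # hypothetical TDC least count

def drift_coordinate_cm(tdc_counts: int, t0_ns: float = 0.0) -> float:
    """Convert a raw TDC count to a drift distance in centimetres."""
    drift_time_us = (tdc_counts * TDC_BIN_NS - t0_ns) / 1000.0
    return DRIFT_VELOCITY_CM_PER_US * drift_time_us

print(drift_coordinate_cm(400))  # 400 counts -> 4 us -> 20.0 cm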


Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | 2003

Hardware controls for the STAR experiment at RHIC

D. Reichhold; F. Bieser; M. Bordua; M. Cherney; J. Chrin; J.C. Dunlop; M.I. Ferguson; V. Ghazikhanian; J. Gross; G. Harper; M. Howe; S. Jacobson; Spencer R. Klein; P. Kravtsov; S. Lewis; J. Lin; C. Lionberger; G. LoCurto; C. McParland; T. S. McShane; J. Meier; Iwona Sakrejda; Z. Sandler; J. Schambach; Y. Shi; R.M. Willson; E. Yamamoto; W. M. Zhang

Abstract The STAR detector sits in a high-radiation area when operating normally; it was therefore necessary to develop a robust system to remotely control all hardware. The STAR hardware controls system monitors and controls approximately 14,000 parameters in the STAR detector. Voltages, currents, temperatures, and other parameters are monitored. Effort has been minimized by the adoption of experiment-wide standards and the use of pre-packaged software tools. The system is based on the Experimental Physics and Industrial Control System (EPICS) [1]. VME processors communicate with subsystem-based sensors over a variety of field buses, with High-level Data Link Control (HDLC) being the most prevalent. Other features of the system include interfaces to the accelerator and magnet control systems, a web-based archiver, and C++-based communication between STAR online, run control, and hardware controls and their associated databases. The system has been designed for easy expansion as new detector elements are installed in STAR.
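
EPICS exposes each monitored quantity as a named process variable that clients can read, write, and subscribe to. A minimal sketch of that pattern using the pyepics Python bindings follows; the process-variable name is hypothetical, since the abstract does not give STAR's actual channel names or client toolchain.

# Sketch of EPICS-style monitoring via pyepics; the PV name below is
# invented for illustration.

from epics import PV

def on_change(pvname=None, value=None, **kwargs):
    """Callback fired whenever the subscribed channel updates."""
    print(f"{pvname} changed to {value}")

voltage = PV("STAR:TPC:anodeVoltage")  # hypothetical process variable
voltage.add_callback(on_change)        # subscribe to monitor updates

reading = voltage.get()                # one-shot read (like caget)
voltage.put(1390.0)                    # write a setpoint (like caput)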


Nuclear Physics | 1992

Results from CERN experiment NA36 on strangeness production

E. Andersen; Peter D. Barnes; R. Blaes; Helmut Braun; J.-M. Brom; M. Cherney; M. Cohler; B. De La Cruz; G.E. Diebold; B. Dunin; B. Escobales; R. Fang; C. Fernandez; G. B. Franklin; C. Garabatos; J.A. Garzon; W.M. Geist; Alfredo de Jesús Gutiérrez Gómez; D. Greiner; C. Gruhn; M. Hafidouni; J. Hrubec; J.L. Jacquot; E. Jegham; Peter Graham Jones; J.P.M. Kuipers; M. Ladrem; P. Ladron de Guevara; D. Liko; S. Lopez-Ponte



Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | 1989

High-speed readout for a time projection chamber in the NA36 data acquisition system

M. Cherney; P. Álvarez de Lara; E. Andersen; P.D. Barnes; R. Blaes; Helmut Braun; J.-M. Brom; B. Castaño; M. Cohler; B. De La Cruz; G.E. Diebold; C. Fernandez; G. B. Franklin; C. Garabatos; J.A. Garzon; W.M. Geist; D.E. Greiner; C.R. Gruhn; M. Hafidouni; M. Heiden; C.P. de los Heros; J. Hubrec; D. Huss; J.L. Jacquot; P.G. Jones; J.P.M. Kuipers; P. Ladron de Guevara; D. Liko; G. Lovhoiden; J. MacNaughton

Abstract Experience with the NA36 high-speed FASTBUS-based data acquisition system is presented. The multicrate system operates with a 35 m cable segment and extensive local data compression and processing. The principal data sources are a time projection chamber and associated calorimetry. A special FASTBUS interface was developed for drift- and proportional-chamber readout.


Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | 2002

Distributed drift chamber design for rare particle detection in relativistic heavy ion collisions

R. Bellwied; M.J. Bennett; V. Bernardo; H. Caines; W. Christie; S. Costa; H. J. Crawford; M. Cronqvist; R. Debbe; R. Dinnwiddie; J. Engelage; I. Flores; R. Fuzesy; L. Greiner; T.J. Hallman; G. W. Hoffmann; H. Z. Huang; P. Jensen; E.G. Judd; K. Kainz; Morton Kaplan; S. Kelly; P. J. Lindstrom; W. J. Llope; G. LoCurto; R. S. Longacre; Z. Milosevich; J. T. Mitchell; J. W. Mitchell; E. Mogavero

This report describes a multi-plane drift chamber that was designed and constructed to function as a topological detector for the BNL AGS E896 rare-particle experiment. The chamber was optimized for good spatial resolution, two-track separation, and high, uniform efficiency while operating in a 1.6 T magnetic field and subjected to long-term exposure to an 11.6 GeV/nucleon beam of 10⁶ Au ions per second.


Archive | 1996

Strangeness production at RHIC energies as seen by the STAR detector

Iwona Sakrejda

In the search for the Quark Gluon Plasma, one of the most exciting tasks in nuclear physics today, experimentalists reach for ever higher energies in the hope of establishing the most favourable conditions for its creation. A new generation of experiments for the RHIC accelerator is under construction and will be ready to take data in 1999. The focus of the STAR detector, located in the Wide Angle Hall of the RHIC accelerator, is hadronic observables. This paper presents STAR's capabilities in the area of strangeness measurements and its ability to correlate them with other event characteristics.

Collaboration


Dive into Iwona Sakrejda's collaboration.

Top Co-Authors


M. Hafidouni

Centre national de la recherche scientifique


R. Blaes

Centre national de la recherche scientifique


C. Gruhn

Lawrence Berkeley National Laboratory


D. Greiner

Lawrence Berkeley National Laboratory
