Urs A. Muller
ETH Zurich
Publication
Featured research published by Urs A. Muller.
IEEE Transactions on Neural Networks | 1995
Urs A. Muller; Anton Gunzinger; Walter Guggenbuhl
This paper describes the implementation of a fast neural net simulator on a novel parallel distributed-memory computer. A 60-processor system, named MUSIC (multiprocessor system with intelligent communication), is operational and runs the backpropagation algorithm at a speed of 330 million connection updates per second (continuous weight update) using 32-b floating-point precision. This is equal to 1.4 Gflops sustained performance. The complete system with 3.8 Gflops peak performance consumes less than 800 W of electrical power and fits into a 19-in rack. While reaching the speed of modern supercomputers, MUSIC can still be used as a personal desktop computer at a researcher's own disposal. In neural net simulation, this gives a single user a computing performance that was unthinkable before. The system's real-time interfaces make it especially useful for embedded applications.
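A quick sanity check relating the two figures quoted in this abstract. The flops-per-update ratio below is derived from those numbers, not stated in the paper; it assumes sustained flops = connection updates per second times flops per connection update.

```python
# Figures quoted in the abstract for the 60-processor MUSIC system.
cups = 330e6             # 330 million connection updates per second
sustained_flops = 1.4e9  # 1.4 Gflops sustained

# Derived (assumption: sustained flops = CUPS x flops per update).
flops_per_update = sustained_flops / cups
print(f"{flops_per_update:.1f} flops per connection update")  # ~4.2
```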
international symposium on microarchitecture | 1992
Urs A. Muller; Bernhard Baumle; Peter Kohler; Anton Gunzinger; Walter Guggenbuhl
MUSIC, a digital signal processor (DSP)-based system with a parallel distributed-memory architecture that provides enormous computing power yet retains the flexibility of a general-purpose computer, is discussed. It is shown that MUSIC reaches a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers. The MUSIC system hardware, programming, and backpropagation implementation are described.
international conference on application specific array processors | 1992
Anton Gunzinger; Urs A. Muller; W. Scott; Bernhard Baumle; Peter Kohler; Walter Guggenbuhl
This paper describes a parallel distributed computer architecture called MUSIC (multi signal processor system with intelligent communication). A single processor element (PE) consists of a Motorola DSP96002 (60 MFlops), program and data memory, and a fast, independent communication interface; all communication interfaces are connected through a communication ring. A system with 30 processor elements (PEs) is operational. It has a peak performance of 1.8 GFlops and an electrical power consumption of about 350 W (including forced-air cooling), and it fits into a 19-inch rack. The hardware price of this system is 40000 US$, which means a selling price of approximately 200000 US$. Beside the well-known Mandelbrot algorithm (601 MFlops sustained), two real applications are at the moment successfully implemented on the system: the backpropagation algorithm for neural net learning, which results in a peak performance of 150 MCUPS (million connection updates per second), equal to 900 MFlops sustained, and the molecular dynamics simulation program MD-Atom (443 MFlops sustained). Other applications of the system are in digital signal processing and finite element computation.
international symposium on neural networks | 1993
Urs A. Muller; Anton Gunzinger; Walter Guggenbuhl
international symposium on neural networks | 1994
Urs A. Muller; Anton Gunzinger
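Two sanity checks on the 30-PE MUSIC figures quoted above. Both relationships are derived from the quoted numbers (peak = PEs times per-PE peak; flops per update = sustained flops over CUPS) and are not stated explicitly in the abstract.

```python
# Figures quoted for the 30-PE MUSIC system.
num_pes = 30
mflops_per_pe = 60                  # DSP96002 peak, per the abstract
peak_gflops = num_pes * mflops_per_pe / 1000
print(peak_gflops)                  # 1.8, matching the quoted peak

# Backpropagation figures: 150 MCUPS at 900 MFlops sustained.
mcups = 150
sustained_mflops = 900
print(sustained_mflops / mcups)     # 6.0 flops per connection update
```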
parallel computing | 1996
Anton Gunzinger; Bernhard Baumle; Martin Frey; M. Klebl; Michael Kocheisen; Peter Kohler; R. Morel; Urs A. Muller; Matthias Rosenthal
A neural net simulation platform is described. It is implemented on the MUSIC (Multiprocessor System with Intelligent Communication) parallel distributed-memory computer, which is able to run simulations at supercomputer speed and yet is very flexible and easy to use. The low price, power consumption, and size allow the use of the system as a desktop supercomputer which does not have to be shared among a large community of users. The programming environment is powerful enough to make the implementation of data-parallel algorithms not essentially more difficult than on a single-processor environment and not more complicated than on a modern vector-processor-based supercomputer, as is confirmed by practical examples.
arXiv: Cryptography and Security | 2017
Ferdinand Brasser; Urs A. Muller; Alexandra Dmitrienko; Kari Kostiainen; Srdjan Capkun; Ahmad-Reza Sadeghi
Parallel computers seem to be ideal for speeding up simulations of neural nets in experimental research. However, only few of these systems are yet used for practical applications. This paper discusses some of the reasons and describes a successfully applied implementation on the MUSIC parallel supercomputer (Multiprocessor System with Intelligent Communication).
Archive | 2001
Michael Kocheisen; Urs A. Muller
At the Electronics Laboratory of the Swiss Federal Institute of Technology (ETH) in Zurich, the high-performance parallel supercomputer MUSIC (Multiprocessor System with Intelligent Communication) has been developed. As applications like neural network simulation and molecular dynamics show, its performance is on par with that of conventional supercomputers, while electrical power requirements are reduced by a factor of 1,000, weight by a factor of 400, and price by a factor of 100. Software development is a key issue for such parallel systems. This article focuses on the programming environment of the MUSIC system and on its applications.
arXiv: Cryptography and Security | 2017
Ferdinand Brasser; Srdjan Capkun; Alexandra Dmitrienko; Tommaso Frassetto; Kari Kostiainen; Urs A. Muller; Ahmad-Reza Sadeghi
neural information processing systems | 1993
Urs A. Muller; Michael Kocheisen; Anton Gunzinger