Abstract
The recent woes of the supercomputer industry and changes in federal funding have led some scientists to re-evaluate the means by which they hope to solve Grand Challenge problems. I evaluate the potential of Massively Parallel Processors (MPPs) in this context and assess the state of today's MPPs. I stress that MPPs are crucial for solving large-scale problems, and that it is essential to seek a balance between CPU performance, memory access time, inter-node communication, and I/O. Achieving this balance requires preserving certain characteristics of the hardware even while designing the machine around the fastest available processor. Finally, I emphasize that, for the long-term stability and growth of parallel computing, priority should be given to standardizing software so that the same code can run on different platforms, from clusters of workstations to MPPs.