A supercomputer is a computer with extremely high computing power, far exceeding that of ordinary computers; its performance is usually measured in floating-point operations per second (FLOPS). Since 2022, there have been supercomputers capable of more than 10^18 FLOPS; machines of this class are called exascale supercomputers. By comparison, the performance of ordinary desktop computers generally ranges from hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13).
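The performance scales above can be made concrete with a short sketch. The desktop specification below is a hypothetical illustration (8 cores at 4 GHz with 16 floating-point operations per cycle), not a benchmark result; the peak-FLOPS formula (cores × clock × FLOPs per cycle) is the standard back-of-envelope estimate.

```python
# Rough comparison of the performance scales mentioned above (FLOPS).
# The desktop specs below are illustrative assumptions, not real benchmarks.

GIGA = 1e9   # gigaFLOPS
EXA = 1e18   # exaFLOPS (exascale threshold)

def peak_flops(cores, clock_hz, flops_per_cycle):
    """Theoretical peak performance: cores x clock x FLOPs per cycle."""
    return cores * clock_hz * flops_per_cycle

# Hypothetical desktop CPU: 8 cores at 4 GHz, 16 FLOPs/cycle.
desktop = peak_flops(8, 4e9, 16)
print(f"desktop peak: {desktop / GIGA:.0f} GFLOPS")       # hundreds of GFLOPS

# An exascale machine performs at least 1e18 FLOPS; ratio vs. the desktop:
print(f"exascale / desktop: {EXA / desktop:.1e}x")        # millions of times faster
```

This makes the scale gap in the article tangible: an exascale system is roughly a million times faster than such a desktop.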
As of November 2017, the world's 500 fastest supercomputers all run on Linux-based operating systems.
Supercomputers play an important role in the field of computational science and are used to handle a variety of computationally intensive tasks including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular simulation and physics simulation. Their speed and processing power also make them key players in cryptographic analysis research.
The concept of the supercomputer emerged in the 1960s, when Seymour Cray at Control Data Corporation (CDC) began designing higher-performance computers. Most of these early supercomputers were based on conventional designs but optimized to run faster. Over time, the introduction of multi-core processors and parallel computing brought supercomputer performance to a new level.
The first device considered a supercomputer was the Livermore Atomic Research Computer built by UNIVAC for the United States Naval Research and Development Center.
Supercomputer design has evolved over time, from the earliest high-performance designs to the vector processors of the 1970s. The Cray-1, launched in 1976, became one of the most successful supercomputers in history, using a liquid cooling system to solve the problem of overheating. As technology develops, parallel processing supercomputers have become mainstream, and these machines are usually composed of tens of thousands of commercial processors.
Supercomputing is not only about speed; the volume of data to be processed is also growing. Supercomputers face serious challenges in managing heat and energy consumption. For example, the Tianhe-1A supercomputer draws 4.04 MW of electricity, resulting in substantial hourly operating and cooling costs.
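The operating cost of that power draw can be estimated with simple arithmetic. The electricity price below is an assumed illustrative value, not a figure from the text; only the 4.04 MW power consumption comes from the source.

```python
# Back-of-envelope electricity cost for the Tianhe-1A power draw cited above.
# The price per kWh is an assumed illustrative value, not a real figure.

POWER_MW = 4.04            # Tianhe-1A power consumption (from the text)
PRICE_PER_KWH = 0.10       # assumed electricity price in USD/kWh (hypothetical)

power_kw = POWER_MW * 1000                 # 4.04 MW = 4040 kW
cost_per_hour = power_kw * PRICE_PER_KWH   # kWh consumed per hour x price
cost_per_year = cost_per_hour * 24 * 365   # continuous operation

print(f"hourly electricity cost: ${cost_per_hour:,.0f}")   # $404
print(f"yearly electricity cost: ${cost_per_year:,.0f}")   # $3,539,040
```

Even at this modest assumed rate, continuous operation costs millions of dollars per year before cooling overhead is counted, which is why energy efficiency dominates supercomputer design.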
Across the entire system, heat management has always been a central issue in supercomputer design: extremely high heat density degrades performance and shortens the lifespan of the machine.
Supercomputers excel in scientific research, especially in climate prediction and earth sciences. They are able to simulate a variety of environmentally relevant events, allowing researchers to better understand the impacts of climate change. In addition, the complex computing needs of the medical industry have also promoted the development of supercomputer technology, especially in protein structure prediction and drug design.
As the application scope of supercomputers continues to expand, future development will need to pay closer attention to energy management and computing efficiency. A growing number of projects use graphics processing units (GPUs) to accelerate computation, though their suitability for general-purpose algorithms still needs further exploration. In addition, advances in quantum computing have opened new avenues for computation.
Today's supercomputers have become an important pillar of scientific research. So what new possibilities will future supercomputers create in research and technology?