Are Supercomputers Dead?


Summary

  • Supercomputers are highly specialized, extremely expensive, and power-intensive machines used for specific tasks like climate modeling.
  • Supercomputers require custom hardware and careful planning for optimal performance, costing hundreds of millions of dollars to build and maintain.
  • Despite the rise of data centers for distributed computing, supercomputers will likely continue to exist due to security needs and their role in driving computer development.


When was the last time you heard or read about “supercomputers”? These days it’s all about data centers and their number-crunching abilities, so what’s happening to the hypercars of the computer world?



Defining a Supercomputer

Before we can even talk about whether the supercomputer as a concept is on life support, I have to clarify what I mean when I say “supercomputer”.

These are machines that, obviously, offer far higher performance than the computers a typical individual or business would buy. However, this performance comes from being extremely specialized. Unlike the device you’re reading this article on, supercomputers aren’t general-purpose devices. They are carefully designed to do one job, or a set of closely-related jobs, as fast as possible.

Areas that are typically in the domain of supercomputers include climate modeling, simulating nuclear explosions, astrophysics, and numerous other tough problems that need the very best in computing power to solve.

The Downsides of Supercomputers

Side view of a Frontier cabinet.
Oak Ridge National Laboratory


Because of their custom architecture, massive scale, and relatively narrow set of uses, supercomputers can quickly turn into projects that run into hundreds of millions of dollars. The Frontier supercomputer, which was completed in 2022, is estimated to have cost around $600 million!

Modern supercomputers like Frontier use bog-standard, mass-produced CPUs and GPUs, but in the thousands or tens of thousands. The secret sauce of a modern supercomputer is how all those processors are physically connected to each other.

To minimize the performance loss caused by those interconnects, the hardware has to be carefully planned, and a lot of completely custom work has to be done. The physical hardware is only half the battle: the firmware and software also have to be finely tuned so the machine can actually deliver the processing power that’s theoretically available.
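To make the interconnect’s role a little more concrete, here’s a minimal sketch using MPI (via the mpi4py library), a common way of programming clustered machines where each process works on its own slab of data but has to swap boundary values with its neighbors. The array size, neighbor pattern, and file name are illustrative assumptions, not details of any real supercomputer.

```python
# Minimal sketch: every boundary exchange crosses the interconnect, so its
# latency and bandwidth directly limit how fast the whole machine can go.
# Run with e.g. `mpiexec -n 4 python ring.py` (file name is hypothetical).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID
size = comm.Get_size()   # total number of processes (think: nodes)

# Each process holds its own chunk of the problem...
local_data = np.full(1_000_000, rank, dtype=np.float64)

# ...but must regularly exchange data with its neighbors in a ring.
right = (rank + 1) % size
left = (rank - 1) % size
recv_buf = np.empty_like(local_data)

t0 = MPI.Wtime()
comm.Sendrecv(local_data, dest=right, recvbuf=recv_buf, source=left)
t1 = MPI.Wtime()

print(f"rank {rank}: exchanged {local_data.nbytes / 1e6:.1f} MB in {t1 - t0:.4f} s")
```

The more often a workload has to do exchanges like this, the more the quality of the interconnect, rather than the raw speed of each processor, decides overall performance.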

Supercomputers use immense amounts of power and need specialized maintenance and staffing. They take up entire floors of buildings (or have buildings built for them), and once a machine is built, it’s usually too expensive or complicated to alter it for tasks it wasn’t designed for.


How Data Centers Do It Better

A data center of a university with storage racks
Jason Dookeran/How To Geek | Leonardo AI

Data centers are buildings full of server computers. They are interconnected in largely standardized ways, though I don’t want to downplay the technical flex of how modern data centers are put together. Nonetheless, data centers are not designed for all the computers in them to work together like one big computer.

However, thanks to the development of modern GPUs, which are essentially miniature supercomputers with thousands of parallel processing elements, it’s possible to put an immense amount of processing power within each individual server blade.

This means that if you have a problem that can be broken into chunks and processed by individual computers within the data center, you don’t need a specialized computer at all. This is effectively distributed computing in the same vein as volunteer projects like the BOINC-based SETI@home or Folding@home, where regular people donate their unused CPU cycles to scientific research.
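Here’s a toy illustration of that “break it into chunks” idea in Python. Each chunk is processed independently, so the same pattern that runs on one machine’s CPU cores here could just as easily be spread across thousands of servers in a data center or volunteers’ PCs. The work_on_chunk function and chunk size are made up for the example.

```python
# Toy example of chunked, embarrassingly parallel work.
from multiprocessing import Pool

def work_on_chunk(chunk):
    # Stand-in for real work (e.g. scoring one protein fold or one sky region).
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunk_size = 100_000
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # Locally this uses one machine's CPU cores; a data center or a
    # BOINC-style network applies the same pattern across many machines.
    with Pool() as pool:
        partial_results = pool.map(work_on_chunk, chunks)

    print(sum(partial_results))
```

Because no chunk depends on any other, it doesn’t matter much how fast the machines can talk to each other, which is exactly why this kind of work doesn’t need a supercomputer’s exotic interconnect.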


The computers in a data center are, by and large, general purpose. So, one company could rent server blades in a data center to run a streaming service, and then once they no longer need them, those same computers can be repurposed for a different job, such as working on AI models.

This means that, while data centers are massively expensive in their own right, those who own and manage them can ensure those systems are working and generating revenue 24/7.

Supercomputers Are Probably Here to Stay

While most people who need a lot of computing power these days are likely better off renting capacity in a data center, I don’t think bespoke supercomputers are going anywhere. For one thing, data centers owned by third parties come with all sorts of security and privacy implications.

Supercomputers are therefore something every government will want, so that they can do sensitive, classified work with complete control over the safety of that data.


Also, there are some types of problems that you simply can’t break into independent chunks. Only massively parallel supercomputers, with their finely tuned interconnects, can handle these large problems holistically.

Perhaps most importantly, supercomputers don’t need a clear business case to exist. These machines are at the forefront of computer development. The hardware and software breakthroughs that computer scientists and engineers make to push supercomputers further and further benefit all computing down the line. So, from a pure research point of view, these massive computers are more than worth the resources, time, and money they demand.


