Thomas Sterling is professor of intelligent systems engineering at Indiana University,
and one of his many roles is director of the “Continuum Computer Architecture Laboratory”, also at Indiana University.
Summit the most powerful

As an American, he was eager to point out that the Summit supercomputer, at the Oak Ridge National Laboratory in Tennessee, is currently the world’s most powerful supercomputer, after years of Chinese dominance. The maximum calculation speed of Summit is 200 petaflops, or 200 x 10^15 floating point operations per second. But he also pointed out that this speed is not much to brag about, partly because most scientific calculations are carried out on significantly smaller supercomputers located around the world, such as the Nordic supercomputers at NSC, and partly because the next generation of supercomputers is just around the corner, with speeds measured in exaflops (10^18 floating point operations per second).
“The first supercomputer, ENIAC, was installed around the time that I was born, and today my toaster is smarter than it was”, he stated, with inimitable American humour, and used extrapolation to show that an exaflop computer would be available in October 2024 if the rate of development continues at its present level.
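Sterling’s own extrapolation is not reproduced in the article, but the general form of such a projection can be sketched as follows. The doubling time and starting point below are assumptions chosen for illustration, not his figures.

```python
# A rough sketch (not Sterling's actual model) of extrapolating peak
# supercomputer performance under an assumed exponential growth rate.
from math import log2

start_flops = 200e15     # Summit's 200 petaflops (2018)
target_flops = 1e18      # 1 exaflop
doubling_years = 1.2     # assumed doubling time -- illustration only

# Years until the target is reached if peak performance doubles every
# `doubling_years` years: t = T * log2(target / start).
years_needed = doubling_years * log2(target_flops / start_flops)
print(f"Exaflop reached after roughly {years_needed:.1f} years")
```

With these assumed numbers the projection lands a few years after 2018, in the same ballpark as the October 2024 date quoted above.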
[Thomas Sterling, professor of intelligent systems engineering, Indiana University. Photo credit Göran Billeson]

But he also claimed that progress will be more rapid than the extrapolation predicts, because a paradigm shift is occurring. The first such shift, according to Sterling, occurred when Seymour Cray invented the multicoding technique, which makes parallel processing possible. Cray computers were the most powerful in the world from the middle of the 1970s and for several years afterwards. The next step occurred around 2000, when servers started to be combined into clusters.
“Eighty percent of all supercomputers today are in clusters”, he said.
von Neumann architecture

Until now, however, the architecture of supercomputers has been based on the principles developed by the mathematician John von Neumann in the 1940s, and used to construct ENIAC, which came online in 1946. To put it simply, this architecture is composed of separate parts: an arithmetic unit, memory, a controller and a user interface.
“So what’s wrong with the von Neumann architecture?” asked Thomas Sterling rhetorically, before immediately providing the answer himself: everything.
The first problem he describes is that 90% of the hardware in a supercomputer is needed to keep the other 10% working. Separating the logic circuits from the memory introduces the need for a hierarchy, registers and huge data flows between them.
“The transport pathways are too long and consume too much energy”, he claims.
Moore’s Law repealed

The second problem is that the calculations are carried out sequentially, one operation after another. Parallel processing helps, admittedly, but the calculations still take longer than they need to.
A further aspect is that computer development has until now followed what is known as “Moore’s Law”, named after Gordon Moore, co-founder of Intel, which states that the number of transistors on a chip doubles every two years. The components have now become so small that quantum phenomena, among other things, will prevent further development.
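The doubling rule described in the paragraph above can be written as a one-line formula. The starting count of one billion transistors is a hypothetical example, not a figure from the article.

```python
# Minimal illustration of Moore's Law: transistor count doubling
# every two years from an assumed starting point.
def transistors(start_count, years, doubling_period=2.0):
    """Projected transistor count after `years` under Moore's Law."""
    return start_count * 2 ** (years / doubling_period)

# Hypothetical example: a chip with 1 billion transistors today would
# be projected to hold 32 billion in ten years (five doublings).
print(transistors(1e9, 10))
```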
“Moore’s Law has been repealed. Stop working to make traditional processors faster: use the technology we have to combine memory and communication”, he says.
He suggests that machines with many small elements can be built, where the logic processors and the memory are united in one unit, with connections to neighbouring units.
[Niclas Andersson, Matts Karlsson and Thomas Sterling. Photo credit Göran Billeson]

“This will be a supercomputer that understands the task, controls itself, and can distribute the calculations to achieve the greatest possible efficiency. It would be the most powerful machine in the world, and the cost would be low. It’s possible”, he assures us.
Smaller and cheaper

And not only will the machine be cheaper and faster: it will also be smaller. The supercomputer that Thomas Sterling proposes will occupy 40 m² (whereas Summit requires more than 8,000 m²): it will have a maximum calculation speed of 1.05 exaflops, and will consume far less energy than current supercomputers. He has drawn up a development plan according to which it will be ready in three years, in 2022.
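Using only the figures quoted in the article, a back-of-the-envelope comparison of computational density (peak speed per square metre of floor space) looks like this. The variable names are mine, and Summit’s area is taken as the 8,000 m² quoted above.

```python
# Back-of-the-envelope comparison using only the figures quoted in
# the article: peak speed per square metre of floor space.
summit_flops, summit_area = 200e15, 8000       # 200 petaflops, ~8,000 m2
proposed_flops, proposed_area = 1.05e18, 40    # 1.05 exaflops, 40 m2

density_summit = summit_flops / summit_area        # flops per m2
density_proposed = proposed_flops / proposed_area  # flops per m2

print(f"Proposed design: ~{density_proposed / density_summit:.0f}x "
      f"the computational density of Summit")
```

The factor of roughly a thousand comes from the machine being both about five times faster and two hundred times smaller.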
Niclas Andersson, technical manager at NSC, is impressed.
“As we watch the development of current technology slow, the field becomes open for various alternative architectures. Sterling’s machine is very attractive. We must study it in more detail to understand how such an architecture can be programmed and used for HPC. It’s completely clear that the future holds exciting and more diversified pathways of development”, he says.
Exa, zetta and yotta

Thomas Sterling takes us even further into the future.
“It’s positive that research into quantum computing is under way, but how to cope with the engineering challenges remains a problem. Computers that mimic the way in which the brain works (neuromorphic computing) can take us away from the von Neumann architecture, but we need to learn more, and the technology will not be available within the foreseeable future.”
He suggests that yottaflop speeds (10^24 floating point operations per second) will instead be based on cryotechnology: supercomputers that operate at temperatures just above absolute zero, at around 4 kelvin. That, however, is the long-term prospect. On the way we have to pass exa, which we have nearly achieved, and zetta.
“It’s really great that Thomas Sterling was keynote speaker at the NSC 30th birthday. His presentation was both entertaining and thought-provoking, and he described where we are now, how we got here, and (maybe) where we’re going. A true pioneer with a glint in his eye”, comments Matts Karlsson, professor at LiU and director of the NSC.
Footnote: “HPC” is an abbreviation for “high-performance computing”, or what we call in everyday speech “supercomputing”.
Translated by George Farrants