Science Week 2016: Core-count rising
14 November 2016
Image credits: Argonne National Laboratory's Flickr page and Wikipedia
Numerical weather prediction (NWP) involves approximating the complex mathematical equations of the atmosphere by a large number of elementary calculations. Roughly speaking, the more of these you carry out, the more accurate your forecast. Such a process is extremely time-consuming if done by hand. In the early twentieth century, Lewis Fry Richardson was the first to attempt this but it took him many months just to compute a six-hour pressure change at a single location. On the other hand, the task of performing a large number of calculations is ideally suited to modern computers and their development has made operational NWP feasible and successful.
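As a toy illustration of what "a large number of elementary calculations" means in practice, the sketch below advances a one-dimensional advection equation with a simple upwind finite-difference scheme. This is purely illustrative and not any operational model: the grid, wind speed, and time step are arbitrary choices, and real NWP systems solve far richer equation sets on three-dimensional global grids.

```python
# Toy 1-D advection: du/dt + c * du/dx = 0, advanced with a first-order
# upwind finite-difference scheme on a periodic line of grid points.
# Each step is just a handful of arithmetic operations per point --
# exactly the kind of "elementary calculation" Richardson did by hand.

def upwind_step(u, c, dx, dt):
    """Advance the field u by one time step (periodic boundary)."""
    return [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(len(u))]

# A simple "blob" as the initial state on a 20-point periodic grid.
u = [1.0 if 4 <= i <= 6 else 0.0 for i in range(20)]
c, dx, dt = 1.0, 1.0, 0.5          # CFL number c*dt/dx = 0.5: stable

for _ in range(10):                 # ten steps, each a few dozen operations
    u = upwind_step(u, c, dx, dt)
```

With this scheme the total amount of the advected quantity is preserved while the blob drifts downstream, which hints at why accuracy demands ever finer grids, and hence ever more calculations.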
Computer performance has broadly followed “Moore's Law”, the observation that the number of transistors on a chip, and with it computing power, doubles roughly every 18 months to two years. In the past, this led to a continual improvement in weather forecasting accuracy. Recently, however, the speed of individual processors has largely plateaued. As a result, modern high-performance computers are based on an architecture known as parallel computing: rather than relying on a single processor, a large number of processor cores work simultaneously, with the tasks divided between them running in parallel. The fastest machine in the world, as of June 2016 according to www.top500.org, has over 10 million cores.
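The divide-and-conquer idea can be sketched in a few lines using Python's standard concurrent.futures module. The "grid" and the per-point computation here are invented placeholders: in a real model each worker would run the physics and dynamics calculations for its own patch of the atmosphere.

```python
from concurrent.futures import ThreadPoolExecutor

def point_calculation(x):
    """Stand-in for the work done at one grid point (hypothetical)."""
    return x * x

def forecast_chunk(chunk):
    """One 'core' handles its share of the grid points."""
    return [point_calculation(x) for x in chunk]

grid = list(range(1000))            # a toy 1-D grid of values
n_workers = 4
# Deal the grid points out among the workers, round-robin style.
chunks = [grid[i::n_workers] for i in range(n_workers)]

# Each worker processes its chunk alongside the others.
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    results = list(pool.map(forecast_chunk, chunks))
```

Note that CPython threads will not actually speed up CPU-bound work like this; real forecasting models distribute their domains across thousands of cores with message-passing libraries such as MPI. The sketch only shows the shape of the decomposition.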
These advances in computing hardware suggest that we may hope for further improvements in weather forecasting in the future. Some difficulties lie ahead, however. Dividing tasks efficiently among the cores of a parallel computer is not easy. Sharing information from one core to another is relatively slow and can become a bottleneck in performance. A key challenge in NWP research is therefore to ensure that the mathematical techniques in next-generation forecasting models are designed to achieve good scalability; that is, that using more cores actually delivers a corresponding speed-up. Only then will we be able to fully exploit the computing resources available in the future.
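This scalability limit is often quantified with Amdahl's law (not mentioned in the article itself, but the standard way to make the point): if some fixed fraction of the work cannot be parallelised, the speedup saturates no matter how many cores are added. The 5% serial fraction below is an arbitrary illustrative figure.

```python
def amdahl_speedup(n_cores, serial_fraction):
    """Ideal speedup on n_cores when a fixed fraction of the run
    (e.g. inter-core communication) cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# With even 5% unavoidable serial work or communication, the speedup
# levels off far below the core count:
for n in (10, 100, 10_000):
    print(n, round(amdahl_speedup(n, 0.05), 1))
# -> roughly 6.9, 16.8 and 20.0: ten thousand cores, twentyfold speedup
```

This is why the design of the numerical methods themselves, and not just the hardware, determines whether a forecasting model can exploit millions of cores.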
Interestingly, Richardson anticipated these developments when reporting his attempted forecast back in 1922. He envisaged a “forecast factory” in which thousands of people worked in parallel to perform the calculations for an efficient and timely forecast. Each would work on a particular geographic region, passing the relevant data to their neighbours. Replace these human “computers” with silicon-based processors and he was essentially describing an NWP system running on a modern parallel machine.
Further reading on Richardson and the history of NWP:
Peter Lynch, The Emergence of Numerical Weather Prediction: Richardson's Dream. Published by Cambridge University Press, 2006.
A number of artistic impressions of Richardson's proposed forecast factory may be found here.
For further information contact Colm Clancy (colm dot clancy at met dot ie).