Why a £750 million supercomputer at Edinburgh University will enhance our science

Rachel Reeves’s Spending Review on 11 June 2025 included a commitment to spend up to £750 million on a new supercomputer at Edinburgh University. This is a very welcome change from the announcement in August 2024 that the project would be cut, and has important implications for The James Hutton Institute’s research in Information and Computational Sciences.

The technical term for the new supercomputer is an ‘exascale’ computer. The ‘exa’ prefix to a unit of measurement was adopted by the 15th Conférence Générale des Poids et Mesures (CGPM) in 1975 to mean 10 raised to the power of 18, or 1 followed by 18 zeros: in English, a billion-billion. The mass of planet Earth is about 6 billion exagrams (or, following resolution 3 of the 27th CGPM in 2022, 6 ronnagrams). Computer scientists measure computing performance in units of (double-precision) floating-point operations per second (‘FLOPS’). These operations are how digital computers best approximate calculations with real numbers. (Some early research in agent-based modelling in one of Hutton’s legacy Institutes showed how ignoring this approximation can lead to unexpected results.) An exascale computer is capable of a billion-billion FLOPS. Your laptop’s CPU is capable of a few billion FLOPS.
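To see why that approximation matters, here is a minimal Python illustration (my own, not taken from that early research) of how floating-point arithmetic only approximates the real numbers it stands in for:

```python
# Floating-point numbers cannot represent 0.1 exactly in binary, so repeated
# addition drifts slightly away from the exact answer.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)             # 0.9999999999999999, not 1.0
print(total == 1.0)      # False

# The same effect in a single expression:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```

Each individual error is tiny, but a simulation that tests values for exact equality, or accumulates billions of such operations, can behave in ways its author did not intend.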

Preparations for exascale computing date back to a December 2007 report outlining the scientific benefits that exascale computing could achieve. The American report covered areas relevant to The James Hutton Institute’s research, including climate change mitigation and adaptation, renewable energy, systems biology and public policy. However, it was not until 2022 that the world’s first public exascale computer, the USA’s ‘Frontier’, achieved the exaflop milestone. It is already having scientific impact in genomics, for example, where Dan Jacobson has studied how plants will adapt to climate change.

Simulation is at the heart of taking advantage of exascale computing, and my own work simulating human societies and their interactions with technological and environmental change is how I came to take an interest in it. With help and support from ITS, I have been using High-Performance Computing (HPC) to run agent-based models for many years. HPC allows me to explore the diversity of their behaviour and search their parameter spaces far better than would be possible on a personal computer.
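To give a flavour of what that involves, the sketch below is a toy parameter sweep in Python: many independent model runs farmed out across whatever processor cores are available. The `run_model` function is a hypothetical stand-in, not code from any Hutton model; on HPC infrastructure a scheduler would spread the same pattern across many nodes rather than the cores of one machine.

```python
# A toy parameter sweep: run a stand-in "model" for every combination of
# parameter values and random seeds, in parallel across local cores.
from itertools import product
from multiprocessing import Pool
import random

def run_model(params):
    """Hypothetical stand-in for one agent-based model run."""
    growth_rate, n_agents, seed = params
    rng = random.Random(seed)
    # A real model would simulate agents and their interactions here;
    # this just returns a toy aggregate outcome.
    outcome = sum(rng.random() * growth_rate for _ in range(n_agents))
    return params, outcome

if __name__ == "__main__":
    # Three growth rates x two population sizes x five seeds = 30 runs.
    grid = list(product([0.1, 0.2, 0.5], [100, 1000], range(5)))
    with Pool() as pool:
        results = pool.map(run_model, grid)
    for params, outcome in results[:3]:
        print(params, round(outcome, 2))
```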

The UK’s preparations for the advent of exascale computing have been led by the EPSRC’s ExCALIBUR Programme from 2019 to 2025, which has reached out to various disciplines and research teams in the UK to help them prepare the ground. I was lucky enough to be awarded a research grant towards the end of this Programme, to look at how agent-based models could be ported to exascale computers to support policy evaluation.

I was attracted to the funding call because a quick back-of-the-envelope calculation showed me that exascale computing was a potential game-changer for agent-based modelling of complex societal systems. In earlier work, twenty thousand runs of a Hutton model took several days to complete on the high-performance computing infrastructure we had at the time. If I could push through those computations at exascale speeds, the results would have been available in under half a second.
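For anyone who wants to check my arithmetic, the sketch below writes that back-of-the-envelope calculation out in Python. The numbers are illustrative assumptions, not measurements: I take 'several days' to mean three, and assume the original infrastructure sustained roughly one teraFLOPS over the whole batch of runs.

```python
# Back-of-the-envelope: how long would the same amount of floating-point work
# take at exascale speeds? (Illustrative assumptions, not measured figures.)
SECONDS_PER_DAY = 24 * 60 * 60

elapsed_days = 3                      # "several days" for 20,000 runs (assumed)
elapsed_seconds = elapsed_days * SECONDS_PER_DAY

assumed_old_throughput = 1e12         # ~1 teraFLOPS sustained (assumed)
total_work = assumed_old_throughput * elapsed_seconds   # total floating-point operations

exascale_throughput = 1e18            # ~1 exaFLOPS, by definition
time_at_exascale = total_work / exascale_throughput

print(f"Total work: {total_work:.2e} floating-point operations")
print(f"Time at exascale: {time_at_exascale:.2f} seconds")   # about 0.26 s
```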

Why is the half-a-second threshold such a game-changer? To understand the background to that, we must go back to some research on Human-Computer Interaction by Doherty and Thadani at IBM in the early 1980s. In a landmark paper, they showed that productivity (measured as transactions with the computer per user-hour) was a nonlinear function of system response time, with a key turning point at around half a second (see figure below). So, if an online system takes more than half a second to respond to each user transaction, then you can legitimately infer that those who signed off on it place no value at all on your time or productivity.

[Figure: productivity, measured as transactions with the computer per user-hour, as a function of system response time, after Doherty and Thadani.]

Suddenly, a simulation that takes days to complete can give results within the sorts of timescales that enable creative and productive interaction with a computer. This could revolutionize the way we explore policy interventions as future crises develop. The industry professionals I spoke to when preparing the proposal also said that the long execution times of agent-based models were a barrier to using them effectively in their consultancy work.

The example we used in the proposal was, perhaps understandably, a future pandemic. We imagined that, with appropriately prepared simulation models, we could interactively evaluate ideas for how to handle the spread of infection, demand on the NHS, and the cascading consequences for the wider economy and wellbeing of implementing these ideas. A key learning point from discussing Covid modelling with colleagues in my academic community was that models which show the spread of infectious diseases cannot adequately evaluate the policies which manage those diseases. Exascale computing provides the speed needed to run more models with more comprehensive coverage of the system – fast enough that ideas can be co-constructed with insights from the models.

For another example, imagine a sudden and significant global shortage, expected to be long-term, of a critical foodstuff that the UK primarily accesses through imports. Examples, from the UK's Food Security Report 2024, include rice, for which we are wholly reliant on imports, and fresh fruit, of which we produce only about a sixth of what we consume. Neither of these is necessarily critical to providing British citizens with adequate nutrition, but their lack would not go unnoticed, and the Government would be expected to do something about the issue. Leave it to the market and individual choice? Rationing? Vitamin pills? We would also need to know whether there would be sufficient supply of whatever people substituted for these goods. With exascale speeds, we could develop policy using decision-makers' knowledge and expertise interactively with insights from computer models, such as a model of global food and nutrient trade developed at Hutton.

There is, however, a catch. Exascale computing is achieved through massively parallel computation, and in particular by using Graphics Processing Units (GPUs) to perform the calculations, rather than the CPUs that have conventionally done the work. Rendering realistic 3D graphics for computer games has driven the development of specialized hardware for the relevant calculations, which essentially amount to executing lots of matrix multiplications to work out where to draw lots of triangles. As GPUs became more capable, modellers whose calculations could be conveniently reduced to matrix multiplications started to take advantage of them. One field based primarily on matrix computation is neural networks, which form the basis of most contemporary artificial intelligence applications, including generative AI and Large Language Models.
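As a small illustration of the kind of workload GPUs are built for, the sketch below multiplies two large matrices. It assumes the CuPy library and a CUDA-capable GPU are available, and falls back to NumPy on the CPU otherwise; CuPy deliberately mirrors the NumPy interface, so the multiplication itself is a single shared line of code.

```python
# One large matrix multiplication, on the GPU if CuPy and a CUDA device are
# available, otherwise on the CPU with NumPy.
import numpy as np

try:
    import cupy as xp      # GPU arrays (assumes CuPy is installed)
except ImportError:
    xp = np                # fall back to the CPU

n = 2000
a = xp.random.random((n, n))
b = xp.random.random((n, n))

c = a @ b                  # roughly 2 * n**3 floating-point operations
print(type(c).__module__, c.shape)
```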

For the rest of us, porting our code from CPUs to GPUs will not be so trivial. However, there are good reasons for people in the Information and Computational Sciences Department to start engaging with and using GPU computation, beyond making preparations for any potentially successful future application to use the UK’s exascale computer.

The first reason is that, per instruction, GPU computation is much more energy efficient than CPU computation. On the 'Green500' list, which ranks the top 500 computing systems by gigaFLOPS per watt, only about a tenth of the systems in the bottom 100 (the least energy efficient) use GPUs, whereas nearly all of the top 100 do. The review of ICS in 2024 rightly asked why we were not looking into low-energy computing, and GPU programming could be an important part of the story of our response to that challenge in 2029.

Second, learning to use GPUs in our software development will require us to have appropriately configured laptops for day-to-day use. We may then find that some larger-scale work can be done on our laptops rather than on HPC systems. This would reduce competition for resources on HPC infrastructure, and avoid the need to provide HPC administrators with information that is not always possible to estimate in advance.

Third, our Crop Diversity High Performance Computing facility has a significant GPU provision, so we have resources we can use to test how well our software scales before moving to exascale infrastructure. It is generally expected that HPC infrastructure will be a necessary intermediate step between desktop software development and exascale utilization. GPU programming skills will also be instrumental in making bespoke use of ILUSC's 'Immersive Nature-Based Solutions Space' for exploring visualizations with colleagues, collaborators, partners and the general public.

Finally, with GPUs being central to training contemporary AI systems, our ability to use this infrastructure will enable us to make more effective, personalized use of AI technology. This has the potential to benefit research beyond the ICS department, not least in the computational social sciences, where AI can be used to test questionnaires and analyse texts, as well as to simulate scenarios in artificial societies.

The advent of exascale computing in the UK is important for an organization with an Information and Computational Sciences Department. In my opinion, The James Hutton Institute should be preparing itself to take advantage of £750 million of investment in computing infrastructure as part of its world-leading science on the sustainable management of land, crops and natural resources that support thriving communities.

Disclaimer: The views expressed in this blog post are the views of the author, and not an official position of the institute or funder.

Blog by:

Human-Natural Systems Research Scientist
Based in Aberdeen
T: +44 (0)344 928 5428
I specialise in empirical applications of agent-based modelling to socio-(techno-)environmental systems, and rigorous approaches to their design, exploration and interpretation, with a view to developing useful knowledge to support decision-making in complex and wicked systems. Agent-based modelling involves explicit representation of individuals and their interactions, observing the emergent effects these have on the dynamics of the system. As well as applying agent-based modelling in specific contexts, I am interested in methodological and theoretical development, and high-performance computing support for large-scale agent-based modelling. I occasionally run training courses in various aspects of agent-based modelling.