by Jemima M. Tabeart, TU Eindhoven
When designing data assimilation algorithms, we want to exploit access to lots of parallel computational resources, while ensuring that we can obtain our results quickly, in real time. This sometimes means reformulating our original problem to reveal hidden structure. In this work we want to solve a problem of the form Aᵏx = b, where Aᵏ represents uncertainty information for ocean variables, and k is a parameter which controls smoothness. The matrix A is very large, so we cannot solve this problem directly by computing an inverse (the matrix equivalent of “dividing” by a constant). Instead, we use an iterative method, where each step of our algorithm takes us closer to the solution. Our goal is to reduce the number of steps we need to reach an answer that we are happy with, while ensuring that the steps themselves are computationally affordable.
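As a rough illustration of this set-up (a minimal sketch, not the operational ocean system), the snippet below solves a small problem of the form Aᵏx = b with SciPy’s conjugate gradient method. A LinearOperator applies A k times, so Aᵏ is never formed and never inverted; the matrix A, the value of k and the right-hand side b are placeholder choices.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg

# Placeholder 1D diffusion-like matrix A (symmetric positive definite);
# the real ocean operator is far larger and more complicated.
n, k, gamma = 200, 4, 0.1
A = diags([-gamma, 1 + 2 * gamma, -gamma], offsets=[-1, 0, 1], shape=(n, n)).tocsc()

def apply_Ak(x):
    """Apply A to a vector k times -- we never form A^k, let alone its inverse."""
    for _ in range(k):
        x = A @ x
    return x

Ak = LinearOperator((n, n), matvec=apply_Ak)
b = np.ones(n)  # placeholder right-hand side

# Iterative solve: each conjugate gradient step takes us closer to the solution.
x, info = cg(Ak, b)
print("cg flag:", info, "residual norm:", np.linalg.norm(apply_Ak(x) - b))
```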
Reformulating the problem
We start by re-writing the problem we want to solve. Instead of applying our multiplications with A sequentially (one after the other), we re-write the problem as a bigger system with more structure, as sketched below.
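Here is a generic sketch of what such an “all-at-once” reformulation can look like, written in a general block form rather than the exact operator used in the paper. Introducing intermediate vectors x_1, …, x_k, where each one is obtained from the previous by a single solve with A, turns the original equation into one larger, sparse, block-structured system:

```latex
% Generic all-at-once sketch (not necessarily the exact operator from the paper):
% A x_1 = b and A x_j = x_{j-1} for j = 2, ..., k, so that x_k solves A^k x = b.
\begin{equation*}
\begin{bmatrix}
A  &        &        &   \\
-I & A      &        &   \\
   & \ddots & \ddots &   \\
   &        & -I     & A
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{bmatrix}
=
\begin{bmatrix} b \\ 0 \\ \vdots \\ 0 \end{bmatrix},
\qquad x = x_k .
\end{equation*}
```

The last block x_k is exactly the solution of Aᵏx = b, but all k copies of A now sit side by side in a single matrix, which is the structure the parallel approach exploits.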
This means that it is more straightforward to split multiplications across different processors, hopefully reducing our overall time to obtain the solution.
Preconditioning
The challenge is now to accelerate the solution of this larger problem. Preconditioners allow us to solve an equivalent problem with a better mathematical structure, which means that we should see more improvement to our answer with each step. We can think of this like going from the third floor to the first floor and replacing two very shallow steps with one deeper step on the staircase – at each step we take, we will get closer to our goal, but it may take us more effort to make those steps.
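As a toy illustration of the effect of a preconditioner (not one of the preconditioners from the paper), the sketch below counts conjugate gradient iterations with and without a simple diagonal (Jacobi) preconditioner on a deliberately ill-conditioned matrix: the “deeper steps” cost slightly more work each, but far fewer of them are needed.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg

# Toy ill-conditioned SPD matrix -- a placeholder, not the ocean covariance operator.
n = 500
d = np.linspace(1.0, 1e4, n)  # widely varying diagonal
A = diags([-np.ones(n - 1), d, -np.ones(n - 1)], offsets=[-1, 0, 1]).tocsc()
b = np.ones(n)

counts = {"no preconditioner": 0, "diagonal preconditioner": 0}

def make_counter(key):
    def callback(xk):  # called once per CG iteration
        counts[key] += 1
    return callback

# Plain CG: each step is cheap, but we need many of them.
cg(A, b, callback=make_counter("no preconditioner"))

# Diagonal (Jacobi) preconditioner: each step is a little "deeper".
M = LinearOperator((n, n), matvec=lambda r: np.ravel(r) / d)
cg(A, b, M=M, callback=make_counter("diagonal preconditioner"))

print(counts)  # the preconditioned solve typically needs far fewer steps
```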
For problems of the form shown above, there are well-known preconditioners that help accelerate convergence. However, one of the components of the standard approach is too expensive for the problem we are interested in. In our staircase analogy, this is like saying that the original method takes a single step from the third floor straight to the second floor – which is definitely too much effort!
Adding another solver
Our idea is to replace the expensive step with another iterative method. This adds a new staircase between our floors, making each step a bit easier, but adding more steps (which could increase the overall time). We have to solve k of these sub-problems, and we want to solve them only approximately – maybe our staircase doesn’t need to go all the way to the second floor to be useful.
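A stripped-down sketch of this nesting idea, under simplifying assumptions: the outer Krylov solver is preconditioned by an approximate inner solver that performs only a fixed, small number of cheap iterations instead of solving its sub-problem exactly. The matrix, the Jacobi inner iteration and the iteration counts below are placeholders, not the scheme analysed in the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

# Placeholder matrix standing in for one of the expensive sub-problems
# (not the shifted operators analysed in the paper).
n = 400
d = np.linspace(2.0, 50.0, n)
A = diags([-np.ones(n - 1), d, -np.ones(n - 1)], offsets=[-1, 0, 1]).tocsc()
b = np.random.default_rng(0).standard_normal(n)
Dinv = 1.0 / d

def approximate_solve(r, n_inner=10):
    """Approximate A^{-1} r with a fixed number of Jacobi sweeps.

    A fixed number of stationary inner iterations is a linear operation,
    so it can be used directly as a preconditioner for a standard Krylov
    method -- an 'extra staircase' whose steps are cheap but approximate.
    """
    r = np.ravel(r)
    z = np.zeros_like(r)
    for _ in range(n_inner):
        z = z + Dinv * (r - A @ z)  # one cheap, approximate step
    return z

M = LinearOperator((n, n), matvec=approximate_solve)

# Outer Krylov solve, preconditioned by the approximate inner solver.
x, info = gmres(A, b, M=M)
print("outer flag:", info, "residual norm:", np.linalg.norm(A @ x - b))
```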
The main mathematical work of Tabeart et al. (2025) was to understand whether these sub-problems make staircases that definitely take us to the next floor, and how quickly the stairs will take us there. Two of the k sub-problems fit into existing theory, which tells us that we do reach the next floor, with one of them getting there much more slowly. The other k-2 stairs were more problematic – we saw numerically that our method worked, but guaranteeing it mathematically required some technical proofs. We discovered that all of our stairs are guaranteed to reach the next floor, and developed an easy-to-compute upper bound on how long it will take us to get there.
Making a practical algorithm
We can also use these new bounds to reduce the total number of iterations. By using information about how quickly each inner method converges (how fast its staircase descends), we can solve each of our sub-problems to the same tolerance (build our staircases to the same height). When we have a limited computational budget, this means we are not wasting resources building one intermediate staircase much further than the others.
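As a generic illustration of how bounds of this kind are typically used (not necessarily the exact rule from the paper): if sub-problem j comes with a convergence-rate bound ρ_j < 1, so that its error shrinks by at least a factor ρ_j per inner iteration, then reaching a common tolerance ε in every sub-problem requires roughly

```latex
% Generic tolerance-balancing sketch: choose the number of inner iterations n_j
% for sub-problem j from its convergence-rate bound rho_j.
\begin{equation*}
\rho_j^{\,n_j} \le \varepsilon
\quad\Longleftrightarrow\quad
n_j \ge \frac{\log \varepsilon}{\log \rho_j} ,
\end{equation*}
```

so slowly converging sub-problems (ρ_j close to 1) are allotted more inner iterations and quickly converging ones fewer – building all the staircases to the same height.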
This figure shows the performance of our new method against solving the original problem with no preconditioner. The top panels show the number of iterations (steps) needed to find an acceptable solution, and the bottom panels show how much effort it cost us to get there. The left and right panels show two different versions of the preconditioner. The black line corresponds to solving the original problem with no preconditioner, the blue line to the new iterative preconditioner where we allocate the same budget to each sub-problem, and the red line to the new iterative preconditioner where we use the mathematical theory to solve each sub-problem to about the same tolerance (build the stairs to the same height). Both of our new methods do much better than solving the problem with no preconditioner, in terms of both iterations and computational cost. Our second method is easy to apply, and results in even better performance than our first method.

We started off with a problem where we couldn’t use the existing approach due to computational limitations. These limitations introduced us to some new mathematics and a new algorithm!
Reference:
Tabeart, J. M., Gürol, S., Pearson, J. W., & Weaver, A. W. (2025). Block Alpha-Circulant Preconditioners for All-at-Once Diffusion-Based Covariance Operators. arXiv preprint arXiv:2506.03947