
Learning Neural Antiderivatives

Vision, Modeling, and Visualization, 2025
Fizza Rubab1,2, Ntumba Elie Nsampi1, Martin Balint1, Felix Mujkanovic1, Hans-Peter Seidel1, Tobias Ritschel3, Thomas Leimkühler1
1 Max-Planck-Institut für Informatik
2 Michigan State University
3 University College London
Paper · Code · Data · Models · Slides
Overview of the two classes of approaches to learning repeated antiderivatives F(n) (green) from function samples, illustrated for a single integration of a one-dimensional function f (blue).
Left: Integral supervision numerically estimates antiderivative values F(n) across the domain and uses this estimate to guide training.
Right: Differential supervision begins by applying the differential operator 𝓓 to the model. The resulting signal is then compared to the original function samples, optionally modified by a compensation operator 𝓑 to account for approximation errors introduced by 𝓓. Different choices of 𝓓 lead to methods with varying trade-offs between accuracy and computational efficiency.
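The two supervision schemes can be illustrated on a discrete 1D grid without any neural network (a minimal numerical sketch; the paper's methods instead train neural fields and use operators such as automatic differentiation for 𝓓):

```python
import numpy as np

# Toy 1D setup: f(x) = cos(x) on [0, 2*pi]; an exact antiderivative is sin(x).
n = 200
x = np.linspace(0.0, 2.0 * np.pi, n)
h = x[1] - x[0]
f = np.cos(x)

# Integral supervision: numerically estimate antiderivative values across the
# domain (here with a cumulative trapezoid rule) and use them as a regression
# target for the model.
F_target = np.concatenate([[0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * h)])

# Differential supervision: apply a differential operator D (here a forward
# finite difference, standing in for the operators discussed in the paper) to
# the model output and compare the result against the original samples f.
F_exact = np.sin(x)
DF = (F_exact[1:] - F_exact[:-1]) / h  # O(h)-accurate approximation of f

quadrature_err = np.max(np.abs(F_target - F_exact))
differencing_err = np.max(np.abs(DF - f[:-1]))
```

The residual `differencing_err` is exactly the kind of approximation error that the compensation operator 𝓑 in the figure is meant to account for.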

Abstract

Neural fields offer continuous, learnable representations that extend beyond traditional discrete formats in visual computing. We study the problem of learning neural representations of repeated antiderivatives directly from a function, a continuous analogue of summed-area tables. Although widely used in discrete domains, such cumulative schemes rely on grids, which prevents their applicability in continuous neural contexts. We introduce and analyze a range of neural methods for repeated integration, including both adaptations of prior work and novel designs. Our evaluation spans multiple input dimensionalities and integration orders, assessing both reconstruction quality and performance in downstream tasks such as filtering and rendering. These results enable integrating classical cumulative operators into modern neural systems and offer insights into learning tasks involving differential and integral operators.
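The discrete cumulative schemes the abstract refers to are prefix sums; applying them repeatedly gives the grid-based counterpart of the repeated antiderivatives F(n) (a minimal sketch of the classical, non-neural setting):

```python
import numpy as np

# Discrete analogue of repeated antiderivatives: n-fold prefix sums.
# A single prefix sum is a (1D) summed-area table; applying it n times
# gives the discrete counterpart of the n-th antiderivative F^(n).
f = np.array([3.0, 1.0, 4.0, 1.0, 5.0])

F1 = np.cumsum(f)   # order 1: F1[i] = sum of f[:i+1]
F2 = np.cumsum(F1)  # order 2: repeated cumulative scheme

# Differencing inverts the scheme and recovers the original samples,
# just as differentiation recovers f from its antiderivative.
back = np.diff(np.concatenate([[0.0], F1]))
```

Because `F1` and `F2` live on a fixed grid, they do not transfer to continuous neural representations, which is the gap the paper addresses.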


Overview

The task of learning antiderivatives (blue) can be addressed by several classes of methods (green), each comprising specific algorithmic realizations (orange):

We analyze and evaluate the algorithmic realizations shown above: Integral, AD-Naïve, AD-Reduc, Num-FD, Num-FD-𝓑, Num-Sm, and Num-Sm-𝓑.


Reconstruction Results

We evaluate the accuracy of learned antiderivatives by applying higher-order automatic differentiation and comparing the reconstructed signals to the original data. The symbol ↯ indicates failure of the corresponding method.

Data | Integral | AD-Naïve | AD-Reduc | Num-FD | Num-FD-𝓑 | Num-Sm | Num-Sm-𝓑
Order 1
Order 2
Order 1
Order 2
Order 1
Order 2
Order 1
Order 2
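The evaluation protocol above can be mimicked numerically: differentiate a known second-order antiderivative twice and compare the result to the original signal (central differences stand in here for the higher-order automatic differentiation applied to the neural fields in the paper):

```python
import numpy as np

# Toy check of the reconstruction protocol for f(x) = sin(x).
# One valid antiderivative chain: F1 = -cos(x), F2 = -sin(x).
n = 400
x = np.linspace(0.0, 2.0 * np.pi, n)
h = x[1] - x[0]
F2 = -np.sin(x)

# Second-order central difference as a stand-in for applying the
# differential operator twice to the learned representation.
f_rec = (F2[2:] - 2.0 * F2[1:-1] + F2[:-2]) / h**2
f_ref = np.sin(x[1:-1])
err = np.max(np.abs(f_rec - f_ref))
```

For a learned F(2), `err` is exactly the reconstruction error the table reports; methods marked ↯ fail to keep it bounded.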

Filtering Results

We assess the use of antiderivative representations for convolution, applying piecewise polynomial approximations of Gaussian filters and comparing against Monte Carlo-based reference results.

Reference | Integral | AD-Naïve | AD-Reduc | Num-FD | Num-FD-𝓑 | Num-Sm | Num-Sm-𝓑
Small piecewise constant kernel
Large piecewise linear kernel
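The piecewise constant (box) case reduces convolution to two antiderivative lookups per output sample, independent of kernel size (a discrete sketch; the paper evaluates the learned continuous antiderivative instead of a cumulative sum):

```python
import numpy as np

n = 1000
x = np.linspace(0.0, 2.0 * np.pi, n)
h = x[1] - x[0]
f = np.sin(3.0 * x)

# First antiderivative via a cumulative trapezoid rule.
F = np.concatenate([[0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * h)])

# Box filter of half-width r = k*h: (f * box_r)(x) = (F(x+r) - F(x-r)) / (2r),
# i.e. two lookups per output regardless of how large the kernel is.
k = 50
r = k * h
blurred = (F[2 * k:] - F[:-2 * k]) / (2.0 * r)

# Reference: direct moving average over the same window.
ref = np.convolve(f, np.ones(2 * k + 1) / (2 * k + 1), mode="valid")
box_err = np.max(np.abs(blurred - ref))
```

Piecewise linear kernels work analogously but difference the second antiderivative, which is why higher integration orders matter for larger, smoother filters.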

Rendering Results

We use antiderivative representations of environment maps to compute glossy reflections, comparing renderings to those obtained from the original maps.

Reference | Integral | AD-Naïve | AD-Reduc | Num-FD | Num-FD-𝓑 | Num-Sm | Num-Sm-𝓑
Piecewise const. approx.
Piecewise linear approx.
Piecewise const. approx.
Piecewise linear approx.
Piecewise const. approx.
Piecewise linear approx.
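The appeal for rendering is that glossier reflections integrate the environment map over larger footprints, yet a prefiltered lookup through an antiderivative costs the same at every filter size (a 2D sketch using a summed-area table on a random stand-in for an environment map; the paper uses learned continuous antiderivatives):

```python
import numpy as np

rng = np.random.default_rng(0)
env = rng.random((64, 64))  # hypothetical stand-in for an environment map

# Zero-padded summed-area table so that sat[i, j] = env[:i, :j].sum().
sat = np.zeros((65, 65))
sat[1:, 1:] = env.cumsum(axis=0).cumsum(axis=1)

def prefiltered(i, j, r):
    """Mean of env over the (2r+1)^2 box around (i, j), clamped to the map.

    Four table lookups, independent of the filter radius r, which can vary
    per pixel with the glossiness of the reflecting surface.
    """
    r0, r1 = max(i - r, 0), min(i + r + 1, 64)
    c0, c1 = max(j - r, 0), min(j + r + 1, 64)
    s = sat[r1, c1] - sat[r0, c1] - sat[r1, c0] + sat[r0, c0]
    return s / ((r1 - r0) * (c1 - c0))
```

The piecewise constant and piecewise linear approximations in the table above refine this box average toward the actual reflection lobe.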

BibTeX

@inproceedings{rubab2024antiderivatives,
  title = {Learning Neural Antiderivatives},
  author = {Fizza Rubab and Ntumba Elie Nsampi and Martin Balint and Felix Mujkanovic and
            Hans-Peter Seidel and Tobias Ritschel and Thomas Leimk{\"u}hler},
  booktitle = {Vision, Modeling, and Visualization},
  year = {2025}
}