Left: Integral supervision numerically estimates antiderivative values F̂(n) across the domain and uses these estimates to guide training.
Right: Differential supervision begins by applying the differential operator 𝓓 to the model. The resulting signal is then compared to the original function samples, optionally modified by a compensation operator 𝓑 to account for approximation errors introduced by 𝓓. Different choices of 𝓓 lead to methods with varying trade-offs between accuracy and computational efficiency.
Abstract
Neural fields offer continuous, learnable representations that extend beyond traditional discrete formats in visual computing. We study the problem of learning neural representations of repeated antiderivatives directly from a function, a continuous analogue of summed-area tables. Although widely used in discrete domains, such cumulative schemes rely on regular grids, which prevents their direct application in continuous neural contexts. We introduce and analyze a range of neural methods for repeated integration, including both adaptations of prior work and novel designs. Our evaluation spans multiple input dimensionalities and integration orders, assessing both reconstruction quality and performance in downstream tasks such as filtering and rendering. These results enable integrating classical cumulative operators into modern neural systems and offer insights into learning tasks involving differential and integral operators.
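To make the discrete analogue concrete, here is a minimal numpy sketch (toy signal and variable names are illustrative only): repeated prefix sums are the grid-based counterpart of repeated antiderivatives, and differencing undoes one level of accumulation, just as differentiation recovers the original function from its learned antiderivative.

```python
import numpy as np

# Discrete analogue of repeated antiderivatives: applying a prefix sum
# n times yields an order-n cumulative table (the 1D counterpart of a
# summed-area table).
f = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
F1 = np.cumsum(f)      # order-1 cumulative sums
F2 = np.cumsum(F1)     # order-2: "integral of the integral"

# Differencing undoes one level of accumulation, mirroring how
# differentiation recovers f from its antiderivative.
back = np.diff(F2, prepend=0.0)                    # recovers F1
assert np.allclose(np.diff(back, prepend=0.0), f)  # recovers f
```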
Overview
The task of learning antiderivatives (blue) can be addressed using method classes (green), each comprising specific algorithmic realizations (orange):
We analyze and evaluate the algorithmic realizations shown above. They are as follows:
- Integral: Integral supervision technique based on integral reduction.
- AD-Naïve: Differential supervision via automatic differentiation.
- AD-Reduc: Automatic differentiation combined with integral reduction.
- Num-FD and Num-FD-𝓑: Numerical differentiation using finite differences, without and with the compensation operator 𝓑, respectively.
- Num-Sm and Num-Sm-𝓑: Numerical differentiation using smooth estimators, without and with the compensation operator 𝓑, respectively.
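The differential-supervision idea behind these variants can be sketched in a few lines of numpy. In this toy version (all names and the target signal are illustrative), a polynomial model stands in for the neural field and its analytic derivative stands in for automatic differentiation: we fit the *derivative* of the model to samples of f, so the model itself becomes an antiderivative of f.

```python
import numpy as np

# Toy differential supervision: fit dF/dx to samples of f, so that the
# model F itself approximates an antiderivative of f. A polynomial
# stands in for the neural field; its analytic derivative stands in
# for automatic differentiation.
f = np.sin                       # signal whose antiderivative we want
x = np.linspace(0.0, np.pi, 200)

# Model: F(x) = sum_k c_k x^k, hence dF/dx = sum_k k c_k x^(k-1).
K = 8
basis_d = np.stack([k * x**(k - 1) for k in range(1, K)], axis=1)

# Supervise the derivative of the model against samples of f.
c, *_ = np.linalg.lstsq(basis_d, f(x), rcond=None)

# F now approximates an antiderivative of sin (with F(0) = 0),
# so definite integrals reduce to two evaluations of F.
F = lambda t: sum(c[k - 1] * t**k for k in range(1, K))
err = abs((F(np.pi) - F(0.0)) - 2.0)   # reference: ∫_0^π sin = 2
assert err < 1e-3
```

A real neural-field version replaces the polynomial with a network and the analytic derivative with higher-order automatic differentiation, which is where the accuracy/efficiency trade-offs between the variants above arise.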
Reconstruction Results
We evaluate the accuracy of learned antiderivatives by applying higher-order automatic differentiation and comparing the reconstructed signals to the original data. The symbol ↯ indicates failure of the corresponding method.
| Data | Order | Integral | AD-Naïve | AD-Reduc | Num-FD | Num-FD-𝓑 | Num-Sm | Num-Sm-𝓑 |
|---|---|---|---|---|---|---|---|---|
| (img) | 1 | (img) | (img) | (img) | (img) | (img) | (img) | (img) |
| | 2 | ↯ | (img) | ↯ | (img) | (img) | ↯ | ↯ |
| (img) | 1 | (img) | (img) | (img) | (img) | (img) | (img) | (img) |
| | 2 | ↯ | (img) | ↯ | (img) | (img) | ↯ | ↯ |
| | 1 | ↯ | ↯ | ↯ | | | | |
| | 2 | ↯ | ↯ | ↯ | ↯ | ↯ | ↯ | |
| | 1 | ↯ | ↯ | ↯ | | | | |
| | 2 | ↯ | ↯ | ↯ | ↯ | ↯ | ↯ | |
Filtering Results
We assess the use of antiderivative representations for convolution, applying piecewise polynomial approximations of Gaussian filters and comparing against Monte Carlo-based reference results.
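The key property exploited here can be shown with an exact antiderivative (f = sin is a stand-in for the learned field): the mean of f over a window follows from two antiderivative lookups, at a cost independent of kernel size. Piecewise-polynomial kernels, such as the Gaussian approximations used in this evaluation, reduce to a small number of such evaluations per output sample.

```python
import numpy as np

# Box filtering from an antiderivative: the mean of f over [x-r, x+r]
# is (F(x+r) - F(x-r)) / (2r) -- two lookups, independent of r.
f = np.sin
F = lambda x: -np.cos(x)   # exact first antiderivative of sin

def box_filter(x, r):
    return (F(x + r) - F(x - r)) / (2.0 * r)

# Reference: brute-force averaging of the same window.
x, r = 1.0, 0.3
t = np.linspace(x - r, x + r, 20001)
ref = f(t).mean()
assert abs(box_filter(x, r) - ref) < 1e-4
```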
| Reference | Integral | AD-Naïve | AD-Reduc | Num-FD | Num-FD-𝓑 | Num-Sm | Num-Sm-𝓑 |
|---|---|---|---|---|---|---|---|
| Small piecewise constant kernel (img) | (img) | (img) | (img) | (img) | (img) | (img) | (img) |
| | ↯ | ↯ | ↯ | | | | |
| Large piecewise linear kernel (img) | ↯ | (img) | (img) | (img) | (img) | ↯ | ↯ |
| | ↯ | ↯ | ↯ | ↯ | ↯ | | |
Rendering Results
We use antiderivative representations of environment maps to compute glossy reflections, comparing renderings to those obtained from the original maps.
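The discrete counterpart of this rendering use case is prefiltering an environment map with a 2D summed-area table: a box average over the map costs four lookups regardless of box size, with wider boxes corresponding to rougher, blurrier reflections. A hedged sketch (the toy map and names are illustrative; the paper's method uses a learned 2D antiderivative instead of a grid):

```python
import numpy as np

# Prefiltering a toy latitude-longitude environment map with a 2D
# summed-area table: any box mean costs four lookups.
env = np.random.default_rng(1).random((64, 128))
sat = env.cumsum(axis=0).cumsum(axis=1)
pad = np.pad(sat, ((1, 0), (1, 0)))   # zero row/column for clean indexing

def box_mean(r0, c0, r1, c1):
    """Mean of env[r0:r1, c0:c1] via four SAT lookups."""
    s = pad[r1, c1] - pad[r0, c1] - pad[r1, c0] + pad[r0, c0]
    return s / ((r1 - r0) * (c1 - c0))

assert np.isclose(box_mean(8, 16, 24, 48), env[8:24, 16:48].mean())
```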
BibTeX
@inproceedings{rubab2024antiderivatives,
title = {Learning Neural Antiderivatives},
author = {Fizza Rubab and Ntumba Elie Nsampi and Martin Balint and Felix Mujkanovic and
Hans-Peter Seidel and Tobias Ritschel and Thomas Leimk{\"u}hler},
booktitle = {Vision, Modeling, and Visualization},
year = {2025}
}