Neural Poisson Solver:
A Universal and Continuous Framework
for Natural Signal Blending

ECCV 2024

[Figure: pipeline] Overview of the Neural Poisson Solver.

Abstract

Implicit Neural Representation (INR) has become a popular method for representing visual signals (e.g., 2D images and 3D scenes), demonstrating promising results in various downstream applications. Given its potential as a medium for visual signals, exploring the development of a neural blending method that utilizes INRs is a natural progression. Neural blending involves merging two INRs to create a new INR that encapsulates information from both original representations. A direct approach involves applying traditional image editing methods to the INR rendering process. However, this method often results in blending distortions, artifacts, and color shifts, primarily due to the discretization of the underlying pixel grid and the introduction of boundary conditions for solving variational problems. To tackle this issue, we introduce the Neural Poisson Solver, a plug-and-play and universally applicable framework across different signal dimensions for blending visual signals represented by INRs. Our Neural Poisson Solver offers a variational problem-solving approach based on the continuous Poisson equation, demonstrating exceptional performance across various domains. Specifically, we propose a gradient-guided neural solver to represent the solution process of the variational problem, refining the target signal to achieve natural blending results. We also develop a Poisson equation-based loss and optimization scheme to train our solver, ensuring it effectively blends the input INR scenes while preserving their inherent structure and semantic content. The lack of dependence on additional prior knowledge makes our method easily adaptable to various task categories, highlighting its versatility. Comprehensive experimental results validate the robustness of our approach across multiple dimensions and blending tasks.
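
The solver's loss is built on the continuous Poisson equation, evaluated directly at sampled coordinates rather than on a discretized pixel grid. The snippet below is a minimal PyTorch sketch of such an objective under our own assumptions: `blend_inr`, `src_inr`, and `bg_inr` are hypothetical coordinate-to-color networks, the guidance field is taken to be the source gradient inside \(\Omega\), and `lam` weights the boundary term against the gradient term. The paper's exact formulation may differ.

```python
import torch

def inr_spatial_grad(inr, coords):
    """Per-channel spatial gradient of an INR output w.r.t. its input coordinates,
    computed with autograd, so no pixel-grid discretization is required."""
    coords = coords.clone().requires_grad_(True)
    out = inr(coords)                                     # (N, C) colors at N coordinates
    grads = [torch.autograd.grad(out[:, c].sum(), coords, create_graph=True)[0]
             for c in range(out.shape[-1])]               # each (N, D)
    return torch.stack(grads, dim=1)                      # (N, C, D)

def poisson_blend_loss(blend_inr, src_inr, bg_inr, omega_pts, boundary_pts, lam=0.5):
    """Continuous analogue of the Poisson blending objective:
    match the source gradient field inside Omega and the background values on dOmega."""
    v = inr_spatial_grad(src_inr, omega_pts).detach()     # guidance field inside Omega
    grad_f = inr_spatial_grad(blend_inr, omega_pts)       # gradient of the blended INR
    grad_term = (grad_f - v).pow(2).mean()

    f_b = blend_inr(boundary_pts)                         # blended colors on dOmega
    bg_b = bg_inr(boundary_pts).detach()                  # background colors on dOmega
    boundary_term = (f_b - bg_b).pow(2).mean()

    # lam trades detail preservation inside Omega for a seamless edge
    # (see the "Blending Control" section below); this weighting is an assumption.
    return (1.0 - lam) * grad_term + lam * boundary_term
```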

[Figure: 2d_result_compare] Blending results of the PIE method and our approach across different 2D scenes. The first and third columns show the source and target scenes for two tasks; the second and fourth columns show the blending outcomes and corresponding details, respectively.

Blending Control

[Figure: 2d_result_style]

When \(\lambda<1\), the blending operation accentuates the gradient within the blending region \(\Omega\), thereby better preserving the intricate details of the scene. However, this may introduce a noticeable color discrepancy across the blending edge \(\partial \Omega\) between the blended content and the background INR \(\mathcal{S}\). Conversely, as \(\lambda\) increases, the emphasis shifts toward a smooth transition at \(\partial \Omega\): the blend becomes seamless, but fine details within the scene may be slightly attenuated.
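
As a usage illustration, the hedged sketch below builds on the `poisson_blend_loss` sketch above and optimizes the blended INR for a chosen \(\lambda\): a small \(\lambda\) favors the gradient term inside \(\Omega\) (sharper details, possible color seam at \(\partial \Omega\)), while a large \(\lambda\) favors the boundary term (seamless edge, slightly softened details). The samplers and initial INR named here are hypothetical placeholders, not the paper's implementation.

```python
import torch

def blend_with_lambda(blend_inr, src_inr, bg_inr, sample_omega, sample_boundary,
                      lam, steps=2000, lr=1e-4):
    """Refine `blend_inr` toward the Poisson blending objective for a given lambda.
    `sample_omega(n)` / `sample_boundary(n)` are assumed to return n random
    continuous coordinates inside Omega and on its boundary dOmega."""
    opt = torch.optim.Adam(blend_inr.parameters(), lr=lr)
    for _ in range(steps):
        loss = poisson_blend_loss(blend_inr, src_inr, bg_inr,
                                  sample_omega(4096), sample_boundary(1024), lam=lam)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return blend_inr

# Example usage with placeholder objects (init_inr, src_inr, bg_inr, samplers):
# detail_preserving = blend_with_lambda(init_inr, src_inr, bg_inr,
#                                       sample_omega, sample_boundary, lam=0.2)
# seamless_edge     = blend_with_lambda(init_inr, src_inr, bg_inr,
#                                       sample_omega, sample_boundary, lam=0.8)
```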

Exploration in NeRF

[Figure: 3d_result] Our method applied to radiance field (NeRF) blending.

Citation

@InProceedings{10.1007/978-3-031-72989-8_15,
  author    = "Wu, Delong and Zhu, Hao and Zhang, Qi and Li, You and Ma, Zhan and Cao, Xun",
  editor    = "Leonardis, Ale{\v{s}} and Ricci, Elisa and Roth, Stefan and Russakovsky, Olga and Sattler, Torsten and Varol, G{\"u}l",
  title     = "Neural Poisson Solver: A Universal and Continuous Framework for Natural Signal Blending",
  booktitle = "Computer Vision -- ECCV 2024",
  year      = "2025",
  publisher = "Springer Nature Switzerland",
  address   = "Cham",
  pages     = "259--275",
}

Acknowledgements

The website template was borrowed from Michaël Gharbi.