Image editing results of our proposed method. Using pre-trained diffusion models, our method enables 3D manipulation of objects with consistent appearance, plausible layout, and harmonious composition, including occlusion handling.
We propose a novel image editing technique that enables 3D manipulations on single images, such as object rotation and translation. Existing 3D-aware image editing approaches typically rely on synthetic multi-view datasets for training specialized models, which constrains their effectiveness on open-domain images featuring significantly more varied layouts and styles. In contrast, our method directly leverages powerful image diffusion models trained on a broad spectrum of text-image pairs and thus retains their exceptional generalization abilities. This objective is realized through an iterative novel view synthesis and geometry alignment algorithm. The algorithm harnesses diffusion models for dual purposes: they provide an appearance prior by predicting novel views of the selected object using estimated depth maps, and they act as a geometry critic by correcting misalignments in 3D shapes across the sampled views. Our method can generate high-quality 3D-aware image edits with large viewpoint transformations and high appearance and shape consistency with the input image, pushing the boundaries of what is possible with single-image 3D-aware editing.
The overall pipeline. Our 3D-aware editing method iterates among three phases. The view synthesis phase generates the novel view of the selected object using depth-based warping and layered diffusion inpainting (the initial depth map is obtained by monocular depth estimation). The undistortion phase rectifies potential distortions in the target-view image induced by an inaccurate depth estimate. The shape alignment phase aligns the object shape in the original input image to the undistorted target image by optimizing the depth map to minimize dense image correspondence residuals. After several iterations, this process yields plausible and consistent 3D editing results.
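For concreteness, the snippet below sketches this three-phase iteration in Python. It is a minimal illustration, not the released implementation: `estimate_depth`, `warp_and_inpaint`, `undistort`, and `align_shape` are hypothetical placeholder callables standing in for the monocular depth estimator, the depth-based warping plus layered diffusion inpainting step, the diffusion-based geometry critic, and the correspondence-driven depth optimization.

```python
from typing import Callable
import numpy as np


def iterative_3d_edit(
    image: np.ndarray,
    object_mask: np.ndarray,
    target_pose: np.ndarray,        # 4x4 rigid transform applied to the selected object
    estimate_depth: Callable,       # monocular depth estimator (hypothetical interface)
    warp_and_inpaint: Callable,     # depth-based warping + layered diffusion inpainting
    undistort: Callable,            # diffusion model acting as a geometry critic
    align_shape: Callable,          # depth optimization against dense correspondences
    num_iters: int = 3,
) -> np.ndarray:
    """Sketch of the three-phase iteration described in the pipeline caption."""
    # Initial geometry comes from monocular depth estimation on the input image.
    depth = estimate_depth(image, object_mask)

    target_view = None
    for _ in range(num_iters):
        # Phase 1 -- view synthesis: warp the selected object to the target pose
        # with the current depth map, then inpaint disocclusions with the diffusion model.
        target_view = warp_and_inpaint(image, object_mask, depth, target_pose)

        # Phase 2 -- undistortion: rectify warping distortions caused by an
        # inaccurate depth estimate, using the diffusion model as a geometry critic.
        target_view = undistort(target_view)

        # Phase 3 -- shape alignment: optimize the depth map so that dense
        # correspondences between the input object and the undistorted target
        # view are consistent with the target pose.
        depth = align_shape(image, target_view, depth, target_pose)

    return target_view
```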
Interpolated pose sequences: rotation from −30° to +30°, and translation from further left to closer right.
We showcase the shape and appearance consistency of our editing method by presenting two sequences of results with interpolated poses. We define two opposite target transformations, interpolate between them, and apply our method to each step in the sequence, demonstrating strong shape and texture consistency under both rotation and translation.
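A minimal sketch of how such a sequence could be produced, assuming the hypothetical `iterative_3d_edit` sketch above and a simple helper that builds a rigid object transform from a yaw angle and a translation:

```python
import numpy as np


def object_pose(yaw_deg: float, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 rigid transform from a yaw rotation (degrees) and a translation."""
    theta = np.deg2rad(yaw_deg)
    pose = np.eye(4)
    pose[:3, :3] = np.array([
        [np.cos(theta), 0.0, np.sin(theta)],
        [0.0,           1.0, 0.0],
        [-np.sin(theta), 0.0, np.cos(theta)],
    ])
    pose[:3, 3] = translation
    return pose


def interpolated_rotation_edits(image, mask, components, steps=7):
    """Apply the editing loop at poses interpolated between -30° and +30° yaw."""
    results = []
    for t in np.linspace(0.0, 1.0, steps):
        # Interpolate the yaw angle between the two opposite targets;
        # translation sequences are produced analogously by interpolating the offset.
        pose = object_pose(yaw_deg=-30.0 + 60.0 * t, translation=np.zeros(3))
        results.append(iterative_3d_edit(image, mask, pose, *components))
    return results
```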
@misc{wang2024diff3dedit,
title={Diffusion Models are Geometry Critics: Single Image 3D Editing Using Pre-Trained Diffusion Priors},
author={Ruicheng Wang and Jianfeng Xiang and Jiaolong Yang and Xin Tong},
year={2024},
eprint={2403.11503},
archivePrefix={arXiv},
primaryClass={cs.CV}
}