We present MoGe, a powerful model for recovering 3D geometry from monocular open-domain images. Given a single image, our model directly predicts a 3D point map of the captured scene with an affine-invariant representation, which is agnostic to the true global scale and shift. This representation precludes ambiguous supervision during training and facilitates effective geometry learning. Furthermore, we propose a set of novel global and local geometry supervisions that empower the model to learn high-quality geometry: a robust, optimal, and efficient point cloud alignment solver for accurate global shape learning, and a multi-scale local geometry loss for precise local geometry supervision. We train our model on a large, mixed dataset and demonstrate its strong generalizability and high accuracy. In a comprehensive evaluation on diverse unseen datasets, our model significantly outperforms state-of-the-art methods across all tasks, including monocular estimation of 3D point maps, depth maps, and camera field of view.
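To make the affine-invariant formulation concrete, the sketch below shows how a predicted point map could be compared to ground truth only up to a global scale and shift, by solving for both in closed form before measuring error. This is a minimal least-squares illustration under our own assumptions (equal weighting of all points, inputs already masked to valid pixels), not the robust ROE solver described in the paper; the function name is ours.

```python
import numpy as np

def align_affine_invariant(pred, gt):
    """Least-squares global scale s and shift t minimizing ||s * pred + t - gt||^2.

    pred, gt: (N, 3) arrays of corresponding 3D points (e.g. flattened,
    validly-masked point maps). Closed form: t = mean(gt) - s * mean(pred),
    s = <pred_centered, gt_centered> / ||pred_centered||^2.
    Note: MoGe's ROE solver optimizes a robust (truncated) objective instead.
    """
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    pc, gc = pred - mu_p, gt - mu_g
    s = (pc * gc).sum() / (pc * pc).sum()   # optimal global scale
    t = mu_g - s * mu_p                     # optimal global shift (3-vector)
    return s, t

# Example usage: error measured only after optimal alignment.
# s, t = align_affine_invariant(pred_points, gt_points)
# error = np.abs(s * pred_points + t - gt_points).mean()
```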
Click on the images below to see our point map results as meshes in a 3D viewer.
• Scroll to zoom in/out
• Drag to rotate
• Press "shift" and drag to pan
• Click on the buttons at the top to switch texture color on/off
We predict a point map for each video frame and then register the frames with rigid (similarity) transformations computed from image matches (PDCNet).
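The sketch below shows one way such a similarity registration could be computed once 3D correspondences between two frames' point maps are available (e.g. from PDCNet pixel matches). It uses the classic Umeyama closed-form solution; the function name and inputs are illustrative, not MoGe's actual pipeline code.

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Closed-form similarity transform (scale s, rotation R, translation t)
    minimizing sum_i ||s * R @ src[i] + t - dst[i]||^2  (Umeyama, 1991).

    src, dst: (N, 3) arrays of corresponding (valid) 3D points.
    """
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)          # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # guard against reflections
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Example: register frame B's point map into frame A's coordinates, given
# matched pixel indices (idx_a, idx_b) from a matcher such as PDCNet.
# s, R, t = umeyama_similarity(points_b[idx_b], points_a[idx_a])
# points_b_registered = s * points_b @ R.T + t
```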
Select a method from the dropdown menu to compare its results with MoGe's side by side.
See how our method compares to other methods on uncurated images (the first 100 images of DIV2K).
*This method does not predict camera intrinsics; our predicted intrinsics are used instead.
@misc{wang2024moge,
title={MoGe: Unlocking Accurate Monocular Geometry Estimation for Open-Domain Images with Optimal Training Supervision},
author={Wang, Ruicheng and Xu, Sicheng and Dai, Cassie and Xiang, Jianfeng and Deng, Yu and Tong, Xin and Yang, Jiaolong},
year={2024},
eprint={2410.19115},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2410.19115},
}