Representing 3D Shapes with 64 Latent Vectors for 3D Diffusion Models

ICCV 2025

Yonsei University

Abstract

Constructing a compressed latent space through a variational autoencoder (VAE) is key to efficient 3D diffusion models. This paper introduces COD-VAE, which encodes 3D shapes into a COmpact set of 1D latent vectors without sacrificing quality. COD-VAE introduces a two-stage autoencoder scheme to improve compression and decoding efficiency. First, our encoder block progressively compresses point clouds into compact latent vectors via intermediate point patches. Second, our triplane-based decoder reconstructs dense triplanes from the latent vectors instead of directly decoding neural fields, significantly reducing the computational overhead of neural field decoding. Finally, we propose uncertainty-guided token pruning, which allocates resources adaptively by skipping computations in simpler regions, improving decoder efficiency. Experimental results demonstrate that COD-VAE achieves 16x compression compared to the baseline while maintaining quality. This enables a 20.8x speedup in generation, highlighting that a large number of latent vectors is not a prerequisite for high-quality reconstruction and generation.

Overview

We introduce COD-VAE, a VAE that encodes 3D shapes into a COmpact set of 1D vectors with an improved compression ratio. COD-VAE replaces the direct mappings between points and latent vectors with a two-stage autoencoder scheme. This scheme enables our model to construct a significantly compressed latent space, thereby accelerating subsequent diffusion models.

COD-VAE achieves high-quality reconstruction with 16x fewer latent vectors, as well as a 20.8x speedup in generation. These results highlight that a large number of latent vectors is not a prerequisite for high-quality reconstruction and generation. We only need 64 latent vectors to surpass the results of VecSet with 512 or even 1024 latent vectors.

(top) Reconstruction IoU and Rendering-FID of the generation results with varying numbers of latent vectors (M). Our COD-VAE outperforms VecSet using 16× fewer latent vectors, achieving 20.8× generation speedup.

(bottom) VecSet with M = 64 struggles to capture details, while our model accurately reconstructs detailed and complex shapes of the objects.

Method

We propose a two-stage autoencoder scheme to obtain compact 1D latent vectors. The encoder leverages intermediate point patches with a moderate compression ratio, and the decoder reconstructs triplanes from the latent vectors.

Our encoder block first projects high-resolution point features to intermediate point patches by leveraging the attention-based downsampling of VecSet as a learnable point patchifier. These patches are then processed by self-attention layers and compressed into compact 1D latent vectors. The global information of the latent vectors is mapped back to the points at the end of the block, further refining the high-resolution features.
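The attention-based downsampling above can be sketched as a simple cross-attention from anchor-point queries to all point features. The sketch below is a minimal single-head NumPy illustration, not the paper's implementation: shapes, the anchor-selection strategy, and the absence of learned projections are all simplifying assumptions.

```python
# Minimal sketch of attention-based downsampling used as a point patchifier.
# Hypothetical shapes; real blocks use learned Q/K/V projections and multiple heads.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_downsample(point_feats, query_feats):
    """Compress N point features into K patch features via cross-attention.

    point_feats:  (N, C) high-resolution point features (keys/values)
    query_feats:  (K, C) features of K subsampled anchor points (queries)
    returns:      (K, C) compressed patch features
    """
    C = point_feats.shape[1]
    scores = query_feats @ point_feats.T / np.sqrt(C)   # (K, N)
    weights = softmax(scores, axis=-1)                  # each patch attends to all points
    return weights @ point_feats                        # (K, C)

rng = np.random.default_rng(0)
points = rng.normal(size=(2048, 32))                    # e.g. N = 2048 input point features
anchors = points[rng.choice(2048, 512, replace=False)]  # K = 512 subsampled anchors
patches = attention_downsample(points, anchors)
print(patches.shape)  # (512, 32)
```

Stacking such blocks with self-attention layers, and finally cross-attending into a much smaller query set, yields the compact 1D latent vectors (e.g. 64 of them).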

Our decoder leverages dense triplanes as intermediate representations for an efficient yet effective decoding process. We treat dense triplane embeddings as mask tokens and reconstruct them from the latent vectors using transformers.
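Once the triplanes are reconstructed, a neural field can be queried cheaply: each 3D point is projected onto the three axis-aligned planes, features are bilinearly sampled from each plane, and the results are aggregated before a lightweight MLP. The following is a minimal NumPy sketch of this standard triplane query (summation as the aggregation and the plane resolution are assumptions for illustration).

```python
# Minimal sketch of querying features from axis-aligned triplanes.
import numpy as np

def bilinear_sample(plane, u, v):
    """Bilinearly sample an (R, R, C) feature plane at continuous coords (u, v)."""
    R = plane.shape[0]
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, R - 1), min(v0 + 1, R - 1)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * plane[u0, v0] +
            du * (1 - dv) * plane[u1, v0] +
            (1 - du) * dv * plane[u0, v1] +
            du * dv * plane[u1, v1])

def query_triplane(planes, xyz):
    """Aggregate features for one 3D point from three axis-aligned planes.

    planes: dict with 'xy', 'xz', 'yz' -> (R, R, C) feature planes
    xyz:    point in [-1, 1]^3
    """
    R = planes['xy'].shape[0]
    to_px = lambda t: (t + 1.0) * 0.5 * (R - 1)  # map [-1, 1] to pixel coords
    x, y, z = (to_px(t) for t in xyz)
    return (bilinear_sample(planes['xy'], x, y) +
            bilinear_sample(planes['xz'], x, z) +
            bilinear_sample(planes['yz'], y, z))

planes = {k: np.ones((8, 8, 4)) for k in ('xy', 'xz', 'yz')}  # toy constant planes
feat = query_triplane(planes, np.array([0.3, -0.2, 0.5]))
print(feat)  # [3. 3. 3.]
```

Because the per-point cost is only three bilinear lookups plus a small MLP, dense field evaluation becomes far cheaper than cross-attending every query point against the latent vectors.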

Uncertainty-guided token pruning reduces computation in simple regions, further improving decoder efficiency. By pruning regions with lower uncertainty, our decoder prioritizes computational resources for reconstructing more complex regions.
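The core mechanics of such pruning can be illustrated in a few lines: score each triplane token with a predicted uncertainty, keep only the top fraction for further transformer refinement, and scatter the refined tokens back so pruned tokens pass through unchanged. The sketch below is a schematic NumPy version; the keep ratio, the uncertainty predictor, and the pass-through behavior are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of uncertainty-guided token pruning.
import numpy as np

def prune_by_uncertainty(tokens, uncertainty, keep_ratio=0.5):
    """Keep only the most uncertain tokens for further decoding.

    tokens:      (T, C) triplane tokens
    uncertainty: (T,)   predicted per-token uncertainty
    returns (kept_tokens, kept_idx) so refined tokens can be scattered back.
    """
    k = max(1, int(len(tokens) * keep_ratio))
    kept_idx = np.argsort(-uncertainty)[:k]  # indices of the top-k uncertain tokens
    return tokens[kept_idx], kept_idx

def scatter_back(tokens, refined, kept_idx):
    """Write refined tokens back; pruned (low-uncertainty) tokens pass through unchanged."""
    out = tokens.copy()
    out[kept_idx] = refined
    return out

tokens = np.zeros((4, 2))
unc = np.array([0.1, 0.9, 0.2, 0.8])
kept, idx = prune_by_uncertainty(tokens, unc, keep_ratio=0.5)
out = scatter_back(tokens, kept + 1.0, idx)  # pretend refinement adds 1
print(sorted(idx.tolist()))  # [1, 3]
```

Only the kept tokens go through the later (expensive) transformer layers, so compute scales with shape complexity rather than with the full triplane resolution.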

Results

Reconstruction results

Reconstruction results on ShapeNet and Objaverse. Our models with 64 latent vectors achieve reconstruction quality comparable to VecSet with 512 or 1024 latent vectors, offering a more efficient yet effective option for diffusion models.

Generation results

Our model can generate high-quality and diverse 3D shapes with only 64 latent vectors, as shown in the class-conditioned (top) and the unconditional (bottom) generation results.

BibTeX

@inproceedings{cho2025cod,
  author={Cho, In and Yoo, Youngbeom and Jeon, Subin and Kim, Seon Joo},
  title={Representing 3D Shapes with 64 Latent Vectors for 3D Diffusion Models},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}