Recent advances in generative models have enabled the creation of dynamic 4D content (3D objects in motion) from text prompts, which holds potential for applications in virtual worlds, media, and gaming. However, existing methods provide limited control over the appearance of the generated content. In this work, we introduce a method for animating user-provided 3D objects by conditioning on textual prompts to guide the 4D generation, enabling custom animations while maintaining the identity of the original object. We first convert a 3D mesh into a static 4D Neural Radiance Field (NeRF) that preserves the object's visual attributes. We then animate the object using an image-to-video diffusion model driven by text. To improve motion realism, we introduce an incremental viewpoint selection protocol that samples perspectives so as to promote lifelike movement, and a masked Score Distillation Sampling (SDS) loss that leverages attention maps to focus optimization on relevant regions. We evaluate our method on temporal coherence, prompt adherence, and visual fidelity, and find that it outperforms baselines built on alternative approaches, achieving up to a threefold improvement in LPIPS scores and effectively balancing visual quality with dynamic content.
Instead of generating a dynamic 4D object from text alone, one may want to animate an existing 3D object, such as a favorite 3D toy or character. Conditioning 4D generation on 3D assets offers several advantages: it enhances control, leverages existing 3D resources efficiently, and accelerates 4D generation by using the 3D asset as a strong initialization. Despite the availability of extensive collections of high-quality 3D models, current methods have not yet used 3D assets to guide 4D generation.
We introduce 3to4D, a method for generating 4D scenes from user-provided 3D representations, taking a simple approach that incorporates textual descriptions to govern how the 3D object is animated. First, we train a "static" 4D NeRF from the input 3D mesh, capturing the object's appearance from multiple views and replicating it across time. Our method then modifies this 4D object using an image-to-video diffusion model, conditioning the model's first frame on renderings of the input object.
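To make the first stage concrete, below is a minimal sketch of fitting a time-conditioned field so that every time step reproduces multi-view renderings of the input mesh. This is a schematic illustration rather than the paper's implementation: the tiny MLP field, the placeholder renderer, the camera poses, the resolution, and the random target images are all illustrative stand-ins; in practice the targets would come from rasterizing the user-provided mesh and the predictions from volume-rendering the 4D NeRF.

```python
import torch
import torch.nn as nn

class TimeConditionedField(nn.Module):
    """Toy stand-in for a 4D NeRF: maps (x, y, z, t) to RGB + density."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 density
        )

    def forward(self, xyzt):
        return self.mlp(xyzt)

def render_field(field, pose, t, hw=(64, 64)):
    # Placeholder "volume renderer": in practice, rays defined by `pose`
    # would be marched through the field at time t; here we query random
    # points so the sketch runs end to end.
    pts = torch.rand(hw[0] * hw[1], 3)
    xyzt = torch.cat([pts, torch.full((pts.shape[0], 1), t)], dim=-1)
    rgb = torch.sigmoid(field(xyzt)[:, :3])
    return rgb.view(hw[0], hw[1], 3)

# Placeholder multi-view "renderings of the input mesh": in practice these
# come from rasterizing the user-provided 3D asset from each camera pose.
poses = [f"view_{i}" for i in range(8)]
targets = {pose: torch.rand(64, 64, 3) for pose in poses}
times = torch.linspace(0.0, 1.0, steps=8)  # video time steps

field = TimeConditionedField()
opt = torch.optim.Adam(field.parameters(), lr=1e-3)

for step in range(200):
    pose = poses[step % len(poses)]
    t = times[torch.randint(len(times), (1,))].item()   # random frame
    pred = render_field(field, pose, t)                  # same target at every t
    loss = nn.functional.mse_loss(pred, targets[pose])   # photometric loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the same per-view targets are used at every sampled time t, the resulting field is effectively the static object replicated across the temporal axis, which serves as the initialization for the animation stage.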
Unfortunately, we find that applying this approach naively is insufficient, because it dramatically reduces the amount of dynamic motion. To encourage the model to generate more dynamic movement, we propose two key improvements. First, a new camera viewpoint selector incrementally samples viewpoints from a gradually widening range around the object during optimization; this gradual widening enhances the generation process and yields more pronounced movement. Second, we introduce a masked variant of the SDS loss that uses attention maps obtained from the image-to-video model; the mask focuses optimization on object-relevant latent pixels, improving the optimization of elements related to the object.
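The following minimal sketch illustrates the two components under stated assumptions: `sample_viewpoint` draws a camera azimuth from a linearly widening range (the paper's actual schedule and camera parameterization may differ), and `masked_sds_grad` applies an object mask, assumed to be derived from the image-to-video model's attention maps, to a standard SDS-style gradient. The tensor shapes and the random stand-ins for the denoiser outputs are illustrative only.

```python
import torch

def sample_viewpoint(step, total_steps, max_azimuth=180.0):
    """Draw a camera azimuth (degrees) from a range that widens as
    optimization progresses; early steps stay close to the front view."""
    frac = min(1.0, step / (0.5 * total_steps))  # full range reached halfway in
    half_range = frac * max_azimuth
    return (2.0 * torch.rand(1).item() - 1.0) * half_range

def masked_sds_grad(noise_pred, noise, attn_mask, weight=1.0):
    """SDS-style gradient direction, restricted to object-relevant latent
    pixels by an attention-derived mask (1 = object, 0 = background)."""
    grad = weight * (noise_pred - noise)  # standard SDS direction
    return grad * attn_mask               # suppress background regions

# Toy usage with random stand-ins for the diffusion quantities.
latents = torch.randn(1, 4, 16, 32, 32, requires_grad=True)  # (B, C, T, H, W)
noise = torch.randn_like(latents)
noise_pred = torch.randn_like(latents)                        # from the I2V denoiser
attn_mask = (torch.rand(1, 1, 1, 32, 32) > 0.5).float()       # from attention maps

azimuth = sample_viewpoint(step=100, total_steps=1000)        # narrow range early on
grad = masked_sds_grad(noise_pred, noise, attn_mask)
latents.backward(gradient=grad)  # accumulate the custom SDS gradient
```

In the full method, this gradient would flow through the rendered video latents back into the 4D representation's parameters rather than into a free latent tensor as in this toy example.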
As a source of 3D objects, we used the Google Scanned Objects (GSO) dataset, a collection of high-quality 3D scans of everyday items.
@misc{rahamim2024bringingobjectslife4d,
      title={Bringing Objects to Life: 4D generation from 3D objects},
      author={Ohad Rahamim and Ori Malca and Dvir Samuel and Gal Chechik},
      year={2024},
      eprint={2412.20422},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.20422},
}