Text to 3D
59 papers with code • 1 benchmark • 1 dataset
Libraries
Use these libraries to find Text to 3D models and implementations.
Most implemented papers
DreamFusion: Text-to-3D using 2D Diffusion
Using a loss based on probability density distillation in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss.
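The snippet below is a minimal sketch of this optimization loop (Score Distillation Sampling), assuming a hypothetical differentiable renderer `render(nerf_params, camera)` and a frozen pretrained denoiser `unet(x_noisy, t, text_emb)`; it illustrates the idea, not DreamFusion's released code.

```python
import torch

def sds_step(nerf_params, render, unet, text_emb, optimizer,
             alphas_cumprod, camera):
    """One Score Distillation Sampling update, in the spirit of DreamFusion."""
    x = render(nerf_params, camera)                      # rendered image, requires grad
    t = torch.randint(20, 980, (1,))                     # random diffusion timestep
    a_t = alphas_cumprod[t].view(1, 1, 1, 1)
    noise = torch.randn_like(x)
    x_noisy = a_t.sqrt() * x + (1 - a_t).sqrt() * noise  # forward diffusion
    with torch.no_grad():
        eps_pred = unet(x_noisy, t, text_emb)            # frozen score estimate
    w = 1 - a_t                                          # timestep weighting
    grad = w * (eps_pred - noise)                        # SDS gradient (no U-Net backprop)
    optimizer.zero_grad()
    x.backward(gradient=grad)                            # inject grad into renderer params
    optimizer.step()
```

Note that the gradient is injected directly at the rendered image, so backpropagation never touches the diffusion U-Net; only the 3D model's parameters are updated.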
Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation
Key to Fantasia3D is the disentangled modeling and learning of geometry and appearance.
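A minimal sketch of such disentangled, two-stage optimization, assuming hypothetical `geometry` and `appearance` modules with their own losses (Fantasia3D itself supervises geometry via normal/mask renderings of a DMTet surface and appearance via a spatially varying BRDF):

```python
import torch

def fit_disentangled(geometry, appearance, geometry_loss_fn, appearance_loss_fn,
                     steps_geo=3000, steps_app=2000, lr=1e-3):
    # Stage 1: optimize geometry only.
    opt_g = torch.optim.Adam(geometry.parameters(), lr=lr)
    for _ in range(steps_geo):
        opt_g.zero_grad()
        geometry_loss_fn(geometry).backward()
        opt_g.step()
    # Stage 2: freeze geometry, optimize appearance on shaded renders.
    for p in geometry.parameters():
        p.requires_grad_(False)
    opt_a = torch.optim.Adam(appearance.parameters(), lr=lr)
    for _ in range(steps_app):
        opt_a.zero_grad()
        appearance_loss_fn(geometry, appearance).backward()
        opt_a.step()
```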
Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures
This unique combination of text and shape guidance allows for increased control over the generation process.
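One way to read "text plus shape guidance" is as a weighted sum of a diffusion-based text loss and an occupancy term tying the NeRF density to a guiding mesh. The sketch below is a hypothetical version of that combination; Latent-NeRF's actual Sketch-Shape loss is annealed with distance from the guiding surface.

```python
import torch.nn.functional as F

def text_and_shape_loss(occ_logits, occ_target, sds_loss, lam=0.5):
    """Hypothetical combination of text guidance (an SDS-style loss) with
    shape guidance: a cross-entropy pulling the NeRF's predicted occupancy
    toward the occupancy of a user-provided guiding mesh."""
    shape_loss = F.binary_cross_entropy_with_logits(occ_logits, occ_target)
    return sds_loss + lam * shape_loss
```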
Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior
In this work, we investigate the problem of creating high-fidelity 3D content from only a single image.
ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation
In comparison, VSD works well with various CFG weights as ancestral sampling from diffusion models, and simultaneously improves the diversity and sample quality with a common CFG weight (i.e., $7.5$).
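For reference, the classifier-free guidance (CFG) weight $w$ enters the score estimate as below; SDS-based methods such as DreamFusion typically need a large weight (around 100), while VSD is reported to work at the common $w = 7.5$. The `unet` here is a hypothetical stand-in for a pretrained text-conditioned denoiser.

```python
def cfg_noise_pred(unet, x_noisy, t, text_emb, null_emb, w=7.5):
    """Classifier-free guidance: extrapolate from the unconditional score
    toward the text-conditioned one with weight w."""
    eps_uncond = unet(x_noisy, t, null_emb)  # empty-prompt prediction
    eps_text = unet(x_noisy, t, text_emb)    # text-conditioned prediction
    return eps_uncond + w * (eps_text - eps_uncond)
```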
One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization
Single image 3D reconstruction is an important but challenging task that requires extensive knowledge of our natural world.
SyncDreamer: Generating Multiview-consistent Images from a Single-view Image
In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent images from a single-view image.
GeoDream: Disentangling 2D and Geometric Priors for High-Fidelity and Consistent 3D Generation
We justify that the refined 3D geometric priors aid in the 3D-aware capability of 2D diffusion priors, which in turn provides superior guidance for the refinement of 3D geometric priors.
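Read as pseudocode, this describes an alternating refinement loop; the sketch below is purely illustrative, with hypothetical `diffusion_guidance` and `refine_geometry` callables standing in for GeoDream's components.

```python
def mutual_refinement(geo_prior, diffusion_guidance, refine_geometry, rounds=3):
    """Alternate between the two priors: the 3D geometric prior makes the
    2D diffusion guidance 3D-aware, and that guidance in turn refines the
    geometric prior (an illustrative loop, not GeoDream's exact procedure)."""
    for _ in range(rounds):
        guidance = diffusion_guidance(geo_prior)          # 3D prior conditions 2D guidance
        geo_prior = refine_geometry(geo_prior, guidance)  # guidance refines 3D prior
    return geo_prior
```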
Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting
Building upon our MVControl architecture, we employ a unique hybrid diffusion guidance method to direct the optimization process.
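"Hybrid diffusion guidance" suggests blending gradients from more than one prior; the weighting schedule below is a hypothetical illustration of such a blend, not MVControl's published scheme.

```python
import torch

def hybrid_guidance_grad(grad_2d: torch.Tensor, grad_3d: torch.Tensor,
                         step: int, total_steps: int) -> torch.Tensor:
    """Blend gradients from a 2D text-to-image prior and a 3D-aware
    multi-view prior, leaning on the 3D-aware prior early for coarse
    structure and on the 2D prior later for fine detail (hypothetical
    weights and schedule)."""
    w_2d = step / total_steps
    w_3d = 1.0 - w_2d
    return w_2d * grad_2d + w_3d * grad_3d
```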
DreamView: Injecting View-specific Text Guidance into Text-to-3D Generation
Text-to-3D generation, which synthesizes 3D assets according to an overall text description, has significantly progressed.
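The simplest form of view-specific text guidance is azimuth-based prompt augmentation, as popularized by DreamFusion; DreamView goes further and injects per-view guidance inside the diffusion model, but the heuristic below illustrates the basic idea.

```python
def view_specific_prompt(base_prompt: str, azimuth_deg: float) -> str:
    """Append a coarse view phrase based on camera azimuth (a simple
    heuristic; DreamView replaces this with guidance injected per view
    inside the diffusion model)."""
    azimuth = azimuth_deg % 360
    if azimuth < 45 or azimuth >= 315:
        view = "front view"
    elif azimuth < 135:
        view = "side view"
    elif azimuth < 225:
        view = "back view"
    else:
        view = "side view"
    return f"{base_prompt}, {view}"
```

For example, `view_specific_prompt("a ceramic teapot", 180.0)` yields `"a ceramic teapot, back view"`, so each sampled camera gets text guidance consistent with what it should see.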