Text to 3D

59 papers with code • 1 benchmark • 1 dataset


Most implemented papers

DreamFusion: Text-to-3D using 2D Diffusion

ashawkey/stable-dreamfusion 29 Sep 2022

Using this loss in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss.
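The excerpt above describes score distillation: renderings of the NeRF are noised, denoised by a frozen text-conditioned diffusion model, and the residual is pushed back through the differentiable renderer. Below is a minimal sketch of one such update, not the stable-dreamfusion code; `diffusion` and `render_random_view` are hypothetical stand-ins for a pretrained 2D diffusion wrapper and a differentiable NeRF renderer.

```python
import torch

def sds_step(render_random_view, diffusion, text_embedding, optimizer):
    """One score-distillation update of the 3D model's parameters (illustrative sketch)."""
    image = render_random_view()                           # differentiable rendering from a random camera, (1, 3, H, W)
    t = torch.randint(20, 980, (1,), device=image.device)  # random diffusion timestep
    noise = torch.randn_like(image)
    noisy = diffusion.add_noise(image, noise, t)           # forward-diffuse the rendering
    with torch.no_grad():                                  # the 2D diffusion model stays frozen
        noise_pred = diffusion.predict_noise(noisy, t, text_embedding)
    grad = noise_pred - noise                              # score-distillation residual
    loss = (grad.detach() * image).sum()                   # surrogate loss whose gradient w.r.t. the image is `grad`
    optimizer.zero_grad()
    loss.backward()                                        # gradient flows only through the renderer, not the diffusion model
    optimizer.step()
    return loss.item()
```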

Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures

eladrich/latent-nerf CVPR 2023

This unique combination of text and shape guidance allows for increased control over the generation process.

Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior

junshutang/Make-It-3D ICCV 2023

In this work, we investigate the problem of creating high-fidelity 3D content from only a single image.

ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation

threestudio-project/threestudio NeurIPS 2023

In comparison, VSD works well with various CFG weights as ancestral sampling from diffusion models and simultaneously improves the diversity and sample quality with a common CFG weight (i.e., $7.5$).
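The CFG weight referenced above is the classifier-free guidance scale. As a generic illustration (not ProlificDreamer's code, and assuming a hypothetical `predict_noise` interface), the weight $w$ mixes conditional and unconditional noise predictions:

```python
def cfg_noise_prediction(diffusion, noisy_image, t, text_embedding, null_embedding, w=7.5):
    # Two denoiser passes: with the text condition and with an empty (null) condition.
    eps_cond = diffusion.predict_noise(noisy_image, t, text_embedding)
    eps_uncond = diffusion.predict_noise(noisy_image, t, null_embedding)
    # w = 1 recovers the purely conditional prediction; larger w (e.g. 7.5)
    # pushes the sample harder toward the text condition.
    return eps_uncond + w * (eps_cond - eps_uncond)
```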

One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization

One-2-3-45/One-2-3-45 NeurIPS 2023

Single image 3D reconstruction is an important but challenging task that requires extensive knowledge of our natural world.

SyncDreamer: Generating Multiview-consistent Images from a Single-view Image

liuyuan-pal/syncdreamer 7 Sep 2023

In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent images from a single-view image.

GeoDream: Disentangling 2D and Geometric Priors for High-Fidelity and Consistent 3D Generation

baaivision/GeoDream 29 Nov 2023

We justify that the refined 3D geometric priors aid in the 3D-aware capability of 2D diffusion priors, which in turn provides superior guidance for the refinement of 3D geometric priors.

Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting

WU-CVGL/MVControl-threestudio 15 Mar 2024

Building upon our MVControl architecture, we employ a unique hybrid diffusion guidance method to direct the optimization process.
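The excerpt does not spell out the hybrid guidance; one plausible reading is a weighted combination of SDS-style residuals from a 2D text-to-image prior and a 3D-aware, control-conditioned multi-view prior. The interfaces and weights below are assumptions for illustration only, not the MVControl-threestudio implementation.

```python
import torch

def hybrid_guidance_grad(image, t, noise, prior_2d, prior_3d, cond_text, cond_mv,
                         lambda_2d=1.0, lambda_3d=0.5):
    noisy = prior_2d.add_noise(image, noise, t)        # assume both priors share one noise schedule
    with torch.no_grad():                              # both diffusion priors stay frozen
        eps_2d = prior_2d.predict_noise(noisy, t, cond_text)  # text-conditioned 2D prior
        eps_3d = prior_3d.predict_noise(noisy, t, cond_mv)    # multi-view / control-conditioned prior
    # Weighted sum of the two SDS-style residuals; this gradient is then pushed
    # back through the differentiable (Gaussian splatting) renderer.
    return lambda_2d * (eps_2d - noise) + lambda_3d * (eps_3d - noise)
```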

DreamView: Injecting View-specific Text Guidance into Text-to-3D Generation

isee-laboratory/dreamview 9 Apr 2024

Text-to-3D generation, which synthesizes 3D assets according to an overall text description, has significantly progressed.