Description (GET3D)
We generate a 3D signed distance field (SDF) and a texture field from two latent codes. DMTet extracts a 3D surface mesh from the SDF, and we query the texture field at the surface points to obtain colors. Training uses adversarial losses defined on 2D images: a rasterization-based differentiable renderer produces RGB images and silhouettes, and two separate 2D discriminators, one for RGB images and one for silhouettes, classify the inputs as real or fake. The whole framework is trainable end to end. As several industries move toward creating large-scale 3D virtual worlds, there is a growing need for scalable tools that can produce large amounts of high-quality, diverse 3D content. Our work aims at 3D generative models that synthesize textured meshes which can be imported directly into 3D rendering engines and used immediately in downstream applications, from virtual reality to games.
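To make the data flow concrete, here is a minimal PyTorch sketch of the pipeline described above. It is an illustrative stand-in, not the official GET3D code: the FieldMLP networks, the point set, and the near-surface selection are simplified assumptions, and DMTet mesh extraction, the differentiable rasterizer, and the two discriminators are only indicated in comments.

```python
import torch
import torch.nn as nn

class FieldMLP(nn.Module):
    """Small MLP mapping a 3D point plus a latent code to an output vector."""
    def __init__(self, latent_dim=128, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, points, latent):
        # points: (N, 3); latent: (latent_dim,), broadcast to every point
        z = latent.expand(points.shape[0], -1)
        return self.net(torch.cat([points, z], dim=-1))

# Two latent codes: one controls geometry (the SDF), one controls texture.
z_geo, z_tex = torch.randn(128), torch.randn(128)
sdf_field = FieldMLP(out_dim=1)      # signed distance at a query point
texture_field = FieldMLP(out_dim=3)  # RGB color at a surface point

# Query the SDF on a point set; in GET3D, DMTet turns these SDF values into
# an explicit triangle mesh. Here we only show the field evaluation itself.
grid = torch.rand(4096, 3) * 2 - 1                    # points in [-1, 1]^3
sdf_values = sdf_field(grid, z_geo)                   # (4096, 1)

# Sample the texture field at (approximate) surface points to get colors.
near_surface = sdf_values.squeeze(-1).abs() < 0.1
surface_colors = texture_field(grid[near_surface], z_tex)

# In the full pipeline, a rasterization-based differentiable renderer turns the
# textured mesh into an RGB image and a silhouette, and two separate 2D
# discriminators (one for RGB, one for silhouettes) supply adversarial losses,
# so gradients flow back to z_geo and z_tex and training is end to end.
```

Because every stage (field evaluation, mesh extraction, rasterization) is differentiable, the adversarial losses on the rendered 2D images are what drive the two latent codes, which is what the end-to-end claim above refers to.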
Description (Magic3D)
Magic3D generates high-quality 3D textured mesh models from text prompts. It uses a coarse-to-fine strategy that leverages both low- and high-resolution diffusion priors to learn the 3D representation of the target content, producing results with 8x higher-resolution supervision than DreamFusion while being 2x faster. Together with image-conditioning techniques and a prompt-based editing method, this gives users new ways to control 3D synthesis: once a coarse model has been generated from an initial text prompt, parts of the prompt can be modified and the NeRF and 3D mesh models fine-tuned accordingly, yielding an edited high-resolution 3D mesh. This flexibility opens up new avenues for creative workflows and detailed 3D visualization.
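The coarse-to-fine procedure can be sketched as a two-stage optimization loop. The PyTorch skeleton below is hypothetical: the flat parameter tensors, the fake "render" step, and placeholder_guidance_loss are invented stand-ins for Magic3D's NeRF/mesh representations and its diffusion-prior guidance; only the two-stage structure and the edit-the-prompt-then-fine-tune idea mirror the description above.

```python
import torch

def placeholder_guidance_loss(rendered_image, prompt):
    # Stand-in for a guidance loss from a frozen text-conditioned diffusion
    # prior; a real implementation would query that model here.
    return rendered_image.pow(2).mean()

prompt = "a small castle made of ice"  # illustrative prompt only

# Stage 1 (coarse): optimize a low-resolution scene representation (a NeRF in
# Magic3D) against a low-resolution diffusion prior.
coarse_params = torch.randn(1000, requires_grad=True)
opt = torch.optim.Adam([coarse_params], lr=1e-2)
for step in range(200):
    low_res_render = coarse_params.view(10, 10, 10).mean(dim=0)  # fake "render"
    loss = placeholder_guidance_loss(low_res_render, prompt)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2 (fine): initialize a textured 3D mesh from the coarse result and
# fine-tune it against a high-resolution diffusion prior (8x the supervision
# resolution of DreamFusion, per the description above).
mesh_params = coarse_params.detach().clone().requires_grad_(True)
opt = torch.optim.Adam([mesh_params], lr=1e-3)
edited_prompt = prompt  # prompt-based editing: change the text, then fine-tune
for step in range(200):
    high_res_render = mesh_params.view(10, 10, 10).mean(dim=0)
    loss = placeholder_guidance_loss(high_res_render, edited_prompt)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this reading, prompt-based editing amounts to rerunning only the second, cheaper fine-tuning stage with a modified prompt rather than regenerating the coarse model from scratch.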
API Access (GET3D)
Has API
API Access (Magic3D)
Has API
Integrations (GET3D)
No details available.
Integrations (Magic3D)
No details available.
Pricing Details (GET3D)
No price information available.
Pricing Details (Magic3D)
No price information available.
Vendor Details (GET3D)
Company Name: NVIDIA
Country: United States
Website: nv-tlabs.github.io/GET3D/
Vendor Details (Magic3D)
Company Name: NVIDIA
Website: research.nvidia.com/labs/dir/magic3d/