Stable Diffusion AI Art Generator Now Has an Official Blender Plug-In

A popular app for 3D artists just received an accessible way to experiment with generative AI: Stability AI has released Stability for Blender, an official Stable Diffusion plug-in that brings a suite of generative AI tools to the free 3D modeling software. The Verge reports: The add-on allows Blender artists to create images using text descriptions directly within the software -- just like the Stable Diffusion text-to-image generator. You can also create images using existing renders, allowing you to experiment with various styles for a project without having to completely remodel the scene you're working on. Textures can similarly be generated using text prompts alongside reference images, and there's also a function to create animations from existing renders. The results for the latter are... questionable, even in Stability's own examples, but it's fun to play around with crudely transforming your projects into a video format.

Stability for Blender is completely free and doesn't require any additional software or even a dedicated GPU to run. Provided you have the latest version of Blender installed, all you need to get Stable Diffusion running inside it is an internet connection and a Stability API key (which you can get directly from Stability AI). Installing the plug-in is relatively straightforward, and Stability has provided several tutorials that walk through how to use its various features.
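For a sense of what the plug-in does under the hood, here is a minimal sketch of a text-to-image request against Stability's hosted API using such an API key. The endpoint path, engine id, and parameters are assumptions based on Stability's public v1 REST API documentation, not code taken from the add-on itself:

```python
# Minimal sketch: send a text prompt to Stability's hosted Stable Diffusion and
# save the returned image. Endpoint path and engine id are assumptions based on
# Stability's public v1 REST API docs, not lifted from the Blender add-on.
import base64
import os

import requests

API_KEY = os.environ["STABILITY_API_KEY"]  # the key you get from Stability AI
ENGINE_ID = "stable-diffusion-v1-5"        # assumed engine id; check your account
URL = f"https://api.stability.ai/v1/generation/{ENGINE_ID}/text-to-image"

response = requests.post(
    URL,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Accept": "application/json",
    },
    json={
        "text_prompts": [{"text": "concept art of a wooden hut in a forest clearing"}],
        "width": 512,
        "height": 512,
        "steps": 30,
        "samples": 1,
    },
    timeout=120,
)
response.raise_for_status()

# Images come back base64-encoded in an "artifacts" list.
for i, artifact in enumerate(response.json()["artifacts"]):
    with open(f"generated_{i}.png", "wb") as f:
        f.write(base64.b64decode(artifact["base64"]))
```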


  • it's awesome, but... (Score:4, Informative)

    by test321 ( 8891681 ) on Friday March 03, 2023 @08:56PM (#63340903)

    it will be even more awesome when they integrate a 3D modelling AI into Blender. Their current plug-in is post-process: it takes a Blender render and tunes it for a stylized finish. This could already be done outside Blender; integrating it just makes it easier. But what we really need is to type "an intricate wooden carving of a squirrel riding a motorcycle" and get a 3D model to play with in Blender, which would dramatically accelerate the initial phase of 3D projects. The squirrel example is from this 3D modelling AI here https://dreamfusion3d.github.i... [github.io] (they do mention the link inside TFA)

    • it will be even more awesome when they integrate a 3D modelling AI into Blender.

      I agree but I'm not sure how practical that is. Polygonal geometry is a LOT more strict about how shit is globbed together than 2d images are. Apps that make 3d accessible to those who aren't interested in the technical aspects of a polygonal workflow (Google Sketchup comes to mind) often have quirks in the resulting geo that have to be corrected/cleaned up/rebuilt before it can be used as intended elsewhere.

      They could go volumetric (think ZBrush) then generate a mesh from the point cloud... and that'll work but it won't be easy for an artist to make adjustments to because a. it's heavy and b. you wouldn't have the polygon flows needed to make proper adjustments.

      • I agree but I'm not sure how practical that is.

        I love how humans have the tendency to say, "That sounds hard for AI!" and then throw their hands up in the air. When in fact, what you describe is pretty trivial.

        They could go volumetric (think ZBrush) then generate a mesh from the point cloud... and that'll work but it won't be easy for an artist to make adjustments to because a. it's heavy and b. you wouldn't have the polygon flows needed to make proper adjustments.

        That's exactly what current approaches do.
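        As a rough sketch of the "go volumetric, then mesh the point cloud" step being discussed here, the snippet below runs Open3D's Poisson reconstruction over a point cloud. The file names are placeholders, and this is not code from any particular text-to-3D system; it just illustrates why the result comes out heavy and without useful polygon flow.

```python
# Rough illustration of meshing a generated point cloud with Open3D's Poisson
# reconstruction. File names are placeholders; this is not code from any
# particular text-to-3D system.
import open3d as o3d

# Hypothetical point cloud produced by a generative model.
pcd = o3d.io.read_point_cloud("generated_points.ply")

# Poisson reconstruction needs oriented normals on the points.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# The result is a dense, unstructured triangle mesh -- it can be imported into
# Blender, but an artist would still need retopology to get clean polygon flow.
o3d.io.write_triangle_mesh("reconstructed.ply", mesh)
```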

        • I love how humans have the tendency to say, "That sounds hard for AI!" and then throw their hands up in the air. When in fact, what you describe is pretty trivial.

          My favorite part is where you said I threw my hands up in defeat and then said my guess on how it would work is right.

          Well done. ;)

          • My favorite part is where you said I threw my hands up in defeat and then said my guess on how it would work is right.

            My favorite part was when I told you that AI engineers are already implementing the approach you said was not practical.

      • it won't be easy for an artist to make adjustments to because a. it's heavy and b. you wouldn't have the polygon flows needed to make proper adjustments.

        I agree, but it's like the AI images we have seen. You don't have the canvas, the layers... you can't make adjustments. Still, it took you minutes to create stunning artwork that would be simply impossible for you to generate otherwise, unless you are a world-class artist willing to spend several months on each of these images.

        Maybe AI art is not enough for movie studios, but there are plenty of uses. For the public, it trivializes tasks that were previously reserved for dedicated professionals. For professio

  • "May you live in interesting times."

    Boy, do we...
  • I sense some smoke and mirrors. Note that the user very carefully rotates the 3D hut, and the video cuts right as it shows that the back side has a really shitty mapping. I think it's just making a texture from the angle in the render and 2D projecting it onto the 3D object, meaning you'll get shadows, untextured areas, and weird projection stretching.

    • I think it's just making a texture from the angle in the render and 2D projecting it onto the 3D object, meaning you'll get shadows, untextured areas, and weird projection stretching.

      That's exactly what's happening, but that's also exactly what they're advertising. The plugin generates 2d images. They were demonstrating what can be done once you have those images. I personally would love a tool like this for generating skies with clouds.

      That may seem disappointing to you, but you may be surprised at how much of the 3d stuff you see in movies and tv is actually 2d things placed on cards. In live-action shows the camera rarely does more than move and pan/tilt a little, actual orbits etc a

    • I've worked with this, and there's nothing prohibiting you from using an orthographic view from one side, generating the texture, and doing the same for the other sides. This will generate correct-looking textures - or at least a great base to start from.
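      As a sketch of that workflow in Blender's Python API (assuming Blender 3.2+ for temp_override; the object and image names are hypothetical), projecting UVs from an orthographic view and hooking up the generated image might look like this:

```python
# Sketch: with the 3D Viewport lined up on an orthographic view of one side,
# project UVs from the view and assign the generated image as the base color.
# Object and image names are hypothetical; bpy.ops.uv.project_from_view needs a
# 3D Viewport context, hence the temp_override (Blender 3.2+).
import bpy

obj = bpy.data.objects["Hut"]  # hypothetical object name
obj.select_set(True)
bpy.context.view_layer.objects.active = obj

bpy.ops.object.mode_set(mode="EDIT")
bpy.ops.mesh.select_all(action="SELECT")

# Find a 3D Viewport to project from; its view should already face the side of
# the object the texture was generated for.
for area in bpy.context.window.screen.areas:
    if area.type == "VIEW_3D":
        region = next(r for r in area.regions if r.type == "WINDOW")
        with bpy.context.temp_override(area=area, region=region):
            bpy.ops.uv.project_from_view(orthographic=True, correct_aspect=True)
        break

bpy.ops.object.mode_set(mode="OBJECT")

# Wire the generated image into a simple Principled BSDF material.
img = bpy.data.images.load("//generated_front.png")  # placeholder path
mat = bpy.data.materials.new("ProjectedTexture")
mat.use_nodes = True
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = img
bsdf = mat.node_tree.nodes["Principled BSDF"]
mat.node_tree.links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])
obj.data.materials.append(mat)
```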
  • Well, Bite My Shiny Metal Ass!

    Oh, you said "bLender". I was thinking more of an AI plugin of the "Kill All Humans" variety, like Sydney.

    On a serious note, I don't understand the recent ruling that AI generated works cannot be copyrighted. And here is just another tool that a human artist can use. Why can't this output be copyrighted by the artist, and of what use will this plugin be? Remember my first sentence a second ago where I said I don't understand? I'm so tired of all these AI stories that I admit I

    • by narcc ( 412956 )

      I don't understand the recent ruling that AI generated works cannot be copyrighted.

      The famous "monkey selfie" can't be protected by copyright either, for essentially the same reason. What don't you understand?

      • by cstacy ( 534252 )

        I don't understand the recent ruling that AI generated works cannot be copyrighted.

        The famous "monkey selfie" can't be protected by copyright either, for essentially the same reason. What don't you understand?

        I don't understand the difference between using these different tools: paintbrush, computer simulation of a paintbrush, AI, splattering or dripping paint, having your elephant do it.

        They all just seem like artist's tools to me, so why is using one of them not "creative"?

        • by narcc ( 412956 )

          To be protected by copyright, a work must:
          - be created by a human,
          - be "fixed in a tangible medium of expression," and
          - contain "at least a modicum" of creativity.

          To your specific points of confusion:
          - Jackson Pollock wasn't just randomly dripping paint onto a canvas.
          - The elephant painting, like the AI output, isn't protected by copyright because the artist is not a human.
          - Art created using a paintbrush or a computer simulation of a paintbrush can be protected by copyright provided that the artist is a human and t

    • The copyright issue I do find concerning. Where should the line in the sand be drawn?

      An author writes words that are copyrighted. But they are allowed to use tools like Grammarly, or even spell check, to write part of their document.

      Someone doing digital art may be using brushes that randomly generate brushstrokes for them. Their art is still copyrightable.

      An animated movie traditionally had one set of artists producing keyframes, and another set doing the inbetweening. These days, the inbetweening is typic

  • I hear GIMP 3.0 will have this. It's due for release on the 32nd of Neverember.
  • While the current version really only produces starting points for models, the fact that you can shorten the initial part of modeling already reduces the work hours needed.

    But the big thing is that, in the not-too-distant future, versions of this technology will be able to reduce most of modelling to just the description and Q/A phases, removing the actual modeling entirely.

    In today's games, modeling is usually the biggest single part of the work time. Not in all games, of course, but in most modern 3d games. So

  • On registration they want access to your local profile and access to your email. Fuck them. Sideways.
