– Focused on safe and ethical release
– Compresses the visual information of humanity into a few gigabytes
– An optimized development notebook using the HuggingFace diffusers library
– A public demonstration space can be found on HuggingFace
– The recommended model weights are v1.4 470k
– Can run locally or in the cloud
– Currently, NVIDIA chips are recommended
– Optimized versions of this model will be released
– Collaboration between researchers at Stability AI, RunwayML, LMU Munich, EleutherAI and LAION
– Prompts for sexual or violent imagery are banned
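The bullet points above can be sketched in code. Assuming an environment with torch and diffusers installed and an NVIDIA GPU (per the recommendation above), a minimal way to load the recommended v1.4 weights and generate an image might look like this; the function name and the example prompt are my own:

```python
def generate(prompt, model_id="CompVis/stable-diffusion-v1-4", device="cuda"):
    """Render one image for `prompt` with the Stable Diffusion v1.4 weights.

    Imports happen inside the function so this sketch only needs torch and
    diffusers when it is actually run (the model download is several gigabytes).
    """
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision to fit consumer VRAM
    )
    pipe = pipe.to(device)
    return pipe(prompt).images[0]  # the pipeline returns PIL images


if __name__ == "__main__":
    generate("a dream of a distant galaxy, concept art").save("galaxy.png")
```

This can also run in the free Colab notebook linked below; locally it needs an NVIDIA card with enough VRAM.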
You can also see my first look at Stable Diffusion here.
License – a Hugging Face Space by CompVis
Stable Diffusion with 🧨 diffusers – Colaboratory
Stable Diffusion – a Hugging Face Space by stabilityai
GitHub – xinntao/ESRGAN: ECCV18 Workshops – Enhanced SRGAN. Champion PIRM Challenge on Perceptual Super-Resolution. The training codes are in BasicSR.
Emad on Twitter: “Delighted to announce the public open source release of #StableDiffusion! Please see our release post and retweet! https://t.co/dEsBX7cRHw Proud of everyone involved in releasing this tech that is the first of a series of models to activate the creative potential of humanity” / Twitter
GitHub – CompVis/stable-diffusion
Open-source DALL-E “Open Diffusion” is now available on a website
Stable Diffusion launch announcement — Stability.Ai
Upcoming AI image generator will run on an RTX 3080 | PC Gamer
Stable Diffusion: A Model To Rival DALL·E 2 With Fewer Restrictions – Weights & Biases
Stable Diffusion release within 24-hours (Open version of DALL·E) | Hacker News
Stable Diffusion DreamStudio Beta: First Look | by Lost Books | Aug, 2022 | Medium
[Code Release] textual_inversion, A fine tuning method for diffusion models has been released today, with Stable Diffusion support coming soon™ : StableDiffusion
GitHub – rinongal/textual_inversion
textual_inversion/configs/stable-diffusion at main · rinongal/textual_inversion · GitHub
An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion
ExponentialCookie comments on [Code Release] textual_inversion, A fine tuning method for diffusion models has been released today, with Stable Diffusion support coming soon™
I heard today is the release day for SD…. where can I get access to it? | Jupyter Notebook LibHunt
cpacker (Charles Packer) · GitHub
Lstein/Stable-diffusion Alternatives and Reviews (Aug 2022)
After Text-to-Image, Now it’s Text-to-Video
Stability AI (@StabilityAI) / Twitter
Stable Diffusion 🎨 – News, Art, Updates (@StableDiffusion) / Twitter
#stablediffusion – Twitter Search / Twitter
OK, hello hello, Tyler Bryden here. This is a very exciting moment in, I would say, the history of technology and the history of artificial intelligence, specifically generative AI: the public release of Stable Diffusion, and with it, DreamStudio. I'm going to try to tackle both of these quickly in one video, because this literally just happened. There was a countdown over the last few days pointing to 2:00 PM Eastern, everyone waiting, very excited, and then finally, about twelve minutes ago, just after two (pretty close to schedule for a big release like this), they officially announced the public open source release, along with DreamStudio. So let's take a quick look at the Stable Diffusion public release first, and then we'll talk about DreamStudio as well. Obviously there's a lot of excitement around this, and interestingly, right at the top...
...they're talking about this being a safe and ethical release. There are lots of opportunities for misinformation and for negative uses of technology this powerful, so they worked with the team at Hugging Face to do their best to address that. I'm sure there will be a ton of edge cases, and a lot is going to pop up, but I think people generally have a lot of respect for the way they're doing this and are excited to see an attempt to do it the right way.
A couple of other things, again with Hugging Face: the model is being released under the CreativeML OpenRAIL-M license. I've got too many links open here already, but here we go, here's the actual license. What that means is the model is available for both commercial and non-commercial use, with a focus on ethical and legal use, which makes sense. The fact that it's available for commercial use is pretty massive. In the beta releases of some of these image generation platforms, DALL·E specifically, you originally weren't allowed to do that. So here we are.
It's a big change to be able to do commercial work: if you create an image, it's yours, and you can make money off it. That's a very powerful thing, and there are lots of challenges around it. If this was trained on data created by artists and on other people's images, what is the responsibility around IP? I'm not going to get into that today; it's a very deep issue and beyond me. I'm just celebrating that this is actually released. And here's the crazy part, there's a statement in the announcement I want to find:
"This release is the culmination of many hours of collective effort to create a single file that compresses the visual information of humanity into a few gigabytes." That is absolutely insane. There are rumors of audio and video versions of this coming, and I've seen things floating around Twitter already foreshadowing that, so I think we're in an incredible time. I keep saying that, but I can't help it; I'm overwhelmed by what I'm seeing: the output of this, the enthusiasm around it, the incredible expertise being applied. I don't think we even fully understand the consequences of this release yet. It's a huge, huge moment in time for technology. A couple of other points: this was definitely a close collaboration with Hugging Face, and I think I have the Hugging Face piece here. All of this is linked, so don't worry, you'll have all these resources in this video and on this website. If you're reading this on the website, we've also got...
...a Colab notebook. Here it is, beautiful. You can literally hit play, play, play, play, and then start to generate images from a prompt. There we go. I'm excited already; there's so much potential here. I think of myself as being at the simple periphery of this: I've put in some of my own time, I have some understanding of it, and some ability to apply prompts. I've been looking at the DALL·E prompt guide, which I shared before, so prompt engineering, prompt design.
I'm continuing to refine that skill set so I can get the images I want, but if you've seen the video I made where I try to do image replications, you'll see I've got a way to go. A couple of other things stick out from the release notes: the recommended model weights are listed, it can run locally or in the cloud, and NVIDIA chips are what's currently recommended. You can see in the Google Colab that it's running on an NVIDIA GPU, which is interesting, and there are going to be further optimized versions of this model, along with other variants and architectures that improve performance and quality. Overall, this has been an exciting collaboration between researchers at Stability AI, RunwayML, LMU Munich, EleutherAI and LAION (I've only ever read that last name, never heard it pronounced). We've also got a couple of links here, including a Discord channel that has just launched, though I can't see too much in there yet. So that was the release itself, and there's going to be an exponential amount of activity coming out of it. What they've done on top of that is release a visual interface that lets you interact with the model, and I think this is an incredible...
God damn, I've got to stop saying incredible. It's a democratization of this technology: it lets people who don't have the full technical knowledge, or who just prefer an intuitive interface, create images, and to me it has echoes of what DALL·E does in its beta. I'm going to hit one of the pre-filled prompts so we can see the output and what's actually happening; I haven't done this myself yet. All right, OK. And there are modifications you can make: the guidance scale, and, interesting, width and height, and steps.

So right now I'm only generating one image. Oh, it's so simple. Now, if I wanted to do the same thing with 3 images, what's the biggest output I can get? 1024. I'm going to do 3 images and go a little higher on the steps, and then we've got a couple of different samplers here, the diffusion sampling methods. Interesting. This is getting super simple. We've obviously got a bit more time to wait for this one, and you can see the countdown, which is very intuitive; sometimes when you're just sitting there waiting, you're like, what the hell is going on? Let me pop open the frequently asked questions while we wait. I've increased the height and width, so it's going to be a bigger image, obviously a bit more work on that side, and with the number of images set to 3 we should get three outputs. They also have some frequently asked questions on DreamStudio, nicely provided.

How do I upscale? There's no native upscaler, interesting. Because you can't necessarily make the image bigger in the app, they recommend other services to increase the scale of the image, and they've even given a couple of examples. I'd like some hyperlinks there; I'll click one as a resource and link the other ones as resources here. You can see I've got a lot of tabs going right now, so I'll include all of these, hopefully it's not too overwhelming. Then you've got compute costs, generation counts, and some other frequently asked questions around sharing and inviting other people.
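For anyone scripting this instead of clicking through the DreamStudio UI, the knobs in the walkthrough above (width, height, steps, number of images) map onto the parameters the open model exposes through diffusers. Here's a minimal sketch; the helper name and the exact clamping ranges (512 to 1024 pixels, rounded down to multiples of 64, up to 150 steps and 9 images) are my assumptions based on the beta UI, not documented limits:

```python
def clamp_dreamstudio_settings(width, height, steps, samples):
    """Clamp generation settings to DreamStudio-beta-style ranges.

    Returns a dict whose keys match the diffusers pipeline's keyword
    arguments, so it can be splatted into a pipeline call.
    """
    def clamp_dim(d):
        d = max(512, min(1024, d))
        return (d // 64) * 64  # latent diffusion wants dims divisible by 64

    return {
        "width": clamp_dim(width),
        "height": clamp_dim(height),
        "num_inference_steps": max(10, min(150, steps)),
        "num_images_per_prompt": max(1, min(9, samples)),
    }


# Example: 3 images at the maximum 1024x1024 size, 50 steps.
print(clamp_dreamstudio_settings(1024, 1024, 50, 3))
```

With a loaded pipeline, the result could be passed along as `pipe(prompt, **settings)`; more steps and larger sizes mean proportionally longer waits, which is exactly what the countdown in the UI reflects.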
All fantastic stuff. Here we go: DreamStudio, "dream of distant galaxy". I don't know who the artist named in that prompt is, but they're probably incredible. And we've got 3 variations, and these variations should be at the larger size. Let me just pop one up and confirm.
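On the upscaling question from the FAQ: since there's no native upscaler, the recommendation is external ESRGAN-style super-resolution tools (linked above). As a trivial baseline for comparison, not a replacement for learned super-resolution, here's a plain Lanczos upsample with Pillow; the function name and the 2x default factor are just illustrative:

```python
from PIL import Image


def upscale_naive(path_in, path_out, factor=2):
    """Plain Lanczos upsampling: a quick stand-in, not real super-resolution.

    ESRGAN-style models, like the ones the DreamStudio FAQ points to,
    hallucinate plausible detail that simple resampling cannot recover.
    """
    img = Image.open(path_in)
    big = img.resize((img.width * factor, img.height * factor), Image.LANCZOS)
    big.save(path_out)
    return big.size
```

For a 1024x1024 output like the ones above, `upscale_naive("galaxy.png", "galaxy_2x.png")` would give a softer 2048x2048 image; a learned upscaler would give a sharper one.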
I don't want to show too much sensitive data there, apologies. Let me take a quick look at the properties: 1.9 megabytes and 1024 by 1024 pixels, so that's already a pretty good size. You're capable of sharing it directly on social media or doing whatever you want with it. Incredible democratization of this technology. I said incredible again, but here I am gushing over it. This is hard to say, but I truly feel like I'm in the presence of some sort of god force, that this is now possible. Maybe it's even a scary thing that I feel that way, but I know other people must too, with this raw power of creativity now available. I think it's absolutely beautiful, absolutely insane, and I can't wait to see what comes out of it.

I'm not going to spend much more time on this. There are a bunch of great links for you to check out, in this video and below on the website, and I truly encourage you to dig in, whether you've been interested for a while or are just getting interested now. This is something to pay attention to. It's going to have massive ramifications on us and on our future as people, especially as this diversifies from images into audio and video; we've already seen the proliferation with text. This is a huge, life-changing, world-changing release from Stability. I appreciate all the work the teams have done on it, and I'm very, very excited to see what comes next. Thank you very much for checking this out with Tyler Bryden. I love...
...covering this stuff, and I have a bunch of other videos on DALL·E. I'd classify myself as an explorer, a curious commentator or narrator following along, trying to amplify the message of these things, while still needing to dedicate more time myself and trying to find a path to doing that. But here I am, completely enthusiastic, probably just like you are if you're watching this video. I wish you well on your experimentations and journeys with Stable Diffusion and DreamStudio. It's an amazing time to be alive. Thank you.