dream-textures vs After-Diffusion

| | dream-textures | After-Diffusion |
|---|---|---|
| Mentions | 72 | 6 |
| Stars | 7,629 | 39 |
| Growth | - | - |
| Activity | 5.8 | 8.3 |
| Last Commit | 1 day ago | 10 months ago |
| Language | Python | JavaScript |
| License | GNU General Public License v3.0 only | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dream-textures
- Donut done with Artificial Intelligence and Blender
- Tell HN: The next generation of videogames will be great with midjourney
- After Diffusion, an After Effects Extension Integrating the SD web UI seamlessly.
I'm a long-time advanced AE user and would gladly give feedback on how I envision a good workflow, if you want. I recently got into Dream Textures for Blender, which I think is a great reference for the direction things could be heading. It's still not viable for consistent video, but I love how they expose multiple ControlNets and their weights as animatable, for example. I also suggested they expose (animatable) prompt weights, which the author now plans for a future release. I see you have such things planned for this plugin as well, so big thumbs up!
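The "animatable weights" idea above comes down to interpolating a parameter, such as a ControlNet weight, across frames instead of keeping it fixed. A minimal sketch of what that means in practice; the function name and keyframe format are illustrative assumptions, not Dream Textures' actual API:

```python
def animated_weight(keyframes, frame):
    """Linearly interpolate a weight (e.g. a ControlNet weight) between
    (frame, value) keyframes, clamping outside the keyframed range.
    `keyframes` must be sorted by frame number."""
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, w0), (f1, w1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return w0 + t * (w1 - w0)

# Fade the ControlNet influence in over the first 10 frames, then hold it.
keys = [(0, 0.0), (10, 1.0), (30, 1.0)]
weights = [animated_weight(keys, f) for f in range(31)]
```

The same interpolation would apply to animatable prompt weights: each keyframed parameter is resolved per frame before the frame is sent off for generation.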
- Resources for artists interested in using StableDiffusion as a tool?
Dream Textures (SD for Blender) - https://github.com/carson-katri/dream-textures
- Using AI for 3d Game art
- ControlNet fully integrated with Blender using nodes!
Yes, and it can also automatically bake the texture onto the original UV map instead of the projected UVs. The guide is here: https://github.com/carson-katri/dream-textures/wiki/Texture-Projection
- Using DALL-E 2 to create brick and water textures in Unity.
- 3D animation attempt using Sketchup screenshots and ControlNet
- Blender 3.5
- Master AI Texture Projection for Blender 3
Dream AI latest release: https://github.com/carson-katri/dream-textures/releases
After-Diffusion
- After Diffusion: An Open-Source CEP Extension bringing the Stable Diffusion webUI directly into After Effects!
I'm very much open to feedback, so if you get a chance to check it out, or you have any suggestions outright, feel free to let me know! You can find more info and get the extension here!
- Experimenting w/ high denoising strength and Temporal Coherence in After Effects, without the use of ebsynth.
Side note: if you do use Adobe products, feel free to use this extension, which I built with ChatGPT's help and which integrates the webUI with After Effects more directly, so you don't have to constantly import/export and switch windows. It offers TXT2IMG, IMG2IMG, IMG2IMG Inpaint, Inpaint Sketch, multi-ControlNet, and more, directly in After Effects. I'm maintaining it as open source, following suit with the plethora of other SD-related implementations, so you'll always have free access to it.
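Under the hood, an extension like this typically talks to the AUTOMATIC1111 web UI through its REST API (available when the web UI is launched with the `--api` flag). A minimal sketch of a txt2img round trip; the endpoint path is the web UI's real one, but the helper names, defaults, and local URL are illustrative assumptions:

```python
import base64
import json
from urllib import request

WEBUI_URL = "http://127.0.0.1:7860"  # assumed: web UI running locally with --api

def build_txt2img_payload(prompt, steps=20, width=512, height=512):
    """Assemble the JSON body for the /sdapi/v1/txt2img endpoint."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}

def txt2img(prompt, **kwargs):
    """POST a txt2img request and return the generated images as PNG bytes."""
    payload = build_txt2img_payload(prompt, **kwargs)
    req = request.Request(
        f"{WEBUI_URL}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        result = json.load(resp)
    # The web UI returns generated images as base64-encoded PNG strings.
    return [base64.b64decode(img) for img in result["images"]]

# Usage (requires a running web UI):
#     frames = txt2img("a glazed donut, studio lighting", steps=30)
```

IMG2IMG, inpainting, and ControlNet follow the same pattern against their own endpoints, which is why a host application only needs to marshal layers in and out of base64 to integrate the whole pipeline.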
- Unleashing After-Diffusion V2.0: Stable Diffusion Meets Adobe After Effects!
Jump into the future of Adobe After Effects! (https://github.com/Trentonom0r3/After-Diffusion/tree/Beta-Branch). I'm working around the clock to have this release ready in the next two weeks, so stay tuned!
- GitHub - Trentonom0r3/After-Diffusion: A CEP Extension for Adobe After Effects that allows for seamless integration of the Stable Diffusion Web-UI. (Trent is not me) (for advertisement purposes)
- After Diffusion, an After Effects Extension Integrating the SD web UI seamlessly.
Trentonom0r3/After-Diffusion: A CEP Extension for Adobe After Effects that allows for seamless integration of the Stable Diffusion Web-UI. (github.com)
What are some alternatives?
- stable-diffusion-webui - Stable Diffusion web UI
- TemporalKit - An all-in-one solution for adding temporal stability to a Stable Diffusion render via an automatic1111 extension
- stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple other features and enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
- artistic-videos - Torch implementation for the paper "Artistic style transfer for videos"
- stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
- stable-diffusion-nvidia-docker - GPU-ready Dockerfile to run the Stability.AI stable-diffusion model v2 with a simple web interface. Includes multi-GPU support.
- DeepBump - Normal & height maps generation from single pictures
- stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
- stable-diffusion
- ComfyUI - The most powerful and modular stable diffusion GUI, API and backend with a graph/nodes interface.
- Blender-GPT - An all-in-one Blender assistant powered by GPT3/4 + Whisper integration
- CLIP - CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image