| | YOLO-World | StyleCLIP |
|---|---|---|
| Mentions | 3 | 23 |
| Stars | 3,442 | 3,902 |
| Growth | 13.4% | - |
| Activity | 9.0 | 0.0 |
| Latest Commit | 6 days ago | 12 months ago |
| Language | Python | HTML |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
YOLO-World
- A History of CLIP Model Training Data Advances
2024 is shaping up to be the year of multimodal machine learning. From real-time text-to-image models and open-world vocabulary models to multimodal large language models like GPT-4V and Gemini Pro Vision, AI is primed for an unprecedented array of interactive multimodal applications and experiences.
- FLaNK Stack Weekly 19 Feb 2024
- Making My Bookshelves Clickable
Post author here. I like this idea. I plan to explore it and make a more generic solution. I'd love to have a point-and-click interface for annotating scenes.
For example, I'd like to be able to click on pieces of coffee equipment in a photo of my coffee setup so I can add sticky note annotations when you hover over each item.
For the bookshelves idea specifically, I would love to have a correction system in place. The problem isn't so much SAM as it is Grounding DINO, the model I'm using for object identification. I then pass each identified region to SAM and map the segmentation mask to the box.
Grounding DINO detects a lot of book spines, but often misses 1-2. I am planning to try out YOLO-World (https://github.com/AILab-CVC/YOLO-World), which, in my limited testing, performs better for this task.
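For reference, a minimal sketch of the pipeline that post describes: a detector proposes book-spine boxes, and each box is passed to SAM as a prompt to get a segmentation mask. The `detect_book_spines` helper below is a hypothetical stand-in for Grounding DINO or YOLO-World inference; only the SAM side follows the segment-anything `SamPredictor` box-prompt API.

```python
# Sketch of the box-to-mask pipeline described above. `detect_book_spines` is a
# hypothetical stand-in for Grounding DINO or YOLO-World inference; the SAM side
# follows the segment-anything SamPredictor box-prompt API.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor


def detect_book_spines(image: np.ndarray) -> np.ndarray:
    """Hypothetical detector stub: return an N x 4 array of (x1, y1, x2, y2) pixel boxes."""
    raise NotImplementedError("Plug in Grounding DINO or YOLO-World here.")


def boxes_to_masks(image_path: str, checkpoint: str = "sam_vit_h_4b8939.pth"):
    """Detect book spines, then prompt SAM with each box to get a per-spine mask."""
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)

    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)

    results = []
    for box in detect_book_spines(image):
        # One mask per detected spine, prompted by its bounding box.
        masks, scores, _ = predictor.predict(box=box, multimask_output=False)
        results.append({"box": box, "mask": masks[0], "score": float(scores[0])})
    return results
```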
StyleCLIP
- A History of CLIP Model Training Data Advances
While CLIP on its own is useful for applications such as zero-shot classification, semantic search, and unsupervised data exploration, it is also used as a building block in a vast array of multimodal applications, from Stable Diffusion and DALL-E to StyleCLIP and OWL-ViT. For most of these downstream applications, the initial CLIP model is regarded as a “pre-trained” starting point, and the entire model is fine-tuned for its new use case.
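As a rough illustration of that "pre-trained starting point" framing, the sketch below loads OpenAI's published CLIP weights via the openai/CLIP package and sets up full-model fine-tuning with the standard contrastive loss; the ViT-B/32 choice and hyperparameters are placeholders, not anything prescribed by the article.

```python
# Minimal sketch of that framing: load a published CLIP checkpoint as the
# pre-trained starting point and fine-tune the whole model with the standard
# contrastive objective. Model choice and hyperparameters are illustrative only.
import clip
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
model = model.float()  # fine-tune in fp32 for stability

# Unfreeze everything: the entire model is adapted to the new use case.
for p in model.parameters():
    p.requires_grad = True

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6, weight_decay=0.2)


def contrastive_step(images: torch.Tensor, texts: torch.Tensor) -> float:
    """One fine-tuning step on a batch of preprocessed images and tokenized texts."""
    logits_per_image, logits_per_text = model(images, texts)
    labels = torch.arange(len(images), device=device)
    loss = (F.cross_entropy(logits_per_image, labels)
            + F.cross_entropy(logits_per_text, labels)) / 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```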
- [D] What is the largest / most diverse GAN model currently out there?
I'm currently building a fork for StyleCLIP global directions which allows you to control multiple semantic parameters simultaneously to generate and edit an image with StyleGAN and CLIP in real time. I want to showcase its potential as a design tool. Unfortunately, GAN weights are trained on very domain-specific data (faces, cars, churches). This makes them inferior to modern diffusion models, which I can use to generate whatever comes to mind. Although I know we won't have a GAN-based DALL-E counterpart anytime soon, I still would love to use my system with weights that can output a wide variety of things.
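A minimal sketch of the "multiple semantic parameters at once" idea: StyleCLIP-style global directions are vectors in the generator's style space, so several named edits can be composed as a weighted sum before synthesis. The direction tensors, slider values, and the `stylegan_synthesis` call below are placeholders, not the poster's actual fork.

```python
# Minimal sketch: StyleCLIP-style global directions are vectors in the
# generator's style space, so several named edits can be composed as a weighted
# sum before synthesis. Direction tensors and slider values here are random
# placeholders, not real precomputed CLIP directions.
import torch

STYLE_DIM = 512  # placeholder style-space dimensionality


def apply_global_directions(style_code: torch.Tensor,
                            directions: dict,
                            strengths: dict) -> torch.Tensor:
    """Add each named direction, scaled by its slider value, to the style code."""
    edited = style_code.clone()
    for name, direction in directions.items():
        edited = edited + strengths.get(name, 0.0) * direction
    return edited


# Hypothetical usage: in a real setup these would be precomputed CLIP-derived
# directions, each controlled by its own slider in the UI.
directions = {
    "smile": torch.randn(STYLE_DIM),
    "age": torch.randn(STYLE_DIM),
}
strengths = {"smile": 1.5, "age": -0.8}
style_code = torch.randn(STYLE_DIM)
edited = apply_global_directions(style_code, directions, strengths)
# image = stylegan_synthesis(edited)  # placeholder for the StyleGAN forward pass
```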
- test
(Added Feb. 15, 2021) StyleCLIP - Colaboratory by orpatashnik. Uses StyleGAN to generate images. GitHub. Twitter reference. Reddit post.
- I am David Bau, and I study the structure of the complex computations learned within deep neural networks.
- Dragon Age Origins Companions as Photorealistic People.
I used StyleCLIP. I purchased some Google Colab time to use their GPUs. I'll probably do some more later this week.
- Turning BDO characters into blursed people with AI
- I used AI to generate real-life For Honor character faces
Link for StyleCLIP
- AI-generated 'real' faces of CGI characters - description in comments
So, I watched this Corridor Crew video on generating realistic faces from CG characters, and I wanted to try it out on the RDR2 models. The GitHub link for the original work is here. If you guys are interested, I can generate the faces of more characters from RDR2 and RDR1. I can even try some from RD Revolver.
- AI Generated Art Scene Explodes as Hackers Create Groundbreaking New Tools - New AI tools CLIP+VQ-GAN can create impressive works of art based on just a few words of input.
Combining these methods with CLIP allows you to generate images based on text. This one uses a face generator. https://github.com/orpatashnik/StyleCLIP
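The CLIP-guided generation loop these tools share can be sketched as latent optimization against a text prompt. In the sketch below, the tiny linear `generator` is a placeholder for StyleGAN, VQGAN, or any differentiable image generator, and only the CLIP calls follow the openai/CLIP package API (per-channel normalization is omitted for brevity).

```python
# Sketch of the CLIP-guided generation loop these tools share: optimize a
# generator latent so the rendered image matches a text prompt under CLIP.
# The tiny linear "generator" is a placeholder for StyleGAN / VQGAN decoding;
# only the CLIP calls follow the openai/CLIP package API.
import clip
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

with torch.no_grad():
    text_features = clip_model.encode_text(
        clip.tokenize(["a photo of a smiling face"]).to(device))

# Placeholder generator: a single linear layer reshaped to a 64x64 image so the
# sketch runs end-to-end. Swap in a real differentiable image generator here.
to_image = torch.nn.Linear(512, 3 * 64 * 64).to(device)
def generator(z: torch.Tensor) -> torch.Tensor:
    return torch.sigmoid(to_image(z)).view(-1, 3, 64, 64)

latent = torch.randn(1, 512, device=device, requires_grad=True)
optimizer = torch.optim.Adam([latent], lr=0.05)

for step in range(200):
    image = generator(latent)
    # CLIP's ViT-B/32 expects 224x224 input; normalization omitted for brevity.
    image_224 = F.interpolate(image, size=224, mode="bilinear", align_corners=False)
    image_features = clip_model.encode_image(image_224)
    # Maximize cosine similarity between the rendered image and the prompt.
    loss = 1 - F.cosine_similarity(image_features, text_features).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```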
- [D] How to save latent code edited from StyleClip.
What are some alternatives?
encoder4editing - Official implementation of "Designing an Encoder for StyleGAN Image Manipulation" (SIGGRAPH 2021) https://arxiv.org/abs/2102.02766
compare_gan - Compare GAN code.
NVAE - The Official PyTorch Implementation of "NVAE: A Deep Hierarchical Variational Autoencoder" (NeurIPS 2020 spotlight paper)
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
pixel2style2pixel - Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
alias-free-gan - Alias-Free GAN project website and code
tensor2tensor - Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Story2Hallucination
aphantasia - CLIP + FFT/DWT/RGB = text to image/video
CLIP-Style-Transfer - Doing style transfer with linguistic features using OpenAI's CLIP.
stylegan-xl - [SIGGRAPH'22] StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets
StyleCLIP - Using CLIP and StyleGAN to generate faces from prompts.