| | segment-anything | CLIP |
|---|---|---|
| Mentions | 56 | 104 |
| Stars | 44,293 | 22,316 |
| Growth | 2.1% | 3.0% |
| Activity | 0.0 | 1.2 |
| Latest commit | 24 days ago | 8 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
segment-anything
-
What things are happening in ML that we can't hear over the din of LLMs?
- segment anything: https://github.com/facebookresearch/segment-anything
-
Zero-Shot Prediction Plugin for FiftyOne
In computer vision, this is known as zero-shot learning, or zero-shot prediction, because the goal is to generate predictions without explicitly being given any example predictions to learn from. With the advent of high-quality multimodal models like CLIP and foundation models like Segment Anything, it is now possible to generate remarkably good zero-shot predictions for a variety of computer vision tasks.
-
Generate a new version of a living room with specific furniture
Render a new living room using a ControlNet model of your choice to keep the basic structure. Load the original living-room image and use a Segment Anything Model to create a mask over the furniture you want to change. Then use that mask on the new living room to inpaint new furniture, as in the sketch below.
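A minimal sketch of the mask-and-inpaint step, assuming the SAM weights are downloaded and you already have a rough box around the furniture; the checkpoint path, image files, box coordinates, and furniture prompt are all placeholders, and the ControlNet re-render itself is omitted:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor
from diffusers import StableDiffusionInpaintPipeline

# Segment the furniture in the original living room with a box prompt.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
image = cv2.cvtColor(cv2.imread("living_room.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)
box = np.array([100, 200, 400, 500])  # placeholder box around the old sofa (XYXY)
masks, _, _ = predictor.predict(box=box, multimask_output=False)
mask = Image.fromarray((masks[0] * 255).astype(np.uint8))

# Inpaint new furniture into the masked region of the rendered room.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
result = pipe(
    prompt="a green velvet sofa",  # placeholder furniture description
    image=Image.open("new_living_room.jpg").resize((512, 512)),
    mask_image=mask.resize((512, 512)),
).images[0]
result.save("living_room_inpainted.jpg")
```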
-
How do I read GitHub pages? It is so exhausting, I always struggle, oh and I am on Windows
Hello, so I am trying to run some programs, Python scripts from this page: https://github.com/facebookresearch/segment-anything, and found myself spending hours without succeeding in even understanding what is written on that page. And I think this is ultimately related to programming.
-
Autodistill: A new way to create CV models
Some of the foundation/base models include:
* GroundedSAM (Segment Anything Model)
* DETIC
* GroundingDINO
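As a rough sketch of the Autodistill workflow with GroundedSAM as the base model (package and parameter names from memory of the autodistill API; folder paths and the prompt-to-class mapping are placeholders):

```python
from autodistill.detection import CaptionOntology
from autodistill_grounded_sam import GroundedSAM

# Map free-text prompts to the class names the auto-labeled dataset should use.
ontology = CaptionOntology({"sofa": "sofa", "coffee table": "table"})

# Use GroundedSAM to auto-label a folder of unlabeled images.
base_model = GroundedSAM(ontology=ontology)
base_model.label(input_folder="./images", output_folder="./dataset")
```

The auto-labeled dataset can then be used to train a smaller target model such as YOLOv8.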
-
How to Fine-Tune Foundation Models to Auto-Label Training Data
Webinar from last week on how to fine-tune VFMs (vision foundation models), specifically Meta's Segment Anything Model (SAM).
What you'll need to follow along with the fine-tuning walkthrough:
- Images, ground-truth masks, and optionally prompts, from the Stamp Verification (StaVer) dataset on Kaggle (https://www.kaggle.com/datasets/rtatman/stamp-verification-s...)
- Model weights for SAM, from the official GitHub repo (https://github.com/facebookresearch/segment-anything)
- A good understanding of the model architecture, from the Segment Anything paper (https://ai.meta.com/research/publications/segment-anything/)
- GPU infra: an NVIDIA A100 should do for this fine-tuning
- A data curation and model evaluation tool: Encord Active (https://github.com/encord-team/encord-active)
- Colab walkthrough for fine-tuning: https://colab.research.google.com/github/encord-team/encord-...
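For orientation, here is a minimal sketch of the core training step (my sketch, not necessarily the webinar's exact recipe: it assumes a placeholder dataloader yielding one preprocessed 1024x1024 image, one XYXY box prompt, and one binary ground-truth mask per step, and it freezes everything except the mask decoder):

```python
import torch
import torch.nn.functional as F
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth").cuda()
sam.image_encoder.requires_grad_(False)   # freeze the heavy ViT backbone
sam.prompt_encoder.requires_grad_(False)  # train only the mask decoder
optimizer = torch.optim.Adam(sam.mask_decoder.parameters(), lr=1e-5)

for image, box, gt_mask in dataloader:  # placeholder dataloader, batch size 1
    with torch.no_grad():
        embedding = sam.image_encoder(image.cuda())  # (1, 256, 64, 64)
        sparse, dense = sam.prompt_encoder(points=None, boxes=box.cuda(), masks=None)
    low_res_logits, _ = sam.mask_decoder(
        image_embeddings=embedding,
        image_pe=sam.prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse,
        dense_prompt_embeddings=dense,
        multimask_output=False,
    )
    # Upsample the 256x256 logits to the ground-truth resolution.
    logits = F.interpolate(low_res_logits, size=gt_mask.shape[-2:], mode="bilinear")
    loss = F.binary_cross_entropy_with_logits(logits, gt_mask.float().cuda())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```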
I'd love to get your thoughts and feedback. Thank you.
-
Deploying an ML model (segment-anything) to GCP - how would you do it?
I now want users to be able to use the segment-anything model (https://github.com/facebookresearch/segment-anything) in my app. It's in PyTorch, if that matters. How it should work is that
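One common pattern (a sketch of one option, not the asker's actual setup): wrap the model in a small HTTP service, containerize it, and deploy to Cloud Run or a GPU-backed GCE instance. The endpoint name, checkpoint, and response shape below are placeholders:

```python
# server.py - minimal FastAPI wrapper around SAM's automatic mask generator
import cv2
import numpy as np
from fastapi import FastAPI, File, UploadFile
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

app = FastAPI()
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

@app.post("/segment")
async def segment(file: UploadFile = File(...)):
    data = np.frombuffer(await file.read(), dtype=np.uint8)
    image = cv2.cvtColor(cv2.imdecode(data, cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB)
    masks = mask_generator.generate(image)
    # Full masks are large, so return lightweight metadata per region.
    return [{"bbox": m["bbox"], "area": m["area"]} for m in masks]
```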
-
The Mathematics of Training LLMs
Yeah, they are great and some of the reason (up the causal chain) for some of the work I've done! Seems really fun! <3 :))))
I think Facebook's Segment Anything Model has a lot of potentially really fun use cases. Plaintext description -> network segmentation (https://github.com/facebookresearch/segment-anything/blob/ma...). Not sure if that's what you're looking for or not, but I love that impressing your kids is where your heart is. That kind of parenting makes me very, very, very happy. :') <3
-
How hard is it to "code" a tool based on segment-anything and Stable Diffusion?
There are some snippets of Python code on the segment-anything GitHub README that show how to do this. Once you have it installed, you can import functions from the segment-anything module, load a segmentation model, and generate masks for input images from the point or box prompts of your choice (see the sketch below). You don't need Stable Diffusion for this, but you could load it through diffusers to do things like inpaint your images using the masks.
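The README snippets in question look roughly like this (the checkpoint file and image path are placeholders):

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a SAM checkpoint and build the automatic mask generator.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects RGB images; OpenCV loads BGR.
image = cv2.cvtColor(cv2.imread("image.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts: "segmentation", "bbox", "area", ...
```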
-
CLIP
-
Anomaly Detection with FiftyOne and Anomalib
pip install -U huggingface_hub umap-learn git+https://github.com/openai/CLIP.git
-
How to Cluster Images
We will also need two more libraries: the CLIP package from OpenAI's GitHub repo, enabling us to generate image features with the CLIP model, and the umap-learn library, which will let us apply a dimensionality reduction technique called Uniform Manifold Approximation and Projection (UMAP) to those features and visualize them in 2D.
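A minimal sketch of that pipeline (image paths are placeholders; assumes the CLIP and umap-learn installs plus Pillow):

```python
import clip
import torch
import umap
from PIL import Image

# Embed each image with CLIP.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

paths = ["img_001.jpg", "img_002.jpg"]  # placeholder: your image collection
batch = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)
with torch.no_grad():
    features = model.encode_image(batch).float().cpu().numpy()

# Reduce the 512-d CLIP features to 2D with UMAP for plotting/clustering.
coords = umap.UMAP(n_components=2, metric="cosine").fit_transform(features)
```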
-
Show HN: Memories, FOSS Google Photos alternative built for high performance
The biggest missing feature in all these self-hosted photo-hosting apps is the lack of real search. Being able to search for things like "beach at night" is a time saver compared to browsing through hundreds or thousands of photos. There are trained neural networks out there, like https://github.com/openai/CLIP, which are quite good.
-
Zero-Shot Prediction Plugin for FiftyOne
In computer vision, this is known as zero-shot learning, or zero-shot prediction, because the goal is to generate predictions without explicitly being given any example predictions to learn from. With the advent of high-quality multimodal models like CLIP and foundation models like Segment Anything, it is now possible to generate remarkably good zero-shot predictions for a variety of computer vision tasks.
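For instance, zero-shot image classification with CLIP reduces to comparing an image embedding against embeddings of candidate label prompts; a sketch, with placeholder labels and image path:

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["a photo of a cat", "a photo of a dog", "a photo of a bird"]
image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)   # image-text similarity logits
    probs = logits_per_image.softmax(dim=-1)   # zero-shot class probabilities
print(dict(zip(labels, probs[0].tolist())))
```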
-
A History of CLIP Model Training Data Advances
-
NLP Algorithms for Clustering AI Content Search Keywords
The first thing that comes to mind is CLIP: https://github.com/openai/CLIP
-
How to Build a Semantic Search Engine for Emojis
Whenever I’m working on semantic search applications that connect images and text, I start with a family of models known as contrastive language image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations or embeddings for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we’ll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point.
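As a concrete sketch of that contrastive setup, here is text-to-image retrieval with the original OpenAI CLIP (the emoji file names and query are placeholders):

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

paths = ["emoji_smile.png", "emoji_pizza.png"]  # placeholder emoji images
images = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)
query = clip.tokenize(["a slice of pizza"]).to(device)

with torch.no_grad():
    image_emb = model.encode_image(images)
    text_emb = model.encode_text(query)
    # Normalize so the dot product equals cosine similarity.
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    scores = (text_emb @ image_emb.T).squeeze(0)

best = paths[scores.argmax().item()]  # image most similar to the text query
```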
-
COMFYUI SDXL WORKFLOW INBOUND! Q&A NOW OPEN! (WIP EARLY ACCESS WORKFLOW INCLUDED!)
In the model card it says: pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
-
Stability Matrix v1.1.0 - Portable mode, Automatic updates, Revamped console, and more
Command: "C:\StabilityMatrix\Packages\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip --prefer-binary
-
[D] LLM or model that does image -> prompt?
CLIP might work for your needs.
What are some alternatives?
Segment-Everything-Everywhere-All-At-Once - [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
open_clip - An open source implementation of CLIP.
backgroundremover - Background Remover lets you Remove Background from images and video using AI with a simple command line interface that is free and open source.
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
ComfyUI-extension-tutorials
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
stable-diffusion-webui-Layer-Divider - Layer-Divider, an extension for stable-diffusion-webui using the segment-anything model (SAM)
disco-diffusion
Grounded-Segment-Anything - Grounded-SAM: Marrying Grounding-DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
GroundingDINO - Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation