| | LAVIS | clip-retrieval |
|---|---|---|
| Mentions | 18 | 11 |
| Stars | 8,838 | 2,152 |
| Growth | 2.9% | - |
| Activity | 6.3 | 7.7 |
| Latest commit | 24 days ago | 28 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | BSD 3-Clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LAVIS
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
- [D] Why is most Open Source AI happening outside the USA?
  For multimodal, there's China (*many), then Salesforce.
- Need help for a colab notebook running Lavis blip2_instruct_vicuna13b?
  I've been trying all day to get working inference for this example: https://github.com/salesforce/LAVIS/tree/main/projects/instructblip (a minimal inference sketch follows this list).
- most sane web3 job listing
  There have also been big breakthroughs in computer vision. Not long ago it was hard to recognize whether a photo contained a bird; that's solved now by models like CLIP, YOLO, or Segment Anything. Research has now moved on to generating 3D scenes from images and interactively answering questions about images.
- I work at a non-tech company and have been asked to make software that is impossible. How do I explain it to my boss?
  The new hotness is multimodal vision-language models like InstructBLIP that can interactively answer questions about images. Check out the examples in the GitHub repo; I would not have thought this was possible a few years ago.
- Two-minute Daily AI Update (Date: 5/15/2023)
  Salesforce's BLIP family has a new member: InstructBLIP, a vision-language instruction-tuning framework built on BLIP-2 models. It has achieved state-of-the-art zero-shot generalization performance on a wide range of vision-language tasks, substantially outperforming BLIP-2 and Flamingo. (Source)
- InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
  GitHub
- Can I use my own art as a training set?
  Most of my workflows are self-made. For captioning I used BLIP-2 in a custom script that automates the process: it walks directories and their sub-directories and creates a .txt file beside each image. This way I can keep my images organized in their proper directories without having to dump them all in one place. (A captioning sketch along these lines follows this list.)
- FLiP Stack Weekly for 13-Feb-2023
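For the InstructBLIP inference question above, here is a minimal sketch of the load-and-generate flow that LAVIS's instructblip project page describes. It assumes LAVIS is installed and the Vicuna-13B weights are prepared per that page; the image path and prompt are placeholders, not part of the original posts.

```python
# Minimal InstructBLIP (Vicuna-13B) inference sketch via LAVIS.
# Assumes `pip install salesforce-lavis` and prepared Vicuna weights;
# "photo.jpg" and the prompt are placeholder examples.
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the model together with its matching image preprocessor.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_vicuna_instruct",
    model_type="vicuna13b",
    is_eval=True,
    device=device,
)

raw_image = Image.open("photo.jpg").convert("RGB")
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

# Free-form visual question answering.
answers = model.generate({"image": image, "prompt": "What is unusual about this image?"})
print(answers[0])
```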
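And a sketch of the directory-walking captioning workflow from the "Can I use my own art as a training set?" mention, here using a BLIP-2 captioning model from the LAVIS model zoo. The root directory, file extensions, and model choice are assumptions, since the original script was not shared.

```python
# Sketch of a recursive BLIP-2 captioning script: for every image found under
# a root directory, write a caption to a .txt file beside it. The root path,
# extensions, and model choice are placeholder assumptions.
from pathlib import Path

import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_opt",
    model_type="caption_coco_opt2.7b",
    is_eval=True,
    device=device,
)

for path in Path("my_art").rglob("*"):
    if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
        continue
    image = vis_processors["eval"](Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    caption = model.generate({"image": image})[0]
    # The caption lives next to the image, so the directory layout stays intact.
    path.with_suffix(".txt").write_text(caption)
```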
clip-retrieval
- FLaNK AI for 11 March 2024
- [D] data for handwriting recognition
  The tool clip-retrieval lets you filter those 400 million images down to whatever subsets you're interested in, for example 10,000 images of (mostly) handwriting.
- Stable Attribution
- Same.energy: Image Search by Similarity
  Hehe, well you know, PR welcome, the front end is 500 lines: https://github.com/rom1504/clip-retrieval/blob/main/front/sr...
  Other people have already built a few alternate front ends. This one is meant to be functional, but it could certainly be made prettier.
- Is there a way to use CLIP or BLIP to search a massive collection of images for specific things within the picture?
  This might work: https://github.com/rom1504/clip-retrieval (a query sketch follows this list).
- AI art
  HaveIBeenTrained uses clip-retrieval to search the LAION-5B and LAION-400M image datasets. These are currently the largest public text-to-image datasets, and they are used to train models like Stable Diffusion and Imagen, among many others.
- Image Similarity Score using transfer learning
- Exploring 12M of the 2.3B Images Used to Train Stable Diffusion
  Done: https://github.com/rom1504/clip-retrieval/commit/53e3383f58b...
  Using CLIP for search is better than direct text indexing for a variety of reasons; here, for example, it better matches what Stable Diffusion actually sees.
- Semantic and Similarity Image Search Engine
  Based on OpenAI's CLIP and the clip-retrieval library (https://github.com/rom1504/clip-retrieval), I've built an end-to-end demo of a semantic and similarity image search engine. It's incredibly powerful for finding similar images in large image datasets, or for submitting text/natural-language queries and retrieving the most relevant images in your dataset. It's a really useful tool for introspecting large datasets before annotation or ML work begins, and it could potentially filter or downsize your datasets by several orders of magnitude, making annotation and ML work easier and less costly.
  Check out the demo here: http://ec2-52-39-251-116.us-west-2.compute.amazonaws.com/
  You can also check out our website or email me for updates, the mailing list, etc.: https://machineperception.co
- What every software engineer should know about search
  Assuming you have an NVIDIA GPU, you can build a semantic search engine by indexing CLIP embeddings (image or text): https://github.com/rom1504/clip-retrieval (a minimal index-and-query sketch follows this list).
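For the "search a massive collection" question above, a short sketch using clip-retrieval's ClipClient against the hosted LAION-5B index, following the usage shown in the repo README. The backend URL and index name are taken from that README and may change over time; the query text is just an example.

```python
# Query a hosted clip-retrieval backend for images matching a text prompt.
# The URL/index follow the clip-retrieval README example and may change;
# the query string is a placeholder.
from clip_retrieval.clip_client import ClipClient

client = ClipClient(
    url="https://knn.laion.ai/knn-service",
    indice_name="laion5B-L-14",
    num_images=20,
)

# Each result is a dict with fields such as "url", "caption", and "similarity".
results = client.query(text="a page of handwritten notes")
for r in results[:5]:
    print(f'{r["similarity"]:.3f}  {r["url"]}')
```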
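And a minimal self-hosted version of the "index CLIP embeddings" idea from the last mention, built with open_clip and a flat FAISS index rather than clip-retrieval's full pipeline. The model tag, image list, and query are illustrative assumptions.

```python
# Minimal semantic image search: embed images with CLIP, index them in FAISS,
# then retrieve by text query. Model tag, file list, and query are examples.
import faiss
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

paths = ["cat.jpg", "dog.jpg", "receipt.png"]  # placeholder image files

with torch.no_grad():
    feats = torch.cat(
        [model.encode_image(preprocess(Image.open(p).convert("RGB")).unsqueeze(0))
         for p in paths]
    )
    feats = feats / feats.norm(dim=-1, keepdim=True)  # unit vectors -> cosine sim

index = faiss.IndexFlatIP(feats.shape[1])  # inner product == cosine on unit vectors
index.add(feats.numpy().astype("float32"))

with torch.no_grad():
    query = model.encode_text(tokenizer(["a photo of a dog"]))
    query = query / query.norm(dim=-1, keepdim=True)

scores, ids = index.search(query.numpy().astype("float32"), k=2)
print([(paths[i], float(s)) for i, s in zip(ids[0], scores[0])])
```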
What are some alternatives?
pytorch-widedeep - A flexible package for multimodal-deep-learning to combine tabular data with text and images using Wide and Deep models in Pytorch
Typesense - Open Source alternative to Algolia + Pinecone and an Easier-to-Use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy Search Engine for building delightful search experiences
CLIP-Caption-Reward - PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022)
MoTIS - [NAACL 2022] Mobile Text-to-Image search powered by multimodal semantic representation models (e.g., OpenAI's CLIP)
sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
laion-aesthetic-datasette - Use Datasette to explore LAION improved_aesthetics_6plus training data used by Stable Diffusion
robo-vln - PyTorch code for the ICRA'21 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
open_clip - An open source implementation of CLIP.
DeepViewAgg - [CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in "Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation"
clip-italian - CLIP (Contrastive Language–Image Pre-training) for Italian
linkis - Apache Linkis builds a computation middleware layer to facilitate connection, governance and orchestration between the upper applications and the underlying data engines.
Queryable - Run OpenAI's CLIP model on iOS to search photos.