| | LAVIS | dino |
|---|---|---|
| Mentions | 18 | 7 |
| Stars | 8,838 | 5,904 |
| Growth | 2.9% | 2.3% |
| Activity | 6.3 | 1.0 |
| Last Commit | 24 days ago | 1 day ago |
| Language | Jupyter Notebook | Python |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LAVIS
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
- [D] Why is most Open Source AI happening outside the USA?
For multimodal, there's China (many groups), then Salesforce.
- Need help for a colab notebook running Lavis blip2_instruct_vicuna13b?
I've been trying all day to get working inference for this example: https://github.com/salesforce/LAVIS/tree/main/projects/instructblip
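For reference, a minimal inference sketch along the lines of the LAVIS instructblip project README, assuming the `blip2_vicuna_instruct` / `vicuna13b` names from that page; the image path and prompt are placeholders:

```python
# Minimal InstructBLIP inference sketch with LAVIS; model name/type strings
# follow the projects/instructblip README but should be verified there.
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Loads the Vicuna-13B variant; this downloads large weights and needs a lot
# of GPU memory, which is often the failure point on Colab.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_vicuna_instruct",
    model_type="vicuna13b",
    is_eval=True,
    device=device,
)

raw_image = Image.open("example.jpg").convert("RGB")  # illustrative path
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

# Instruction-style prompt; generate() returns a list of strings.
answer = model.generate({"image": image, "prompt": "What is unusual about this image?"})
print(answer)
```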
- most sane web3 job listing
There have also been big breakthroughs in computer vision. Not that long ago it was hard to recognize whether a photo contained a bird; that's solved now by models like CLIP, YOLO, or Segment Anything. Now research has moved on to generating 3D scenes from images or interactively answering questions about images.
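The bird example is essentially zero-shot classification; a minimal sketch with OpenAI's CLIP package (the text labels and image path are illustrative):

```python
# Zero-shot "does this photo contain a bird?" check with CLIP
# (pip install git+https://github.com/openai/CLIP.git).
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)  # illustrative path
texts = clip.tokenize(["a photo of a bird", "a photo without a bird"]).to(device)

with torch.no_grad():
    # CLIP scores the image against each caption; softmax gives probabilities.
    logits_per_image, _ = model(image, texts)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print(f"P(bird) = {probs[0][0]:.2f}")
```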
- I work at a non-tech company and have been asked to make software that is impossible. How do I explain it to my boss?
The new hotness is multimodal vision-language models like InstructBLIP that can interactively answer questions about images. Check out the examples in the GitHub repo; I would not have thought this was possible a few years ago.
- Two-minute Daily AI Update (Date: 5/15/2023)
Salesforce's BLIP family has a new member: InstructBLIP, a vision-language instruction-tuning framework using BLIP-2 models. It has achieved state-of-the-art zero-shot generalization performance on a wide range of vision-language tasks, substantially outperforming BLIP-2 and Flamingo. (Source)
- InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
Github
- Can I use my own art as a training set?
Most of my workflows are self-made. For captioning I used BLIP-2 in a custom script that automates the process by going into directories and their subdirectories and creating a .txt file beside each image. This way I can keep my images organized in their proper directories, without having to dump them all in a single place.
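A sketch of that kind of workflow using LAVIS; the `blip2_opt` / `caption_coco_opt2.7b` names come from the LAVIS model zoo but should be checked there, and the root folder and extension list are assumptions:

```python
# Walk a directory tree, caption each image with BLIP-2 via LAVIS, and write
# a .txt file beside it so the folder structure stays intact.
import os
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_opt", model_type="caption_coco_opt2.7b", is_eval=True, device=device
)

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}

for root, _dirs, files in os.walk("training_images"):  # illustrative root folder
    for fname in files:
        if os.path.splitext(fname)[1].lower() not in IMAGE_EXTS:
            continue
        path = os.path.join(root, fname)
        image = vis_processors["eval"](Image.open(path).convert("RGB"))
        image = image.unsqueeze(0).to(device)
        caption = model.generate({"image": image})[0]
        # Write the caption next to the image it describes.
        with open(os.path.splitext(path)[0] + ".txt", "w") as f:
            f.write(caption)
```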
- FLiP Stack Weekly for 13-Feb-2023
dino
- Batch-wise processing or image-by-image processing? (DINO V1)
- [P] Image search with localization and open-vocabulary reranking.
I also implemented one based on the self-attention maps from DINO-trained ViTs. This worked pretty well when the attention maps were combined with some traditional computer vision to get bounding boxes. It seemed an OK compromise between domain specialization and location specificity. I did not try any saliency or gradient based methods, as I was not sure about their generalization and speed, respectively. I know LAVIS has an implementation of Grad-CAM, and it seems to work well in Plug-and-Play VQA.
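A rough sketch of that idea, not the poster's actual pipeline: take the CLS-token self-attention from a DINO ViT (the `torch.hub` entry point and `get_last_selfattention` come from the facebookresearch/dino repo), threshold it, and pull boxes out with classical CV. The thresholding scheme and image size are illustrative:

```python
# DINO self-attention -> binary mask -> contour bounding boxes.
import cv2
import numpy as np
import torch
from PIL import Image
from torchvision import transforms

model = torch.hub.load("facebookresearch/dino:main", "dino_vits8")
model.eval()

patch = 8  # ViT-S/8 patch size
tf = transforms.Compose([
    transforms.Resize((480, 480)),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
img = tf(Image.open("query.jpg").convert("RGB")).unsqueeze(0)  # illustrative path

with torch.no_grad():
    # Shape (1, heads, tokens, tokens); CLS-to-patch attention is row 0, cols 1:
    attn = model.get_last_selfattention(img)
heads = attn.shape[1]
grid = 480 // patch
cls_attn = attn[0, :, 0, 1:].reshape(heads, grid, grid).mean(0)

# Threshold the averaged attention map and extract connected-component boxes.
mask = (cls_attn > cls_attn.mean()).numpy().astype(np.uint8)
mask = cv2.resize(mask, (480, 480), interpolation=cv2.INTER_NEAREST)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours]  # (x, y, w, h) per region
print(boxes)
```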
- Unsupervised semantic segmentation
You will probably need an unwieldy amount of data and compute to reproduce it, so your best option would be to use the pretrained models available on GitHub.
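Not the paper's method, but a crude illustration of what a pretrained DINO backbone buys you here: cluster the patch features of one image with k-means to get pseudo-segments. `get_intermediate_layers` is from the facebookresearch/dino repo; the image path and cluster count are assumptions:

```python
# K-means over DINO ViT patch features as a toy unsupervised segmentation.
import torch
from PIL import Image
from sklearn.cluster import KMeans
from torchvision import transforms

model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
model.eval()

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
img = tf(Image.open("scene.jpg").convert("RGB")).unsqueeze(0)  # illustrative path

with torch.no_grad():
    # Token sequence from the last block; drop the leading CLS token.
    tokens = model.get_intermediate_layers(img, n=1)[0][0, 1:]  # (196, 384)

# Cluster the 14x14 patch grid into k pseudo-segments.
labels = KMeans(n_clusters=4, n_init=10).fit_predict(tokens.numpy())
print(labels.reshape(14, 14))
```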
- [D] Why Transformers are taking over the Computer Vision world: Self-Supervised Vision Transformers with DINO explained in 7 minutes!
[Full Explanation Post] [Arxiv] [Project Page]
- A major part of real-world AI has to be solved to make unsupervised, generalized full self-driving work, as the entire road system is designed for biological neural nets with optical imagers
Except he is actually talking about the new DINO model created by Facebook that was released on Friday, which is a new approach to image transformers for unsupervised segmentation. Here's its GitHub.
- [D] Paper Explained - DINO: Emerging Properties in Self-Supervised Vision Transformers (Full Video Analysis)
Code: https://github.com/facebookresearch/dino
- [R] DINO and PAWS: Advancing the state of the art in computer vision with self-supervised Transformers
What are some alternatives?
pytorch-widedeep - A flexible package for multimodal-deep-learning to combine tabular data with text and images using Wide and Deep models in Pytorch
simsiam-cifar10 - Code to train the SimSiam model on cifar10 using PyTorch
CLIP-Caption-Reward - PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022)
Transformer-SSL - This is an official implementation for "Self-Supervised Learning with Swin Transformers".
sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
pytorch-metric-learning - The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
robo-vln - Pytorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
DeepViewAgg - [CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in "Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation"
unsupervised-depth-completion-visual-inertial-odometry - Tensorflow and PyTorch implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)
linkis - Apache Linkis builds a computation middleware layer to facilitate connection, governance and orchestration between the upper applications and the underlying data engines.
lightly - A python library for self-supervised learning on images.