| | lama | segment-anything |
|---|---|---|
| Mentions | 17 | 58 |
| Stars | 7,361 | 44,715 |
| Growth | 2.7% | 1.5% |
| Activity | 5.1 | 0.0 |
| Last commit | 21 days ago | about 2 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
lama
-
Can someone please help me with inpainting settings to remove the subject from this image? I want to rebuild as much of the original background as possible.
You could try using ControlNet inpaint+lama locally, but the results aren't as good in my experience. Or you could try a local install of lama directly, but the setup process isn't very smooth.
-
[D] Which open source models can replicate Wonder Dynamics's drag-and-drop CG characters?
You may be able to remove the actor with lama. https://github.com/advimman/lama
-
ControlNet Update: [1.1.222] Preprocessor: inpaint_only+lama
LaMa: Resolution-robust Large Mask Inpainting with Fourier Convolutions (Apache-2.0 license) Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, Victor Lempitsky (Samsung Research and EPFL)
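The "Fourier convolutions" in the title refer to Fast Fourier Convolutions, which give every layer an image-wide receptive field: a pointwise product in the frequency domain is equivalent to a global (circular) convolution. A 1-D numpy toy of that equivalence, with names and sizes chosen for illustration rather than taken from the paper:

```python
import numpy as np

# Toy 1-D illustration of the idea behind Fast Fourier Convolutions:
# multiplying spectra in the frequency domain applies a filter whose
# receptive field spans the whole signal in a single step.
def fft_circular_conv(signal, kernel):
    """Circular convolution computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 0.0, 0.0, 0.0])  # identity kernel: output equals input
y = fft_circular_conv(x, k)
```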
-
[Inpainting] [Q] Want to remove a person/ group of people from an image.
You could try using LaMa: https://github.com/saic-mdal/lama.
-
[task] Python developer for two tasks, one $30, the other $50
Task 1 ($30): Write a script to prepare data for https://github.com/saic-mdal/lama. You should be able to prove on video that your script works properly after I give you some data and you train, and there's some acceptable output (not perfect, since I know how ML works).
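For context on what such a data-prep script might involve: LaMa trains on (image, mask) pairs, and one common approach is to synthesize random masks for each training image. A minimal numpy sketch, assuming a simple rectangular-mask scheme (the mask format here is an illustration, not LaMa's documented spec):

```python
import numpy as np

# Hypothetical data-prep helper: generate a random rectangular mask
# for an image, as one might do when preparing LaMa training pairs.
def random_rect_mask(h, w, rng):
    """Return an HxW uint8 mask with one random rectangle of 255s."""
    mask = np.zeros((h, w), dtype=np.uint8)
    y0, x0 = rng.integers(0, h // 2), rng.integers(0, w // 2)
    y1, x1 = y0 + rng.integers(1, h // 2), x0 + rng.integers(1, w // 2)
    mask[y0:y1, x0:x1] = 255
    return mask

rng = np.random.default_rng(0)
mask = random_rect_mask(64, 64, rng)
```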
-
Research Topics for Master/PHD
I have some interest in image inpainting, but the recent paper from Samsung - https://github.com/saic-mdal/lama - seems to have reached a stage where there isn't much room left for novelty. I am quite interested, but not confident that I can come up with a novel idea that does better than the proposed model, especially since they trained it on multiple GPUs for days, which is something I don't have.
- The Black Hole Photographs: Censored Images from America’s Great Depression
- Image inpainting tool powered by LaMa
- Resolution-Robust Large Mask Inpainting with Fourier Convolutions
-
[D] Paper Explained - Resolution-robust Large Mask Inpainting with Fourier Convolutions (w/ Author Interview)
Code: https://github.com/saic-mdal/lama
segment-anything
-
What things are happening in ML that we can't hear over the din of LLMs?
- segment anything: https://github.com/facebookresearch/segment-anything
-
Zero-Shot Prediction Plugin for FiftyOne
In computer vision, this is known as zero-shot learning, or zero-shot prediction, because the goal is to generate predictions without explicitly being given any example predictions to learn from. With the advent of high-quality multimodal models like CLIP and foundation models like Segment Anything, it is now possible to generate remarkably good zero-shot predictions for a variety of computer vision tasks.
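The mechanism behind CLIP-style zero-shot classification can be sketched in a few lines: embed the image and each candidate label's text, then pick the label whose embedding is most similar to the image. The embeddings below are made up for illustration; a real pipeline would obtain them from a model such as CLIP.

```python
import numpy as np

# Zero-shot classification by cosine similarity between an image
# embedding and a set of text (label) embeddings.
def zero_shot_classify(image_emb, text_embs, labels):
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    scores = txt @ img  # cosine similarity of each label to the image
    return labels[int(np.argmax(scores))]

labels = ["cat", "dog"]
text_embs = np.array([[1.0, 0.0], [0.0, 1.0]])  # made-up label embeddings
image_emb = np.array([0.9, 0.1])                # made-up image embedding
label = zero_shot_classify(image_emb, text_embs, labels)
```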
-
Generate new version of a living-room with specific furniture
Render a new living room using a controlnet model of your choice to keep the basic structure. Load the original living room image and look for the furniture you want to change with a Segment Anything Model to create a mask. Use that mask on the new living room to inpaint new furniture.
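The masking step in that workflow can be sketched with plain numpy: SAM-style masks are boolean HxW arrays, and one simple way to prepare an inpainting input is to blank out the masked region. The arrays here are synthetic stand-ins for a real image and mask.

```python
import numpy as np

# Apply a boolean segmentation mask to an image, zeroing the masked
# region so an inpainting model knows what to fill in.
def apply_mask(image, mask):
    """Return a copy of an HxWx3 image with masked pixels zeroed."""
    out = image.copy()
    out[mask] = 0
    return out

image = np.full((4, 4, 3), 255, dtype=np.uint8)  # all-white stand-in image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # mark a 2x2 region (e.g. the furniture to replace)
result = apply_mask(image, mask)
```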
-
How Do I read Github Pages? It is so exhausting, I always struggle, oh and I am on windows
Hello, so I am trying to run some Python scripts from this page: https://github.com/facebookresearch/segment-anything, and I found myself spending hours without succeeding in even understanding what is written on that page. And I think this is ultimately related to programming.
-
Autodistill: A new way to create CV models
Some of the foundation/base models include: * GroundedSAM (Segment Anything Model) * DETIC * GroundingDINO
-
How to Fine-Tune Foundation Models to Auto-Label Training Data
Webinar from last week on how to fine-tune VFMs, specifically Meta's Segment Anything Model (SAM).
What you'll need to follow along the fine-tuning walkthrough:
- Images, ground-truth masks, and optionally, prompts from the Stamp Verification (StaVer) Dataset on Kaggle (https://www.kaggle.com/datasets/rtatman/stamp-verification-s...)
- Model weights for SAM, downloaded from the official GitHub repo (https://github.com/facebookresearch/segment-anything)
- A good understanding of the model architecture: the Segment Anything paper (https://ai.meta.com/research/publications/segment-anything/)
- GPU infra: an NVIDIA A100 should do for this fine-tuning
- A data curation and model evaluation tool: Encord Active (https://github.com/encord-team/encord-active)
- Colab walkthrough for fine-tuning: https://colab.research.google.com/github/encord-team/encord-...
I'd love to get your thoughts and feedback. Thank you.
-
Deploying a ML model (segment-anything) to GCP - how would you do it?
I now want users to be able to use the segment-anything model (https://github.com/facebookresearch/segment-anything) in my app. It's in PyTorch, if that matters. How it should work is that
-
The Mathematics of Training LLMs
Yeah, they are great and some of the reason (up the causal chain) for some of the work I've done! Seems really fun! <3 :))))
Facebook's Segment Anything Model I think has a lot of potentially really fun usecases. Plaintext description -> Network segmentation (https://github.com/facebookresearch/segment-anything/blob/ma...) Not sure if that's what you're looking for or not, but I love that impressing your kids is where your heart is. That kind of parenting makes me very, very, very, happy. :') <3
-
How hard is it to "code" a tool based on segment-anything and Stable diffusion ?
There are some snippets of Python code in the segment-anything GitHub README that show how to do this. Once you have it installed, you can import functions from the segment_anything module, load a segmentation model, and generate masks for input images that match the prompt of your choice. You don't need Stable Diffusion for this, but you could load it through diffusers to do things like inpaint your images using the masks.
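For a sense of what working with the module's output looks like: the automatic mask generator in segment-anything returns a list of dicts, each with a boolean "segmentation" array and an "area" pixel count, among other fields. The sketch below post-processes output of that shape, using synthetic dicts in place of a real model run:

```python
import numpy as np

# Post-processing sketch for SAM-style mask output: keep only masks
# whose area is above a threshold, discarding tiny specks. The dicts
# below are synthetic stand-ins for real generator output.
def filter_masks(masks, min_area):
    return [m for m in masks if m["area"] >= min_area]

seg_small = np.zeros((4, 4), dtype=bool)
seg_small[0, 0] = True      # 1-pixel speck
seg_big = np.zeros((4, 4), dtype=bool)
seg_big[1:3, 1:3] = True    # 4-pixel region
masks = [
    {"segmentation": seg_small, "area": int(seg_small.sum())},
    {"segmentation": seg_big, "area": int(seg_big.sum())},
]
kept = filter_masks(masks, min_area=2)
```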
- The less i know the better
What are some alternatives?
IOPaint - Image inpainting tool powered by SOTA AI models. Remove any unwanted object, defect, or person from your pictures, or erase and replace (powered by Stable Diffusion) anything in your pictures.
Segment-Everything-Everywhere-All-At-Once - [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
Torrent-To-Google-Drive-Downloader-v3 - Simple notebook to stream torrent files to Google Drive using Google Colab and python3.
backgroundremover - Background Remover lets you remove backgrounds from images and video using AI, with a simple command-line interface that is free and open source.
strv-ml-mask2face - Virtually remove a face mask to see what a person looks like underneath
ComfyUI-extension-tutorials
cleanup.pictures - Code for https://cleanup.pictures
stable-diffusion-webui-Layer-Divider - Layer-Divider, an extension for stable-diffusion-webui using the segment-anything model (SAM)
Colab-Crypto-Mining - Cryptocurrency Mining Experiments on Google CoLab Notebooks
Grounded-Segment-Anything - Grounded-SAM: Marrying Grounding-DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
dl-colab-notebooks - Try out deep learning models online on Google Colab
GroundingDINO - Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"