ImageNetV2
A new test set for ImageNet (by modestyachts)
photoguard
Raising the Cost of Malicious AI-Powered Image Editing (by MadryLab)
| | ImageNetV2 | photoguard |
|---|---|---|
| Mentions | 1 | 7 |
| Stars | 225 | 531 |
| Growth | 0.4% | 2.1% |
| Activity | 2.1 | 1.8 |
| Last commit | about 1 year ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ImageNetV2
Posts with mentions or reviews of ImageNetV2.
We have used some of these posts to build our list of alternatives
and similar projects.
- Assignment 2 Part C.3 - Which Dataset?
Part C.3 seems to expect us to use the ImageNet dataset, but ImageNet is no longer publicly available - it can only be downloaded from http://image-net.org/download-images, and you need to sign up for access. Which dataset are we expected to use for the question, then - would using ImageNetV2 (https://github.com/modestyachts/ImageNetV2) be fine? Would it be okay to split it into a train set and a test set (with the test set having a length of 128, as requested in the question), and use the test set for evaluation and the train set for retraining our modified AlexNet model?
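The fixed-size split the post describes can be sketched in plain Python. This is a generic illustration, not ImageNetV2-specific code: the `data` list below is a stand-in for real image/label pairs, and the test-set size of 128 comes from the assignment mentioned in the post.

```python
import random

def split_dataset(samples, test_size=128, seed=0):
    """Shuffle and split samples into (train, test) with a fixed-size test set."""
    rng = random.Random(seed)            # fixed seed for a reproducible split
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    test_idx = set(indices[:test_size])
    train = [s for i, s in enumerate(samples) if i not in test_idx]
    test = [s for i, s in enumerate(samples) if i in test_idx]
    return train, test

# Stand-in for real image/label pairs:
data = [(f"img_{i}.jpg", i % 1000) for i in range(10000)]
train, test = split_dataset(data, test_size=128)
print(len(train), len(test))  # 9872 128
```

Seeding the shuffle keeps the split reproducible, so evaluation numbers on the 128-example test set stay comparable across runs.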
photoguard
Posts with mentions or reviews of photoguard.
We have used some of these posts to build our list of alternatives
and similar projects.
- PhotoGuard is a service for protecting images from neural networks. It defends against machine-learning-based photo-editing models such as Stable Diffusion.
- Are there any tools for "Defend Against the Dark Arts" of diffusion?
I've been searching for a tool to create diffusion-resistant images, and I came across the Photoguard repository and tried it; the results weren't great (very strange colorful noise on the image, which makes it unusable).
- Raising the Cost of Malicious AI-Powered Image Editing
- PhotoGuard: Defending Against Diffusion-Based Image Manipulation
- Welcome to a community to discuss what to do about the negative effects of AI art
- PhotoGuard can help to protect images from being edited by Stable Diffusion with inpainting; it can theoretically also help to protect artwork from being used to fine-tune a model: https://github.com/MadryLab/photoguard
- Somebody implemented a photoguard against inpainting.
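The protection the posts above describe is based on adversarial perturbations. PhotoGuard's actual method attacks a diffusion model's image encoder; as a rough illustration of the general idea only (not the repository's code), here is a PGD-style loop that perturbs an image within an L-infinity budget so that a toy linear "encoder" maps it toward an attacker-chosen target latent. The encoder matrix `W`, the budget `eps`, and the zero target are all illustrative assumptions.

```python
import numpy as np

def pgd_immunize(image, encoder_w, target, eps=8/255, step=2/255, iters=40):
    """PGD-style perturbation: nudge the encoder's output toward `target`
    while keeping the perturbation within an L-infinity budget of `eps`."""
    x = image.copy()
    for _ in range(iters):
        # loss = ||W x - target||^2; its gradient w.r.t. x is 2 W^T (W x - target)
        grad = 2.0 * encoder_w.T @ (encoder_w @ x - target)
        x = x - step * np.sign(grad)                 # signed gradient descent step
        x = np.clip(x, image - eps, image + eps)     # project back into the budget
        x = np.clip(x, 0.0, 1.0)                     # keep a valid pixel range
    return x

rng = np.random.default_rng(0)
img = rng.random(64)               # toy flattened "image" in [0, 1]
W = rng.standard_normal((16, 64))  # toy stand-in for an image encoder
tgt = np.zeros(16)                 # drive the latent toward zero
protected = pgd_immunize(img, W, tgt)
```

The perturbation stays imperceptibly small (at most 8/255 per pixel) while moving the encoder's output away from a faithful encoding of the image, which is why edits built on that latent degrade. The "colorful noise" complaint in the post above corresponds to using a budget or step size large enough to become visible.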
What are some alternatives?
When comparing ImageNetV2 and photoguard you can also consider the following projects:
web-stable-diffusion - Bringing stable diffusion models to web browsers. Everything runs inside the browser with no server support.
timm-vis - Visualizer for PyTorch image models
stable-diffusion-webui-colab - stable diffusion webui colab