-
big-sleep
A simple command line tool for text to image generation, using OpenAI's CLIP and a BigGAN. The technique was originally created by https://twitter.com/advadnoun
-
stylized-neural-painting
Official PyTorch implementation of the paper "Stylized Neural Painting" (CVPR 2021).
-
clip-glass
Repository for "Generating images from caption and vice versa via CLIP-Guided Generative Latent Space Search"
-
deep-daze
A simple command line tool for text to image generation using OpenAI's CLIP and SIREN (an implicit neural representation network). The technique was originally created by https://twitter.com/advadnoun
-
StyleCLIP
Official Implementation for "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" (ICCV 2021 Oral)
-
TediGAN
[CVPR 2021] PyTorch implementation for TediGAN: Text-Guided Diverse Face Image Generation and Manipulation
-
Colab-deep-daze
Discontinued. A simple command line tool for text to image generation using OpenAI's CLIP and SIREN (an implicit neural representation network)
-
AuViMi
AuViMi stands for audio-visual mirror. The idea is to have CLIP generate its interpretation of what your webcam sees, combined with the words that are spoken.
(Added Feb. 5, 2021) Big Sleep - Colaboratory by lucidrains. Uses BigGAN to generate images. The GitHub repo has a local machine version. GitHub. How to use the latest features in Colab.
(Added Mar. 9, 2021) PaintCLIP.ipynb - Colaboratory by advadnoun. Uses Stylized Neural Painter to generate images. As of time of writing, this gave me an error message.
(Added Feb. 5, 2021) Text2Image_v3 - Colaboratory by tg_bomze. Uses BigGAN (default) or Sigmoid to generate images. GitHub.
(Added Feb. 5, 2021) ClipBigGAN.ipynb - Colaboratory by eyaler. Uses BigGAN to generate images/videos. GitHub. Notebook copy by levindabhi.
(Added Feb. 5, 2021) Story2Hallucination.ipynb - Colaboratory by bonkerfield. Uses BigGAN to generate images/videos. GitHub.
(Added Feb. 5, 2021) CLIP-GLaSS.ipynb - Colaboratory by Galatolo. Uses BigGAN (default) or StyleGAN to generate images. The GPT2 config is for image-to-text, not text-to-image. GitHub.
(Added Feb. 5, 2021) CLIP + TADNE (pytorch) v2 - Colaboratory by nagolinc. Uses TADNE ("This Anime Does Not Exist") to generate images. Instructions and examples. GitHub. Notebook copy (v2) by levindabhi.
(Added Feb. 5, 2021) Deep Daze - Colaboratory by lucidrains. Uses SIREN to generate images. The GitHub repo has a local machine version. GitHub. Notebook copy by levindabhi.
(Added Feb. 15, 2021) StyleCLIP - Colaboratory by orpatashnik. Uses StyleGAN to generate images. GitHub. Twitter reference. Reddit post.
(Added Feb. 15, 2021) StyleCLIP by vipermu. Uses StyleGAN to generate images.
(Added Feb. 23, 2021) TediGAN - Colaboratory by weihaox. Uses StyleGAN to generate images. GitHub. I got error "No pre-trained weights found for perceptual model!" when I used the Colab notebook, which was fixed when I made the change mentioned here. After this change, I still got an error in the cell that displays the images, but the results were in the remote file system. Use the "Files" icon on the left to browse the remote file system.
(Added Feb. 24, 2021) Colab-BigGANxCLIP.ipynb - Colaboratory by styler00dollar. Uses BigGAN to generate images. "Just a more compressed/smaller version of that [advadnoun's] notebook". GitHub.
(Added Feb. 24, 2021) clipping-CLIP-to-GAN by cloneofsimo. Uses FastGAN to generate images.
(Added Feb. 24, 2021) Colab-deep-daze - Colaboratory by styler00dollar. Uses SIREN to generate images. I did not get this notebook to work, but your results may vary. GitHub.
(Added Feb. 28, 2021) DALLECLIP by vipermu. Uses DALL-E's discrete VAE (variational autoencoder) component to generate images. Twitter reference.
(Added Mar. 1, 2021) Aphantasia.ipynb - Colaboratory by eps696. Uses FFT (Fast Fourier Transform) from Lucent/Lucid to generate images. GitHub. Twitter reference. Example #1. Example #2.
(Added Mar. 7, 2021) StyleGAN2-CLIP-approach.ipynb - Colaboratory by l4rz. Uses StyleGAN to generate images. GitHub. Twitter reference.
(Added Mar. 8, 2021) CLIP Style Transfer Test.ipynb - Colaboratory by Zasder3. Uses VGG19's conv4_1 to generate images. GitHub. Twitter reference.
(Added Mar. 9, 2021) VectorAscent by ajayjain. Uses diffvg to generate images.
(Added Mar. 16, 2021) AuViMi by NotNANtoN. Uses BigGAN or SIREN to generate images.
(Added Mar. 23, 2021) ClipMeshOptimize.ipynb - Colaboratory by EvgenyKashin. Uses PyTorch3D to generate images. GitHub.
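Nearly every notebook above shares the same core technique: start from a generator's latent vector and gradient-ascend it so that CLIP's similarity between the generated image and the text prompt increases. Below is a minimal toy sketch of that loop. The real notebooks use PyTorch autograd with BigGAN/StyleGAN/SIREN and CLIP's actual encoders; here `W`, `text_embedding`, and the numerical gradient are illustrative stand-ins, not real APIs.

```python
import numpy as np

# Toy stand-ins: in the notebooks above, the "generator + CLIP image encoder"
# would be BigGAN/StyleGAN/SIREN followed by CLIP; here it is one linear map.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))           # stand-in generator + image encoder
text_embedding = rng.standard_normal(8)    # stand-in CLIP text embedding
text_embedding /= np.linalg.norm(text_embedding)

def clip_score(z):
    """Cosine similarity between the 'image' embedding W@z and the text embedding."""
    img = W @ z
    return float(img @ text_embedding / (np.linalg.norm(img) + 1e-8))

def clip_score_grad(z, eps=1e-5):
    """Numerical gradient of the score w.r.t. the latent (toy substitute for autograd)."""
    base = clip_score(z)
    grad = np.zeros_like(z)
    for i in range(len(z)):
        z_step = z.copy()
        z_step[i] += eps
        grad[i] = (clip_score(z_step) - base) / eps
    return grad

z = rng.standard_normal(16)                # latent vector, like BigGAN's z
before = clip_score(z)
for _ in range(200):                       # gradient ascent on the CLIP score
    z += 0.1 * clip_score_grad(z)
after = clip_score(z)
```

After the loop, `after` should exceed `before`: the latent has been nudged toward an "image" that better matches the "prompt". In the real notebooks this is exactly why the image slowly morphs toward the text description over iterations.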