stylegan vs stylegan2-pytorch

| | stylegan | stylegan2-pytorch |
|---|---|---|
| Mentions | 31 | 1,990 |
| Stars | 13,982 | 3,655 |
| Growth | 0.2% | - |
| Activity | 0.0 | 2.7 |
| Last commit | about 2 months ago | about 1 month ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stylegan
-
An AI artist isn't an artist
I've been following generative AI since 2017, when Nvidia released their first GAN paper, and the results have always fascinated me. I trained my own models with their repo, then experimented with other open-source projects. I went through the pain of assembling my own dataset, tweaking code parameters to achieve what I was looking for, and dealing with all kinds of hardware/software issues. I know it's not easy. (Here's a screenshot of a motorbike GAN model I was training in 2018, taken after 5 hours of training on a GTX 1080: https://imgur.com/a/SIULFhR. Or this, cinema camera output from another locally trained model.) So yeah, I have a couple of ideas about how generative AI works. Yup, things were that bad a few years ago; the technology has come a long way. Using and setting up something like Stable Diffusion with the AUTOMATIC1111 webui isn't really a complex process. Though generating AI art locally is always gonna feel more rewarding than using a cloud-based service.
-
Clearview AI scraped 30 billion images from Facebook and gave them to cops: it puts everyone into a 'perpetual police line-up'
Their algorithm is public, you could do it yourself if you have the proper hardware: https://github.com/NVlabs/stylegan
-
StyleGAN-T Nvidia, 30x Faster than SD?
Umm, StyleGAN was the first decent image-generation model, and it was producing great images from random seeds 5 years ago. Now, that's with the obvious caveat that each model was trained to produce one specific type of image, and it helped immensely if the training images were all aligned the same way. Diffusion models are certainly the trendy current architecture for image generation, but AFAIK there's no fundamental theoretical limitation on the output quality of any architecture, beyond the general rule that more parameters are better.
- The Concept Art Association updates their AI-restricting gofundme campaign, revealing their lack of AI understanding & nefarious plans! [detailed breakdown]
- This was taken outdoors with no special lighting
-
What the F**k
Jokes aside, ML moves extremely fast and our field is quickly advancing. The honest truth is that no researcher can keep up outside of their own extremely niche corner. I'll show you an example. Here's what state-of-the-art image generation looked like in 2014 and 2018, and here is today (which is now highly controllable using text prompts instead of data prompts).
- Garfield
-
Teaching AI to Generate New Pokemon
The fundamental technology we will use in this work is a generative adversarial network. Specifically, the StyleGAN variant.
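The adversarial idea behind a GAN can be sketched in a few lines. Below is a hypothetical 1-D toy (nothing like StyleGAN's real architecture): the "generator" is a single parameter, the "discriminator" is a logistic function, and the two take turns updating so the generator's output drifts toward the real data.

```python
# Toy 1-D GAN sketch (illustrative assumption, not StyleGAN):
# generator G() = theta, discriminator D(x) = sigmoid(a*x + b).
import math

def sigmoid(x):
    # numerically safe logistic function
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

real_value = 3.0      # the "dataset": a single real scalar
theta = 0.0           # generator parameter (its output)
a, b = 0.1, 0.0       # discriminator parameters
lr = 0.02

for step in range(3000):
    # --- discriminator update: push D(real) up and D(fake) down ---
    fake = theta
    d_real = sigmoid(a * real_value + b)
    d_fake = sigmoid(a * fake + b)
    a += lr * ((1 - d_real) * real_value - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)
    # --- generator update (non-saturating loss): push D(fake) up ---
    d_fake = sigmoid(a * theta + b)
    theta += lr * (1 - d_fake) * a

print(theta)  # drifts from 0.0 toward the real data value
```

StyleGAN keeps this same minimax training loop but replaces both players with deep convolutional networks and adds its style-based generator on top.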
-
A100 vs A6000 vs 3090 for computer vision and FP32/FP64
Based on my findings, we don't really need FP64 unless it's for certain medical applications. But "The Best GPUs for Deep Learning in 2020 — An In-depth Analysis" suggests the A100 outperforms the A6000 by ~50% in DL. Also, the StyleGAN project (GitHub - NVlabs/stylegan: StyleGAN - Official TensorFlow Implementation) uses an NVIDIA DGX-1 with 8 Tesla V100 16 GB GPUs (FP32 = 15 TFLOPS) to train on a dataset of high-res 1024x1024 images. I'm a bit uncertain whether my specific tasks would require FP64, since my dataset is also high-res images. If not, can I assume 5x A6000 (120 GB total) could provide similar results for StyleGAN?
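The FP32-vs-FP64 question above is really about accumulated rounding error. A quick stdlib-only sketch (simulating FP32 by round-tripping values through `struct`, since plain Python floats are FP64; a generic illustration, not a measurement of StyleGAN) shows the precision gap on a long accumulation:

```python
# Compare drift of an FP64 accumulator vs a simulated FP32 accumulator
# when summing 0.1 a hundred thousand times (exact answer: 10000.0).
import struct

def to_f32(x: float) -> float:
    """Round a Python float (FP64) to the nearest FP32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

n = 100_000
s64 = 0.0            # FP64 accumulator
s32 = 0.0            # simulated FP32 accumulator
for _ in range(n):
    s64 += 0.1
    s32 = to_f32(s32 + to_f32(0.1))

print(abs(s64 - 10000.0))  # tiny: FP64 carries ~15-16 significant digits
print(abs(s32 - 10000.0))  # visibly off: FP32 carries only ~7
```

This is why FP32 (or mixed precision) is the norm for image-model training, where per-step noise is tolerable, while FP64 matters mainly in domains where this kind of drift is unacceptable.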
-
[D] Which gpu should I choose?
Yes, that's what I thought. But StyleGAN (https://github.com/NVlabs/stylegan) uses an NVIDIA DGX-1 with 8 Tesla V100 16 GB GPUs (FP32 = 15 TFLOPS) for training; I'm not sure if that's related to its high-res training images or something else.
stylegan2-pytorch
- Wikipedia No Longer Considers CNET "Generally Reliable" Source After AI Scandal
-
Discord Clone Using Next.js and Tailwind - Part 3: Channel List
import { useState } from 'react';
import { useChatContext } from 'stream-chat-react';

export default function ChannelListBottomBar(): JSX.Element {
  const { client } = useChatContext();
  const [micActive, setMicActive] = useState(false);
  const [audioActive, setAudioActive] = useState(false);

  return (
    <div>
      {client.user?.image && (
        <div>
          <img src={client.user.image} alt='' />
        </div>
      )}
      <p>
        <span>{client.user?.name}</span>
        <span>{client.user?.online ? 'Online' : 'Offline'}</span>
      </p>
      <button onClick={() => setMicActive((currentValue) => !currentValue)}>
        {/* mic icon */}
      </button>
      <button onClick={() => setAudioActive((currentValue) => !currentValue)}>
        {/* audio icon */}
      </button>
      <button>{/* settings icon */}</button>
    </div>
  );
}
-
Realism Engine SDXL v2.0 just released
I wonder if we will ever get a realism model that can produce normal faces like https://thispersondoesnotexist.com, instead of the super-symmetrical faces of models.
-
Spongebob!!!
This has been in circulation since a while before AI images took off, and AI images certainly weren't convincing back then. You know the old "try to name one thing in this image" macro? Pretty sure that was AI-generated. There was also thispersondoesnotexist.com, which was always pretty good, but of course it is
- 🤣🤣🤣
-
Many AI images are photorealistic, but have a strangely empty, "soulless" expression. Something is wrong, but it's hard to say what
Not the workflow for these images, but if you want an easy way to give your gens a more natural look, try using images from thispersondoesnotexist.com with the IPAdapter face model.
-
‘Nudify’ Apps That Use AI to ‘Undress’ Women in Photos Are Soaring in Popularity
...then they just use the app on generated images of not real people (potentially based on specific inputs to remind you of a real person).
-
Quan Chi is from Massachusetts
Also, we can generate very realistic faces with AI, e.g. https://thispersondoesnotexist.com/ (an old example), and fully 3D faces are doable at this point. So in another 5-10 years, a low-profile model like him wouldn't even be hired for this; they would just generate a digital face model. Unionizing will only speed up studios' adoption of digital replacements.
-
Sketchy Youtube comments talking about Hostinger
It looks like they all use a profile picture made with https://thispersondoesnotexist.com/
- Lorem picsum but for avatars?
What are some alternatives?
pix2pix - Image-to-image translation with conditional adversarial nets
DeepFaceLab - DeepFaceLab is the leading software for creating deepfakes.
stylegan2 - StyleGAN2 - Official TensorFlow Implementation
awesome-pretrained-stylegan2 - A collection of pre-trained StyleGAN 2 models to download
lucid-sonic-dreams - A Python package that syncs GAN-generated visuals to music
VQGAN-CLIP - Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
DeOldify - A Deep Learning based project for colorizing and restoring old images (and video!)
stylegan2-ada - StyleGAN2 with adaptive discriminator augmentation (ADA) - Official TensorFlow implementation
aphantasia - CLIP + FFT/DWT/RGB = text to image/video
mediapipe - Cross-platform, customizable ML solutions for live and streaming media.
ffhq-dataset - Flickr-Faces-HQ Dataset (FFHQ)
dalle-mini - DALL·E Mini - Generate images from a text prompt