-
transformer-deploy
Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
Regarding CPU inference: quantization is very easy and is supported by Transformer-deploy. However, transformer performance on CPU is very low outside of corner cases (no batching, very short sequences, distilled models), and the latest Intel-generation CPU instances on AWS, like C6 or M6, are quite expensive compared to a cheap GPU like an Nvidia T4. Put differently, unless you are OK with slow inference and can use a small instance (for a PoC, for instance), CPU inference of transformers is probably not a good idea.
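To illustrate how easy CPU quantization is, here is a minimal sketch of PyTorch dynamic int8 quantization. The toy model below is a stand-in for a transformer feed-forward block, not code from Transformer-deploy; the speedup you actually see depends heavily on batch size and sequence length, as noted above.

```python
import torch

# Toy stand-in for a transformer feed-forward block (hidden size 768).
model = torch.nn.Sequential(
    torch.nn.Linear(768, 3072),
    torch.nn.ReLU(),
    torch.nn.Linear(3072, 768),
)

# Dynamic quantization: Linear weights are stored as int8, activations
# are quantized on the fly at inference time. No calibration data needed.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 768)
with torch.inference_mode():
    out = quantized(x)
print(out.shape)  # same shape as the fp32 model's output
```

For a real Hugging Face model you would quantize the loaded `AutoModel` the same way, then benchmark against the fp32 baseline before committing to CPU serving.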
If you are ever interested in looking at pruning, I'd be happy to integrate my open source library https://github.com/marsupialtail/sparsednn. The latest update adds unstructured and structured sparse int8 kernels, with a 3x speedup over dense int8 at 90 percent sparsity with 1x4 blocks.
Have you tried the new Torch-TensorRT compiler from NVIDIA?
https://github.com/open-mmlab/mmrazor — it may work for you.