[P] 4.5 times faster Hugging Face transformer inference by modifying some Python AST

This page summarizes the projects mentioned and recommended in the original post on /r/MachineLearning
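The speedup in the title comes from rewriting the Python source of Hugging Face models before export. As a rough illustration of the general technique only (not the post's actual patches; `slow_gelu` and `fast_gelu` are hypothetical names), here is a minimal sketch of swapping a function call with Python's `ast` module:

```python
import ast
import textwrap

# Toy source to patch; the function names are hypothetical stand-ins.
source = textwrap.dedent("""
def forward(x):
    return slow_gelu(x)
""")

class SwapCall(ast.NodeTransformer):
    """Replace calls to slow_gelu with calls to fast_gelu."""
    def visit_Call(self, node):
        self.generic_visit(node)  # rewrite nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == "slow_gelu":
            node.func = ast.Name(id="fast_gelu", ctx=ast.Load())
        return node

tree = ast.fix_missing_locations(SwapCall().visit(ast.parse(source)))
print(ast.unparse(tree))  # ast.unparse requires Python 3.9+
```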

  • transformer-deploy

    Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀

  • Regarding CPU inference: quantization is very easy and is supported by transformer-deploy. However, transformer performance on CPU is very low outside of corner cases (no batching, very short sequences, distilled models), and the latest Intel-generation CPU instances on AWS, such as C6 or M6, are quite expensive compared to a cheap GPU like an Nvidia T4. Put differently, unless you are fine with slow inference on a small instance (for a PoC, for instance), CPU inference is probably not a good idea for transformers.
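    For context, a minimal sketch of the kind of CPU quantization the comment calls "very easy", using stock PyTorch dynamic quantization (an illustration, not transformer-deploy's own API):

    ```python
    import torch
    from transformers import AutoModelForSequenceClassification

    # Downloads a distilled model; any Hugging Face checkpoint works here.
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased"
    )
    model.eval()

    # Quantize all Linear layers to int8 for CPU inference.
    quantized = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )
    ```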

  • sparsednn

    Fast sparse deep learning on CPUs

  • If you are ever interested in looking at pruning, I'd be happy to integrate my open-source library, https://github.com/marsupialtail/sparsednn. The latest update adds unstructured and structured sparse int8 kernels: a 3x speedup over dense int8 at 90 percent sparsity with 1x4 blocks.
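    As a concrete reference point for the 90 percent sparsity figure, here is a minimal sketch of unstructured magnitude pruning with stock PyTorch (illustrative only; sparsednn provides the kernels that then exploit the resulting sparsity):

    ```python
    import torch
    import torch.nn.utils.prune as prune

    linear = torch.nn.Linear(768, 768)

    # Zero out the 90% smallest-magnitude weights (unstructured pruning).
    prune.l1_unstructured(linear, name="weight", amount=0.9)
    prune.remove(linear, "weight")  # bake the mask into the weight tensor

    print((linear.weight == 0).float().mean().item())  # ~0.9
    ```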

  • TensorRT

    PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT (by pytorch)

  • Have you tried the new Torch-TensorRT compiler from NVIDIA?
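    For reference, the library's entry point is torch_tensorrt.compile; below is a minimal sketch assuming a CUDA machine with TensorRT installed (the toy model is illustrative, not from the original post):

    ```python
    import torch
    import torch_tensorrt  # pip install torch-tensorrt

    model = torch.nn.Sequential(
        torch.nn.Linear(128, 128),
        torch.nn.ReLU(),
    ).eval().cuda().half()

    # Compile the module to a TensorRT-backed one, allowing fp16 kernels.
    trt_model = torch_tensorrt.compile(
        model,
        inputs=[torch_tensorrt.Input((1, 128), dtype=torch.half)],
        enabled_precisions={torch.half},
    )

    out = trt_model(torch.randn(1, 128, device="cuda", dtype=torch.half))
    ```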

  • mmrazor

    OpenMMLab Model Compression Toolbox and Benchmark.

  • https://github.com/open-mmlab/mmrazor; it may work for you.

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • Learn TensorRT optimization

    2 projects | /r/computervision | 6 Feb 2023
  • FLaNK Stack 05 Feb 2024

    49 projects | dev.to | 5 Feb 2024
  • [D] Is there an affordable way to host a diffusers Stable Diffusion model publicly on the Internet for "real-time"-inference? (CPU or Serverless GPU?)

    1 project | /r/MachineLearning | 6 Dec 2022
  • [D]deploy stable diffusion

    1 project | /r/MachineLearning | 27 Nov 2022
  • 30% Faster than xformers? voltaML vs xformers stable diffusion - NVIDIA 4090

    1 project | /r/StableDiffusion | 25 Nov 2022