-
stable-fast
An inference performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs.
-
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
NOTE:
The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives.
Hence, a higher number indicates a more popular project.