FP8 quantized results are bad compared to int8 results

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • TensorRT-LLM

    TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines. A minimal usage sketch of the Python API follows this list.

  • I followed the instructions at https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/llama to convert the float16 Llama 2 13B checkpoint to FP8 and build a TensorRT-LLM engine; a standalone FP8-vs-int8 precision sketch also follows this list.
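
As a rough illustration of the Python API described in the TensorRT-LLM item above, the sketch below uses the high-level LLM/SamplingParams interface. It assumes a recent TensorRT-LLM release that ships this interface and a locally accessible Llama 2 13B checkpoint; the model id and prompt are placeholders, and this is not the exact engine-build flow from the post.

    # Minimal sketch, assuming a recent TensorRT-LLM release that ships the
    # high-level LLM API; the model id below is a placeholder.
    from tensorrt_llm import LLM, SamplingParams

    # Building the TensorRT engine happens when the model is first loaded.
    llm = LLM(model="meta-llama/Llama-2-13b-hf")
    params = SamplingParams(max_tokens=64)

    # generate() takes a list of prompts and returns one result per prompt.
    for output in llm.generate(["What does FP8 quantization change?"], params):
        print(output.outputs[0].text)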

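The comparison in the post title, FP8-quantized output quality versus int8, ultimately hinges on how much numerical error each format introduces into the weights and activations. The sketch below is a standalone illustration, not part of the TensorRT-LLM pipeline: it measures the round-trip error of a weight-like tensor cast to FP8 E4M3 versus symmetric per-tensor int8 quantization. It assumes PyTorch 2.1 or newer for the float8_e4m3fn dtype; the tensor shape and scale are arbitrary.

    # Standalone illustration (assumed setup, not the TensorRT-LLM pipeline):
    # round-trip error of FP8 E4M3 vs. symmetric per-tensor int8.
    import torch

    # Weight-like values at a scale typical of an LLM linear layer.
    w = torch.randn(4096, 4096) * 0.02

    # FP8 E4M3: cast down and back up (requires PyTorch >= 2.1).
    w_fp8 = w.to(torch.float8_e4m3fn).to(torch.float32)

    # int8: symmetric per-tensor quantization with an absmax scale.
    scale = w.abs().max() / 127.0
    w_int8 = torch.round(w / scale).clamp(-128, 127) * scale

    print("FP8  mean abs error:", (w - w_fp8).abs().mean().item())
    print("int8 mean abs error:", (w - w_int8).abs().mean().item())
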
NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • How to Use ChatGPT to Kickstart Your Project and Begin Your Journey as a Programmer

    2 projects | dev.to | 1 Jun 2024
  • CERN Root

    1 project | news.ycombinator.com | 1 Jun 2024
  • Testing Sync at Dropbox

    1 project | news.ycombinator.com | 31 May 2024
  • Quickly checking whether a string needs escaping

    1 project | news.ycombinator.com | 31 May 2024
  • How to Copy a File From a 30-year-old Laptop

    1 project | news.ycombinator.com | 31 May 2024