80% faster, 50% less memory, 0% accuracy loss Llama finetuning

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • hyperlearn

    2-2000x faster ML algos, 50% less memory usage, works on all hardware - new and old.

  • Sorry about that - I'm super new to pricing and stuff so it might seem off since I'm literally making the plans with my bro as we go along.

    If you don't believe the timings, I was the author of Hyperlearn https://github.com/danielhanchen/hyperlearn which makes ML faster - I also listed the papers which cite the algos.

    I also used to work at NVIDIA, making TSNE 2000x faster on GPUs, along with other algorithms such as randomized SVD and sparse matrix multiplies (a generic randomized SVD sketch appears after this list).

    If you have any suggestions on a more appropriate pricing strategy - I'm all ears!!

    I really don't know much about pricing and the open core model, so I'm literally making it up as I go.

  • unsloth

    Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory (a minimal finetuning sketch appears after this list).

  • This seems to just be a link to the Unsloth Github repo[0], which in turn is the free version of Unsloth Pro/Max[1]. Maybe the link should be changed?

    [0]: https://github.com/unslothai/unsloth

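As a concrete illustration of the kind of algorithm mentioned in the Hyperlearn comment above, below is a minimal randomized SVD in NumPy in the style of Halko et al. (2011). It is a generic sketch of the technique only, not Hyperlearn's or NVIDIA's implementation, and the oversampling and power-iteration settings are illustrative defaults.

    import numpy as np

    def randomized_svd(A, rank, n_oversamples=10, n_iter=4, seed=0):
        # Approximate truncated SVD of A via random projection.
        # Generic sketch of the technique; the settings are illustrative.
        rng = np.random.default_rng(seed)
        m, n = A.shape
        k = min(rank + n_oversamples, min(m, n))

        # Sample the range of A with a Gaussian test matrix, then orthonormalize.
        Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k)))

        # A few power iterations sharpen the basis when singular values decay slowly.
        for _ in range(n_iter):
            Q, _ = np.linalg.qr(A @ (A.T @ Q))

        # Exact SVD of the small projected matrix, then lift back to the full space.
        B = Q.T @ A                      # shape (k, n), much smaller than A
        U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank, :]

    # Example: compare against the leading singular values of the exact SVD.
    A = np.random.default_rng(1).standard_normal((2000, 300))
    U, s, Vt = randomized_svd(A, rank=20)
    print(s[:5], np.linalg.svd(A, compute_uv=False)[:5])
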
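For the unsloth entry, finetuning is driven through its FastLanguageModel wrapper plus LoRA adapters. The snippet below is a minimal sketch along those lines; the model name, sequence length, and LoRA hyperparameters are illustrative and should be checked against the repo's current README and notebooks.

    # pip install unsloth   (requires a CUDA-capable GPU and a recent PyTorch)
    from unsloth import FastLanguageModel

    # Load a pre-quantized 4-bit checkpoint; the model name and settings below
    # are illustrative, not an exact copy of the repo's examples.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/llama-3-8b-bnb-4bit",
        max_seq_length=2048,
        load_in_4bit=True,
    )

    # Attach LoRA adapters: only these small low-rank matrices are trained,
    # which is where most of the memory saving comes from.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )

    # From here, training typically proceeds with a standard Hugging Face
    # TRL SFTTrainer over the tokenized dataset.
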
NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

Related posts

  • 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning

    6 projects | news.ycombinator.com | 1 Dec 2023
  • How to Build a Logistic Regression Model: A Spam-filter Tutorial

    1 project | dev.to | 5 May 2024
  • Ask HN: How does deploying a fine-tuned model work

    4 projects | news.ycombinator.com | 23 Apr 2024
  • Frouros: An open-source Python library for drift detection in machine learning

    1 project | news.ycombinator.com | 6 Apr 2024
  • Ask HN: Most efficient way to fine-tune an LLM in 2024?

    6 projects | news.ycombinator.com | 4 Apr 2024