DBRX: A New Open LLM

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • dbrx

    Code examples and resources for DBRX, a large language model developed by Databricks
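
    For a rough sense of what running it looks like, below is a minimal load-and-generate sketch using Hugging Face transformers. The `databricks/dbrx-instruct` checkpoint name and the `trust_remote_code` requirement come from the Hugging Face release of the weights; treat the rest (dtype, device placement, prompt) as illustrative assumptions rather than code from this repo.

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # DBRX weights are gated on Hugging Face; you may need to pass token="hf_..."
    MODEL = "databricks/dbrx-instruct"

    tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL,
        device_map="auto",           # shard across whatever GPUs are available
        torch_dtype=torch.bfloat16,  # ~132B params, so ~264 GB of weights in bf16
        trust_remote_code=True,
    )

    messages = [{"role": "user", "content": "What does it take to build a great LLM?"}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=200)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```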

  • Looking at the license restrictions: https://github.com/databricks/dbrx/blob/main/LICENSE

    "If, on the DBRX version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Databricks, which we may grant to you in our sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Databricks otherwise expressly grants you such rights."

    I'm glad to see they aren't calling it open source, unlike some LLM projects. Looking at you, Llama 2.

  • llama

    Inference code for Llama models

  • Ironically, the LLaMA license text [1] that this is lifted verbatim from is itself copyrighted [2] and doesn't grant you permission to copy it or make changes like s/meta/dbrx/g lol.

    [1] https://github.com/meta-llama/llama/blob/main/LICENSE#L65

  • makeMoE

    From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :)

  • This repo I created, along with the linked blog, should help in understanding this: https://github.com/AviSoori1x/makeMoE (a minimal sketch of the core routing idea follows).
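
    Purely as an illustration of the technique (not code from makeMoE itself), here is a minimal sparse MoE layer in PyTorch: a linear router scores the experts, only the top-k experts run for each token, and their outputs are combined with softmax weights.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SparseMoE(nn.Module):
        """Sparse mixture of experts: route each token to its top-k experts."""
        def __init__(self, d_model: int, n_experts: int = 8, top_k: int = 2):
            super().__init__()
            self.top_k = top_k
            self.router = nn.Linear(d_model, n_experts)  # gating network
            self.experts = nn.ModuleList(
                nn.Sequential(
                    nn.Linear(d_model, 4 * d_model),
                    nn.GELU(),
                    nn.Linear(4 * d_model, d_model),
                )
                for _ in range(n_experts)
            )

        def forward(self, x):                        # x: (batch, seq, d_model)
            logits = self.router(x)                  # (batch, seq, n_experts)
            weights, idx = logits.topk(self.top_k, dim=-1)
            weights = F.softmax(weights, dim=-1)     # mix only the chosen experts
            out = torch.zeros_like(x)
            for i, expert in enumerate(self.experts):
                sel = (idx == i)                     # where expert i was picked
                if sel.any():
                    tok = sel.any(dim=-1)            # tokens routed to expert i
                    w = (weights * sel).sum(-1, keepdim=True)  # its gate weight
                    out[tok] += w[tok] * expert(x[tok])
            return out

    moe = SparseMoE(d_model=64)
    y = moe(torch.randn(2, 10, 64))  # -> (2, 10, 64); only 2 of 8 experts per token
    ```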

  • mixtral-offloading

    Run Mixtral-8x7B models in Colab or consumer desktops

  • Waiting for mixed quantization with HQQ and MoE offloading [1]. With that I was able to run Mixtral 8x7B on my 10 GB VRAM RTX 3080... This should work for DBRX too and shave off a ton of the VRAM requirement.

    1. https://github.com/dvmazur/mixtral-offloading?tab=readme-ov-...
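
    The repo combines quantized experts with an LRU cache that pages experts between CPU RAM and the GPU on demand. As a sketch of just the offloading idea (a hypothetical helper, not the repo's actual API):

    ```python
    import torch
    import torch.nn as nn
    from collections import OrderedDict

    class ExpertOffloader:
        """Keep at most `capacity` experts on the GPU; page the rest in from
        CPU RAM on demand, evicting the least recently used one."""
        def __init__(self, experts, capacity=2):
            self.cpu_experts = list(experts)   # full set lives in host RAM
            self.capacity = capacity
            self.gpu_cache = OrderedDict()     # expert index -> module on GPU

        def fetch(self, i):
            if i in self.gpu_cache:            # hit: mark as recently used
                self.gpu_cache.move_to_end(i)
                return self.gpu_cache[i]
            if len(self.gpu_cache) >= self.capacity:
                old, module = self.gpu_cache.popitem(last=False)  # evict LRU
                self.cpu_experts[old] = module.to("cpu")
            self.gpu_cache[i] = self.cpu_experts[i].to("cuda")
            return self.gpu_cache[i]

    # usage: run a token through whichever experts the router selected
    experts = [nn.Linear(64, 64) for _ in range(8)]
    offloader = ExpertOffloader(experts, capacity=2)
    x = torch.randn(1, 64, device="cuda")
    y = offloader.fetch(3)(x) + offloader.fetch(5)(x)
    ```

    In the actual repo the cached experts are also quantized (HQQ), which is what makes a 10 GB card feasible for an 8x7B model.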

Related posts

  • Hello OLMo: An Open LLM

    3 projects | news.ycombinator.com | 8 Apr 2024
  • Mixtral in Colab

    1 project | news.ycombinator.com | 7 Jan 2024
  • FLaNK AI - 01 April 2024

    31 projects | dev.to | 1 Apr 2024
  • FLaNK AI for 11 March 2024

    46 projects | dev.to | 11 Mar 2024
  • FLaNK 04 March 2024

    26 projects | dev.to | 4 Mar 2024