mistral-src VS micrograd

Compare mistral-src vs micrograd and see what their differences are.

mistral-src

Reference implementation of Mistral AI 7B v0.1 model. (by mistralai)

micrograd

A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API (by karpathy)
                 mistral-src           micrograd
Mentions         9                     22
Stars            8,732                 8,397
Stars growth     4.1%                  -
Activity         7.3                   0.0
Latest commit    about 2 months ago    2 days ago
Language         Jupyter Notebook      Jupyter Notebook
License          Apache License 2.0    MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

mistral-src

Posts with mentions or reviews of mistral-src. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-01.
  • Mistral 7B vs. Mixtral 8x7B
    1 project | dev.to | 26 Mar 2024
    Mistral AI, a French startup, has released two impressive large language models (LLMs): Mistral 7B and Mixtral 8x7B. These models push the boundaries of performance and introduce architectural innovations aimed at optimizing inference speed and computational efficiency.
  • How to have your own ChatGPT on your machine (and make it talk to itself)
    1 project | dev.to | 24 Jan 2024
    However, some models are publicly available. That's the case for Mistral, a fast and efficient French model that seems to outperform GPT-4 on some tasks. And it is under the Apache 2.0 license 😊.
  • How to Serve LLM Completions in Production
    1 project | dev.to | 18 Jan 2024
    I recommend starting with either Llama 2 or Mistral. You need to download the pretrained weights and convert them into GGUF format before they can be used with llama.cpp.
  • Stuff we figured out about AI in 2023
    5 projects | news.ycombinator.com | 1 Jan 2024
    > Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!

    Actually, it's not just a basic version. Llama 1/2's model.py is 500 lines: https://github.com/facebookresearch/llama/blob/main/llama/mo...

    Mistral (is rumored to have) forked llama and is 369 lines: https://github.com/mistralai/mistral-src/blob/main/mistral/m...

    And both of these are SOTA open-source models.
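
As a rough illustration of why these model files can stay in the hundreds of lines: a decoder-only LLM is essentially an embedding table, a stack of identical blocks, and an output head. The sketch below is illustrative only (not the mistral-src or llama code) and omits details such as RMSNorm, rotary embeddings, grouped-query/sliding-window attention, and the KV cache.

```python
# Illustrative sketch of a pre-norm decoder block; NOT the actual mistral-src code.
import torch.nn as nn
import torch.nn.functional as F

class Block(nn.Module):
    def __init__(self, dim: int, n_heads: int, hidden: int):
        super().__init__()
        self.n_heads = n_heads
        self.attn_norm = nn.LayerNorm(dim)             # real models use RMSNorm
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)
        self.ffn_norm = nn.LayerNorm(dim)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):                              # x: (batch, seq, dim)
        b, t, d = x.shape
        q, k, v = self.qkv(self.attn_norm(x)).chunk(3, dim=-1)
        # split into heads: (batch, heads, seq, head_dim)
        q, k, v = (z.view(b, t, self.n_heads, -1).transpose(1, 2) for z in (q, k, v))
        y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        x = x + self.proj(y.transpose(1, 2).reshape(b, t, d))  # residual 1: attention
        x = x + self.down(F.silu(self.up(self.ffn_norm(x))))   # residual 2: MLP (real models use SwiGLU)
        return x
```

The full model is little more than an embedding, N such blocks, a final norm, and a linear head over the vocabulary, which is why it fits in a few hundred lines.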

  • How Open is Generative AI? Part 2
    8 projects | dev.to | 19 Dec 2023
    MistralAI, a French startup, developed a 7.3-billion-parameter LLM named Mistral for various applications. While the company is committed to open-sourcing its technology under Apache 2.0, the training dataset details for Mistral remain undisclosed. The Mistral Instruct model was fine-tuned using publicly available instruction datasets from the Hugging Face repository, though specifics about the licenses and potential constraints are not detailed. Recently, MistralAI released Mixtral 8x7B, a model based on the sparse mixture of experts (SMoE) architecture, consisting of several specialized models (likely eight, as suggested by its name) activated as needed.
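
To make the sparse mixture-of-experts idea concrete, here is a deliberately naive sketch (illustrative only, not Mixtral's implementation): a small router scores each token, only the top-k experts run for that token, and their outputs are combined using the router weights.

```python
# Naive top-k mixture-of-experts routing; illustrative only, not Mixtral's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, dim: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        ])

    def forward(self, x):                              # x: (tokens, dim)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)           # normalize over the selected experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):                    # slow loops; real kernels batch tokens by expert
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out                                     # only top_k of n_experts ran per token
```

Because only the selected experts' parameters are used per token, an "8x7B" model can keep per-token compute close to that of a much smaller dense model.
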
  • Mistral website was just updated
    3 projects | /r/LocalLLaMA | 11 Dec 2023
  • Mistral AI – open-source models
    1 project | news.ycombinator.com | 8 Dec 2023
  • Mistral 8x7B 32k model [magnet]
    6 projects | news.ycombinator.com | 8 Dec 2023
  • Ask HN: Why the LLaMA code base is so short
    2 projects | news.ycombinator.com | 22 Nov 2023
    I was getting into LLMs and picked up some projects. I tried to dive into the code to see what the secret sauce is.

    But the code is so short that there is really nothing to read.

    https://github.com/facebookresearch/llama

    I then proceeded to check https://github.com/mistralai/mistral-src and, surprisingly, it's the same.

    What exactly are those codebases? It feels like you just download the models.

micrograd

Posts with mentions or reviews of micrograd. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-20.
  • Micrograd-CUDA: adapting Karpathy's tiny autodiff engine for GPU acceleration
    3 projects | news.ycombinator.com | 20 Mar 2024
    I recently decided to turbo-teach myself basic CUDA with a proper project. I really enjoyed Karpathy’s micrograd (https://github.com/karpathy/micrograd), so I extended it with CUDA kernels and 2D tensor logic. It’s a bit longer than the original project, but it’s still very readable for anyone wanting to quickly learn about GPU acceleration in practice.
  • Stuff we figured out about AI in 2023
    5 projects | news.ycombinator.com | 1 Jan 2024
    For inference, less than 1 KLOC of pure, dependency-free C is enough (if you include the tokenizer and command-line parsing) [1]. This was a non-obvious fact for me: in principle, you could have run a modern LLM 20 years ago with just 1,000 lines of code, assuming you're fine with things potentially taking days to run, of course.

    Training wouldn't be that much harder: Micrograd [2] is 200 LOC of pure Python, and 1,000 lines would probably be enough to train an (extremely slow) LLM. By "extremely slow", I mean that a training run that normally takes hours could probably take dozens of years, but the results would, in principle, be the same.

    If you were writing in C instead of Python and used something like llama.cpp's optimization tricks, you could probably get somewhat acceptable training performance in 2 or 3 KLOC. You'd still be off by one or two orders of magnitude compared to a GPU cluster, but a lot better than naive, loopy Python.

    [1] https://github.com/karpathy/llama2.c

    [2] https://github.com/karpathy/micrograd
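
To make the "200 LOC of pure Python" point concrete, here is a minimal sketch of the core idea behind a scalar autograd engine in the style of micrograd (simplified; the real library supports more operators and adds a small neural-net module on top): each operation records its inputs and a local gradient rule, and backward() walks the graph in reverse topological order applying the chain rule.

```python
# Minimal sketch of a micrograd-style scalar autograd engine (simplified, illustrative).
class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None      # leaf nodes have nothing to propagate

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad          # d(a+b)/da = 1
            other.grad += out.grad         # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        topo, seen = [], set()             # topological order of the graph
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0                    # d(output)/d(output)
        for v in reversed(topo):           # chain rule, from the output back to the leaves
            v._backward()

a, b = Value(2.0), Value(-3.0)
c = a * b + b * b                          # c = -6 + 9 = 3
c.backward()
print(c.data, a.grad, b.grad)              # 3.0, dc/da = b = -3.0, dc/db = a + 2b = -4.0
```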

  • Writing a C compiler in 500 lines of Python
    4 projects | news.ycombinator.com | 4 Sep 2023
    Perhaps they were thinking of https://github.com/karpathy/micrograd
  • Linear Algebra for Programmers
    4 projects | news.ycombinator.com | 1 Sep 2023
  • Understanding Automatic Differentiation in 30 lines of Python
    9 projects | news.ycombinator.com | 24 Aug 2023
  • Newbie question: Is there overloading of Haskell function signature?
    1 project | /r/haskell | 26 May 2023
    I was (for fun) trying to recreate micrograd in Haskell. The idea is simple:
  • [D] Backpropagation is not just the chain-rule, then what is it?
    2 projects | /r/MachineLearning | 18 May 2023
    Check out this repo I found a few years back when I was looking into understanding PyTorch better. It's basically a super tiny autodiff library that only works on scalars. The whole repo is under 200 lines of code, so you can pull it up in PyCharm or whatever and step through the code to see how it all comes together. Or... you know, just read it; it's not super complicated.
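
For reference, micrograd's public API is essentially just the Value class described above; a minimal usage sketch, assuming the package is installed (pip install micrograd):

```python
from micrograd.engine import Value   # assumes: pip install micrograd

a = Value(2.0)
b = Value(-3.0)
c = a * b + b**2        # forward pass builds a small expression graph: c = -6 + 9 = 3
c.backward()            # reverse-mode pass fills in .grad on every node
print(c.data)           # 3.0
print(a.grad, b.grad)   # dc/da = b = -3.0, dc/db = a + 2b = -4.0
```
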
  • Neural Networks: Zero to Hero
    5 projects | news.ycombinator.com | 5 Apr 2023
    I'm doing an ML apprenticeship [1] these weeks and Karpathy's videos are part of it. We've gone deep into them, and I found them excellent. All the concepts he illustrates are crystal clear in his mind (even though they are complicated concepts themselves), and that shows in his explanations.

    Also, the way he builds everything up is magnificent: starting from basic Python classes, to derivatives and gradient descent, to micrograd [2], and then from a bigram counting model [3] to makemore [4] and nanoGPT [5].

    [1]: https://www.foundersandcoders.com/ml

    [2]: https://github.com/karpathy/micrograd

    [3]: https://github.com/karpathy/randomfun/blob/master/lectures/m...

    [4]: https://github.com/karpathy/makemore

    [5]: https://github.com/karpathy/nanoGPT

  • Rustygrad - A tiny Autograd engine inspired by micrograd
    2 projects | /r/rust | 7 Mar 2023
    Just published my first crate, rustygrad, a Rust implementation of Andrej Karpathy's micrograd!
  • Hey Rustaceans! Got a question? Ask here (10/2023)!
    6 projects | /r/rust | 6 Mar 2023
    I've been trying to reimplement Karpathy's micrograd library in Rust as a fun side project.

What are some alternatives?

When comparing mistral-src and micrograd you can also consider the following projects:

ReAct - [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models

deepnet - Educational deep learning library in plain Numpy.

lida - Automatic Generation of Visualizations and Infographics using Large Language Models

tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]

ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines

deeplearning-notes - Notes for Deep Learning Specialization Courses led by Andrew Ng.

vllm - A high-throughput and memory-efficient inference and serving engine for LLMs

ML-From-Scratch - Machine Learning From Scratch. Bare bones NumPy implementations of machine learning models and algorithms with a focus on accessibility. Aims to cover everything from linear regression to deep learning.

llama - Inference code for Llama models

NNfSiX - Neural Networks from Scratch in various programming languages

text-generation-webui-colab - A colab gradio web UI for running Large Language Models

yolov7 - Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors