mistral-src VS sharegpt

Compare mistral-src vs sharegpt and see what their differences are.

mistral-src

Reference implementation of Mistral AI 7B v0.1 model. (by mistralai)

sharegpt

Easily share permanent links to ChatGPT conversations with your friends (by domeccleston)
                 mistral-src          sharegpt
Mentions         9                    37
Stars            8,732                1,683
Growth           4.1%                 -
Activity         7.3                  6.9
Latest commit    about 2 months ago   6 months ago
Language         Jupyter Notebook     TypeScript
License          Apache License 2.0   MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

mistral-src

Posts with mentions or reviews of mistral-src. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-26.
  • Mistral 7B vs. Mixtral 8x7B
    1 project | dev.to | 26 Mar 2024
    Mistral AI, a French startup, has released two impressive large language models (LLMs): Mistral 7B and Mixtral 8x7B. These models push the boundaries of performance and introduce architectural innovations aimed at optimizing inference speed and computational efficiency.
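    The best-known of those innovations in Mistral 7B is sliding-window attention, which bounds how far back each token can attend. A minimal sketch of the idea (a toy illustration, not Mistral's actual code):

```python
# Toy sketch of a sliding-window attention mask: each token may attend
# only to itself and the previous `window - 1` tokens, capping the
# per-token attention cost regardless of sequence length.
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    i = torch.arange(seq_len).unsqueeze(1)  # query positions
    j = torch.arange(seq_len).unsqueeze(0)  # key positions
    return (j <= i) & (j > i - window)      # True = attention allowed

print(sliding_window_mask(6, 3).int())
```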
  • How to have your own ChatGPT on your machine (and make it discuss with itself)
    1 project | dev.to | 24 Jan 2024
    However, some models are publicly available. It's the case for Mistral, a fast and efficient French model that seems to outperform GPT-4 on some tasks. And it is under the Apache 2.0 license 😊.
  • How to Serve LLM Completions in Production
    1 project | dev.to | 18 Jan 2024
    I recommend starting either with llama2 or Mistral. You need to download the pretrained weights and convert them into GGUF format before they can be used with llama.cpp.
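    For a concrete sense of the last step, here is a minimal sketch of serving a completion from a converted GGUF file with llama-cpp-python (the Python bindings for llama.cpp); the model filename is a hypothetical placeholder:

```python
# Minimal completion-serving sketch with llama-cpp-python.
# Assumes the weights were already converted to GGUF with llama.cpp's
# conversion script, as the post describes; the path is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-v0.1.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is the GGUF format? A:", max_tokens=64, stop=["\n"])
print(out["choices"][0]["text"])
```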
  • Stuff we figured out about AI in 2023
    5 projects | news.ycombinator.com | 1 Jan 2024
    > Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!

    Actually, it's not just a basic version. Llama 1/2's model.py is 500 lines: https://github.com/facebookresearch/llama/blob/main/llama/mo...

    Mistral (is rumored to have) forked Llama and is 369 lines: https://github.com/mistralai/mistral-src/blob/main/mistral/m...

    and both of these are SOTA open-source models.
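    To make the "a few hundred lines is enough" point concrete, a toy decoder block in plain PyTorch fits in roughly fifteen lines (a sketch, not the Llama or Mistral code):

```python
# Toy decoder block: pre-norm causal self-attention plus an MLP.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim: int, heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(),
                                 nn.Linear(4 * dim, dim))
        self.n1, self.n2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):
        h = self.n1(x)
        # Boolean mask: True above the diagonal = future positions blocked.
        causal = torch.triu(torch.ones(x.size(1), x.size(1), dtype=torch.bool), 1)
        x = x + self.attn(h, h, h, attn_mask=causal, need_weights=False)[0]
        return x + self.mlp(self.n2(x))

print(Block(64, 4)(torch.randn(1, 8, 64)).shape)  # torch.Size([1, 8, 64])
```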

  • How Open is Generative AI? Part 2
    8 projects | dev.to | 19 Dec 2023
    MistralAI, a French startup, developed a 7.3-billion-parameter LLM named Mistral for various applications. Although MistralAI is committed to open-sourcing its technology under Apache 2.0, the training dataset details for Mistral remain undisclosed. The Mistral Instruct model was fine-tuned using publicly available instruction datasets from the Hugging Face repository, though specifics about the licenses and potential constraints are not detailed. Recently, MistralAI released Mixtral 8x7B, a model based on the sparse mixture of experts (SMoE) architecture, consisting of several specialized models (likely eight, as suggested by its name) activated as needed.
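    The routing idea behind a sparse mixture of experts is simple enough to sketch in a few lines (a toy illustration, not Mixtral's implementation): a small router scores the experts for each token, and only the top-2 are actually run.

```python
# Toy top-2 mixture-of-experts routing: most expert parameters stay
# idle for any given token, which is what makes the model "sparse".
import torch
import torch.nn as nn

dim, n_experts = 16, 8
router = nn.Linear(dim, n_experts)
experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))

def moe(x: torch.Tensor) -> torch.Tensor:              # x: (tokens, dim)
    weights, idx = router(x).softmax(-1).topk(2, dim=-1)
    weights = weights / weights.sum(-1, keepdim=True)  # renormalize top-2
    out = torch.zeros_like(x)
    for t in range(x.size(0)):                         # naive per-token loop
        for w, e in zip(weights[t], idx[t]):
            out[t] += w * experts[e](x[t])
    return out

print(moe(torch.randn(4, dim)).shape)  # torch.Size([4, 16])
```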
  • Mistral website was just updated
    3 projects | /r/LocalLLaMA | 11 Dec 2023
  • Mistral AI – open-source models
    1 project | news.ycombinator.com | 8 Dec 2023
  • Mistral 8x7B 32k model [magnet]
    6 projects | news.ycombinator.com | 8 Dec 2023
  • Ask HN: Why the LLaMA code base is so short
    2 projects | news.ycombinator.com | 22 Nov 2023
    I was getting into LLMs and picked up some projects. I tried to dive into the code to see what the secret sauce is.

    But the code is so short that there is almost nothing to read.

    https://github.com/facebookresearch/llama

    I then proceeded to check https://github.com/mistralai/mistral-src and surprisingly it's the same.

    What exactly are those codebases? It feels like you just download the models.

sharegpt

Posts with mentions or reviews of sharegpt. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-19.
  • How Open is Generative AI? Part 2
    8 projects | dev.to | 19 Dec 2023
    Vicuna is another instruction-focused LLM rooted in LLaMA, developed by researchers from UC Berkeley, Carnegie Mellon University, Stanford, and UC San Diego. They adapted Alpaca’s training code and incorporated 70,000 examples from ShareGPT, a platform for sharing ChatGPT interactions.
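    ShareGPT data is commonly circulated as JSON records with a "conversations" list of {"from", "value"} turns. A hedged sketch of flattening such a record into (prompt, response) pairs for fine-tuning; the field names follow the community convention and are an assumption here:

```python
# Flatten a ShareGPT-style record into (human, gpt) training pairs.
# The "conversations"/"from"/"value" field names follow the commonly
# circulated ShareGPT export format and are assumed, not guaranteed.
record = {
    "conversations": [
        {"from": "human", "value": "What is Vicuna?"},
        {"from": "gpt", "value": "An instruction-tuned LLaMA variant."},
    ]
}

turns = record["conversations"]
pairs = [(a["value"], b["value"])
         for a, b in zip(turns, turns[1:])
         if a["from"] == "human" and b["from"] == "gpt"]
print(pairs)
```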
  • Create the best open-source coder in the world?
    2 projects | /r/LocalLLaMA | 21 Jun 2023
    We can say that a 13B model per language is reasonable. That means we need to create a democratic way of teaching coding by examples, solutions, and algorithms that we create, curate, and use open-source. Much like sharegpt.com, but for coding tasks, solutions, and ways of thinking. We should be wary of 'enforcing' principles rather than showing different approaches, as every approach can have advantages and disadvantages.
  • Thank you ChatGPT
    1 project | /r/ChatGPT | 26 May 2023
    You can see the URL in the comment, https://sharegpt.com, and if you go there it gives you the option of installing the Chrome extension; after that it shouldn't be hard to use.
  • The conversation started as what an AI would do if it became self-aware and humans tried to shut it down. Then we got into interdimensional beings. The most profound GPT conversation I have had.
    1 project | /r/ChatGPT | 14 May 2023
  • Overview of all useful links for ChatGPT prompt engineering
    20 projects | /r/ChatGPTPro_DE | 8 May 2023
    ShareGPT - Share your prompts and your entire conversations
  • (Reverse psychology FTW) Congratulations, you've played yourself.
    1 project | /r/ChatGPT | 29 Apr 2023
    Or used https://sharegpt.com
  • "Prompt engineering" is easy as shit and anybody who tells you otherwise is a fucking clown.
    6 projects | /r/ChatGPT | 23 Apr 2023
    you can get lots of ideas here > https://sharegpt.com/ (180,000+ prompts)
  • I built a ChatGPT Mac app in just 20 minutes with no coding experience - thanks ChatGPT!
    1 project | /r/OpenAI | 21 Apr 2023
    I would love to read the whole conversation: Check out this cool little GPT sharing extension: https://sharegpt.com - that way the code snippets can be copied easily
  • Teaching ChatGPT to Speak My Son’s Invented Language
    3 projects | news.ycombinator.com | 10 Apr 2023
    > Cool, that’s really the only point I’m making.

    To be clear, I'm saying that I don't know if they are, not that we know that it's not the same.

    It's not at all clear that humans do much more than "that basic token sequence prediction" for our reasoning itself. There are glaringly obvious auxiliary differences, such as memory, but we just don't know how human reasoning works, so writing off a predictive mechanism like this is just as unjustified as assuming it's the same. It's highly likely there are differences, but whether they are significant remains to be seen.

    > Not necessarily scaling limitations fundamental to the architecture as such, but limitations in our ability to produce sufficiently well-developed training texts and strategies across so many problem domains.

    I think there are several big issues with that thinking. One is that this constraint is an issue now in large part because GPT doesn't have "memory" or an ability to continue learning. Those two need to be overcome to let it truly scale, but once they are, the game fundamentally changes.

    The second is that we're already at a stage where using LLMs to generate and validate training data works well for a whole lot of domains, and that will accelerate, especially when coupled with "plugins" and the ability to capture interactions with real-life users [1]

    E.g. a large part of the human ability to do maths with any kind of efficiency comes down to rote repetition, and generating large sets of simple quizzes for such areas is near trivial if you combine an LLM with tools that let it validate its answers [a toy sketch follows this excerpt]. And unlike with humans, where we have to repeat this effort for billions of individuals, once you have the ability to let these models continue learning you make this training investment once (or once per major LLM effort).

    A third is that GPT hasn't even scratched the surface of what is available in digital collections alone. E.g. GPT3 was trained on "only" about 200 million Norwegian words (I don't have data for GPT4). Norwegian is a tiny language - this was 0.1% of GPT3's total corpus. But the Norwegian National Library has 8.5m items, which includes something like 10-20 billion words in books alone, and many tens of billions more in newspapers, magazines and other data. That's one tiny language. We're many generations of LLMs away from even approaching exhausting the already available digital collections alone, and that's before we look at having the models trained on that data generate and judge training data.

    [1] https://sharegpt.com/
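    The quiz idea from this comment is easy to make concrete: generate problems programmatically, so the ground truth is known, and grade the model's answer against it. A toy sketch, with a hypothetical `ask_model` standing in for a real LLM call:

```python
# Toy "generate and validate" loop for arithmetic quizzes.
import random

def make_quiz() -> tuple[str, int]:
    a, b = random.randint(2, 99), random.randint(2, 99)
    return f"What is {a} * {b}?", a * b

def ask_model(question: str) -> str:
    return "4200"  # hypothetical placeholder for a real completion call

question, truth = make_quiz()
answer = ask_model(question).strip()
print("correct" if answer == str(truth) else f"wrong (expected {truth})")
```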

  • Humans in Humans Out: GPT Converging Toward Common Sense in Both Success/Failure
    3 projects | news.ycombinator.com | 8 Apr 2023
    of that conversation. Perhaps something like shareGPT[1] can help?

    [1] https://sharegpt.com

What are some alternatives?

When comparing mistral-src and sharegpt you can also consider the following projects:

ReAct - [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models

ChatGPT - Lightweight package for interacting with ChatGPT's API by OpenAI. Uses reverse engineered official API.

lida - Automatic Generation of Visualizations and Infographics using Large Language Models

llm-workflow-engine - Power CLI and Workflow manager for LLMs (core package)

ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines

unofficial-chatgpt-api - This repo is unofficial ChatGPT api. It is based on Daniel Gross's WhatsApp GPT

vllm - A high-throughput and memory-efficient inference and serving engine for LLMs

openai-python - The official Python library for the OpenAI API

llama - Inference code for Llama models

chatgpt-conversation - Have a conversation with ChatGPT using your voice, and have it talk back.

text-generation-webui-colab - A colab gradio web UI for running Large Language Models

langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]