| | text-generation-inference | Mage |
|---|---|---|
| Mentions | 29 | 77 |
| Stars | 8,137 | 7,202 |
| Growth | 3.1% | 2.1% |
| Activity | 9.6 | 9.9 |
| Latest commit | 2 days ago | 6 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
text-generation-inference
- FLaNK AI - April 22, 2024
- Zephyr 141B, a Mixtral 8x22B fine-tune, is now available in Hugging Chat
I wanted to write that the TGI inference engine is no longer open source, but they have reverted the license back to Apache 2.0 for the new version, TGI v2.0: https://github.com/huggingface/text-generation-inference/rel...
Good news!
- Hugging Face reverts the license back to Apache 2.0
- HuggingFace text-generation-inference is reverting to Apache 2.0 License
- FLaNK Stack 05 Feb 2024
- Is there any open-source app to load a model and expose an API like OpenAI's?
- AI Code assistant for about 50-70 users
Setting up a server for multiple users is very different from setting up an LLM for yourself. A safe bet would be to just use TGI, which supports continuous batching and is very easy to run via Docker on your server. https://github.com/huggingface/text-generation-inference
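For anyone curious what clients see once TGI is running: the server exposes a plain HTTP /generate endpoint, and continuous batching happens server-side, so individual users just send ordinary requests. A minimal sketch, assuming a typical Docker port mapping of 8080 (the host, port, and prompt here are made up):

```python
import requests

# TGI's /generate endpoint; host/port are an assumption based on a
# common `docker run -p 8080:80 ...` mapping.
TGI_URL = "http://localhost:8080/generate"

resp = requests.post(
    TGI_URL,
    json={
        "inputs": "def fibonacci(n):",
        "parameters": {"max_new_tokens": 64, "temperature": 0.2},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```

Because batching is handled by the server, fifty users hitting this endpoint concurrently need no client-side coordination.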
- LocalPilot: Open-source GitHub Copilot on your MacBook
Okay, I actually got local Copilot set up. You will need these 4 things.
1) CodeLlama 13B or another FIM model: https://huggingface.co/codellama/CodeLlama-13b-hf. You want "Fill in the Middle" models because you're looking at context on both sides of your cursor (see the FIM prompt sketch after this list).
2) HuggingFace llm-ls: https://github.com/huggingface/llm-ls. A large language model Language Server (is this making sense yet?).
3) The HuggingFace inference framework: https://github.com/huggingface/text-generation-inference. At least when I tested, you couldn't use something like llama.cpp or exllama with llm-ls, so you need to break out the heavy-duty badboy HuggingFace inference server. Just config and run. Then config and run llm-ls.
4) Okay, I mean you need an editor. I just tried nvim, and this was a few weeks ago, so there may be better support now. My experience was that it was full, honest-to-god Copilot. The CodeLlama models are known to be quite good for their size. The FIM part is great. Boilerplate works so much easier with the surrounding context. I'd like to see more models released that can work this way.
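To make the FIM point concrete, here is roughly the prompt layout CodeLlama's infilling mode uses, with <PRE>/<SUF>/<MID> sentinel tokens wrapping the text on either side of the cursor; llm-ls assembles this from your editor buffer, so you never write it by hand. The code content below is a made-up illustration:

```python
# Code before the cursor (prefix) and after it (suffix); the model
# generates the missing middle and stops at an <EOT> token.
prefix = "def mean(xs):\n    "
suffix = "\n    return total / len(xs)"

# Sentinel-token layout from Meta's published CodeLlama infilling format.
fim_prompt = f"<PRE> {prefix} <SUF>{suffix} <MID>"
print(fim_prompt)
```

This is also why a plain left-to-right model can't do the same job: it has no way to condition on the code after your cursor.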
- Mistral 7B Paper on ArXiv
A simple microservice would be https://github.com/huggingface/text-generation-inference.
Works flawlessly in Docker on my Windows machine, which is extremely shocking.
- Best way to serve Llama V2 (llama.cpp vs. Triton vs. HF text-generation-inference)
I am wondering what is the best / most cost-efficient way to serve Llama V2:
- llama.cpp (is it production-ready or just for playing around?)
- Triton Inference Server?
- HF text-generation-inference?
Mage
- FLaNK AI - April 22, 2024
- A mage on the Hero’s Journey: a fantasy epic on how a startup rose from the ashes
In the coming years, Mage will create a cooperative experience so that developers can build data pipelines with their team and level up together. After that journey, Mage will go on an epic quest to create the 1st open world community experience in the data universe.
- Data sources episode 2: AWS S3 to Postgres Data Sync using Singer
Link to original blog: https://www.mage.ai/blog/data-sources-ep-2-aws-s3-to-postgres-data-sync-using-singer
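For background, the Singer spec that sync is built on is deliberately simple: a tap (here, reading from S3) prints newline-delimited JSON SCHEMA, RECORD, and STATE messages to stdout, and a target (here, Postgres) loads whatever arrives on stdin. A minimal sketch of the message stream, with a made-up stream name and fields:

```python
import json
import sys

# What a tiny Singer tap emits: the schema first, then records, then
# state (so an interrupted sync can resume). Stream/fields are illustrative.
messages = [
    {
        "type": "SCHEMA",
        "stream": "s3_files",
        "schema": {"properties": {"id": {"type": "integer"}, "name": {"type": "string"}}},
        "key_properties": ["id"],
    },
    {"type": "RECORD", "stream": "s3_files", "record": {"id": 1, "name": "a.csv"}},
    {"type": "STATE", "value": {"s3_files": {"last_id": 1}}},
]
for msg in messages:
    sys.stdout.write(json.dumps(msg) + "\n")
```

Piping that output into any Singer target (e.g. a Postgres target) is the entire integration contract.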
- What are some open-source ML pipeline managers that are easy to use?
I would recommend the following:
- https://www.mage.ai/
- https://dagster.io/
- https://www.prefect.io/
- https://metaflow.org/
- https://zenml.io/home
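To give a feel for the first recommendation: a Mage pipeline is a graph of small, individually runnable Python files ("blocks"). A sketch of a transformer block in the shape Mage scaffolds (the dropna logic is just an illustration):

```python
import pandas as pd

# Mage injects the decorator when running inside a pipeline; this
# guard is the pattern Mage's scaffolding generates.
if 'transformer' not in globals():
    from mage_ai.data_preparation.decorators import transformer


@transformer
def transform(df: pd.DataFrame, *args, **kwargs) -> pd.DataFrame:
    # Receives the upstream block's output; the return value is
    # routed to the next block in the pipeline graph.
    return df.dropna()
```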
- Mage Battlegrounds: Craft insights from real-time customer behavior analysis
You're invited to participate in the very first Mage Battlegrounds: Craft insights from real-time customer behavior analysis, a 24-hour virtual hackathon hosted by Shashank Mishra! This data engineering competition will take place on Saturday, April 15, 2023, beginning at 11am (PST). This will be a global event open to all participants who register.
- Looking for an open-source project
Try this feature: https://github.com/mage-ai/mage-ai/issues/1166
- Daskqueue: Dask-based distributed task queue
Seeing if we can use it in https://github.com/mage-ai/mage-ai
- Data Pipeline on a Shoestring
That being said, there's a solid family of services just breaking ground that make local pipeline deployment easier. Check out https://www.mage.ai, which does have a clear path to cloud deployment of locally developed pipes; it just isn't well documented yet. Also https://www.neuronsphere.io, which doesn't have a public solution YET (they're internally testing an alpha), but they built a cloud-deployable solution for their paying customers and are working to release one for freemium use.
- Trending ML repos of the week 📈
7️⃣ mage-ai/mage-ai
- Delta without using Spark
Yes, check out how Mage does it: https://github.com/mage-ai/mage-ai/tree/master/mage_integrations/mage_integrations/destinations/delta_lake_s3
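For anyone wondering how that works without Spark: the destination linked above is built on the Rust-based delta-rs engine, whose Python package (deltalake) reads and writes Delta tables directly against object storage. A minimal sketch of the general idea, with a made-up bucket and placeholder credentials:

```python
import pandas as pd
from deltalake import write_deltalake  # delta-rs bindings; no Spark runtime

df = pd.DataFrame({"id": [1, 2], "event": ["signup", "purchase"]})

# Appends a commit to the Delta transaction log in S3 directly.
write_deltalake(
    "s3://my-bucket/events_delta",  # hypothetical table location
    df,
    mode="append",
    storage_options={
        "AWS_ACCESS_KEY_ID": "...",
        "AWS_SECRET_ACCESS_KEY": "...",
        "AWS_REGION": "us-east-1",
    },
)
```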
What are some alternatives?
llama-cpp-python - Python bindings for llama.cpp
dagster - An orchestration platform for the development, production, and observation of data assets.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
vscode-dvc - Machine learning experiment tracking and data versioning with DVC extension for VS Code
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
sqlmesh - Efficient data transformation and modeling framework that is backwards compatible with dbt.
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
mito - The mitosheet package, trymito.io, and other public Mito code.
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
Data-Science-Roadmap - Data Science Roadmap from A to Z