| | txtinstruct | safetensors |
|---|---|---|
| Mentions | 13 | 31 |
| Stars | 221 | 2,589 |
| Growth | 2.7% | 3.5% |
| Activity | 5.0 | 7.9 |
| Latest Commit | 10 months ago | 19 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
txtinstruct
-
Questions about memory, tree-of-thought, planning
I tried chromadb but had terrible performance and could not pin down the cause (likely a problem on my end). Weaviate was easy to set up and had excellent performance; it is probably what I will use in the future. Next on my list is txtinstruct: fine-tuning a model on data that does not change, and using a vector DB for everything else, seems promising.
-
[R] Let Language Models be Language Models
The closest thing I've seen to this is txtinstruct
-
Create a ChatGPT-like program using an open source model and custom data.
txtinstruct is a framework for training instruction-tuned models
-
Stability AI Launches the First of Its StableLM Suite of Language Models
Great to see the continued release of open models. The only disappointing thing is that models keep building on CC-BY-NC licensed datasets, which severely limits their use.
Hopefully, people consider txtinstruct (https://github.com/neuml/txtinstruct) and other approaches to generate instruction-tuning datasets without the baggage.
- Build open instruction-tuned datasets and models (r/MachineLearning)
- Build open instruction-tuned datasets and models
- [P] Build open instruction-tuned datasets and models
- Create open instruction-tuned datasets and LLM models
- Show HN: Build open instruction-tuned datasets and models
safetensors
-
Llamafile lets you distribute and run LLMs with a single file
The ML field is doing work in that area: https://github.com/huggingface/safetensors
-
Hugging Face raises $235M from investors including Salesforce and Nvidia
FYI the file format, safetensors, was proposed, developed and maintained by HF, and involved people from groups such as Eleuther and Stability for external security audits.
https://github.com/huggingface/safetensors https://huggingface.co/blog/safetensors-security-audit
-
I Made Stable Diffusion XL Smarter by Finetuning It on Bad AI-Generated Images
Thank you for the note on this. I had not heard that trojan-horse malware was already being slipped into tensor files as embedded Python code. Apparently torch's pickle-based format can execute arbitrary code from the file on load, with no filtering.
I've heard surprisingly little commentary on this topic. The developer's full explanation of how safetensors files are "safe" can be found at: https://github.com/huggingface/safetensors/discussions/111
- Pickle safety in Python
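The danger described above comes from pickle's object-reconstruction hook rather than a literal `eval` call: any object's `__reduce__` method can name an arbitrary callable that pickle invokes during loading. A minimal stdlib-only sketch (the `Payload` class name is illustrative):

```python
import pickle

class Payload:
    # pickle reconstructs objects via __reduce__: a (callable, args)
    # pair that is invoked during loading, so an attacker-controlled
    # file can run any callable with any arguments it chooses.
    def __reduce__(self):
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())
# Deserializing runs eval("6 * 7") with no opt-in from the caller.
result = pickle.loads(blob)
```

Replace `eval` with `os.system` and the "tensor file" becomes a remote-code-execution payload, which is exactly the risk the safetensors format was designed to remove.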
-
What makes .safetensors files safe?
Here the developer goes into some detail about what kinds of protections .safetensors files have: https://github.com/huggingface/safetensors/discussions/111
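The safety follows from the format itself: a safetensors file is just an 8-byte little-endian header length, a JSON header describing each tensor's dtype, shape, and byte offsets, and a raw data buffer, so loading is pure parsing and never executes code from the file. A simplified stdlib-only sketch for 1-D float32 tensors (the function names are illustrative; the real library handles many dtypes, arbitrary shapes, and header alignment):

```python
import json
import struct

def save_safetensors(path, tensors):
    """Write {name: list_of_floats} in the safetensors layout:
    8-byte little-endian header length, JSON header, raw byte buffer."""
    header, buffer, offset = {}, b"", 0
    for name, values in tensors.items():
        data = struct.pack(f"<{len(values)}f", *values)
        header[name] = {
            "dtype": "F32",
            "shape": [len(values)],
            "data_offsets": [offset, offset + len(data)],
        }
        buffer += data
        offset += len(data)
    header_bytes = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(header_bytes)))
        f.write(header_bytes)
        f.write(buffer)

def load_safetensors(path):
    """Read the file back: loading is json.loads plus byte slicing,
    so nothing in the file is ever interpreted as code."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(n))
        buffer = f.read()
    out = {}
    for name, meta in header.items():
        start, end = meta["data_offsets"]
        count = meta["shape"][0]
        out[name] = list(struct.unpack(f"<{count}f", buffer[start:end]))
    return out
```

Because the header declares every byte range up front, a loader can also validate sizes and memory-map just the tensors it needs, which is where the format's speed advantage comes from.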
-
Security PSA: huggingface models are code. not just data.
Use the safetensors format, which allows safe persistence and loading of models for common libraries - TensorFlow, PyTorch, JAX, etc. We went through external audits in the last few months (blog post). The current direction will be to have this as the default format.
- What's your favorite model. Right now I'm really enjoying dreamshaper.
- Lora, ggml, safetensors, hf, etc. Is there a glossary and guide on which model to choose?
-
Stability AI Launches the First of Its StableLM Suite of Language Models
I've been diving in lately, and while it's not efficient, the only way to manage all the conflicting packages is to create a new conda/mamba environment, or a custom Docker image.
For safety and speed, you should prefer the safetensors format: https://huggingface.co/docs/safetensors/speed
If you know what you are doing you can do your own conversions: https://github.com/huggingface/safetensors or for safety, https://huggingface.co/spaces/diffusers/convert
-
CKPT to Safetensors
GitHub - huggingface/safetensors: Simple, safe way to store and distribute tensors
What are some alternatives?
StableLM - StableLM: Stability AI Language Models
stable-diffusion-webui - Stable Diffusion web UI
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
llama.cpp - LLM inference in C/C++
geov - The GeoV model is a large language model designed by Georges Harik that uses Rotary Positional Embeddings with Relative distances (RoPER). We have shared a pre-trained 9B parameter model.
Safe-and-Stable-Ckpt2Safetensors-Conversion-Tool-GUI - Convert your Stable Diffusion checkpoints quickly and easily.
cataclysm - Cataclysm - Code generation library for the end game
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
instruct-eval - This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
Stable-Diffusion-Pickle-Scanner-GUI - Pickle Scanner GUI
lm-evaluation-harness - A framework for few-shot evaluation of autoregressive language models.
stable-diffusion-webui-model-toolkit - A Multipurpose toolkit for managing, editing and creating models.