local_llama Alternatives
Similar projects and alternatives to local_llama
- LocalAI
  The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first, it is a drop-in replacement for OpenAI that runs on consumer-grade hardware, with no GPU required. It runs gguf, transformers, diffusers, and many other model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities (a usage sketch follows this list).
- h2ogpt
  Private chat with a local GPT over documents, images, video, and more. 100% private, Apache 2.0. Supports Ollama, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
- localGPT
  Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
- EmbedAI
  An app for interacting privately with your documents using the power of GPT: 100% private, with no data leaks.
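LocalAI's headline feature is its OpenAI-compatible API, so existing OpenAI client code can simply point at a local server instead. A minimal sketch, assuming LocalAI is already running on localhost:8080 with a model installed (the model name below is a placeholder, not something the page specifies):

```python
# Minimal sketch: talking to a local LocalAI server through the official
# OpenAI Python client. Assumes LocalAI is serving on localhost:8080;
# the model name is a placeholder for whatever model you have installed.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # LocalAI's OpenAI-compatible endpoint
    api_key="not-needed",                 # no real key required for a local server
)

response = client.chat.completions.create(
    model="llama-3.2-1b-instruct",  # placeholder; check your server for installed models
    messages=[{"role": "user", "content": "Summarize this PDF chunk in one sentence."}],
)
print(response.choices[0].message.content)
```

The same pattern applies to any of the self-hosted tools above that expose an OpenAI-compatible endpoint; only the base_url and model name change.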
local_llama reviews and mentions
- Discussion: Biggest Roadblocks to Deploy LLMs to Production
  I work with AWS daily, using Terraform, Python, and Java to create and maintain enterprise solutions. I have played with SageMaker, but it is so expensive that I hate to leave it up for longer than a day. I downloaded this and created a chat-with-your-docs setup (entirely in airplane mode) here. Point being, I've hosted models both locally and in the cloud, but ended up just sticking with API calls since they're so cheap.
- You can now chat with your documents privately!
  I posted the speed of mine in the README: https://github.com/jlonge4/local_llama
- Textgen webui for gpt_chatwithPDF
  I would like to use this tool https://github.com/jlonge4/gpt_chatwithPDF/blob/main/gpt_chat_api.py, but unfortunately the local version (https://github.com/jlonge4/local_llama) is bound to the CPU and thus quite slow. Is there any way I could get text-generation-webui working with the tool above? (A GPU-offload sketch follows these mentions.)
- Is there a way to ask questions about multiple PDF files?
  This is what you want: https://github.com/jlonge4/local_llama. It's fully offline with no third parties, but the setup is a bit involved.
- Newbie here. Need help choosing an LLM model for PDF ingestion and summarization locally
  Or try this: https://github.com/jlonge4/local_llama
- Local GPT (completely offline and no OpenAI!)
- Local GPT (completely offline and no OpenAI!) [P]
- Offline llama
  Code here if interested.
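One mention above notes that the local version is CPU-bound and therefore slow. Not from that thread, but as a general illustration: llama-cpp-python, which many of these local tools build on, can offload transformer layers to a GPU. A hedged sketch, assuming a GPU-enabled build of the package and a local GGUF model file (the path below is a placeholder):

```python
# Illustrative sketch of GPU offload with llama-cpp-python, the usual fix
# for CPU-bound llama.cpp inference. Assumes the package was built with
# GPU support (e.g. CUDA or Metal); the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # placeholder GGUF file
    n_gpu_layers=-1,  # offload all layers to the GPU; use a smaller number if VRAM is limited
    n_ctx=2048,       # context window size
)

output = llm("Q: What is retrieval-augmented generation? A:", max_tokens=128)
print(output["choices"][0]["text"])
```

text-generation-webui exposes the same llama.cpp loader settings, so the layer-offload idea carries over there as well.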
Stats
jlonge4/local_llama is an open-source project licensed under the Apache License 2.0, an OSI-approved license. Its primary programming language is Python.