https://github.com/TabbyML/tabby can run a self-hosted AI coding assistant. I tried it a while ago and it worked with Nvim pretty easily. There is a VS Code extension too. The extension just sort of "reads" along with you and provides suggestions from time to time. Whenever a suggestion is good, you can press a key ( by default) to accept it. It's basically autocomplete on steroids.
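A minimal sketch of getting the Tabby server running, based on the Docker invocation in its README; the model name and flags here are illustrative, so check the repo for the current options:

```shell
# Hedged sketch: pull and run the Tabby server container.
# The model and device flags are examples, not recommendations.
docker run -it --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby serve --model StarCoder-1B --device cuda
```

Once the server is up, the Nvim plugin or VS Code extension is pointed at it (localhost:8080 by default) and completions start appearing in the editor.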
If you can compile stuff, then looking at llama.cpp (what Ollama uses under the hood) is also interesting: https://github.com/ggerganov/llama.cpp
the server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF model on Hugging Face to use with it.
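Once the llama.cpp example server is running locally, it exposes a JSON `/completion` endpoint. A minimal sketch of talking to it from Python, assuming the default port of 8080 (this just builds and prints the request; sending it requires a running server):

```python
import json
import urllib.request

def build_completion_request(prompt: str, n_predict: int = 128,
                             base_url: str = "http://localhost:8080"):
    """Build (but do not send) a request for a local llama.cpp server.

    The /completion endpoint accepts a JSON body with a "prompt" and an
    optional "n_predict" cap on the number of generated tokens.
    """
    payload = {"prompt": prompt, "n_predict": n_predict}
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/completion",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# Example: ask the model to continue a function definition.
req = build_completion_request("def fibonacci(n):", n_predict=64)
print(req.full_url)  # http://localhost:8080/completion
# To actually send it: urllib.request.urlopen(req).read()
```

Any GGUF model downloaded from Hugging Face can be served this way (`llama-server -m model.gguf`), and the same request shape works regardless of which model is loaded.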