Top 23 Go LLM Projects
-
LLocalSearch
LLocalSearch is a completely locally running search aggregator using LLM Agents. The user can ask a question and the system will use a chain of LLMs to find the answer. The user can see the progress of the agents and the final answer. No OpenAI or Google API keys are needed.
-
flyte
Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
-
casibase
⚡️Open-source AI LangChain-like RAG (Retrieval-Augmented Generation) knowledge database with web UI and Enterprise SSO⚡️, supports OpenAI, Azure, LLaMA, Google Gemini, HuggingFace, Claude, Grok, etc., chat bot demo: https://demo.casibase.com, admin UI demo: https://demo-admin.casibase.com
-
BricksLLM
🔒 Enterprise-grade API gateway that helps you monitor and impose cost or rate limits per API key. Get fine-grained access control and monitoring per user, application, or environment. Supports OpenAI, Azure OpenAI, Anthropic, vLLM, and open-source LLMs.
-
aqueduct
Aqueduct is no longer maintained. It allows you to run LLM and ML workloads on any cloud infrastructure. (by RunLLM)
-
hof
Framework that joins data models, schemas, code generation, and a task engine. Language and technology agnostic.
-
agency
🕵️♂️ Library designed for developers eager to explore the potential of Large Language Models (LLMs) and other generative AI through a clean, effective, and Go-idiomatic approach. (by neurocult)
-
evalgpt
EvalGPT is a code interpreter framework that uses large language models to automate code writing and execution, delivering precise results for user-defined tasks. (by index-labs)
-
helix
Multi-node production AI stack. Run the best of open source AI easily on your own servers. Create your own AI by fine-tuning open source models. (by helixml)
-
llama-nuts-and-bolts
A holistic way of understanding how LLaMA and its components run in practice, with code and detailed documentation.
Project mention: Computer Vision Meetup: Develop a Legal Search Application from Scratch using Milvus and DSPy! | dev.to | 2024-05-02
Legal practitioners often need to find specific cases and clauses across thousands of dense documents. While traditional keyword-based search techniques are useful, they fail to fully capture the semantic content of queries and case files. Vector search engines and large language models provide an intriguing alternative. In this talk, I will show you how to build a legal search application using the DSPy framework and the Milvus vector search engine.
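The core idea behind vector search described in that talk can be illustrated in a few lines of Go. This is a minimal sketch, not how Milvus actually works internally: it ranks toy documents by brute-force cosine similarity, whereas real engines use learned embeddings and approximate indexes. The document names and vectors here are invented for illustration:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

type doc struct {
	Name string
	Vec  []float64 // embedding, normally produced by a model
}

// search sorts docs by descending similarity to the query vector.
func search(query []float64, docs []doc) []doc {
	sort.Slice(docs, func(i, j int) bool {
		return cosine(query, docs[i].Vec) > cosine(query, docs[j].Vec)
	})
	return docs
}

func main() {
	docs := []doc{
		{"contract-law.pdf", []float64{0.9, 0.1, 0.0}},
		{"tort-cases.pdf", []float64{0.1, 0.8, 0.2}},
	}
	best := search([]float64{0.8, 0.2, 0.1}, docs)[0]
	fmt.Println(best.Name) // contract-law.pdf with this toy data
}
```

A production system would replace the linear scan with an approximate nearest-neighbor index, which is exactly the piece Milvus provides.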
Project mention: Ask HN: What's with the Gatekeeping in Open Source? | news.ycombinator.com | 2024-05-02
Today I tried to post my open source project on the /r/opensource subreddit. It's an AGPL 3.0-licensed, terminal-based AI coding tool that defaults to OpenAI, but can also be used with other models, including open source models.
The subreddit's rules in the sidebar state that a project must be open source under the definition on Wikipedia (https://en.wikipedia.org/wiki/Open_source) and also that limited and responsible self-promotion is ok.
My post was automatically blocked, seemingly by the mere mention of "OpenAI". The auto-message stated that "ChatGPT wrappers" were not allowed on the subreddit.
I messaged the mods to tell them about the mistake, since my project plainly was not a "ChatGPT wrapper". One of them replied saying only "Working as intended" and that because my project uses OpenAI models by default, that it isn't welcome in the subreddit.
I asked why projects using OpenAI in particular are penalized (despite this being mentioned nowhere in the rules on the sidebar), considering that there are many posts for projects interfacing with MacOS, Windows, AWS, GitHub, and countless other closed source technologies. I received no answer to this question. I was only told that any project "advertising" OpenAI was "against the spirit of FOSS" and therefore did not belong on the subreddit. The mod also continued derisively referring to my project as a "ChatGPT wrapper" and "OpenAI plugin" despite my earlier explanation. I was also called "egocentric" for wanting to share my project.
It made me sad that a subreddit with over 200k members that seems to have a lot of cool discussions going on is being moderated like this. What's with all the gatekeeping? Why are people so interested in excluding the "wrong" type of open source projects? As far as I'm concerned, if you have an open source license and people can run your code, then your project is open source.
Am I right to be miffed by this or does the moderator have a point? Have you experienced this kind of thing with your own projects? How have you dealt with it?
This is my project, by the way: https://github.com/plandex-ai/plandex
Project mention: Show HN: I made a better Perplexity for developers | news.ycombinator.com | 2024-05-08
Ah nice, good work :) I might steal some design ideas for my own project haha https://github.com/nilsherzig/LLocalSearch
Project mention: Open-source AI knowledge database with web UI and Enterprise SSO | news.ycombinator.com | 2023-12-21
Project mention: Show HN: Ellipsis – Automated PR reviews and bug fixes | news.ycombinator.com | 2024-05-09
Hmm, that searches issues, which isn't the best way to see Ellipsis' work.
Example of PR review: https://github.com/getzep/zep-js/pull/67#discussion_r1594781...
Example of issue-to-PR: https://github.com/getzep/zep/issues/316
Example of bug fix on a PR: https://github.com/jxnl/instructor/pull/546#discussion_r1544...
Project mention: [Discussion] Guidance on training ML models on Kubernetes | /r/MachineLearning | 2023-05-24
You could use https://github.com/kubeflow/training-operator directly.
You might reuse the simple LLaMA tokenizer directly in your Go code; see:
https://github.com/gotzmann/llama.go/blob/8cc54ca81e6bfbce25...
Project mention: What AI assistants are already bundled for Linux? | news.ycombinator.com | 2024-03-01
Perhaps this: https://github.com/yusufcanb/tlm?
It is not distro-bundled (yet), but I have it running on Fedora Linux 39 on a NUC with 16GB of RAM. Performance is good enough for me.
Project mention: Show HN: Syntax highliting tool for code snippets in HTML | news.ycombinator.com | 2024-05-16
I'm sure it is still a good learning experience. I learned a lot by working on a universal formatting setup.
https://github.com/hofstadter-io/hof/tree/_dev/formatters
I would, at the very least, wrap the errors being returned inside the process function https://github.com/neurocult/agency/blob/14b14e50a7570189388...
Or, I suppose, the user must handle exception behavior in their custom `OperationHandler`.
Model downloaders of the world unite!
Here’s my PR ;) https://github.com/bodaay/HuggingFaceModelDownloader/pull/25
Project mention: Galah: An LLM-powered web honeypot using the OpenAI API | news.ycombinator.com | 2024-02-02
Project mention: Show HN: EvalGPT – Code interpreter and agent framework inspired by Google Borg | news.ycombinator.com | 2023-09-05
Project mention: Show HN: We got fine-tuning Mistral-7B to not suck | news.ycombinator.com | 2024-02-07
If you look at the source [1] you can see how they solved their "what are the doctors going to do" problem. It is literally included in one of the prompts now:
Users tend to ask broad, vague questions of the document in order to test that the system is working. We want those queries to work well. For example, a user would ask "what are the doctors going to do?" of a document that is about a junior doctors' strike. Take this into account when generating the questions - in particular, refer to noun phrases by less specific descriptions, so for example instead of "junior doctors", say "doctors" in your questions.
[1]: https://github.com/helixml/helix/blob/main/api/pkg/dataprep/...
Project mention: AIKit: Build and deploy LLMs easily with only Docker | news.ycombinator.com | 2023-12-12
Project mention: Generating Code Without Generating Technical Debt? | news.ycombinator.com | 2023-11-10
I’ve built conviction that code generation only gets useful in the long term when it is entirely deterministic, or filtered through humans. Otherwise it is almost always technical debt. Hence LLM code generation products are a cool toy, but no sensible teams will use them without an amazing “Day 2” workflow.
As an example, in my day job (https://speakeasyapi.dev), we sell code generation products using the OpenAPI specification to generate downstream artefacts (language SDKs, terraform providers, markdown documentation). The determinism makes it useful — API updates propagate continuously from server code, to specifications, then to the SDKs / providers / docs site. There are no breaking changes because the pipeline is deterministic and humans are in control of the API at the start. The code generation itself is just a means to an end : removing boilerplate effort and language differences by driving it from a source of truth (server api routes/types). Continuously generated, it is not debt.
We’ve put a lot of effort into trying to make an LLM agent useful in this context. However, giving it direct control of generated code makes it hard to keep the “no breaking changes” and “consistency” restrictions that are needed to make code generation useful.
The trick we’ve landed on to get utility out of an LLM in a code generation task is to restrict it to manipulating a strictly typed interface document, so that it can only make non-breaking changes to code (e.g. adjusting comments / descriptions / examples) through this interface.
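The "strictly typed interface document" idea can be sketched in Go: the LLM's output is decoded into a struct that only exposes safe fields (descriptions, examples), so structural, breaking fields simply have nowhere to land. The struct and field names below are assumptions for illustration, not Speakeasy's actual schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// SafeEdit is the only shape of change the LLM is allowed to produce.
// Structural fields (paths, parameter types) are deliberately absent, so
// json.Unmarshal silently drops anything breaking the model might emit.
type SafeEdit struct {
	Operation   string `json:"operation"`   // which API operation to annotate
	Description string `json:"description"` // new doc comment
	Example     string `json:"example"`     // new usage example
}

// applyEdits decodes LLM output into SafeEdit values, discarding any
// fields outside the safe set and rejecting malformed output.
func applyEdits(llmOutput []byte) ([]SafeEdit, error) {
	var edits []SafeEdit
	if err := json.Unmarshal(llmOutput, &edits); err != nil {
		return nil, fmt.Errorf("rejecting malformed edit: %w", err)
	}
	return edits, nil
}

func main() {
	// The model also tried to rename a route ("path"); that field is ignored.
	out := []byte(`[{"operation":"getUser","description":"Fetch a user by ID.","path":"/v2/users"}]`)
	edits, err := applyEdits(out)
	if err != nil {
		panic(err)
	}
	fmt.Println(edits[0].Description) // Fetch a user by ID.
}
```

The design choice is that safety comes from the type, not from validating the model's behavior: whatever the LLM emits, only the whitelisted fields survive decoding.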
Project mention: Show HN: LLaMA Nuts and Bolts, A holistic way of understanding how LLMs run | news.ycombinator.com | 2024-03-20
Project mention: Show HN: Open-source SDK for creating custom code interpreters with any LLM | news.ycombinator.com | 2024-04-19
We'll have nice and easy support for self-hosting soon-ish.
In the meantime, everything is open-source and the infra is codified with Terraform. GCP should have the best support now. If you want to dig into it, we'd love to give you support along the road so we can improve the process.
Our infra repo [0] is a good place to start. Once you have E2B deployed, you can just change E2B_DOMAIN env var and use our SDK.
Feel free to email me, join our Discord, or open an issue if you have any questions.
[0] https://github.com/e2b-dev/infra
Go LLM related posts
-
Ask HN: What's with the Gatekeeping in Open Source?
-
Fixing a real-world bug with AI using Claude Opus 3 with Plandex [video]
-
Looking for cofounders to build open reliable LLM infra
-
Show HN: Plandex – an AI coding engine for complex tasks
-
Discovering Devin, Devika, and OpenDevin
-
We built a highly scalable LLM gateway with go
-
I built an open-source tool that helps add usage-based billing for your LLM projects
Index
What are some of the best open-source LLM projects in Go? This list will help you:
# | Project | Stars |
---|---|---|
1 | Milvus | 27,206 |
2 | plandex | 9,433 |
3 | LLocalSearch | 5,149 |
4 | flyte | 4,853 |
5 | casibase | 2,202 |
6 | zep | 2,034 |
7 | training-operator | 1,463 |
8 | llama.go | 1,178 |
9 | tlm | 1,062 |
10 | BricksLLM | 755 |
11 | aqueduct | 521 |
12 | hof | 479 |
13 | lingoose | 484 |
14 | agency | 384 |
15 | HuggingFaceModelDownloader | 373 |
16 | galah | 282 |
17 | evalgpt | 243 |
18 | helix | 206 |
19 | llama2.go | 183 |
20 | aikit | 183 |
21 | speakeasy | 144 |
22 | llama-nuts-and-bolts | 109 |
23 | infra | 98 |