| | cody | ollama |
|---|---|---|
| Mentions | 22 | 220 |
| Stars | 2,074 | 69,806 |
| Growth | 12.6% | 10.3% |
| Activity | 10.0 | 9.9 |
| Last commit | 1 day ago | 4 days ago |
| Language | TypeScript | Go |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cody
-
Ask HN: Cheapest way to use LLM coding assistance?
Check out the Cody extension (https://github.com/sourcegraph/cody), available for various editors like VS Code.
-
The lifecycle of a code AI completion
I don't think it is. There is a test file that includes C#, Kotlin, etc. among supported languages, which aren't included in the file you linked: https://github.com/sourcegraph/cody/blob/main/vscode/src/com...
But that test doesn't seem to include TypeScript, so it's clearly not comprehensive. I'm not convinced this information is actually in one place.
-
Ollama is now available on Windows in preview
Cody (https://github.com/sourcegraph/cody) supports using Ollama for autocomplete in VS Code. See the release notes at https://sourcegraph.com/blog/cody-vscode-1.1.0-release for instructions. And soon it'll support Ollama for chat/refactoring as well (https://twitter.com/sqs/status/1750045006382162346/video/1).
Disclaimer: I work on Cody.
-
My 2024 AI Predictions
Have you tried Cody (https://cody.dev)? Cody has a deep understanding of your codebase and generally does much better at code gen than just one-shotting GPT4 without context.
(disclaimer: I work at Sourcegraph)
-
7 AI Tools to Improve Your Productivity: A Deep Dive
3. Cody AI
-
An ex-Googler's guide to dev tools
Author of the post here. As another commenter mentioned, this is indeed a bit dated now; someone should probably write an updated post!
There's been a ton of evolution in dev tools in the past 3 years, with some old workhorses retiring (RIP Phabricator) and new ones (like Graphite, which is awesome) emerging... and of course AI-AI-AI. LLMs have created some great new tools for the developer inner loop; that's probably the most glaring omission here. If I were to include that category today, it would mention tools like ChatGPT, GH Copilot, Cursor, and our own Sourcegraph Cody (https://cody.dev). I'm told that Google has internal AI dev tools now that generate more code than humans.
Excited to see what changes the next 3 years bring; the pace of innovation is only accelerating!
-
LocalPilot: Open-source GitHub Copilot on your MacBook
I'm sorry to hear that. We have made a lot of improvements to Cody recently. We had a big release on Oct 4 that significantly decreased latency while improving completion quality. You can read all about it here: https://about.sourcegraph.com/blog/feature-release-october-2...
We love feedback and ideas as well, and, as I said, we're constantly iterating on the UI to improve it. I'm actually wrapping up a blog post on how to better leverage Cody with VS Code; that'll be out either later today or sometime tomorrow. As far as feedback goes, https://github.com/sourcegraph/cody/discussions/new?category... would be the place to share ideas :)
-
Show HN: Ollama for Linux - Run LLMs on Linux with GPU Acceleration
Ollama is awesome. I am part of a team building a code AI application[1], and we want to give devs the option to run it locally instead of only supporting external LLMs from Anthropic, OpenAI, etc. Those big remote LLMs are incredibly powerful and probably the right choice for most devs, but it's good for devs to have a local option as well: for security, privacy, cost, latency, simplicity, freedom, etc.
As an app dev, we have 2 choices:
(1) Build our own support for LLMs, GPU/CPU execution, model downloading, inference optimizations, etc.
(2) Just tell users "run Ollama" and have our app hit the Ollama API on localhost (or shell out to `ollama`).
Obviously choice 2 is much, much simpler. There are some things in the middle, like less polished wrappers around llama.cpp, but Ollama is the only thing that 100% of people I've told about have been able to install without any problems.
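To make choice 2 concrete, here is a minimal TypeScript sketch of an app calling a local Ollama server over HTTP. It assumes Ollama is installed, serving on its default port 11434, and that a model such as llama3 has already been pulled; the function name is just illustrative.

```typescript
// Minimal sketch of "choice 2": hit the Ollama HTTP API on localhost.
// Assumes `ollama pull llama3` has been run and the server is on its default port.
async function completeWithOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
  });
  if (!res.ok) {
    throw new Error(`Ollama request failed: ${res.status}`);
  }
  const data = await res.json();
  return data.response; // the generated completion text
}

// Usage: completeWithOllama("Explain this stack trace: ...").then(console.log);
```

Everything below that call (model weights, GPU/CPU execution, quantization) is Ollama's problem, which is exactly the point.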
That's huge because it's finally possible to build real apps that use local LLMs and still reach a big userbase. Your userbase is now (pretty much) "anyone who can download and run a desktop app and who has a relatively modern laptop", which is a big population.
I'm really excited to see what people build on Ollama.
(And Ollama will simplify deploying server-side LLM apps as well, but right now from participating in the community, it seems most people are only thinking of it for local apps. I expect that to change when people realize that they can ship a self-contained server app that runs on a cheap AWS/GCP instance and uses an Ollama-executed LLM for various features.)
[1] Shameless plug for the WIP PR where I'm implementing Ollama support in Cody, our code AI app: https://github.com/sourcegraph/cody/pull/905.
-
Cody - The AI that knows your entire codebase
Awesome. The repository is at https://github.com/sourcegraph/cody for anyone who hasn't seen it yet.
- Code AI with Codebase Context
ollama
- Ollama v0.1.34 Is Out
-
Ask HN: What do you use local LLMs for?
- Basic internet search (I can start the ollama CLI faster than I can start a browser - https://ollama.com)
- Formatting/changing text
- Troubleshooting code, esp. new frameworks/libs
- Recipes
- Data entry
- Organizing thoughts: High-level lists, comparison, classification, synonyms, jargon & nomenclature
- Learning esp. by analogy and example
RAG for:
- Website assistants (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Game NPCs (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Discord/Slack/forum bots (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Character-driven storytelling and creating art in a specific style for video game loading screens, background images, avatars, website art, etc. (https://github.com/bennyschmidt/ragdoll-studio/tree/master/r...)
- FLaNK-AIM Weekly 06 May 2024
-
Introducing Jan
Jan goes a step further by integrating with other local engines like LM Studio and Ollama.
- Ollama v0.1.33
-
Hindi-Language AI Chatbot for Enterprises Using Qdrant, MLFlow, and LangChain
# install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# pull the llama3 model
ollama pull llama3
# install MLflow
pip install mlflow
-
Create an AI prototyping environment using Jupyter Lab IDE with Typescript, LangChain.js and Ollama for rapid AI prototyping
Ollama for running LLMs locally
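As a rough illustration of that setup, here is a minimal sketch using LangChain.js with a local Ollama server; it assumes the @langchain/community package is installed and llama3 has been pulled (the exact import path can vary between LangChain.js versions).

```typescript
// Minimal sketch: LangChain.js talking to a locally running Ollama server.
// Assumes `ollama pull llama3` has been run; import path may differ by version.
import { Ollama } from "@langchain/community/llms/ollama";

const llm = new Ollama({
  baseUrl: "http://localhost:11434", // default local Ollama endpoint
  model: "llama3",                   // any locally pulled model tag
});

const answer = await llm.invoke(
  "Summarize what a vector database is in two sentences."
);
console.log(answer);
```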
-
Setup Llama 3 using Ollama and Open-WebUI
curl -fsSL https://ollama.com/install.sh | sh
-
Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
Streaming is not a problem (it's just a simple flag: https://github.com/wiktor-k/llama-chat/blob/main/index.ts#L2...) but I've never used voice input.
The examples show image input though: https://github.com/ollama/ollama/blob/main/docs/api.md#reque...
Maybe you can file an issue here: https://github.com/ollama/ollama/issues
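For anyone wondering what that flag looks like in practice, here is a minimal TypeScript sketch against Ollama's /api/chat endpoint, assuming a local server on the default port and a pulled llama3 model; with streaming enabled the server emits one JSON object per line.

```typescript
// Minimal sketch: streaming tokens from a local Ollama server as they arrive.
// Assumes Ollama is on its default port and `llama3` has been pulled.
async function streamChat(prompt: string): Promise<void> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: prompt }],
      stream: true, // the "simple flag": one JSON chunk per line as tokens arrive
    }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let newline;
    while ((newline = buffer.indexOf("\n")) >= 0) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (line) process.stdout.write(JSON.parse(line).message.content);
    }
  }
}
```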
-
I Said Goodbye to ChatGPT and Hello to Llama 3 on Open WebUI - You Should Too
I'm a huge fan of open source models, especially the newly released Llama 3. Because of the performance of both the large 70B Llama 3 model and the smaller, self-hostable 8B Llama 3, I've actually cancelled my ChatGPT subscription in favor of Open WebUI, a self-hostable ChatGPT-like UI that lets you use Ollama and other AI providers while keeping your chat history, prompts, and other data local on any computer you control.
What are some alternatives?
zoekt - Fast trigram based code search
llama.cpp - LLM inference in C/C++
lsp-cody - A Client to Connect to the Cody LSP Gateway
gpt4all - gpt4all: run open-source LLMs anywhere
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llm-ls - LSP server leveraging LLMs for code completion (and more?)
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
localpilot
llama - Inference code for Llama models
react-agent - The open-source React.js Autonomous LLM Agent
LocalAI - The free, open source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It can generate text, audio, video, and images, and has voice cloning capabilities.