LocalPilot: Open-source GitHub Copilot on your MacBook

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • localpilot

  • refact

    WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding

  • You should check out [refact.ai](https://github.com/smallcloudai/refact). It has both autocomplete and chat. It's in active development, with lots of new features coming soon (context search, fine-tuning for larger models, etc.).

  • llm-ls

    LSP server leveraging LLMs for code completion (and more?)

  • Okay, I actually got local co-pilot set up. You will need these 4 things.

    1) CodeLlama 13B or another FIM model: https://huggingface.co/codellama/CodeLlama-13b-hf. You want "fill in the middle" (FIM) models because you're looking at context on both sides of your cursor.

    2) HuggingFace llm-ls: https://github.com/huggingface/llm-ls. A large language model Language Server (is this making sense yet?).

    3) HuggingFace inference framework: https://github.com/huggingface/text-generation-inference. At least when I tested, you couldn't use something like llama.cpp or exllama with llm-ls, so you need to break out the heavy-duty HuggingFace inference server. Just configure and run it. Then configure and run llm-ls.

    4) Okay, I mean you need an editor. I just tried nvim, and this was a few weeks ago, so there may be better support now. My experience was that it was full, honest-to-god copilot. The CodeLlama models are known to be quite good for their size. The FIM part is great: boilerplate works so much more easily with the surrounding context. I'd like to see more models released that can work this way.
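    The FIM flow behind those steps can be sketched roughly like this. The prompt-token spelling and the server URL/port are assumptions here — check the CodeLlama model card for the exact infill tokens and your text-generation-inference config for the endpoint — but the general shape (prefix + suffix wrapped in special tokens, sent to TGI's `/generate` route) is the idea:

    ```python
    import json
    import urllib.request

    def fim_prompt(prefix: str, suffix: str) -> str:
        """Build a fill-in-the-middle prompt.

        CodeLlama infill models see the code before and after the cursor
        and generate the middle. The token spelling below is an assumption;
        verify it against the model card for the checkpoint you run.
        """
        return f"<PRE> {prefix} <SUF>{suffix} <MID>"

    def complete(prefix: str, suffix: str,
                 url: str = "http://localhost:8080/generate") -> str:
        """Query a text-generation-inference server (port is an assumption).

        TGI's /generate route takes {"inputs": ..., "parameters": {...}}
        and returns {"generated_text": ...}.
        """
        payload = {
            "inputs": fim_prompt(prefix, suffix),
            "parameters": {"max_new_tokens": 64, "stop": ["<EOT>"]},
        }
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["generated_text"]

    # What the model actually sees for a cursor inside a function body:
    print(fim_prompt("def add(a, b):\n    ", "\n\nprint(add(1, 2))"))
    ```

    In the real setup, llm-ls does this request-building for you; the sketch just shows why a FIM model beats a plain left-to-right one here — the suffix after your cursor is part of the prompt.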

  • text-generation-inference

    Large Language Model Text Generation Inference

  • cody

    AI that knows your entire codebase

  • I'm sorry to hear that. We have made a lot of improvements to Cody recently. We had a big release on Oct 4 that significantly decreased latency while improving completion quality. You can read all about it here: https://about.sourcegraph.com/blog/feature-release-october-2...

    We love feedback and ideas as well, and, like I said, we are constantly iterating on the UI to improve it. I'm actually wrapping up a blog post on how to better leverage Cody w/ VS Code; that'll be out either later today or sometime tomorrow. As far as feedback goes, though, https://github.com/sourcegraph/cody/discussions/new?category... would be the place to share ideas :)

  • OpenAI-sublime-text

    Sublime Text OpenAI completion plugin with GPT-4 support!

  • Hey, check the one that I made[1].

    It isn't the code completion assistant like the one you mentioned above, and it probably never will be. I see it more as a perfect coding companion that is always under your fingertips and relieves you of googling most of the time.

    Yet it's tied to OpenAI, and you have to pay for it yourself, but the former should change sooner rather than later.

    Bonus: in the develop branch there is a some-kind-of release candidate that is way more robust than the current release.

    [1]: https://github.com/yaroslavyaroslav/OpenAI-sublime-text

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

Suggest a related project

Related posts

  • Zephyr 141B, a Mixtral 8x22B fine-tune, is now available in Hugging Chat

    3 projects | news.ycombinator.com | 12 Apr 2024
  • [P] What are the latest "out of the box solutions" for deploying the very large LLMs as API endpoints?

    3 projects | /r/MachineLearning | 23 Feb 2023
  • Quick tip: Using R, OpenAI and SingleStore Notebooks

    1 project | dev.to | 1 May 2024
  • Hugging Face reverts the license back to Apache 2.0

    1 project | news.ycombinator.com | 8 Apr 2024
  • HuggingFace text-generation-inference is reverting to Apache 2.0 License

    2 projects | news.ycombinator.com | 8 Apr 2024