openrouter-runner vs llm

| | openrouter-runner | llm |
|---|---|---|
| Mentions | 12 | 27 |
| Stars | 388 | 3,189 |
| Growth | 15.5% | - |
| Activity | 9.5 | 9.4 |
| Latest commit | about 1 month ago | 9 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
openrouter-runner
- Openrouter
- Integrates multiple AI APIs into a single platform
-
Collection of notebooks showcasing some fun and effective ways of using Claude
Why not use something like http://openrouter.ai? Pay as you go and you can select any model you want. Heaven!
-
World_SIM: LLM prompted to act as a sentient CLI universe simulator
teknium / Nous released Mistral finetunes (Hermes) that are quite great, and even published the datasets used for training.
But for the worldsim I think they are really using Claude (probably Haiku or Sonnet) via openrouter (https://openrouter.ai/).
-
Show HN: Plandex – an AI coding engine for complex tasks
Not affiliated with the project but you could use something like OpenRouter to give users a massive list of models to choose from with fairly minimal effort
https://openrouter.ai/
-
The Next Generation of Claude (Claude 3)
> I hate that they require a phone number
https://openrouter.ai/ lets you make one account and get API access to a bunch of different models. They also provide access to hosted versions of a bunch of open models.
Useful if you want to compare 15 different models without bothering to create 15 different accounts or download 15 x 20GB of models :)
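The one-account, many-models point above works because OpenRouter exposes an OpenAI-compatible chat completions endpoint, so comparing models is just a matter of swapping the model slug. A minimal sketch, assuming `OPENROUTER_API_KEY` is set in your environment; the model slugs are illustrative examples, not an endorsement of any particular pair:

```shell
# Query two different models through a single OpenRouter account.
# Assumes OPENROUTER_API_KEY is exported; slugs are examples only.
for model in "anthropic/claude-3-haiku" "mistralai/mistral-7b-instruct"; do
  curl -s https://openrouter.ai/api/v1/chat/completions \
    -H "Authorization: Bearer $OPENROUTER_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"model\": \"$model\", \"messages\": [{\"role\": \"user\", \"content\": \"Say hello\"}]}"
done
```

The same request body works against every model on the platform, which is what makes side-by-side comparison cheap.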
-
The killer app of Gemini Pro 1.5 is video
You sure can! NeuroEngine[1] hosts some nice free demos of what are basically the state of the art in unfiltered models, and if you need API access, OpenRouter[2] has dozens of unfiltered models to choose from.
[1] https://www.neuroengine.ai/
[2] https://openrouter.ai/
-
OpenAI has Text to Speech Support now!
However, this needs to change, as other providers like OpenRouter may also start supporting this feature in the future.
-
How to narrow down interest and where to begin?
Great resources for me are:
- The LangChain Blog: very technical, but great graphics of complex topics. Gives you a good understanding of what is currently possible and what the hot topics are.
- Product Hunt: a great resource to see what others are building with AI.
- Replicate and OpenRouter for custom-made / fine-tuned models.
- AI Twitter.
- Show HN: Ranking LLMs by Usage over Time
llm
- FLaNK AI-April 22, 2024
-
Show HN: I made a tool to clean and convert any webpage to Markdown
That's a great use case. You might be able to do this if you've got copy and paste available on the command line, with https://github.com/simonw/llm in between: an alias like pdfwtf translating to "paste | llm command | copy".
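Concretely, the hypothetical pdfwtf alias might look like this on macOS, assuming `pbpaste`/`pbcopy` for clipboard access and the llm CLI installed; the prompt text is made up for illustration:

```shell
# Hypothetical pdfwtf alias: clipboard -> llm -> clipboard.
# Assumes macOS pbpaste/pbcopy; on Linux, xclip or wl-paste/wl-copy would stand in.
alias pdfwtf='pbpaste | llm "Reflow this pasted PDF text into clean Markdown" | pbcopy'
```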
-
Command R+: A Scalable LLM Built for Business
I added support for this model to my LLM CLI tool via a new plugin: https://github.com/simonw/llm-command-r
So now you can do this:
pipx install llm
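After that install, the rest of the flow might look like the following sketch, which follows the llm plugin conventions documented at llm.datasette.io and assumes you have a Cohere API key to paste when prompted:

```shell
# Install the Command R plugin, register a Cohere API key, and run the model.
# Assumes the llm CLI is already installed (e.g. via pipx install llm).
llm install llm-command-r
llm keys set cohere
llm -m command-r-plus "What is the capital of France?"
```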
-
The Next Generation of Claude (Claude 3)
If you're willing to use the CLI, Simon Willison's llm library[0] should do the trick.
[0] https://github.com/simonw/llm
- Show HN: I made an app to use local AI as daily driver
-
Localllm lets you develop gen AI apps on local CPUs
I'm not thrilled about https://github.com/GoogleCloudPlatform/localllm/blob/main/ll... calling their Python package "llm" and installing "llm" as a CLI command, when my similar https://llm.datasette.io/ project has that namespace reserved on PyPI already: https://pypi.org/project/llm/
- FLaNK 15 Jan 2024
- Show HN: Simple Script for Enhanced LLM Interaction in Vim
-
Bash One-Liners for LLMs
I've been gleefully exploring the intersection of LLMs and CLI utilities for a few months now - they are such a great fit for each other! The unix philosophy of piping things together is a perfect fit for how LLMs work.
I've mostly been exploring this with my https://llm.datasette.io/ CLI tool, but I have a few other one-off tools as well: https://github.com/simonw/blip-caption and https://github.com/simonw/ospeak
I'm puzzled that more people aren't loudly exploring this space (LLM+CLI) - it's really fun.
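As one illustration of that Unix-pipe fit, a couple of hedged examples using the llm CLI (assumes it is installed with an API key configured; the prompts and filenames are invented):

```shell
# Pipe arbitrary command output into a model as context,
# using -s to set a system prompt.
cat error.log | llm -s "Explain this error and suggest a fix"

# Chain tools: summarize a man page, then tee the summary to a file.
man tar | llm -s "Summarize the five most useful flags" | tee tar-notes.txt
```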
-
Semantic Kernel
Seems nice if you're using c# or java. It also supports python, but for that Simon's llm library is nice because he designed it as both a library and a command line tool: https://github.com/simonw/llm
What are some alternatives?
llm-claude-3 - LLM plugin for interacting with the Claude 3 family of models
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
plandex - AI driven development in your terminal. Designed for large, real-world tasks.
langroid - Harness LLMs with Multi-Agent Programming
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
multi-gpt - A Clojure interface into the GPT API with advanced tools like conversational memory, task management, and more
jehuty - Fluent API to interact with chat based GPT model
llm-replicate - LLM plugin for models hosted on Replicate
aipl - Array-Inspired Pipeline Language
simpleaichat - Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
ad-llama - Structured inference with Llama 2 in your browser
onprem - A tool for running on-premises large language models with non-public data