horsey-books
Generate phrases using Markov chains (by longears)
llimo
Large language and image models in pure JavaScript. (by bennyschmidt)
| | horsey-books | llimo |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 4 | 10 |
| Growth | - | - |
| Activity | - | 7.6 |
| Latest commit | about 11 years ago | about 1 month ago |
| Language | Python | JavaScript |
| License | - | - |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
horsey-books
Posts with mentions or reviews of horsey-books.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-10.
- Show HN: Next-token prediction in JavaScript – build fast LLMs from scratch
[2] https://github.com/longears/horsey-books
llimo
Posts with mentions or reviews of llimo.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-10.
- Show HN: Next-token prediction in JavaScript – build fast LLMs from scratch
This system predicts "was" as the next word because it usually is the next word after "dog" (in the source data). This library was built to ultimately provide completions, not have a conversation, so no doubt OpenAI's approach works better for chat.
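The frequency-based prediction described above can be sketched in a few lines of JavaScript. This is a minimal illustration of the idea, not the library's actual implementation; `train` and `predictNext` are hypothetical names:

```javascript
// Minimal sketch (assumed names, not the library's real API): a bigram
// frequency model that predicts the word most often seen after the previous one.
function train(text) {
  const counts = {};
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  for (let i = 0; i < words.length - 1; i++) {
    const prev = words[i];
    const next = words[i + 1];
    counts[prev] = counts[prev] || {};
    counts[prev][next] = (counts[prev][next] || 0) + 1;
  }
  return counts;
}

function predictNext(counts, word) {
  const followers = counts[word.toLowerCase()];
  if (!followers) return null;
  // Return the follower with the highest count
  return Object.entries(followers).sort((a, b) => b[1] - a[1])[0][0];
}

const model = train('the dog was here and the dog was there and the dog ran');
console.log(predictNext(model, 'dog')); // "was" follows "dog" twice, "ran" once
```

In this toy corpus "was" wins because it follows "dog" more often than any other word, which is exactly the behavior the comment describes.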
I am, however, already making a chat model. Here's my approach if anyone cares: the completer already gives great completions, and fast, but some of them make no sense for what was asked. The chat model I'm working on here (https://github.com/bennyschmidt/llimo/pull/1) can get all completions and use parts-of-speech codes to match a completion to the cursor. I don't have this fully implemented yet, but you can get the idea in this PR.

This is like an NLP layer specific to chat; it has nothing to do with next-token prediction in general, and there are no NLP libraries in `next-token-prediction` (the npm). The example I've been using to explain this is:
User: "Where is Paris?"