Yes - thanks for pointing that out. The README is being updated, you can see an updated WIP in the dev branch: https://github.com/google/gemma.cpp/tree/dev?tab=readme-ov-f...
This is not a production backend, as it says in the readme.
There are some very interesting efforts in JAX/TPU land like https://github.com/erfanzar/EasyDeL
llama.cpp has integrated Gemma support, so you can use llamafile for this. It is a standalone executable that is portable across most popular OSes.
https://github.com/Mozilla-Ocho/llamafile/releases
So, download the executable from the releases page under Assets. You want either plain `main` or plain `server`; don't get the huge ones with the model weights inlined in the file. The executable itself is only about 30 MB:
https://github.com/Mozilla-Ocho/llamafile/releases/download/...
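Once the `server` executable is running with a Gemma GGUF weights file loaded (the model filename below is hypothetical), llamafile exposes an OpenAI-compatible chat endpoint on its default port, 8080. A request sketch:

```shell
# Assumes the server was started with something like:
#   ./llamafile -m gemma-2b-it.gguf   (model filename is hypothetical)
# Query the OpenAI-compatible endpoint llamafile serves by default:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma-2b-it",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

The same request body works with any OpenAI-compatible client library pointed at `http://localhost:8080/v1`.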
Thanks so much!
Everyone working on this self-selected into contributing, so I think of it less as my team than ... a team?
Specifically want to call out: Jan Wassenberg (author of https://github.com/google/highway) and I started gemma.cpp as a small project just a few months ago + Phil Culliton, Dan Zheng, and Paul Chang + of course the GDM Gemma team.