Top 23 Python Jax Projects
-
jax
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
-
d2l-en
Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.
-
wandb
🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.
-
einops
Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)
-
foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
-
EasyLM
Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax.
-
pennylane
PennyLane is a cross-platform Python library for differentiable programming of quantum computers. Train a quantum computer the same way as a neural network.
-
numpyro
Probabilistic programming with NumPy powered by JAX for autograd and JIT compilation to GPU/TPU/CPU.
-
equinox
Elegant easy-to-use neural networks + scientific computing in JAX. https://docs.kidger.site/equinox/
-
machine_learning_refined
Notes, examples, and Python demos for the 2nd edition of the textbook "Machine Learning Refined" (published by Cambridge University Press).
Sure, knowing the basics of LLM math is necessary. But knowing that math is also _enough_ to fully grasp the code. There are only 4 concepts - attention, the feed-forward net, RMS-normalization (sketched in JAX below), and rotary embeddings - organized into a clear structure.
Now compare it to the Hugging Face implementation [1]. In addition to the aforementioned concepts, you need to understand the hierarchy of `PreTrainedModel`s, 3 types of attention, 3 types of rotary embeddings, HF's definition of the attention mask (which is not the same as the mask you read about in transformer tutorials), several cache classes, dozens of flags to control things like output format or serialization, etc.
It's not that Meta's implementation is good and HF's is bad - they pursue different goals, each in its own optimal way. But if you just want to learn how the model works, Meta's code base is great.
[1]: https://github.com/huggingface/transformers/blob/main/src/tr...
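To give a sense of how small these building blocks are, here is RMS-normalization written out in a few lines of JAX. This is a sketch of the standard formulation, not a copy of Meta's code:

```python
import jax.numpy as jnp

def rms_norm(x, weight, eps=1e-6):
    # Normalize by the root-mean-square over the feature axis
    # (no mean-centering, unlike LayerNorm), then apply a learned gain.
    rms = jnp.sqrt(jnp.mean(x * x, axis=-1, keepdims=True) + eps)
    return weight * (x / rms)

print(rms_norm(jnp.arange(4.0), jnp.ones(4)))
```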
Keras
Deep Learning for humans
The dual numbers exist just as surely as the real numbers and have been used for well over 100 years.
https://en.m.wikipedia.org/wiki/Dual_number
PyTorch has had them for many years.
https://pytorch.org/docs/stable/generated/torch.autograd.for...
JAX implements them and uses them exactly as stated in this thread.
https://github.com/google/jax/discussions/10157#discussionco...
As you so eloquently stated, "you shouldn't be proclaiming things you don't actually know on a public forum," and doubly so when your claimed "corrections" are so demonstrably and totally incorrect.
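For anyone who wants to see this concretely: JAX's forward-mode differentiation propagates exactly these (primal, tangent) dual pairs. A minimal example using the public jax.jvp API:

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * x ** 2

# jax.jvp pushes a dual number (primal, tangent) through f; with a unit
# tangent, the tangent output is the derivative f'(2.0).
primal, tangent = jax.jvp(f, (2.0,), (1.0,))
print(primal)   # f(2.0)  = sin(2.0) * 4.0
print(tangent)  # f'(2.0) = cos(2.0) * 4.0 + 2 * 2.0 * sin(2.0)
```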
See also https://github.com/unifyai/ivy, which I have not tried but which seems along the lines of what you are describing: it works with all the major frameworks.
Project mention: A list of SaaS, PaaS and IaaS offerings that have free tiers of interest to devops and infradev | dev.to | 2024-02-05
Weights & Biases — The developer-first MLOps platform. Build better models faster with experiment tracking, dataset versioning, and model management. Free tier for personal projects only, with 100 GB of storage included.
Project mention: Maxtext: A simple, performant and scalable Jax LLM | news.ycombinator.com | 2024-04-23
Is t5x an encoder/decoder architecture?
Some more general options:
The Flax ecosystem
https://github.com/google/flax?tab=readme-ov-file
or dm-haiku
https://github.com/google-deepmind/dm-haiku
were some of the best-developed communities in the JAX AI field (see the Flax sketch below for a taste of the API).
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples: https://github.com/huggingface/transformers/tree/main/exampl...
Sadly, it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
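For a taste of what Flax code looks like, here is a minimal flax.linen module; the layer sizes and input shape are arbitrary choices for the sketch:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    hidden: int

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(self.hidden)(x))
        return nn.Dense(1)(x)

model = MLP(hidden=32)
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 8)))  # shapes inferred
y = model.apply(params, jnp.ones((1, 8)))
print(y.shape)  # (1, 1)
```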
Not sure if the wrapper you're talking about is your own custom code, but I really like using einops lately. It's got similar axis-naming capabilities, and it dispatches to both NumPy and PyTorch.
http://einops.rocks/
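A quick illustration of that axis-naming style, shown here with NumPy (the same calls accept PyTorch or JAX arrays):

```python
import numpy as np
from einops import rearrange, reduce

x = np.random.rand(2, 3, 4, 5)           # batch, channel, height, width
y = rearrange(x, 'b c h w -> b h w c')   # channels-last transpose, named axes
m = reduce(x, 'b c h w -> b c', 'mean')  # global average pool per channel
print(y.shape, m.shape)                  # (2, 4, 5, 3) (2, 3)
```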
Project mention: JAX – NumPy on the CPU, GPU, and TPU, with great automatic differentiation | news.ycombinator.com | 2023-09-28
Agree, though I wouldn’t call PyTorch a drop-in for NumPy either. CuPy is the drop-in. Excepting some corner cases, you can use the same code for both. Thinc’s ops work with both NumPy and CuPy:
https://github.com/explosion/thinc/blob/master/thinc/backend...
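The underlying pattern is simple: write the function against an array module passed in as an argument, so identical code runs on NumPy (CPU) or CuPy (GPU). A minimal sketch of the idea (not Thinc's actual ops code):

```python
import numpy as np

def softmax(xp, x):
    # xp is either the numpy or the cupy module; both expose the same API.
    e = xp.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

print(softmax(np, np.array([[1.0, 2.0, 3.0]])))
# On a GPU machine: import cupy as cp; softmax(cp, cp.asarray([[1.0, 2.0, 3.0]]))
```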
Go ahead and play with any of the adversarial attacks from https://github.com/bethgelab/foolbox; you will not find an attack that is both robust to perturbations and almost visually imperceptible.
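For reference, running an attack with Foolbox looks roughly like this. A sketch of the Foolbox 3 API with an untrained stand-in model; in practice you would wrap your own trained network:

```python
import torch
import foolbox as fb

# Stand-in model and data; substitute a real trained network and real inputs.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10)).eval()
images = torch.rand(8, 1, 28, 28)   # inputs scaled to [0, 1]
labels = torch.randint(0, 10, (8,))

fmodel = fb.PyTorchModel(model, bounds=(0, 1))
attack = fb.attacks.LinfPGD()       # L-infinity projected gradient descent
raw, clipped, success = attack(fmodel, images, labels, epsilons=8 / 255)
print(success.float().mean())       # fraction of inputs the attack now fools
```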
Project mention: PennyLane: Python library for differentiable programming of quantum computers | news.ycombinator.com | 2024-05-07
I wrote a JAX-based neural network library (Equinox [1]) and numerical differential equation solving library (Diffrax [2]).
At the time I was just exploring some new research ideas in numerics -- and frankly, procrastinating from writing up my PhD thesis!
But then one of the teams at Google started using them, so they offered me a job to keep developing them for their needs. Plus I'd get to work in biotech, which was a big interest of mine. This was a clear dream job offer, so I accepted.
Since then both have grown steadily in popularity (~2.6k GitHub stars) and now see pretty widespread use! I've since started writing several other JAX libraries and we now have a bit of an ecosystem going.
[1] https://github.com/patrick-kidger/equinox
[2] https://github.com/patrick-kidger/diffrax
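For flavor: an Equinox model is just a dataclass-like PyTree, so it composes directly with jax.jit, jax.grad, and jax.vmap. A minimal sketch along the lines of the Equinox README:

```python
import jax
import equinox as eqx

class Linear(eqx.Module):
    weight: jax.Array
    bias: jax.Array

    def __init__(self, in_size, out_size, key):
        wkey, bkey = jax.random.split(key)
        self.weight = jax.random.normal(wkey, (out_size, in_size))
        self.bias = jax.random.normal(bkey, (out_size,))

    def __call__(self, x):
        return self.weight @ x + self.bias

model = Linear(2, 3, key=jax.random.PRNGKey(0))
batch = jax.random.normal(jax.random.PRNGKey(1), (8, 2))
print(jax.vmap(model)(batch).shape)  # (8, 3)
```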
Python Jax related posts
-
Maxtext: A simple, performant and scalable Jax LLM
-
Julia GPU-based ODE solver 20x-100x faster than those in Jax and PyTorch
-
[P] LagrangeBench: A Lagrangian Fluid Mechanics Benchmarking Suite
-
Apple releases MLX for Apple Silicon
-
About Monte Carlo tree search in Jax
-
MLPerf training tests put Nvidia ahead, Intel close, and Google well behind
-
[P] Optimistix, nonlinear optimisation in JAX+Equinox!
Index
What are some of the best open-source Jax projects in Python? This list will help you:
# | Project | Stars |
---|---|---|
1 | transformers | 126,170 |
2 | Keras | 61,044 |
3 | jax | 28,174 |
4 | d2l-en | 21,858 |
5 | best-of-ml-python | 15,672 |
6 | ivy | 14,025 |
7 | wandb | 8,294 |
8 | trax | 7,962 |
9 | einops | 7,971 |
10 | flax | 5,567 |
11 | datasets | 4,200 |
12 | scenic | 3,037 |
13 | alpa | 2,989 |
14 | dm-haiku | 2,816 |
15 | thinc | 2,798 |
16 | foolbox | 2,663 |
17 | deepxde | 2,365 |
18 | EasyLM | 2,247 |
19 | mctx | 2,212 |
20 | pennylane | 2,129 |
21 | numpyro | 2,056 |
22 | equinox | 1,837 |
23 | machine_learning_refined | 1,596 |