Dockerized LLM inference server with constrained output (JSON mode), built on top of vLLM and outlines. Faster, cheaper, and without rate limits. Compare the quality and latency to your current LLM API provider.