E2B vs DeepSpeed

| | E2B | DeepSpeed |
|---|---|---|
| Mentions | 35 | 51 |
| Stars | 6,196 | 33,018 |
| Growth | 4.4% | 2.5% |
| Activity | 9.9 | 9.8 |
| Latest commit | 3 days ago | 4 days ago |
| Language | TypeScript | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
E2B
-
Ask HN: Who is hiring? (May 2024)
E2B | https://e2b.dev | San Francisco, CA | Full-time | In-person
[E2B](https://e2b.dev) is building a secure open-source runtime that will power the next billion AI apps & agents.
We found early traction by making it easy for developers to add [code interpreting](https://github.com/e2b-dev/code-interpreter) to their AI apps with our SDK built on top of our [agentic runtime](https://github.com/e2b-dev/e2b). We have paying customers ranging from seed-stage startups to enterprise companies.
We're hiring:
- Frontend/Product Engineer
- Infrastructure Engineer
Check the roles here https://e2b.dev/careers
-
Show HN: Add AI code interpreter to any LLM via SDK
Hi, I'm the CEO of the company that built this SDK.
We're a company called E2B [0]. We build and open-source [1] secure environments for running untrusted AI-generated code and AI agents. We call these environments sandboxes; they are built on top of a microVM called Firecracker [2].
You can think of us as giving small cloud computers to LLMs.
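To make the sandbox idea concrete, here is a minimal sketch of booting one of these environments from Python. It assumes the `e2b` Python package; the exact names (`Sandbox`, `commands.run`, `kill`) may differ between SDK versions, so treat it as illustrative rather than canonical:

```python
# Minimal sketch: boot an E2B sandbox and run a command inside it.
# Assumes the `e2b` Python package; exact names may vary by SDK version.
from e2b import Sandbox

sandbox = Sandbox()  # provisions a fresh Firecracker-backed microVM
try:
    # execute an arbitrary shell command in the isolated environment
    result = sandbox.commands.run("echo 'hello from the sandbox'")
    print(result.stdout)
finally:
    sandbox.kill()  # always release the cloud VM when done
```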
We recently created a dedicated SDK for building custom code interpreters in Python or JS/TS. We saw this need after many of our users added code execution capabilities to their AI apps with our core SDK [3]. These use cases were often centered around AI data analysis, so code interpreter-like behavior made sense.
Our code interpreter SDK works by spawning an E2B sandbox running a Jupyter server. We then communicate with this Jupyter server through the Jupyter kernel messaging protocol [4].
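For readers unfamiliar with that protocol, here is a hedged illustration of the kind of exchange involved, using the standard jupyter_client library against a local kernel. This shows the protocol pattern, not E2B's actual implementation:

```python
# Illustration of the Jupyter kernel messaging pattern: send an
# execute_request, then read results off the iopub channel.
from queue import Empty
from jupyter_client.manager import start_new_kernel

km, kc = start_new_kernel(kernel_name="python3")  # local kernel + client
try:
    kc.execute("x = 2 + 2\nprint(x)")  # execute_request on the shell channel
    # drain iopub messages until the kernel reports it is idle again
    while True:
        try:
            msg = kc.get_iopub_msg(timeout=10)
        except Empty:
            break
        if msg["msg_type"] == "stream":
            print(msg["content"]["text"], end="")  # -> "4"
        if (msg["msg_type"] == "status"
                and msg["content"]["execution_state"] == "idle"):
            break
finally:
    kc.stop_channels()
    km.shutdown_kernel()
```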
We don't do any wrapping around the LLM, any prompting, or any agent-like framework. We leave all of that to users. We're really just a boring code execution layer that sits at the bottom, built specifically for a future in which software builds other software. We work with any LLM. Here's how we added a code interpreter to Claude [5].
Our long-term plan is to build an automated AWS for AI apps and agents.
Happy to answer any questions and hear feedback!
[0] https://e2b.dev/
[1] https://github.com/e2b-dev
[2] https://github.com/firecracker-microvm/firecracker
[3] https://e2b.dev/docs
[4] https://jupyter-client.readthedocs.io/en/latest/messaging.ht...
[5] https://github.com/e2b-dev/e2b-cookbook/blob/main/examples/c...
- Open Source Python Code Interpreter for Any LLM
- Show HN: Open-Source Infrastructure for AI Code Interpreters
-
We're building a cloud runtime for AI agents and gradually open-sourcing everything
Hey folks, we're building an open source runtime for AI agents at E2B.
- Show HN: Run LLM-generated code in sandboxed envs
- Sandboxed cloud environments for AI agents & apps with a single line of code
- We're building a cloud for AI agents & AI apps. It's free, and we're gradually open-sourcing the infra. Would love to hear your feedback!
- [P] We're building a cloud for AI agents & AI apps. It's free, and we're gradually open-sourcing the infra. Would love to hear your feedback!
DeepSpeed
-
Can we discuss MLOps, Deployment, Optimizations, and Speed?
DeepSpeed can handle parallelism concerns, and can even offload optimizer and model state to RAM, or even NVMe (!?). I'm surprised I don't see this project used more.
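For anyone curious what that offloading looks like in practice, below is a hedged sketch of a ZeRO stage-3 DeepSpeed config with optimizer state offloaded to CPU RAM and parameters to NVMe. The model, path, batch size, and tuning values are placeholders, and NVMe offload typically needs extra aio tuning; consult the DeepSpeed docs for your version:

```python
# Sketch: DeepSpeed ZeRO stage-3 with CPU + NVMe offload.
# Values and paths are illustrative; run under the `deepspeed` launcher.
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # stand-in for a real model

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,  # partition optimizer state, gradients, and parameters
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "nvme", "nvme_path": "/local_nvme"},
    },
}

model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```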
- [P][D] A100 is much slower than expected at low batch size for text generation
- DeepSpeed-FastGen: High-Throughput for LLMs via MII and DeepSpeed-Inference
- DeepSpeed-FastGen: High-Throughput Text Generation for LLMs
- Why async gradient update doesn't get popular in LLM community?
- DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models (r/MachineLearning)
- [P] DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models
-
A comprehensive guide to running Llama 2 locally
While on the surface, a 192GB Mac Studio seems like a great deal (it's not much more than a 48GB A6000!), there are several reasons why this might not be a good idea:
* I assume most people have never used llama.cpp Metal w/ large models. It will drop to CPU speeds whenever the context window is full: https://github.com/ggerganov/llama.cpp/issues/1730#issuecomm... - while this might be fixed in the future, it's been an issue since Metal support was added, and is a significant problem if you are actually trying to use it for inference. With 192GB of memory, you could probably run larger models w/o quantization, but I've never seen anyone post benchmarks of their experiences. Note that at that point, the limited memory bandwidth will be a big factor.
* If you are planning on using Apple Silicon for ML/training, I'd also be wary. There are multi-year-old open bugs in PyTorch[1] (a quick MPS sanity check is sketched after the footnotes), and most major LLM libs like deepspeed, bitsandbytes, etc. don't have Apple Silicon support[2][3].
You can see similar patterns w/ Stable Diffusion support [4][5] - support lagging by months, lots of problems and poor performance with inference, much less fine-tuning. You can apply this to basically any ML application you want (stt, tts, video, etc.).
Macs are fine to poke around with, but if you actually plan to do more than run a small LLM and say "neat", especially for a business, recommending a Mac for anyone getting started w/ ML workloads is a bad take. (In general, for anyone getting started, unless you're just burning budget, renting cloud GPU is going to be the best cost/perf, although on-prem/local obviously has other advantages.)
[1] https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3A...
[2] https://github.com/microsoft/DeepSpeed/issues/1580
[3] https://github.com/TimDettmers/bitsandbytes/issues/485
[4] https://github.com/AUTOMATIC1111/stable-diffusion-webui/disc...
[5] https://forums.macrumors.com/threads/ai-generated-art-stable...
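On the PyTorch point above, here is the hedged MPS sanity check mentioned. It only verifies that the Metal backend is built and usable on the machine, not that any particular library's ops are supported:

```python
# Check whether PyTorch's Metal (MPS) backend is usable on this machine,
# falling back to CPU otherwise. Uses only stable torch.backends.mps APIs.
import torch

use_mps = torch.backends.mps.is_built() and torch.backends.mps.is_available()
device = torch.device("mps" if use_mps else "cpu")

# Even with MPS available, individual ops may be unimplemented; setting
# PYTORCH_ENABLE_MPS_FALLBACK=1 routes those ops to the CPU instead.
x = torch.randn(4, 4, device=device)
print(f"running on {device}: {x.sum().item():.4f}")
```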
-
Microsoft Research proposes a new framework, LongMem, allowing for unlimited context length along with reduced GPU memory usage and faster inference. Code will be open-sourced
And https://github.com/microsoft/deepspeed
-
April 2023
DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales (https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat)
What are some alternatives?
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
ColossalAI - Making large AI models cheaper, faster and more accessible
chatgpt-shell - ChatGPT and DALL-E Emacs shells + Org babel 🦄 + a shell maker for other providers
Megatron-LM - Ongoing research training transformer models at scale
IncognitoPilot - An AI code interpreter for sensitive data, powered by GPT-4 or Code Llama / Llama 2.
fairscale - PyTorch extensions for high performance and large scale training.
Selefra - The open-source policy-as-code software that provides analysis for Multi-Cloud and SaaS environments, you can get insight with natural language (powered by OpenAI).
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
JARVIS - JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
telegram-chatgpt-concierge-bot - Interact with OpenAI's ChatGPT via Telegram and Voice.
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.