ROCm vs Cgml
| | ROCm | Cgml |
|---|---|---|
| Mentions | 11 | 22 |
| Stars | 21 | 40 |
| Growth | - | - |
| Activity | 10.0 | 8.6 |
| Latest commit | over 3 years ago | 4 months ago |
| Language | HTML | C++ |
| License | - | GNU Lesser General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ROCm
- ROCm 6.1.0
-
AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
ROCm is not spelled out anywhere in their documentation, and the best answers in a search come from GitHub rather than official AMD documents
"Radeon Open Compute Platform"
https://github.com/ROCm/ROCm/issues/1628
And they wonder why they are losing. Branding absolutely matters.
-
AMD Instinct MI300X Accelerators
https://github.com/ROCm/ROCm/issues/1353
Bought in 2020. Stopped working in 2020. Not the latest, but in-production, advertised ROCm-capable, and what I could find during the Great GPU Shortage of 2020.
-
AMD leaps after launching AI chip that could challenge Nvidia dominance
Maybe so. But it isn't confidence inspiring when I go to see which cards are supported and I see this issue:
https://github.com/ROCm/ROCm/issues/1714
With Nvidia cards, I know that if I buy any Nvidia card made in the last 10 years, CUDA code will run on it. Period. (Yes, different language levels require newer hardware, but Nvidia docs are quite clear about which CUDA versions require which silicon.)
The will-they-won't-they and the rapidly dropped support are hurting the otherwise excellent ROCm and HIP projects. There is a huge API surface to implement, and it looks like they're making rapid gains.
-
GCN2, GCN3: What is the Technical, Non-Business Reason for Limited Support in Linux (OpenSYCL/HIP/ROCm)? [Exasperated client]
Like, there is: https://github.com/ROCm/ROCm.github.io/blob/master/hardware.md but I'm pretty sure that's very very outdated, maybe from 4.x?
-
AMD’s Best GPU has some problems — Radeon RX 7900XTX VR Performance Review
Fair enough, I'll give you that. Although it is listed as officially supported here, other documentation says it works but is not officially supported.
-
Finally, ROCm packages in [community]!
Do you have a source? The 580 and several older cards are listed as officially supported here, and even some 2xx/3xx cards are listed as unofficially supported.
-
[D] What’s the word on AMD gpus these days?
Some of the GPUs listed in your link are for consumers. For a more extensive list, see https://github.com/ROCm/ROCm.github.io/blob/master/hardware.md
-
Told an AI to generate Linux. Looks about right
Very conveniently, your linked page (and the pages linked therein) do not talk about which GPUs actually support ROCm. This is probably because AMD's newest cards do not support ROCm in any way, and I would guess they don't want the sales impact this missing feature could cause. Please do evaluate yourself, here: https://github.com/ROCm/ROCm.github.io/blob/master/hardware.md
Cgml
-
Asynchronous Programming in C#
> Meant no offense
None taken.
> computervison project in c#
Yeah, for CV applications nuget.org is indeed not particularly great. Very few people are using C# for these things; people typically choose something else, like Python with OpenCV.
BTW, the same applies to ML libraries; most folks are using the Python/Torch/CUDA stack. For that hobby project, https://github.com/Const-me/Cgml/, I had to re-implement the entire tech stack in C#/C++/HLSL.
-
Groq CEO: 'We No Longer Sell Hardware'
> If there is a future with this idea, its gotta be just shipping the LLM with game right?
That might be a nice application for this library of mine: https://github.com/Const-me/Cgml/
That’s an open-source Mistral ML model implementation which runs on GPUs (all of them, not just nVidia), takes 4.5 GB on disk, uses under 6 GB of VRAM, and is optimized for an interactive single-user use case. It's probably fast enough for that application.
You wouldn’t want in-game dialogues with the original model though. Game developers would need to finetune, retrain and/or do something else with these weights and/or my implementation.
-
Ask HN: How to get started with local language models?
If you just want to run Mistral on Windows, you could try my port: https://github.com/Const-me/Cgml/tree/master/Mistral/Mistral...
The setup is relatively easy: install the .NET runtime, download the 4.5 GB model file over BitTorrent, unpack a small ZIP file, and run the EXE.
-
OpenAI postmortem – Unexpected responses from ChatGPT
Speaking of random sampling during inference, most ML implementations do it rather inefficiently.
Here’s a better way: https://github.com/Const-me/Cgml/blob/master/Readme.md#rando...
My HLSL is easily portable to CUDA, which has `__syncthreads` and `atomicInc` intrinsics.
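To make that concrete, here is a minimal CUDA sketch of the general idea, not Cgml's actual HLSL: one CPU-generated uniform random number is passed to the GPU, and a single thread block picks the sampled token index directly from the probability vector already sitting in VRAM. The kernel name, the 256-thread launch shape, and the use of `atomicMin` (instead of `atomicInc`) are illustrative assumptions.

```cuda
#include <cuda_runtime.h>

// Hypothetical kernel, for illustration only. Launch as a single block:
//   sampleToken<<<1, 256>>>( probs, n, u, result );
// `probs` holds the (unnormalized, non-negative) token probabilities,
// `u` is one uniform random number in [0, 1) generated on the CPU,
// and `result` receives the sampled token index.
__global__ void sampleToken( const float* probs, int n, float u, int* result )
{
    constexpr int THREADS = 256;
    __shared__ float chunkOffset[ THREADS ]; // exclusive prefix sums of per-thread slices
    __shared__ float total;                  // sum of the whole distribution

    const int tid = threadIdx.x;
    const int chunk = ( n + THREADS - 1 ) / THREADS;
    const int begin = min( tid * chunk, n );
    const int end = min( begin + chunk, n );

    // Pass 1: every thread sums its contiguous slice of the distribution.
    float s = 0.0f;
    for( int i = begin; i < end; i++ )
        s += probs[ i ];
    chunkOffset[ tid ] = s;
    if( tid == 0 )
        *result = n - 1; // fallback in case of floating-point rounding
    __syncthreads();

    // Exclusive scan over the 256 partial sums; serial for brevity,
    // a real kernel would use a parallel scan here.
    if( tid == 0 )
    {
        float acc = 0.0f;
        for( int i = 0; i < THREADS; i++ )
        {
            const float tmp = chunkOffset[ i ];
            chunkOffset[ i ] = acc;
            acc += tmp;
        }
        total = acc;
    }
    __syncthreads();

    // Pass 2: the sampled index is the first position whose inclusive prefix
    // sum reaches u * total; atomicMin keeps the smallest such index.
    const float threshold = u * total;
    float acc = chunkOffset[ tid ];
    for( int i = begin; i < end; i++ )
    {
        acc += probs[ i ];
        if( acc >= threshold )
        {
            atomicMin( result, i );
            break;
        }
    }
}
```

With this shape, only the 4-byte random number goes up to the GPU and the 4-byte token index comes back, so the full distribution never has to leave VRAM.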
- Nvidia's Chat with RTX is a promising AI chatbot that runs locally on your PC
-
AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I did that a few times with Direct3D 11 compute shaders. Here’s an open-source example: https://github.com/Const-me/Cgml
Pretty sure Vulkan is going to work equally well; at the very least, there’s the open-source DXVK project, which implements D3D11 on top of Vulkan.
-
Brave Leo now uses Mixtral 8x7B as default
Here’s an example of a custom 4 bits/weight codec for ML weights:
https://github.com/Const-me/Cgml/blob/master/Readme.md#bcml1...
llama.cpp does it slightly differently, but still, AFAIK their quantized data formats are conceptually similar to my codec.
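To show what such codecs boil down to, here is a rough CUDA sketch of a generic block-wise 4-bit scheme in the spirit of llama.cpp's Q4-style formats. The 32-weight blocks, the single float scale per block, and the symmetric [-7, 7] mapping are illustrative assumptions, not the actual BCML1 layout described in the linked readme.

```cuda
#include <cstdint>
#include <cmath>
#include <algorithm>

// Illustrative block layout: 32 weights per block, one float scale,
// weights mapped symmetrically to integers in [-7, 7] and packed two per byte.
// This is NOT the BCML1 format, just the general shape of such codecs.
struct Block4
{
    float scale;     // dequantization multiplier for the block
    uint8_t q[ 16 ]; // 32 quantized weights, 4 bits each
};

// CPU-side encoder: compresses one block of 32 float weights into 20 bytes.
inline Block4 encodeBlock( const float* w )
{
    float amax = 0.0f;
    for( int i = 0; i < 32; i++ )
        amax = std::max( amax, std::fabs( w[ i ] ) );

    Block4 b;
    b.scale = amax / 7.0f;
    const float inv = ( b.scale > 0.0f ) ? 1.0f / b.scale : 0.0f;
    for( int i = 0; i < 16; i++ )
    {
        // Quantize a pair of weights to [-7, 7], bias by +8 so they fit in 4 bits.
        const int lo = (int)std::lround( w[ i * 2 ] * inv ) + 8;
        const int hi = (int)std::lround( w[ i * 2 + 1 ] * inv ) + 8;
        b.q[ i ] = (uint8_t)( lo | ( hi << 4 ) );
    }
    return b;
}

// GPU-side decoder: expands blocks back into floats on the fly,
// one thread per compressed block.
__global__ void decodeBlocks( const Block4* blocks, float* out, int numBlocks )
{
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if( i >= numBlocks )
        return;

    const Block4 b = blocks[ i ];
    float* dst = out + i * 32;
    for( int j = 0; j < 16; j++ )
    {
        dst[ j * 2 ]     = (float)( (int)( b.q[ j ] & 0x0F ) - 8 ) * b.scale;
        dst[ j * 2 + 1 ] = (float)( (int)( b.q[ j ] >> 4 )   - 8 ) * b.scale;
    }
}
```

In this sketch a block of 32 weights takes 20 bytes, i.e. 5 bits/weight; real formats spend their overhead bits differently, but the common idea is to keep only the packed blocks in VRAM and expand them on the fly right before the matrix multiplications.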
- Efficient LLM inference solution on Intel GPU
-
Vcc – The Vulkan Clang Compiler
> the API was high-friction due to the shader language, and the glue between shader and CPU
Direct3D 11 compute shaders share these things with Vulkan, yet D3D11 is relatively easy to use. For example, see this library, which implements ML-targeted compute shaders for C# with minimal friction: https://github.com/Const-me/Cgml The backend, implemented in C++, is rather simple: it just binds resources and dispatches these shaders.
I think the main usability issue with Vulkan is API design. Vulkan was designed only with AAA game engines in mind. The developers of these game engines have borderline unlimited budgets, and their requirements are very different from those of ordinary folks who want to leverage GPU hardware.
-
I made an app that runs Mistral 7B 0.2 LLM locally on iPhone Pros
Minor update: https://github.com/Const-me/Cgml/releases/tag/1.1a Can’t edit that comment anymore; too late.
What are some alternatives?
rocm-arch - A collection of Arch Linux PKGBUILDS for the ROCm platform
PowerInfer - High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
ROCR-Runtime - ROCm Platform Runtime: ROCr, an HSA-based runtime enhanced for the HPC market
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
deep-daze - Simple command line tool for text to image generation using OpenAI's CLIP and Siren (Implicit neural representation network). Technique was originally created by https://twitter.com/advadnoun
mlx - MLX: An array framework for Apple silicon
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
EmotiVoice - EmotiVoice 😊: a Multi-Voice and Prompt-Controlled TTS Engine
stable-diffusion-webui - Stable Diffusion web UI
llamafile - Distribute and run LLMs with a single file.
ZLUDA - CUDA on AMD GPUs
clspv - Clspv is a compiler for OpenCL C to Vulkan compute shaders