Open-source project ZLUDA lets CUDA apps run on AMD GPUs

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • ZLUDA

    CUDA on AMD GPUs

  • It has supported AMD GPUs for about three weeks now; see the latest commits in the repo:

    https://github.com/vosen/ZLUDA

    The article also mentions exactly this fact.

  • ZLUDA

    CUDA on AMD GPUs (by lshqqytiger)

  • > it won't ever be a viable option

    For production workloads, I generally agree. It's an unsupported hack with a questionable future; I wouldn't run anything money-making on it.

    However, for tinkering and consumer workloads, it already works pretty well. Enough of cuDNN and cuBLAS works to run PyTorch and, in turn, Stable Diffusion with https://github.com/lshqqytiger/ZLUDA; there's even a fairly user-friendly setup process in https://github.com/vladmandic/automatic.

    I was able to get a personal non-ML project working on my AMD card in just a few minutes, which saved me a lot of development time before I then deployed the production workload on NVIDIA hardware. (This is probably why AMD pulled the plug on the project: it's almost more of a boost to NVIDIA than anything else, and AMD really needs people writing code on ROCm to deploy on AMD datacenter hardware.)

  • automatic

    SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models


  • amdmiscompile

    AMD OpenCL miscompilation bug

  • This confirms what everyone who has ever touched AMD GPGPUs knows: the only thing holding AMD back from becoming a $2 trillion company is its absolutely atrocious software. I remember finding a bug in their OpenCL compiler [1], and crashing that compiler with a segfault was also a piece of cake (that one was never fixed; I gave up on reporting it).

    AMD not developing a competitor to CUDA was the most short-sighted thing I have ever seen. I have no idea why their board hasn't been sacked and replaced with people who understand that you can make the best hardware out there, but if the software needed to use it is, to put it very mildly, atrocious, nobody is going to buy or use it.

    We, the customers, are left to buy overpriced NVIDIA cards because AMD's board is too rich to give a damn about the trillion or so dollars of value left on the table. Just... weird. I hope whoever owns AMD stock is asking questions, because that board needs to go down the nearest drain.

    [1] https://github.com/msoos/amdmiscompile -- they eventually fixed this

  • HIP

    HIP: C++ Heterogeneous-Compute Interface for Portability

  • Is it perhaps because they want people to use HIP?

    > HIP is very thin and has little or no performance impact over coding directly in CUDA mode.

    > The HIPIFY tools automatically convert source from CUDA to HIP.

    1. https://github.com/ROCm/HIP
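
    As a sketch of how thin the HIP layer is, here is a minimal SAXPY kernel in CUDA alongside what a HIPIFY-style conversion changes. The kernel and names below are illustrative, not taken from the HIP repo; the point is that the device code is untouched and only the host API prefix is renamed:

    ```cuda
    // CUDA original: device kernel plus a triple-chevron launch.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    void run(int n, float a, float *dx, float *dy) {
        saxpy<<<(n + 255) / 256, 256>>>(n, a, dx, dy);
    }

    // After conversion (e.g. with hipify-perl), the kernel source is
    // unchanged; only the host-side API names are rewritten:
    //   #include <cuda_runtime.h>  ->  #include <hip/hip_runtime.h>
    //   cudaMalloc(...)           ->  hipMalloc(...)
    //   cudaMemcpy(...)           ->  hipMemcpy(...)
    // HIP also accepts the <<<grid, block>>> launch syntax as-is.
    ```

    Because the mapping is mostly a one-to-one rename, the resulting HIP code can be compiled for either vendor, which is the portability argument HIP makes.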


Related posts

  • HIP CPU

    1 project | /r/ROCm | 18 Feb 2023
  • HIP CPU

    1 project | /r/Amd | 18 Feb 2023
  • AMD HIP + Cuda in same program

    3 projects | /r/CUDA | 26 Aug 2022
  • AMD publishes GPUFORT as Open Source to address CUDA’s dominance

    2 projects | /r/Amd | 8 Oct 2021
  • Test Coverage with CUDA

    1 project | /r/gpgpu | 26 Aug 2021