• cbarrick@lemmy.world · 3 months ago

      Unfortunately, those of us doing scientific compute don’t have a real alternative.

      ROCm just isn’t as widely supported as CUDA, and neither is Vulkan for GPGPU use cases.

      AMD dropped the ball on GPGPU, and Nvidia is eating their lunch. Linux desktop users be damned.

      • TropicalDingdong@lemmy.world · 3 months ago

        Yep, yep, and yep.

        And they’ve been eating AMD’s lunch for so long at this point that I’ve given up on that changing.

        The new world is built on CUDA, and that’s just the way it is. I don’t really want an Nvidia card; Radeon seems far better for price to performance. But I can justify an Nvidia card for work.

        I can’t justify a Radeon for work.

        • cbarrick@lemmy.world · 3 months ago

          Long term, I expect Vulkan to be the replacement for CUDA. ROCm isn’t going anywhere…

          We just need fundamental Vulkan libraries to be developed that can replace the CUDA equivalents.

          • cuFFT -> vkFFT (this definitely exists)
          • cuBLAS -> vkBLAS (is anyone working on this?)
          • cuDNN -> vkDNN (this definitely doesn’t exist)

          At that point, adding Vulkan support to XLA (JAX and TensorFlow) or ATen (PyTorch) wouldn’t be that difficult.
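          To make the first arrow concrete, here is a minimal sketch of the cuFFT host calls that a library like vkFFT has to stand in for: plan, execute, tear down. It uses the real cuFFT/CUDA runtime API; the build line, the file name, and the in-place transform are just illustrative assumptions.

          ```c
          /* Minimal cuFFT sketch: the kind of surface a vkFFT-style library must cover.
           * Build (assuming the CUDA toolkit is on the include/library path):
           *   gcc fft_sketch.c -lcufft -lcudart
           */
          #include <stdio.h>
          #include <cuda_runtime.h>
          #include <cufft.h>

          #define N 1024

          int main(void) {
              /* Allocate a complex signal of N points on the GPU and zero it. */
              cufftComplex *data;
              cudaMalloc((void **)&data, sizeof(cufftComplex) * N);
              cudaMemset(data, 0, sizeof(cufftComplex) * N);

              /* Plan a 1D complex-to-complex FFT and run it in place. */
              cufftHandle plan;
              cufftPlan1d(&plan, N, CUFFT_C2C, 1);
              cufftExecC2C(plan, data, data, CUFFT_FORWARD);
              cudaDeviceSynchronize();

              /* Tear down. */
              cufftDestroy(plan);
              cudaFree(data);
              printf("forward FFT of %d points done\n", N);
              return 0;
          }
          ```

          A Vulkan replacement has to offer the same plan/execute/teardown surface (plus the device and memory setup that the CUDA runtime hides), which is what would make the framework-level work mostly a matter of routing ATen/XLA ops to those calls.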

          • DarkenLM@kbin.social · 3 months ago

            “wouldn’t be that difficult.”

            The number of times I’ve said that, only to be quickly proven wrong by the fundamental forces of existence, is the reason that’s going to be written on my tombstone.

          • TropicalDingdong@lemmy.world · 3 months ago

            I think it’s just path stickiness at this point. CUDA works, and then you can ignore its existence and do the thing you actually care about. ML in the pre-CUDA days was painful. CUDA makes it not painful. Asking people to return to painful…

            Good luck…

      • urbanxs@lemmy.ml · 3 months ago

        I find it eerily odd how AMD seems to almost intentionally stay out of Nvidia’s way in terms of CUDA and a couple of other things. I don’t wish to speculate, but considering how AI is having a blowout while AMD is basically not even trying, it feels as if the Nvidia CEO being cousins with AMD’s CEO has something to do with it. Maybe I’m reading too much into it, but there’s something going on. Why would AMD leave so much money on the table?

    • EccTM@lemmy.ml · 3 months ago

      That’s great.

      I’d still like my Nvidia card to work so I’m happy about this, and when AMD on Linux eventually starts swapping over to explicit sync, I’ll be happy for those users then too.

        • DumbAceDragon@sh.itjust.works · 3 months ago

          Cool. It should still use explicit sync, though, if for nothing else than the parallelization improvements it allows.

          If we stuck with the “it works fine, so I’m not moving away from it” approach, then we’d all still be on X11. Nvidia sucks and they should be more of a team player, but I think they were right to push for explicit sync over implicit. We should’ve been doing this from the beginning on Wayland.
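          For anyone wondering what “explicit” actually buys: with implicit sync the driver infers ordering from which buffers a job touches, while with explicit sync the producer hands the consumer a concrete sync point to wait on. Below is a minimal host-only sketch using a Vulkan 1.2 timeline semaphore, the same style of primitive the Wayland explicit-sync work builds on; picking the first GPU and queue family 0 are simplifying assumptions, and a real compositor path goes through per-buffer sync points rather than host signals.

          ```c
          /* Host-only sketch of explicit synchronization with a Vulkan 1.2
           * timeline semaphore. Build (assuming the Vulkan loader is installed):
           *   gcc explicit_sync_sketch.c -lvulkan
           */
          #include <stdint.h>
          #include <stdio.h>
          #include <vulkan/vulkan.h>

          int main(void) {
              /* Minimal instance targeting Vulkan 1.2 (timeline semaphores are core). */
              VkApplicationInfo app = {
                  .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                  .apiVersion = VK_API_VERSION_1_2,
              };
              VkInstanceCreateInfo instance_info = {
                  .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                  .pApplicationInfo = &app,
              };
              VkInstance instance;
              if (vkCreateInstance(&instance_info, NULL, &instance) != VK_SUCCESS) return 1;

              /* Take the first physical device (assumption: one GPU is enough here). */
              uint32_t count = 1;
              VkPhysicalDevice phys;
              vkEnumeratePhysicalDevices(instance, &count, &phys);
              if (count == 0) return 1;

              /* Logical device with the timelineSemaphore feature enabled. */
              float priority = 1.0f;
              VkDeviceQueueCreateInfo queue_info = {
                  .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
                  .queueFamilyIndex = 0, /* assumption: family 0 suffices */
                  .queueCount = 1,
                  .pQueuePriorities = &priority,
              };
              VkPhysicalDeviceVulkan12Features features12 = {
                  .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_2_FEATURES,
                  .timelineSemaphore = VK_TRUE,
              };
              VkDeviceCreateInfo device_info = {
                  .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
                  .pNext = &features12,
                  .queueCreateInfoCount = 1,
                  .pQueueCreateInfos = &queue_info,
              };
              VkDevice device;
              if (vkCreateDevice(phys, &device_info, NULL, &device) != VK_SUCCESS) return 1;

              /* A timeline semaphore is an explicit, monotonically increasing counter. */
              VkSemaphoreTypeCreateInfo sem_type = {
                  .sType = VK_STRUCTURE_TYPE_SEMAPHORE_TYPE_CREATE_INFO,
                  .semaphoreType = VK_SEMAPHORE_TYPE_TIMELINE,
                  .initialValue = 0,
              };
              VkSemaphoreCreateInfo sem_info = {
                  .sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
                  .pNext = &sem_type,
              };
              VkSemaphore sem;
              vkCreateSemaphore(device, &sem_info, NULL, &sem);

              /* Producer side: explicitly signal "work up to point 1 is finished". */
              VkSemaphoreSignalInfo signal_info = {
                  .sType = VK_STRUCTURE_TYPE_SEMAPHORE_SIGNAL_INFO,
                  .semaphore = sem,
                  .value = 1,
              };
              vkSignalSemaphore(device, &signal_info);

              /* Consumer side: explicitly wait for that point, instead of relying
               * on the driver to infer ordering from buffer usage (implicit sync). */
              uint64_t wait_value = 1;
              VkSemaphoreWaitInfo wait_info = {
                  .sType = VK_STRUCTURE_TYPE_SEMAPHORE_WAIT_INFO,
                  .semaphoreCount = 1,
                  .pSemaphores = &sem,
                  .pValues = &wait_value,
              };
              vkWaitSemaphores(device, &wait_info, UINT64_MAX);
              printf("explicit sync point reached\n");

              vkDestroySemaphore(device, sem, NULL);
              vkDestroyDevice(device, NULL);
              vkDestroyInstance(instance, NULL);
              return 0;
          }
          ```

          The parallelization win comes from the same idea applied per buffer: producer and consumer coordinate on named sync points instead of serializing on the driver’s implicit buffer tracking.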