-
(Speaking as an interested observer, not a ublue developer...) In principle, it should be pretty easy to do this by adding a few libraries to an existing nvidia image. However, I'm not sure you'd want to take this approach. Installing your ML libraries into an image makes sense if you know they won't change: you'll always be using the same libraries. But the ML field is changing incredibly quickly, meaning there are likely new libraries you'll find yourself wanting to add. At the same time, if you're trying to run someone else's code that might be a couple of years old or more, you could find yourself wanting an older version of the libraries. So you really want a greater degree of flexibility than an immutable system would afford. I would suggest taking one of two approaches.
Personally I use approach 2. I'm currently running Fedora (not immutable) but doing all my dev work in Ubuntu-based containers. That said, you could imagine setting up a UBlue image to support conducting ML research & development in containers. I may actually do something like that. I currently have a setup with the Cinnamon DE where I can move around a workspace grid, and different workspaces are associated with running the terminal in different containers. It's still pretty new, but I think it's going to be a cool way of doing work going forward.
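To make approach 2 a bit more concrete, here's roughly what that kind of container setup can look like. This is only a sketch: the container name, base image, and library versions are placeholders, distrobox is just one convenient way to do it (plain podman works too), and the `--nvidia` flag assumes the host already has the NVIDIA driver installed.

```bash
# Rough sketch of "approach 2": an Ubuntu-based dev container on a Fedora
# host. Names, image tags, and versions below are placeholders.
distrobox create --name ml-dev --image docker.io/library/ubuntu:22.04 --nvidia
distrobox enter ml-dev

# Inside the container, install whatever versions a given project needs
# without touching the host, e.g. pip and a CUDA build of PyTorch:
sudo apt-get update && sudo apt-get install -y python3-pip
pip3 install torch --index-url https://download.pytorch.org/whl/cu121
```

The point is that a two-year-old project can get its own container with the old library versions it expects, while newer work lives in a different one.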
-
Even if all we did was start collecting premade YAML configs so people can just podman-deploy stuff, it'd probably be a useful companion to the nvidia images, and we'd be able to figure out what makes sense on an image depending on how people use it!
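For instance (purely illustrative, not a tested config), one of those premade files could just be a small pod spec that `podman kube play` consumes. The image, names, and host path below are placeholders, and GPU passthrough is left out since that needs extra host-side setup (nvidia-container-toolkit / CDI):

```bash
# Hypothetical example of a premade config: write a pod spec and hand it
# to `podman kube play`. Image, names, and host path are placeholders.
cat > pytorch-dev.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pytorch-dev
spec:
  containers:
    - name: pytorch
      image: docker.io/pytorch/pytorch:latest
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: workspace
          mountPath: /workspace
  volumes:
    - name: workspace
      hostPath:
        path: /var/home/someuser/ml-workspace
EOF

podman kube play pytorch-dev.yaml
# podman names the resulting container <pod>-<container>:
podman exec -it pytorch-dev-pytorch python -c "import torch; print(torch.__version__)"
```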
-
Just thinking aloud a bit more since I'm not an expert in AI/ML. I was doing searches for some images to see what's out there:
And there are probably piles from other vendors. I'm wondering if we could grab a popular stack and do a "hello world" that would be useful for folks; maybe the play should be curating known-working stacks on top of the images we have?
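As a sketch of what one of those curated "hello world" snippets might look like, assuming the host already has the NVIDIA driver plus nvidia-container-toolkit with a CDI spec generated, and using the Docker Hub PyTorch image as a stand-in for whichever stack we'd pick:

```bash
# Hypothetical "hello world" for a curated stack: pull a vendor image and
# check that the GPU is actually visible from inside it. Assumes the host
# has the NVIDIA driver and a CDI spec generated by nvidia-ctk.
podman run --rm --device nvidia.com/gpu=all \
  docker.io/pytorch/pytorch:latest \
  python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
```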
-
Hey all, I'm curious if there's been any discussion around an ML/AI spin of UBlue. The Nvidia base includes the needed libs for ML/AI workloads, so it would just be a matter of using a lighter desktop (Budgie would be a great option) plus CUDA/PyTorch and possibly Jupyter.
I don't have a ROCm GPU to test what would be needed for AMD, but I'd be more than happy to start on an Nvidia version :)
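Just to sketch what a starting point could look like (the base image name, tag, and package picks are assumptions on my part, and swapping the desktop to Budgie would be its own piece of work), something along these lines might be a first experiment:

```bash
# Hypothetical starting point for an ML/AI spin: layer Python tooling onto
# an existing ublue nvidia image. Base image, tag, and packages are
# assumptions, not a published ML spin.
cat > Containerfile <<'EOF'
FROM ghcr.io/ublue-os/silverblue-nvidia:latest

# Layer lightweight tooling only; whether the heavy libraries (PyTorch,
# Jupyter) belong in the image or in per-project containers/virtualenvs is
# an open question from the discussion above.
RUN rpm-ostree install python3-pip python3-virtualenv && \
    ostree container commit
EOF

podman build -t localhost/ublue-ml:test -f Containerfile .
```

Whether the heavy libraries live in the image itself or in per-project containers is probably the first thing to decide.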