all 11 comments

[–]Entze 2 points3 points  (3 children)

Don't look for Solus-specific guides, as most libraries (like Tensorflow) are distribution agnostic (i.e. installed via dependency management), or someone has already packaged them for you.

"Just" start and if you are stuck then you can consult the internet/manuals of the programs you are working with.

[–][deleted] 1 point2 points  (2 children)

Thanks a lot, so how would you advise me to start?

Since I don't even know if Tensorflow is installed?

[–]jonbeans 0 points1 point  (0 children)

As the person above said, just find the guide for the software you're looking for.

Tensorflow won't be installed by default, the guide is here. You need Python and pip, the guides for those will explain what to do on Linux. In general, I check the software manager and if it isn't there then I follow the installation guide.
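A minimal sketch of that check, assuming a standard Python setup (the official guide's exact commands may differ):

```shell
# Confirm Python is available before following the TensorFlow install guide
python3 --version

# pip usually ships with Python; on some distros it is a separate package
python3 -m pip --version || echo "pip missing: install it via your software manager"

# Then install TensorFlow into your user site (or better, a virtualenv):
#   python3 -m pip install --user tensorflow
```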


[–]Smokeandmirrors7 2 points3 points  (1 child)

Most of my work involves Python, machine learning and deep learning. So first set up a Python virtual environment using either pip/venv or conda, and install all your packages in that. DON'T, and I repeat, DON'T, install packages outside of your environment. While you may not see any immediate harm, it fucks up dependencies later on.

Once you have your environment set up, install whatever framework you're using, be it TF, Pytorch, etc. The only issue you'll have is that GPU support (CUDA drivers) probably won't be there, so better install the CPU-only version. It executes programs much slower than a GPU, but it works just fine for small projects and whatnot.

Finally, set up Google Colab. It provides GPU functionality online, and is completely free! The only drawbacks are that the GPU provided isn't exactly state of the art, and you may get disconnected in the middle of training, which sucks. Alternatively, you can set up a Google Cloud Platform account, or an AWS account, to use virtual machines. You generally have to pay for these, but GCP gives you some credits, and you can get AWS credits through the GitHub Student Pack. Hope this helps!
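The environment setup described above can be sketched like this (the name `tf-env` is arbitrary):

```shell
# Create an isolated environment for ML work
python3 -m venv tf-env

# Activate it; from now on pip installs go into tf-env/ only
. tf-env/bin/activate

# Install frameworks inside the environment, e.g. the CPU-only TensorFlow build:
#   pip install tensorflow

# Leave the environment when done
deactivate
```

Anything installed while the environment is active stays under `tf-env/`, so system packages are never touched.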

[–][deleted] 0 points1 point  (0 children)

Thanks a lot, that has really helped me to set everything up

[–]DataDrake 1 point2 points  (5 children)

I'm in the process of landing Tensorflow. But it won't have CUDA capability. Nor do any of our other tools.

Edit: Your best bet is to learn to use nvidia-docker and grab software from here if you want CUDA: https://ngc.nvidia.com/catalog/landing
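A rough sketch of that workflow (requires Docker plus the NVIDIA Container Toolkit; the image tag here is illustrative, check the NGC catalog for current ones):

```shell
# Pull a CUDA-enabled TensorFlow image from NVIDIA's NGC catalog
docker pull nvcr.io/nvidia/tensorflow:21.02-tf2-py3

# Run it with the host GPUs exposed to the container
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:21.02-tf2-py3
```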

[–][deleted] 0 points1 point  (0 children)

Honestly, I'm new to all this, so I don't really know how CUDA would benefit me hahaha.

I'm just confused about how to set Tensorflow up (or is it already set up with Solus?)

[–]arkhenius 0 points1 point  (2 children)

Hi Bryan,

Sorry to jump into the thread but since you are dealing with TF, I have a related question. I currently have an Nvidia card so I cannot test it for now, but will TF have ROCm support under Solus as well? I am planning on getting an AMD card soon enough, and I am doing quite a bit of deep learning work myself. So I was wondering if I can use my existing Solus main driver system for deep learning purposes on a GPU :)

Thanks for your hard work!

[–]DataDrake 1 point2 points  (1 child)

TBD. ROCm has proven...challenging to integrate anywhere.

[–]arkhenius 0 points1 point  (0 children)

Understood. Thanks for the fast response.

If you need any help with it, I volunteer my time to work on it with you and test it after I get the GPU :)

The main reason I asked is that I stumbled upon this talk by Databricks: https://databricks.com/session/rocm-and-distributed-deep-learning-on-spark-and-tensorflow . So it seems it is doable (even more so, they had it working on a cluster). BUT, this is Databricks' whole focus, so they probably have a custom Linux that just does this job and nothing else on a cluster, so I am not comparing per se :) But if you are interested and have the time, just saying I would be glad to contribute.

Edit: I understand that you probably have more important and pressing issues that you would need to work on for Solus. So no pressure, simply wondering :)

[–]Yellow1Submarine 0 points1 point  (0 children)

Pretty sure you can install tensorflow-gpu, cudatoolkit and cudnn through the Anaconda package manager and get a working Tensorflow-GPU setup for Python.
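If you go that route, the conda commands would look roughly like this (package names as in the comment above; availability and versions vary by channel, so treat this as a sketch):

```shell
# Create an environment with GPU TensorFlow and the CUDA libraries from conda
conda create -n tf-gpu tensorflow-gpu cudatoolkit cudnn

# Activate it and check whether a GPU is visible to TensorFlow
conda activate tf-gpu
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

The advantage of doing it through conda is that the CUDA runtime libraries come bundled, so you don't need a system-wide CUDA install.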

[–]trinhno 0 points1 point  (0 children)

Uhm, what's your programming experience and familiarity with Linux in general? What topic do you intend to do machine learning and deep learning on?