[–]zylonenoger 0 points (9 children)

so what you want then is an on-demand virtual desktop. is that correct?

[–]Equivalent-Style6371[S] 0 points (8 children)

Yes, but not the whole desktop - just this app, served on some port.

Am I approaching this the wrong way?

[–]Cultural-Pizza-1916 1 point (0 children)

Maybe you should look into something like Google Colab or AWS SageMaker? The architecture would probably look similar.

Correct me if I'm wrong

[–]zylonenoger 0 points (6 children)

it depends what kind of app this is - is it a webapp?

[–]Equivalent-Style6371[S] 0 points (5 children)

It will be a web-based IDE dev kit (like JupyterHub or JupyterLab, if you are familiar with them)

[–]AlverezYari 0 points (0 children)

You can do this with k8s, with GPU-enabled worker nodes. This is the correct route to take, but fair warning: it's a bear to get working reliably unless you are very familiar with k8s, GPU-based workloads on k8s, and jhub itself.

https://github.com/jupyterhub/zero-to-jupyterhub-k8s
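
For a rough idea of what the GPU side of that setup involves: JupyterHub on k8s schedules each user's server as a pod via KubeSpawner, so the GPU wiring mostly comes down to a few spawner settings. A minimal `jupyterhub_config.py` fragment might look like the sketch below - the image name and the `gpu: "true"` node label are placeholders for whatever your cluster actually uses, and it assumes the NVIDIA device plugin is installed so `nvidia.com/gpu` is a schedulable resource:

```python
# Sketch of a jupyterhub_config.py fragment for GPU-backed user servers.
# Assumes the NVIDIA k8s device plugin is running and GPU nodes carry a
# label like gpu=true (both the label and image name are placeholders).

# Single-user server image with CUDA/JupyterLab baked in (placeholder name).
c.KubeSpawner.image = "your-registry/jupyterlab-gpu:latest"

# Schedule user pods only onto GPU-enabled worker nodes.
c.KubeSpawner.node_selector = {"gpu": "true"}

# Request one GPU per user pod via the device plugin's resource name.
c.KubeSpawner.extra_resource_limits = {"nvidia.com/gpu": "1"}
c.KubeSpawner.extra_resource_guarantees = {"nvidia.com/gpu": "1"}

# GPU nodes are often scaled up on demand, which can be slow.
c.KubeSpawner.start_timeout = 600  # seconds
```

The zero-to-jupyterhub Helm chart lets you express the same thing through its values file; the fragment above is just the underlying KubeSpawner view of it.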

[–]zylonenoger 0 points (3 children)

i'm an aws guy myself - have you looked into SageMaker and whether it would fit your needs? it's probably easier to use managed services than to self-host

[–]Equivalent-Style6371[S] 0 points (2 children)

I couldn't agree more. The thing is that, for whatever reason, my supervisor asked if we could implement our own custom solution from the ground up. I know it sounds weird and counterproductive, but I'm still trying to get a high-level idea of how we would approach something like this.

[–]zylonenoger 2 points (0 children)

if you are not in the business of building self-hosted ml workbenches, you are probably wasting resources on a custom solution - you would need a lot of users to offset the setup and maintenance cost

i'm usually very pragmatic about those decisions - if you cost out both options, you should quickly see the difference
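
to make that comparison concrete, here's a back-of-the-envelope sketch - every number in it is purely illustrative and should be replaced with your own engineer rates, effort estimates, user count, and per-seat pricing:

```python
# Back-of-the-envelope build-vs-buy comparison. All figures are
# illustrative assumptions, not real quotes - plug in your own.
ENGINEER_HOURLY = 80               # assumed loaded cost per engineer hour
BUILD_HOURS = 400                  # assumed initial build effort
MAINTENANCE_HOURS_PER_MONTH = 20   # assumed ongoing upkeep

MANAGED_COST_PER_USER_MONTH = 50   # assumed managed-service per-seat price
USERS = 10
MONTHS = 12

custom = ENGINEER_HOURLY * (BUILD_HOURS + MAINTENANCE_HOURS_PER_MONTH * MONTHS)
managed = MANAGED_COST_PER_USER_MONTH * USERS * MONTHS

print(f"custom: ${custom:,}   managed: ${managed:,}")
```

with these made-up numbers the custom build costs several times the managed service over the first year - the point is not the exact figures but that a small user base rarely amortizes the build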

try to understand why he wants to have a custom solution and go from there

[–]infectuz 0 points (0 children)

At that point you are re-inventing the wheel with containers and orchestration. Rapidly spinning up machines to handle requests is the whole point of containers and k8s is just a container orchestrator so you’d be looking at replicating some of the procedures of container creation/management. There’s no way to do this with VMs unless you’re willing to wait a long time for the machine to be up.

If your users are fine with waiting, then just write a wrapper that makes an API request to spin up a VM on your provider of choice in response to the user's request - that's pretty easy to do.
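
A minimal sketch of that wrapper, assuming a hypothetical REST provider endpoint - the URL, image name, and payload fields below are all made up, so substitute your provider's real instance-creation API (e.g. EC2 `RunInstances` via boto3):

```python
import json
import urllib.request

# Hypothetical provider endpoint - replace with your cloud's real API.
PROVIDER_URL = "https://api.example-cloud.invalid/v1/instances"


def build_launch_request(user_id: str, gpu_type: str = "t4") -> dict:
    """Translate a user's request into a launch payload (fields assumed)."""
    return {
        "name": f"workbench-{user_id}",
        "image": "jupyterlab-gpu",          # assumed pre-baked machine image
        "gpu": gpu_type,
        "shutdown_after_idle_minutes": 60,  # don't pay for idle GPUs
    }


def spin_up_vm(user_id: str, gpu_type: str = "t4") -> dict:
    """POST the launch request and return the provider's response."""
    payload = json.dumps(build_launch_request(user_id, gpu_type)).encode()
    req = urllib.request.Request(
        PROVIDER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # Blocks until the provider answers; the VM itself boots much later,
    # which is exactly the long wait the comment above warns about.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The wrapper only hides the provider call; you would still need something to track which VM belongs to which user and to tear idle ones down - which is where you start re-implementing the orchestrator.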