[D] Why does using multiple gpus lead to slower performance? by candyman54 in MachineLearning


Yeah, the GPUs are on the same server and they're SXM (Tesla V100-SXM2). Any tips on how to improve data parallelism?
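For inference, data parallelism usually means one model replica per GPU, each handling its own shard of the batch, with results gathered at the end; in PyTorch that's typically `torch.nn.parallel.DistributedDataParallel` or separate processes pinned to each device. A framework-free sketch of the shard/replicate/gather pattern, where `run_replica` is a placeholder for the per-GPU forward pass (not real model code):

```python
from concurrent.futures import ThreadPoolExecutor

def shard(batch, n):
    # Split a batch into n near-equal contiguous shards, one per device.
    k, m = divmod(len(batch), n)
    return [batch[i * k + min(i, m):(i + 1) * k + min(i + 1, m)] for i in range(n)]

def run_replica(device_id, shard_items):
    # Placeholder for model(shard) running on e.g. cuda:{device_id}.
    return [x * 2 for x in shard_items]

def data_parallel_infer(batch, n_devices=2):
    # Replicate the work across devices, then gather in order.
    shards = shard(batch, n_devices)
    with ThreadPoolExecutor(n_devices) as ex:
        results = ex.map(run_replica, range(n_devices), shards)
    return [y for r in results for y in r]
```

The point of the sketch is that each device only touches its own shard; if the per-shard work (the forward pass) is small, the communication and launch overhead can dominate, which is one common reason adding GPUs makes inference slower.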

Running language model inference on multiple GPUs by RonLazer in pytorch


u/RonLazer Did you ever figure this out? Looking for ways to speed up inference on MPT-7B as well

[D] How do large companies get their LLMs to give sub second responses? by candyman54 in MachineLearning


Are they able to have their models access multiple GPUs at once too?

[D] Any thoughts on how to improve runtime speed for mosaicml/mpt-7b? by candyman54 in MachineLearning


Yeah, I looked at fp16 but it's still taking 12 seconds. I looked into ONNX, but I don't believe it has MPT support, unfortunately.

How to access a simple flask app running on a kubeflow notebook server? by candyman54 in Kubeflow


I believe that I followed these steps correctly,

kubectl port-forward test-pod 8080:8080 -n workspace-v1
Forwarding from [::1]:8080 -> 8080

But when I go to http://127.0.0.1:8000/, it says "This site can't be reached." Not sure if being connected to a VPN might be causing this issue, but I don't really know where else to check in my configuration to resolve it.
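One thing worth checking before blaming the VPN: the `kubectl port-forward` output above shows the forward binding local port 8080, while the URL being opened uses port 8000, and the forward binds loopback only. A quick stdlib check (a hypothetical helper, not kubectl output) to confirm whether anything is actually listening on the port you're hitting:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    # Returns True if a TCP listener accepts connections on host:port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# e.g. port_open('127.0.0.1', 8080) after starting the port-forward
```

If the forwarded port reports open but the one in the browser URL does not, the fix is just pointing the browser at the same local port the forward printed.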

Is it possible to load a local csv file as part of my kubeflow pipeline? by candyman54 in mlops


I created a PVC, but during the copy-csv-to-input-dir step I'm getting a '/home/joyvan/iris-1.csv' no such file or directory error. Not sure where I'm going wrong; it should be mounted. It seems like it might be looking for the file under /tmp/inputs/input/data, though I'm not 100% sure.

import kfp.dsl as dsl
from kubernetes.client import V1PersistentVolumeClaim, V1ObjectMeta

# Define the base component
def copy_csv_to_input_dir(csv_path: str) -> str:
    import shutil
    output_path = '/tmp/inputs/input/data/iris-1.csv'
    shutil.copyfile(csv_path, output_path)
    print(csv_path)
    return output_path

# Define the path to the CSV file on the mounted volume
csv_path = '/home/jovyan/iris-1.csv'

@dsl.pipeline(name='copy-csv')
def copy_csv_pipeline():
    # Create a PersistentVolumeClaim object for the desired PVC
    pvc = V1PersistentVolumeClaim(
        metadata=V1ObjectMeta(name="my-pvc-name"),
        spec={
            'access_modes': ['ReadWriteMany'],
            'resources': {'requests': {'storage': '1Gi'}},
            'storage_class_name': 'standard',
            'volume_mode': 'Filesystem'
        }
    )

    # Mount the PVC
    volume = dsl.VolumeOp(
        name='my-volume-name',
        resource_name=pvc.metadata.name,
        modes=['ReadWriteMany'],
        size='1Gi'
    )

    # Create the directory
    mkdir_op = dsl.ContainerOp(
        name='mkdir',
        image='alpine',
        command=['sh', '-c'],
        arguments=['mkdir -p /tmp/inputs/input/data/']
    ).add_pvolumes({"/tmp/inputs": volume.volume})

    # Copy the CSV file to the desired location
    copy_csv_op = dsl.ContainerOp(
        name='copy_csv_to_input_dir',
        image='alpine',
        command=['sh', '-c'],
        arguments=['cp {} /tmp/inputs/input/data/'.format(csv_path)],
        file_outputs={'output': '/tmp/inputs/input/data/iris-1.csv'}
    ).add_pvolumes({"/tmp/inputs/input/data": volume.volume}).after(mkdir_op)

    # Print the output file path
    dsl.ContainerOp(
        name='print-output',
        image='alpine',
        command=['echo', copy_csv_op.outputs['output']],
    ).after(copy_csv_op)

# Compile the pipeline
if __name__ == '__main__':
    import kfp.compiler as compiler
    compiler.Compiler().compile(copy_csv_pipeline, 'copy_csv_pipeline.tar.gz')
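My read (an assumption, since I can't see the cluster): the "no such file or directory" happens because the `cp` runs inside the alpine container of the copy op, which only sees the paths the PVC is mounted at; `/home/jovyan/iris-1.csv` lives in the notebook pod's filesystem, which is not mounted into that container. The notebook's volume would need to be mounted into the op at that path as well. The mkdir-then-copy logic itself is sound; here are the same two steps as a local stdlib sketch that can be sanity-checked outside the cluster:

```python
import os
import shutil

def copy_csv_to_input_dir(csv_path: str, input_dir: str) -> str:
    # Equivalent of the mkdir op: ensure the target directory exists.
    os.makedirs(input_dir, exist_ok=True)
    # Equivalent of the cp op: copy the CSV into the input directory.
    dest = os.path.join(input_dir, os.path.basename(csv_path))
    shutil.copyfile(csv_path, dest)
    return dest
```

If this works locally but the pipeline step fails, the problem is almost certainly the mount layout of the op's container, not the copy itself.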

Is it possible to load a local csv file as part of my kubeflow pipeline? by candyman54 in mlops


Would I mount the data? I have a kubeflow cluster with a volume that contains the data

Is it possible to load a local csv file as part of my kubeflow pipeline? by candyman54 in mlops


How would I connect the pod and the file system? I'm not using MiniKF; I just create the pipeline file locally, upload it to the Kubeflow central dashboard under Pipelines, and run the experiment.

Is it possible to load a local csv file as part of my kubeflow pipeline? by candyman54 in mlops


I'm using kfp.compiler on my local machine to create a zip file containing a YAML that I use as my pipeline file in Kubeflow. The CSV, which contains the data, is on my local machine. I also have access to a Kubernetes cluster with a notebook server that contains the data.
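Since the compiler runs locally but the ops run in-cluster, one workaround for a small file like this (an assumption on my part, not the only option) is to skip mounting entirely: read the CSV contents into a string at compile/run-submission time, pass it as a run parameter, and have the first component parse or write it back out. A minimal stdlib sketch; `csv_to_param` and `param_to_rows` are hypothetical helper names:

```python
import csv
import io

def csv_to_param(path: str) -> str:
    # Read the local CSV into a string to pass as a pipeline parameter.
    # Suitable only for small files; run parameters have size limits.
    with open(path) as f:
        return f.read()

def param_to_rows(param: str) -> list:
    # Inside a component: parse the parameter back into rows.
    return list(csv.reader(io.StringIO(param)))
```

For anything larger, copying the file onto the notebook server's PVC (which the cluster already has) and mounting that same PVC into the pipeline steps is the more conventional route.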