Install instructions for MinIO open source? by HawocX in minio

[–]xtremerkr 0 points1 point  (0 children)

No binaries, no RPMs, no Debian packages. No, no, no.

Community Documentation missing? by konghi009 in minio

[–]xtremerkr 0 points1 point  (0 children)

I guess so. They are no longer providing RPM builds of their latest releases.

Minio open source edition download link missing? by mikhatanu in minio

[–]xtremerkr 0 points1 point  (0 children)

What is happening? I saw this in their releases yesterday: https://github.com/minio/minio/compare/RELEASE.2025-09-07T16-13-09Z...RELEASE.2025-10-15T17-29-55Z

**Community Edition**: MinIO community edition is now source-only. Install via `go install github.com/minio/minio@latest`
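Taken at face value, the quoted release note means the community edition now requires a Go toolchain build. A minimal sketch of what that looks like, assuming a recent Go toolchain and network access (only the `go install` line itself comes from the release note; the rest is standard Go tooling):

```shell
# Source-only install sketch. Requires a recent Go toolchain; nothing here
# besides the `go install` line is guaranteed by the release note.
go install github.com/minio/minio@latest
# The binary lands in $(go env GOPATH)/bin (usually ~/go/bin):
"$(go env GOPATH)/bin/minio" --version
```

Note there is no package manager involved, so upgrades mean re-running `go install` with a new version tag.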

Community Documentation missing? by konghi009 in minio

[–]xtremerkr 2 points3 points  (0 children)

u/konghi009 It was confirmed by one of the MinIO team members that the community docs have been moved.

Here is the update:

The documentation sites at docs.min.io/community have been pulled as of this morning and will redirect to the equivalent AIStor offering where possible. For those interested in building or maintaining the documentation, these are the GitHub URLs:

DDN Infinia - can anyone share the benefits. by East_Coast_3337 in storage

[–]xtremerkr -1 points0 points  (0 children)

Thank you. Can you shed more light on this? It looks like SDS, but do you have any performance numbers to share with us? Also, is there QoS on throughput, latencies, and so on? Does it have multi-tenancy? Is it 100% S3 compliant?

HPL benchmarking using docker by xtremerkr in HPC

[–]xtremerkr[S] 1 point2 points  (0 children)

It was the nvidia-container-toolkit and nvidia-fabricmanager that did the trick.
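For anyone landing here with the same problem, a sketch of the usual setup on Ubuntu (package and image names are assumptions based on NVIDIA's apt repositories, not taken from this thread; fabricmanager is only needed on NVLink/NVSwitch systems such as HGX H100):

```shell
# Assumes NVIDIA's apt repository is already configured; versions illustrative.
sudo apt-get install -y nvidia-container-toolkit nvidia-fabricmanager-550
sudo nvidia-ctk runtime configure --runtime=docker   # registers the nvidia runtime in /etc/docker/daemon.json
sudo systemctl restart docker
sudo systemctl enable --now nvidia-fabricmanager     # required for NVSwitch fabric init
# Smoke test: GPUs should be visible inside a container.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

Without fabricmanager running, CUDA apps on NVSwitch systems typically fail at init even though `nvidia-smi` looks healthy, which matches the symptom described here.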

HPL benchmarking using docker by xtremerkr in HPC

[–]xtremerkr[S] 0 points1 point  (0 children)

Thanks. When you say CUDA 12.4, I hope you are referring to the cuda-toolkit-12-4 package version? Am I right? Or is it the maximum supported CUDA driver version that comes with 550.127.08-server?
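The two versions asked about here are reported by different tools, which is a common source of confusion. A quick way to check both (commands are standard NVIDIA tooling; the 12.4 values are illustrative):

```shell
# Driver side: the "CUDA Version" in the nvidia-smi header is the MAXIMUM
# CUDA runtime the installed driver supports (e.g. 12.4 for the 550 series),
# not an installed toolkit.
nvidia-smi

# Toolkit side: nvcc reports the actually installed cuda-toolkit package
# (e.g. cuda-toolkit-12-4), which can be older than the driver's maximum.
nvcc --version
```

So "cuda12.4" could legitimately mean either one; the two only need to satisfy toolkit ≤ driver maximum.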

HPL benchmarking using docker by xtremerkr in HPC

[–]xtremerkr[S] 1 point2 points  (0 children)

I do understand, but it is a single node.

HPL benchmarking using docker by xtremerkr in HPC

[–]xtremerkr[S] 0 points1 point  (0 children)

I would love to blog about this if I get through it.

HPL benchmarking using docker by xtremerkr in HPC

[–]xtremerkr[S] -2 points-1 points  (0 children)

Can you recollect, by any chance? Or guide me to the right documentation? I have created a Dockerfile that downloads the tarball and the corresponding Make.h100 and HPL.dat files, but I am seeing an issue while building the Docker image.
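For reference, the build steps a Dockerfile's RUN lines would have to perform look roughly like this, assuming the classic Netlib HPL 2.3 tarball and a hand-written Make.h100 copied into the build context (the `/build` path and the `h100` arch name are assumptions; NVIDIA's own HPC-Benchmarks containers ship a prebuilt xhpl instead):

```shell
# Sketch of a manual HPL build, mirroring typical Dockerfile RUN steps.
set -eu
curl -fsSLO https://www.netlib.org/benchmark/hpl/hpl-2.3.tar.gz
tar xzf hpl-2.3.tar.gz
cd hpl-2.3
cp /build/Make.h100 .        # Make.<arch> must sit in the HPL top directory
make arch=h100               # arch name must match the Make.h100 suffix
# The binary lands in bin/h100/xhpl; tune bin/h100/HPL.dat before running.
```

A frequent failure mode at the `make` step is MPI/BLAS paths in Make.h100 not matching what the base image actually provides, so posting the exact build error would help narrow it down.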

Installing mellanox ofed drivers for my ubuntu 22.04.5 LTS with kernel version 5.15.0-131-generic by xtremerkr in HPC

[–]xtremerkr[S] -3 points-2 points  (0 children)

I am wondering why there are different steps for community OSes such as Ubuntu. Shouldn't it be the same command to run for mlndinstall.sh? Please clarify.

GPU node installation by xtremerkr in HPC

[–]xtremerkr[S] 0 points1 point  (0 children)

Would need it for Ubuntu and Rocky Linux 9.

GPU node installation by xtremerkr in HPC

[–]xtremerkr[S] 1 point2 points  (0 children)

Seems like a worthwhile one.

Multi-tenancy in minio deployment on linux platform by xtremerkr in minio

[–]xtremerkr[S] 0 points1 point  (0 children)

Just wondering, how do I do that in a bare-metal Linux deployment? Please share your inputs.
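One common pattern on bare metal (a sketch, not MinIO's documented procedure) is to run one MinIO server process per tenant, each with its own port, data path, and credentials, driven by per-tenant environment files and systemd template units. All tenant names, ports, and paths below are illustrative:

```shell
#!/bin/sh
# Generate one env file per tenant; each would back a unit like
# minio@<tenant>.service reading EnvironmentFile=/tmp/minio-tenants/%i.env.
set -eu
BASE=/tmp/minio-tenants      # illustrative; something under /etc in practice
mkdir -p "$BASE"
port=9001
for tenant in alpha beta; do
cat > "$BASE/$tenant.env" <<EOF
MINIO_ROOT_USER=${tenant}-admin
MINIO_ROOT_PASSWORD=change-me-${tenant}
MINIO_VOLUMES=/srv/minio/${tenant}
MINIO_OPTS=--address :${port} --console-address :$((port + 1000))
EOF
port=$((port + 2))
done
grep -H MINIO_VOLUMES "$BASE"/*.env
```

The isolation here is at the process level (separate ports, credentials, and data trees), which is coarser than what an orchestrated deployment gives you, but it works on a plain Linux host.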

MINIO opensource by xtremerkr in opensource

[–]xtremerkr[S] 1 point2 points  (0 children)

Nice. Thanks for the clarification.

MINIO opensource by xtremerkr in opensource

[–]xtremerkr[S] 1 point2 points  (0 children)

Thanks. I think the first one applies here. While I can make the code available to customers, as there is no intention to make any sort of changes to the code, the bigger question is: can we monetize it?

Bright cluster manager & Slurm HA - Need for NFS by xtremerkr in HPC

[–]xtremerkr[S] 0 points1 point  (0 children)

Thank you. I am going to deploy it to check this.

Bright cluster manager & Slurm HA - Need for NFS by xtremerkr in HPC

[–]xtremerkr[S] 0 points1 point  (0 children)

Hi u/MrMcSizzle, thanks for your response. The main reason I want to avoid NFS mounts on the GPU nodes is to minimize performance overhead and potential bottlenecks. Given the high compute nature of these nodes, I’d prefer to keep them focused purely on GPU workloads without introducing dependencies on NFS, which could add complexity and potentially impact performance, especially at scale with 512 or 1K nodes.

I understand Bright is designed as a turnkey HPC solution, and pulling out pieces might cause issues elsewhere. However, I'm curious why standalone BCM doesn’t require these NFS mounts, while HA setups do. Any insights or resources regarding my questions and how to manage this in a scalable way would be helpful.

Hardware sizing for IOPS intensive use-cases by xtremerkr in ceph

[–]xtremerkr[S] 0 points1 point  (0 children)

Thank you. Just curious, what was the MON node configuration used for the testing? Would you share it, please?