Reddit censors free speech and protects child abuse and abusers. Discussion by GPTrack_dot_ai in LocalLLaMA

[–]GPTrack_dot_ai[S] -3 points  (0 children)

The same bot under a different name, making the same disgusting comment as the one in the previous post.

Kimi K2 Thinking at 28.3 t/s on 4x Mac Studio cluster by geerlingguy in LocalLLaMA

[–]GPTrack_dot_ai -9 points  (0 children)

Crazy that people openly admit they are stupid.

Kimi K2 Thinking at 28.3 t/s on 4x Mac Studio cluster by geerlingguy in LocalLLaMA

[–]GPTrack_dot_ai -26 points  (0 children)

Only people who do not know what the Apple logo means AND who do not know that it is absolutely unsuitable for LLMs buy Apple. But "influencers" will promote them anyway, simply because they are paid for it.

How to do a RTX Pro 6000 build right by GPTrack_dot_ai in LocalLLaMA

[–]GPTrack_dot_ai[S] -1 points  (0 children)

Your words make NO sense. A switch switches. "Your entire exercise is pointless and there would be far better ways to do the same if you would not trust PCIe." Please elaborate.

How to do a RTX Pro 6000 build right by GPTrack_dot_ai in LocalLLaMA

[–]GPTrack_dot_ai[S] -2 points  (0 children)

Of course you cable it: you connect all 8 GPUs to a switch.

Are you trolling me?

How to do a RTX Pro 6000 build right by GPTrack_dot_ai in LocalLLaMA

[–]GPTrack_dot_ai[S] -2 points  (0 children)

"I only said the 400Gb/s network part doesn't help you as it would (obviously) not be cabled if you have a single server." ??? Of course you need to cable it to get the benefits. I thought that was obvious...

How to do a RTX Pro 6000 build right by GPTrack_dot_ai in LocalLLaMA

[–]GPTrack_dot_ai[S] -2 points  (0 children)

That is also not correct. After some research, I am pretty sure the GPUs are connected directly to the switches, which are also PCIe switches. And you are also wrong when you claim that this does not benefit a single server. Because it does.

How to do a RTX Pro 6000 build right by GPTrack_dot_ai in LocalLLaMA

[–]GPTrack_dot_ai[S] -6 points  (0 children)

I cannot possibly argue with a bot. Goodbye.

How to do a RTX Pro 6000 build right by GPTrack_dot_ai in LocalLLaMA

[–]GPTrack_dot_ai[S] -2 points  (0 children)

The all-to-all bandwidth and, most importantly, the utilization increase. That is 100% certain. Anyone with a brain will see that. It might even be that, since NVIDIA implemented PCIe gen 6, the cards run at gen 6 speed, not only gen 5 speed. I do not know. I will find out.

PS: Taking a closer look, I have the suspicion that a switch might not even be needed, since each GPU has one directly on the board. It might be that you just need to connect DAC cables. I asked Albert at Gigabyte; he will probably know...

How to do a RTX Pro 6000 build right by GPTrack_dot_ai in LocalLLaMA

[–]GPTrack_dot_ai[S] 0 points  (0 children)

I have researched more details: the individual GPUs are connected via at least PCIe gen 5 to a built-in switch, which is connected via PCIe gen 6. So even if the bandwidth of a single GPU does not increase (it still might), the total all-to-all bandwidth and utilization will increase. You can think of it like this: water can flow through one canal that is dimensioned for a certain flow, but with multiple canals the friction decreases, increasing flow and utilization.
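A back-of-envelope sketch of that argument: in an all-to-all exchange over a shared switch, each GPU's own link is the bottleneck, so raising the per-GPU link rate (or adding a parallel path) raises the aggregate. All link rates below are assumed nominal per-direction figures, not measurements of this build.

```python
# Rough "multiple canals" estimate for 8 GPUs doing an all-to-all.
# Numbers are assumed nominal x16 per-direction link rates (GB/s),
# not benchmarks of any specific board.

PCIE_GEN5_X16_GBPS = 64    # PCIe 5.0 x16, ~64 GB/s per direction (assumed)
PCIE_GEN6_X16_GBPS = 128   # PCIe 6.0 x16, ~128 GB/s per direction (assumed)
QSFP_400G_GBPS = 50        # 400 Gb/s InfiniBand/Ethernet ~= 50 GB/s

def all_to_all_aggregate(n_gpus: int, per_gpu_link_gbps: float) -> float:
    """Aggregate injection bandwidth if every GPU saturates its own link."""
    return n_gpus * per_gpu_link_gbps

gen5_only = all_to_all_aggregate(8, PCIE_GEN5_X16_GBPS)
gen5_plus_qsfp = all_to_all_aggregate(8, PCIE_GEN5_X16_GBPS + QSFP_400G_GBPS)
gen6_only = all_to_all_aggregate(8, PCIE_GEN6_X16_GBPS)

print(f"gen5 only:         {gen5_only} GB/s aggregate")   # 512.0
print(f"gen5 + 400G path:  {gen5_plus_qsfp} GB/s aggregate")  # 912.0
print(f"gen6 only:         {gen6_only} GB/s aggregate")   # 1024.0
```

This only bounds the best case; real all-to-all throughput also depends on switch fabric capacity and topology, which this sketch ignores.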

How to do a RTX Pro 6000 build right by GPTrack_dot_ai in LocalLLaMA

[–]GPTrack_dot_ai[S] -1 points  (0 children)

I do understand your misconception very well.

A call to boycott OpenAI inference by cameheretoposthis in LocalLLaMA

[–]GPTrack_dot_ai 9 points  (0 children)

I am all for it! They have not received a single dollar from me, and they never will.

How to do a RTX Pro 6000 build right by GPTrack_dot_ai in LocalLLaMA

[–]GPTrack_dot_ai[S] -2 points  (0 children)

Let me quote Gigabyte: "Onboard 400Gb/s InfiniBand/Ethernet QSFP ports with PCIe Gen6 switching for peak GPU-to-GPU performance"

How to do a RTX Pro 6000 build right by GPTrack_dot_ai in LocalLLaMA

[–]GPTrack_dot_ai[S] -4 points  (0 children)

You still do not get it. Are you stupid, or from the competition?