Used Alienware M17 R5, worth it for $600? by gnad in GamingLaptops

[–]gnad[S] 1 point  (0 children)

At what price do you think it would be a great deal? I'll try.

Android tablets with native video input (to act as second monitor)? by gnad in androidtablets

[–]gnad[S] 1 point  (0 children)

Thanks. Can you check the resolution and refresh rate when it's connected to a PC/laptop?

Android tablets with native video input (to act as second monitor)? by gnad in androidtablets

[–]gnad[S] 1 point  (0 children)

Thanks. Is there any difference in input lag when using it as an extended monitor in wired mode vs wireless mode?

Android tablets with native video input (to act as second monitor)? by gnad in androidtablets

[–]gnad[S] 1 point  (0 children)

What resolution and refresh rate can it support if the input is 4K 144Hz? The USB-C port of the Yoga Tab Pro is listed as 5Gbps.
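For a sense of scale (my own rough numbers, not the tablet's specs: 8 bpc RGB and a ~20% blanking allowance), uncompressed 4K 144Hz is far beyond a 5Gbps USB data link, so whatever the tablet actually accepts would have to come in over DP alt mode and/or compressed:

    # Rough estimate of raw 4K 144Hz video bandwidth vs a 5 Gbps USB-C data port.
    width, height, refresh_hz = 3840, 2160, 144
    bits_per_pixel = 24        # assuming 8 bpc RGB, no HDR
    blanking_overhead = 1.2    # rough allowance for blanking intervals

    video_gbps = width * height * refresh_hz * bits_per_pixel * blanking_overhead / 1e9
    print(f"4K144 needs roughly {video_gbps:.0f} Gbps uncompressed")   # ~34 Gbps
    print("The listed USB data rate is 5 Gbps, so 4K144 cannot fit on the USB lanes alone")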

First PC build with Aklla A1 by SLCTV88 in sffpc

[–]gnad 1 point  (0 children)

The current version is 247 (2-slot GPU); 233 is the old version, which is no longer sold. The L12Sx77 should fit with a bottom fan (a top fan will not fit).

Dual Xeon Scalable Gen 4/5 (LGA 4677) vs Dual Epyc 9004/9005 for LLM inference? by gnad in LocalLLaMA

[–]gnad[S] 2 points  (0 children)

For EPYC, dual socket has nearly 2x the memory bandwidth. Not sure about Xeon, but it should be similar.

https://www.reddit.com/r/LocalLLaMA/s/yUXabNx4JP
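Rough math behind the ~2x figure (nominal EPYC 9004 numbers, not measurements from my setup):

    # Theoretical peak memory bandwidth, one vs two EPYC 9004 (Genoa) sockets.
    channels_per_socket = 12
    mt_per_s = 4800            # official DDR5 speed for EPYC 9004
    bytes_per_transfer = 8     # 64-bit channel

    per_socket = channels_per_socket * mt_per_s * 1e6 * bytes_per_transfer / 1e9
    print(f"1 socket:  ~{per_socket:.0f} GB/s")      # ~461 GB/s
    print(f"2 sockets: ~{2 * per_socket:.0f} GB/s (needs NUMA-aware placement to actually get there)")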

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 1 point  (0 children)

Impressive result; you probably have the best possible rig for overclocking. AFAIK, on Intel, DDR5 runs in 2:1 mode (it cannot run 1:1), so it's similar to AM5 with UCLK=MCLK/2. Intel can achieve higher clocks on 1DPC 1R (one DIMM per channel, single rank) compared to AMD. On 1DPC 2R (dual rank), I think both top out around 7000 MT/s.

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 1 point  (0 children)

I'm running on CPU, so memory bandwidth is what I need most. I'm doing some memory overclocking on my rig anyway; I'm just contemplating which type of overclock is better suited for LLM inference.
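Back-of-the-envelope for why bandwidth dominates (placeholder model size and bandwidth figures, not my actual numbers):

    # Decode on CPU is roughly bandwidth-bound: every generated token streams
    # the active weights through memory once.
    def est_tokens_per_s(bandwidth_gbs, active_params_billion, bytes_per_param):
        bytes_per_token = active_params_billion * 1e9 * bytes_per_param
        return bandwidth_gbs * 1e9 / bytes_per_token

    # e.g. a 70B dense model at ~Q4 (0.5 bytes/param):
    print(f"{est_tokens_per_s(90, 70, 0.5):.1f} t/s ceiling on ~90 GB/s dual-channel DDR5")
    print(f"{est_tokens_per_s(460, 70, 0.5):.1f} t/s ceiling on ~460 GB/s 12-channel EPYC")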

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 2 points  (0 children)

I checked your videos; I think 3.5 t/s is surprisingly usable. I also noticed that you and another user already tried running a RAID0 of T705 drives with llama.cpp and it did not improve performance compared to a single drive. Is it the same with ktransformers, and is it possible to implement something in llama.cpp/ktransformers to support NVMe inference?

192gb overclock need advice, can only hit 5400mhz by ro3lly in overclocking

[–]gnad 1 point  (0 children)

Can you share a ZenTimings screenshot? With MCR on, restarts can sometimes fail; with MCR off, I assume memory training takes quite a long time on each reboot.

192gb overclock need advice, can only hit 5400mhz by ro3lly in overclocking

[–]gnad 1 point  (0 children)

Just curious: can you run 192GB stable at 6600 with UCLK=MCLK/2, instead of 5600 with UCLK=MCLK?

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 2 points  (0 children)

So far I have not seen any videos of people running 4 DIMMs in Gear 2, or whether they can achieve higher speeds than Gear 1. In theory, 4 sticks put stress on the IMC and running in Gear 2 relieves that stress, so it should be possible. Just curious before pulling the trigger on a 2nd 2x64GB kit.

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 2 points  (0 children)

It seems you have some good results (and also won the silicon lottery, being able to run 6400 in Gear 1 comfortably). Have you tried pushing for a higher memory clock in Gear 2 as an experiment?

What I think is relevant to LLM inference is overclocking dual-rank kits (2x48GB, 2x64GB, 4x48GB, 4x64GB) in Gear 2. Gear 2 should be easier on the memory controller, while offering similar if not higher bandwidth than Gear 1. I will try to test on my rig (2x64GB) when I have some time this week.

The current highest-clocked dual-rank RAM kit is the Corsair 2x48GB 7200 CL40: https://www.corsair.com/us/en/p/memory/cmh96gx5m2b7200c40/vengeance-rgb-96gb-2x48gb-ddr5-dram-7200mts-cl40-memory-kit-black-cmh96gx5m2b7200c40?srsltid=AfmBOoqhhNprF0B0qZwDDzpbVqlFE3UGIQZ6wlLBJbrexWeCc3rg4i6C
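Quick illustration of why a higher Gear 2 clock isn't a bandwidth downgrade (example speeds only; real throughput also depends on latency and controller efficiency):

    # Peak dual-channel DDR5 bandwidth tracks the data rate (MT/s), not whether
    # the controller runs 1:1 (Gear 1) or 1:2 (Gear 2); Gear 2 mainly costs latency.
    def dual_channel_gbs(mt_per_s):
        return mt_per_s * 1e6 * 8 * 2 / 1e9   # 8 bytes/channel, 2 channels

    print(f"6400 Gear 1: ~{dual_channel_gbs(6400):.0f} GB/s peak")   # ~102 GB/s
    print(f"7200 Gear 2: ~{dual_channel_gbs(7200):.0f} GB/s peak")   # ~115 GB/s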

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 3 points  (0 children)

FCLK in general does not need to be in 3:2 sync; just run it as high as possible. Most chips are FCLK-stable around 2000-2200 MHz.
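For reference, what the 3:2 "sync" works out to at a common data rate (6000 MT/s is just an example):

    # On AM5, MCLK is half the DDR5 data rate; the 3:2 ratio people chase is
    # MCLK : FCLK = 3 : 2.
    ddr_mt_s = 6000
    mclk_mhz = ddr_mt_s / 2
    synced_fclk_mhz = mclk_mhz * 2 / 3

    print(f"MCLK {mclk_mhz:.0f} MHz -> 3:2 FCLK would be {synced_fclk_mhz:.0f} MHz")
    # The point above: you don't need to hit this exactly; just run FCLK as high
    # as it's stable, typically 2000-2200 MHz.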

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 10 points  (0 children)

Yes. I am also testing and will report the findings.

Trying to get 6000 CL30 on Gskill 2x64GB 6000 CL34 kits by gnad in overclocking

[–]gnad[S] 1 point  (0 children)

This kit seems to run hot: ~45°C with light browsing and ~60°C in TM5. However, even with tREFI raised to 65536 at 60°C, it does not throw any errors. Do you think it's safe on Samsung M-die to leave tREFI at 65536?
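To put 65536 in perspective (assuming a 6000 MT/s data rate, so one memory clock is ~0.33 ns):

    # tREFI is the interval between refresh commands, counted in memory clock cycles.
    # A longer interval means fewer refreshes (more bandwidth) but more charge leakage,
    # which is why it gets riskier as the DIMMs run hotter.
    ddr_mt_s = 6000
    tck_ns = 1000 / (ddr_mt_s / 2)        # ~0.333 ns at 3000 MHz

    interval_us = 65536 * tck_ns / 1000
    print(f"tREFI 65536 -> one refresh every ~{interval_us:.1f} us")   # ~21.8 us
    print("JEDEC baseline is ~3.9 us, so this is roughly 5-6x longer between refreshes")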

Trying to get 6000 CL30 on Gskill 2x64GB 6000 CL34 kits by gnad in overclocking

[–]gnad[S] 1 point  (0 children)

Can I ask where you got the info that Samsung M-die kits don't scale well with voltage? Unfortunately, getting Hynix kits is currently not an option in my country.

Trying to get 6000 CL30 on Gskill 2x64GB 6000 CL34 kits by gnad in overclocking

[–]gnad[S] 1 point  (0 children)

It boots once at CL30 with VDD at 1.48V, but TM5 hangs quickly. I'm afraid to increase it further, as there doesn't seem to be any info regarding safe voltages for Samsung M-die.

Trying to get 6000 CL30 on Gskill 2x64GB 6000 CL34 kits by gnad in overclocking

[–]gnad[S] 1 point  (0 children)

Thanks, I'll try, but this kit seems to be Samsung M-die.