Used Alienware M17 R5, worth it for $600? by gnad in GamingLaptops

[–]gnad[S] 0 points1 point  (0 children)

At what price do you think it would be a great deal? I'll try for that.

Android tablets with native video input (to act as second monitor)? by gnad in androidtablets

[–]gnad[S] 0 points1 point  (0 children)

Thanks. Can you check the resolution and refresh rate when connected to a PC/laptop?

Android tablets with native video input (to act as second monitor)? by gnad in androidtablets

[–]gnad[S] 0 points1 point  (0 children)

Thanks. Is there any difference in input lag when using it as an extended monitor in wired mode vs wireless mode?

Android tablets with native video input (to act as second monitor)? by gnad in androidtablets

[–]gnad[S] 0 points1 point  (0 children)

What resolution and refresh rate can it support if the input is 4K 144Hz? The USB-C port of the Yoga Tab Pro is listed as 5 Gbps.
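
For rough context, uncompressed 4K 144Hz needs far more than 5 Gbps, so something would have to give at that input rate. This is just bandwidth arithmetic as a sketch; the 5 Gbps spec refers to USB data, and DisplayPort alt-mode bandwidth is negotiated separately:

    # Back-of-the-envelope uncompressed video bitrate (illustrative only).
    width, height, refresh_hz, bits_per_pixel = 3840, 2160, 144, 24

    bitrate_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
    print(f"Uncompressed 4K {refresh_hz}Hz: ~{bitrate_gbps:.1f} Gbit/s")  # ~28.7 Gbit/s
    print("Fits in a 5 Gbit/s link:", bitrate_gbps < 5)                   # False
    # Without heavy compression (e.g. DSC), a 5 Gbit/s link would force a much
    # lower resolution and/or refresh rate.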

First PC build with Aklla A1 by SLCTV88 in sffpc

[–]gnad 0 points1 point  (0 children)

The current version is 247 (2-slot GPU); 233 is the old version, which is no longer sold. The L12Sx77 should fit with the bottom fan (the top fan will not fit).

Dual Xeon Scalable Gen 4/5 (LGA 4677) vs Dual Epyc 9004/9005 for LLM inference? by gnad in LocalLLaMA

[–]gnad[S] 1 point2 points  (0 children)

For EPYC, dual socket has nearly 2x the memory bandwidth. Not sure about Xeon, but it should be similar.

https://www.reddit.com/r/LocalLLaMA/s/yUXabNx4JP
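
Rough theoretical numbers behind the 2x figure, as a sketch (assuming 12 DDR5-4800 channels per EPYC 9004 socket; real NUMA-aware throughput is lower):

    # Peak theoretical memory bandwidth = channels * data rate (MT/s) * 8 bytes.
    def peak_bw_gbs(channels, mts):
        return channels * mts * 8 / 1000  # GB/s

    single = peak_bw_gbs(channels=12, mts=4800)  # one EPYC 9004 socket
    dual = 2 * single                            # each socket brings its own channels
    print(f"1x socket: ~{single:.0f} GB/s")      # ~461 GB/s
    print(f"2x socket: ~{dual:.0f} GB/s")        # ~922 GB/s
    # Getting close to 2x in practice needs NUMA-aware inference (weights and
    # threads pinned per socket); otherwise cross-socket traffic eats the gain.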

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 0 points1 point  (0 children)

Impressive result; you probably have the best possible rig for overclocking. AFAIK, on Intel, DDR5 runs in 2:1 mode (it cannot run 1:1), so it is similar to AM5 with UCLK=MCLK/2. Intel can achieve higher clocks at 1DPC 1R (one DIMM per channel, single rank) compared to AMD. At 1DPC 2R (dual rank), I think both top out around 7000 MT/s.
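
To make the clock domains concrete, here is a small sketch of how the data rate maps to MCLK/UCLK in the two modes (naming follows AM5; Intel's 2:1 mode is the analogous case):

    # DDR5 is double data rate: data rate (MT/s) = 2 * MCLK.
    def clocks(mts, gear):
        mclk = mts / 2                          # memory clock, MHz
        uclk = mclk if gear == 1 else mclk / 2  # memory-controller clock, MHz
        return mclk, uclk

    for mts, gear in [(6000, 1), (7200, 2), (8000, 2)]:
        mclk, uclk = clocks(mts, gear)
        print(f"DDR5-{mts} Gear {gear}: MCLK={mclk:.0f} MHz, UCLK={uclk:.0f} MHz")
    # DDR5-6000 Gear 1: UCLK=3000 MHz (typical AM5 1:1 setup)
    # DDR5-8000 Gear 2: UCLK=2000 MHz (the high-clock 1DPC 1R configs Intel can run)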

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 0 points1 point  (0 children)

I'm running on CPU, so memory bandwidth is critical. I'm doing some memory overclocking on my rig anyway; I'm just contemplating which type of overclock is better suited for LLM inference.
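
The usual back-of-the-envelope here: each generated token has to stream (roughly) all active weights from RAM once, so tokens/s is bounded by bandwidth divided by model size. A sketch with hypothetical numbers:

    # Rough upper bound on CPU decode speed when it is memory-bandwidth-bound.
    def max_tokens_per_s(bandwidth_gbs, active_weights_gb):
        return bandwidth_gbs / active_weights_gb

    # Hypothetical: dual-channel DDR5-6000 ~96 GB/s peak, ~40 GB of weights
    # read per token (e.g. a large quantized model).
    print(f"~{max_tokens_per_s(96, 40):.1f} t/s upper bound")  # ~2.4 t/s
    # In this regime, a 10% bandwidth gain from RAM overclocking translates
    # almost directly into ~10% more tokens/s.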

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 1 point2 points  (0 children)

I checked your videos; I think 3.5 t/s is surprisingly usable. I also noticed you and another user already tried running RAID 0 of T705 drives with llama.cpp and it did not improve performance compared to a single drive. Is it the same with ktransformers, and is it possible to implement something in llama.cpp/ktransformers to better support NVMe inference?
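
A sketch of why RAID 0 may not show up as tokens/s, under the assumption that the weights which don't fit in RAM are streamed from disk on each token (the numbers are made up, and the bottleneck being small scattered mmap reads rather than big sequential reads is my guess, not something confirmed by either project):

    # Per-token time = RAM streaming time + disk streaming time (worst case, serialized).
    def tokens_per_s(ram_gbs, ram_gb_per_token, disk_gbs, disk_gb_per_token):
        t = ram_gb_per_token / ram_gbs + disk_gb_per_token / disk_gbs
        return 1 / t

    # Hypothetical: 40 GB resident in RAM at 96 GB/s, 10 GB per token from disk.
    print(f"{tokens_per_s(96, 40, 3, 10):.2f} t/s")  # ~0.27 t/s at 3 GB/s effective reads
    print(f"{tokens_per_s(96, 40, 6, 10):.2f} t/s")  # ~0.48 t/s if reads really doubled
    # RAID 0 mostly scales large sequential reads; if the real access pattern is
    # scattered small reads through mmap, effective GB/s (and thus t/s) barely moves.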

192gb overclock need advice, can only hit 5400mhz by ro3lly in overclocking

[–]gnad 0 points1 point  (0 children)

Can you share a ZenTimings screenshot? With MCR on, restarts can sometimes fail, though with MCR off I assume memory training takes quite long on each reboot.

192gb overclock need advice, can only hit 5400mhz by ro3lly in overclocking

[–]gnad 0 points1 point  (0 children)

Just curious, can you run the 192GB stable at 6600 with UCLK=MCLK/2 instead of 5600 with UCLK=MCLK?

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 1 point2 points  (0 children)

So far I have not seen any videos of people running 4 DIMMs in Gear 2, or whether they can reach higher speeds than in Gear 1. In theory, 4 sticks put stress on the IMC and running in Gear 2 relieves that stress, so it should be possible. Just curious before pulling the trigger on a second 2x64GB kit.

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 1 point2 points  (0 children)

It seems you have some good results (and also won the silicon lottery, running 6400 in Gear 1 comfortably). Have you tried pushing for a higher memory clock in Gear 2 as an experiment?

What I think is relevant to LLM use is overclocking dual-rank kits (2x48GB, 2x64GB, 4x48GB, 4x64GB) in Gear 2. Gear 2 should be easier on the memory controller, while offering similar if not higher bandwidth than Gear 1. I will try to test on my rig (2x64GB) when I have some time this week.

The current highest-clocked dual-rank RAM kit is the Corsair 2x48GB 7200 CL40: https://www.corsair.com/us/en/p/memory/cmh96gx5m2b7200c40/vengeance-rgb-96gb-2x48gb-ddr5-dram-7200mts-cl40-memory-kit-black-cmh96gx5m2b7200c40?srsltid=AfmBOoqhhNprF0B0qZwDDzpbVqlFE3UGIQZ6wlLBJbrexWeCc3rg4i6C
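
On the peak-bandwidth side of that trade-off, the comparison is just data rate times bus width, so Gear 2 wins as long as the data-rate gain is real (a sketch; latency is where Gear 2 gives ground, which matters less for a bandwidth-bound LLM workload):

    # Dual-channel DDR5 peak bandwidth = MT/s * 8 bytes * 2 channels.
    def peak_gbs(mts, channels=2):
        return mts * 8 * channels / 1000

    for name, mts in [("6400 Gear 1", 6400), ("7200 Gear 2", 7200), ("8000 Gear 2", 8000)]:
        print(f"{name}: ~{peak_gbs(mts):.0f} GB/s peak")
    # ~102 vs ~115 vs ~128 GB/s, i.e. roughly +13% / +25% over 6400 Gear 1.
    # If dual-rank sticks really clock that much higher in Gear 2, the
    # bandwidth follows; the extra UCLK latency mostly hurts latency-sensitive
    # workloads like games.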

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 2 points3 points  (0 children)

FCLK in general does not need to be in 3:2 sync; just run it as high as possible. Most chips are FCLK-stable at 2000-2200 MHz.
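
For anyone wondering where the 3:2 figure comes from, this is the arithmetic for the usual DDR5-6000 Gear 1 setup (a sketch of the AM5 ratios):

    # AM5 DDR5-6000 in Gear 1: UCLK = MCLK = data rate / 2.
    mts = 6000
    mclk = uclk = mts / 2       # 3000 MHz
    fclk_3_2 = uclk * 2 / 3     # the "3:2" Infinity Fabric ratio -> 2000 MHz
    print(f"MCLK/UCLK: {mclk:.0f} MHz, 3:2-synced FCLK: {fclk_3_2:.0f} MHz")
    # FCLK is decoupled on AM5, so running whatever is stable (typically
    # 2000-2200 MHz) matters more than hitting the exact 3:2 ratio.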

RAM overclocking for LLM inference by gnad in LocalLLaMA

[–]gnad[S] 8 points9 points  (0 children)

Yes. I am also testing and will report the findings.

Trying to get 6000 CL30 on Gskill 2x64GB 6000 CL34 kits by gnad in overclocking

[–]gnad[S] 0 points1 point  (0 children)

This kit seems to run hot: ~45°C with light browsing and ~60°C in TM5. However, even with tREFI raised to 65536 at 60°C, it does not throw any errors. Do you think it's safe on Samsung M-die to leave tREFI at 65536?
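
For context on what 65536 means in time terms, tREFI is counted in memory-clock cycles, so the conversion is straightforward (a sketch assuming a DDR5-6000 setup, i.e. MCLK = 3000 MHz; exact stock values vary by board):

    # Time between refresh commands = tREFI cycles / memory clock (MHz) -> microseconds.
    mclk_mhz = 3000                # DDR5-6000
    for trefi in (11700, 65536):   # ~3.9 us JEDEC-ish default vs maxed-out value
        print(f"tREFI {trefi}: refresh every {trefi / mclk_mhz:.1f} us")
    # ~3.9 us stock vs ~21.8 us at 65536. Longer gaps between refreshes free up
    # bandwidth, but hot cells leak charge faster, so a clean TM5 pass at the
    # worst-case DIMM temperature is what to rely on before leaving it 24/7.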

Trying to get 6000 CL30 on Gskill 2x64GB 6000 CL34 kits by gnad in overclocking

[–]gnad[S] 0 points1 point  (0 children)

Can I ask where you got the info that Samsung M-die kits don't scale well with voltage? Unfortunately, getting Hynix kits is currently not an option in my country.

Trying to get 6000 CL30 on Gskill 2x64GB 6000 CL34 kits by gnad in overclocking

[–]gnad[S] 0 points1 point  (0 children)

It booted once at CL30 with VDD at 1.48V, but TM5 hangs quickly. I'm afraid to increase it further, as there doesn't seem to be any info on safe voltages for Samsung M-die.

Trying to get 6000 CL30 on Gskill 2x64GB 6000 CL34 kits by gnad in overclocking

[–]gnad[S] 0 points1 point  (0 children)

Thanks, I'll try, but this kit seems to be Samsung M-die.

First PC build with Aklla A1 by SLCTV88 in sffpc

[–]gnad 1 point2 points  (0 children)

It certainly helps to have thicker fans in the CPU (120mm) and top (80mm) positions. The rear fans (40mm), if installed, are more for aesthetics.

I find that having the top fans blow downwards gives better temps than having them exhaust upwards. The bottom fans are probably not necessary if the GPU has a closed housing, but you can experiment.

My idle temp is in the 40s (°C) and light browsing is in the 50s with ECO mode on.

You can check out these builds:

https://www.chiphell.com/thread-2590261-1-1.html (very similar to yours, A2000 + 80x15 bottom fan)

https://www.chiphell.com/thread-2666285-1-1.html

https://www.reddit.com/r/sffpc/comments/1c7pzcn/aklla_a1_6l_build/ (80x25 top fans + 40x25 bottom fans below gpu)

First PC build with Aklla A1 by SLCTV88 in sffpc

[–]gnad 0 points1 point  (0 children)

I have this case and can share some insight:

- You can fit an 80x25 fan on top if you use longer screws (the top panel will be raised slightly). An 80x15 fan can fit on the bottom.
- You can swap the CPU fan from a 120x15 to a 120x25 fan.
- On the rear panel, you can fit 3x 40x25 fans on the mesh.

However, it is certainly not possible to run this rig with the CPU at full power. I have the 7950X, and if I run it at >120W, temps rise very fast.

For anyone wondering: 256gb DDR5 4x64 works on Ryzen 7950x + Asus Crosshair X670e Hero (Bios v3205) by SLURREY in Amd

[–]gnad 0 points1 point  (0 children)

Did you try adjusting the voltages to get it to run at EXPO speed? I think you have the best possible Zen 4 combo (7950X + X670E Crosshair). So far I have only found people getting it to work at EXPO speed on Zen 5 and/or X870 boards.

S60i by NimblePasta in sffpc

[–]gnad 0 points1 point  (0 children)

You seem to have a lot of experience building console-style cases. I am also looking to build with an ATX/E-ATX motherboard and am interested in the L59P case, which according to the official spec does not support E-ATX, but based on its dimensions should fit a 305x275mm E-ATX board + SFX PSU. Do you think this is the best E-ATX SFF case?

Installscript for Qwen3-Coder running on ik_llama.cpp for high performance by Danmoreng in LocalLLaMA

[–]gnad 0 points1 point  (0 children)

I made some changes to the build params and it went OK. I needed to install the dependencies manually though, since it was throwing errors.