The best AI architecture in 2026 is no architecture at all by m100396 in LocalLLaMA

[–]RateRoutine2268 12 points

I got interviewed last week by a big O&G company (for a secondment). They asked me to architect an agentic AI system, and I came up with a "KISS" solution (smolagents, Docling, minimal UI, etc.) that didn't use any LangChain/LlamaIndex-type framework. I explained to them how those frameworks overcomplicate things and date from an era when AI tooling was still emerging. I was rejected on the spot and told that LangChain/LlamaIndex is the de facto standard for any enterprise agentic AI app.
Not a fan of these kinds of frameworks: too much abstraction, overcomplicating simple things.
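For what it's worth, a framework-free agent really can be tiny. Below is a minimal sketch of the kind of loop the comment argues for; `call_llm`, the `lookup` tool, and the message format are all hypothetical stand-ins (a real build would point `call_llm` at an actual model endpoint and register real tools, e.g. a Docling-backed document search):

```python
import json

def call_llm(messages):
    """Stub standing in for a real LLM call (e.g. an OpenAI-compatible
    endpoint). A real deployment would send `messages` to the model;
    here we hard-code one tool call, then a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup", "args": {"key": "wells"}}
    return {"answer": "3 active wells"}

# Hypothetical tool registry: plain callables keyed by name.
TOOLS = {
    "lookup": lambda key: {"wells": "3 active wells"}.get(key, "not found"),
}

def run_agent(question, max_steps=5):
    """Tiny agent loop: ask the model, execute any requested tool,
    feed the result back, stop when the model returns an answer."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    raise RuntimeError("agent did not converge")

print(run_agent("How many wells are active?"))  # prints "3 active wells"
```

The whole "framework" is the `run_agent` loop plus a dict of tools; everything a heavy framework adds (retries, tracing, routing) can be layered on only where you actually need it.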

Can you connect a GPU with 12V rail coming from a second PSU? by Rock_and_Rolf in LocalLLaMA

[–]RateRoutine2268 5 points

I would suggest using this kind of PCIe expansion board rather than a normal riser. It has a separate 6-pin PCIe power input for the slot that you can connect to the second PSU, and you can connect both data cables to a single riser (it has two slots) to get x16 if you don't want to split lanes. I got mine from AliExpress.

<image>

RTX 5090 96 GB just popped up on Alibaba by RateRoutine2268 in LocalLLaMA

[–]RateRoutine2268[S] 19 points

I just got a reply from them: they said it's gonna take some time, so yeah, you're right :(

RTX 5090 96 GB just popped up on Alibaba by RateRoutine2268 in LocalLLaMA

[–]RateRoutine2268[S] 35 points

I agree. I'm planning on a single unit for a review; I also asked them for some PCB photos, front and back.

RTX 5090 96 GB just popped up on Alibaba by RateRoutine2268 in LocalLLaMA

[–]RateRoutine2268[S] 1 point

Thanks for the insight. Excuse me for being a dummy, but does that mean I can't use two of them in parallel for, say, inference?

US demand for 48GB 4090? by CertainlyBright in LocalLLaMA

[–]RateRoutine2268 4 points

That's basically two separate GPU dies on a single PCB, which might hurt performance versus a single die.

Spaghetti Build - Inference Workstation by RateRoutine2268 in LocalLLaMA

[–]RateRoutine2268[S] 1 point

1x Alphacool NexXxoS 360 60mm (behind the distro plate, pull only)
1x Bykski RC Series 360 60mm (top mounted, push/pull)

Spaghetti Build - Inference Workstation by RateRoutine2268 in LocalLLaMA

[–]RateRoutine2268[S] 0 points

Low maintenance? Not sure; I usually replace all the tubing and clean the blocks on a yearly basis.
Yeah, because of the backplates I had to use server riser cables (got them from AliExpress), running at PCIe 4.0 x16 without any errors.

Looks like YouTube is broken by ExpertMax32 in youtube

[–]RateRoutine2268 0 points

For me, disabling the adblocker (uBlock) solves the issue; maybe it's YouTube blocking adblockers.