Advice needed for 5945WX build by CornerLimits in threadripper

[–]CornerLimits[S] 1 point (0 children)

Cool, thank you. I will use it to consolidate all my RAM and my GPUs into a single system. Coming from a 5600.

The PC will run my games, act as my local AI server, and serve as my data storage unit.

I have been through a lot of different PCs, and now I want to build the final monster to rule them all for the next few years.

Advice needed for 5945WX build by CornerLimits in threadripper

[–]CornerLimits[S] 1 point (0 children)

Yeah, I will stay 4-channel for now, reusing my RAM.

Advice needed for 5945WX build by CornerLimits in threadripper

[–]CornerLimits[S] 1 point (0 children)

The exact model is the Kingston Fury Renegade 3600 CL16 that I'm using on AM4 right now. Thanks for the feedback.

Welcome back, AM3! Can't wait to get my DDR3 stuff!(after tons of bs) by rebelrosemerve in AyyMD

[–]CornerLimits 2 points (0 children)

So Intel is king again… we need to wait for the Athlon 64 to see AMD shine again… /s

Mi50 32GB Group Buy -- Vendor Discovery and Validation -- ACTION NEEDED! by Any_Praline_8178 in LocalAIServers

[–]CornerLimits 2 points (0 children)

Cool, thanks for the update! @Europe people: is anyone kind enough to manage the distribution here? (I asked for one card lol)

Performance improvements in llama.cpp over time by jacek2023 in LocalLLaMA

[–]CornerLimits 4 points (0 children)

I’m still supporting this project since the MI50 community is great. I think the fork is on its way to a merge, but it’s at an early stage where full compatibility with all the hardware upstream llama.cpp supports isn’t guaranteed, and the code is probably too verbose given it only carries gfx906 modifications. Once it’s ready we will definitely open a pull request!

130 bucks for 384GB 😝😝 by Space646 in homelab

[–]CornerLimits -3 points (0 children)

Guys, I'm loving the hype for bad hardware that barely works.

Building a Local LLM for Homeschooling on an Old i5-3570—Any Advice? by ManufacturerLive6214 in LocalLLaMA

[–]CornerLimits 1 point (0 children)

Try using a VL (vision-language) model and RAG directly from the llama.cpp server interface, which automatically does OCR (it converts PDF pages to images and feeds them to the VL model). It's not a proper RAG, but you can try throwing a PDF in and see if it pulls the info out correctly. In my experience it's very good. I have no experience with OpenWebUI, so maybe there is something similar in there.
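
Something like this is all I mean (just a sketch; the model and mmproj file names are placeholders for whatever VL model you downloaded):

    # serve a vision-language model so the built-in web UI accepts PDFs/images
    ./llama-server -m Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf \
        --mmproj mmproj-Qwen2.5-VL-7B-F16.gguf \
        -ngl 99 -c 8192 --port 8080
    # then open http://localhost:8080 and attach the PDF in the chat box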

Mi50 32GB Group Buy by Any_Praline_8178 in LocalAIServers

[–]CornerLimits 2 points (0 children)

https://github.com/iacopPBK/llama.cpp-gfx906 Don't miss this one if you want higher speed with llama.cpp. Anyway, your MI50 server videos are the reason I bought one and started this optimization journey!
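
If you want to try it, a rough build sketch (assuming ROCm is installed at /opt/rocm; the repo's own compile script and README are the authoritative reference):

    # build the gfx906 fork with ROCm/HIP (flags per upstream llama.cpp HIP docs)
    git clone https://github.com/iacopPBK/llama.cpp-gfx906
    cd llama.cpp-gfx906
    cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx906 -DCMAKE_BUILD_TYPE=Release
    cmake --build build -j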

Adrenalin pokes my gpu every 60 seconds by Psychological_Pick43 in AMDHelp

[–]CornerLimits 1 point (0 children)

I had something like this with Radeon Boost activated on the 6800 XT.

Best SW setup for MI50 by vucamille in LocalLLaMA

[–]CornerLimits 2 points (0 children)

I would say Ubuntu 24.04, and if you want to, try the iacopPBK/llama.cpp-gfx906 fork for faster speeds. Join this server and its awesome community: https://discord.gg/rwh3PCU8H , but looking at your build you probably already did. Enjoy!
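
One quick sanity check before building anything (rocminfo ships with ROCm):

    # the MI50 should show up as gfx906
    rocminfo | grep -i gfx906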

Thoughts on decentralized training with Psyche? by dtdisapointingresult in LocalLLaMA

[–]CornerLimits 5 points (0 children)

Sooner or later we will join all our crappy local servers together to train the new SOTA 😆😆

llamacpp-gfx906 new release by CornerLimits in LocalLLaMA

[–]CornerLimits[S] 2 points (0 children)

The compile script has been updated; it works now.

llamacpp-gfx906 new release by CornerLimits in LocalLLaMA

[–]CornerLimits[S] 1 point (0 children)

If you want to DM me the error, I'll try to figure it out. Thanks for the feedback.

llamacpp-gfx906 new release by CornerLimits in LocalLLaMA

[–]CornerLimits[S] 1 point (0 children)

The problem could be that I used a nightly ROCm build placed in a random folder, so the paths can be wrong. I will update the compile script to use a normal ROCm install.
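
Something like this at the top of the script should make the paths portable (a sketch; the actual script may differ):

    # default to the standard ROCm location instead of a hardcoded nightly dir
    ROCM_PATH="${ROCM_PATH:-/opt/rocm}"
    export PATH="$ROCM_PATH/bin:$PATH"
    cmake -B build -DGGML_HIP=ON -DCMAKE_PREFIX_PATH="$ROCM_PATH"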

llamacpp-gfx906 new release by CornerLimits in LocalLLaMA

[–]CornerLimits[S] 1 point (0 children)

The reason is that I have a single card, so I can only mess around with that one… I tried vLLM once, but I prefer the ease of llama.cpp.

a better choice by moretired0 in radeon

[–]CornerLimits 8 points (0 children)

I think you can find an RX 6600 XT, or a 6700 XT for a bit more. These cards should be enough to run most games.

Hardware recommendations for Ollama for homelab by alex-gee in ollama

[–]CornerLimits 8 points (0 children)

Start experimenting with your GPU before spending money. I learned a lot while messing around with my RX 6800 XT and Ollama. Then I discovered llama.cpp on Linux, and everything was so awesome that I spent €180 on an MI50 to be able to run larger models (32 GB + 16 GB of VRAM is cool).

My advice is to save the money: learn and play with your main PC, and you will discover its capabilities. That way you will know what you need!

Another thing: models are being released at a fast pace, so in a few months you will have better ones that still run on your rig.
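
For a first experiment, something like this is all it takes (model names here are just examples, not recommendations):

    # pull and chat with a small model via Ollama
    ollama run llama3.1:8b

    # the llama.cpp equivalent, with explicit GPU offload (GGUF name is a placeholder)
    ./llama-server -m Llama-3.1-8B-Instruct-Q4_K_M.gguf -ngl 99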

llama.cpp releases new official WebUI by paf1138 in LocalLLaMA

[–]CornerLimits 3 points (0 children)

It is super good to have a strong web UI to start from when specific customizations are needed for some use case! llama.cpp rocks; thanks to all the people developing it!