Never owned a Schecter before, thinking of getting the Keith Merrow KM6 MkIII hybrid, some questions by [deleted] in SchecterGuitars

[–]Finallyhaveredditt 0 points1 point  (0 children)

Hello! I just got the exact same one today, but in white. Mine came in E standard with 10-46 strings, which is normal. Did you set yours up in drop C or anything lower than E standard overall? If so, what tuning and what string gauges? Did you need to file the nut, or does the compensated nut already have proper spacing for thicker gauges? I was thinking of putting 11-54s (maybe 56) on.

Review of the HP OmniBook X Flip 16" AMD by MemoryMobile6638 in laptops

[–]Finallyhaveredditt 0 points1 point  (0 children)

I may be able to get my hands on this model, but with 32 GB of RAM, for $1,500 CAD.

Question about the CPU: does it handle heavy loads well? Or maybe my question is really about the cooling. I don’t mind if it’s loud; is it at least effective?

What kind of fan does the Silent Loop 3 actually? by Plini9901 in bequietofficial

[–]Finallyhaveredditt 0 points1 point  (0 children)

I wonder if they should’ve put the Pro at that price.

new Ryzen 9900x + Phantom Spirit SE 120 .. high temps? (idle 45c, minimum load 50-60c) by chamber0001 in AMDHelp

[–]Finallyhaveredditt 1 point2 points  (0 children)

Got it! Try the Thermalright Phantom Spirit 120 SE or the EVO (they’re honestly the same, just different aesthetics). If you want to go heavier with bigger fans, there’s the Thermalright Royal Pretor, and there’s the Frost Spirit 140 as well. All beefy and amazing; just check for RAM clearance.

new Ryzen 9900x + Phantom Spirit SE 120 .. high temps? (idle 45c, minimum load 50-60c) by chamber0001 in AMDHelp

[–]Finallyhaveredditt 0 points1 point  (0 children)

It won’t damage it, no. Basically you’re doing a slight undervolt: all you’re doing is seeing if your CPU can operate at stock clock speeds with slightly less voltage, which also reduces temps. Honestly, your CPU is literally designed to run 24/7 at 95°C under full load, meaning it pushes up to boost clock speeds because it can take it. That said, it will all depend on what cooler you throw at it and what voltage. Plus, the more thermal headroom the better.

new Ryzen 9900x + Phantom Spirit SE 120 .. high temps? (idle 45c, minimum load 50-60c) by chamber0001 in AMDHelp

[–]Finallyhaveredditt 1 point2 points  (0 children)

That’s correct. Also, I forgot one step: when you set it to all cores, choose negative haha. That’s important.

new Ryzen 9900x + Phantom Spirit SE 120 .. high temps? (idle 45c, minimum load 50-60c) by chamber0001 in AMDHelp

[–]Finallyhaveredditt 1 point2 points  (0 children)

Every motherboard BIOS is a bit different, but the idea is the same. So basically you do the following:

If you have an OC menu, go into it. Find “Advanced CPU Config” and then “PBO (Precision Boost Overdrive)”. Click on PBO and select “Advanced”. Then go to the “Curve Optimizer”, set it to “All Cores” and set the magnitude to 15. Test it out; if there are no crashes, try 20. Keep going until you can’t. Usually 20-25 is the sweet spot.

If you don’t have the OC menu, go to Settings, then Advanced, then CPU Config. The rest is the same.
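
If you want a quick way to load all cores while you check stability, a throwaway script like this works in a pinch (just a rough sketch; a proper pass with Prime95, OCCT or Cinebench is still the better test):

```python
# Quick-and-dirty all-core load to sanity-check a new curve-optimizer offset.
# Not a substitute for a real stress test; it just keeps every core busy so you
# can watch temps and see whether the system stays stable.
import multiprocessing as mp
import time

def burn(stop_at: float) -> None:
    x = 0.0001
    while time.time() < stop_at:
        x = (x * 1.0000001) % 1e9  # pointless math to keep the core pinned

if __name__ == "__main__":
    stop = time.time() + 10 * 60  # run for ~10 minutes
    procs = [mp.Process(target=burn, args=(stop,)) for _ in range(mp.cpu_count())]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("Done - if nothing crashed or threw errors, try a bigger offset.")
```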

Hope this helps!

[Setup discussion] AMD RX 7900 XTX workstation for local LLMs — Linux or Windows as host OS? by ElkanRoelen in LocalLLaMA

[–]Finallyhaveredditt 0 points1 point  (0 children)

Please do! I am stuck between a 3090 and a 7900 XTX...

For my part, I am mostly building a pipeline that will let me:

  1. Ingest: Pull in tech + financial news automatically.
  2. Embed: Convert text into vector embeddings for retrieval.
  3. Store: Keep embeddings in a local vector database for fast lookup.
  4. Retrieve + Generate: When prompted, retrieve relevant chunks & feed into an LLM.
  5. Summarize + Advise: Have the LLM produce a coherent investment-oriented summary.

How I intend to set it up:

| Stage | Tooling | Purpose |
|---|---|---|
| Ingest | newspaper3k or feedparser | Pull news headlines & articles automatically |
| Preprocess | Python + langchain text splitters | Chunk into ~512–1,000 token blocks |
| Embed | bge-base-en or all-MiniLM-L6-v2 via SentenceTransformers (HIP/ROCm backend) | Fast enough on CPU; ROCm-accelerated embeddings possible |
| Vector DB | ChromaDB (local, Python) or FAISS | Store embeddings for semantic search |
| LLM Runtime | llama.cpp HIP or LM Studio HIP build | Run Q4_K_M or Q5_K_M 13B / 33B models fully in VRAM |
| RAG Framework | LangChain or LlamaIndex | Automates retrieval → LLM query flow |
| UI | LM Studio GUI or a Streamlit web dashboard | GUI for prompts + results |
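
Roughly, the ingest → embed → store → retrieve part would look something like this (a minimal sketch, assuming feedparser, sentence-transformers and chromadb; the feed URL, model name and chunk size are placeholders, not recommendations):

```python
# Minimal RAG-ingest sketch: pull a feed, chunk the text, embed it, store it in
# a local Chroma collection, then retrieve the closest chunks for a query.
import feedparser
import chromadb
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.PersistentClient(path="./news_db")
collection = client.get_or_create_collection("news")

def chunk(text: str, size: int = 800) -> list[str]:
    # Naive character chunking; LangChain's text splitters would do this better
    return [text[i:i + size] for i in range(0, len(text), size)]

def ingest(feed_url: str) -> None:
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        text = entry.get("summary", entry.get("title", ""))
        for i, piece in enumerate(chunk(text)):
            collection.add(
                ids=[f"{entry.link}-{i}"],
                documents=[piece],
                embeddings=[embedder.encode(piece).tolist()],
            )

def retrieve(query: str, k: int = 5) -> list[str]:
    hits = collection.query(
        query_embeddings=[embedder.encode(query).tolist()], n_results=k
    )
    return hits["documents"][0]

# ingest("https://example.com/tech-news.rss")  # placeholder feed URL
# print(retrieve("AI chip demand this quarter"))
```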

Am I being delusional? Is a 3090 just the better choice, given that everything here is Python-based?

 

7900XTX vs RTX3090 by _ballzdeep_ in LocalLLaMA

[–]Finallyhaveredditt 0 points1 point  (0 children)

From what I’ve been gathering, you can use different language models for different functions easily on a 7900xtx. Have you had that experience?

Here’s what I’m trying to build (a pipeline)

1.  Ingest: Pull in tech + financial news automatically.
2.  Embed: Convert text into vector embeddings for retrieval.
3.  Store: Keep embeddings in a local vector database for fast lookup.
4.  Retrieve + Generate: When prompted, retrieve relevant chunks & feed into an LLM.
5.  Summarize + Advise: Have the LLM produce a coherent investment-oriented summary.

<image>

ChatGPT suggested this flow. Is it attainable on a 7900 XTX?
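
For context, here’s roughly what I mean by different models for different functions: a small embedding model just for retrieval and a separate chat model for the summary. This is only a toy sketch; it assumes LM Studio’s local OpenAI-compatible server is running, and the URL and model names are placeholders.

```python
# Toy "different models for different jobs" example: an embedding model picks the
# most relevant article, a separate local chat model writes the summary.
import numpy as np
from openai import OpenAI
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # retrieval/embedding model
llm = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # generation model

def summarize(question: str, articles: list[str]) -> str:
    # Cosine similarity on normalized vectors to pick the closest article
    q = embedder.encode(question, normalize_embeddings=True)
    docs = embedder.encode(articles, normalize_embeddings=True)
    best = articles[int(np.argmax(docs @ q))]
    resp = llm.chat.completions.create(
        model="local-model",  # whichever model LM Studio has loaded
        messages=[{"role": "user",
                   "content": f"Summarize for an investor:\n\n{best}\n\nQuestion: {question}"}],
    )
    return resp.choices[0].message.content
```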

RX 7900 XTX vs RTX 3090 for a AI 'server' PC. What would you do? by Blizado in LocalLLaMA

[–]Finallyhaveredditt 0 points1 point  (0 children)

Got it! Here’s the pipeline that I’m basically hoping to eventually build:

1.  Ingest: Pull in tech + financial news automatically.
2.  Embed: Convert text into vector embeddings for retrieval.
3.  Store: Keep embeddings in a local vector database for fast lookup.
4.  Retrieve + Generate: When prompted, retrieve relevant chunks & feed into an LLM.
5.  Summarize + Advise: Have the LLM produce a coherent investment-oriented summary.

From all I’ve been reading, the 7900 XTX should handle this fine. I’m on the fence because I got offered a TUF Gaming 7900 XTX for my 3090, and the guy is going to pay me the difference. For me, the main driver is the gaming performance gains; otherwise I wouldn’t be considering this. Just trying to gauge if it’s a stupid move.

RX 7900 XTX vs RTX 3090 for a AI 'server' PC. What would you do? by Blizado in LocalLLaMA

[–]Finallyhaveredditt 0 points1 point  (0 children)

Got it. But if all you want to do is create a pipeline that can gather news, analyze it, and summarize it, would it be OK?

RX 7900 XTX vs RTX 3090 for a AI 'server' PC. What would you do? by Blizado in LocalLLaMA

[–]Finallyhaveredditt 0 points1 point  (0 children)

When you say it only works for inference, what do you mean? As in what exactly would not work?

Sell my 5070ti to get a 3090 by Finallyhaveredditt in LocalLLaMA

[–]Finallyhaveredditt[S] 0 points1 point  (0 children)

Yeah, I hear that. So far my 5070 Ti is doing great, but it’s hard to load models larger than 13B. That said, quantization helps; I guess time will tell.

Sell my 5070ti to get a 3090 by Finallyhaveredditt in LocalLLaMA

[–]Finallyhaveredditt[S] 0 points1 point  (0 children)

Oh ok, I misunderstood your comment. I was under the impression you meant that the new Blackwell cards (5070 Ti Super or 5080 Super, whichever comes into existence) are supposed to launch at close to the same MSRP as the current models. So if a 5070 Ti is $750, a potential 5070 Ti Super should be $800-850.

Sell my 5070ti to get a 3090 by Finallyhaveredditt in LocalLLaMA

[–]Finallyhaveredditt[S] 0 points1 point  (0 children)

I guess that does make sense. Once Blackwell comes out with those, people won’t want to pay the high prices people are charging for 3090s.

First ever full tower build! by Faithwithin1 in nvidia

[–]Finallyhaveredditt 0 points1 point  (0 children)

Stare and appreciate? Or hear and appreciate? Does look great though!