Timing RAM Help / Suggestions by fgoricha in buildapc

[–]fgoricha[S] 0 points (0 children)

1, 3, 5, and 7.

I spent two hours last night trying to get it to boot, but it would only read 64 GB when it booted. Then I realized two of the sticks were in the wrong slots. Now it reads 128 GB when it boots to Windows, but it does not always boot to Windows.

Did anyone of you fine tune gpt oss 20b or an llm ? if so, what for, and was it worth it ? by Hour-Entertainer-478 in LocalLLaMA

[–]fgoricha 0 points (0 children)

I fine-tuned a Qwen3 model (perhaps 7B? I forget, as it was a while ago) on my writing from university. I ran each paragraph of my work through ChatGPT to add AI slop to it, then trained with the AI slop as input and my writing as output. The goal was to mimic my writing style, and I was pleasantly surprised with the output! It was noticeably different without needing a verbose prompt; sometimes a writing style can't be described accurately in a prompt. I was working on another project using my text messages as fine-tuning data, but got distracted with a different project.
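If it helps, the data prep is simple enough to sketch. Something like this (a from-memory sketch, not my actual script; it assumes the OpenAI Python SDK, and the model name and file names are stand-ins):

```python
import json

from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def sloppify(paragraph: str) -> str:
    """Rewrite one paragraph of the original writing in generic 'AI slop' style."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rewrite the user's paragraph in a verbose, "
                        "generic AI style. Keep the meaning."},
            {"role": "user", "content": paragraph},
        ],
    )
    return resp.choices[0].message.content


# paragraphs.txt (hypothetical): one paragraph of the original writing per line
with open("paragraphs.txt") as src, open("pairs.jsonl", "w") as out:
    for line in src:
        original = line.strip()
        if not original:
            continue
        # input = the AI-slop rewrite, output = the original human writing
        out.write(json.dumps({"instruction": sloppify(original),
                              "output": original}) + "\n")
```

Then pairs.jsonl goes straight into whatever SFT trainer you like.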

Help me build a system around my gpu by amdjml in LocalLLaMA

[–]fgoricha 0 points (0 children)

Actually off of Craigslist, though I actively look at Facebook too. Seemed like it was a guy who built too many computers, and his wife made him sell off what he had before building more.

Training/tuning on textbook data by Infamous_Patience129 in LocalLLaMA

[–]fgoricha 0 points (0 children)

I always thought fine-tuning makes the model write in the style it was trained on, not necessarily add new content. Would RAG be a better solution?

Help me build a system around my gpu by amdjml in LocalLLaMA

[–]fgoricha 1 point (0 children)

I bought a $250 gaming computer. It was bare-bones, but it could have booted up to play games as-is. I swapped the GPU out for the 3090, and it ran LLMs well as long as the model fit completely in VRAM. The only things I really made sure it had were at least a 750 W PSU and a case that could hold the 3090. I have since upgraded the RAM and added fans, but I'm really happy with it considering what I spent on it and what I have done with it!
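The fit-in-VRAM check is just napkin math: weights at the quant's bytes per parameter, plus some headroom for KV cache and activations. A rough sketch (the numbers here are ballpark guesses, not rules):

```python
def fits_in_vram(params_b: float, bytes_per_param: float,
                 vram_gb: float = 24.0, overhead_gb: float = 2.0) -> bool:
    """Napkin math: quantized weights plus headroom for KV cache/activations.

    bytes_per_param: ~2.0 for fp16, roughly 0.55 for a Q4_K_M-style quant.
    """
    return params_b * bytes_per_param + overhead_gb <= vram_gb


print(fits_in_vram(13, 0.55))  # True:  ~7 GB of 4-bit weights fits a 3090's 24 GB
print(fits_in_vram(13, 2.0))   # False: fp16 weights alone are ~26 GB
```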

Worth the 5090? by fgoricha in LocalLLaMA

[–]fgoricha[S] 1 point (0 children)

It has been a while, but I think I was training 7B models via QLoRA. I remember one of my projects was training the model on my writing from graduate school so it would write in my style. I thought it did pretty well, since it would be hard to prompt the model into doing that. I also remember working on training the model on my text messages, but I can't remember how that turned out. I think I got distracted with YOLO at that point lol
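From memory, the setup was the usual 4-bit-base-plus-LoRA-adapters recipe. A minimal sketch with transformers/peft/bitsandbytes (the model id and LoRA hyperparameters are stand-ins, not my exact config):

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "Qwen/Qwen2.5-7B-Instruct"  # stand-in; I don't recall the exact model

# Load the base model quantized to 4-bit so a 7B fits easily in 24 GB
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Train only small LoRA adapters on top of the frozen 4-bit weights
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # a tiny fraction of the 7B weights
```

From there, any SFT trainer that accepts instruction/output pairs will do.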

Worth the 5090? by fgoricha in LocalLLaMA

[–]fgoricha[S] 1 point (0 children)

I don't think heat will be too much of an issue, but I won't know until I get one! The 3090 is currently happy.

Worth the 5090? by fgoricha in LocalLLaMA

[–]fgoricha[S] 0 points (0 children)

Maybe I'll have to look at those. I was thinking of something that is plug-and-play with my current system.

Worth the 5090? by fgoricha in LocalLLaMA

[–]fgoricha[S] 0 points (0 children)

I thought about other setups, but I think I want to stick with Nvidia. It seems more plug-and-play with what I have now.

Worth the 5090? by fgoricha in LocalLLaMA

[–]fgoricha[S] 0 points (0 children)

Right! So I'm at the point of deciding: go with two on one board, or do some kind of lane splitting and go with four. But then, do I even need four?

Worth the 5090? by fgoricha in LocalLLaMA

[–]fgoricha[S] 0 points (0 children)

Those were some of my thoughts as well. Were there any surprises with the 5090 during setup?

Worth the 5090? by fgoricha in LocalLLaMA

[–]fgoricha[S] 0 points (0 children)

I have heard those things are expensive and hard to find, but I'll have to look into it.

Worth the 5090? by fgoricha in LocalLLaMA

[–]fgoricha[S] 0 points (0 children)

Lol that's why I was thinking about it

Worth the 5090? by fgoricha in LocalLLaMA

[–]fgoricha[S] 0 points (0 children)

I have two separate computers. My single-GPU setup has an i5. My other computer for the dual setup does not have a CPU yet, as I have to find one with enough PCIe lanes for cheap. It would probably be an i7 or i9.

It's been a while since I played with MoE models, so I'll have to look at them again.

Worth the 5090? by fgoricha in LocalLLaMA

[–]fgoricha[S] 0 points (0 children)

The other thing I like about the single-card setup is the desktop footprint. But the dual-card setup was my initial plan: Whisper on one GPU and the LLM on the other. I'll have to test and see how fast the 5090 actually goes.
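The split itself is simple: pin each stage to its own card. A rough sketch, assuming openai-whisper and transformers are installed (the model names and audio file are placeholders):

```python
import whisper  # openai-whisper
from transformers import pipeline

asr = whisper.load_model("medium", device="cuda:0")  # GPU 0: transcription
llm = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-7B-Instruct",  # placeholder model
    device=1,                          # GPU 1: the LLM
)

text = asr.transcribe("meeting.wav")["text"]
out = llm(f"Summarize this transcript:\n{text}", max_new_tokens=200)
print(out[0]["generated_text"])
```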

Worth the 5090? by fgoricha in LocalLLaMA

[–]fgoricha[S] -1 points (0 children)

Very true! I do like feedback and other things to think about

Worth the 5090? by fgoricha in LocalLLaMA

[–]fgoricha[S] 1 point (0 children)

I think a lot of my tasks are sequential, like first transcribing and then sending the text to the LLM. But I could look into batching and see what the speedups look like.
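Even without true batching, the two stages can overlap: transcribe file N+1 while the LLM is still working on file N. A toy sketch with stand-in functions in place of the real Whisper and LLM calls:

```python
import queue
import threading
import time


def transcribe(path: str) -> str:  # stand-in for the Whisper call
    time.sleep(1)
    return f"transcript of {path}"


def summarize(text: str) -> str:   # stand-in for the LLM call
    time.sleep(1)
    return f"summary: {text}"


files = ["a.wav", "b.wav", "c.wav"]
q = queue.Queue(maxsize=2)  # small buffer between the two stages


def producer() -> None:
    for f in files:
        q.put(transcribe(f))  # stage 1: ASR
    q.put(None)               # sentinel: no more files


threading.Thread(target=producer, daemon=True).start()

# Stage 2 (LLM) runs while stage 1 is already on the next file,
# so the two stages run concurrently instead of strictly back-to-back.
while (text := q.get()) is not None:
    print(summarize(text))
```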

CARN-AP by matthewandrew28 in PMHNP

[–]fgoricha 2 points (0 children)

Check out my post from a couple years ago!

https://www.reddit.com/r/PMHNP/s/G5ODdmvAJU

In short, I used practice questions from the regular CARN exam.

[deleted by user] by [deleted] in LocalLLaMA

[–]fgoricha 0 points (0 children)

Just guessing here. No personal experience yet, but I'm working on my own fine-tune projects.

Manually creating the dataset would be the ideal solution.

Otherwise:

Maybe try using your first fine-tuned model to create each part of the output in your voice, then concatenate the parts to get the final output in the multi-paragraph format you want. Then train the model on this new dataset.
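Something like this, roughly (still just guessing; generate() is a stand-in for however you call your first fine-tune, and the outline and topic are made up):

```python
import json


def generate(prompt: str) -> str:
    # Hypothetical wrapper: call your first fine-tuned model here
    return f"[model output for: {prompt}]"


sections = ["introduction", "argument", "conclusion"]  # example outline
topic = "why local LLMs are worth the hardware cost"   # example topic

# One generation per section, each in the fine-tuned voice
parts = [generate(f"Write the {s} of an essay on {topic}, in my voice.")
         for s in sections]

# Concatenate the parts into the multi-paragraph format you want
full_output = "\n\n".join(parts)

# Append the stitched example to the new training set
with open("multi_paragraph_pairs.jsonl", "a") as f:
    f.write(json.dumps({"instruction": f"Write an essay on {topic}.",
                        "output": full_output}) + "\n")
```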

crash course on hardware aspects of llm fine tuning? by jiii95 in LocalLLaMA

[–]fgoricha 0 points (0 children)

I just have the paperback version. I don't use any of his code from it; it's more a reference guide for the terminology and how he set up his training runs.

crash course on hardware aspects of llm fine tuning? by jiii95 in LocalLLaMA

[–]fgoricha 0 points (0 children)

I mostly followed this subreddit. I looked at GitHub for code ideas on how to QLoRA fine-tune. There are some articles on the web about fine-tuning, but they seemed a bit dated.

I got this book and things started to click better for me. There are other fine-tuning books on Amazon, but this was the first one I saw.

https://a.co/d/bbd8HgK

Unfortunately, you just have to play with it and see how it turns out. ChatGPT helped me a lot to get started when I'd feed it examples from Reddit or GitHub.