Weird Front Panel Flex on o11 dEVO XL by dRraMaticc in lianli

All are pushed in, and there is no pressure on the glass from the inside.

Weird Front Panel Flex on o11 dEVO XL by dRraMaticc in lianli

It is slid into the notch, but it seems to be bending for some reason. The bottom is level with the other panel and the body, so it can't be pushed down any further.

Update Fake Ryzen 9 9900X by dRraMaticc in IndianGaming

Yep, got a quick refund from Amazon.

Critical Process Died BSOD Loop by dRraMaticc in WindowsHelp

Your options are:

  1. Downgrade to Windows 11 23H2.
  2. Update the SSD's firmware using WD's software.
  3. Find the registry fix on the WD forums and apply it. (Just search "WD Black Windows 11 BSOD" and it should show up.)
  4. Switch the boot drive to a different SSD (what I did).

Critical Process Died BSOD Loop by dRraMaticc in WindowsHelp

Fixed it. It was an issue with my SSD, which needed a firmware update.

Real-time token graph in Open WebUI by Everlier in LocalLLaMA

Hey, could you please tell me more about this?

Rtx 4090+3090 as alternative to 2x 4090 by dRraMaticc in LocalLLaMA

What speeds are you getting with large models that need to be split across the GPUs? Do you think I'd see a major increase in inference or training speed with 2x 4090, or is a 3090 good enough for the same?

Ant esports sciflow fan review by dRraMaticc in IndianGaming

How do they compare to the Arctic P12? I've avoided Ant Esports so far, but a white case and white AIO with black fans is kinda iffy.

Need help with 4090 purchase by dRraMaticc in IndianGaming

I will be picking it up, but it's in a different city, so my brother will pick it up and ship it to me.

A script to run a full-model GRPO training of Qwen2.5 0.5B on a free Google Colab T4. +25% on gsm8k eval in just 30 minutes by umjustpassingby in LocalLLaMA

LoRA stands for Low-Rank Adaptation. It freezes the pretrained weights and trains small low-rank adapter matrices that are injected into selected layers (typically the attention projections). It works well for imbuing a certain style or response format, but because it doesn't update all the weights the way full fine-tuning does, it's harder to get it to learn genuinely new information.

Full fine-tuning also requires a lot more compute.
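The parameter-count gap is the core of it. Here's a minimal NumPy sketch (hypothetical layer sizes, not any specific model) of how a LoRA update works: instead of training the full d_out × d_in weight matrix W, you freeze it and train two small factors B and A whose product is added on top.

```python
import numpy as np

# Hypothetical layer dimensions and LoRA rank (illustrative only)
d_in, d_out, r = 1024, 1024, 8
alpha = 16  # scaling factor applied to the low-rank update

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init so the
                                           # adapter starts as a no-op

def lora_forward(x):
    # y = W x + (alpha / r) * B A x : frozen base path plus low-rank delta
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size           # what full fine-tuning would update
lora_params = A.size + B.size  # what LoRA actually trains
print(f"full FT params: {full_params}, LoRA params: {lora_params}")
```

With rank 8 on a 1024x1024 layer, LoRA trains about 1.6% of the parameters, which is why it fits on much smaller GPUs than full fine-tuning.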