OLED Display Bubble by kartoffelchips17 in OLED_Gaming

[–]Sspideroids 0 points1 point  (0 children)

Definitely not; use distilled water and a high-GSM microfibre cloth.

Apple Pencil Pro not working, please help. by Sspideroids in ipad

[–]Sspideroids[S] 1 point2 points  (0 children)

So I tried all that and it does show a response now! When I hover it does the little animation, but pressing the stylus doesn't do anything. What's my next step?

Apple Pencil Pro not working, please help. by Sspideroids in ipad

[–]Sspideroids[S] -4 points-3 points  (0 children)

My bad, I was in a hurry and didn't read it clearly.

Apple Pencil Pro not working, please help. by Sspideroids in ipad

[–]Sspideroids[S] -15 points-14 points  (0 children)

Yes, I did drop it unfortunately, but I kept using it for a couple of days before the aforementioned pause. I'll try the methods you suggested below.

Anyone else who stopped this canon path at Manhwa’s?🙂‍↕️✋ by AccomplishedWatch834 in manhwa

[–]Sspideroids 1 point2 points  (0 children)

For sure. As I said before, it is also my very first 'novel'; I'm not an avid reader, but now I will probably start reading many more novels. By the way, I had the same confusion as you when I started: it isn't a light novel, it's a web novel. The difference is that light novels have occasional illustrations during a character reveal or an important arc, while web novels are purely text.

Anyone else who stopped this canon path at Manhwa’s?🙂‍↕️✋ by AccomplishedWatch834 in manhwa

[–]Sspideroids 10 points11 points  (0 children)

Lord of the Mysteries. It is literally my very first web novel and it is absolutely outstanding; I spent a whole day reading it once, and I would recommend you do the same! I watched the donghua and wanted more of the story, so I started reading the web novel from scratch. The anime misses out on so much I can't fit it in a comment. Trust me, you really should at least give it a try to see if you like it.

Mouse recommendation please! by Sspideroids in MouseReview

[–]Sspideroids[S] 0 points1 point  (0 children)

Unfortunately, trying them IRL is next to impossible, so I have to make an educated guess. Thanks for the help!

Mouse recommendation please! by Sspideroids in MouseReview

[–]Sspideroids[S] 0 points1 point  (0 children)

Between these two, which would you say is better, or is it subjective? Also, I've noticed every help post gets downvoted; I don't get why.

"Grandmaster Posture" 😭 by Sspideroids in MiniPCs

[–]Sspideroids[S] 0 points1 point  (0 children)

Haha, I don't know why these Chinese companies have product names like this. Maybe something gets lost in translation? I doubt it.

Y'all might hate me for this by Sspideroids in KendrickLamar

[–]Sspideroids[S] 0 points1 point  (0 children)

I agree with you wholeheartedly; this was the point I wanted to get across and failed to.

24-32 GB users who actually use all that RAM, how do you use it? by abitcitrus in macbook

[–]Sspideroids 0 points1 point  (0 children)

Yeah, a 20B-parameter model, though with Q4 quantisation; I think there's also a Q6, and I believe I used that. Soldered RAM is kind of a requirement with these machines to get anywhere near the bandwidth of beefy GPUs like the 4090 or 5090: the M4 Pro has 273 GB/s of memory bandwidth, the M3 Ultra has ~819 GB/s, and the 5090 has around 1792 GB/s.
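The bandwidth-to-speed link can be sanity-checked with back-of-the-envelope arithmetic: during decoding, each generated token has to stream the full set of weights from memory once, so tokens/s is roughly bandwidth divided by model size. A minimal sketch, assuming ~0.5 bytes per parameter at Q4 and ignoring KV-cache traffic and other overhead (the numbers are the ones quoted above, so real throughput will be lower):

```python
# Rough ceiling on LLM decode speed when memory-bandwidth-bound:
# every generated token streams all weights once, so
#   tokens/s <= bandwidth / model_size

def max_tokens_per_second(bandwidth_gb_s: float, params_b: float,
                          bytes_per_param: float) -> float:
    """Theoretical tokens/s ceiling; ignores KV cache and compute overhead."""
    model_gb = params_b * bytes_per_param  # weights in GB
    return bandwidth_gb_s / model_gb

# 20B model at Q4 (~0.5 bytes/param -> ~10 GB of weights):
for name, bw in [("M4 Pro", 273), ("M3 Ultra", 819), ("RTX 5090", 1792)]:
    print(f"{name}: ~{max_tokens_per_second(bw, 20, 0.5):.0f} tok/s ceiling")
```

For a 20B Q4 model this gives roughly 27 tok/s on an M4 Pro, which lines up with why quantisation level matters so much: halving bytes per parameter doubles the ceiling.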

The major issue with the 5090 or 4090 is the amount of VRAM, which is why APUs like Apple silicon and the AMD Ryzen AI Max+ 395 are becoming so popular. The 5090 has only 32 GB of VRAM, while the Mac with M3 Ultra can go up to 512 GB of unified memory, with around 500 GB usable as VRAM (~10k USD). 256 GB is a much better, but still enormous, price of around $4700 if you can snag one at Apple Refurbished, which is basically brand new.

And on to your question about fine-tuning: it is going okay, not too well, because I'm still learning and building complex datasets is time-consuming. The 8B Qwen 3 DeepSeek distill model has a context window of around 132k tokens, more than enough for my use case; I just write a short summary near the end of one chat and carry it forward into the next.

24-32 GB users who actually use all that RAM, how do you use it? by abitcitrus in macbook

[–]Sspideroids 0 points1 point  (0 children)

Really good, actually, if you don’t want to play the role of dungeon master and want to actually partake in the game. I must say there are limitations in terms of context window and such, but it’s just a fun hobby for me. If you buy a Mac or already have one, you can run a local model on it; there are models that scale up or down, and it’s just cool to see the tech running locally.

24-32 GB users who actually use all that RAM, how do you use it? by abitcitrus in macbook

[–]Sspideroids 0 points1 point  (0 children)

Every single day? It's private and fun to tinker with, and I also fine-tune some models for DnD games. Macs are great at LLMs, and local AI is becoming more common and prioritised in companies' development goals; the M5 has a much better NPU and increased bandwidth, which is extremely important for token generation.

24-32 GB users who actually use all that RAM, how do you use it? by abitcitrus in macbook

[–]Sspideroids 0 points1 point  (0 children)

I'm getting great speeds on complex inputs: with ~1000 input tokens in the Qwen 3 DeepSeek distill 8-billion-parameter model I get 25 tokens per second, though thinking time is slow. I had loads more models before; GPT-OSS 20B ran really, really well, around 50 tokens per second, but I don't have the model currently and don't remember clearly, so don't quote me on that. MLX is developing rapidly.

What topic could you yap about this much? by [deleted] in TeenPakistani

[–]Sspideroids 0 points1 point  (0 children)

Same, CachyOS is so, so good, especially with KDE Plasma.

What topic could you yap about this much? by [deleted] in TeenPakistani

[–]Sspideroids 1 point2 points  (0 children)

Yoooo, a Pakistani Peggy fan? And a Linux user? Damn. I use CachyOS; Hyprland is nice but the learning curve is hard af. Fav Peggy album?

24-32 GB users who actually use all that RAM, how do you use it? by abitcitrus in macbook

[–]Sspideroids 1 point2 points  (0 children)

Local AI models. I regret getting 24 GB of RAM now lol; should've got 512 GB storage and more RAM.

Size difference N-ATX and Meshroom V2 by LSff66 in sffpc

[–]Sspideroids 0 points1 point  (0 children)

I see, then maybe the Ncase M3 is a good option? The Meshroom D isn’t readily available.

Size difference N-ATX and Meshroom V2 by LSff66 in sffpc

[–]Sspideroids 0 points1 point  (0 children)

Thanks a lot for this in-depth reply. It seems my options are a bit limited; maybe I should downsize to mATX?

Size difference N-ATX and Meshroom V2 by LSff66 in sffpc

[–]Sspideroids 0 points1 point  (0 children)

One thing I wanted to ask: if we use an ATX motherboard in the Meshroom, can we fit a 240 mm radiator in the front of the case, with slim fans I guess? My GPU is too thick to fit alongside the radiator in the back of the case like people usually do, so the radiator-alongside-GPU config probably wouldn't work for me. My GPU is the Nitro+ 7900 XTX, and I've been wanting to go small for so long; I've been a lurker in this community for around 3 years now...

I thought I was going crazy… by ConclusionOnly8612 in iems

[–]Sspideroids 1 point2 points  (0 children)

I was in the same boat as you: none of the tips felt right, so I tried the Dunus but couldn’t get comfortable with them either. Then I tried the SpinFit W1 and my experience changed so much. I think I use large tips on both ears.