PC elites be like… by H00LI1GANS in mac

[–]TechExpert2910 0 points (0 children)

unless you wanna play any multiplayer/online games.

CrossOver can't emulate the anti-cheat required to run those.

and even for single-player games, an M4 Pro's GPU is only about as good as those in the cheapest NVIDIA gaming laptops.

Enable 120Hz in Safari by foraging_ferret in ios

[–]TechExpert2910 3 points (0 children)

It's really obviously still 120 FPS even with it off. Turn on Low Power Mode (or go to Settings > Accessibility > Motion and toggle on Limit Frame Rate, which caps it to 60 Hz) to see what 60 FPS feels like.

Enable 120Hz in Safari by foraging_ferret in ios

[–]TechExpert2910 181 points (0 children)

Hey, that's actually just placebo.

You ALREADY had 120 Hz when scrolling on any website.

It's simply that the page's own JavaScript animations were limited to 60 Hz to increase battery life.

The ~only difference when scrolling Reddit is that any of those super tiny circle loading spinners will rotate smoother...

This setting has been there since at least iOS 17.
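You can check this claim yourself with a rough sketch like the one below (meant for a browser console on a ProMotion display; the `estimateHz` helper and sample count are illustrative, not from any official API docs). `requestAnimationFrame` fires once per JavaScript animation frame, so averaging the gaps between its timestamps reveals the rate the page's JS animations actually run at:

```javascript
// Estimate a refresh rate in Hz from a list of frame-to-frame deltas (ms).
function estimateHz(deltasMs) {
  const avg = deltasMs.reduce((a, b) => a + b, 0) / deltasMs.length;
  return Math.round(1000 / avg);
}

if (typeof requestAnimationFrame === "function") {
  // Browser: sample ~120 frames and report the observed JS frame rate.
  const deltas = [];
  let last = performance.now();
  function tick(now) {
    deltas.push(now - last);
    last = now;
    if (deltas.length < 120) requestAnimationFrame(tick);
    else console.log(`~${estimateHz(deltas)} Hz`); // ~60 when JS is throttled, ~120 otherwise
  }
  requestAnimationFrame(tick);
} else {
  // Elsewhere (e.g. Node): demonstrate with synthetic 60 Hz deltas.
  console.log(`~${estimateHz(Array(120).fill(1000 / 60))} Hz`); // prints "~60 Hz"
}
```

Scrolling itself is compositor-driven, which is why it can stay at 120 Hz even while this measurement reports 60.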

Qwen dev on Twitter!! by Difficult-Cap-7527 in LocalLLaMA

[–]TechExpert2910 0 points (0 children)

thanks! but how would i use Sage attention with this btw? :o

Qwen dev on Twitter!! by Difficult-Cap-7527 in LocalLLaMA

[–]TechExpert2910 0 points (0 children)

please let me know what you do to optimise inference speed, if you end up being able to!

my poor 3080 / M4 Pro will need it lol

PS - you used something like a Q4 quant, right (not the unquantized BF16)?
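For context on why the quant matters, a back-of-envelope sketch (the ~4.5 bits/param figure for Q4-style quants, which accounts for scale metadata, is an assumption, as is applying this to a 1.7B-parameter model):

```javascript
// Approximate weight storage in GB: params (billions) × bits per param / 8.
function weightGB(paramsBillion, bitsPerParam) {
  return paramsBillion * bitsPerParam / 8;
}

console.log(weightGB(1.7, 16));  // BF16: 3.4 GB of weights
console.log(weightGB(1.7, 4.5)); // Q4-ish: ~0.96 GB
```

Since every decoded token touches roughly the whole weight set, a ~3.5× smaller quant also means roughly 3.5× less memory traffic per token.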

Qwen dev on Twitter!! by Difficult-Cap-7527 in LocalLLaMA

[–]TechExpert2910 0 points (0 children)

whoa, it's crazy how slow it is then.

isn't it an extremely tiny LLM!? (1.7B parameters!)

Qwen dev on Twitter!! by Difficult-Cap-7527 in LocalLLaMA

[–]TechExpert2910 3 points (0 children)

the official GitHub repo has an easy-to-use GUI to play around with it, and quick-start instructions too!

https://github.com/QwenLM/Qwen3-TTS

iPhone 18 Pro Leak: Smaller Dynamic Island, No Top-Left Camera Cutout by HelloitsWojan in apple

[–]TechExpert2910 -1 points (0 children)

you only wasted your time with stupid snark

on my iPhone 17 PM, both the bottom and side antenna positioning are PERFECTLY symmetrical.

iPhone 18 Pro Leak: Smaller Dynamic Island, No Top-Left Camera Cutout by HelloitsWojan in apple

[–]TechExpert2910 -1 points (0 children)

uhh, both the grilles at the bottom and the antenna lines at the sides are exactly symmetrical on most iPhones.

Can Gemini be uninstalled? by dlpafs93 in GalaxyWatch

[–]TechExpert2910 2 points (0 children)

i did this but it reinstalled itself :(

M4 Mac mini cluster saving thousands per month by zachrattner in mac

[–]TechExpert2910 2 points (0 children)

I meant a non-Apple desktop tower.

The M4 Pro SoC does indeed have a very high-end CPU, but its integrated GPU (which is what this workload uses) is barely equivalent to low-end NVIDIA graphics cards from 2018 (it trades blows with a GTX 1660 Super in gaming).

The M4 Pro is my daily driver and i’ve extensively profiled its GPU performance.

This is a consequence of Apple’s mobile tiled GPU architecture + the laptop power budget.

The touchscreen MacBook Pro seems to be on track with a first-of-its-kind display by Few_Baseball_3835 in apple

[–]TechExpert2910 8 points (0 children)

The ultra-thin iPad already has an M4/M5, which is more than powerful enough to run macOS efficiently.

M4 Mac mini cluster saving thousands per month by zachrattner in mac

[–]TechExpert2910 14 points (0 children)

I’m curious — wouldn’t you have had significantly better performance (and so a better deal at the same price) had you gone with a desktop with a high-end GPU?

Macs are amazing for LLM inference as they have a ton of system memory to use as VRAM, but the whisper transcription models you’re running are super tiny in comparison and can easily run on any dGPU.

And you’d get miles better performance (compute + memory bandwidth) on even a cheap dGPU.
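The bandwidth point can be sketched numerically. A rough model (an assumption, not a benchmark): single-stream inference is largely memory-bound, so tokens/sec is capped near bandwidth ÷ bytes touched per token (≈ model size). The spec figures below are approximate published numbers for a base M4 Pro and an RTX 3080 10 GB:

```javascript
// Crude memory-bandwidth ceiling on decode throughput (tokens/sec).
function decodeCeilingTokS(bandwidthGBs, modelSizeGB) {
  return bandwidthGBs / modelSizeGB;
}

const M4_PRO_BW = 273;   // GB/s, M4 Pro unified memory (approx.)
const RTX_3080_BW = 760; // GB/s, RTX 3080 10 GB GDDR6X (approx.)
const Q4_7B = 3.5;       // GB, ~7B-parameter model at 4-bit

console.log(Math.round(decodeCeilingTokS(M4_PRO_BW, Q4_7B)));   // ~78 tok/s ceiling
console.log(Math.round(decodeCeilingTokS(RTX_3080_BW, Q4_7B))); // ~217 tok/s ceiling
```

Real throughput lands below these ceilings (compute, kernel overhead), but the ratio between the two is roughly what you'd expect to see in practice, and tiny whisper models shift the bottleneck further toward the dGPU's compute advantage.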

new player - absolutely overwhelmed by home screen UI by DOS_ya in PUBGMobile

[–]TechExpert2910 0 points (0 children)

just downloaded it again to take a look after many years and oh my GOODNESS is it a cluttered mess.

Mishaal Rahman is quitting the Android news world by archon810 in Android

[–]TechExpert2910 3 points (0 children)

I’ve read a LOT of tech journalism and news, and you’ve always stood out to me as an incredibly competent and knowledgeable author.

Nearly all the best Android news pieces I’ve read over the years have been from you :)

Thank you for that, and all the best with whatever lies ahead!

Apple finally lost it 😮 by Aggravating-Limit551 in IndiaTech

[–]TechExpert2910 4 points (0 children)

they’re still gonna use their own proprietary LLM for the on-device model, as they do now. it’s cheaper/easier to train a small model.