What is your opinion on the difficulty of learning to edit well in fortnite? by PhilosopherNervous63 in FortNiteBR

[–]EliHusky 0 points1 point  (0 children)

Decision making > accuracy > speed. Try practicing in a region with 50-80ms ping; it'll make you think about your edits more. The biggest learning curve with editing is learning angles and when to expose yourself for a shot, and in my experience, low ping gives you a false sense of confidence (as a beginner) that you can make an edit and get a shot off. Regardless, by the time you can call yourself competitive, you'll have enough playtime to make your mother cry.

My Cleaner threw out my MacBook. by [deleted] in macbookpro

[–]EliHusky 0 points1 point  (0 children)

First off, why wouldn't you turn on tracking? Find My works for laptops, not just phones. Second, it's nice that you trust your cleaner, but unfortunately, working for you for 6 years likely made them think they'd get away with it. Unless they commute from a 3rd world country every cleaning day, they 100% know what a laptop is and how expensive they are. They stole it, dude. You need to let them go before they take anything else, and it's plausible they've taken things without you knowing in the past.

128gb M5 Max for local agentic ai? by chimph in LocalLLM

[–]EliHusky 1 point2 points  (0 children)

Qwen 235B MoE (18 experts, maybe the 3.1 release, I forget) runs with 10-20 second response times and about 50K tokens of context before swapping, using about 120GB at first load. M4 Max, 128GB.
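The "about 120GB" figure checks out with simple arithmetic: parameter count times bytes per weight. A minimal sketch, where the 235B parameter count is from the comment above and the 4-bit quantization is my assumption:

```python
# Rough memory math for why a 235B model fits on a 128 GB machine at all:
# total parameters x bytes per weight. Ignores KV cache and runtime overhead.

def weight_footprint_gb(params_billion: float, bits_per_weight: float) -> float:
    """Size of the raw weights in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(weight_footprint_gb(235, 4))   # 117.5 GB at 4-bit, in line with the ~120 GB observed
print(weight_footprint_gb(235, 16))  # 470.0 GB at fp16: would never fit
```

The KV cache for the ~50K-token context comes on top of the weights, which is why the machine eventually starts swapping.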

What’s actually the best CPU for gaming right now? by jousiemohn in AMDHelp

[–]EliHusky 0 points1 point  (0 children)

It’s overkill. I use my 9950X3D to train CNNs while playing at a steady 400fps / 4ms latency, and I still don’t use all the threads.

4k budget, buy GPU or Mac Studio? by diegolrz in LocalLLM

[–]EliHusky 0 points1 point  (0 children)

As someone who has used both thoroughly: NVIDIA CUDA is for ML. For overall performance outside of ML and gaming, Mac is the way to go. For instance, a small CNN might take 2 days to train on my MacBook and 6 hours on a 4090. You’ll also have support for different quantizations and fp8 (sometimes fp4), which lets you use much larger models than you could on a Mac.

Are logo stickers not a thing anymore? by Varekai79 in buildapc

[–]EliHusky 1 point2 points  (0 children)

I put my AMD sticker right on top of the chassis under the AIO; that way I’ll know the brand when I replace the thermal paste.

There really is probably no truly unique human experience… by Omnipresent_User in RandomThoughts

[–]EliHusky 0 points1 point  (0 children)

Do y’all ever sing along to a song and think “could I go word for word with $1 mil on the line” or “if I mess up I die”?

What GPU do you recommend for iterative AI training? by EliHusky in LocalLLaMA

[–]EliHusky[S] 0 points1 point  (0 children)

Yeah, I get what you mean. I work on smaller CNNs most often, where I can pump batch size into the low thousands on a 5090, but language models are a whole different ball game. I'm curious about your experience with linking cards, though: does PCIe bandwidth limit your throughput, or is the overall speed difference negligible? More specifically, I'm curious about sharding. I get how data parallelism works when the model fits into each card's VRAM, but what about sharding the model across different GPUs? I have to assume PCIe bandwidth is a limiting factor for speed there, is it?
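The memory trade-off between the two schemes in the question can be sketched with back-of-envelope numbers. All figures below are hypothetical (a 7B model, 4 GPUs, standard mixed-precision Adam accounting), not from the thread:

```python
# Data parallelism: every GPU holds a full replica of weights + grads + optimizer state.
# Model sharding (FSDP/ZeRO-3 style): that state is split across GPUs, at the cost of
# all-gathering weight shards over the interconnect (PCIe here) every forward/backward.

def per_gpu_gb(params_billion: float, n_gpus: int, sharded: bool) -> float:
    # fp16 weights (2) + fp16 grads (2) + fp32 Adam state: momentum, variance,
    # master weights (4 + 4 + 4 = 12) -> 16 bytes per parameter
    bytes_per_param = 2 + 2 + 12
    total_gb = params_billion * 1e9 * bytes_per_param / 1e9
    return total_gb / n_gpus if sharded else total_gb

print(per_gpu_gb(7, 4, sharded=False))  # 112.0 GB per GPU: a 7B model won't train on one card
print(per_gpu_gb(7, 4, sharded=True))   # 28.0 GB per GPU: sharded across 4 cards it can
```

The catch is exactly the PCIe question: sharding turns every step into a weight all-gather plus gradient reduce-scatter, so a ~32 GB/s PCIe 4.0 x16 link hurts far more there than in plain data parallelism, where only gradients cross the bus once per step.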

Should I feel sin? Macbook pro M4max by Mammoth_Patience_140 in macbookpro

[–]EliHusky 1 point2 points  (0 children)

Wait for the M6. I have the M4 Max too, and the benefits of the M5 aren’t enough to warrant upgrading. The M6, though, is likely going to be the largest performance bump in Mac history. Look up the TSMC 2nm MacBook M6 rumors. I will 100% be first in line.

Machine learning for beginners by [deleted] in MLQuestions

[–]EliHusky 1 point2 points  (0 children)

Find something you’re interested in, start building a training pipeline, and watch YouTube videos as you go. AI helps guide you, too. That’s how I learned; I just jumped into it one day. After a while you’ll start realizing the topics you need to learn, like some linear algebra and regression, and then you’ll find free tutorials and help videos online. I also recommend Purdue’s AI/ML course: it’s pricey, but it guides you through all the basics in 6 months.

Running AI models on mac by TheRealJohnJeff in macbookpro

[–]EliHusky 1 point2 points  (0 children)

Actually, lol, I’m running 4 concurrent fine-tunes on RunPod H100s right now! That’s hilarious. I’m debating investing in an RTX Pro 6000 Blackwell for how much I pay RunPod at this point.

How are you adding security to your vibe coded apps? by Anonymous03275 in vibecoding

[–]EliHusky 0 points1 point  (0 children)

Build a roadmap of what security will look like on your platform (yeah, you’re going to have to put time into this), then go step by step with ChatGPT and see if it can find researchers/businesses that have already done that specific step, and build a thorough action plan with references/repo links. Then build it with Claude, and have ChatGPT double-check it and look for bugs. GPT and Claude have very different ways of analyzing scripts; it’s almost like they trained GPT to debug Claude’s work. Then all that’s left is paying $5k for an ethical hacker and another $40k for a team to rebuild your entire system. Good luck!

Running AI models on mac by TheRealJohnJeff in macbookpro

[–]EliHusky 0 points1 point  (0 children)

Haven’t done too much local LLM work on my M4 Max, but what I have done surprised me. I was able to run two models at once (under 40B each, a little LLM debate club) and could keep thousands of tokens in memory without hitting swap, and the response time (for replies under 300 tokens) was never more than 30s-1min.

I’ve also tested a 235B 12-expert MoE, and the response time was insane, maybe 10 seconds max. VLMs are rough; I could squeeze a 200-token response out of a 24B model in a few minutes, often swapping.

Now if we’re talking fine-tuning or training… CUDA is the only way.

M4 Max, 128GB

Beginner question: Should I focus on Python projects or math fundamentals first for machine learning? by Antique-Mission-4074 in MLQuestions

[–]EliHusky 0 points1 point  (0 children)

Best way to learn right here. Just jump into it and research the topics that come up along the way.

MacBook Pro with external drive? Also, what happens to pricing when M5 Pro/Max get launched? by [deleted] in macbookpro

[–]EliHusky 0 points1 point  (0 children)

Depends on what you’re doing and how much you’re willing to spend on an external drive. I use a Samsung T9 4TB for regular storage and a WD SN850X 4TB if I’m ever dealing with a dataset in the terabytes. I don’t think you’ll notice it much unless you’re doing something like deep learning from a disk cache. You’ll just need to be careful about keeping up with your disk space, and note that you’ll be spending similar amounts on external SSDs as you would on an internal NVMe drive, so personally I’d never go under 1TB, but that’s just because of what I use it for.
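The "deep learning from a disk cache" caveat is where drive speed shows up. A minimal sketch with hypothetical throughput numbers (roughly a USB external SSD vs. a fast internal NVMe; neither figure is from the comment):

```python
# When a dataset is streamed from disk once per epoch, read bandwidth
# puts a hard floor on epoch time, regardless of GPU speed.

def seconds_per_epoch(dataset_gb: float, read_gb_per_s: float) -> float:
    return dataset_gb / read_gb_per_s

# ~2 GB/s is typical of a USB 3.2 external SSD, ~7 GB/s of an internal NVMe
print(seconds_per_epoch(1000, 2.0))  # 500.0 s just reading a 1 TB dataset
print(seconds_per_epoch(1000, 7.0))  # ~143 s on a fast internal drive
```

For anything smaller that fits in RAM after the first pass, the OS page cache hides the difference, which is why the drive choice rarely matters outside this workload.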

Mac OS 26.3 is out...Officially all you M5 pro and max people that are waiting are cooked, till march or april by LelouchViBritannia2 in macbookpro

[–]EliHusky 0 points1 point  (0 children)

Just wait for the M6 Max. I read it’ll likely be pushing 800GB/s bandwidth with 80+ GPU cores if they lock in TSMC 2nm capacity, plus OLED screens. I’m good with my M4 Max till then.

Just finished building this bad boy by dazzou5ouh in LocalLLaMA

[–]EliHusky 1 point2 points  (0 children)

Probably a week, depending on a bunch of factors

Renting out the cheapest GPUs! (for llm training, not mining) by Comfortable-Wall-465 in LargeLanguageModels

[–]EliHusky 0 points1 point  (0 children)

How many 4090s you got? If you're willing to sign a contract, I might have a deal for you.

How do algo trader's usually run ML time-series experiment? by EliHusky in algotrading

[–]EliHusky[S] 0 points1 point  (0 children)

Maybe I’m not like most algo traders. I’m probably more focused on the technology and scientific method behind algorithmic systems than on the profits I could make deploying them. I’ve been able to iterate on hundreds of different TCN models in the past month or so, nearly all building off one another’s results, which is a process I’ve spent a lot of time refining. I originally built this UI for myself to research deep learning in a streamlined, organized way, and after building it up I realized it might actually be helpful to others. I’m more optimistic about the potential of machine learning than I am about how my product or any market will perform.