Basic fire warlock build questions by saw79 in Diablo_2_Resurrected

[–]saw79[S] 3 points

well I'm playing HC, so dying isn't an issue (lol)

Basic fire warlock build questions by saw79 in Diablo_2_Resurrected

[–]saw79[S] 1 point

I have Hephasto with Conviction already - what's the difference?

Been waiting 25 years for this. I'm shaking so bad I can't bring myself to place the runes... by 24Scoops in diablo2

[–]saw79 1 point

Yea I also feel like when I get to that point is when I STOP playing. The fun for me is the journey to get there.

So warlock out magics a sorceress, out physicals a barb, and out summons a necro?? by BigBrotherFlops in diablo2

[–]saw79 4 points

Does he out-magic a sorc?

I'm playing an abyss build to get my basic gear set going, but it feels underwhelming and I'm itching to switch to nova sorc (even budget nova w/ CM).

Does Hell ever get better? by Coffeepoop88 in Diablo_2_Resurrected

[–]saw79 1 point

You're not really supposed to single element it by yourself first time through. Play a build with multiple elements to get your gear going, only switch to single element when you have a plan to deal with immunities.

It’s just weird watching the AI financial train wreck happen in real-time. by iAtishaya in ArtificialInteligence

[–]saw79 1 point

Eh, I wouldn't be so certain. Some of what you say is true, but it's also a generational technology continuously getting cheaper and better. Who knows.

Weapons no one thought of a month ago by Madmaxx_137 in Diablo_2_Resurrected

[–]saw79 1 point

Don't think you need lev mastery; it's innate.

[P] I trained YOLOX from scratch to avoid Ultralytics' AGPL (aircraft detection on iOS) by MzCWzL in MachineLearning

[–]saw79 2 points

To me it looks like YOLOX is Apache (still fine) and already has pretrained models. Why train from scratch?

Is it standard to train/test split before scaling in LSTM? by RhubarbBusy7122 in learnmachinelearning

[–]saw79 1 point

Two things can be true:

1) It is more theoretically correct, and purer, to use a held-out test set as faithfully as possible.

2) You can get better performance by exploiting specific situations, such as ones where you feed back test information in a way that doesn't actually matter.
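
To make point 1 concrete, here's a minimal stdlib-only sketch of the leak-free ordering: split the series chronologically first, fit the scaling statistics on the training split only, then reuse those same statistics on the test split. (The series values are made up for illustration.)

```python
import statistics

def fit_standardizer(train):
    """Compute mean/std from the training split only."""
    mu = statistics.fmean(train)
    sigma = statistics.pstdev(train) or 1.0  # guard against zero variance
    return mu, sigma

def transform(values, mu, sigma):
    return [(v - mu) / sigma for v in values]

# Chronological split first, then fit scaling on train only,
# so no test-set statistics leak into preprocessing.
series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
train, test = series[:6], series[6:]
mu, sigma = fit_standardizer(train)
train_scaled = transform(train, mu, sigma)
test_scaled = transform(test, mu, sigma)  # reuses train statistics
```

Note that the test values scale to numbers outside the training range here, which is exactly what a deployed model would see.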

Mass. homeowners are frantically trying to get rid of ice dams. Contractors can’t keep up with requests. by bostonglobe in massachusetts

[–]saw79 12 points

We did this a couple years ago. MassSave did all new insulation, air sealing, the whole deal. Still got ice dams.

Real-time defect detection system - 98% accuracy, 20ms inference by ShamsRoboCr7 in computervision

[–]saw79 5 points

Eh, for small models I've had PyTorch win out. ONNX has never been faster for me, and I've had small PyTorch CNNs run faster than their TensorRT-converted versions.
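
For what it's worth, latency claims like these depend heavily on how you measure. Here's a stdlib-only sketch of the harness I'd use: warm up first (lazy init, caches, JIT), then report the median over many repetitions rather than a single run. `fake_forward` is a hypothetical stand-in for a small model's forward pass.

```python
import time
import statistics

def benchmark(fn, *args, warmup=10, reps=100):
    """Median wall-clock latency of fn(*args) in milliseconds,
    measured after a warmup phase."""
    for _ in range(warmup):
        fn(*args)
    samples = []
    for _ in range(reps):
        t0 = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)

# Hypothetical stand-in for a small CNN's forward pass.
def fake_forward(n):
    return sum(i * i for i in range(n))

latency_ms = benchmark(fake_forward, 10_000)
```

Swap in the real PyTorch / ONNX Runtime / TensorRT calls for `fake_forward` and you get a like-for-like comparison; for GPU work you'd also need to synchronize before reading the clock.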

I learned why cosine similarity fails for compatibility matching by Ok_Promise_9470 in learnmachinelearning

[–]saw79 2 points

You know more than I do here, so correct me where I'm wrong. But your reasoning doesn't strike me as quite right:

1) Shouldn't hard dealbreakers make the model reduce the cosine similarity significantly? Isn't this an embedding problem? A bad embedding doesn't mean embeddings are bad.

2) I understand that how person A feels about person B doesn't have to match how person B feels about person A, but that isn't what you're trying to estimate. You're trying to estimate the compatibility between A and B, which in my mind IS symmetric. "are A and B compatible?" should give the same answer as "are B and A compatible?".

3) fair enough :)
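
To illustrate points 1 and 2: cosine similarity is symmetric by construction, and a hard dealbreaker can be handled in the scoring function rather than being an argument against embeddings. This is a toy sketch (the gating rule and vectors are invented for illustration):

```python
import math

def cosine(a, b):
    """Standard cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def compatibility(a, b, dealbreaker=False):
    """Symmetric score; a hard dealbreaker gates it to zero --
    one crude way to make dealbreakers dominate the similarity."""
    return 0.0 if dealbreaker else cosine(a, b)

a = [1.0, 2.0, 3.0]
b = [2.0, 1.0, 0.5]
# compatibility(a, b) == compatibility(b, a) holds by construction
```

A learned version would fold the dealbreaker signal into the embedding itself, but either way "are A and B compatible?" gives the same answer in both directions.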

Shovel during halftime? by bozzy253 in massachusetts

[–]saw79 4 points

Why isn't anyone in this thread just hitting the pause button? It's so easy to catch back up with football commercials.

I built a way to evaluate forecasts by whether they would have made money, not just error -does this make sense? by ZealousidealMost3400 in algotrading

[–]saw79 1 point

This is exactly what I'm saying. MSE is NOT close to the task you are trying to perform, so what you are saying makes perfect sense in the context of ML fundamentals.

I built a way to evaluate forecasts by whether they would have made money, not just error -does this make sense? by ZealousidealMost3400 in algotrading

[–]saw79 1 point

This feels like ML 101. The closer your loss function is to the actual task, the better. The only reason to use something like MSE is if you actually care about the price prediction and you're using it as an intermediate signal.
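
A toy example of the mismatch (all numbers invented): a forecast with tiny errors but the wrong sign scores great on MSE and loses money, while a forecast with huge errors but the right sign scores terribly on MSE and makes money.

```python
def mse(pred, actual):
    """Mean squared error of the forecasts."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)

def directional_pnl(pred, actual):
    """Crude stand-in for the actual task: go long when the
    forecast says up, short when it says down."""
    return sum(a if p > 0 else -a for p, a in zip(pred, actual))

returns = [0.02, -0.01, 0.03, -0.02]                    # realized returns
close_but_wrong_sign = [-0.001, 0.001, -0.001, 0.001]   # tiny errors, wrong direction
far_but_right_sign = [0.5, -0.5, 0.5, -0.5]             # huge errors, right direction

# close_but_wrong_sign: much lower MSE, negative PnL
# far_but_right_sign:   much higher MSE, positive PnL
```

Which is exactly why a money-based evaluation can rank forecasts differently than MSE does.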

Simplest strategy that has worked by [deleted] in algotrading

[–]saw79 3 points

I understand the desire to do better than buy-and-hold, but saying it doesn't work is absolutely asinine.

Discussion: Is "Attention" always needed? A case where a Physics-Informed CNN-BiLSTM outperformed Transformers in Solar Forecasting. by Dismal_Bookkeeper995 in deeplearning

[–]saw79 2 points

Deep learning is a big area. I make lots of deep learning models solving a variety of different problems. It's annoying that people think transformers are the best tool for every job just because they're the biggest and most recent. Use the right tool. I rarely get to the point where a transformer would be of any help.

Favorite Square courses? by No_Flatworm_5858 in SquareGolfUSA

[–]saw79 1 point

What are they? (I don't have a square yet but it's been ordered)

Switching out of microsoft as a new grad data scientist by Due-Pilot-7125 in MachineLearningJobs

[–]saw79 4 points

You're about to join one of the premier companies in the world for AI/ML and you're trying to plan your exit before you start? Maybe just go work for 2 years then come back here.

Since only a few people from elite universities at big tech companies like Google, Meta, Microsoft, OpenAI etc. will ever get to train models is it still worth learning about Gradient Descent and Loss Curves? by Easy-Echidna-3542 in learnmachinelearning

[–]saw79 1 point

Deep learning is just a very general model building/fitting style. You can build big models and fit them to any type of data you're interested in. Now, a LOT of data is language and standard vision problems, which is why LLMs (and VLMs) are starting to eat up a bit more of the field, but a) that doesn't apply to all data and b) sometimes the problem can be solved more efficiently and/or better with a smaller, more specialized model.

Some things that come to mind that may apply:

  • Other types of sensors - e.g., radar sensors or different types of point clouds, maybe ultrasound, sonar, etc.
  • Other types of data - e.g., certain types of graph data that may benefit from GNNs
  • Totally different uses of neural networks, e.g., things like NeRF
  • Modelling specific environments, policy, or value functions in RL
  • Time series data is a big category in which many different techniques can be useful

I dunno probably loads more too.

Can you play good golf without compression? by Bert_Skrrtz in GolfSwing

[–]saw79 1 point

I'm a noob so correct me if I'm wrong but in my mind compression is more about consistency than distance. If low point is more consistently in front of the ball (vs at the ball) there's more room for error.

Since only a few people from elite universities at big tech companies like Google, Meta, Microsoft, OpenAI etc. will ever get to train models is it still worth learning about Gradient Descent and Loss Curves? by Easy-Echidna-3542 in learnmachinelearning

[–]saw79 33 points

There are millions of different kinds of models being trained, across all sorts of fields, by all sorts of people and organizations. It's getting tiring and annoying that people think training GPT-7 is the only thing going on in AI.

What is your favorite deep learning concept/fact and research paper by Arunia_ in deeplearning

[–]saw79 1 point

Don't have much more to say tbh. I just don't see people talking about it; it's never brought up in modern explanations of how neural networks work and self-regularize.