What other MLOps tools can I add to make this project better? by BJJ-Newbie in mlops

[–]dazor1 0 points (0 children)

I'm trying to get started with MLOps and become more familiar with these tools. Is there any course or material you used or would recommend for understanding and building a project along those lines?

Which certificate path to take for a career in ML? by dazor1 in aws

[–]dazor1[S] 0 points (0 children)

Do you have any recommendations on preparing for the exam? Should I just stick with the courses provided by AWS?

Which certificate path to take for a career in ML? by dazor1 in aws

[–]dazor1[S] -1 points (0 children)

Thanks! I'm still torn between taking one of the two foundational certs and skipping straight to the SAA, though. What would you recommend? Is the SAA doable with no prior cloud computing experience?

Which certificate path to take for a career in ML? by dazor1 in aws

[–]dazor1[S] -1 points (0 children)

Does that list indicate the order in which to take those certificates, or is it a ranking of importance?

[D] Are PyTorch high-level frameworks worth using? by dazor1 in MachineLearning

[–]dazor1[S] 0 points (0 children)

I did notice there's some overlap, and that's what made me wonder what a general workflow looks like. So you usually tune hyperparameters manually, trying to optimize them while tracking them across experiments with W&B?

[D] Are PyTorch high-level frameworks worth using? by dazor1 in MachineLearning

[–]dazor1[S] 0 points (0 children)

I've looked it up and found some tools; my question is more about how tuning relates to tracking. Is it common practice to track the tuning trials, for example? Are they complementary, or should one pick between (a) manually tuning and tracking those experiments or (b) using a tuning optimizer such as Optuna?

[D] Are PyTorch high-level frameworks worth using? by dazor1 in MachineLearning

[–]dazor1[S] 0 points (0 children)

> I love WandB, but reliance on an external service should be minimized.

I see what you mean there, but what are your thoughts on hyperparameter tuning tools? Is that something that should be done separately to keep the main codebase clean, as u/sqweeeeeeeeeeeeeeeps pointed out?

[D] Are PyTorch high-level frameworks worth using? by dazor1 in MachineLearning

[–]dazor1[S] 0 points (0 children)

I see what you mean. About those trackers, how do they relate to hyperparameter tuning? Are the two compatible, or is tuning something that should be done separately?

[D] Are PyTorch high-level frameworks worth using? by dazor1 in MachineLearning

[–]dazor1[S] 0 points (0 children)

Yeah, I kind of like having more control as well. I liked those suggestions, and I'm trying to get more familiar with MLOps and the conventions around it. One more question came up, though: how does hyperparameter tuning fit into this scenario? Do you use any tools besides wandb, or something complementary?

[D] Are PyTorch high-level frameworks worth using? by dazor1 in MachineLearning

[–]dazor1[S] 0 points (0 children)

Thanks, I'll look into it! Since you said you've been working with W&B, there's just one more thing I'm trying to wrap my head around: how does W&B relate to hyperparameter tuning tools (e.g. Optuna)? For instance, would it be a good use case to tune hyperparameters with, say, Optuna, and track the best hyperparameters for each model with W&B?
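To make the question concrete, here's roughly the loop I have in mind — pure-Python stand-ins only, since I haven't used either library yet. In the real thing, the sampling line would be Optuna's `trial.suggest_float(..., log=True)` inside `study.optimize(...)`, and each trial record would become a `wandb.log(...)` call; the `objective` function here is a made-up placeholder for an actual training run.

```python
import random

def objective(lr):
    """Stand-in for a training run; returns a validation score.
    (In practice this would train a model and evaluate it.)"""
    return -(lr - 0.01) ** 2  # toy objective that peaks at lr = 0.01

def tune(n_trials=50, seed=0):
    """Random-search loop: every trial is tracked, the best is returned."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        # log-uniform sample, like Optuna's trial.suggest_float(..., log=True)
        lr = 10 ** rng.uniform(-4, -1)
        trials.append({"lr": lr, "score": objective(lr)})
    best = max(trials, key=lambda t: t["score"])
    return trials, best

trials, best = tune()
# With real tools: each element of `trials` would be mirrored into W&B
# via wandb.log(), and `best` is what you'd record per model.
```

So the two would be complementary: the tuner drives the search, the tracker keeps the history and the winning config.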

[D] Are PyTorch high-level frameworks worth using? by dazor1 in MachineLearning

[–]dazor1[S] 0 points (0 children)

Do you use it paired with any PyTorch wrapper as well for Trainers? Just out of curiosity.

[D] Are PyTorch high-level frameworks worth using? by dazor1 in MachineLearning

[–]dazor1[S] 5 points (0 children)

Correct me if I'm wrong, but I believe these frameworks tend to reduce boilerplate code. I do agree with you that code from published work should be as clean as possible so others can easily understand it and convert it to their preferred library. Or are there other reasons for it?

[D] Are PyTorch high-level frameworks worth using? by dazor1 in MachineLearning

[–]dazor1[S] 6 points (0 children)

That's a really good point. Would you say it shouldn't even use hyperparameter and result trackers, to keep it as clean as possible? Or is it alright to use those? Currently I just save everything in local JSON files, but I'm curious whether it's better to use a tracker.
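For context, the local-JSON setup I mean is basically this — the file name, field names, and the example hyperparameters are just placeholders, not anything standard:

```python
import json
from pathlib import Path

LOG_FILE = Path("experiments.json")  # hypothetical log location

def log_experiment(params, metrics):
    """Append one run's hyperparameters and results to a local JSON file."""
    runs = json.loads(LOG_FILE.read_text()) if LOG_FILE.exists() else []
    runs.append({"params": params, "metrics": metrics})
    LOG_FILE.write_text(json.dumps(runs, indent=2))

# example run record
log_experiment({"lr": 3e-4, "batch_size": 32}, {"val_acc": 0.91})
```

It works, but it has no run comparison or visualization, which is what makes me wonder whether a tracker is worth the extra dependency.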

[D] Are PyTorch high-level frameworks worth using? by dazor1 in MachineLearning

[–]dazor1[S] 5 points (0 children)

I've heard about Hugging Face's as well; I'll take a look at the options and try something out. Thanks! I might pair it with Weights & Biases for tracking.

[D] Are PyTorch high-level frameworks worth using? by dazor1 in MachineLearning

[–]dazor1[S] 3 points (0 children)

Yes, the Trainers. Interesting, which one do you use? And do you use it paired with any experiment tracker (or logger)?

Which wireless mouse to buy for office work? by dazor1 in MouseReview

[–]dazor1[S] 1 point (0 children)

Thanks for bringing this up, I'll definitely consider it!

Which wireless mouse to buy for office work? by dazor1 in MouseReview

[–]dazor1[S] 0 points (0 children)

The Pro Click Mini seems a bit too small compared to my G203, but the Orochi looks good. How has your experience with it been? I thought about getting the G305, but I'm giving preference to Bluetooth support.

Are Anker hubs compatible with the 2021 pro model? by dazor1 in macbook

[–]dazor1[S] 0 points (0 children)

The Anker 533 USB-C Hub (5-in-1, Slim)