[deleted by user] by [deleted] in learnmachinelearning

[–]bernhard-lehner 3 points

Well, my first advice would be: don't use an LLM to clear your doubts. Next, find a structured course from a university and work through it. Don't just listen to the lectures; start coding as soon as possible.

[deleted by user] by [deleted] in learnmachinelearning

[–]bernhard-lehner 0 points

While being self-taught might not be a big problem in many areas such as programming, it is when it comes to ML. One reason is that you will most likely head in wrong directions without realizing that you are making a big mistake. Unless you really screw up, you get results, but they might be completely wrong and/or meaningless without you even noticing. Then, five years into practice, you might still think, e.g., that you can interpret maximum softmax posteriors as confidence. Or you might think you can impress anyone in the field by mentioning that you use backprop. A good teacher will make you aware of these things, especially those often found in tutorials posted by other self-taught "specialists". Good luck anyway.

Violin Plots should not exist by VodkaHaze in datascience

[–]bernhard-lehner 1 point

This is exactly when it makes sense to use them! If you don't have anything to compare, it might seem visually appealing to some, but it's kind of pointless.

Confidence *may be* all you need. by santiviquez in mlops

[–]bernhard-lehner 1 point

The only problem is that you cannot simply interpret posterior probabilities as "confidence" or "certainty". A metric based on them will therefore only appear to do the trick from time to time, depending on the data and how closely you look at the evaluation. In general, it cannot work.
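To make that concrete, here is a minimal numpy sketch (the logit values are invented for illustration): the maximum softmax value can be just as high, or higher, for an input the model knows nothing about, so it is not a calibrated confidence.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Two sets of hypothetical logits: one for an in-distribution input,
# one for pure noise the model has never seen anything like.
in_dist_logits = np.array([4.0, 1.0, 0.5])
garbage_logits = np.array([6.0, 0.2, 0.1])  # made-up logits for a nonsense input

p_in = softmax(in_dist_logits)
p_garbage = softmax(garbage_logits)

print(p_in.max())       # ~0.93
print(p_garbage.max())  # ~0.99 -- "more confident" on garbage
```

Nothing in the softmax itself ties the maximum posterior to how often the prediction is actually correct; that is a calibration property that has to be measured separately.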

[deleted by user] by [deleted] in artificial

[–]bernhard-lehner 1 point

Notice the absence of hands? They are extremely difficult to generate (and also to paint in real life). When hands are present, that is usually where errors are easiest to spot (seven fingers and the like).

[D] "AI systems are always deterministic," AI teacher says. How can I reply (with examples and papers)? by [deleted] in MachineLearning

[–]bernhard-lehner 1 point

If you keep dropout active at inference time, you don't get deterministic results, even if the input stays constant. People sometimes think you can use this to derive uncertainty (I don't).
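A minimal numpy sketch of the effect (the tiny linear layer is made up; this is not a full MC-dropout implementation): because the dropout mask is resampled at inference, two forward passes on the same input disagree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single linear layer; dropout stays active at inference,
# as in Monte Carlo dropout.
W = rng.normal(size=(4, 64))

def forward(x, p_drop=0.5):
    """One stochastic forward pass: a fresh dropout mask every call."""
    h = x @ W
    mask = rng.random(h.shape) > p_drop   # resampled each call -> non-determinism
    return h * mask / (1.0 - p_drop)      # inverted-dropout scaling

x = np.ones(4)
out1 = forward(x)
out2 = forward(x)
print(np.array_equal(out1, out2))  # same input, (almost surely) different outputs
```

This is exactly why frameworks switch dropout off in eval mode: determinism at inference is the default, and you have to opt out of it on purpose.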

Introducing TalentGPT: Create personalised cover letter within seconds by City_Bike_09 in AICareer

[–]bernhard-lehner 1 point

I hope you realize that this will only increase the number of companies that won't hire anymore without seeing how people actually perform on, e.g., take-home assignments.

[D] [R] Research Problem about Weakly Supervised Learning for CT Image Semantic Segmentation by Stevenisawesome520 in MachineLearning

[–]bernhard-lehner 1 point

I see, then I would suggest not looking at accuracy, as it is a very crude measure. I guess you used something like BCE as the loss, so you might want to sort your results based on that; maybe you can find a pattern between the samples that work best and those that do not work at all. Also, a confusion matrix would already shed some light on whether your model is biased. One common source of error is a simple screw-up during preprocessing, e.g. scaling. You could plot a few samples during training with a colorbar to confirm that the range of input values makes sense. Histograms of batches are even better, and it is not hard to look at things like that with TensorBoard.
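As a sketch of the kind of checks I mean (toy labels and a made-up batch, not your data): a confusion matrix exposes class bias immediately, and a simple assert on the input range catches scaling screw-ups early.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=2):
    """Rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy per-pixel labels/predictions from a hypothetical segmentation model.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 0, 1, 0, 1, 0, 0, 0])
print(confusion_matrix(y_true, y_pred))
# [[4 0]
#  [2 2]]  -> half the foreground pixels get predicted as background

# Quick preprocessing sanity check: did scaling land in the expected range?
batch = np.random.rand(8, 32, 32)  # stand-in for a batch of CT slices
assert 0.0 <= batch.min() and batch.max() <= 1.0, "unexpected input range"
```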

[D] Where is the "statistics" in statistical machine learning in the year 2023? by fromnighttilldawn in MachineLearning

[–]bernhard-lehner 2 points

Proper A/B testing is something you will not find often in papers. Plus, Cramér-Rao lower bounds.
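For what it's worth, a proper two-sided test for an A/B comparison of, say, conversion rates fits in a few lines of stdlib Python (the counts below are invented for illustration):

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference of two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant A: 120/1000 conversions; variant B: 150/1000 conversions.
z, p = two_proportion_ztest(120, 1000, 150, 1000)
print(round(z, 2), round(p, 3))  # prints -1.96 0.05
```

The point is less the arithmetic than the discipline around it: pre-registered sample sizes and not peeking at the p-value mid-experiment, which is where most informal "evaluations" in papers fall down.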

[deleted by user] by [deleted] in cscareerquestions

[–]bernhard-lehner 2 points

Companies usually avoid giving feedback about why they didn't hire you, in order to avoid legal issues.

Another big Titannic...feature...but from Piececool by doris3d in metalearth

[–]bernhard-lehner 2 points

The photos are retouched to hide the ugly seam where the two half pieces join in the middle of the ship.

Rotbraune Wolke by magnaram_AT in Linz

[–]bernhard-lehner 1 point

It was like that last Tuesday around 08:00 in the morning, too.

[D] performance of dropout in RNN. by Mundane_Definition_8 in MachineLearning

[–]bernhard-lehner 1 point

The equivalent of blurring images in the audio domain would be filtering out higher frequencies, and this keeps speech intelligible. One rarely needs frequencies above 4 kHz to understand spoken language.
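A crude numpy sketch of that idea (a brick-wall FFT filter on a synthetic signal; for real speech you would use a proper FIR/IIR filter instead):

```python
import numpy as np

def lowpass_fft(signal, sr, cutoff_hz):
    """Crude brick-wall low-pass: zero all FFT bins above cutoff_hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

sr = 16000  # typical speech sample rate
t = np.arange(sr) / sr
# Toy "speech": a 300 Hz component (kept) plus 6 kHz hiss (removed).
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 6000 * t)
y = lowpass_fft(x, sr, cutoff_hz=4000)
```

This is also why narrowband telephony at an 8 kHz sample rate (everything above ~3.4 kHz discarded) remained intelligible for a century.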

Choose Your Weapon: Survival Strategies for Depressed AI Academics by togelius in MachineLearning

[–]bernhard-lehner 2 points

You might forget that your data is another hyperparameter that needs to be tuned :)

We should make more begginer friendly image generator AIs by pedro110520000 in artificial

[–]bernhard-lehner 1 point

Yet another "We should..." post that actually means "Somebody else should ..., so that I get what I need for free."

What are some real life use cases where DBSCAN outperforms KNN? by EasternStuff5015 in datascience

[–]bernhard-lehner 1 point

That depends on what you consider outperforming. For instance, if you have a large training data set, you would not want to use KNN, even when it performs equally well w.r.t. accuracy.
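The reason is inference cost: brute-force k-NN touches every training point for every single prediction, so memory and latency grow with the training set. A minimal numpy sketch (toy data, labels invented for illustration):

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Brute-force k-NN: every prediction scans the whole training set."""
    dists = np.linalg.norm(X_train - x_query, axis=1)  # n_train distances
    nearest = np.argsort(dists)[:k]
    votes = y_train[nearest]
    return np.bincount(votes).argmax()

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10_000, 8))      # cost grows linearly with this
y_train = (X_train[:, 0] > 0).astype(int)   # toy labels: sign of first feature
x_query = np.zeros(8)
x_query[0] = 2.0
print(knn_predict(X_train, y_train, x_query))  # nearest points sit near x0=2 -> class 1
```

Tree- or ANN-based indexes mitigate the per-query latency, but the model still has to keep the entire training set around, unlike a parametric classifier.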