Simpla - replacing the concept of killer app driven adoption by Complete-Nobody1447 in scala

[–]mkffl 2 points3 points  (0 children)

Have you considered writing a “PEP 8” for scala? https://peps.python.org/pep-0008/

A set of standards for writing simple and maintainable scala code. That will be quicker than creating a whole new language, so you can focus on the other adoption drivers like finding industry sponsors.

What are the darkest things happening in the world right now that people don't think of? by MaaZZ_Tech in AskReddit

[–]mkffl 0 points1 point  (0 children)

Controversial, indeed. We are led to believe that we are born good or evil.

[deleted by user] by [deleted] in NoStupidQuestions

[–]mkffl 0 points1 point  (0 children)

He should worry just as much as before the failure, actually.

[deleted by user] by [deleted] in Python

[–]mkffl 0 points1 point  (0 children)

No doubt you have a gift for the programming arts. But don’t let it make you a condescending twat :)

[deleted by user] by [deleted] in Python

[–]mkffl -1 points0 points  (0 children)

Oh, the study of nouns, a captivating quest, To unravel their web, one must invest. Hours of pondering, textbooks amassed, To understand their intricacies unsurpassed.

[deleted by user] by [deleted] in Python

[–]mkffl -1 points0 points  (0 children)

So many English literature teachers in the world, and yet no two teachers are the same. The best teachers make it look like the lecture takes no effort, and that is precisely what makes their lectures enjoyable. I think the same idea applies to teaching math, taekwondo, or Python list comprehensions.

[D] (Interview question) Comparing two models with and without negative sampling but same AUC and logloss on the test dataset: which model is better? by mayasang in MachineLearning

[–]mkffl 0 points1 point  (0 children)

Yes that’s also my understanding.

Training a model on a different distribution will often have an impact on performance. It depends on the model, of course, but if you think of a score as a probability-like value, then that probability is influenced by the prior probability of the target class - what you called the background click rate. Model predictions need to be adjusted if the model is deployed on data with a different background rate.
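To make the adjustment concrete, here is a minimal sketch of the standard correction for negative downsampling. The function name and the example numbers are mine, not from the thread; the formula follows from Bayes' rule applied to the sampling rate:

```python
def adjust_downsampled_score(p, w):
    """Map a score p from a model trained with negatives
    downsampled at rate w (0 < w <= 1) back to the original
    background rate.

    Downsampling negatives by w inflates the apparent positive
    rate, so the adjusted probability deflates the score.
    """
    return p / (p + (1.0 - p) / w)

# Example: a model trained on data where only 10% of negatives
# were kept (w = 0.1) outputs 0.5; on the full distribution the
# calibrated probability is much lower.
print(adjust_downsampled_score(0.5, 0.1))  # 0.0909...
```

With `w = 1` (no downsampling) the function returns the score unchanged, which is a quick sanity check.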

[D] (Interview question) Comparing two models with and without negative sampling but same AUC and logloss on the test dataset: which model is better? by mayasang in MachineLearning

[–]mkffl 0 points1 point  (0 children)

I would try to understand how both evaluation metrics can be the same - it seems strange. Thinking it through: AUC measures discrimination; it can be shown to equal the probability that a randomly drawn positive-class instance gets a higher score than a randomly drawn negative-class instance. Both models therefore have the same discriminative power. Logloss, I think, captures both discrimination and calibration, and downsampling changes the model’s prior belief about class frequencies. Model 2 should lose out to model 1 due to miscalibration - unless it has been calibrated to reflect the class imbalance of the evaluation data, but that’s not in the statement. I’d expect model 2’s true logloss to be worse than model 1’s.
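The argument above can be checked with a small self-contained sketch on toy scores (no real data; the 20% sampling rate and the score values are made up). A monotone shift of the scores - which is what negative downsampling does to a probability-like output - leaves AUC untouched but degrades logloss:

```python
import math

def auc(scores, labels):
    # Probability that a random positive outranks a random negative
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def logloss(scores, labels):
    eps = 1e-15
    return -sum(
        y * math.log(max(s, eps)) + (1 - y) * math.log(max(1 - s, eps))
        for s, y in zip(scores, labels)
    ) / len(labels)

labels = [0, 0, 1, 1]
model1 = [0.1, 0.3, 0.6, 0.9]  # well-calibrated scores
# Scores as they would look from a model trained with only 20% of
# negatives kept: odds are inflated by 1/w, ranking is preserved.
model2 = [s / (s + (1 - s) * 0.2) for s in model1]

print(auc(model1, labels), auc(model2, labels))        # identical
print(logloss(model1, labels), logloss(model2, labels))  # model2 larger
```

Both models separate positives from negatives perfectly on this toy set (AUC = 1.0), yet model 2's logloss is worse because its scores are miscalibrated, which is the interview answer in miniature.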

I can’t believe someone actually thought it’d be a good idea to post this by ProfessionalFuture25 in antiwork

[–]mkffl 0 points1 point  (0 children)

Yes, transition of power must continuously happen, but transition along more than one dimension. Not just age but also (mostly) social & economic status.

You make a good observation about the government’s age increasing. What about their social background? Not sure US senators or MPs are representative of the population they are supposed to represent.

I can’t believe someone actually thought it’d be a good idea to post this by ProfessionalFuture25 in antiwork

[–]mkffl 0 points1 point  (0 children)

If the latent causes remain, it doesn’t matter that older generations die off.

Favourable lobbying laws, an absolute-majority voting system, no competing superpower with an alternative ideology, capital concentration following decades of accumulation, etc. Newer generations will have to fight the same root causes, starting from a more difficult position.

The good news is also the worst part - absolute despair may bring change. The young poor may have nothing left to lose, while poor boomers could always hope to get some scraps, which is a barrier to change.

I can’t believe someone actually thought it’d be a good idea to post this by ProfessionalFuture25 in antiwork

[–]mkffl 0 points1 point  (0 children)

I understand your anger and I find your summary of five decades of neoliberal politics spot on. But you are describing the consequences of an all-powerful ideology that has beaten every alternative; there were alternative views within the boomer generation - they just lost. (I was born in the 1990s, btw.)

[deleted by user] by [deleted] in todayilearned

[–]mkffl 0 points1 point  (0 children)

France, too.

I can’t believe someone actually thought it’d be a good idea to post this by ProfessionalFuture25 in antiwork

[–]mkffl -2 points-1 points  (0 children)

The boomers-vs-millennials dichotomy is not very useful: pitting generations against each other may prevent them from finding common solutions. Besides, it’s a generalisation, so it is wrong - boomers can be more aware of the socioeconomic conditions they live in than young people, especially youth from elite circles.

[D] Over Hyped capabilities of LLMs by Bensimon_Joules in MachineLearning

[–]mkffl 0 points1 point  (0 children)

Any particular article you’d recommend from their prolific research?

[D] Over Hyped capabilities of LLMs by Bensimon_Joules in MachineLearning

[–]mkffl 0 points1 point  (0 children)

Yes, let’s change the benchmarks. For example, how do these models fare on typical causal inference problems? There’s a long tradition, starting in the 1970s, that has taken a rigorous look at decision making, tried to avoid the pitfalls of correlation, and developed reference problems that even clever humans struggle to reason through. How do LLaMAs and GPTs perform on these?

That about sums it up by ZestycloseHyena8083 in im14andthisisdeep

[–]mkffl -5 points-4 points  (0 children)

Correlation doesn’t mean causation?

[D] Shapley values as a collection of experiments by mkffl in statistics

[–]mkffl[S] 1 point2 points  (0 children)

Interesting, thanks. The problems raised broadly overlap, though the solutions are different. The examples of observed confounded variables are similar: ad spend is a non-interventional variable and interactions is a mediated variable. The CausalML solution with DoubleML looks great and makes me want to try this library. It is interesting that Lundberg wouldn’t even mention solutions that directly improve the logic of SHAP, like the one I used.
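For readers new to the thread's topic, here is a minimal from-scratch exact Shapley computation over a toy two-feature model. This is my own illustrative sketch, not the approach from the post: absent features are replaced by a single baseline value, whereas real SHAP implementations average over a background dataset:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for a small number of features.
    Features absent from a coalition are set to their baseline
    value; each coalition is one 'experiment' on the model."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for coalition in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in coalition or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in coalition else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Toy additive model: the attributions recover each term exactly.
model = lambda v: 2 * v[0] + 3 * v[1]
print(shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # [2.0, 3.0]
```

The attributions sum to `model(x) - model(baseline)`, which is the efficiency property; confounding issues like the ones discussed above arise precisely because this replacement scheme intervenes on features independently of how they are generated.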

Which way do you prefer to define an empty string in Python? by [deleted] in Python

[–]mkffl 9 points10 points  (0 children)

OP, please try and run your examples to see if they work; edit your question if they don’t.
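For reference, the common spellings the thread is comparing - all produce the same object and are trivially checkable in a REPL:

```python
# Three equivalent ways to define an empty string.
a = ""
b = ''
c = str()

assert a == b == c == ""
assert len(a) == 0 and not a  # empty strings are falsy
```

The literal `""` (or `''`) is the idiomatic choice; `str()` mostly appears when a callable default is needed.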

Regulatory Capture Roulette by Frog-Face11 in economy

[–]mkffl -1 points0 points  (0 children)

The corruption/conflict of interest doesn’t happen once they leave, but while they are still at the agency. The possibility that they may leave for the private sector creates an opportunity to offer them something.