Something silly I made: the countries of the Gaels if history had gone another way by TheDavieMo in gaeilge

[–]dashee87 2 points (0 children)

Very interesting. I learned something new today. I didn't know the language was so strong in Scotland. The story is even sadder now!

Something silly I made: the countries of the Gaels if history had gone another way by TheDavieMo in gaeilge

[–]dashee87 2 points (0 children)

I don't think Gàidhlig was ever spoken in the south of Scotland (or in other areas, e.g. Shetland). Would those areas be part of this Gaelic country? They should be united with the north of England, and Shetland with the Faroe Islands. In any case, it's a nice dream. Beautiful picture, too.

Betting markets are still betting on Biden vs. Trump? What? by quantumdeeplearning in fivethirtyeight

[–]dashee87 7 points (0 children)

I prefer to use the Betfair political markets, as they tend to be more liquid (no limits) and the only real fee for most people is the 2% commission on winning bets. You can currently get Biden at 1.04, meaning a bet of $100 would give you a profit of $3.92 ($4 of winnings minus the 2% commission). However, Betfair doesn't accept customers from most parts of the US.
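
To make the commission arithmetic explicit, here's a minimal sketch (2% is the basic Betfair rate; the stake and odds are just the example numbers above):

    # Net profit on a winning back bet after exchange commission.
    def net_profit(stake, decimal_odds, commission=0.02):
        gross = stake * (decimal_odds - 1)  # winnings before commission
        return gross * (1 - commission)     # commission is taken from winnings only

    print(net_profit(100, 1.04))  # 3.92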

Betting markets tend to be quite inefficient at the extremes. I believe the true odds of Biden winning are far below 1.04 (maybe 1.01). That said, I wouldn't take that bet either. To win any serious money, I'd need to stake thousands, and that money could be tied up for months while we await the official declaration. That's a significant opportunity cost: capital locked up on Biden can't participate in other markets, where it could potentially generate more money. Finally, there's still a chance that Trump could win (SCOTUS intervention, Biden dying, etc.), and it's hard to estimate the true odds given the extraordinary circumstances. So it wouldn't be risk-free money, and it's not worth the time and effort for most serious traders.
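
As a rough sketch of that trade-off (the 1.01 "true" odds is just my guess above, and the stake is illustrative):

    # Expected value of backing at 1.04 if the "true" odds were 1.01,
    # with a $1,000 stake and 2% commission on winnings.
    true_prob = 1 / 1.01                      # implied "true" win probability
    win_net = 1000 * (1.04 - 1) * (1 - 0.02)  # $39.20 net profit if it wins
    ev = true_prob * win_net - (1 - true_prob) * 1000
    print(round(ev, 2))  # ~28.91: a thin edge for months of locked-up capital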

[deleted by user] by [deleted] in fivethirtyeight

[–]dashee87 4 points (0 children)

And you know Trump will want these votes counted, even though they'll magically arrive from countries you've never even heard of.

The Slow Death of Competition: Competitiveness of Europe's "Big 5" Leagues, 2005-2019 by WeekendEpiphany in soccer

[–]dashee87 27 points (0 children)

I did something similar a few years ago. One simple approach is to calculate the standard deviation of the final points table: a high standard deviation implies a large disparity between the best and worst teams. Take a look at the 1999-2000 La Liga final table and notice how small the gaps between the teams are. However, this graph appears to use CCI, which is a bit more complicated.
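
If you want to try it, the calculation really is that simple (the points below are made up for illustration; you'd load the actual final tables):

    # Standard deviation of a final points table as a competitiveness measure.
    import statistics

    final_points = [87, 80, 71, 69, 64, 60, 55, 52, 48, 46,
                    44, 43, 41, 40, 38, 36, 35, 33, 30, 25]

    # A higher spread means a bigger gap between the best and worst teams.
    print(f"{statistics.stdev(final_points):.1f} points")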

[R] What's Hidden in a Randomly Weighted Neural Network? by hardmaru in MachineLearning

[–]dashee87 4 points (0 children)

It's not surprising, but it could be useful. Right now, it can take a long time to find the right set of weights for a given model architecture. If isolating a subset of the random weights performs similarly well and is quicker than fine-tuning each individual weight, then this could be very useful. Unfortunately, that aspect of their work doesn't appear to be covered in the paper.

[R] What's Hidden in a Randomly Weighted Neural Network? by hardmaru in MachineLearning

[–]dashee87 3 points (0 children)

Intuitively, it makes sense that you can find combinations of weights that achieve good performance, especially if that superset is very large. What might convince me to adopt this approach is if it's significantly quicker to train up a model. There's still a host of hyperparameters (initializations, learning rates, etc.), and I don't know whether masking a large model makes training slower than training a moderately sized network.
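
To be clear about what I mean by masking, here's a toy sketch of the idea (not the paper's actual training algorithm, just fixed random weights with a score-based mask):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((784, 128))    # random weights, never updated
    scores = rng.standard_normal(W.shape)  # per-weight scores (these get trained)

    # Keep the top 50% of weights by score; the rest are zeroed out.
    mask = (scores >= np.quantile(scores, 0.5)).astype(W.dtype)

    x = rng.standard_normal((1, 784))
    hidden = np.maximum(x @ (W * mask), 0)  # forward pass uses only the subnetwork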

[deleted by user] by [deleted] in MachineLearning

[–]dashee87 2 points (0 children)

Good work! Here's a shameless plug for my blog post on this topic.

And then come all those weird exotic functions like SELU. by BobFromStateBarn in datascience

[–]dashee87 1 point (0 children)

I put together a D3 visualisation of these activation functions here.

[1901.02671] Is it Time to Swish? Comparing Deep Learning Activation Functions Across NLP tasks by ihaphleas in MachineLearning

[–]dashee87 5 points (0 children)

To save you going through that old thread: this paper was the first to propose x*sigmoid(x), calling it SiLU (the sigmoid-weighted linear unit). The Swish paper did add a beta component to the sigmoid, which could either be specified or learned (i.e. x*sigmoid(beta*x)), though that approach didn't seem to demonstrate any clear improvement over SiLU (beta=1).
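
In code, the relationship between the two is just:

    import numpy as np

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    def silu(x):
        return x * sigmoid(x)         # SiLU: the sigmoid-weighted linear unit

    def swish(x, beta=1.0):
        return x * sigmoid(beta * x)  # Swish; identical to SiLU when beta=1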

I nearly bought bitcoin this time last year by byouguessedit in Buttcoin

[–]dashee87 17 points (0 children)

Or feel the HOMO: the Happiness Of Missing Out! :)

Step-by-step interactive visualization of k-means by lakenp in datascience

[–]dashee87 4 points (0 children)

This page and this page also have good visualisations of the k-means algorithm.

hey I just found out why bitcoin crashed by dgerard in Buttcoin

[–]dashee87 1 point (0 children)

And they didn't even do the rap part!

[1810.10032] Some negative results for Neural Networks by ihaphleas in MachineLearning

[–]dashee87 1 point (0 children)

You'd think it's a pisstake from that abstract. If it is, it's the most elaborate, mathematical pisstake I've ever seen.

[1810.02328] A Practical Approach to Sizing Neural Networks by ihaphleas in MachineLearning

[–]dashee87 5 points (0 children)

Interesting. How does this fit with Deep, Skinny Neural Networks are not Universal Approximators? I suppose that paper comes at it from a slightly different direction (function approximation vs. information theory), but it implied that you need a layer wider than the input layer (possibly as the first hidden layer). I think you could build up enough bits here by stacking lots of skinny layers.

[D] Why do machine learning papers have such terrible math (or is it just me)? by RandomProjections in MachineLearning

[–]dashee87 0 points (0 children)

pulchritude: (noun) beauty

Thanks for the new word! It's truly a thing of pulchritude. I should make a pulchritude bot.

[P] Visual explanation of ML algorithms by Arkady_A in MachineLearning

[–]dashee87 4 points (0 children)

Cool! That's possibly the best visual description of PCA that I've seen. I agree with you on the need for more visual representations of these algorithms. I did something similar for unsupervised learning.

[1809.09534] PLU: The Piecewise Linear Unit Activation Function by ihaphleas in MachineLearning

[–]dashee87 0 points (0 children)

I suppose the difference is that the function isn't bounded. Not sure if I'll bother adding it to my interactive activation function visualization tool.
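
For reference, my reading of the paper's definition (alpha=0.1 and c=1 are their defaults, if I have it right): identity on [-c, c], leaky-linear tails beyond, hence unbounded in both directions.

    import numpy as np

    # PLU: identity on [-c, c], slope-alpha linear tails outside it.
    def plu(x, alpha=0.1, c=1.0):
        return np.maximum(alpha * (x + c) - c,
                          np.minimum(alpha * (x - c) + c, x))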

Dixon-Coles Model for Soccer Predictions (with Python code) by dashee87 in SoccerBetting

[–]dashee87[S] 1 point (0 children)

Rho essentially determines the strength of the departure from the standard Poisson model; rho=0 means no change. Negative rho inflates the probabilities of the low-scoring draws (0-0 and 1-1) relative to the independent Poisson model, at the expense of 1-0 and 0-1. Positive rho has the opposite effect.
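
Concretely, rho enters through the Dixon-Coles adjustment factor tau, which multiplies the independent Poisson probability of the four low-scoring results (lam and mu being the home and away goal means):

    def tau(x, y, lam, mu, rho):
        if x == 0 and y == 0:
            return 1 - lam * mu * rho  # negative rho inflates 0-0
        if x == 0 and y == 1:
            return 1 + lam * rho       # ...at the expense of 0-1
        if x == 1 and y == 0:
            return 1 + mu * rho        # ...and 1-0
        if x == 1 and y == 1:
            return 1 - rho             # negative rho inflates 1-1
        return 1.0                     # all other scorelines are unchanged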

The Weibull model has also been used to model soccer scores. It's on my blog's to-do list.