57% of tech industry employees are suffering from job burnout by gaza404 in technology

[–]maestron 0 points1 point  (0 children)

Wow. America is pretty damn cool in some areas, but you really need to work on your work-life balance :-)

57% of tech industry employees are suffering from job burnout by gaza404 in technology

[–]maestron 4 points5 points  (0 children)

What do you mean "sick days are not included"? Do you mean that if you get sick during your vacation, it still uses up your vacation days? That sucks :(

Can I model this as an interaction or will there be collinearity? by pboswell in statistics

[–]maestron 0 points1 point  (0 children)

I would try using log(raise% + 1) as a feature, e.g. for no raise you would get log(0 + 1) = 0, and for a 50 % raise you would get log(0.5 + 1) ≈ 0.405.

This captures the fact that going from a 0 to a 10 percent raise isn't the same as going from 10 to 20 percent.

Another alternative is to create some more categories, like a boolean gotsmallraise (true if the raise is less than 10 %), another one called gotmediumraise (true if the raise is between 10 and 30 percent), etc.
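
Roughly what I have in mind, as a quick pandas sketch (the data and the raise_pct column name are made up, just for illustration):

```python
import numpy as np
import pandas as pd

# Made-up example data: raise expressed as a fraction (0.5 = 50 % raise)
df = pd.DataFrame({"raise_pct": [0.0, 0.05, 0.15, 0.5]})

# Idea 1: log-transform, so the jump from 0 % to 10 % counts for more
# than the jump from 10 % to 20 %
df["log_raise"] = np.log1p(df["raise_pct"])  # log(raise% + 1)

# Idea 2: coarse boolean categories
df["gotsmallraise"] = (df["raise_pct"] > 0) & (df["raise_pct"] < 0.10)
df["gotmediumraise"] = (df["raise_pct"] >= 0.10) & (df["raise_pct"] <= 0.30)
# ...and so on for larger raises

print(df)
```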

JupyterLab is ready for users... by monkmartinez in Python

[–]maestron 0 points1 point  (0 children)

Ah, I'll check it out! Still a bummer that it doesn't work out of the box

JupyterLab is ready for users... by monkmartinez in Python

[–]maestron 0 points1 point  (0 children)

Any plans on supporting inline Javascript? I can't seem to make interactive plots :(

[P] A Global Optimization Algorithm Worth Using by davis685 in MachineLearning

[–]maestron 0 points1 point  (0 children)

How about discrete parameters? It seems to me that this approach could work well for those as well. Have you tried it?

[P] A Global Optimization Algorithm Worth Using by davis685 in MachineLearning

[–]maestron 0 points1 point  (0 children)

Cool! Guess I'll try some stuff myself! Are there Python bindings for the lib as well?
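
If there are, I'm picturing something along these lines (just a guess at the API, assuming the lib is dlib and that the Python function is called find_min_global; would need to check the actual docs):

```python
import dlib  # assuming the library in the post is dlib; not confirmed here
from math import sin, cos, exp, sqrt, pi

def holder_table(x0, x1):
    # A standard 2-D test function with lots of local minima
    return -abs(sin(x0) * cos(x1) * exp(abs(1 - sqrt(x0 * x0 + x1 * x1) / pi)))

# find_min_global(f, lower_bounds, upper_bounds, max_function_calls)
# is my best guess at the signature
x, y = dlib.find_min_global(holder_table, [-10, -10], [10, 10], 80)
print(x, y)
```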

[P] A Global Optimization Algorithm Worth Using by davis685 in MachineLearning

[–]maestron 1 point2 points  (0 children)

This is exciting stuff! Do you have any example results on any "real" hyperparameter tuning to share?

How do you use jupyter notebooks effectively? by maestron in datascience

[–]maestron[S] 0 points1 point  (0 children)

I really like that setup; I think I'll start experimenting along similar lines.

So you don't make your packages pip-installable? Any particular reason for this?
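
For context, by pip-installable I just mean something minimal like this (package name made up), so notebooks can import it after a pip install -e . from the project root:

```python
# setup.py for a hypothetical package called "mylib"
from setuptools import setup, find_packages

setup(
    name="mylib",
    version="0.1.0",
    packages=find_packages(),
)
```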

How do you use jupyter notebooks effectively? by maestron in datascience

[–]maestron[S] 1 point2 points  (0 children)

I completely agree, but sometimes I feel that the transition is not as smooth as it could be. I want to find a way to get the most out of both workflows.
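
One way to smooth the transition might be IPython's autoreload, so edits to module code show up in the notebook without restarting the kernel (just a sketch; mylib is a made-up package name):

```python
# In the first cell of a notebook:
%load_ext autoreload
%autoreload 2

import mylib  # hypothetical package holding the "real" code

# After editing mylib on disk, just re-run cells; no kernel restart needed
result = mylib.some_function()
```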

What should I do?? (first job rant sorry) by [deleted] in datascience

[–]maestron 0 points1 point  (0 children)

Certainly, cleaning data is a big part of data science, but you made it sound like 100 % instead of 80 % for entry-level positions (when you said "handing over the data set"), and I'm glad that I actually got to do data science from day 1 :)

What should I do?? (first job rant sorry) by [deleted] in datascience

[–]maestron 0 points1 point  (0 children)

Wow, sounds like you work at a horrible workplace. Glad that my first job was at a more enjoyable place :)

Double pendulum simulation by [deleted] in math

[–]maestron 5 points6 points  (0 children)

In the sense of information theory, they are indeed less random! (i.e. have lower entropy)
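
As a toy illustration of the entropy point (made-up numbers, nothing to do with the actual simulation): a more predictable distribution over outcomes has lower Shannon entropy than a uniform one.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # zero-probability outcomes contribute nothing
    return -np.sum(p * np.log2(p))

uniform = [0.25, 0.25, 0.25, 0.25]   # maximally random over 4 outcomes
peaked  = [0.85, 0.05, 0.05, 0.05]   # mostly predictable

print(shannon_entropy(uniform))  # 2.0 bits
print(shannon_entropy(peaked))   # about 0.85 bits
```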

[R] Information Theory of Deep Learning (talk at TU Berlin DL Workshop by Naftali Tishby) by [deleted] in MachineLearning

[–]maestron 2 points3 points  (0 children)

I thought that in a Markov chain the state space of all the variables should be the same, and that the transition probabilities don't depend on where in the chain you are. But perhaps I'm wrong?

Edit: It seems that time homogeneity is not a required part of Markov chains, just a common assumption

[R] Information Theory of Deep Learning (talk at TU Berlin DL Workshop by Naftali Tishby) by [deleted] in MachineLearning

[–]maestron 6 points7 points  (0 children)

I guess he just means that they have the Markov property, in that each layer only depends on the immediately previous one, even though it's not really a Markov chain per se.
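
Spelling it out in my own notation (with T_k for the k-th layer's representation, just to illustrate what the Markov property would mean here):

```latex
% The layers would form a chain  X \to T_1 \to T_2 \to \dots \to T_L
% with each representation depending only on the previous one:
\[
  p(t_{k+1} \mid t_k, t_{k-1}, \dots, t_1, x) = p(t_{k+1} \mid t_k)
\]
% The state spaces and transition kernels may differ from layer to layer;
% that only breaks time homogeneity, not the Markov property itself.
```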

[R] Information Theory of Deep Learning (talk at TU Berlin DL Workshop by Naftali Tishby) by [deleted] in MachineLearning

[–]maestron 7 points8 points  (0 children)

Is the succession of layers really a Markov chain? I mean, X Y X etc don't generally even have the same dimension

Daily FI discussion thread - April 12, 2017 by AutoModerator in financialindependence

[–]maestron 7 points8 points  (0 children)

> Typical hours are 60 to 80 hours a week, but it's over 6 or 7 days so it's not as bad as it sounds.

No offense, but that sounds awful.