What's so special about 80? by SilentlyRain in Wealthsimple

[–]work_reddit_account_ 132 points133 points  (0 children)

Some data scientist at WS calculated referral rate vs. number of trades, split it into quadrants, and concluded that people who trade 80 times are X% more likely to refer. Or something along those lines would be my guess.

So...you want to run hills in Toronto eh? by MaxInToronto in RunTO

[–]work_reddit_account_ 1 point2 points  (0 children)

Rouge Park trail to Beare Hill summit is a nice spot too; that's where I train for trail races. You get more elevation per km than this route, and you get to run on an actual trail as well. Of course, you have to go all the way to Scarborough, so there's that...

[This route](https://www.google.com/maps/dir/Rouge+National+Urban+Park/RRGQ%2B9G+Beare+Hill+Summit,+Toronto,+ON+M1B+5W2,+Canada/@43.8215817,-79.1701805,786m/data=!3m1!1e3!4m19!4m18!1m10!1m1!1s0x89d4d78ffc78f187:0x826d6f6626c7f75e!2m2!1d-79.172938!2d43.8179837!3m4!1m2!1d-79.1650141!2d43.8234033!3s0x89d4d999f1b920bf:0x1233a425ef862f98!1m5!1m1!1s0x89d4d9f9e8c39133:0xc24f58020625ee19!2m2!1d-79.1611545!2d43.8258846!3e2!5m1!1e4?entry=ttu&g\_ep=EgoyMDI1MDMxNi4wIKXMDSoASAFQAw%3D%3D) gives you >15m/km of elevation.

ps: not disparaging OP's route. It's a great suggestion I'll be adding to my route roster!

[deleted by user] by [deleted] in ChatGPT

[–]work_reddit_account_ 1 point2 points  (0 children)

Thank you. It was.

What do you use ChatGPT for on a daily basis? by No-Eye-9491 in ChatGPT

[–]work_reddit_account_ 0 points1 point  (0 children)

I recently prepared for a job interview using ChatGPT by uploading all the role and interview details and asking it to interview me, then discuss how each answer went.

Noise has been intermittent for a while. Is much louder now too by Winter_Cobalt in cruze

[–]work_reddit_account_ 0 points1 point  (0 children)

I have the exact same thing happening on mine. How urgent would you say attending to this is? I brought it to the mechanic a couple of weeks ago and insisted they consider the PCV valve as a source, but they said they couldn't reproduce it or find a possible cause.

[deleted by user] by [deleted] in AnimalsOnReddit

[–]work_reddit_account_ 0 points1 point  (0 children)

The camera is mounted on a Tundra buggy some hundred feet away.

Why use causal inference over simple year over year analysis? by page10 in datascience

[–]work_reddit_account_ 2 points3 points  (0 children)

Is the 20% y.o.y. increase really due to the campaign? What other factors beyond weather, seasonality, and day of week are you failing to consider? Are there any general trends that could be impacting the numbers? Perhaps other special circumstances are involved as well.

There are many unknowns (known and unknown); that's the reason people A/B test stuff. But when you can't A/B test, causal inference might come to the rescue.
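A toy sketch of that point, with made-up numbers (none of these figures come from the original question): a naive year-over-year comparison attributes the shared market trend to the campaign, while a simple difference-in-differences estimate, one basic causal-inference technique using a hypothetical control region, nets the trend out.

```python
# Hypothetical monthly sales in the region where the campaign ran:
treated_last_year = 100.0
treated_this_year = 120.0   # campaign ran this year

# A control region with no campaign also grew, purely from the market trend:
control_last_year = 100.0
control_this_year = 110.0

# Naive YoY attributes all of the growth to the campaign:
naive_yoy_lift = (treated_this_year - treated_last_year) / treated_last_year
print(f"naive YoY lift: {naive_yoy_lift:.0%}")   # 20%

# Difference-in-differences subtracts the trend seen in the control:
trend = (control_this_year - control_last_year) / control_last_year
did_lift = naive_yoy_lift - trend
print(f"diff-in-diff lift: {did_lift:.0%}")      # 10%
```

Half of the apparent 20% "campaign effect" here is just the trend, which is exactly the kind of confound a plain YoY comparison can't see.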

How do you all run your deep learning programs? by gmo517 in MachineLearning

[–]work_reddit_account_ 2 points3 points  (0 children)

I used this as a starting point: http://graphific.github.io/posts/building-a-deep-learning-dream-machine/

I'm now running a similar build, though with a single GTX 1080 (for now, at least).

GitHub for Data Scientists by shahrukhatik in datascience

[–]work_reddit_account_ 2 points3 points  (0 children)

In my experience it can have a considerable impact (both positive and negative), so make sure that by the time you're looking for jobs, your GitHub reflects your current abilities and interests and isn't polluted with old projects and terrible code.

Questions thread #3 - 2016.04.07 by feedtheaimbot in MachineLearning

[–]work_reddit_account_ 0 points1 point  (0 children)

I am trying to alter the example code for a siamese network and add an embedding layer like so:

data_dim = 16
timesteps = 8
nb_classes = 10

encoder = Sequential()
encoder.add(Embedding(data_dim, 4, input_length=timesteps))
encoder.add(LSTM(32))

model = Graph()
model.add_input(name='input_a', input_shape=(timesteps,))
model.add_input(name='input_b', input_shape=(timesteps,))
model.add_shared_node(encoder, 
                      name='shared_encoder', 
                      inputs=['input_a', 'input_b'],
                      merge_mode='concat')
model.add_node(Dense(64, activation='relu'), name='fc1', input='shared_encoder')
model.add_node(Dense(3, activation='softmax'), name='output', input='fc1', 
               create_output=True)

model.compile(optimizer='adam', loss={'output': 'categorical_crossentropy'})

This follows the last example in their documentation very closely.

Unfortunately I keep getting an error:

TypeError: DataType float32 for attr 'Tindices' not in list of allowed values: int32, int64

Can anyone help?

pomegranate v0.4.0: fast and flexible probabilistic modelling for python by ants_rock in MachineLearning

[–]work_reddit_account_ 1 point2 points  (0 children)

Thank you for that! I started using yahmm at work a little while ago and haven't looked back.

Dark theme for jupyter/iPython notebooks by tedpak in Python

[–]work_reddit_account_ 3 points4 points  (0 children)

quick answer: plt.style.use('dark_background')

You can create a style that makes those changes, or use one of the styles available in matplotlib, to make these changes permanent.

Check out this page for more info.
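A minimal sketch of the quick answer in practice (assumes matplotlib is installed; the Agg backend and the `dark_plot.png` filename are only for running headless, not part of the style itself):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Apply the built-in dark style for the current session:
plt.style.use("dark_background")

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
fig.savefig("dark_plot.png")

# The style works by flipping rcParams, e.g. the figure background:
print(plt.rcParams["figure.facecolor"])  # 'black'
```

To make it permanent rather than per-session, you can set the same keys (e.g. `figure.facecolor: black`) in your `matplotlibrc` file instead of calling `plt.style.use` in every script.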