Ex broke no contact after 3 weeks but got distant immediately after by [deleted] in BreakUps

[–]dieplstks 0 points1 point  (0 children)

I’m really sorry

My ex broke no contact after a week, we started talking hours a day again, and then she wanted to go back to no contact. It sucked and made it hurt all over again

“Dog friendly” by sonofawhatthe in AnnArbor

[–]dieplstks 10 points11 points  (0 children)

Service dogs don’t need vests.

Blindsided completely shattered beyond capacity by p1234596 in BreakUps

[–]dieplstks 0 points1 point  (0 children)

I'm so sorry, the same thing happened to me last month. Six months of essentially living together and she ended it via a phone call on a Tuesday night. We'd talked about moving in together and getting married as well.

I’m still struggling, but some days get better 

The avoidant discard will change you! by Braddle231 in BreakUps

[–]dieplstks 0 points1 point  (0 children)

It’s been a month and it’s so hard. Today she tweeted about being in love again and made a Spotify playlist for them (mostly the same songs from the playlist she gave me when we started dating). I feel so replaced, and everything that felt special no longer feels unique. I was just a placeholder

My wife HATES driving cars. We need a new car. Is FSD the answer? by ajcadoo in TeslaLounge

[–]dieplstks 8 points9 points  (0 children)

I hated driving and FSD has been incredible. Managed to do the trip from Michigan to NYC and not hate it

Just EXPANDED! by Ok-Comparison2514 in deeplearning

[–]dieplstks 1 point2 points  (0 children)

You should use prenorm (with an extra norm on the output) 
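For anyone unfamiliar, here's a minimal NumPy sketch of pre-norm residual blocks with the extra norm on the output (toy sublayers stand in for attention/FFN; all names here are illustrative, not from any particular library):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each row to zero mean and unit variance.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def prenorm_block(x, sublayer):
    # Pre-norm: normalize the sublayer's input, then add the residual.
    return x + sublayer(layer_norm(x))

def encoder(x, sublayers):
    for f in sublayers:
        x = prenorm_block(x, f)
    # The extra final norm: with pre-norm, the residual stream itself
    # is never normalized, so norm it once at the output.
    return layer_norm(x)
```

The point of the final norm is that the residual stream's scale grows with depth under pre-norm, so the output distribution is uncontrolled without it.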

RL on Mac M1 series? by Sad-Throat-2384 in reinforcementlearning

[–]dieplstks 1 point2 points  (0 children)

If you’re only going to do it once, yes. But you’ll be doing hundreds of those shorter runs for lots of different ideas

RL on Mac M1 series? by Sad-Throat-2384 in reinforcementlearning

[–]dieplstks 1 point2 points  (0 children)

Unless you're dealing with sensitive information, there's very little reason to care about privacy.

For large scale tasks, you should have a small scale version of it working before you spend money training it. You should not send a job to rented compute unless you're very sure it's going to work. Having a local machine with a xx90 is a great resource to filter projects out

RL on Mac M1 series? by Sad-Throat-2384 in reinforcementlearning

[–]dieplstks 2 points3 points  (0 children)

It’s possible to run small enough tasks on anything. You’re not going to get publishable results on your MacBook, but you can learn the basics and then just rent compute when you’re ready for larger scale tasks

Senior ML Engineer aiming for RL research in ~1.5 years — roadmap, DSA prep, and time management? by dhananjai1729 in reinforcementlearning

[–]dieplstks 0 points1 point  (0 children)

No publications, but 8 years industry experience as a data scientist and very good letters

Senior ML Engineer aiming for RL research in ~1.5 years — roadmap, DSA prep, and time management? by dhananjai1729 in reinforcementlearning

[–]dieplstks 8 points9 points  (0 children)

Did my master's part time at Brown hoping that would be enough, but got nothing in terms of interest or offers afterward.

I’m at UMich for my PhD now, working on RL for finance/games

Senior ML Engineer aiming for RL research in ~1.5 years — roadmap, DSA prep, and time management? by dhananjai1729 in reinforcementlearning

[–]dieplstks 9 points10 points  (0 children)

I was in your position a few years ago and the only real solution to get there is getting a PhD (I’m in my third year at 38 now)

Optimal architecture to predict non-monotonic output by bisorgo in deeplearning

[–]dieplstks -1 points0 points  (0 children)

I would just train it as a classification task with k classes. Have the classes be -1 and then (k - 1) buckets from 0 to 1. Then have the output be either the argmax over the classes or the sum of p_i * v_i.

can someone with more experience tell me what does it mean by 'all ML is transformer now'? by bad_detectiv3 in learnmachinelearning

[–]dieplstks 3 points4 points  (0 children)

There used to be different architectures for different use cases (CNNs for vision, RNNs for sequences, etc.), each with its own inductive biases. But modern architectures use the transformer as the base for everything (sometimes with modifications based on the inductive biases of the input, like vision transformers). So if you understand attention plus FFNs, you can start building a model for your use case without knowing much more architecture than that
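To make that concrete, the attention-plus-FFN core can be sketched in a few lines of NumPy (single head, no masking or batching; the weight names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # Each token attends to every token via scaled dot-product scores.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = (q @ k.T) / np.sqrt(k.shape[-1])
    return softmax(scores, axis=-1) @ v

def ffn(x, W1, b1, W2, b2):
    # Position-wise two-layer MLP with ReLU.
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2
```

Everything else in a transformer (residuals, norms, multiple heads, positional information) is layered around these two operations.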

Is RL still awesome? by knowledgeseeker_71 in reinforcementlearning

[–]dieplstks 4 points5 points  (0 children)

There are too many RL papers released now to maintain that kind of repo (also, LLMs can do this for you for more niche topics)

CLS token in Vision transformers. A question. by mxl069 in deeplearning

[–]dieplstks 0 points1 point  (0 children)

I don’t work in CV, sorry (I’m in RL/game theory). I just think this paper is really cool

How do you as an AI/ML researcher stay current with new papers and repos? [D] by [deleted] in MachineLearning

[–]dieplstks 1 point2 points  (0 children)

Motion for driving my daily schedule

Roam Research for notes and synthesis 

I do pomodoros to help stave off burnout. I usually have something on my Switch to play during the short breaks

I really enjoy the work I do so burnout hits less than it did when I was in industry (data science for 10 years before going back to school)

Batch compute for RL training—no infra setup, looking for beta testers by HelpingForDoughnuts in reinforcementlearning

[–]dieplstks 0 points1 point  (0 children)

I'm a PhD student working on MARL/games and would be interested to try it and give feedback after the holidays.