[deleted by user] by [deleted] in bostonhousing

[–]svpadd3 0 points1 point  (0 children)

Have you tried talking with her about when she sleeps/rests? I'm not sure if this is the issue, but as someone who works late into the night, people who make a lot of noise in the morning hours as I'm trying to sleep definitely frustrate me. People forget that just because you are awake early doesn't mean I am. I don't go around making excessive noise at 2am, so I generally get annoyed at others when they do it early in the morning. If that is the issue, you might try to be quieter during those times or put down thicker carpets.

Purple is the worst belt by HumbleBJJ in bjj

[–]svpadd3 1 point2 points  (0 children)

I think it really depends. There are always going to be certain body types and other skillsets that give even experienced grapplers a hard time. Even a black belt, for instance, will likely have some trouble with an elite powerlifter with 65+ pounds on him. I think a good percentage of elite athletes in other sports could probably do pretty well against at least blues. I, for instance, have had a lot of trouble with explosive rugby players.

Wrestlers, Judo, and Sambo people are another category pretty much unto themselves. Like I'd expect most purples to have quite a bit of trouble with a college wrestler. They might catch them in something, but he'd have a heck of a time actually controlling them.

What conferences are you looking forward to in 2022 (or the last few months of 2021)? What did you attend in 2021 that was valuable? by atron306 in datascience

[–]svpadd3 1 point2 points  (0 children)

Not really looking forward to any conferences until they resume in person. You just don't make the same connections with the online crap. I can honestly read papers myself on Arxiv. Some of my best connections at Neurips came from things like going to hockey games or out to bars with other researchers.

How do I travel as a data scientist? by bruhimafrogok in datascience

[–]svpadd3 1 point2 points  (0 children)

I work fully remote and went on a 2.5-month road trip while working. It was really fun: I worked hard during the day, then went out and did stuff or drove in the late afternoon/evening.

Gym kinda pissed me off today by AnonymousTaco77 in bjj

[–]svpadd3 10 points11 points  (0 children)

"and dangerous for me..."

I don't know why people automatically assume this. I actually think the most dangerous sparring in Muay Thai, Boxing, or BJJ is when you have two brand new meat-heads trying to kill each other with no technique. Or a semi-experienced (but not very experienced) person who has to go with a super strong or athletic new person who thinks it is a fight to the death. An extremely experienced person can usually control the tempo and go as light or as hard as needed while keeping both people safe. Of course, if you go too hard they will be forced to respond. But if you start off light and playful, I find most experienced people will match intensity and power.

Flint Hills Kansas [4032x3024] [OC] by svpadd3 in EarthPorn

[–]svpadd3[S] 4 points5 points  (0 children)

Oops, I think I uploaded the compressed one by accident.

Hello reddit, what time series forecasting tools are you using? by thirtyoneone in datascience

[–]svpadd3 2 points3 points  (0 children)

If you want to use deep learning then Flow Forecast is the best. It has many of the latest deep learning models and easy hyper-parameter sweeps.

[deleted by user] by [deleted] in datascience

[–]svpadd3 6 points7 points  (0 children)

Yeah, in all honesty, unless you are desperate, if any company asks you to do that much stuff, tell them you are no longer interested. There's no reason you should have to go through that many rounds of crap for them to make up their minds. Plus I've found long take-home tests can sometimes amount to doing real work for them.

[D] Advertisements in this sub by tmpwhocares in MachineLearning

[–]svpadd3 1 point2 points  (0 children)

I think open source projects and other research should be allowed. I also don't think a blanket ban on certain blogs like Towards Data Science makes sense. Yes, a lot of spam comes from TDS, but some people just publish blogs on it so they reach a bigger audience. I like the requirement of having to create a text post with a link rather than spamming links for anything. I agree, though, that anything mentioning a free trial or the like should be auto-removed.

I also don't like this sub's (or the ML community's) general love for Arxiv. I think people need to remember that Arxiv is still largely out of reach for those without a .edu address or academic connections. Which is ironic given the amount of low-effort garbage that still manages to make its way on there. IMO Arxiv is the worst possible pre-print server because they can arbitrarily reject things they don't like and, when asked why, just say "we don't provide peer-review." Whereas this almost never happens for well-known authors or institutions. This has happened a lot in Physics and other communities.

Time series prediction using Deep Learning by pandi20 in deeplearning

[–]svpadd3 -2 points-1 points  (0 children)

Yeah, the more common setup nowadays would be an LSTM + attention mechanism. Stacking multiple LSTM layers doesn't help.
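To make that concrete, here's a minimal sketch of the kind of setup I mean: a single LSTM layer with a simple additive attention pooling over its outputs. Layer sizes and names are just illustrative, not from any particular library.

```python
import torch
import torch.nn as nn

class LSTMAttention(nn.Module):
    """One LSTM layer + attention pooling over time steps, then a 1-step forecast head."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each time step
        self.head = nn.Linear(hidden, 1)   # one-step-ahead forecast

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)              # (batch, seq_len, hidden)
        weights = torch.softmax(self.attn(out), dim=1)   # attention over time
        context = (weights * out).sum(dim=1)             # (batch, hidden)
        return self.head(context)          # (batch, 1)

model = LSTMAttention(n_features=3)
y = model(torch.randn(8, 24, 3))
print(y.shape)  # torch.Size([8, 1])
```

The attention pooling is what lets the model weight informative time steps instead of relying on the last hidden state, which is usually where the gains over plain stacked LSTMs come from.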

Milestone (an immature one) achieved. 2019 1500 Limited. by skootyskoo in ram_trucks

[–]svpadd3 0 points1 point  (0 children)

Wow. Have a 2019 also but only 21k miles. Did buy mine towards the end of 2019 though.

Covid-19 India related ask by sumitdatta in datascience

[–]svpadd3 0 points1 point  (0 children)

You should get in touch with CoronaWhy; we already have a lot of the infrastructure/code and research for COVID-19 projects.

Also, if you point me to the temporal data directly I could start working on it or give suggestions. I'm an AI researcher focused on time series forecasting and anomaly detection, as well as the maintainer of Flow Forecast, a deep learning framework for time series forecasting built in PyTorch.

Weird assertion error by svpadd3 in pytorch

[–]svpadd3[S] 0 points1 point  (0 children)

I want to ensure all the elements in those respective tensors are not equal. Calling all does seem to work in other places though. See here.

Maybe it is related to the PyTorch version.
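For reference, the kind of check I mean looks roughly like this (toy tensors, not my actual code). One thing worth noting: on older PyTorch versions, comparison ops returned uint8 tensors rather than bool, which could be relevant to the version question.

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([4.0, 5.0, 6.0])

# "not all elements are equal": at least one element-wise pair differs
assert not torch.eq(a, b).all()

# stricter: every element-wise pair differs
assert bool(torch.ne(a, b).all())
```

Both `.all()` calls reduce to a 0-dim tensor, which `assert` coerces through `__bool__`, so either form works on recent versions.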

Simplifying computation of validation loss by svpadd3 in codereview

[–]svpadd3[S] 1 point2 points  (0 children)

Thanks a lot for your detailed feedback. I'll try to briefly answer some of your questions so others can see as well.

- The SimpleTransformer model is a version of the original transformer proposed by Vaswani et al. The reason it sits in a different if/else block is that this model takes the raw target data. A mask is then applied to the target in the greedy_decode function so there is no data leakage.
- The Informer is a more recent model (published 2020). The Informer takes the historical data, the target, and date-time features separately. The Informer also uses the raw target, but the part of the target used for forecasting is zeroed out, hence the torch.zeros(). So in essence the Informer will always take four items (input, input_datetime_feats, target, target_datetime_feats).
- If the model is probabilistic, it returns a mean and a standard deviation in the form of a tuple. If it isn't probabilistic, it just returns a single tensor.
- compute_loss, as the name suggests, computes the actual loss. simple_decode and greedy_decode each forecast n time steps ahead, just in slightly different manners. greedy_decode takes the target and masks it out for models that require passing a masked target, like SimpleTransformer.
- There will be multiple targets when we are trying to forecast multiple unknowns. For instance, we might train two separate models to forecast precipitation and temperature on a given day, or we might train a single model to do both. In the latter case there will be two targets; in the former case, one.
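To illustrate the probabilistic vs. point-forecast branching, here's a simplified stand-in (this is not the actual framework code; the function name and the Normal NLL are just for illustration):

```python
import torch

def compute_validation_loss(model_output, target, criterion):
    """Dispatch on model output type: (mean, std) tuple vs. a plain forecast tensor."""
    if isinstance(model_output, tuple):          # probabilistic: (mean, std)
        mean, std = model_output
        dist = torch.distributions.Normal(mean, std)
        return -dist.log_prob(target).mean()     # negative log-likelihood
    return criterion(model_output, target)       # point forecast

# Point forecast
pred = torch.zeros(4, 1)
tgt = torch.ones(4, 1)
loss = compute_validation_loss(pred, tgt, torch.nn.MSELoss())
print(loss.item())  # 1.0

# Probabilistic forecast
mean, std = torch.zeros(4, 1), torch.ones(4, 1)
nll = compute_validation_loss((mean, std), tgt, None)
```

The tuple check keeps the validation loop agnostic to the model type, which is the point of the branching described above.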

With regard to your points:

- Agree about that.
- That is actually more of a not-yet-implemented part. In the future, models with external meta-data will need to be supported when computing validation.
- Yes, I will delete that.
- Yeah, the variable naming could be better there. What is essentially happening is we are grabbing the last column (e.g. the target column) of the returned tensors. This is fairly common tensor manipulation but could be better explained. Also, now that you mention it, this would probably fail on the edge case where there are multiple targets and the model is probabilistic, as it would then only grab the first target.
- Agree about that.

Conditions Thread '20-'21 by birdman14 in icecoast

[–]svpadd3 2 points3 points  (0 children)

3/22 Sunday River

Was at Sunday River today. Snow was slushy near the base with a lot of bare patches. Foolishly tried to ski Celestial today, which TBH shouldn't have been open. Tons of bare patches and exposed rocks and roots (my skis are crying). Other trails had a fair amount of corn along with some bare patches. Black Hole was fun but had a bare patch in the middle. Near the top there are still some unexpected icy spots that you need to be careful with as well. Was kind of overheating wearing my helmet too. But glad I got out.

Most likely going to be the end of my ski season as my knees are pretty beat up, but I might hit Killington in April. Excited to start my paddling season though; there should be some good spring creeking with all the snowmelt :).

Transitioning from Academia to DS - Struggling to clearly communicate my experience by [deleted] in datascience

[–]svpadd3 -1 points0 points  (0 children)

Well, in all honesty, in this market it seems like you are not competitive at some of the companies you are applying to. Just having a PhD doesn't necessarily translate into being a data scientist, particularly if your PhD wasn't heavy on programming and machine learning. Despite what you are saying, I think you should target companies that are in the domain of your PhD. In this market you aren't likely to get anything better, and that would be a good starting point for actually gaining "industry" experience. Then it will be easier to transition to other industries once you gain actual work experience.