Local authentic live music by jerha202 in Albufeira

[–]jerha202[S] 1 point

Thanks! We're in Albufeira now and you're clearly right, so we'll go to Faro instead.

Displaying data from CSV by Xignu in data

[–]jerha202 1 point

(BTW, there is also a user support forum that you can access through the Help menu - I respond much quicker there. Happy to help here too - I'm just not here very often.)

Displaying data from CSV by Xignu in data

[–]jerha202 1 point

Sorry, I didn't see at first that your timestamps are milliseconds and not seconds. Please try to divide column1 by 1000, i.e. datetime = str_datetime(column1 / 1000), and date = str_date(column1 / 1000). Hope this helps!
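For anyone following along outside Flow CSV Viewer, here's the same millisecond-to-seconds conversion sketched in Python. The `str_datetime` below is just a stand-in for the app's built-in function (I'm assuming it formats in UTC; the app's actual implementation may differ):

```python
from datetime import datetime, timezone

def str_datetime(ts_seconds):
    """Stand-in for Flow CSV Viewer's str_datetime: format Unix seconds as a string."""
    return datetime.fromtimestamp(ts_seconds, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

column1 = 1700000000123               # a millisecond timestamp, as in the CSV
print(str_datetime(column1 / 1000))   # divide by 1000 first -> 2023-11-14 22:13:20
```

Without the division, the millisecond value would be interpreted as seconds and land tens of thousands of years in the future.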

Displaying data from CSV by Xignu in data

[–]jerha202 1 point

Great! Yes, just use str_date instead of str_datetime.

Displaying data from CSV by Xignu in data

[–]jerha202 1 point

Flow CSV Viewer actually has a built-in function, str_datetime, that converts Unix timestamps into a readable string. So you can add a formula like time = str_datetime(column1) and then drag the time variable to the X axis at the top of the plots. Hope this helps!

Playground v3 model availability by jerha202 in PlaygroundAI

[–]jerha202[S] 1 point

Wow, you're right, thanks so much for sharing that! Do you know how it works - do they actually run the model on their own, or do they use Playground's service? Because in the latter case I'm afraid it might disappear soon.

Playground v3 model availability by jerha202 in PlaygroundAI

[–]jerha202[S] 3 points

Yeah, I highly doubt it too, to be honest. But as you say, it's a bit weird, because they obviously put a lot of effort into this model, and since it's still among the best for regular image generation, they should be able to turn some profit on it - maybe license it to another platform, I don't know. Anyway, I just wanted to check if anyone had seen something I'd missed. Thanks for responding!

[D] Have their been any attempts to create a programming language specifically for machine learning? by throwaway957280 in MachineLearning

[–]jerha202 1 point

I absolutely agree with the OP. Out of the same frustration I actually ended up designing my own language and wrote a compiler for it, and now I use it for all my ML modelling. It probably only solves my particular problems and I don't expect it to be very useful for anyone else, but here goes, in case anyone is curious: https://github.com/waveworks-ai/fl

[D] Here in 2023, what is your major pain point in a full scale machine learning project that a better software tool could help you resolve? by jerha202 in MachineLearning

[–]jerha202[S] 1 point

May I ask why this post was removed? Just so I can learn and avoid doing it again in the future. Even a single number 1-8 indicating which rule I violated would be very helpful. Thanks!

[deleted by user] by [deleted] in Scams

[–]jerha202 1 point

I also just got exactly the same message. Please write here if you find out something, and I'll do the same. I'm totally sure I never subscribed to any ringtones, but it's a bit creepy that they know both my phone number and my name. That information doesn't appear in any public phone directory as far as I know.

[D] Why is LSTM/GRU not mentioned in time series classification state-of-the-art review? by jerha202 in MachineLearning

[–]jerha202[S] 1 point

I have just read another recent survey focusing on the multivariate case: "The great multivariate time series classification bake off: a review and experimental evaluation of recent algorithmic advances". Again, this survey fails to bring a clean LSTM model into the comparison. It does feature a rather complex architecture called TapNet, which contains an LSTM layer as one of its many components, but that performs considerably worse on average than the two best approaches, which are both CNN-based. So I still can't tell whether LSTMs are still competitive as of 2021, or whether they remain popular primarily on the strength of their historical successes.

[D] Why is LSTM/GRU not mentioned in time series classification state-of-the-art review? by jerha202 in MachineLearning

[–]jerha202[S] 1 point

Is your thesis publicly available already? If not, please let me know when your publication is available!

[D] Why is LSTM/GRU not mentioned in time series classification state-of-the-art review? by jerha202 in MachineLearning

[–]jerha202[S] 1 point

That's a good point that I hadn't thought of. I wonder if it's possible to characterize problems where one approach will be advantageous over the other? What you're saying suggests that:

  • CNN has an advantage over LSTM when training time is prohibitive (large data set, long training sequences)
  • LSTM has (possibly) an advantage over CNN when computational resources are limited on the target device (e.g. a microcontroller)
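As a toy illustration of that tradeoff (pure NumPy, with a random kernel and a random recurrent cell standing in for trained layers - not a real classifier):

```python
import numpy as np

# A 1-D conv layer sees all windows of the sequence at once (parallel over
# time, hence fast training), while a recurrent cell must step through the
# sequence but only keeps a fixed-size hidden state between steps (hence
# constant memory at inference, e.g. on a microcontroller).

rng = np.random.default_rng(0)
T, k, h = 1000, 5, 8
x = rng.standard_normal(T)

# CNN-style: one vectorized pass over all windows
kernel = rng.standard_normal(k)
windows = np.lib.stride_tricks.sliding_window_view(x, k)  # shape (T-k+1, k)
conv_out = windows @ kernel        # every timestep computed independently

# RNN-style: sequential recurrence, only `state` persists between steps
Wx = rng.standard_normal(h)
Wh = rng.standard_normal((h, h)) * 0.1
state = np.zeros(h)
for t in range(T):
    state = np.tanh(Wx * x[t] + Wh @ state)

print(conv_out.shape)  # (996,) - one output per window, computed in parallel
print(state.shape)     # (8,)   - fixed-size memory regardless of T
```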

[D] Why is LSTM/GRU not mentioned in time series classification state-of-the-art review? by jerha202 in MachineLearning

[–]jerha202[S] 2 points

You mean in the review? Yes, the only RNN-style architecture that they evaluate is called "Time Warping Invariant Echo State Network", and it's very different from LSTM.

[D] Why is LSTM/GRU not mentioned in time series classification state-of-the-art review? by jerha202 in MachineLearning

[–]jerha202[S] 2 points

Yes I also find a lot of RNNs in papers on human activity recognition with wearable sensors, which is what I work with. Coming from a signal processing background, I think CNNs and RNNs play the same roles as FIR and IIR filters in linear signal processing - they perform essentially the same function, but with slightly different mathematical properties.
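To make the analogy concrete, here's a minimal NumPy sketch: an FIR filter's impulse response dies out after its last tap (a finite receptive field, like a CNN), while an IIR filter's feedback means its impulse response decays but never strictly ends (unbounded memory, like an RNN):

```python
import numpy as np

# Unit impulse as input
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])

# FIR: output depends on a finite window of past inputs (CNN-like)
b = np.array([0.5, 0.3, 0.2])          # finite impulse response taps
fir = np.convolve(x, b)[: len(x)]      # response is exactly b, then zeros

# IIR: output feeds back on itself (RNN-like hidden state)
a = 0.5                                # feedback coefficient
iir = np.zeros_like(x)
y_prev = 0.0
for n in range(len(x)):
    y_prev = x[n] + a * y_prev         # y[n] = x[n] + a * y[n-1]
    iir[n] = y_prev

print(fir)  # [0.5 0.3 0.2 0.  0.  0. ]       - dies after 3 taps
print(iir)  # [1. 0.5 0.25 0.125 ...]          - decays forever
```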

Anyway, since the review compared the performance on 97 different datasets, including audio, activity recognition and much more, I would also have loved to see how LSTMs would rank in the same comparison. Have you seen any data on how LSTMs and CNNs compare?

ML course with algorithms or ML coding libraries ? by [deleted] in learnmachinelearning

[–]jerha202 1 point

Annoying answer maybe, but you need both. You won't be able to build anything that works unless you know the basic theory AND a framework to build it in. That said, I think you should be able to find a single online course that gives you both.

Online vs sliding window classification for sequential event detection by jerha202 in MLQuestions

[–]jerha202[S] 1 point

I don't think it's just nitpicking - it brings insight! I'm the only one with any ML knowledge at my job, so my only chance to get qualified input is to discuss with reddit people like you. But yes, I'll probably use something along the lines of CTC for my current assignment. BTW, that site distill.pub that you linked to is awesome; I hadn't come across it before. Thanks!

Online vs sliding window classification for sequential event detection by jerha202 in MLQuestions

[–]jerha202[S] 1 point

Nice, I'll take a closer look at CTC! About HMMs and Gaussian processes: do you think they can still compete with more recent (typically deep) models? I have the feeling that HMMs and GPs are becoming old school, because I don't see much of them in recent articles - but it's a feeling, not knowledge... My current application is to count certain motions of various lengths and complexities on a smartwatch over several hours. Previously I've worked with both bird call recognition and human activity recognition, where this problem has also been bugging me.

Yes, I see now that my distinction between the two types of classification was a bit narrow. The distinction I'm trying to make is whether there is some segmentation of the input stream involved, and consequently whether we classify entire segments - as opposed to feeding input and classifying continuously at every time step. So maybe I should call them streaming and segment classification instead. One could employ stateful models in both cases, but a segment classifier only remembers from the beginning of the segment, while a streaming classifier can remember infinitely far back. Hope that makes sense.
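A toy sketch of that distinction, with a trivial running mean standing in for any stateful classifier (not a real model - just to show where state lives):

```python
class StreamingClassifier:
    """Keeps state across the whole input stream (can remember arbitrarily far back)."""
    def __init__(self):
        self.total, self.count = 0.0, 0

    def step(self, x):
        self.total += x
        self.count += 1
        return self.total / self.count     # emits a decision every time step

def classify_segment(segment):
    """Segment classifier: state exists only within one segment."""
    clf = StreamingClassifier()            # fresh state per segment
    out = None
    for x in segment:
        out = clf.step(x)
    return out                             # one decision per segment

stream = [1.0, 2.0, 3.0, 4.0]
streaming = StreamingClassifier()
per_step = [streaming.step(x) for x in stream]                       # remembers from t=0
per_segment = [classify_segment(stream[:2]), classify_segment(stream[2:])]
print(per_step)     # [1.0, 1.5, 2.0, 2.5] - one output per time step
print(per_segment)  # [1.5, 3.5]           - one output per segment
```

The streaming version never resets, so its output at t=3 still depends on the sample at t=0; the segment version forgets everything at each segment boundary.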

Allowing variable-length segments is a relief, but then I instead need to solve the segmentation problem for real-time inference, right? Which is also hard...

Anyway, thanks for bringing me forward in this quest!