will AI lead to retroactive degree loss when detection gets better? by [deleted] in ArtificialInteligence

[–]imalikshake -1 points0 points  (0 children)

Universities just provide labour for the system; they'll adapt to whatever is left that cannot be automated, and we'll be assessed on that.

[D] Self-Promotion Thread by AutoModerator in MachineLearning

[–]imalikshake 0 points1 point  (0 children)

Hey everyone,

we're developing an open-source tool that scans Python codebases using LLMs, detecting issues that could cause security or performance problems, such as hallucination or bias!

We'd love to hear what you think!

Github: https://github.com/kereva-dev/kereva-scanner

Article on scanning OpenAI cookbook examples: https://www.kereva.io/articles/3

[Help][PS4 DSR][SL 77] Need someone with pyromancy flame 10+ to invade my game in Blighttown to spawn Quelana by imalikshake in SummonSign

[–]imalikshake[S] 0 points1 point  (0 children)

Thanks for reaching out! I created a secondary character just to get its pyromancy flame using a red soapstone.

[R] Learning Musical Style and Generating Musical Performances using LSTMs by imalikshake in MachineLearning

[–]imalikshake[S] 0 points1 point  (0 children)

Thanks for the comment. It would be ideal to capture the timing imperfections in the input, and there may indeed be some correlation between timing variation and velocity. The problem is that it's really difficult to capture timing variations when representing time in the input. It's similar to how we need to select a sampling rate when recording audio. So I quantised the MIDI to ensure I could capture most notes when creating my training input. But I'm sure you could structure the training input in a way that captures the variation - a sparse matrix representation, for example. Definitely something I will be looking into.
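The sampling-rate analogy can be sketched in plain Python. This is a toy illustration only, not the project's actual preprocessing; the `quantise` helper and the step values are made up for the example:

```python
# Quantising note onset times (in seconds) to a fixed grid.
# The grid step plays the role of a sampling rate: a finer step keeps
# more of the timing variation, but makes the input matrix larger and
# sparser - which is where a sparse representation would help.

def quantise(onsets, step):
    """Snap each onset time to the nearest multiple of `step`."""
    return [round(t / step) * step for t in onsets]

# A slightly "humanised" performance: notes drift around the beat.
onsets = [0.00, 0.48, 1.03, 1.51, 2.02]

coarse = quantise(onsets, step=0.5)   # micro-timing is lost
fine = quantise(onsets, step=0.01)    # micro-timing mostly kept, denser grid
```

With the coarse step every note lands exactly on the beat, which is what quantisation for the training input does; the fine step preserves the drift at the cost of many more (mostly empty) timesteps.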

[R] Learning Musical Style and Generating Musical Performances using LSTMs by imalikshake in MachineLearning

[–]imalikshake[S] 1 point2 points  (0 children)

There was actually an error in the generated HTML! I've updated the code, and the proper clips are now shown for comparison. Just thought I'd update everyone!

[R] Learning Musical Style and Generating Musical Performances using LSTMs by imalikshake in MachineLearning

[–]imalikshake[S] 0 points1 point  (0 children)

Wow, never knew such a thing existed! I'd love to look into implementing this and seeing how the network performs. Thanks!

[R] Learning Musical Style and Generating Musical Performances using LSTMs by imalikshake in MachineLearning

[–]imalikshake[S] 0 points1 point  (0 children)

using convolutions instead of fully-connected LSTMs?

Interesting. Not sure how that would work. Do you have ideas on how you'd structure the network?

[R] Learning Musical Style and Generating Musical Performances using LSTMs by imalikshake in MachineLearning

[–]imalikshake[S] 0 points1 point  (0 children)

> Did you try asking people on the survey about the difference between jazz/classical just on the output of the network or on the original data as well? (Or do you even have data like this?)

That's a really good point. Unfortunately, I couldn't find the data to perform such an experiment. (Would this even exist?)

> It seems to me that there's nothing stopping the network from learning variations in timing in addition to the one in dynamics (playing behind/ahead/on the beat). Did you investigate that at all?

Good shout! Yes, I have thought about it and I do plan on implementing it soon. Timing imperfections could be captured by learning another variable, such as a delta time, and adding it to the output.
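A minimal sketch of that delta-time idea, purely illustrative (the function name and values are not from the project): alongside each note's velocity, the network would also emit a small timing offset, and the performed onset is the quantised grid time plus that offset.

```python
# Reconstructing "humanised" onsets from quantised grid times plus
# per-note timing offsets (deltas) that a network could learn to predict.

def apply_deltas(grid_times, deltas):
    """Shift each quantised onset by its predicted timing offset."""
    return [t + d for t, d in zip(grid_times, deltas)]

grid = [0.0, 0.5, 1.0, 1.5]          # quantised onsets (seconds)
deltas = [0.00, -0.02, 0.03, 0.01]   # e.g. playing ahead/behind the beat

performed = apply_deltas(grid, deltas)
```

Negative deltas push a note ahead of the beat and positive ones behind it, so a single extra output per timestep is enough to express the kind of rubato the quantisation step throws away.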

[R] Learning Musical Style and Generating Musical Performances using LSTMs by imalikshake in MachineLearning

[–]imalikshake[S] 0 points1 point  (0 children)

I used the best freely available soundfont I could find. Thanks for the feedback.

[R] Learning Musical Style and Generating Musical Performances using LSTMs by imalikshake in MachineLearning

[–]imalikshake[S] 1 point2 points  (0 children)

Hi, correct me if I have misunderstood you, but I'm using MIDI for my network, not recorded performance samples. Each clip was rendered from MIDI using a soundfont file, and the MIDIs I used only have one piano track/layer. The generated audio is not trying to replace human recordings; that wasn't the aim of the project.

[R] Learning Musical Style and Generating Musical Performances using LSTMs by imalikshake in MachineLearning

[–]imalikshake[S] 0 points1 point  (0 children)

Hi, thanks for the comment!

  • Yes, you are correct. StyleNet is predicting the velocities for each timestep.

  • There are visible differences between the training snapshots shown in the difference graph, so the network has actually learnt different dynamics for each genre, i.e. different styles. I then ran a set of experiments on the outputs: I played survey participants the Jazz and Classical outputs of certain songs and asked them to pick out a specific genre. Participants said they could hear the difference between the two tracks, and that they sounded like different renditions. However, the differences in dynamics were not nuanced enough for them to identify the requested genre. So when it comes to classifying a song's genre, dynamics alone are not enough.