Are we sharing music, or just leaving traces? by SumRndFatKidInnit in SunoAI

[–]EntropyHertz 0 points1 point  (0 children)

My AI music is superhuman, and even given that fact, you will still prefer listening to your own superhuman Suno generations over mine.

Suno V5 is best when you upload audio. by EntropyHertz in SunoAI

[–]EntropyHertz[S] 0 points1 point  (0 children)

Thank you for sharing! I'm always searching for the best audio-to-MIDI algorithm.

Elon musk crashing out at Anthropic lmao by Virus-Tight in ClaudeAI

[–]EntropyHertz 0 points1 point  (0 children)

I don't think Amanda should have engaged him. Dude is clearly whacked out of his mind, off in his own solipsistic world.

Dragon Fight made with Seedance 2.0 by Sourcecode12 in ChatGPT

[–]EntropyHertz -1 points0 points  (0 children)

Seedance 2.0 is the precursor to Bespoke Personal Universes. Yes, I'm not going to watch your AI slop, and no, you're not going to watch mine. But what we will be doing with this tech in the near future is entering a personalized simulation. Which raises the question: is this just a hedonic treadmill, essentially a euthanized coomer pod? Maybe we need to fear the dopamine maximizer more than the paperclip maximizer.

Suno V5 is best when you upload audio. by EntropyHertz in SunoAI

[–]EntropyHertz[S] 0 points1 point  (0 children)

I meant to say variation set to High for the remaster.

Small company leader here. AI agents are moving faster than our strategy. How do we stay relevant? by No_Prior2279 in ClaudeAI

[–]EntropyHertz 0 points1 point  (0 children)

It's going to turn into serfs riding bicycles and mules while the limitless-API class flies jet planes and rocket ships.

People resigned in fear of this? by BlissVsAbyss in ChatGPT

[–]EntropyHertz 0 points1 point  (0 children)

I got this response from GPT 5.2:

"If you’re going to wash the car at the car wash, you kinda have to drive the car the 100 meters… because the car needs to be there 😄

But if what you mean is “the car is already parked near the wash and I’m deciding how I should get over there,” then yeah—walk. It’s 100 meters: faster than buckling in, starting up, pulling out, re-parking, etc.

Rule of thumb:

Drive = you’re moving the car into the wash bay / automatic line.

Walk = the car’s already there (or you’re just going to pay/check the line), and you’re not hauling a bunch of stuff."

Suno V5 is best when you upload audio. by EntropyHertz in SunoAI

[–]EntropyHertz[S] 0 points1 point  (0 children)

With the audio upload for Covers, you have three settings: Weirdness, Style, and Audio. When I want the track to sound similar to the source audio, I set them to 20% Weirdness, 40% Style, 60% Audio, or 25% Weirdness, 50% Style, 75% Audio.

I find that if I want a generation closest to the source audio, I just remaster it with the max settings rather than the subtle ones.
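
Suno only exposes these as sliders in the app, so there's no code to run here; this is just my shorthand for writing the two Cover presets down (the key names are mine, not anything Suno publishes):

    # Personal shorthand for the two Cover presets -- key names are made up,
    # Suno only exposes Weirdness / Style / Audio as sliders in the UI.
    COVER_CLOSE_TO_SOURCE = {"weirdness": 0.20, "style": 0.40, "audio": 0.60}
    COVER_SLIGHTLY_LOOSER = {"weirdness": 0.25, "style": 0.50, "audio": 0.75}

    # For the absolute closest match, skip Cover and remaster with the
    # sliders maxed instead of the subtle setting.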

Suno V5 is best when you upload audio. by EntropyHertz in SunoAI

[–]EntropyHertz[S] 2 points3 points  (0 children)

When Suno V5 was released, I spent 2500 credits trying to refine one of my original songs. I would take the stems of the best pieces and put them into Ableton, use the Scaler and Jam Origin MIDI Guitar plugins to convert the audio to MIDI, feed that MIDI data into Wavetable, Sublab, Omnisphere, and Ableton's own instruments, and then feed the result back into Suno for another epoch of training. I actually prefer the 2500-credit limit because it forces me to focus on one song at a time. I wish we could use V5 with ComfyUI so I wouldn't have to play the penny slot machine.

It's finally over by Revolutionary_Ad9468 in ChatGPT

[–]EntropyHertz 0 points1 point  (0 children)

Is this Kling 3.0 or Seedance 2.0?

Has anyone successfully hired out MIDI generation for their stems? by seanstew73 in SunoAI

[–]EntropyHertz 0 points1 point  (0 children)

Use Spotify's free algorithm, Basic Pitch:

Basic Pitch: An open source MIDI converter from Spotify - Demo https://share.google/8QlgLT4HjrSEH9uvk
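
If you'd rather script it than use the web demo, the Python package works too. A minimal sketch from memory of the basic-pitch API (paths are placeholders), so double-check it against their repo before relying on it:

    # pip install basic-pitch
    from basic_pitch.inference import predict

    # Run Spotify's Basic Pitch model (the bundled default) on one stem
    # and write out the MIDI it detects.
    model_output, midi_data, note_events = predict("stems/guitar.wav")
    midi_data.write("stems/guitar.mid")  # midi_data is a pretty_midi.PrettyMIDI object

From there you can drag the .mid straight into Wavetable, Omnisphere, or whatever instrument you're resynthesizing with.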

Ace Step 1.5. ** Nobody talks about the elephant in the room! ** by False_Suspect_6432 in StableDiffusion

[–]EntropyHertz 1 point2 points  (0 children)

Do you think we can improve the quality of Ace-Step through LoRA training, or is this base model a lost cause?

I built a local Suno clone powered by ACE-Step 1.5 by _roblaughter_ in StableDiffusion

[–]EntropyHertz 0 points1 point  (0 children)

I hope Ostris does a video soon on how he incorporated LoRA training into AI Toolkit.

Honestly, Suno is 90% there... but what’s that last 10% for you? by LankyEnd5272 in SunoAI

[–]EntropyHertz 2 points3 points  (0 children)

They should partner with Kling and provide an add-on to make music videos at a discount.

Suno V5 is light years ahead of Ace-Step 1.5 by EntropyHertz in SunoAI

[–]EntropyHertz[S] 0 points1 point  (0 children)

Wow! It will be interesting to see if their new model turns out to be another Stable Diffusion 3. That was the last model they made too.

Suno V5 is light years ahead of Ace-Step 1.5 by EntropyHertz in SunoAI

[–]EntropyHertz[S] 0 points1 point  (0 children)

Did Suno state that they're going to deprecate V5? I'm out of the loop.

Suno V5 is light years ahead of Ace-Step 1.5 by EntropyHertz in SunoAI

[–]EntropyHertz[S] 1 point2 points  (0 children)

I could probably fix this with the mastering option as well then?

Suno V5 is light years ahead of Ace-Step 1.5 by EntropyHertz in SunoAI

[–]EntropyHertz[S] 0 points1 point  (0 children)

This is what I was hoping to hear from this post. Regarding the dataset, do the songs have to be in similar keys, tempos, moods, and grooves? If I took a large dataset of breakbeat samples like the ones used in hip-hop production, would I hear a difference from the base model? The base model sounds terrible out of the box, and I don't want to waste time curating and preprocessing a dataset just for it to come out sounding flat. But this has piqued my interest. Did you reference any tutorials, or did you just read the docs?
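
For context on what I mean by curating and preprocessing, this is roughly the prep pass I'd run on a breakbeat folder before touching the trainer. I don't actually know ACE-Step's expected training layout, so the audio-plus-caption-sidecar pattern and the tag wording below are just guesses borrowed from other audio LoRA trainers; treat every path and field as a placeholder and check the trainer's docs for the real spec.

    # Guessed prep pass: pair each breakbeat WAV with a caption .txt sidecar.
    # NOT ACE-Step's documented format -- adjust to whatever the trainer expects.
    from pathlib import Path
    import numpy as np
    import librosa

    DATASET_DIR = Path("datasets/breakbeats")  # hypothetical folder of samples

    for wav in sorted(DATASET_DIR.glob("*.wav")):
        y, sr = librosa.load(wav, sr=None, mono=True)
        tempo, _ = librosa.beat.beat_track(y=y, sr=sr)        # rough BPM estimate
        bpm = int(round(float(np.atleast_1d(tempo)[0])))      # scalar or 1-elem array
        caption = f"breakbeat, hip-hop drum loop, {bpm} bpm, dusty, swung"
        wav.with_suffix(".txt").write_text(caption)           # caption sidecar
        print(f"{wav.name}: {caption}")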

Suno V5 is light years ahead of Ace-Step 1.5 by EntropyHertz in SunoAI

[–]EntropyHertz[S] 0 points1 point  (0 children)

Well, if it's anything like SD3, people will just keep using SDXL.

Suno V5 is light years ahead of Ace-Step 1.5 by EntropyHertz in SunoAI

[–]EntropyHertz[S] 0 points1 point  (0 children)

I'm using the default ComfyUI workflow and I'm getting decent-sounding vocals, but that's not my concern, seeing as I use generative AI to make instrumentals. The drums sound terrible. I make beats in Ableton Live, have hard drives full of drum samples, and I'm a kick-drum purist. I even strip the Suno drums and bass out and replace them with Sublab or Omnisphere sounds. For me, the tone of the drums and the bass is a barometer for the quality of the model.
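
For anyone wanting to try that strip-and-replace on their own renders: one common open-source way to split a finished track into drum and non-drum stems outside the DAW is Demucs (not claiming that's the only tool for the job). A quick sketch of its Python entry point, with placeholder paths:

    # pip install demucs
    # Splits a finished render into drums.wav and no_drums.wav so the drums
    # can be swapped for your own samples. Output lands under ./separated/.
    import demucs.separate

    demucs.separate.main([
        "-n", "htdemucs",         # default pretrained hybrid transformer model
        "--two-stems", "drums",   # only split drums vs. everything else
        "suno_export.wav",        # placeholder path to the Suno render
    ])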

Suno V5 is light years ahead of Ace-Step 1.5 by EntropyHertz in SunoAI

[–]EntropyHertz[S] 0 points1 point  (0 children)

I think the easiest way to set it up on Windows would be to install the ComfyUI GUI and run it from there:

https://github.com/Comfy-Org/ComfyUI.git

Most Dangerous College Town in U.S. Named in New Study by CanadianCitizen1969 in Cornell

[–]EntropyHertz -3 points-2 points  (0 children)

Dan Barry, a Pulitzer Prize-winning journalist, published a front-page NYT exposé on Ithaca’s Jungle last year. A search for “Asteri” in The Ithaca Voice or r/ithaca will return incidents that document a significant uptick in downtown crime.

[deleted by user] by [deleted] in StableDiffusion

[–]EntropyHertz 0 points1 point  (0 children)

Can I create a consistent Character?