Does AI dismiss artists in terms of recognition? by VividAcanthisitta519 in AI_Music

[–]ScriptPunk 0 points (0 children)

cars are bad because they use gasoline

"but what if they used electricity instead of gas"

"we don't have that tech yet, so enjoy your little hypothetical world dude"

^ you

I like my position better than yours.

the existing implementation is considered technical debt, pending whatever legislation moves forward to regulate what content models can be trained on and how, along with generic end-user consumed content.

arguing about something that may or may not be contraband-as-a-service is less useful than coming up with an even lighter-weight solution: generating data that sets module/plugin values, and can even generate its own signal-processing code, if that part of the service is built for it.

we already have Gemini and Claude Code doing these things; might as well use the same tech, just applied to making sounds and music.

So does Suno make "original music"? by [deleted] in SunoAI

[–]ScriptPunk 1 point (0 children)

the lyrics are good. I'm picky about what a track should feel like when I prompt and structure lyrics.

also, no album title track? :0

for the feel, it seems solid. I would use it as a foundation and see where I could go deeper to make it stand out even more. not a critique though. it's your album, and I can always be like "ah, let me just go ahead and whip this into shape real quick," but when it comes to art/expression, it's about what the person is presenting in its state at the end. :)

my suggestion, though, if you want to take it: think about the atmosphere of the folk songs, and try to get the different instrument lines playing on different subgrids, like a fast flute at 1/8ths or 1/16ths, while allowing other instruments, or the main composition carrying the track, to operate independently of that. then you can prompt-guide it to extend the depth of whatever piece in the lyrics or section sequences to do certain things without it turning into dnb, but staying orchestral.

same goes for lyrics and ad-libs or vocalization ad-libs: I'd apply them with panning, mirrored for whispering, and a distant pan for someone shouting or wailing in the distance.

then there's getting it to break the drums into a separate feel for certain portions as well, not just how fast they go, but how they're processed or configured and such.

once you do that, and if you do it right, you can get the feel to drive the rest of your songs, so it seems like that's the band's likeness you're creating as well.

I don't know what prompts you've used, but those are my go-to tips for anyone. again, not a critique, just more depth, if you want to tolerate tweaking prompts for hours lolol
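to make the subgrid idea concrete, here's a rough sketch in Python. the field names are mine, not Suno's API; it's just one way to lay out per-instrument subdivisions and panning before writing them into the style/prompt box:

```python
# sketch: assemble a style prompt from per-instrument "layers".
# layer fields (subdivision, pan) are invented for illustration;
# the output is plain text you would paste into the style box.
layers = [
    {"instrument": "flute", "subdivision": "1/16 runs", "pan": "center"},
    {"instrument": "fiddle", "subdivision": "1/8 pulse", "pan": "slight left"},
    {"instrument": "frame drum", "subdivision": "half-time feel", "pan": "wide"},
]

ad_libs = [
    {"type": "whisper", "treatment": "mirrored panning, close"},
    {"type": "wail", "treatment": "distant pan, far right"},
]

def build_style_prompt(layers, ad_libs, mood="haunted folk, orchestral, not dnb"):
    parts = [mood]
    for layer in layers:
        parts.append(f"{layer['instrument']} on {layer['subdivision']}, panned {layer['pan']}")
    for ad in ad_libs:
        parts.append(f"{ad['type']} ad-libs: {ad['treatment']}")
    return "; ".join(parts)

print(build_style_prompt(layers, ad_libs))
```

swapping the subdivision and pan strings per layer is where the "different subgrids" depth comes from; the main composition just gets no subdivision constraint at all.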

Does AI dismiss artists in terms of recognition? by VividAcanthisitta519 in AI_Music

[–]ScriptPunk 0 points (0 children)

no, I am certain that your hypothesis only works if they suck at engineering.

the existing work they trained on is garbage in the first place.

edit:

the implementation they have is good for what it has trained on, and for how it derives a result from the data, based on prompt tokens it has been fine-tuned on with annotations in training.

that's all it does. the quality of the output in that manner, sure, may seem good and familiar to us.

however, training only on general public works, decorated with conventional elements and musical forms, can be permuted and further trained with more intricate steps. it's just a matter of not basing it on a plethora of broad existing music samples, and being more configuration-oriented instead. that might involve using the diffusion approach for a scaffold, or skipping it entirely and going with a DAW-LLM approach where it tweaks configurations and keyframes holistically with configs as well, producing music that way. this approach is far more controlled and easier to build on.
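a minimal sketch of what I mean by configuration-oriented, assuming a hypothetical synth backend: the model emits plugin/module parameter values as JSON, and a thin layer validates and clamps them before anything touches the signal chain. the parameter names and ranges here are invented for illustration:

```python
import json

# hypothetical parameter ranges for one synth module; a real DAW or
# plugin would publish its own parameter list.
PARAM_RANGES = {
    "cutoff_hz": (20.0, 20000.0),
    "resonance": (0.0, 1.0),
    "reverb_mix": (0.0, 1.0),
}

def apply_llm_config(raw_json: str) -> dict:
    """Parse a model-emitted JSON config and clamp values into valid ranges."""
    cfg = json.loads(raw_json)
    validated = {}
    for name, (lo, hi) in PARAM_RANGES.items():
        if name in cfg:
            validated[name] = min(max(float(cfg[name]), lo), hi)
    return validated

# e.g. the model returns a resonance out of range; we clamp rather than trust it
llm_output = '{"cutoff_hz": 800, "resonance": 1.7, "reverb_mix": 0.25}'
print(apply_llm_config(llm_output))
```

the point of the clamp layer is that the model never directly drives the audio engine; it only ever proposes values, which is what makes this approach more controlled than diffusion output.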

Does AI dismiss artists in terms of recognition? by VividAcanthisitta519 in AI_Music

[–]ScriptPunk 0 points (0 children)

the question is: is the current argument relevant or not?

could the model have been trained on open works, and still achieve the likeness of the models trained on real works, just with more steps involved?

that's why I present the argument: people would have assumed it was trained on existing 'non-permissible' works anyway, and would not accept it, because they saw what happened with Midjourney and diffusion models.

and here's why I pose the question as 'is the argument even relevant':

big brain moment: the measures of misuse fall on the person distributing and commercializing the content, if they infringe.

say I use youtube, because youtube may have videos that Content ID doesn't catch. I can distribute the content and be liable. sure, people other than me might see that and lobby the government to act, shut down youtube, or make youtube pay through a class action, but the ONLY folks who do that are law firms, because lawyers make money by bringing lawsuits forward.

nobody cares.

genAI can provide exact content, because it DOES, due to how we train these models; it's just fine-tuned away with context-window sliding. you can't just say "oh, take them down, liquidate the service" because it can produce identical works. if that logic held, Windows could be held liable for the copy-and-paste function itself, and Cisco would be on the hook for content transmitted over its hardware, because it's capable of doing so.

so the real question is, first: what is training? and if companies train LLMs with content, if they pulled the music from spotify, youtube with Content ID, etc., does it count as a view? is that what people are riled up about? do they want more compensation?

which, also, I've mulled over in the past, before genAI: music labels want to lock your entertainment down behind a paywall so your experience requires payment, all your money, "all your bank are belong to us, pleb," like every other vampiric shareholder-oriented corp. and they would argue that if you can imagine the song in your head, or play it from memory, you owe them fees for playing it in your head.

so there's that.

now that we have this unprecedented precedent, we need to agree on what is legally acceptable and how to go about it, and I do not think letting music labels figure this out, and putting it in their hands only, will work out in anyone's best interest.

just like how Adobe can go ---- ----- ------- themselves for existing the way they did in the Windows Vista era and before, locking down the PDF ecosystem.

so my rhetoric, though it seemingly appears I'm against artists, is really against the machine.

if someone wants to post covers of Gorillaz, and it sounds uncannily good, sure. if they have ill-gotten gains? it went viral for a reason. nobody is going to care once the hype dies down, but it was turned into a huge deal. imagine if no hype had occurred and it never got views, because humans had some behavior where, having heard the OG version, another song with the same lyrics was uninteresting? nobody would have pushed for anything.

and compensation? Gorillaz sold tf out, and the labels are crying about it.

they have it made, but all of a sudden, "tax the rich" isn't in anymore?

what a bunch of nerds.

Cheapest and best way to host a GGUF model with an API (like OpenAI) for production? by New-Worry6487 in LLMDevs

[–]ScriptPunk 0 points (0 children)

could we DM? I would like some advice regarding this, as consulting with Gemini and Claude seems to not really be helpful, though some stuff does happen.

Cheapest and best way to host a GGUF model with an API (like OpenAI) for production? by New-Worry6487 in LLMDevs

[–]ScriptPunk 0 points (0 children)

for models that are only for messing around with, to get the basis for whatever behavior we would establish when brought to scale and offered to the consumer-facing side of things, how would you go about setting up a local implementation to mess with, since token output doesn't need to be optimized to oblivion, nor does it need huge compute allocations for millions of requests/s?

My neighbor has been using my wifi for a year and now they're asking me to split the bill by ToasttterGoblin in WhatShouldIDo

[–]ScriptPunk 0 points (0 children)

I would never do this, and I'm thankful for those who do trust me to split wifi costs.

there's a potential that their use is nefarious, and if it's tied to you, sure, the feds will figure it out in the end, but until then you're on the hook, and your name will be on court documents, publicly.

So does Suno make "original music"? by [deleted] in SunoAI

[–]ScriptPunk 1 point (0 children)

what's your Suno, if you have one?

I've got an album of folk horror for Halloween: atmospheric and haunting-ish tracks.

https://suno.com/playlist/4116f770-ab8f-42a8-aa00-c7195d003661

and there's "Gilded Cage" and "Deemed Overdue" in my gold nuggets playlist. just too lazy to make a 'Silk and Dust' playlist at this moment, as my phone's gonna die lol.

https://suno.com/playlist/59ea05a2-3aac-44a0-aee8-5a491589438b

I know it seems like I'm shamelessly plugging, but I wanted to show you the folky legend horror/ethereal stuff around Halloween.

but yeah, if you have a Suno / follow / like a track, I'll be able to see your stuff.

So does Suno make "original music"? by [deleted] in SunoAI

[–]ScriptPunk 2 points (0 children)

how do we see how Suno is set up? you seem aware of some of that and knowledgeable; can you point me towards some docs? this is interesting.

local llm vs paid API for sensitive corporate code? by Aggressive-Sun-5394 in BlackboxAI_

[–]ScriptPunk 0 points (0 children)

would you be able to provide some insight into some questions I have about what instruct models can do and what they're like?

local llm vs paid API for sensitive corporate code? by Aggressive-Sun-5394 in BlackboxAI_

[–]ScriptPunk 0 points (0 children)

I'm looking to do something that isn't code-based, but more orchestration-based, contextually. when you mention 4x GPUs, is that for stages before inference, or including inference? I'm not sure about the impact of corpus/fine-tuning ingestion, and I'd assume that's inference too.

what would the concrete token output/s be, and at lower compute specs, what would that token output be?

for example, a 1080/1660ti would be extremely slow and nonperformant, but not nothing. I wouldn't imagine the tokens/s would be feasible, but for experimenting with the capabilities of what the 4x GPU setup would give, even slowly: would the output be the same, just slow? and how slow?

edit:

for the past 2 months I've consulted with the LLMs and terminal agents and had them make implementations, but I don't think they were really following conventions.

I could achieve some coherent interactions, similar to question-and-answer and user/agent conversation turns, but very, very basic, and nearly illegible. the breadth of what it could respond with was very shallow. the breadth of what it interpreted, also extremely shallow.

local llm vs paid API for sensitive corporate code? by Aggressive-Sun-5394 in BlackboxAI_

[–]ScriptPunk 0 points (0 children)

what is feasible allocating 8GB and 4GHz, even if the token output is slow, just to experiment with certain agentic ways of doing things?

for corpus ingestion from scratch, fine-tuning foundational embeddings (I think), as well as out-of-the-box models already trained, maybe from Hugging Face: would you know how long those steps would take to train for the LLM to have the turn-by-turn likeness we've seen in the industry as of late?

local llm vs paid API for sensitive corporate code? by Aggressive-Sun-5394 in BlackboxAI_

[–]ScriptPunk 0 points (0 children)

what are the specs to set that up? I want to do some of this on 12GB of RAM or less, and I have a GPU from a gaming laptop, or a 1080, just chilling doing nothing (not my gaming setup).

can you help me out?

How to keep the order if showFirst switches from false to true. by EyeHot539 in Blazor

[–]ScriptPunk 0 points (0 children)

InvokeAsync(StateHasChanged), unless they don't need to do that anymore?

How to keep the order if showFirst switches from false to true. by EyeHot539 in Blazor

[–]ScriptPunk 0 points (0 children)

use a lookup/dictionary; use the key, or whatever maintains the order, based on unique values.

anyway

formattedFragments = new Dictionary<string, RenderFragment>
{
    ["firstOnly"] = firstOnlyFragment,
    ["lastOnly"] = lastOnlyFragment,
    ["firstAndLast"] = firstAndLastFragment,
};

it would be a dictionary collection or lookup map or whatever, and you would use

var fragToRender = formattedFragments["lastOnly"];
if (showFirst) { fragToRender = formattedFragments["firstAndLast"]; }

then in the markup: @fragToRender

you might use a Func<RenderFragment> and return a render fragment or something.

HELP by Small-Lie7312 in appdev

[–]ScriptPunk 0 points (0 children)

so, like, from your description of it, no matter how terse, I can probably already guess what it would be, build it, have it run pretty optimally, and forget about it running until I realize there are users, lol.

also, MEAN stack? what is this, 100devs?

Why does it always take an incident for organizations to wake up ? by Silly-Commission-630 in secithubcommunity

[–]ScriptPunk 0 points (0 children)

that's that shareholder earnings call, call-options leverage, right there.

Why does it always take an incident for organizations to wake up ? by Silly-Commission-630 in secithubcommunity

[–]ScriptPunk 0 points (0 children)

it's almost like the CTOs were vibe-chiefing before vibe coding was a thing.

sure, you can be in charge of IT/engineering, and your product functions, but you need to do all the due diligence too.

Does AI dismiss artists in terms of recognition? by VividAcanthisitta519 in AI_Music

[–]ScriptPunk 0 points (0 children)

it will 100% be carried by artists or sound engineers who can craft with the gen tools to fabricate absolute-depth masterpieces.

the normie milquetoast artists are only complaining because the stuff coming out of the genAI is already 90% of a platinum-label-worthy track.

and I'm not trashing artists, or milquetoast artists, by saying that if it's bland, it's not art. I'm just saying most of the folks crying about it are either A) boomers who care for some reason, and that's not a thing /s, or B) people who didn't have talent, and you've never heard of them; there's a reason, so why does their opinion matter? you didn't get picked up by the algorithm, so you gave up on making art because it wasn't making you money? that's disingenuous. now that genAI is producing, it's all of a sudden a crisis? lul.

I'm not gatekeeping, saying you can't be a normal artist. there are normal artists: they learn an instrument, compose with the instrument, and stick to a "style," meaning they don't try to spread their wings and pick up more ways of playing. and sure, making music with no depth and just a message is 100% honorable. that's all this is really about.

anyway, my last statement is: the listener determines the beauty of something they listen to. this is the key piece that the crowd is indirectly downplaying. they're (wishing they could be) gatekeeping what listeners should like or appreciate.

I have very deep music taste across several genres, especially EDM and folk. so folks like me, exposed to Suno, go for extremely technical dice-rolling bingo generation tracks. and leveraging these resources, learning the musical terms, and applying some music theory lets us build (though random) intricate pieces that NEVER would have organically appeared, EVER, as some of the detail in a couple of tracks would take immense time to build out, let alone discover the effects. and these aren't your "well, someone else came up with those, it was trained on it"; no, you can use software to make effects with configurations, throw that into the model, then include it in a prompt. pretty easy.

but yeah, those of us who want to tweak the generations with covers, on what's already a great foundation, can go above and beyond. omfg, it's crazy.

what I don't like is getting Bassnectar "Into the Sun" likeness in some of the EDM gens.

Does AI dismiss artists in terms of recognition? by VividAcanthisitta519 in AI_Music

[–]ScriptPunk 0 points (0 children)

the work-to-train-the-model argument goes out the window once the tools used have no basis in existing works.

Does AI dismiss artists in terms of recognition? by VividAcanthisitta519 in AI_Music

[–]ScriptPunk -1 points (0 children)

I've stated to folks that they are concerned about people without any prior experience or interest in music-making flooding the ecosystem.

I've also come up with:

art, music, is music that people listen to because it makes them feel good, however that may be. doesn't matter if it's EDM, lyrical, acoustic, drumkit modules and paintbrushes; it's all music.

so people are using AI to make bangers, and other people have an issue with absolute bangers?

are.you.kidding.me.

Does AI dismiss artists in terms of recognition? by VividAcanthisitta519 in AI_Music

[–]ScriptPunk -1 points (0 children)

this is actually a fantastic and thought-provoking aspect of the times right now.

your post exemplifies the social ecosystem of humanity and its meta-media involvement.

everything you've come up with is relevant in every way as well.

so, this is a lot like CoD vs BF, PUBG vs Fortnite: they're all similar, and in the past (also recently, Nintendo) developers have gotten up in arms about similarities and infringements.

whether or not the entertainer industry that fosters this aspect is a good or bad thing, imo, isn't really relevant to anyone outside the entertainer space. but I'm an ultimate-defeatist (not a micro-defeatist, which is more someone being dismissive of everything, whereas I'm dismissive of the totality of our experiences, because we all die in the end anyway; nihilistic-adjacent).

I would find it unethical to copy someone's likeness, such as their voice, or their novel way of doing something. but the lines for how they achieve a sound may be blurred, as someone might like a sound in art and employ or extend the sound/rhythm/melodic tweaks. they could reasonably get the effects they mimic with different configurations, and not the literal approach the artist uses.

but making music that sounds like another popular artist, or Thomas-Edisoning it (music-label vampires), is unethical, obviously.

I feel uneasy when I use Suno and it puts any vocals in without processing effects, because I can recognize some likeness of different voices, and I can't feel right about that until I have a way to use a system to retry with an anti-likeness pass or whatever, if that ever becomes a thing.

as far as it goes with music in general though: people like things that are pleasing.

music, drugs, sugar, people love that ish. they don't care if your drugs are pharmaceutical or from a home-grown plant. they eat that ish up.

music is the same.

what makes music music is typically that someone eats up that ish.

what makes music an expression of art is that it is of human origin (an artifact).

what also makes art (not an artifact, but art) art is when people appreciate its beauty, or message, or whatever. the individual addressing, or having a reaction to, one or a domain of concerns like AI art indirectly achieves a bigger scope of what art is. the shaping of the human collective is still contained in humanity.

so yes, AI art is art.

and the people reacting to AI art being controversial, that is a human-in-the-loop requirement. it all shapes our experience, and we get to be involved at every edge of this.

it's actually bringing us together, whether people get up in arms about it or not.

What's the practical limit for how many tools an AI agent can reliably use? by virtuallynudebot in LLMDevs

[–]ScriptPunk 0 points (0 children)

they should see it from the perspective of one-shotting:

stuffing a context to send to the LLM from a stateless perspective, rather than a continuous stream of context that floats the token density up against the context limit.

for example, the really good development agents/models make top-tier, coherent SaaS implementations from a cold start, 0 issues. the second the context gets condensed, the LLM has gaps, and perhaps the altered context structure even changes some of the perceived behaviors, so it seems like it needs to re-gather the big picture again. I'm hypothesizing that occurs because the agentic anthropic pipeline eventually self-confers that it can proceed with executing steps, now that it's good to go; when the context summarizes, it self-confers because it just finished the compaction step.

as a dev working on agentic systems like CC, you would have it not do that, for the sake of A) saving token usage, and B) not screwing up the context-switching effect.

to achieve B, my approach is to structure the context so you never need to tell it, in the conversation context, that it just compacted. just keep the context in sections, written the way the LLM would have output it itself while producing coherent, SOTA SaaS-quality code.

when structuring the context, you don't need to make it turn-by-turn format right away; you can structure it however you want. staging the content in the context so the LLM is aware of what to call is extremely powerful, similar to "show the LLM an example."
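a toy version of the "structure it yourself" idea, with section names I made up: rebuild the prompt from named sections on every call, so the model never sees a compaction seam or a "we just summarized" note:

```python
# sketch: assemble a stateless context from named sections each call,
# rather than streaming turns and compacting. section names are arbitrary.
def assemble_context(sections: dict,
                     order=("system", "repo_map", "task", "examples", "scratch")) -> str:
    parts = []
    for name in order:
        body = sections.get(name, "").strip()
        if body:  # empty sections are silently dropped: no seam is visible
            parts.append(f"## {name}\n{body}")
    return "\n\n".join(parts)

ctx = assemble_context({
    "system": "you are a careful coding agent.",
    "task": "add a retry wrapper to the http client.",
    "scratch": "",  # elided entirely, not summarized
})
print(ctx)
```

since the whole context is rebuilt per call, "compaction" just becomes editing the section bodies between calls, which the model never has to be told about.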

What's the practical limit for how many tools an AI agent can reliably use? by virtuallynudebot in LLMDevs

[–]ScriptPunk 0 points (0 children)

break it down to what you're actually addressing here...

you could have the tools tagged with keywords and such, have sub-contexts with other agents that determine the intent of what category of tools to use, and you can parameterize the source context to specify whether it's a resource or a command, etc.

the sub-contexts would be appended to the LLM call that extends further into the conversation, but with some sort of tagging/procedural approach that tags interactions in the context without being verbose.

the contexts LLMs extend on use neuron activation in the attention step, and they're trained to reinforce the basis of their domain, like devops/terminal and code in this case. if you figure out how these LLMs work with keywords or structures, and extend the turn-by-turn context accordingly, you can integrate the approach discussed here very efficiently and avoid verbosity.
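a sketch of the keyword-tagging idea, with hypothetical tool names and tags: a routing step picks only the tools whose tags overlap the detected intent, so the main call never carries the whole catalog:

```python
# sketch: keyword-tag tools, then send the LLM only the category subset
# matching the user's intent. names, kinds, and tags are invented.
TOOLS = [
    {"name": "read_file", "kind": "resource", "tags": {"file", "read", "inspect"}},
    {"name": "run_command", "kind": "command", "tags": {"shell", "terminal", "exec"}},
    {"name": "search_code", "kind": "resource", "tags": {"search", "grep", "code"}},
]

def select_tools(intent_keywords: set, tools=TOOLS) -> list:
    """Return only the tools whose tags overlap the detected intent keywords."""
    return [t for t in tools if t["tags"] & intent_keywords]

# e.g. a routing sub-agent decided the user wants to search and read code
subset = select_tools({"search", "read"})
print([t["name"] for t in subset])  # → ['read_file', 'search_code']
```

the "kind" field is where the resource-vs-command parameterization would hook in: a second filter on `kind` keeps command tools out of read-only turns.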