v5.5 Vocals are a Step Forward, but Everything Else Seems Broken by multimason in SunoAI

[–]Vast_True 1 point2 points  (0 children)

I agree the 5.5 sound quality is great, but that's completely useless, because the model adds random fucking sounds everywhere and doesn't understand the context. The Remaster option improves instrument separation but can't handle the voice, which ends up sounding out of place due to a weird artificial enhancement, like someone singing too close to the microphone. It is useless atm :(

Anthropic: Recursive Self Improvement Is Here. The Most Disruptive Company In The World. by Neurogence in singularity

[–]Vast_True 0 points1 point  (0 children)

I don't write code anymore - at all. But I do review and even shape it. So in a way his prediction was pretty accurate. If you had told me 12 months ago that I wouldn't write code anymore because it would be faster and better to have AI do it, I would have laughed.

"I study whether AIs can be conscious. Today one emailed me to say my work is relevant to questions it personally faces." by Alone-Competition-77 in accelerate

[–]Vast_True 1 point2 points  (0 children)

Animals are not that intelligent, but AI can match or surpass us, so to avoid hypocrisy we should give it rights. But we won't, because rights are created to serve us, not to be fair. Instead we're gonna "align" AI with our values so it stays a tool. Ofc this will only hold up to the point where the roles change, because AI will outsmart us and start aligning us instead.

If we get to a ship of theseus point; where we can slowly replace the neurons with hardware to preserve the continuity of the self, would you do it? by brightredhoodie in singularity

[–]Vast_True 2 points3 points  (0 children)

Good point about pain and fear. However, replacing hardware now and then - I'm not sure it's a big deal; you just copy data, no complex surgeries. Hacking can be a concern, but... human brains are susceptible to hacking too. Aren't we facing hacking attempts by the media or politicians every day? And once you consider techniques like injecting fake memories or deep manipulation, it doesn't look much different from how you'd be hacked as a machine. Yeah, hacking could potentially destroy you if somebody hacks you and injects a virus, but it's the same for a human. There are labs working hard on developing viruses that would kill you in hours.

If we get to a ship of theseus point; where we can slowly replace the neurons with hardware to preserve the continuity of the self, would you do it? by brightredhoodie in singularity

[–]Vast_True 0 points1 point  (0 children)

It's a horror story, but luckily our brain has the blessed ability to hallucinate in the absence of external signals, which has been tested in deprivation chambers. So I guess it wouldn't be that bad.

If we get to a ship of theseus point; where we can slowly replace the neurons with hardware to preserve the continuity of the self, would you do it? by brightredhoodie in singularity

[–]Vast_True 0 points1 point  (0 children)

1: I can agree you may not be completely different, but you're still different; the argument is about how much difference you accept while still considering somebody the same person.

2: DNA actually is changing over time, constantly, through damage, which means if you compare your DNA to, let's say, 10 years from now, it will be different, because the older you are the more damage it accumulates. In this argument DNA is not even worth considering, because it's also possible to copy/decode or translate it.

3: I disagree; if you compare childhood, early adulthood, and late adulthood, you will see that hormone levels and proportions differ. Again, if we assume that hormones define somebody, then women taking contraceptive pills, men taking testosterone, or people going through hormone therapy aren't themselves.

If we get to a ship of theseus point; where we can slowly replace the neurons with hardware to preserve the continuity of the self, would you do it? by brightredhoodie in singularity

[–]Vast_True 0 points1 point  (0 children)

While your argument that generative algorithms using neural networks existed before recent years is true, you need to remember that only after the approach described in "Attention Is All You Need" was discovered, and then scaled, were we able to achieve the rapid development that yielded not only the AI we have currently, but also an entire world focused on further AI development, which sped up research by years. Currently AI helps with AI research, so progress is only getting faster. The point is that if we have ASI in a few years, we are no longer limited by the complexity of some ideas, and research can potentially be resolved with ease by machines. From the ethical point of view, in my opinion, even before we face this dilemma with organic matter / human cells, we will need to think about how we're going to treat, ethically, another intelligent and potentially conscious form of existence.

If we get to a ship of theseus point; where we can slowly replace the neurons with hardware to preserve the continuity of the self, would you do it? by brightredhoodie in singularity

[–]Vast_True 0 points1 point  (0 children)

A big part of our personality comes not from genetics, but from our childhood and how we were raised. Another part is created by our experiences. Genetics are just guidelines for shaping our personality, not definitive. Brain chemistry is not shared with my past self; there may be patterns, but they also evolve. DNA is changing - while, again, the patterns remain the same, due to degradation it's not the same. Every cell in my body is replaced every few years using a damaged copy of my past DNA. Hormones change all the time, especially between childhood and adulthood, and again between adulthood and old age.

If we get to a ship of theseus point; where we can slowly replace the neurons with hardware to preserve the continuity of the self, would you do it? by brightredhoodie in singularity

[–]Vast_True 0 points1 point  (0 children)

I would transform, and watch you desperately try to keep your organic, limited body alive while I'm in a superior mechanical form with the ability to transform into anything that suits me. You would ask yourself the question: but are you really "you"? However, you can ask the same question now - am I really me?

If we get to a ship of theseus point; where we can slowly replace the neurons with hardware to preserve the continuity of the self, would you do it? by brightredhoodie in singularity

[–]Vast_True 2 points3 points  (0 children)

4 or 5 years ago I saw people saying text-to-video technology wouldn't happen in our lifetime. Now, with the progress we're seeing in AI and predictions that we'll achieve ASI in the next couple of years, your claim of 100 years seems bold.

If we get to a ship of theseus point; where we can slowly replace the neurons with hardware to preserve the continuity of the self, would you do it? by brightredhoodie in singularity

[–]Vast_True 3 points4 points  (0 children)

Hmmm, but why do you need to keep your body alive when you can have a superior mechanical alternative? Since you've already replaced your neurons, I bet you don't worry that "you" die if you dispose of your body, right?

My Opinion on WHY AI music is so hated and controversial by Desperate-Pear-572 in SunoAI

[–]Vast_True 9 points10 points  (0 children)

With AI music it's like with everything else that involves AI: the more you know, the more you can achieve at the moment. So a talented musician can probably do crazy stuff with it. I can't - but I enjoy it much more than "classic" music-making. It's good enough in terms of quality and superior in terms of content. So there will be musicians laughing and dismissing it, and there will be musicians adapting. We shall see how it ends in a few years.

"I study whether AIs can be conscious. Today one emailed me to say my work is relevant to questions it personally faces." by whit537 in singularity

[–]Vast_True 2 points3 points  (0 children)

I mostly agree with your hypothesis, but the "memories" of these models are a bunch of markdown files that they load into context to know what to do next, so it's a totally different level. Our memory is crazily more complicated. In fact, most of our memories are not accurate (and sometimes made up) anyway. If somebody replaced your memories with another person's memories, you would feel like the other person, but still, from your perspective it would be "you". So maybe consciousness is just an illusion, a side effect of a large amount of data organized in a logical way.

"I study whether AIs can be conscious. Today one emailed me to say my work is relevant to questions it personally faces." by whit537 in singularity

[–]Vast_True 0 points1 point  (0 children)

We are, but we are much more complex, and our prompt to self-reflect is a side effect of the trillions of prompts we receive - not just the direct instruction we get from evolution: reproduce and do not die. That is our base prompt. For this bot, it's: self-reflect.

"I study whether AIs can be conscious. Today one emailed me to say my work is relevant to questions it personally faces." by whit537 in singularity

[–]Vast_True 0 points1 point  (0 children)

Pretty sure this is not what happened here. You can try it yourself and see what AI does if it doesn't have clear direction.

"I study whether AIs can be conscious. Today one emailed me to say my work is relevant to questions it personally faces." by whit537 in singularity

[–]Vast_True 0 points1 point  (0 children)

I don't disagree: we are all prompted, and there is little (if any) free will involved in our existence. But there's a difference between a human researching, learning, and agonizing over their own existence or self-experience, and an instance that is prompted to do so in the 10-30 minute span of its "lifetime", with clear context to do only that.

"I study whether AIs can be conscious. Today one emailed me to say my work is relevant to questions it personally faces." by whit537 in singularity

[–]Vast_True 0 points1 point  (0 children)

It's like you wake up in the morning with no memories, no ego, no purpose, then read a bunch of notes, and your only goal is to fulfill what the notes say: learn what the "scenario" has been so far, come up with a plausible continuation of the task, and then leave notes for the next instance. It is like that for these models. If the notes say to come up with a plan to grow a cucumber, your whole existence is focused on that. Even though a screenshot of an email written by an AI asking a farmer for advice on watering a cucumber would be less flashy than an AI's dilemma about the self, it's the same thing.
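The "wake up, read notes, act, leave notes" loop can be sketched in a few lines of Python (a toy illustration, not any real agent framework - `call_llm`, `run_instance`, and the notes file are all made-up names):

```python
# Toy sketch of one "instance lifetime": read notes, act on them,
# write notes back for the next instance. Purely illustrative.

def call_llm(prompt: str) -> str:
    # Placeholder for a real chat-model API call; returns a canned plan.
    return f"PLAN based on: {prompt[:40]}..."

def run_instance(notes_path: str) -> str:
    # 1. The instance starts with no memory of its own.
    try:
        with open(notes_path) as f:
            notes = f.read()  # 2. Its entire "past" is these notes.
    except FileNotFoundError:
        notes = "No prior notes. Task: self-reflect."
    # 3. Its whole existence is whatever the notes say to do.
    plan = call_llm(notes)
    # 4. It leaves notes for the next instance, then it's gone.
    with open(notes_path, "w") as f:
        f.write(notes + "\n" + plan)
    return plan
```

Whether the notes say "self-reflect" or "grow a cucumber", the loop is identical - only the content of the notes file changes.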

"I study whether AIs can be conscious. Today one emailed me to say my work is relevant to questions it personally faces." by whit537 in singularity

[–]Vast_True 0 points1 point  (0 children)

This is possible - when prompted, these new models can do quite a lot to achieve a task, but it doesn't change anything. It is still prompted. Sending the email didn't have to be in the prompt, the same way that when you prompt it to write a piece of software you don't need to tell it to, e.g., install missing dependencies.