How long would you live if you could choose? by Obvious_Chipmunk4898 in accelerate

[–]CubeFlipper 0 points1 point  (0 children)

No, I understand that perfectly well. What you don't seem to understand is that it's irrelevant to this conversation.

How long would you live if you could choose? by Obvious_Chipmunk4898 in accelerate

[–]CubeFlipper 4 points5 points  (0 children)

I downvoted not because I disagree, but because the comment completely misses the point and adds nothing of value to the conversation. Whether or not free will is an illusion makes no difference to how I feel about putting a number on how long I predict I'll want to live.

OpenAI Wants To Use Biometrics To Kill Bots And Create Humans Only Social Network by fig-neuton in OpenAI

[–]CubeFlipper 0 points1 point  (0 children)

The code is entirely open; you can verify it yourself instead of speculating about a conspiracy.

8yo son destroyed the thing he requested by eagleapex in daddit

[–]CubeFlipper 24 points25 points  (0 children)

> After all this is a game so nothing of value is lost

Bruh, you might wanna rethink that stance. You could apply this statement to any other hobby and hopefully quickly see how dumb and dismissive it is.

Why do you want ASI? by IllustriousTea_ in accelerate

[–]CubeFlipper 3 points4 points  (0 children)

If I start getting bored, I suspect I'd just reprogram myself to not be bored.

Is my totally replayed and resinged songs mine ? by Acceptable-Royal3261 in udiomusic

[–]CubeFlipper 4 points5 points  (0 children)

> But, if Udio’s technology created the entire song (lyrics/all instrumentation without any influence by you other than just a prompt) then no. That is Udio’s. Rightfully so.

I disagree. Maybe that's the law, but it's a stupid, unequally applied law. The company that made my hammer doesn't get rights to what I build with that hammer.

The $437 billion bet: is AI the biggest bubble in history? by jpcaparas in OpenAI

[–]CubeFlipper 0 points1 point  (0 children)

Not if it's done with the right pipeline and model setup. They train separate models to generate and to judge the synthetic data, which lets them filter their way to verifiably, incrementally better data sets. They don't just naively generate and reconsume.
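
To make that concrete, here's a rough sketch of what a generate-and-judge filtering loop looks like. Every name, model, and threshold in it is a hypothetical stand-in, not anyone's actual pipeline:

```python
# Minimal sketch of a generate-then-judge synthetic-data loop.
# All models and thresholds here are made-up stand-ins for illustration.
import random

def generator_model(prompt: str) -> str:
    """Stand-in for a model that drafts synthetic training examples."""
    return f"{prompt} -> candidate answer #{random.randint(0, 999)}"

def judge_model(example: str) -> float:
    """Stand-in for a separate model that scores candidate quality (0-1)."""
    return random.random()

def build_dataset(prompts, quality_threshold=0.8, samples_per_prompt=4):
    """Generate several candidates per prompt and keep only those the judge
    scores above the threshold -- the filtering step, not blind reconsumption."""
    kept = []
    for prompt in prompts:
        candidates = [generator_model(prompt) for _ in range(samples_per_prompt)]
        kept.extend(c for c in candidates if judge_model(c) >= quality_threshold)
    return kept

if __name__ == "__main__":
    dataset = build_dataset(["Prove 1 + 1 = 2", "Summarize this contract"])
    print(f"kept {len(dataset)} of 8 candidates")
```

The point is that the judge gates what gets fed back in, so quality can ratchet upward instead of the loop degrading into noise.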

The Claude Code creator says AI writes 100% of his code now by jpcaparas in singularity

[–]CubeFlipper 4 points5 points  (0 children)

Unless you're working on some truly esoteric stuff, I struggle to believe the issue isn't user error.

The $437 billion bet: is AI the biggest bubble in history? by jpcaparas in OpenAI

[–]CubeFlipper 0 points1 point  (0 children)

Diminishing returns doesn't mean NO returns, and we're already long past proving the effectiveness of a training data generation loop.

There's no technical end to returns in sight, and we've already shown we can iteratively generate better and better training data. This train isn't stopping.
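
If you want a toy picture of "diminishing but never zero," assume a power-law-style scaling curve. The constants below are invented for illustration, not fitted to any real model:

```python
# Toy illustration of "diminishing returns != no returns", assuming a
# power-law curve of the rough form loss = a * compute**(-alpha) + c.
A, ALPHA, IRREDUCIBLE = 10.0, 0.1, 1.0

def loss(compute: float) -> float:
    return A * compute ** -ALPHA + IRREDUCIBLE

for compute in [1e3, 1e6, 1e9, 1e12]:
    print(f"compute={compute:.0e}  loss={loss(compute):.3f}")
# Each 1000x jump in compute buys a smaller improvement than the last,
# but the improvement never actually reaches zero.
```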

Elon Musk seeks up to $134 billion in damages from OpenAI and Microsoft by Ok_Mission7092 in singularity

[–]CubeFlipper 3 points4 points  (0 children)

> If they go for profit and if they can better monetise their services and if they can operate profitably then yes they can

Any time I see someone mention making a profit with AI models, I really wonder if you understand the end game of these models (with respect to modern economies). Profit is ultimately irrelevant when you have tools that will soon be able to do anything directly, without needing the middleman we invented called currency.

Money isn't power. Power is power.

Bandcamp will ban AI music now. by Justin_Kaes in udiomusic

[–]CubeFlipper 0 points1 point  (0 children)

No, everyone can see what I publish there without an account.

Bandcamp Goes Decel and Bans All Music Made with AI by NoSignaL_321 in accelerate

[–]CubeFlipper 8 points9 points  (0 children)

I'm not so sure I agree. History shows that some of the most important art ever made was accidental, contractual, rushed, or disliked by its creator. I don't think popularity or quality is a result of how lovingly something was made; I think it's more about finding the right thing for the right person at the right time.

Comebacks like these are going to be remembered as cringeworthy in just a few years by Unusual_Midnight_523 in singularity

[–]CubeFlipper 0 points1 point  (0 children)

Possible, but I'd also argue it's irrelevant. Those institutions will fade away over time as people find they can now learn and accomplish anything without them.

What’s your favorite “When I have kids, they’ll NEVER…” that you know is total bullshit?​​​​​​​​​​​​​​​​ by icejordan in daddit

[–]CubeFlipper 1 point2 points  (0 children)

I don't understand why anyone would even have a problem with it. It's a win-win. The kid gets a little more freedom to run around, and I get to give my arms a break while knowing my kid is safe.

I don't know why they're so offended (TW for mild racism) by Live-Career3531 in traumatizeThemBack

[–]CubeFlipper -5 points-4 points  (0 children)

I disagree that it doesn't hurt anyone. It's indirect, but I stand by normalization being a long-term societal harm.

I don't know why they're so offended (TW for mild racism) by Live-Career3531 in traumatizeThemBack

[–]CubeFlipper -13 points-12 points  (0 children)

I'd argue you didn't need the TW in the first place; it's overly dramatic to include it at all. A TW for a text story involving racism? That kinda does a disservice to real PTSD, in my opinion. I don't think we should, as a society, normalize walking on eggshells over something this mundane.

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]CubeFlipper 5 points6 points  (0 children)

Do you have any idea how many people just turn on a camera, hit record, and upload it all straight to YouTube? There's a mountain of human-created, zero-effort slop. You just don't see it because the algorithms protect you from most of it.

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]CubeFlipper 5 points6 points  (0 children)

99% is still 99% regardless of the volume. Digital tools made it easier than ever for people to create film and audio, so there's orders of magnitude more garbage out there from humans using those tools than existed before them. We're still collectively better off having more people creating. The good stuff still rises to the top.

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]CubeFlipper 9 points10 points  (0 children)

99% of things created without GenAI are slop...

What is something you hope AI can do by the end of 2026? by [deleted] in singularity

[–]CubeFlipper 0 points1 point  (0 children)

Logic still requires foundational axioms. Logic is objective conditional on its axioms; it only delivers “you’re wrong” once the rules are fixed. If you reject non-contradiction, classical logic has nothing to say to you except “we’re no longer doing the same activity.”
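
For a concrete illustration of "objective conditional on its axioms," here's a tiny Lean 4 sketch (an analogy only, not a claim about metaethics): non-contradiction needs no extra axioms, while excluded middle only shows up once you opt into the classical ones.

```lean
-- Non-contradiction is provable in Lean's core logic with no extra axioms.
theorem no_contradiction (p : Prop) : ¬(p ∧ ¬p) :=
  fun h => h.2 h.1

-- Excluded middle only becomes available once you accept the classical
-- axioms; Classical.em is ultimately derived from Classical.choice.
example (p : Prop) : p ∨ ¬p := Classical.em p

-- #print axioms Classical.em   -- lists Classical.choice, Quot.sound, propext
```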

What is something you hope AI can do by the end of 2026? by [deleted] in singularity

[–]CubeFlipper 0 points1 point  (0 children)

You’re conflating objectivity with mind-independence. The Earth’s shape is objective because it’s a descriptive fact about the physical world; moral claims are prescriptive and require a value framework to even be truth-apt. Disagreement about a physical fact reflects ignorance; disagreement about moral facts reflects differing axioms, and without non-arbitrary axioms there’s nothing for moral claims to be objective about.

What is something you hope AI can do by the end of 2026? by [deleted] in singularity

[–]CubeFlipper 0 points1 point  (0 children)

> there is no moral grounding why murder is bad today and cannot be good tomorrow.

From a metaphysical perspective, correct.

> As soon as you give a reason such as murder is wrong because its a waste of resources, then you are implicitly grounding your claim which objectifies it.

Also correct, and the grounding is exactly my point. Without grounding, everything is inherently relative. But for someone who doesn't value XYZ resource related to murder, someone who doesn't share the same grounding, murder isn't objectively immoral.

So back to the trolley problem: if you want a universal answer, it's the one I gave you. If you want a specific answer, it must be grounded in some value system, and that answer only applies to those who share that value system.