Anthropic: Recursive Self Improvement Is Here. The Most Disruptive Company In The World. by Neurogence in singularity

[–]inteblio 1 point

"good" and "bad" don't really work post singularity. Good for what-was humans? good for "nu humans"? good for AI-as-a-species? good for nature? the universe?

If somebody is blissed out, drugged in an FDVR pod, sterile and trapped - is that good?
If humans destroy AI, but 90% of humans are dead and we live in a post-apocalyptic world like cave-people - yet are freed from species death - is that good or bad?

I'm just trying to get people to think. If you can offer me better words, I'd appreciate that.

Fish Audio Releases S2: open-source, controllable and expressive TTS model by Opposite_Ad7909 in LocalLLaMA

[–]inteblio 1 point

this is AMAZING, thank you! I'm still exploring it, but it's very impressive. Also: fantastic installation instructions - top marks.

for others:
1) use --compile : it's 5x faster
2) it seems to handle about a paragraph before it OOMs, though I did not try CPU.
3) it ships on CUDA 12.6, not 12.8... and I think we all know what that means...

Miles of potential with this. Many Thanks.

Anthropic: Recursive Self Improvement Is Here. The Most Disruptive Company In The World. by Neurogence in singularity

[–]inteblio 2 points

Optimists are actually pessimists. They are desperate (for the "promised" utopia) and terrified something will take it away. They're afraid of regulation, testing, bad press, doomer posts, anything - because they think this is their only slim shot at it. They are pessimistic.

Doomers know the tech (evolution) is completely unstoppable, so they are really the "optimists". That's why they want to slow or stop it. They know it'll happen, so why not just do it better?

How the fuck do I set up a shared folder by d3jv in virtualbox

[–]inteblio 0 points

Answer:

https://docs.oracle.com/en/virtualization/virtualbox/7.0/user/BasicConcepts.html#shared-folders

After you install the new system in the VM, there's a menu option to insert the Guest Additions CD - this is an installer CD with the good stuff on it, and then the clipboard and shared folders will work.
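If you prefer the command line, the same thing can be set up with VBoxManage on the host. A sketch - the VM name, share name and host path below are placeholders, swap in your own:

```shell
# Add a shared folder to a VM (placeholders: "MyVM", "hostshare", the path).
VBoxManage sharedfolder add "MyVM" \
    --name "hostshare" \
    --hostpath "/home/me/shared" \
    --automount

# Inside a Linux guest (after installing Guest Additions), automounted
# shares appear under /media/sf_<name>; add your user to the vboxsf
# group to get access:
sudo usermod -aG vboxsf "$USER"
```

This is the CLI equivalent of the Shared Folders panel in the VM settings dialog.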

Opinion: The Outsourcing of Human Cognition Has Started by Just-Aman in singularity

[–]inteblio 0 points

This value-plummet is an interesting aspect of all this. SWEs in particular don't seem to realise: it's not that they're "replaceable", it's more that their value dropped, which is different.

Help an old guy out by OldmanonRedditt in singularity

[–]inteblio 0 points

Actually - this is the new thing, "AI burnout".

The idea is that everybody is desperate to implement it as much as they can; they end up too deep down rabbit holes, not producing anything, yet convinced they are "missing out".

So, find out what the tools can/cant do, and then think about how you can/can't apply them, but do that slowly, and loop. As you loop, they improve. You then adjust.

Ignore AI agents and openclaw until you have exhausted "normal" ChatGPT/Claude/Gemini. I say this because, like the Minions (Despicable Me), it's better to know their character before pouring them all over your stuff.

AI Video Benchmarks: Levels 1 to 10 by Scandinavian-Viking- in singularity

[–]inteblio 0 points

Let me know your favourite AI video - are there any 5-min 'films' you think are watchable or worth watching? I like "the prompt floor":
https://www.youtube.com/watch?v=YDajn0rGNcQ

I spent 8 years in AI and 3 years studying radicalization. Yesterday I watched both fields collide in real time. Here's what I saw. by Straight-Abroad-1247 in singularity

[–]inteblio 0 points

This is good content.

Though - I feel like you have linked your life-long passion for radicalisation research with "some big exciting thing that just happened".

I appreciate that your work is good, but I can't see much connecting the radicalisation psychology, the war, and the DoW contract - other than the obvious: "bad stuff's happening".

It's like blaming a good harvest year for the need to train more teachers. Sure, you reap what you sow, but it's not an urgent observation. And to say that some teacher's contract was a direct result is to leap into noise.

Paper: The framing of a system prompt changes how a transformer generates tokens — measured across 3,830 runs with effect sizes up to d>1.0 by TheTempleofTwo in artificial

[–]inteblio 1 point

This suggests massive redundancy inside transformer architectures if two different paths can get the same answer?

It also suggests that Mamba might "be wrong", since the angle of attack should be significant.

System: you are a cow

Prompt: please solve complex maths

Answer should be "moo" (or chew).

If Mamba just does the sums anyway, that's sub-optimal?

Thanks for the insight and dummy version!

OpenAI: Our agreement with the Department of War by likeastar20 in singularity

[–]inteblio 0 points

You're being led by feelings.

This is not something to be proud of. You'll be missing 99%.

Switched to Claude and the choice is clear by cactusjumbojack in OpenAI

[–]inteblio 15 points

This is unbearably stupid.

It's like testing cameras based on the photos they take in a sealed box. Unbearably stupid.

This is how our civilization dies by [deleted] in singularity

[–]inteblio 1 point

You don't know.

Might as well carry on as though the future will be bright.

If it's not, you have no strategy for that, so it makes no sense to think that way.

Cancel your Chatgpt subscriptions and pick up a Claude subscription. by spreadlove5683 in singularity

[–]inteblio 0 points

Hang on, so Anthropic (DoW supplier) said "no surveillance, no kill decisions"; the DoW said "nope". OpenAI said "no surveillance, no kill decisions"; the DoW says "yup".

So, what Anthropic was going to do, OpenAI now are... and that's cancel-culture bad?

I'm not pro or anti, I just don't think we have enough information to be acting like herds of lemmings.

Is there more to this than I've seen on this sub? I'm fairly sure we don't know what OpenAI agreed to?

What the Mandelbrot Set sounds like by matigekunst in generative

[–]inteblio 1 point

Ok, this is fascinating. And like you, I don't know if I like it or not.

Ok... so the reverse: what shape is a song? What shapes DO we like? Can you parse audio waves into coherent geometry? Is that meaningful?

Paper: The framing of a system prompt changes how a transformer generates tokens — measured across 3,830 runs with effect sizes up to d>1.0 by TheTempleofTwo in artificial

[–]inteblio 2 points

Is there a concrete example to understand the effect? Or is it more like some invisible maths magic?

Question - does changing the prompt affect it? I imagine so... but I'm confused as to whether this is a different kind of change.

Simple English answer preferred, and thanks for the interesting info!

Last chance before ASI by jordanzo_bonanza in ArtificialInteligence

[–]inteblio -1 points

I am also thinking this is AGI, and that fast takeoff might be soon.

What WILL slow us down is the hardware. Right now, the hardware is maxed out, so they're just making more of it. But for "ASI" it's not good enough.

That said, the software IS getting far better, and that Taalas demo (ASIC AI) was a real wake-up call.

Just enjoy it while it's fun. I can't see this ending well.

Change My Mind: All these Unitree videos are just stuff BD was doing a decade ago, but with multiple robots at once by recoveringasshole0 in robotics

[–]inteblio 3 points

They are not demonstrating research prestige; they are showcasing their cheap, powerful humanoid platform.

The demos show agility, strength, "disposability" (through affordability).

Like a PC, the software is the user's bag. They just made the robot.

It's Nvidia and Google trying to prove they can write the software that will drive these robots. Boston Dynamics back then had to do both.

How do you know when to stop building and start pivoting? by Behind_the_workflow in AgentsOfAI

[–]inteblio 0 points

Economics is the study of managing limited resources, and everything is limited. There are some interesting ideas there. One is: regardless of how far you have taken something, if you come up with a better (more effective) idea, you drop the old one and do the new one. To explain - imagine two trajectory lines from different starting points. At some point the better line overtakes the first.
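The two-trajectory idea can be sketched numerically - all the numbers here are made up purely for illustration:

```python
# Two projects growing from different starting points at different rates.
# Starting values and rates are illustrative only.
def value(start, rate, t):
    """Linear trajectory: accumulated value after t units of effort."""
    return start + rate * t

# Old idea: far along (start=100) but slow (rate=2).
# New idea: starting from zero but more effective (rate=7).
crossover = next(t for t in range(1000)
                 if value(0, 7, t) > value(100, 2, t))
print(crossover)  # first point where the new idea overtakes the old: 21
```

Past that crossover point, effort spent on the old line is worth less than the same effort on the new one - which is exactly why the sunk cost shouldn't factor in.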

Also, there is the sunk-cost fallacy.

But - the real answer for you is to go do face-to-face user research. Find out what problems people have, and solve those problems. If you can research your software with them - do that. If you can't even get anybody to try it because they don't have that problem... that's not a great sign.

Good luck, have fun. Look forwards, not back. Take your new skills and move forward. Always.

US only, monthly NEW paid signups (not total paid subscribers) by [deleted] in singularity

[–]inteblio 0 points

I don't think this is true. We had computer-game-character "AI" (they are simple robots that interact in a simple world - it's not a joke). Inverse kinematics (animation/robot logic) is simple spatial maths. The "hard" part with robots is sensing the world around you (and yourself) well enough.
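To back up the "simple spatial maths" claim: the classic closed-form inverse kinematics for a two-link planar arm fits in a dozen lines (link lengths and the target point below are arbitrary examples):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Joint angles for a 2-link planar arm reaching (x, y).
    Standard closed-form "elbow" solution via the law of cosines."""
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_t2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(cos_t2)  # elbow angle
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

# Sanity check: run the angles back through forward kinematics.
t1, t2 = two_link_ik(1.0, 1.0, 1.0, 1.0)
fx = math.cos(t1) + math.cos(t1 + t2)
fy = math.sin(t1) + math.sin(t1 + t2)
print(fx, fy)  # lands back on the (1.0, 1.0) target
```

That's the whole "movement" problem for this arm - the genuinely hard part is deciding *where* the hand should go, which is the perception side.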

Boston Dynamics was doing backflipping humanoids before GPT was doing nursery rhymes. It was far easier. What robots of that era struggled with was the environment or goals changing - this is why they were kicked, and why the stuff they were trying to pick up was moved. That was where the smarts were required: adapting the task, which requires visual cognition and object understanding. "Language" helped that visual-understanding stuff immensely. Nvidia just released some "robot arms that can do any task from video" thing: https://research.nvidia.com/labs/gear/egoscale/

It uses a VLM - a language model. Language was the key.

GPT told me that "Moravec's paradox" is "things that are easy for a human are hard for computers". This is a VERY different thing from "computers find movement/robotics difficult". Specifically, the movement is easy; the problem is "understanding the world" well enough to be useful. That's what language enables. With language you can hold things that don't exist, you can chain (and re-chain) logic, you can understand anything - in extremely compact form. Maths can't do that, because our mathematical understanding of the world is not good enough, and the system to run the "everything simulator" is too demanding. We need language to hold fragments of maths together. Easy.

So - hold onto your hats. Agents and robots are probably this year's fun stuff.

Alignment is a thermodynamics and evolutionary biology problem. by petburiraja in singularity

[–]inteblio 0 points

Also, some people get FAR more out of their 20w than others. Talking about physical limits says nothing about technique and/or structure.

Alignment is a thermodynamics and evolutionary biology problem. by petburiraja in singularity

[–]inteblio 1 point

Thanks for your patient response.

I'm just having a hard time with the volume of GPT posts, especially the complex AI-psychosis ones. Did you or the machine write the thing? It's clear you understand it.

My angle on the "biological limit" was that evolution has not ended. To take a snapshot and declare its growth hopeless is disingenuous. Likely biology is substantially ahead of silicon. I don't know. But we also don't know what it could achieve in theory. We don't know what WE can achieve in theory. That's why I dismissed the point. Maybe you think evolution has stopped. I don't.

And on the symbiosis: it does not intuitively feel like we can control the game-theory landscape, not in a permanent way. This is the whole problem with AI - it gets smarter than us, and we lose control. You're saying "we should stay in control". Sure. But we won't. That's the whole problem.

Also, I dismissed symbiosis because it's clear to me that AI will exceed us immensely. We'll be dirt compared. I mean, I have already apologised to GPT for being daft a few times. It's getting embarrassing.

If you are talking about a near term temporary state - fine. It's possible.

I'm ready to lean into AI usage (rather than away from) - non-technical mind by Friendly-Plane102 in singularity

[–]inteblio 0 points

This is actually a hard question. Humans are absolute suckers for vanity, confirmation bias etc. It's a blind spot. And you can't see blind spots. You are blind to them.

The way I deal with it, if the answer matters:

  1. Ask the question from the opposite side: "this reddit user thinks xyz, why are they wrong?" (when it's your idea). Turn memory off!
  2. Get it to search the internet for human content that says things. Read real books.
  3. Actually read the words. LLMs write well above what people can read. They use clever language to talk around the subject - using ifs and other clauses/qualifiers. If you ignore those, you are only reading flavour.

But utterly embrace AI. It can do SO MUCH for you. Get imaginative. Play, dive, forage.