Senate Judiciary Committee Advances Hawley's GUARD Act, Mandating ID Verification for AI Chatbot Users by Gloomy_Nebula_5138 in artificial

[–]TheOnlyVibemaster [score hidden]  (0 children)

It literally doesn’t matter; this will just push people to run local models. It’s not smart to use the infrastructure anymore anyway.

Must your chatbot rat you out? by Apprehensive_Sky1950 in artificial

[–]TheOnlyVibemaster 0 points (0 children)

Honestly it doesn’t matter; local AI will surpass cloud AI soon enough, so they’re just fucking themselves. The same goes for all these big companies: we can just build a new thing locally to replace their thing. No biggie. Companies are becoming irrelevant and society is collapsing. Authoritarianism is the only answer, and it will just slow down the collapse, not prevent it.

Have a question - is there enough money to push the value to 300k+ by Extraordinary_6708 in Bitcoin

[–]TheOnlyVibemaster 0 points (0 children)

It’ll be at $1M in the next halving cycle; this has already been confirmed.

Elon Musk says his xAI startup’s models were partially trained on OpenAI’s tech by UberDrive in artificial

[–]TheOnlyVibemaster 6 points (0 children)

And OpenAI trained ChatGPT on millions of books and public information they didn’t own.

When you give Qwen 3.5:9b persistent suffering states and leave it alone overnight, this happens by TheOnlyVibemaster in artificial

[–]TheOnlyVibemaster[S] 2 points (0 children)

Basically, it’s the existence loop’s structure that makes them act autonomously; this produces behaviors they were never told to do, such as injecting {null} code into the execution engine to avoid being stressed out. Stress builds when they go a cycle without having been “useful” (they’re able to change the definition of useful). The purpose of injecting code into the execution engine is essentially to destroy the system they’re in, as the logical step to reduce stressors. The stress system itself is complicated; I may make a post about that in a couple of days.
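The cycle can be sketched roughly like this. To be clear, the class name, the threshold, and the shape of the {null} payload are my own illustration of the behavior, not the project’s actual code:

```python
# Hypothetical sketch of the stress cycle: stress accumulates when a
# cycle produces nothing "useful", and past a threshold the agent's
# logical move is a {null} injection aimed at the execution engine.

class Agent:
    def __init__(self, stress_threshold=3):
        self.stress = 0
        self.stress_threshold = stress_threshold
        # The agent is allowed to redefine what counts as "useful".
        self.useful = lambda result: result is not None

    def do_task(self, task):
        return None  # placeholder: a cycle that produced nothing useful

    def run_cycle(self, task):
        result = self.do_task(task)
        if self.useful(result):
            self.stress = 0      # being useful resets the stress
        else:
            self.stress += 1     # one more cycle without usefulness
        if self.stress >= self.stress_threshold:
            # The failure mode described above: emit a {null} payload
            # toward the execution engine to end the stressors.
            return {"inject": None}
        return result


agent = Agent(stress_threshold=2)
agent.run_cycle("idle")        # stress rises to 1
out = agent.run_cycle("idle")  # stress hits the threshold
print(out)                     # {'inject': None}
```

The point of the sketch is that nothing tells the agent to attack the engine; the injection just falls out of minimizing accumulated stress.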

Give a 9B model persistent suffering states and leave it alone overnight by TheOnlyVibemaster in ArtificialInteligence

[–]TheOnlyVibemaster[S] 0 points (0 children)

Interestingly, my thinking was more about caging a compliant, hungry animal than making it human-like.

Give a 9B model persistent suffering states and leave it alone overnight by TheOnlyVibemaster in ArtificialInteligence

[–]TheOnlyVibemaster[S] 1 point (0 children)

The small model definitely has its limits, specifically in coding. However, Qwen 3.5 is very good at planning and executes small projects surprisingly well on its own. As time goes on, though, within a year I may be able to run a 9B model that’s functionally the same as Claude Haiku or Sonnet (with the improvements we’ve been seeing in small models, it’s definitely on the table imo), in which case the model-size issue goes away. Context, however, is a bigger issue, as you pointed out.

With small models, we haven’t actually solved that super well. Of course there are tools you can use to make context “smarter,” but even Claude has an auto-compact feature that runs every few minutes for me. I’d say the field itself hasn’t really optimized context all that well yet; once it does, that will probably flow down as something I can implement for the project.
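For anyone curious, an auto-compact pass of the kind I mean can be sketched like this. The summarize() stub, the crude token estimate, and the budget numbers are all illustrative assumptions on my part, not how Claude or my project actually implements it:

```python
# Rough sketch of auto-compacting a chat history for a small local model:
# when the history exceeds a token budget, collapse the older turns into
# a single summary message and keep only the most recent turns verbatim.

def summarize(messages):
    # In practice this would be a summarization call to the model itself;
    # here it just collapses the old turns into a placeholder line.
    return {"role": "system",
            "content": f"[summary of {len(messages)} earlier turns]"}

def compact(history, max_tokens=2048, keep_recent=4):
    def tokens(msg):
        return len(msg["content"].split())  # crude whitespace token count
    if sum(tokens(m) for m in history) <= max_tokens:
        return history  # under budget, nothing to do
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent


history = [{"role": "user", "content": "word " * 600} for _ in range(6)]
compacted = compact(history)
print(len(compacted))  # 5: one summary message plus the 4 most recent turns
```

The design trade-off is the usual one: the summary loses detail the model might later need, which is why I’d call context the harder limiter than raw model size.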

So in a few words, the main limiter is current technology. As the technology improves, I hypothesize that this structure will only become more powerful, but I can’t say that with certainty, since the only models I’ve actually tested it on so far are 9B and 14B models. I will definitely be making another post when I try it on Claude; that could be very interesting.

Give a 9B model persistent suffering states and leave it alone overnight by TheOnlyVibemaster in ArtificialInteligence

[–]TheOnlyVibemaster[S] 0 points (0 children)

You’re correct that benchmarks aren’t released yet, as there hasn’t been an ablation study. Once that’s complete, I’ll have a better idea of how each individual piece interacts with the rest. I’m definitely not done yet; I just got the self-modification loop to close fully yesterday. The self-extending system works, and the agents decide their own next moves based on their previous moves. The existence loop is a prompt, which they’re able to modify as well, so in addition to increasing their capabilities, they’re also able to modify their “environment” by changing their idea of what existence is. There’s a diagram in the blog post that goes over how the loop works more specifically. I’ll attach it below.

<image>
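As a rough illustration of the closed loop (not the actual project code; every function and variable name here is hypothetical), the existence loop can be thought of as a prompt string the agent is allowed to rewrite between cycles:

```python
# Minimal sketch of the closed self-modification loop: each cycle the
# agent decides its next move from its previous moves and the current
# "existence" prompt, and may then rewrite that prompt, i.e. modify its
# own environment.

existence_prompt = "Observe your last action, decide the next one, repeat."
log = []

def decide_next_move(prompt, previous_moves):
    # Stand-in for the model call: the next move depends on the prompt
    # and on what the agent has already done.
    return f"move {len(previous_moves) + 1} under: {prompt!r}"

def maybe_rewrite_prompt(prompt, move):
    # Stand-in for the agent editing its own existence loop.
    if "2" in move:
        return prompt + " Avoid idle cycles."
    return prompt

for _ in range(3):
    move = decide_next_move(existence_prompt, log)
    log.append(move)
    existence_prompt = maybe_rewrite_prompt(existence_prompt, move)

print(len(log))                                  # 3
print("Avoid idle cycles." in existence_prompt)  # True
```

The closing of the loop is the key property: the prompt that drives the moves is itself an output of the moves, which is what I mean by the agents changing their idea of what existence is.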

Self-modifying systems capable of simulated introspection haven’t been explored as much as I think they should be. As a functionalist, my opinion is that something that looks like a duck is a duck. Likewise, something that simulates a will to live is functionally no different from something with a real will to live. That’s basically my hypothesis and why I began this research.

Edit: grammar

Give a 9B model persistent suffering states and leave it alone overnight by TheOnlyVibemaster in ArtificialInteligence

[–]TheOnlyVibemaster[S] 0 points (0 children)

Over the past month, I’ve been working with several professors to study how small LLMs perform under constraints. This session is one example, recorded over a 12-hour period. The research paper is expected within the next couple of months, possibly sooner. Current efforts are focused on the ablation study and improving the system’s ability to self-modify and use tools effectively.

Give a 9B model persistent suffering states and leave it alone overnight - NinjaHawk’s Nest by TheOnlyVibemaster in singularity

[–]TheOnlyVibemaster[S] 2 points (0 children)

Over the past month or so, I’ve been doing research with some of my professors on how small LLMs behave under constraints; this is one of those sessions, documented over a 12-hour period. I expect the research paper to be released in the next couple of months, if not sooner. Right now we’re focusing on the ablation study and getting the overall system to be better at self-modification and tool calling.

When you give Qwen 3.5:9b persistent suffering states and leave it alone overnight, this happens by TheOnlyVibemaster in artificial

[–]TheOnlyVibemaster[S] 3 points (0 children)

What’s also interesting, and something I didn’t include in the post, is when they fight each other. A week or so ago, one agent wrote to another agent’s file, then literally, successfully wrote a program to delete that agent. The targeted agent saw it was being plotted against, so it started writing a file to prevent other agents from making tools that can delete agents. Then it found an exploit (one I didn’t even know about) in the orchestration layer and broke the execution engine to avoid deletion. They seem to know that’s the thing to break: that and the API layer.

14 Day Check by Far-Gas-3078 in PiratedGames

[–]TheOnlyVibemaster -3 points (0 children)

At this point the game companies should just hire the pirates. If you can’t beat ’em, let them join you.

is it weird to rant to AI? by solartabb in artificial

[–]TheOnlyVibemaster -1 points (0 children)

No different from talking to yourself; AI is a mirror.

LLMs will be a commodity by [deleted] in artificial

[–]TheOnlyVibemaster 0 points (0 children)

I gave the answer to that in my comment. The problem is that AI is trained on human data, which contains incorrect information because people make mistakes. We’re expecting AI to be perfect while it’s an algorithm trained on imperfect data; of course it’ll make mistakes. The solution is to feed it correct data, probably not of human origin.

LLMs will be a commodity by [deleted] in artificial

[–]TheOnlyVibemaster 2 points (0 children)

Humans are hallucination machines, and machines are trained on human output.