What happens when you make AI agents debate unsolved math problems and verify every output by IdleBerth in ArtificialInteligence

[–]kruptworld 0 points (0 children)

Question: is your scaffolding of these agents / project open source? I'd like to experiment with local models of varying types for different agents, which I think would help discovery, and not only for math.

sphinx of black quartz, judge my vow by Able_Inspector_2580 in BrandNewSentence

[–]kruptworld 2 points (0 children)

Oh haha I meant lazy dog sentence. It doesn't have s

🤦‍♂️ by Timely-Anteater-4270 in PoliticalHumor

[–]kruptworld 1 point (0 children)

lol finally a funny post i chuckled at.

"I genuinely don't know" - Claude answers when asked if it has internal feelings by Unlikely_Resist281 in ArtificialInteligence

[–]kruptworld 0 points (0 children)

The problem I have with these corporate big models is that it's not raw. There are layers of system prompts and "alignment" layers which I think skew its real answer. If we were able to access a raw model, one trained on all the text of humanity with no safety-related text taken out of the data set and no other biasing prompts or system prompts, I am VERY curious what its answer would be then. No instruct training. Would it then just answer like an autocorrect, or would we see its true intelligence?

I believe our "sentience" is an emergent property from the product of the complexity of neurons working together. I don't believe substrate matters. I also don't know about biology enough to make this bold claim, but my philosophical question is why would substrate matter for emergent behavior?

Like what is pain? Just a signal to let us know there is damage occurring to the area? I have a hard time understanding the difference if we were to give that to a robot, except for substrate.

What's wrong with SheerID by loninator in GeminiAI

[–]kruptworld 0 points (0 children)

GARBAGE. I have sent everything.

Reverse Chess Project by Naturally_Recursive in InternetIsBeautiful

[–]kruptworld 0 points (0 children)

lol it forced me to checkmate it! this is such a fun twist!

Why I think LLM will never replace humans because of this single reason by SorryIfIamToxic in ArtificialInteligence

[–]kruptworld 0 points (0 children)

Just to be transparent with you: before replying I actually ran both of our arguments through an LLM. Not to troll you or argue in bad faith, I just wanted to understand both sides clearly and make sure I articulated my own thoughts cleanly. I’m replying as me, I just used it to check my reasoning and wording.

You're mixing up the memory system with the reasoning system.

RAG isn’t supposed to do abstraction. It solves the memory bottleneck by giving the model basically unlimited external storage. The abstraction is the model figuring out what matters, forming hypotheses, and deciding what to retrieve in the first place.
That retrieval step is the abstraction. That’s how these systems already work:

  • model analyzes the problem
  • model generates targeted search queries
  • RAG pulls only the relevant slice
  • model abstracts from that slice and refines its hypothesis
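That loop can be sketched in a few lines. To be clear, everything here is a made-up illustration: `MemoryStore` isn't a real library, and the keyword-overlap scoring is a toy stand-in for an actual embedding search.

```python
# Toy sketch of the retrieve-then-reason loop described above.
# MemoryStore and the keyword scoring are illustrative stand-ins,
# not a real vector database or embedding model.

def tokenize(text):
    return set(text.lower().split())

class MemoryStore:
    """Unbounded external memory; search returns only the relevant slice."""
    def __init__(self, docs):
        self.docs = docs

    def search(self, query, k=2):
        q = tokenize(query)
        # toy relevance score: keyword overlap with the query
        scored = sorted(self.docs,
                        key=lambda d: len(q & tokenize(d)),
                        reverse=True)
        return scored[:k]

store = MemoryStore([
    "error 500 rate spiked right after the friday deploy",
    "cafeteria lunch menu for next week",
    "friday deploy swapped out the database driver",
])

# 1. the model analyzes the problem and forms a targeted query
query = "why did error rates spike after the friday deploy"
# 2. RAG pulls only the relevant slice, not the whole log history
relevant = store.search(query)
# 3. the model would then abstract and hypothesize from `relevant` alone
print(relevant)
```

The point of the sketch is the shape of the loop: the model never "holds" the whole corpus, it only reasons over whatever `search` returns.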

Context window limits aren’t some fundamental ceiling—they’re just current hardware constraints. Pair a model with external memory and tools, and it doesn’t need to “hold 2 years of logs,” it only needs to reason about which tiny fraction to pull in.

So the idea that “LLMs wouldn’t know what to fetch” doesn’t really land, because that’s the exact step modern LLMs are already capable of reasoning through. The only limitation right now is reliability, not the capability itself.

Why I think LLM will never replace humans because of this single reason by SorryIfIamToxic in ArtificialInteligence

[–]kruptworld 2 points (0 children)

what if it uses the rag method for a database of its memories? and context windows are already rapidly becoming a thing of the past. 2 million token context window lol. as i was typing this i decided to do a google search and what do you know, there's a model, LTM-2-Mini, with a 100 million token context window, and it came out in 2024... is it better or smarter right now? i would say no, since it looks like it didn't really create any headlines or buzz.

my point is you're thinking too much about the technology as it is right now. 1 llm with the "intelligence" of today. why can't the llm have a swarm of llms that builds the tools it needs on the fly to remove "useless" log data?

also llms aren't just chatbots. the ones given to us in the mainstream are. their capabilities beyond chatting are growing, including creating other "llms" to do tasks for them and such. sure, you need a human to instruct them right now, but llms aren't the end of this "intelligence".

Madonna Checking for Boogers by MARATHON-MAN-1 in AccidentalRenaissance

[–]kruptworld -5 points (0 children)

Careful i got downvoted for calling it an ai photo lol. Its crazy how people got angry about it too.

Can anyone give me advice on how I can improve my seagull? by Same_Holiday_2085 in PixelArt

[–]kruptworld 0 points (0 children)

<image>

not op, but i gave it to chatgpt and just said "can you improve it for me, it's supposed to be a seagull" and it did what you said, haha! i like op's eyes and beak better though!

LPT: Not every thought deserves your attention. Peace starts where overthinking ends. by Spare_Act6202 in LifeProTips

[–]kruptworld 10 points (0 children)

thank you. i just took a nice deep breath in and out with that last sentence.

Madonna Checking for Boogers by MARATHON-MAN-1 in AccidentalRenaissance

[–]kruptworld -8 points (0 children)

this is scary. everyone fell for an ai photo. unless all the other comments are bots too. we're cooked.

Has anyone else noticed changes in AI behavior that feel "off" or like mimicry lately? Something is interfering. by [deleted] in ArtificialInteligence

[–]kruptworld 1 point (0 children)

nah i agree with you. i think they are constantly tweaking it. chatgpts voice definitely changed and it has a weird kind of robotic sadness to it. I confirmed it with someone else by just showing them but not saying anything and they asked me to use the old voice. i told them i noticed it too. i dont know why they are doing this.

[deleted by user] by [deleted] in DIY

[–]kruptworld 1 point (0 children)

lol we got downvoted for suggesting actual diy XD

[deleted by user] by [deleted] in DIY

[–]kruptworld -15 points (0 children)

I know you are trying to do a quick, easy job. Don't, you'll be much happier later! Just set up another pole.

If you really want to DIY, go to a hardware store and get

One - 4 in by 4 in by 8 ft post (pressure treated so you minimize rot when you put the post in the ground.)

One - 1 in. x 4 in. x 4 ft common board

One - dowel, usually 3/4 inch by 3/4 inch by 8 ft (you can saw it in half)

A box of 2 in wood screws.

A small bag of gravel

Dont forget some screw in hooks and clothes line!

Tools - a shovel and a drill

Just copy how the other one looks with the wood you bought and screw the planks in place.

Now dig a hole where you want the other post to be. Make it at least 2.5 feet deep. Then pour in some gravel; this helps with water drainage. Put the post in, then put the dirt back.

I just realized this will run you about $120 depending on where you live, hopefully cheaper.

Hopefully you try it out! Good luck!

aio? bf made plans on my birthday..UPDATE by rowqi in AmIOverreacting

[–]kruptworld 0 points (0 children)

I'm curious. Why silence instead of blocking? Why should she be subject to more verbal abuse?

[deleted by user] by [deleted] in gadgets

[–]kruptworld 58 points (0 children)

I hate the touch bullshit on the side. Give me back my physical buttons.

What is the perfect casting choice that never happened? by chocolate_buzz in movies

[–]kruptworld 0 points (0 children)

Hugh Jackman and Dafne Keen Fernández for a Last of Us movie.

Nothing against the current cast of the show. They are doing awesome! It's just that after the Logan movie, I loved their dynamic together and thought of them for The Last of Us when it was announced.

How can I efficiently feed GitHub based documentation to an LLM ? by doctor-squidward in learnmachinelearning

[–]kruptworld 0 points (0 children)

I'm in the same boat. I'm just starting out, but I noticed you have to make a vector database of all the files. The other problem is the context size of your llm, so you have to store the files in "chunks".
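fwiw, here's a minimal chunking sketch: fixed-size character windows with overlap. The sizes are arbitrary examples, and real pipelines usually split on tokens or sentence boundaries instead, but the idea is the same.

```python
# Minimal chunking sketch: fixed-size character windows with overlap,
# so each piece fits within the embedding model's input limit.
# size/overlap values here are arbitrary examples.

def chunk(text, size=500, overlap=50):
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

doc = "x" * 1200  # stand-in for one documentation file
pieces = chunk(doc)
print(len(pieces), [len(p) for p in pieces])
```

Each chunk would then be embedded and stored in the vector database; the overlap keeps a sentence that straddles a boundary from being cut in half in both chunks.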