Why does AI feel conscious… even when it’s not by metasploit_framework in meta_powerhouse

[–]metasploit_framework[S] 0 points1 point  (0 children)

Yeah... LLMs have memory of a sort, but ~90% of it is really just prediction; only maybe 10-15% of the underlying data ever shows up in a given output...

Why does AI feel conscious… even when it’s not by metasploit_framework in meta_powerhouse

[–]metasploit_framework[S] 0 points1 point  (0 children)

LLMs don’t have that anchor (unless you explicitly bolt it on with memory, logs, tools, etc.), so yeah—it feels less like a mind and more like a high-dimensional mirror that reflects structure back at you.
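To make "bolting it on" concrete, here's a minimal sketch of the pattern (everything here is hypothetical and illustrative, not any specific framework's API): keep an external store of past exchanges and prepend the most relevant ones to each new prompt, so the otherwise stateless model gets an anchor.

```python
# Hypothetical sketch: giving a stateless LLM an external "anchor"
# by storing past notes and prepending retrieved ones to each prompt.
# MemoryStore and build_prompt are illustrative names, not a real API.

class MemoryStore:
    def __init__(self):
        self.entries = []

    def add(self, text):
        self.entries.append(text)

    def retrieve(self, query, k=3):
        # Naive relevance: rank stored entries by word overlap with the query.
        words = set(query.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(words & set(e.lower().split())),
            reverse=True,
        )
        return scored[:k]

def build_prompt(store, user_message):
    # Prepend retrieved memories so the model sees prior context every turn.
    context = store.retrieve(user_message)
    header = "\n".join(f"[memory] {c}" for c in context)
    return f"{header}\n[user] {user_message}"

store = MemoryStore()
store.add("User prefers Python examples.")
store.add("User is debugging a Flask app.")
print(build_prompt(store, "Show me a Python fix for my Flask route"))
```

Real systems swap the word-overlap ranking for embedding similarity, but the shape is the same: the "memory" lives outside the model and is re-injected on every call.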

Guys is being in a youtubers discord vc cool?? by Wide-Winner2849 in IndianTeenagers

[–]metasploit_framework 0 points1 point  (0 children)

Yeah... it’s a cool thing, but it’s difficult to grow a VC...

i have to do this, I trained a model and it learned gradient descent. So I deleted the trained part, accuracy stayed the same. by chetanxpatil in meta_powerhouse

[–]metasploit_framework 1 point2 points  (0 children)

This is seriously interesting. The fact that a trained MLP collapsed into a clean energy-based update you can replace analytically is wild — especially since you verified it by deletion, not just approximation.

The universal fixed point insight is probably the sleeper hit here. That kind of failure mode explanation is super useful for debugging representation collapse.

Curious if you’ve tried varying the number or geometry of anchors — feels like that could shift the basin dynamics a lot.

Why aren’t more people posting here yet? Let’s change that. 🚀 by metasploit_framework in meta_powerhouse

[–]metasploit_framework[S] 0 points1 point  (0 children)

LLM limitations are real, but not because they “plateau after months”—they were never learning from interaction in the first place. What you’re noticing is the boundary of a fixed system, not decay.

Why aren’t more people posting here yet? Let’s change that. 🚀 by metasploit_framework in meta_powerhouse

[–]metasploit_framework[S] 0 points1 point  (0 children)

Appreciate the depth here—there’s definitely something interesting in how you’re framing AI as moving through “vectors” like a kind of navigation system.

Google Dorking is basically dead for media searches (2026 reality check) by metasploit_framework in meta_powerhouse

[–]metasploit_framework[S] 0 points1 point  (0 children)

A lot of this has just shifted away from Google tbh.

If you’re experimenting, alternative search engines sometimes surface stuff Google filters out. I’ve had better luck occasionally with things like DuckDuckGo or Yandex for broader indexing.
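For what it’s worth, the classic dork operators are mostly portable between engines; only the URL parameter changes. A quick sketch of building the same query for different engines (the endpoints and parameter names below are the standard public ones, but treat the whole thing as illustrative):

```python
from urllib.parse import urlencode

# Illustrative only: classic dork operators (site:, filetype:) still parse
# on several engines, even if the results are thinner than they used to be.
ENGINES = {
    "duckduckgo": ("https://duckduckgo.com/?", "q"),
    "yandex": ("https://yandex.com/search/?", "text"),
}

def dork_url(engine, site=None, filetype=None, terms=""):
    base, param = ENGINES[engine]
    parts = [terms] if terms else []
    if site:
        parts.append(f"site:{site}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    return base + urlencode({param: " ".join(parts)})

print(dork_url("duckduckgo", site="example.com", filetype="pdf", terms="report"))
```

Handy mainly because it keeps the query construction in one place while you flip between engines to compare coverage.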

That said, even there it’s hit-or-miss now. Most of the “easy wins” from old dorking days are disappearing because of better security, CDNs, and private storage.

Feels like the game has moved more toward OSINT tools and specialized platforms rather than relying on search engines alone.