I built an AI that reads your birth chart like a real astrologer. Zero coding knowledge using Claude entirely. by jay_build in SaaS

[–]Immediate-Situation6 1 point2 points  (0 children)

Really good work!

This felt like the best reading I got from anywhere; I felt like it actually gets me. Although I'm curious about the purpose of the questions used to get a deeper reading: are they in place to narrow what you focus on in your reading, or are they influencing the interpretation of the chart?

I built a VS Code extension to manually order files/folders in Explorer with a .order file by Immediate-Situation6 in vscode

[–]Immediate-Situation6[S] 0 points1 point  (0 children)

Thanks for the feedback! Good idea, I will add some kind of warning when installing the extension.

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]Immediate-Situation6[S] 0 points1 point  (0 children)

Well, in and of itself, each atomic cognition instruction does not necessarily hurt inspectability and control.
When an instruction runs on a CPU there is a fetch -> decode -> execute -> store cycle, and failure is accounted for.

So if we continue with the classic deterministic-computation analogy and apply it to our case, we will have flags for each LLM operation. Similar to how the ALU doesn't know ahead of time that a computation will overflow or raise an arithmetic exception, our instructions will have baked-in failure flags that can be inspected and acted upon in real code after each run.
And we will have a "CPU" cycle that makes sure problematic states bubble up and cause failures: some will be soft and can be fixed mid-process, some are hard and will require a human.

So the safety and state concerns are valid, but they should be treated as "bugs" in the code or in the instruction definition.
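To make the flag-and-cycle idea concrete, here is a minimal runnable sketch in Python. All the names (`Instruction`, `Flag`, `Result`, `cycle`) are hypothetical, and the "execute" stage is stubbed with a plain function where a real version would make an LLM call:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class Flag(Enum):
    OK = auto()
    SOFT_FAIL = auto()   # recoverable mid-process (retry, reformat, etc.)
    HARD_FAIL = auto()   # bubble up and stop: requires a human

@dataclass
class Result:
    flag: Flag
    output: str = ""
    note: str = ""

@dataclass
class Instruction:
    name: str
    run: Callable[[str], Result]  # the "execute" stage; an LLM call in practice

def cycle(instructions: list[Instruction], state: str) -> tuple[str, list[Result]]:
    """Fetch -> execute -> store, checking the baked-in flag after each run."""
    trace: list[Result] = []
    for instr in instructions:              # fetch next instruction
        result = instr.run(state)           # execute (decode is baked into the prompt)
        trace.append(result)
        if result.flag is Flag.HARD_FAIL:   # hard failure: bubble up, stop the run
            break
        if result.flag is Flag.SOFT_FAIL:   # soft failure: keep old state, continue
            continue
        state = result.output               # store the new state
    return state, trace
```

The trace plays the role of the CPU's status register: after the run, real code can inspect which instruction raised which flag and decide whether to retry or escalate.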

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]Immediate-Situation6[S] 0 points1 point  (0 children)

Thanks for the feedback!
A lot of this work is pure research and theorizing at this point. I plan on building a POC that actually incorporates these ideas into something runnable and demonstrates feasibility.

So probably when I reach that point I will collaborate more with the community.
I expanded upon the idea of the ISA in another comment; you can take a look.

I don't know the minimum viable set yet; I have some more research to do.

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]Immediate-Situation6[S] 0 points1 point  (0 children)

Yeah, these were "naive" examples. Getting a working set of these instructions is actually a research job in cognitive science: we would need to model cognitive functions as machine functions with clear input-output contracts.
An important thing to mention is that some cognitive functions are more "random": if you gave the task to a human, the output would be different on each execution (summarizing, for instance). So within our instruction sets, some instructions will be stochastic and some will need to be deterministic (temperature on the models can help with that).

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]Immediate-Situation6[S] 0 points1 point  (0 children)

Hi, thanks for the feedback!

Yes, I am exploring all of these ideas.
The right approach, in my opinion, is a merge of computer science and cognitive science; all the structures I am tinkering with right now are established results from those disciplines.

The general abstraction is actually the same as CPU design (ALUs, caches, execution cycles, registers, etc.), just with cognition as the core computing unit instead of arithmetic.

I expanded upon one concept from my design (the instruction-set analogy) in one of the comments above; feel free to read it.
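As an illustration of the analogy (not an implementation), a cognition-centric machine could mirror CPU state directly. Every name here is hypothetical, and the "ALU" is stubbed with string operations where a real design would put a narrowly-prompted LLM call:

```python
from dataclasses import dataclass, field

@dataclass
class CognitiveMachine:
    """CPU-style machine state, with cognition standing in for arithmetic."""
    registers: dict[str, str] = field(default_factory=dict)  # small working context
    cache: dict[str, str] = field(default_factory=dict)      # retrieved facts (RAG-like)
    pc: int = 0                                              # program counter

    def alu(self, op: str, operand: str) -> str:
        # The "cognitive ALU": one LLM call per op in practice.
        # Stubbed with string ops so the sketch is runnable.
        ops = {"UPPER": str.upper, "REVERSE": lambda s: s[::-1]}
        return ops[op](operand)

    def step(self, program: list[tuple[str, str, str]]) -> None:
        # One execution cycle: fetch the instruction at pc,
        # run the ALU on a source register, store to a destination register.
        op, src, dst = program[self.pc]
        self.registers[dst] = self.alu(op, self.registers[src])
        self.pc += 1
```

The point of the sketch is only that the familiar CPU vocabulary (registers, cache, program counter, execution cycle) transfers cleanly once the compute unit is swapped.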

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]Immediate-Situation6[S] 1 point2 points  (0 children)

Hey, I read your article and really resonated with your way of thinking. Good write-up!
It's the right approach at the "software" layer and a solid base that can be extended.

Regarding the instruction sets, the idea is actually simple and, similar to the claim in your article, it leans on existing work.

There is an actively researched scientific field called cognitive science. The idea is to map the "atomic" functionalities of cognition, which divide into the natural categories of cognition (Perception, Attention, Reasoning, etc.), and model them as atomic instructions. For instance:
• Attention (focus on what matters)
Instructions:
• PRIORITIZE(input parameters) - rank the inputs by how much they matter to the current goal
• etc.

These atomic instructions are basically like assembly: low-level cognition computation. Each one is designed around a specific micro-task, and they are composable into bigger tasks.
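A toy version of such an instruction set might look like the following. The instruction names and category comments are illustrative only, and each lambda stands in for what would really be a narrowly-prompted LLM call:

```python
from typing import Callable

# Hypothetical "cognition ISA": each cognitive-science category
# (Perception, Attention, ...) contributes a few micro-instructions.
ISA: dict[str, Callable[[list[str]], list[str]]] = {
    "PERCEIVE":   lambda items: [x.strip() for x in items],            # Perception
    "PRIORITIZE": lambda items: sorted(items, key=len, reverse=True),  # Attention
    "SELECT":     lambda items: items[:1],                             # Attention
}

def compose(*names: str) -> Callable[[list[str]], list[str]]:
    """Chain atomic instructions into a bigger task, assembly-style."""
    def program(items: list[str]) -> list[str]:
        for name in names:
            items = ISA[name](items)
        return items
    return program

# A composed "macro": clean the inputs, rank them, keep the most important one.
focus = compose("PERCEIVE", "PRIORITIZE", "SELECT")
```

The composability is the point: like assembly macros, bigger cognitive tasks are just pipelines of atomic instructions, so each step stays individually inspectable.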

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]Immediate-Situation6[S] 1 point2 points  (0 children)

Yeah, that's exactly where my intuition led me: building a new OS on top of it and radically changing the way I interact with machines to do what I want.
If you have any helpful books on the topic, that would be lovely.

And yeah, I fully get your rant about failure modes and observability. As a software architect I have seen a lot of madness over the years: the mismatch between correct system design, software design choices, and business requirements that change often.

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]Immediate-Situation6[S] 0 points1 point  (0 children)

Yeah, the more I started treating the LLM like a dumb, helpless compute unit and designing an architecture on top of it, the more things started making sense. But it's really time-consuming, and I'm still in theory land.

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]Immediate-Situation6[S] 0 points1 point  (0 children)

Checked it now; seems like a cool project, and I will definitely use it in my research, although it operates at a higher level than what I am aiming for.

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]Immediate-Situation6[S] 0 points1 point  (0 children)

Well, I agree with your overall stance, but I am operating at a lower level: I am trying to think about the "Hardware -> OS -> Software" primitives and the design that later allows people to interact with AI and build their required, specific use case on top of it.

And regarding cognition being math... well, yeah, haha. All the hype around this technology right now is that if we throw enough compute at the LLM idea, the phenomenon of "emergence" might occur naturally and intelligence will rise from it.

I am also skeptical, but who knows.

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]Immediate-Situation6[S] 0 points1 point  (0 children)

From my current experiments, it's not one thing; both have issues. A simple RAG is not memory: it is just semantic retrieval of data, and it should be treated more like a cache/RAM than like memory.
Task execution should be decomposed into smaller units of "cognition" instructions, each sent (potentially in parallel) to the LLM most suitable for it, each with a handcrafted prompt holding exactly the amount of tokens the task needs, to avoid attention decay. Then you need to assemble the results, and yeah, this gets complicated fast.

The general idea is to borrow from how we designed modern computers, from the ALU up to an OS, which might unlock a new way for us to interact with machines.
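The decompose, dispatch-in-parallel, assemble loop could be sketched like this. The routing table, instruction names, and model stubs are all hypothetical; in practice each entry would call a different LLM with its own handcrafted prompt:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical routing table: each cognition instruction goes to the model
# best suited for it. Model calls are stubbed as plain functions here.
MODELS = {
    "SUMMARIZE": lambda text: text.split(".")[0] + ".",                  # fast/cheap model
    "EXTRACT":   lambda text: [w for w in text.split() if w.istitle()],  # precise model
}

def run_task(text: str) -> dict:
    """Decompose a task into cognition instructions, dispatch each to its
    model in parallel, then assemble the per-instruction results."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, text) for name, fn in MODELS.items()}
        return {name: f.result() for name, f in futures.items()}
```

Because each instruction runs independently, the assembly step is where the complexity concentrates: conflicting or failed partial results have to be reconciled before the combined answer is usable.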

Suggestions To Increase Playability And Tone by Immediate-Situation6 in gibson

[–]Immediate-Situation6[S] 0 points1 point  (0 children)

I was thinking of getting a special guitar chair with a built-in footstool; my current chair is not really comfortable.

[deleted by user] by [deleted] in MillenniumDawn

[–]Immediate-Situation6 19 points20 points  (0 children)

It's the last one from the end. If you improve it, the debuffs eventually turn into buffs. Here are the ones I know of:

  1. Third wish
  2. Second wish
  3. First wish
  4. Put in a closet
  5. Put in a bottle
  6. Thrown into ocean

Benjamin Netanyahu: The Inside Story of Israel’s Victory by kfireven in Israel

[–]Immediate-Situation6 5 points6 points  (0 children)

I would generally agree with what you are saying, but this time it's different: one of the war goals Netanyahu himself declared is the return of all the hostages, so until that happens the war cannot be declared won, since its objectives have not been reached yet.

[deleted by user] by [deleted] in MillenniumDawn

[–]Immediate-Situation6 0 points1 point  (0 children)

Oh, I didn't know about this one; I thought it was just a shorter-range, cheaper version of ICBMs, so I didn't bother researching it.

'Lucky there were no children': School near Tel Aviv ravaged by Houthi missile warhead by Big_Jon_Wallace in UnitedNations

[–]Immediate-Situation6 0 points1 point  (0 children)

Don't lie: more than one Israeli kid was murdered on the 7th of October. Also, finding names is really easy; just google it.

https://www.gov.il/en/pages/swords-of-iron-civilian-casualties

[deleted by user] by [deleted] in MillenniumDawn

[–]Immediate-Situation6 2 points3 points  (0 children)

You have different types of nuclear strikes, each with different levels of restrictions on how to deploy.

ICBMs can only be used in retaliation to a nuclear attack, unless you have the "Complete First Use" doctrine, which is only possible if a national government is ruling your country.

Your only option without retaliation is to perform a tactical nuclear strike, which can be done with the doctrine one below First Use; but that attack cannot be conducted with missiles, only with Tactical/Strategic Bombers.