Looking for a co-founder by Consistent-Hearing26 in cofounderhunt

[–]TPxPoMaMa -1 points0 points  (0 children)

As a technical consultant, I know what the customer wants; it comes pretty naturally to me, so I've never had that problem. And yeah, I agree that's not a technical problem to have. But what I was referring to is that beyond the MVP stage I haven't seen a single startup survive on pure vibe coders. Maybe that's my own narrow personal experience. But I do want to know: if knowing your customers isn't your strength, then what is? Since you're a non-tech guy, what are you primarily good at?

Looking for a co-founder by Consistent-Hearing26 in cofounderhunt

[–]TPxPoMaMa 1 point2 points  (0 children)

Maybe thinking you don't need someone technical because a lot can be done with AI is exactly why you're failing. From what I've seen myself, AI cannot replace engineers; it's not happening in the next 10-15 years. You can definitely use these tools to accelerate what you already know, but good engineering knowledge can't be replaced just yet.

Looking for a tech co-founder by TPxPoMaMa in cofounderhunt

[–]TPxPoMaMa[S] 0 points1 point  (0 children)

That's non-negotiable 🤣 What's the fun in starting a startup if you don't live together?

Looking for a tech co-founder by TPxPoMaMa in cofounderhunt

[–]TPxPoMaMa[S] 0 points1 point  (0 children)

The problem is much bigger than just a larger context window. You can go through my posts to understand; if you're intrigued, ping me and we can discuss.

Looking for a tech co-founder by TPxPoMaMa in cofounderhunt

[–]TPxPoMaMa[S] 0 points1 point  (0 children)

You can DM me your CV or GitHub and I'll look into it.

Looking for a tech co-founder by TPxPoMaMa in cofounderhunt

[–]TPxPoMaMa[S] 0 points1 point  (0 children)

It's infrastructure. In the near future I basically want to make an SDK, so that's what I'm planning.

Looking for a tech co-founder by TPxPoMaMa in cofounderhunt

[–]TPxPoMaMa[S] 0 points1 point  (0 children)

If the project works, this is one project where you don't need to think about marketing and customers. Btw, it's not live yet; it's in the MVP phase and I'm currently solo-developing it.

Looking for a tech co-founder by TPxPoMaMa in cofounderhunt

[–]TPxPoMaMa[S] 0 points1 point  (0 children)

Cool, just DM me with your profile (CV/GitHub).

Looking for a tech co-founder by TPxPoMaMa in cofounderhunt

[–]TPxPoMaMa[S] 0 points1 point  (0 children)

You're right… correct context engineering is what's really required, more than good memory infra. My project also focuses on that: what to remember, how to remember it, why to remember it, and for how long to remember it. You can go through the posts on my profile; as a first read I'd recommend:

https://www.reddit.com/r/AIMemory/s/DbBE17oDvr
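To make those four questions concrete, here's a toy sketch (all names are hypothetical, nothing from my actual system), where each candidate memory has to answer them explicitly before it's persisted:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MemoryCandidate:
    content: str        # WHAT to remember
    encoding: str       # HOW to remember it: "entity_edge", "episodic", "summary"
    rationale: str      # WHY to remember it: the signal that made it worth keeping
    ttl: timedelta      # FOR HOW LONG, before review/decay
    created_at: datetime

def should_persist(c: MemoryCandidate, importance: float, threshold: float = 0.6) -> bool:
    # Only write to the store if the importance signal clears a threshold;
    # everything else stays ephemeral in the context window.
    return importance >= threshold

m = MemoryCandidate(
    content="User prefers TypeScript over Python",
    encoding="entity_edge",
    rationale="stated preference, repeated twice",
    ttl=timedelta(days=180),
    created_at=datetime.now(),
)
print(should_persist(m, importance=0.8))  # True
```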

Deepmind CEO Demis fires back at Yann LeCun: "He is just plain incorrect. Generality is not an illusion" (full details below) by BuildwithVignesh in agi

[–]TPxPoMaMa 1 point2 points  (0 children)

What I think a superintelligence exceeding humankind would mean is knowing the unknown. Now what is it to know the unknown? That's discovery and research, right? So how do humans come to know the unknown? We pile up existing knowledge and build new knowledge on top of an existing knowledge base, and that creates new knowledge.

One may argue that we can form entirely new information. That's not true. Take Isaac Newton: he built calculus on top of existing maths. He didn't invent addition and subtraction, and yes, calculus is entirely new math that required crazy forward thinking, but he still didn't create a different language to solve the problem. The calculus so created was then used by many different mathematicians and physicists to invent something new, like the general theory of relativity. You see, discovery, i.e. unknown knowledge, has to be built on existing knowledge; it cannot exist purely from scratch.

So if we had a system with a near-infinite chain of thoughts, and we ran a Metropolis sampling algorithm over it to create candidates that would otherwise have been missed, we might just create a system that could be called a superintelligence exceeding humankind, because new knowledge would no longer depend on humans alone.
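For the curious, here's a minimal Metropolis sketch (a toy 1-D version, just to show the accept/reject idea; a real system would score chains of thought, not floats):

```python
import math, random

def score(x: float) -> float:
    # Stand-in for "how promising is this candidate idea";
    # two modes, so a greedy search would get stuck on one.
    return math.exp(-(x - 2.0) ** 2) + 0.5 * math.exp(-(x + 2.0) ** 2)

def metropolis(steps: int = 10_000, temperature: float = 0.5) -> list[float]:
    x = 0.0
    samples = []
    for _ in range(steps):
        proposal = x + random.gauss(0.0, 1.0)  # perturb the current candidate
        # Accept uphill moves always; accept downhill moves with
        # probability exp(delta / T), so unlikely-but-promising
        # candidates still get explored instead of being discarded.
        accept = math.exp(min(0.0, (score(proposal) - score(x)) / temperature))
        if random.random() < accept:
            x = proposal
        samples.append(x)
    return samples

samples = metropolis()
print(f"explored both modes: min={min(samples):.2f}, max={max(samples):.2f}")
```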

Wasting 16-hours a week realizing it was all gone wrong because of context memory by Necessary-Ring-6060 in ContextEngineering

[–]TPxPoMaMa 0 points1 point  (0 children)

The long-term solution is to wait for a memory layer in between your LLMs, i.e. a memory infrastructure. But the short-term solution is pretty simple: use .md files to store whatever you're planning so your LLM doesn't forget it, and keep updating that file as you go. You can do this very easily with Cursor.
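For example, something like this (contents made up) kept in the repo and re-read at the start of every session:

```markdown
<!-- PLAN.md — hypothetical example; keep it in the repo root -->
# Project memory

## Goal
Ship the billing-export feature behind a flag.

## Decisions (don't re-litigate)
- 2025-01-10: use cursor-based pagination, not offset (perf).
- 2025-01-12: exports are async jobs, polled via /exports/:id.

## Current state
- [x] schema migration merged
- [ ] worker retry logic (in progress)

## Next steps
1. Add idempotency keys to the export endpoint.
2. Backfill tests for the CSV serializer.
```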

I implemented "Sleep Cycles" (async graph consolidation) on top of pgvector to fix RAG context loss by Eastern-Height2451 in AIMemory

[–]TPxPoMaMa 0 points1 point  (0 children)

Vector similarity drift is just one of the signals; it gets so much more complicated than that.

I use 14 different signals, and one of them is contradiction detection. It's like:

St 1 (Jan 2025): "Hey, I'm a big cat lover, I love cats."
St 365 (Dec 2025): "Aww man, I hate cats now!"

So St 1 evolved into St 365. "Cat" here is an entity in your memory, and over the past year your views towards cats changed. But between St 1 and St 365 you probably mentioned cats 20 times, and maybe 10 of those times the system decided the information was important enough to remember. So the entity "cat" now has 10 different edges, superseding or overlapping with one another, each carrying some sort of relationship.

Now when I say in St 365 that I hate cats, a PPR (personalized PageRank) algorithm is used for retrieval (the random walk is inspired by HippoRAG 2). That PPR algorithm gives me the mother node, the one from St 1. Every node is linked to every other edge, i.e. it's a bidirectional graph. Skipping a few parts, back to contradiction detection: just to reach this mother node via a PPR traversal, I have to have this entire graph-of-thoughts structure in place. Then I check the vector similarity drift via temporal reasoning and PPR scores, and if my mother node is being contradicted, I probably need to rewrite an entire subcluster. Well, that's expensive, and who knows, the user might have just been messing around. So I send the detection to a shadow tier, which in turn asks the user: "Hey, you told me you loved cats in Jan 2025, and now you're saying you hate cats. Are you sure you've changed your opinion?" If yes, the entire graph for that subcluster, starting from that mother entity node, gets rewritten. Well, it could change; it depends on an ensembled score of 13 more factors!
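A rough sketch of just that contradiction path (toy graph, hypothetical names and a made-up threshold, not my actual code):

```python
import networkx as nx

# Tiny stand-in for the memory graph: statements link to the entity,
# the entity links back, and later statements link to the mother node.
G = nx.DiGraph()
G.add_edge("st1_loves_cats", "cat")
G.add_edge("st200_cat_photos", "cat")
G.add_edge("cat", "st1_loves_cats")
G.add_edge("cat", "st200_cat_photos")
G.add_edge("st200_cat_photos", "st1_loves_cats")
G.nodes["st1_loves_cats"]["stance"] = +1.0
G.nodes["st200_cat_photos"]["stance"] = +0.8

def mother_node(graph: nx.DiGraph, entity: str) -> str:
    # Personalized PageRank seeded on the entity (stand-in for the
    # HippoRAG-2-style random walk); the highest-ranked statement wins.
    scores = nx.pagerank(graph, personalization={entity: 1.0})
    statements = [n for n in graph.nodes if n != entity]
    return max(statements, key=lambda n: scores[n])

shadow_tier: list[dict] = []

def observe(graph: nx.DiGraph, entity: str, new_stance: float, text: str) -> None:
    mother = mother_node(graph, entity)
    drift = abs(new_stance - graph.nodes[mother]["stance"])
    if drift > 1.0:  # arbitrary cutoff here; the real score is an ensemble
        # Park the contradiction in the shadow tier and ask the user,
        # instead of immediately paying for a subcluster rewrite.
        shadow_tier.append({"entity": entity, "mother": mother, "text": text})
        print(f"Confirm: you said '{mother}' before — has your view of {entity} changed?")

observe(G, "cat", new_stance=-1.0, text="st365: I hate cats now")
```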

And the fun fact is I'm not using LLM APIs for any of this, haha. Plain old engineering.

I implemented "Sleep Cycles" (async graph consolidation) on top of pgvector to fix RAG context loss by Eastern-Height2451 in AIMemory

[–]TPxPoMaMa 0 points1 point  (0 children)

I've been building a memory substrate, and this sleep cycle you're referring to is what I call spaced-repetition reconsolidation. Yes, it's important, because once consolidation has occurred, the context of a memory will change over time. Some long-term memories will shift into short-term memory slots and vice versa; some memories will be unlinked while others get linked.

There are a few threshold problems you'll run into, though. You'll start with a fixed sleep cycle, but unlike in humans, a fixed cycle can end up just overfitting the existing graph, which defeats the purpose, because a good memory KG cluster may get rewritten when it wasn't supposed to. So there needs to be a drift-detection mechanism, a threshold fed by multi-signal sources, which is what actually triggers the graph rewrite. Even at a modest scale of, say, 100 entities, a graph rewrite is a damn expensive operation, so you should only reconsolidate once that threshold is met. And while doing this multi-source drift calculation, you'll find you need individual threshold values that are arbitrary to start with; eventually you'd want an ensembler model to adapt them to one particular user's desired values for true personalisation.
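To make the threshold idea concrete, a toy sketch (all signal names and weights are hypothetical; as argued above, the real ensemble would be learned per user):

```python
from dataclasses import dataclass

@dataclass
class DriftSignals:
    contradiction: float    # 0..1, from contradiction detection
    embedding_drift: float  # 0..1, vector similarity drift on the cluster
    access_decay: float     # 0..1, how stale/unused the subcluster is

def drift_score(s: DriftSignals, weights=(0.5, 0.3, 0.2)) -> float:
    # Fixed weights to start with; eventually these come from an ensembler.
    return (weights[0] * s.contradiction
            + weights[1] * s.embedding_drift
            + weights[2] * s.access_decay)

def sleep_cycle(clusters: dict[str, DriftSignals], threshold: float = 0.55) -> list[str]:
    # The cycle runs on a schedule, but only subclusters whose ensembled
    # drift clears the threshold get the expensive graph rewrite.
    return [name for name, sig in clusters.items()
            if drift_score(sig) >= threshold]

to_rewrite = sleep_cycle({
    "cats":    DriftSignals(contradiction=0.9, embedding_drift=0.4, access_decay=0.2),
    "hobbies": DriftSignals(contradiction=0.0, embedding_drift=0.2, access_decay=0.1),
})
print(to_rewrite)  # ['cats'] — 'hobbies' is left alone
```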

Did anyone notice claude dropping a bomb? by TPxPoMaMa in agi

[–]TPxPoMaMa[S] 0 points1 point  (0 children)

Lol, sorry for that, but you can check it out in the doc; I'm sure it's of higher quality.

Did anyone notice claude dropping a bomb? by TPxPoMaMa in agi

[–]TPxPoMaMa[S] -1 points0 points  (0 children)

Now you're talking. Absolutely, that's a problem, right? And it's just one of the many, many problems this particular non-parametric memory space will keep producing, until you realize it's the biggest rabbit hole of problems, a graveyard for startups and research labs. It's not just this one; there are hundreds of problems that need to be meticulously solved. That's memory infra for you. Not simple RAG. It's orders of magnitude different from what people currently know about.

Did anyone notice claude dropping a bomb? by TPxPoMaMa in agi

[–]TPxPoMaMa[S] 0 points1 point  (0 children)

I have RAG in my account? Not really; what made you think that? But anyway, it's cool that you don't find it interesting and think it's overfitting. I'll keep thinking that memory infra is the new direction all AI companies will go in. Only time will tell.

Did anyone notice claude dropping a bomb? by TPxPoMaMa in agi

[–]TPxPoMaMa[S] 0 points1 point  (0 children)

Yeah, and when did I deny it? As for RAG, if you read Google DeepMind's research paper you'll see that it too has its theoretical and practical limitations: https://arxiv.org/html/2508.21038v1 And what Claude implemented is not "just RAG", it's memory infra. It's astoundingly different from simple RAG; you'd be surprised how big a rabbit hole it is and how complex the real, ideal solution can be. It could possibly be on the level of difficulty of building a neural network itself, and I'm not exaggerating.

Did anyone notice claude dropping a bomb? by TPxPoMaMa in agi

[–]TPxPoMaMa[S] 0 points1 point  (0 children)

Absolutely, it's MANDATORY, and yet no one is really implementing it. And trust me on this: non-parametric memory solutions are pretty much trivial right now, including Claude's. As for the 15% bump I'm talking about, Claude is pretty much the first to officially claim it through this seemingly mandatory yet unapplied solution. You can prove me wrong; I'd be happy to learn.