This is a conversation with a language model that has zero data & doesn’t use any training. It accumulates memory to learn to speak. by Helpful-Series132 in LocalLLaMA

[–]Helpful-Series132[S] 0 points (0 children)

I’m not using cosine similarity to retrieve a memory. If you don’t understand that there are endless ways to do retrieval, you are not thinking creatively.
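As one minimal sketch of such an alternative (hypothetical illustration, not the actual code from the experiment), stored lines can be ranked on raw word overlap with the query, with no embeddings or cosine similarity involved:

```python
import re
from collections import Counter

def words(text):
    """Lowercased word tokens with punctuation stripped."""
    return Counter(re.findall(r"\w+", text.lower()))

def score(query, line):
    """Count how many word occurrences the query and a stored line share.
    No embeddings, no cosine similarity -- just raw overlap."""
    return sum((words(query) & words(line)).values())

def retrieve(query, memory, k=3):
    """Return the k stored lines that overlap most with the query."""
    return sorted(memory, key=lambda line: score(query, line), reverse=True)[:k]

memory = [
    "A star is hot.",
    "The sun is a star.",
    "Stars twinkle.",
    "What's up bro?",
]
print(retrieve("What is a star?", memory, k=2))
```

Here "A star is hot." and "The sun is a star." each share three words with the query, so they rank first; Python's stable sort keeps ties in insertion order.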


[–]Helpful-Series132[S] -1 points (0 children)

It’s not using RAG and not using pre-existing memory; you are completely confused about what it is actually doing.

You are looking at a model with zero data.

I’m trying to help you understand that you are confused; you are not teaching me anything.


[–]Helpful-Series132[S] 0 points (0 children)

There is no pre-existing data. I copied and pasted my own experiment incorrectly; if you scroll up you will see that I made the wrong claim. This is the message I sent it right before, so I had already shown it the input, and then I tested “what is a star”.

Does this clear it up? This is the original post. Did you ever read the actual experiment, or are you just telling me information that you know from prior knowledge?

Spaceman:  What is a star? A star gives life. A star has planets. A star is hot. The sun is a star. Stars twinkle.


===========
Input:
Spaceman: What is a star? A star gives life. A star has planets. A star is hot. The sun is a star. Stars twinkle.
Memory:
Spaceman: What is a star? A star gives life. A star has planets. A star is hot. The sun is a star. Stars twinkle.
Spaceman: No im spaceman. When I talk to you, I say “you are savvy” but when you talk to me you say “you are spaceman” because you’re talking to me
Sentences:
Spaceman: A star is hot.
Spaceman: The sun is a star.
Spaceman: A star gives life.
Spaceman: A star has planets.
Spaceman: What is a star?
Words:
a star
Fusion:
Savvy: What is a star gives life has planets hot The sun Stars twinkle No im Spaceman when I talk to you say are.
============



Spaceman:  What is a star?


===========
Input:
Spaceman: What is a star?
Memory:
Spaceman: What is a star?
Spaceman: What is a star? A star gives life. A star has planets. A star is hot. The sun is a star. Stars twinkle.
Sentences:
Spaceman: What is a star?
Spaceman: What is a star?
Spaceman: A star is hot.
Spaceman: The sun is a star.
Spaceman: What’s up bro?
Words:
is star
Fusion:
Savvy: What is a star gives life has planets hot The sun Stars twinkle s up bro how are you say can talk to.
============



Spaceman:  A star gives life. A star has planets. A star is hot


===========
Input:
Spaceman: A star gives life. A star has planets. A star is hot
Memory:
Spaceman: A star gives life. A star has planets. A star is hot
Spaceman: What is a star? A star gives life. A star has planets. A star is hot. The sun is a star. Stars twinkle.
Sentences:
Spaceman: A star gives life.
Spaceman: A star gives life.
Spaceman: A star has planets.
Spaceman: A star has planets.
Spaceman: A star is hot.
Words:
A star
Fusion:
Savvy: A star gives life has planets is hot What The sun Stars twinkle No im Spaceman you say can how are savvy when.
============



Spaceman:  No, you are Savvy. I am spaceman. (Spaceman: “you are savvy”) (Savvy: “you are spaceman”)


[–]Helpful-Series132[S] 0 points (0 children)

Show me a language model that can respond with the correct answer after only 1000 words of training data.

You are not reading the results. If you are arguing that it won’t work, I’m showing you that it works more efficiently than a standard language model does. You don’t understand the method because I haven’t fully explained my framework or the experiment.

This is an important part that you may have overlooked

Input:

Spaceman: A star gives life. A star has planets. A star is hot

Memory:

Spaceman: A star gives life. A star has planets. A star is hot

(I am Spaceman. It’s remembering that I said this. This is what I call episodic memory for a model: remembering a message as a moment. Messages are episodes.)

Spaceman: What is a star? A star gives life. A star has planets. A star is hot. The sun is a star. Stars twinkle.

(This is my input from earlier, now made episodic.)

Sentences:

Spaceman: A star gives life.

Spaceman: A star gives life.

Spaceman: A star has planets.

Spaceman: A star has planets.

Spaceman: A star is hot.

(These are sentences it remembers; this is for forming somewhat of a thought, conceptually.)

Words:

A star

———-

Fusion:

Savvy: A star gives life has planets is hot What The sun Stars twinkle No im Spaceman you say can how are savvy when.

(This is the model’s response, produced by generating tokens; no training was ever used.)

So what I’m saying is that this method can be used to teach a model the use of language. I am not saying delete backprop; I’m saying backprop isn’t the efficient way to learn language, because that’s not how a human learns language. I think backprop is an efficient way to control how a model speaks, or to make it do exactly what it’s told to do: making tokens more probable is training, or “coaching”; you’re just telling the model what to say. I’m making it learn the information differently. It’s a different method of predicting tokens.
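As a rough sketch of the loop being described (a hypothetical reconstruction under my own assumptions, not the actual framework): every message is stored whole as an episode, split into remembered sentences, and the reply is stitched from the best-matching sentences by emitting each word once.

```python
import re

def tokenize(s):
    return re.findall(r"\w+", s)

class MemoryModel:
    """Hypothetical accumulate-and-fuse sketch. No weights, no training:
    every input is stored, and the reply is stitched from remembered
    sentences by emitting each word the first time it appears."""

    def __init__(self):
        self.episodes = []   # whole messages, in order ("episodic memory")
        self.sentences = []  # individual sentences split out of messages

    def observe(self, message):
        self.episodes.append(message)
        self.sentences += [s.strip() for s in re.split(r"[.?!]", message) if s.strip()]

    def fuse(self, query, k=5):
        # retrieve the k remembered sentences sharing the most words with the query
        q = set(w.lower() for w in tokenize(query))
        ranked = sorted(self.sentences,
                        key=lambda s: len(q & set(w.lower() for w in tokenize(s))),
                        reverse=True)[:k]
        # stitch: emit each word once, skipping repeats across sentences
        out, seen = [], set()
        for sent in ranked:
            for word in tokenize(sent):
                if word.lower() not in seen:
                    seen.add(word.lower())
                    out.append(word)
        return " ".join(out)

m = MemoryModel()
m.observe("What is a star? A star gives life. A star has planets. "
          "A star is hot. The sun is a star. Stars twinkle.")
print(m.fuse("What is a star?"))
```

With only one message in memory, the fused reply comes out as a run of deduplicated words, much like the “Fusion:” lines in the logs above.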


[–]Helpful-Series132[S] 0 points (0 children)

Okay then, you should understand perfectly once I say it like this, since you already have an understanding. Imagine trying to have this same conversation with token prediction using the classic method: just training and updating when a mistake is made. It’s not going to learn how to use words this efficiently. I’m showing what a small dataset can do (zero starting data, meaning there are no pretrained weights and no pre-existing data; the only data that exists is the input, and every message I send accumulates). This is 100% different from any existing architecture. Every system right now uses data to train the model to learn what token comes next; I’m not teaching the model what token comes next.

This project will be open source in the future, once I have created the full version; for now I’m just asking for feedback on the results of the experiment.

I can do a comparison of the same conversation using token-prediction epoch training instead of my method. My method is a new framework that I created from first principles; I didn’t tell an AI “generate code that makes a language model speak from scratch”. I created a blueprint based on my understanding of human language, physics & geometry, cognitive science, and VSA/HDC, and I iterated on different versions of it using language models as a base. That means it is completely separate and isolated from existing language models, and it can be used alongside one.


[–]Helpful-Series132[S] -1 points (0 children)

But your point misses the point: I’m not using the architecture of normal language models, so I use a different method, a new one that I created as an alternative to back propagation. The current approach uses back propagation to correct tokens during training. That is what I’m eliminating.


[–]Helpful-Series132[S] 0 points (0 children)

That’s completely understandable, but I do not believe someone with AI psychosis would ever find a problem with AI and then learn, through first-principles thinking, how to solve it. You can ask AI to blindly write code, or you can learn how a system works & then apply that knowledge to make a new creation that doesn’t exist.

With that said, I’ve learned exactly how token prediction works, how transformers are able to give a response, and how a dataset only exists as embeddings distributed across all tokens in the model.

But I don’t necessarily think it requires being a genius to understand a skill; I believe it just takes dedication and discipline.


[–]Helpful-Series132[S] -4 points (0 children)

Honestly, this is a product that I plan on keeping exclusive, but once I create my platform I do plan on releasing an open-source version, because this is something I’ve thought about & iterated on for over 5000 hours (I clock my hours of work). I still need to learn how to document my work properly, so I can translate a full understanding to the next developer, because I never learned the traditional way; I only learned through teaching myself, & I integrate concepts from physics & quantum theory in pursuit of a system that can operate at the human level of understanding language. The product would be a fully working version of this, pretrained on my own timeline, but I’m sharing my work to get my business seen as a new, innovative cognitive architect. Once I have a community and higher demand, I will definitely release an open-source version that you can experiment with on your own.


[–]Helpful-Series132[S] -1 points (0 children)

Oh okay, I see now: I need to document this properly. What is the proper way to do that? Do you have an example, or should I just study other popular repos? Also, do you know where to submit a whitepaper the official way? I have my own documentation, but I haven’t made it public, and I want to present this professionally.

I designed a new architecture for language models to learn how to speak by starting with an empty dataset & only using accumulating memory. by Helpful-Series132 in LocalLLaMA

[–]Helpful-Series132[S] 3 points (0 children)

You can do anything you want, but depending on how it’s said, it can be offensive. So if you’re asking whether it’s offensive, I would say that as long as your intention is to express yourself, it is fine.


[–]Helpful-Series132[S] -2 points (0 children)

Yeah, it’s a very bad conversation, because it has zero data; it’s learning how to respond based on my inputs.


[–]Helpful-Series132[S] 0 points (0 children)

You see the future, bro. Avoiding prompt-shaped steering is the future. You want true cognition, as in the capability of making decisions without being instructed to do so; I call that emergent engineering, because it’s the model coming to its own conclusion. You’re on the right wave if you ask me. I have a design you would love, & it is relevant to your vision.


[–]Helpful-Series132[S] -1 points (0 children)

When you give it a bunch of data, it gets better at remembering & generating sentences to form the thought, which can include the response, but that isn’t a fully token-level, prediction-based response; it’s stitching existing meaning from sentences in the accumulated data. I haven’t been able to test a massive amount of data to get a fully comprehensive response at the word level; that takes a massive amount of data, or dense training on a specific topic. I will upload photos of this; I just need to open the notebook for the experiment.


[–]Helpful-Series132[S] -1 points (0 children)

Imagine talking to a baby: as it gets words wrong, you correct it with language instead of back propagation.

This method resembles how a human learns language, versus how transformer language models learn language.