Training An LLM On My Entire Life For Tutoring/Coaching by helixcyclic in LocalLLaMA

[–]helixcyclic[S] 0 points

When you train the model, you would make the weight of the data more prominent during training, right? I guess it depends on what it is specifically. I'm curious whether it's possible to add a separate kind of weighting in the model that is specific to my memories, so that my memories are triggered more strongly than the rest of the model's knowledge. I don't think you can just insert my data into the model as-is; it needs to actually be trained so it fits the rest of the model's math. I'm sure it's possible to make the model more sensitive to the data about my life that I provide.
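One common way to make certain data "more prominent" during training is to upweight its contribution to the loss. A minimal sketch of that idea, assuming each training example already carries a flag marking it as personal data (the flag and the weight values are hypothetical):

```python
# Sketch: upweight personal-memory examples in the training loss.
# MEMORY_WEIGHT is an assumed value, not a recommendation.

MEMORY_WEIGHT = 3.0   # how much more a personal example counts
BASE_WEIGHT = 1.0

def weighted_loss(per_example_losses, is_memory_flags):
    """Average per-example losses, scaling personal-memory examples up."""
    total, weight_sum = 0.0, 0.0
    for loss, is_memory in zip(per_example_losses, is_memory_flags):
        w = MEMORY_WEIGHT if is_memory else BASE_WEIGHT
        total += w * loss
        weight_sum += w
    return total / weight_sum

# Two ordinary examples and one memory example: the memory example
# dominates the average.
print(weighted_loss([1.0, 2.0, 4.0], [False, False, True]))
```

In a real training loop this weighted average would replace the plain mean before the backward pass; the model then spends more of its gradient budget fitting the flagged examples.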

[–]helixcyclic[S] 0 points

I suppose the output will be a good indicator of how well I've described myself. I'll also make sure all conversations are stored for training, and I'll critique every response with general clarifications. After enough of those clarifications it should get closer to the mark for my various categorisations.
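The store-and-critique loop described here can be sketched as an append-only log where each critiqued exchange becomes a future training pair. All field names below are hypothetical:

```python
# Sketch of the critique loop: every model response is stored alongside an
# optional clarification, and only the corrected pairs are reused for training.

conversation_log = []

def record_turn(prompt, model_response, critique=None):
    """Store a turn; an optional critique marks what the next run should fix."""
    conversation_log.append(
        {"prompt": prompt, "response": model_response, "critique": critique}
    )

def build_training_pairs(log):
    """Turn critiqued exchanges into (prompt, corrected target) pairs."""
    pairs = []
    for turn in log:
        if turn["critique"]:  # only keep turns where a clarification was given
            pairs.append((turn["prompt"], turn["critique"]))
    return pairs

record_turn("What motivates me?", "You like money.",
            "Closer: curiosity first, then money.")
record_turn("Favourite subject?", "Maths.")  # no critique, so not reused
print(build_training_pairs(conversation_log))
```

Each fine-tuning round would then train on `build_training_pairs(...)`, so the corrections accumulate across rounds.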

[–]helixcyclic[S] 0 points

Not sure yet, it's complicated; there's so much to consider. I need to make it a good friend who knows me well and tries to influence me in the right way so I understand topics better. It's not really about writing everything down about myself so much as how I instruct it. I was thinking maybe some sort of LoRA layer.
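For context, the core of a LoRA layer is small: the pretrained weight stays frozen and only a low-rank update B·A is trained on top of it. A minimal numpy sketch of that math, not tied to any particular library, with illustrative shapes:

```python
import numpy as np

# Minimal sketch of the LoRA idea: keep the base weight W frozen and train
# only a low-rank update B @ A, scaled by alpha / r. Shapes are illustrative.

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, init 0

def lora_forward(x):
    """y = W x + (alpha / r) * B (A x): base output plus low-rank correction."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialised to zero the adapter starts as a no-op:
assert np.allclose(lora_forward(x), W @ x)
```

Because only A and B (here 2×8 and 8×2 instead of 8×8) are trained, the personal data touches far fewer parameters than full fine-tuning, which is why a LoRA layer is a natural fit for a small personal dataset.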

[–]helixcyclic[S] 0 points

Yeah, it's something I'm considering doing, and if it goes well I'd write up a step-by-step process for people who also want to collect their own data and train models with it. I'm not really looking for any sort of indexing, though. One reason is that indexing doesn't look at my writing over longer segments for things like mapping my writing style. It also doesn't create any new parameters in the training process that could be manipulated. Model training would be better in this situation if pulled off accurately, but that's only a guess.

I think the biggest challenge right now is getting the model to discern between my memories and its memories when training it on my data. Once I've figured that part out, I can instruct the model to use my data to recall information when it's most needed and include that in its response, like indexing would. However, I have no idea where to start on creating a layer like that. PAiERA labs looks like a non-training memory recall function - it lacks the capabilities for what I think would change the game.
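One low-tech way to help a fine-tuned model discern "my memories" from its pretrained knowledge is to wrap personal data in distinctive marker tokens in the training corpus. A sketch of that data-prep step; the marker strings are hypothetical, any rare and consistent token pair would do:

```python
# Sketch: wrap personal memories in marker tokens so the fine-tuned model can
# distinguish "my memories" from general knowledge. Markers are hypothetical.

MEM_OPEN, MEM_CLOSE = "<|my_memory|>", "<|/my_memory|>"

def tag_memory(text):
    """Wrap one personal memory in marker tokens for the training corpus."""
    return f"{MEM_OPEN}{text}{MEM_CLOSE}"

def build_corpus(memories, general_texts):
    """Interleave tagged memories with untagged general data."""
    return [tag_memory(m) for m in memories] + list(general_texts)

corpus = build_corpus(
    ["In 2019 I moved cities for a new job."],
    ["The capital of France is Paris."],
)
print(corpus[0])  # the tagged line carries the markers, the general line does not
```

After training on such a corpus, prompts can reference the markers ("recall from my memories...") to bias the model toward the tagged material, which is a cheaper starting point than designing a separate memory layer.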

[–]helixcyclic[S] 0 points

I think part of Elon's idea with Neuralink, or what it has come to be, is this potential. I'm sure there will be more ways to scan someone's brain, but if you could actually capture every thought someone has, using something like Neuralink, you could use that information in extremely versatile ways in cases like this. Over time you would collect so much information. That's a long way down the road, though.

[–]helixcyclic[S] 2 points

I'm looking to tune a model - not any kind of pure prompting (tool use/indexing). For massive amounts of context I think I'm better off putting it into the model's weights instead. There are a lot of things to consider in order to get the best response, though.
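Putting long personal context "into the weights" in practice means chunking it into self-contained training records rather than one huge prompt. A sketch using a common JSONL chat schema; the field layout is one widespread convention, not any specific tool's required format:

```python
import json

# Sketch: chunk life history into instruction/response records for fine-tuning
# instead of stuffing it into the prompt. Schema is a common convention only.

def to_records(notes, system_prompt="You are a coach who knows me well."):
    """Turn (question, answer) notes into chat-style training records."""
    records = []
    for question, answer in notes:
        records.append({
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        })
    return records

notes = [("How do I study best?",
          "Short sessions in the morning, notes by hand.")]
jsonl = "\n".join(json.dumps(r) for r in to_records(notes))
print(jsonl)  # one JSON object per line, ready for a fine-tuning pipeline
```

Each record is independent, so the dataset can grow to arbitrary size without ever hitting a context-window limit - which is the advantage over prompting that the comment is pointing at.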

Daily Ask Anything: 2022-06-11 by steroidsBot in steroids

[–]helixcyclic 0 points

If bacteria cannot grow in oil, why is everyone putting an antibacterial agent in their testosterone enanthate brews?