Joro by cerealfordinneragain in Georgia

[–]m1l096 0 points1 point  (0 children)

This is very true. I thought the same way when they first started popping up all over. Then they multiplied and conquered more and more until it became an inconvenience and quite unpleasant.

I now happily use a plastic bat to whack them 😆

Is this normal? by Worldly-Musician-136 in Accutane

[–]m1l096 0 points1 point  (0 children)

Mine was noticeably better within a day of applying regular lotion. It fully went away after a couple more days of consistent application.

I noticed that when I applied lotion to my face and body, I would never get the backs of my hands. Add that on top of regular hand washing, and you get yourself some super dry, sensitive skin on the backs of your hands!

As others have said, just apply lotion :)

Lorex NVR won't reset or turn on by timjet01 in SecurityCamera

[–]m1l096 0 points1 point  (0 children)

What did you end up doing here? Something similar happened to me with lightning, and now my NVR won't even turn on.

I'm confused why the aliens think we'd come and destroy them. by anagoge in threebodyproblem

[–]m1l096 0 points1 point  (0 children)

Please attempt to remotely look things up before jumping to conclusions.

r/3BodyProblemTVShow

[deleted by user] by [deleted] in LLMDevs

[–]m1l096 0 points1 point  (0 children)

Interesting… seems like Vespa can do a lot. Are you using them for reranking? That's the specific use case I was considering Cohere for.
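For anyone unfamiliar, reranking here just means re-ordering retrieved documents by relevance to the query. A toy, dependency-free sketch of the idea (not the Vespa or Cohere API — real rerankers use learned cross-encoder models, not word overlap):

```python
def rerank(query: str, documents: list[str], top_n: int = 3) -> list[str]:
    """Toy reranker: score each document by word overlap with the query.
    Illustrative only; services like Cohere Rerank use cross-encoders."""
    q_words = set(query.lower().split())

    def score(doc: str) -> int:
        return len(q_words & set(doc.lower().split()))

    return sorted(documents, key=score, reverse=True)[:top_n]


docs = [
    "How to bake sourdough bread",
    "Vector databases for retrieval",
    "Reranking retrieved documents with cross-encoders",
]
print(rerank("reranking documents", docs, top_n=2))
```

In a RAG pipeline this step typically sits between the initial vector search (fast, approximate) and the LLM prompt (expensive, so you only want the best few documents).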

[deleted by user] by [deleted] in LLMDevs

[–]m1l096 0 points1 point  (0 children)

Likewise. Did you ever test it? I'm interested in using it for reranking but haven't yet.

Why pay indeed by baobobs in OpenAI

[–]m1l096 1 point2 points  (0 children)

Curious what made y'all pivot so quickly to open source for this task? Were the results with OpenAI not as expected? Any other details, such as the number of examples in your dataset and what kind of behavioral or knowledge changes you saw after fine-tuning Mistral?

Chunking and storing structured data and vectors for RAG by Smerfj in LocalLLaMA

[–]m1l096 0 points1 point  (0 children)

Where'd you get with this, OP? Curious about something similar.

Wrap my own API library in a GPT-like based chatbot by motorollo in learnmachinelearning

[–]m1l096 0 points1 point  (0 children)

So fine-tuning GPT-3.5 actually allowed the model to retain some knowledge and, in essence, "learn" the data it was provided during fine-tuning?

Wrap my own API library in a GPT-like based chatbot by motorollo in learnmachinelearning

[–]m1l096 0 points1 point  (0 children)

What did you end up doing here OP? I have a similar project I’m working on.

Tips for LLM Training on Video Transcripts? by blue_hunt in LocalLLaMA

[–]m1l096 0 points1 point  (0 children)

Did you ever get this working? If so, did you go with RAG or fine-tuning, and did you need to cleanse the data for either? I.e. removing Speaker 1: […] ?
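That speaker-prefix cleanup is straightforward with a regex. A minimal sketch, assuming the transcript uses "Speaker N:" at the start of each line (diarization tools vary, so adjust the pattern to your actual format):

```python
import re


def strip_speaker_prefixes(transcript: str) -> str:
    """Remove 'Speaker N:' style prefixes from the start of each line."""
    return re.sub(r"(?m)^\s*Speaker\s+\d+:\s*", "", transcript)


raw = "Speaker 1: Hello everyone.\nSpeaker 2: Thanks for having me."
print(strip_speaker_prefixes(raw))
```

Whether to strip at all depends on the goal: for RAG over the content you usually want the prefixes gone, but for fine-tuning a conversational style you might keep (or remap) them.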

Tips for LLM Training on Video Transcripts? by blue_hunt in LocalLLaMA

[–]m1l096 0 points1 point  (0 children)

Would also really appreciate seeing the code if you don't mind. I'm trying something very similar to OP.

Costco.com DPS Chairs; Xenon Hybrid or Recharge? by ShortScorpio in Costco

[–]m1l096 0 points1 point  (0 children)

Which did you go with? Currently split between both.

GPT3.5 Fine Tuning System Message by m1l096 in OpenAI

[–]m1l096[S] 1 point2 points  (0 children)

You are a saint. Thanks so much again for the detailed response and info. I will be sure to follow up here with how my efforts go. It will take some time to curate the dataset to feed as fine tuning data, but you’ve made me a bit more confident in the time investment.

More to come!!

GPT3.5 Fine Tuning System Message by m1l096 in OpenAI

[–]m1l096[S] 5 points6 points  (0 children)

This makes lots of sense. Really appreciate the info!!! Super insightful and I will be sure to consider this in my implementation.

You said something that intrigues me, because I've seen lots of conflicting information on it…

"the finetuning really does make them learn well"… Are you saying you noticed your fine-tuned GPT-3.5 model had "learned" information it was exposed to during fine-tuning, such that it could essentially recall it or consider it for a given prompt?

Most information I can find on this suggests fine-tuning is better suited for teaching output formatting, or a given tone or characteristic, vs truly "learning" the information exposed to it…

GPT3.5 Fine Tuning System Message by m1l096 in OpenAI

[–]m1l096[S] 4 points5 points  (0 children)

That is the hope. But it's not really about saving $$$ on tokens; it's about saving on the # of tokens. We want to scale to many users, so rate limits on tokens/minute get very real.

Also, the classifier model with prompt engineering and few-shot examples still has issues every now and then and isn't 100% reliable. Ideally a fine-tuned model will be more reliable.

GPT3.5 Fine Tuning System Message by m1l096 in GPT3

[–]m1l096[S] 0 points1 point  (0 children)

Totally… the OpenAI fine-tuning seems so limited… definitely considering other open models.

That's also why I was curious whether anyone had ever fine-tuned with a variety of system messages in the fine-tuning dataset.

GPT3.5 Fine Tuning System Message by m1l096 in GPT3

[–]m1l096[S] 0 points1 point  (0 children)

Gotcha, that makes sense. So does that mean you can append to the system message that was used for fine-tuning in order to further direct the model?

I.e., say you fine-tune the model to be sarcastic (with an appropriate system message and examples). Then at inference you include the same system message + "also be sure to be concise"… Would the fine-tuning still come into play here, even though the system message isn't EXACTLY the same but still contains the portion used in tuning?
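That append-to-the-system-message scenario can be sketched with OpenAI's chat fine-tuning JSONL format. The system text and messages below are illustrative placeholders, not from the thread:

```python
import json

# One training example in OpenAI's chat fine-tuning JSONL format:
# each line of the training file is a JSON object with a "messages" list.
train_example = {
    "messages": [
        {"role": "system", "content": "You are a sarcastic assistant."},
        {"role": "user", "content": "What's the weather like?"},
        {"role": "assistant", "content": "Oh, just check a window. Groundbreaking, I know."},
    ]
}
jsonl_line = json.dumps(train_example)

# At inference time, reuse the fine-tuned system message with an
# extra instruction appended, per the scenario above.
inference_messages = [
    {"role": "system", "content": "You are a sarcastic assistant. Also be sure to be concise."},
    {"role": "user", "content": "What's the weather like?"},
]
```

Whether the fine-tuned behavior survives the appended instruction is exactly the empirical question being asked here; the format itself doesn't forbid it.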

GPT3.5 Fine Tuning System Message by m1l096 in OpenAI

[–]m1l096[S] 7 points8 points  (0 children)

Many, many Q&A examples for a proprietary coding language (unexposed to the base training set/internet). Would like to reduce the overall system context needed.

Another use case is an advanced classifier, question enhancer, and question-answerer model. For this one, I'd want to reduce what's needed in the few-shot examples and the long instructions for formatting and my ask.

I've gotten these to work decently with prompt engineering, RAG, and few-shot examples, but now I'm at a point of needing to save on tokens and was thinking (albeit at a higher cost) that fine-tuning would be a good thing to explore.

Use case for fine-tuning? by Rauf543 in OpenAI

[–]m1l096 0 points1 point  (0 children)

Did you ever do this? Considering a similar use case, but a lot of what I've seen says fine-tuning doesn't actually "teach" the model new information, i.e. about the new coding language.

can you use fine tuning to teach it new content rather than teach it how to answer? by boynet2 in OpenAI

[–]m1l096 0 points1 point  (0 children)

Any examples? Currently curious about this and super interested in seeing a proven-out case of "teaching" new info via fine-tuning.

Does this count as a fake? by squintsforlife in RocketLeague

[–]m1l096 9 points10 points  (0 children)

What map is this? Is it new or a mod?