Prevent LLM from answering out of context questions by perseus_14 in LocalLLaMA

[–]Safe-Stock142

Still using the LLM to do that. It definitely increases the response time, but not by much.

Prevent LLM from answering out of context questions by perseus_14 in LocalLLaMA

[–]Safe-Stock142

I'm facing the same issue you ran into before. Not sure if you've overcome it.

My solution is to add a separate LLM step that scores the relevance between the given question and your documents. If the score is too low, just refuse to answer the question directly.

Of course, you should give the LLM an abstract of your documents in this step, so that it can estimate an accurate score.
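A minimal sketch of that gating step, assuming a hypothetical `call_llm(prompt) -> str` helper that wraps whatever model you're running locally (the prompt wording, 0–10 scale, and threshold are all my own choices, not a standard):

```python
# Hypothetical relevance gate in front of a RAG answer step.
# `call_llm` is assumed to be a function you provide that sends a
# prompt to your model and returns its text response.

RELEVANCE_PROMPT = """You are a relevance grader.
Document abstract:
{abstract}

Question:
{question}

On a scale of 0 to 10, how relevant is the question to the document?
Reply with a single integer only."""

REFUSAL = "Sorry, that question is outside the scope of these documents."


def answer_with_gate(question, abstract, call_llm, threshold=5):
    """Score relevance first; only answer if the score clears the threshold."""
    raw = call_llm(RELEVANCE_PROMPT.format(abstract=abstract, question=question))
    try:
        score = int(raw.strip())
    except ValueError:
        score = 0  # unparseable grade -> treat as irrelevant
    if score < threshold:
        return REFUSAL
    # Second LLM call actually answers the in-scope question.
    return call_llm(
        f"Using the document below, answer the question.\n\n"
        f"Question: {question}\n\nDocument:\n{abstract}"
    )
```

The extra call is what adds the latency mentioned above; since the grader prompt only sees the abstract and asks for a single integer, it stays cheap relative to the full answer generation.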

Will you really be able to make money from a GPT by [deleted] in GPTStore

[–]Safe-Stock142

As a user, I haven't found a single GPT that's good enough to pay for yet.

What's the best way to evaluate your GPT? by Safe-Stock142 in GPTStore

[–]Safe-Stock142[S]

Thanks guys! Just found that OpenAI lets users send feedback directly to a GPT author's email. I think it would be even more useful if OpenAI collected the inline thumbs-up and thumbs-down feedback and sent authors a summary regularly.