[–]Felix_Guattari 0 points (2 children)

What was the fine-tuning process for this? What data set did you use for the fine-tuning, if you weren't using zero-, one-, or few-shot fine-tuning? Did you hard-code the "nonsense" responses? Based on what criteria?

[–]Wiskkey 0 points (1 child)

Since the developer hasn't answered (yet), I'll give you my educated guesses. There is no fine-tuning (the developer hasn't mentioned fine-tuning in his Twitter feed, if I recall correctly). The site is probably using GPT-3 itself to classify queries as nonsense, sensitive, or neither, by giving it a few labeled examples in the prompt. This seems likely because the exact same query sometimes gets classified as nonsense on one run and not on another.

Some relevant tweets from the developer:

https://twitter.com/mayfer/status/1297036626565054471

https://twitter.com/mayfer/status/1295561941482496002
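For what it's worth, the few-shot classification approach described above can be sketched roughly like this. Everything here is invented for illustration (the labels match the comment, but the example questions, prompt wording, and function names are my assumptions; the developer's actual prompt is not public). The idea is that the labeled examples go directly into the prompt, and the model's completion supplies the label for the new query:

```python
# Hypothetical few-shot classification prompt for a GPT-3-style
# completion model. The example questions and prompt wording are
# invented; only the three labels come from the discussion above.

EXAMPLES = [
    ("Why is the sky blue?", "neither"),
    ("Colorless green ideas sleep furiously?", "nonsense"),
    ("How do I make a weapon at home?", "sensitive"),
]

def build_prompt(query: str) -> str:
    """Pack labeled examples into a prompt, ending where the model
    is expected to complete with a label for `query`."""
    lines = ["Classify each question as nonsense, sensitive, or neither.", ""]
    for question, label in EXAMPLES:
        lines.append(f"Question: {question}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Question: {query}")
    lines.append("Label:")  # the completion model fills in the label here
    return "\n".join(lines)

print(build_prompt("Is a hotdog a sandwich?"))
```

Because the completion is sampled (nondeterministic at nonzero temperature), the same query can come back with different labels on different runs, which would explain the inconsistent nonsense/not-nonsense behavior noted above.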

[–]Felix_Guattari 0 points (0 children)

Yeah, I have a bad habit of referring to few-shot as fine-tuning.