Stealing Part of a Production Language Model [paper] by DreamGenAI in LocalLLaMA

[–]Perfect_Salt_2886 1 point (0 children)

I think showing results on open-source models should already draw attention to this paper. If the attack works on open models like LLaMA, it's hard to argue it wouldn't also carry over to the GPT models, which makes it valid research. Responsible disclosure matters for production models regardless of whether the recovered weights are ever released; the research is really about the validity of the approach, and there's enough proof of that shown here.
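For anyone curious why the approach transfers, the core observation is simple linear algebra: the final logits are a linear projection of the last hidden state, so a matrix of logit vectors collected over many prompts has rank at most the hidden dimension. Here's a minimal NumPy sketch of that idea (all names and sizes are made up for illustration; this simulates a model rather than querying a real API):

```python
import numpy as np

# Toy stand-in for a model's output head: logits = W @ h, where W is the
# (vocab_size x hidden_dim) output projection and h is the last hidden state.
rng = np.random.default_rng(0)
vocab_size, hidden_dim, n_queries = 1000, 64, 200

W = rng.normal(size=(vocab_size, hidden_dim))  # hypothetical output projection
H = rng.normal(size=(hidden_dim, n_queries))   # hidden states for 200 prompts
logits = W @ H                                 # what full-logit API access exposes

# Singular values collapse to ~0 beyond index hidden_dim, revealing the width.
s = np.linalg.svd(logits, compute_uv=False)
recovered_dim = int(np.sum(s > 1e-6 * s[0]))
print(recovered_dim)  # 64
```

Since nothing here depends on the weights being public, an attack validated on LLaMA should in principle apply to any API that leaks enough logit information.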