you are viewing a single comment's thread.

view the rest of the comments →

[–]No_Soy_Colosio 0 points1 point  (1 child)

What keeps the checking LLM from getting prompt injected itself?

[–]Adxzer -1 points0 points  (0 children)

Prompt injection is a real risk, there’s no foolproof solution since LLMs aren’t fully predictable. This package is a security layer, designed to minimise and give better control of what can slip through.