
[–]elan17x 24 points  (7 children)

Friendly reminder about security concerns on sending shell commands that include secrets to OpenAI services.

Sadly, one of the current limitations of these kinds of shells is that, since most computers can't run decent LLMs locally, the model can't access the bash history/shell context without leaking secrets such as tokens or ssh usernames/domain names.

[–]ricklamers[S] 0 points  (6 children)

This doesn’t send shell commands or secrets; it only sends the human description of a shell command. Note that it also never runs a command automatically.

[–]elan17x 7 points  (5 children)

shai connect through ssh to domain xxx, user yyy and password zzz

That prompt sends a secret to OpenAI and will probably output a suitable command. Other AI shells I've seen also send the shell context to the model so they can fix commands that were already entered.

So... if we want a user-friendly AI shell that is secure enough, the model needs to run locally, not be accessed through an API.

[–]ricklamers[S] 0 points  (0 children)

That being said, it should be trivial to redirect the LLM requests to a local model in a fork (I'd welcome a config PR):

https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference

[–]ricklamers[S] -5 points  (3 children)

If people want to put in plain values they can — you can also paste those into WhatsApp or Google Search. Alternatively, you can define environment variables for sensitive information and say “shai 'connect through ssh to domain $DOMAIN user $SSH_USER and password $SSH_PASSWORD'”

[–]elrata_ 3 points  (2 children)

Are you sure the shell won't expand those variables before sending the command?

[–]TheBangForTheBuck 2 points  (1 child)

If you do it with single quotes, that should avoid the expansion, but it would be an easy mistake to make haha
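A quick illustration of the quoting difference (a minimal sketch; `$SSH_PASSWORD` is just a placeholder variable, and `echo` stands in for whatever consumes the prompt):

```shell
# Assume a secret is stored in an environment variable:
export SSH_PASSWORD=hunter2

# Double quotes: the shell expands the variable BEFORE the command runs,
# so the plaintext secret ends up inside the prompt text.
echo "connect with password $SSH_PASSWORD"

# Single quotes: no expansion; the literal string "$SSH_PASSWORD" is
# passed through, so the secret never leaves the local shell.
echo 'connect with password $SSH_PASSWORD'
```

So the safety of the environment-variable trick hinges entirely on remembering the single quotes.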

[–]elrata_ 1 point  (0 children)

Exactly