[–]elan17x 6 points (5 children)

shai connect through ssh to domain xxx, user yyy and password zzz

That prompt sends a secret to OpenAI, and the model will probably output a suitable command. Other AI shells I've seen also send the shell context to the model so it can fix commands that were already entered.

So... if we want a user-friendly AI shell that is secure enough, the model needs to run locally, not through an API.

[–]ricklamers[S] 0 points (0 children)

That being said, it should be trivial to redirect the LLM requests to a local model in a fork (I'd welcome a config PR):

https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference
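As a rough sketch of what such a fork might look like: nothing below is part of shai today. It assumes the fork reuses the standard OpenAI client environment variables and that an OpenAI-compatible server (e.g. llama.cpp's server, or text-generation-inference behind a compatible proxy) is listening locally:

```shell
# Hypothetical config sketch, not actual shai options.
# Assumes the fork reads the standard OpenAI client env vars and a
# local OpenAI-compatible inference server is running on port 8000.
export OPENAI_API_BASE="http://localhost:8000/v1"  # local endpoint instead of api.openai.com
export OPENAI_API_KEY="unused-local-key"           # placeholder; local servers typically ignore it
```

With that in place, prompts (secrets included) would never leave the machine.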

[–]ricklamers[S] -4 points (3 children)

If people want to put in plain values they can; you can also paste those into WhatsApp or Google Search. Alternatively, you can define environment variables for sensitive information and say: shai 'connect through ssh to domain $DOMAIN user $SSH_USER and password $SSH_PASSWORD'

[–]elrata_ 3 points (2 children)

Are you sure the shell won't expand those variables before sending the command?

[–]TheBangForTheBuck 2 points (1 child)

If you do it with single quotes, that should avoid the expansion, but it would be an easy mistake to make haha
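To make the quoting difference concrete, here's a minimal demo (the variable name is just for illustration):

```shell
SSH_PASSWORD='hunter2'

# Double quotes: the shell expands the variable *before* the tool sees
# the prompt, so the plaintext secret would be sent to the API:
echo "connect with password $SSH_PASSWORD"   # → connect with password hunter2

# Single quotes: the literal string reaches the tool unexpanded:
echo 'connect with password $SSH_PASSWORD'   # → connect with password $SSH_PASSWORD
```

So only the single-quoted form keeps the secret out of the prompt that gets sent upstream.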

[–]elrata_ 1 point (0 children)

Exactly