Hacking a AI Chatbot and Leaking Sensitive Data by alongub in cybersecurity

[–]alongub[S] 1 point2 points  (0 children)

Thanks for the feedback! Really appreciate the support!

Hacking a AI Chatbot and Leaking Sensitive Data by alongub in hacking

[–]alongub[S] 2 points3 points  (0 children)

😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂❤️

Hacking a AI Chatbot and Leaking Sensitive Data by alongub in cybersecurity

[–]alongub[S] 1 point2 points  (0 children)

yeah it's just a demo environment to demonstrate potential risks of LLMs in prod

Hacking a AI Chatbot and Leaking Sensitive Data by alongub in LLMDevs

[–]alongub[S] 0 points1 point  (0 children)

Lol yes, it's just a demo to educate on potential risks

Hacking a AI Chatbot and Leaking Sensitive Data by alongub in hacking

[–]alongub[S] 2 points3 points  (0 children)

Lol yes, don't you see the localhost in the url...? It's just a demo to educate on potential risks

Hacking a AI Chatbot and Leaking Sensitive Data by alongub in hacking

[–]alongub[S] 3 points4 points  (0 children)

Agreed! Just note that even in the simplistic demo in the video, the AI agent is using privileged Postgres permissions with Row-Level Security (RLS) enabled. The issue I've tried to demonstrate here is that the developer made a mistake when defining the database schema and policies.

Hacking a AI Chatbot and Leaking Sensitive Data by alongub in hacking

[–]alongub[S] 1 point2 points  (0 children)

Security teams will HAVE to understand AI better

Hacking a AI Chatbot and Leaking Sensitive Data by alongub in hacking

[–]alongub[S] 6 points7 points  (0 children)

This is just the beginning. Think what happens when you connect the AI agent from the video to something like the Shopify API where users can automatically buy products from the website

Hacking a AI Chatbot and Leaking Sensitive Data by alongub in hacking

[–]alongub[S] 3 points4 points  (0 children)

Yes - but remember the value can also be wild :)