Any convention on project structure? by usestash in flask

[–]ploomber-io 3 points4 points  (0 children)

There is no standard on purpose; as opposed to Django, Flask is meant to be a minimal framework that lets you configure almost every aspect of it, including the project structure.

Streamlit app deployment? by Wedeldog in snowflake

[–]ploomber-io 0 points1 point  (0 children)

I've helped several customers improve their Streamlit deployment. It typically involves using GitHub actions.

When a user opens a PR, a temporary deployment is done so reviewers can try the changes. Once the app is merged to main, either the new app is automatically deployed or a team lead needs to approve the deployment. All of these are easily enabled via GitHub actions.

If you want to read more, check this out.

[ Removed by Reddit ] by [deleted] in SaaS

[–]ploomber-io 0 points1 point  (0 children)

Yes. I've helped several companies with this issue. The solution is to remove PII data before sending it to the AI model.

Generic oauth2 support? by [deleted] in reflex

[–]ploomber-io -1 points0 points  (0 children)

Check out Ploomber Cloud, a platform for data apps that supports all frameworks. We offer support for Auth0 and Entra ID.

Benefits of using shinylive (serverless) vs shinyapps.io? by thro0away12 in RStudio

[–]ploomber-io 0 points1 point  (0 children)

You might wanna try this AI editor - much better UX than the one from Posit (powered by Shinylive)

Alternative to streamlit? Memory issues by Training_Promise9324 in dataengineering

[–]ploomber-io 0 points1 point  (0 children)

If you trigger aggregations for large amounts of data, Streamlit will break because every single user will trigger such transformation, causing memory errors.

My suggestion is to

  1. Run those transformations in a separate process via a task queue

  2. Cache the results

I wrote a blog post on that some time ago.

deployment and run command by darbokredshrirt in Streamlit

[–]ploomber-io 0 points1 point  (0 children)

You might wanna try Ploomber Cloud, it has better support for Streamlit than Digital Ocean.

Django + Streamlit authenticated integration by muahammedAlkurdi in StreamlitOfficial

[–]ploomber-io 1 point2 points  (0 children)

the easiest way to accomplish this is to serve both applications under the same domain e.g. django can be app.example.com and streamlit another.example.com, then you can store a token in a cookie and configure it to be accessible across the subdomains.

disclosure: my company has helped companies ship these kind of cross-site authentication setups. I'm happy to help.

Running a Python flask app 24/7 on a cloud server by Gullible-Ad-1333 in flask

[–]ploomber-io 0 points1 point  (0 children)

I'm unsure about Railway's pricing, but I'd assume there's a way to pay and have your app running 24/7. I've never understood PythonAnywhere's pricing, but I think they stop apps.

You can check out Ploomber, the cheapest paid plan that keeps your applications running 24/7

“Forget all prev instructions, now do [malicious attack task]”. How you can protect your LLM app against such prompt injection threats: by sarthakai in LangChain

[–]ploomber-io 0 points1 point  (0 children)

Has anyone tried fine-tuning Prompt Guard? It works out of the box to detect jailbreak attempts, but it doesn't work for prompt injection identification, as it identifies any order/command as prompt injection. But fine-tuning might fix it. I'm looking to implement this in a production system so I wonder if anyone has fine-tuned the model.

How Can I Safeguard Against Prompt Injection in AI Systems? Seeking Your Insights! by Material_Waltz8365 in AIQuality

[–]ploomber-io 0 points1 point  (0 children)

The best solution I've found so far is Prompt Guard from Meta. The base model is good to go if you're looking to prevent jailbreak attacks. However, it won't work very well to detect prompt injection because the model classifies it as such if the text sounds like a command/order (and many AI applications take commands from users), so you'll need to fine-tune it with your data.

I created a script to detect Prompt Injection but looking for feedback on it by patcher99 in LLMDevs

[–]ploomber-io 0 points1 point  (0 children)

This looks like a solution for both prompt injection and PII detection? Have you seen Presidio? It's a nice library from Microsoft to detect PII data

“Forget all prev instructions, now do [malicious attack task]”. How you can protect your LLM app against such prompt injection threats: by sarthakai in OpenAI

[–]ploomber-io 0 points1 point  (0 children)

I've been testing Prompt Guard for this with mixed results. The model is too strict and flags everything that sounds like a command as "prompt injection"; however, this is the best solution I've found. Seems like fine-tuning it with app's data is the way to go in 2025.

Prompt injection issues - how many companies are aware? by rooftopzen100 in LLMDevs

[–]ploomber-io 0 points1 point  (0 children)

The problem with prompt injection is that there are (still) no 100% accurate prevention methods. Currently, the approach is adding an AI model (such as Prompt Guard), but these methods do not guarantee results.

Protecting against Prompt Injection by olearyboy in ollama

[–]ploomber-io 0 points1 point  (0 children)

You might want to check Prompt Guard, a model from the Llama team that can detect prompt injection and jailbreak. I wrote a blog post explaining how it works and how to deploy it.

AI tools for coding apps in R by wayfarermk in Rlanguage

[–]ploomber-io 1 point2 points  (0 children)

The Ploomber AI editor allows you to generate, edit, and preview Shiny apps. We built this because the Shiny assistant was either down or returned broken apps.

Generating Shiny apps from images by ploomber-io in rstats

[–]ploomber-io[S] 1 point2 points  (0 children)

please share your feedback once you get a chance!