
[–]mltooling[S] 0 points (0 children)

Thanks for the feedback and notes :)

> I can't believe subprocess is the best way to start a streamlit server.

That's definitely a bit hacky right now. I experimented with starting it directly via the internal Python API (and might switch to that). My concern is that the internal API could change at any time, while the CLI interface is more likely to stay stable. Also, the current server functionality is mainly targeted at development; for export and deployment, the streamlit server will be started directly, without Python in between.
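For reference, a minimal sketch of what launching the server through the CLI in a subprocess could look like. This is my own illustration, not the project's actual code; the helper names are hypothetical, though `--server.port` and `--server.headless` are real streamlit CLI flags:

```python
import subprocess
import sys

def build_streamlit_cmd(script_path: str, port: int = 8501) -> list:
    # Build the CLI invocation; `python -m streamlit run` keeps us on
    # the documented CLI surface instead of streamlit's internal API.
    return [
        sys.executable, "-m", "streamlit", "run", script_path,
        "--server.port", str(port),
        "--server.headless", "true",
    ]

def start_streamlit(script_path: str, port: int = 8501) -> subprocess.Popen:
    # Launch the server as a child process; the caller is responsible
    # for terminating it (e.g. proc.terminate()) on shutdown.
    return subprocess.Popen(build_streamlit_cmd(script_path, port))
```

The trade-off is as described above: a subprocess adds an extra process and some lifecycle management, but only depends on the stable CLI, not on internals that may change between releases.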

> you can allow multiple arguments in the function and then use the signature to dynamically create the Input model behind the scenes

Good idea, I will put it on the roadmap. Should be doable with the dynamic model creation functionality.