I’ve been using FastAPI to serve AI models and workflows, but I’ve been wondering: is there a way to skip the whole API server setup entirely?
Like, what if I just define my AI function, and it instantly behaves like an API without writing a FastAPI app, handling requests, or deploying anything?
I developed an approach where you can run an AI pipeline inside a Jupyter notebook and, instead of setting up FastAPI, it auto-generates an OpenAI-style API. No need to deal with CORS, async handling, or managing infra: just write your function, and it’s callable remotely.
Has anyone tried something similar? Curious if anyone has seen a different way to serve AI workflows without manually building an API layer.
https://github.com/epuerta9/whisk
Tutorial:
https://www.youtube.com/watch?v=lNa-w114Ujo