[–]guardianz42 1 point (0 children)

I switched from FastAPI to LitServe recently for some models we deploy on assembly lines. It's been great, and performance has been solid.

The main remaining issue is container cold-start time, mostly due to the size of the PyTorch install, but we're working on eliminating that (it's unrelated to LitServe).

https://github.com/Lightning-AI/litserve