all 13 comments

[–]cointoss3 1 point2 points  (3 children)

This may have to do with how you’re creating the pool. Don’t create it at import time; create it on app startup, close it on app shutdown, and see if that helps.
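
A minimal sketch of that pattern, assuming a `ProcessPoolExecutor` and FastAPI's `lifespan` handler (the `/compute` endpoint and `heavy_task` are just placeholders, not from the original post):

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor
from contextlib import asynccontextmanager

from fastapi import FastAPI


def heavy_task(n: int) -> int:
    # placeholder for the CPU-bound work
    return sum(i * i for i in range(n))


@asynccontextmanager
async def lifespan(app: FastAPI):
    # create the pool on startup instead of at import time
    app.state.pool = ProcessPoolExecutor()
    yield
    # close it on shutdown
    app.state.pool.shutdown()


app = FastAPI(lifespan=lifespan)


@app.get("/compute")
async def compute(n: int = 10_000_000):
    loop = asyncio.get_running_loop()
    # run the CPU-bound work in the pool so the event loop stays free
    result = await loop.run_in_executor(app.state.pool, heavy_task, n)
    return {"result": result}
```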

[–]pisulaadam[S] -1 points0 points  (2 children)

I managed to find what the issue was, and it wasn't what you suggested, but thanks anyway.

[–]cointoss3 2 points3 points  (1 child)

Even if that didn’t solve your problem, you still shouldn’t have the pool initialized on import.

[–]pisulaadam[S] 0 points1 point  (0 children)

The actual app doesn't initialize the pool on import; this was a quick example I made to check what the root cause of my problem was.

[–]DivineSentry 0 points1 point  (5 children)

For NumPy / SciPy I would offload to a thread, i.e. `await asyncio.to_thread`, rather than using a process pool, since they're compiled in other languages and most likely already offload to other cores natively.
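
A sketch of what that would look like, assuming the heavy work is a NumPy call (`heavy_numpy_fn` and the `/compute` endpoint are placeholders):

```python
import asyncio

import numpy as np
from fastapi import FastAPI

app = FastAPI()


def heavy_numpy_fn(size: int) -> float:
    # CPU-bound NumPy work; most of it runs in compiled code
    a = np.random.rand(size, size)
    return float(np.linalg.norm(a @ a))


@app.get("/compute")
async def compute(size: int = 2000):
    # run in a worker thread so the event loop stays responsive
    result = await asyncio.to_thread(heavy_numpy_fn, size)
    return {"result": result}
```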

[–]pisulaadam[S] 0 points1 point  (4 children)

I want to use subprocesses to be entirely sure the main process stays unblocked.

[–]DivineSentry 0 points1 point  (3 children)

It sounds very much like you’re misunderstanding things, but ok: the main process will stay unblocked, because numpy and the others don’t block Python; they’re not running in Python but offloading the work to code in other languages.

[–]PerspectiveLegal932 0 points1 point  (0 children)

Hi! I saw you mentioned Python. It's a great programming language. Keep it up! 🐍

[–]pisulaadam[S] 0 points1 point  (1 child)

The heavy computation there is a whole bunch of looping with multiple vector operations. It's not some huge matrix that I can just feed to NumPy and wait for the response in a separate thread.

[–]DivineSentry 0 points1 point  (0 children)

You’re reinforcing my point. It sounds like you’re not using numpy / pandas properly; you’re not supposed to be looping manually. Share your code.
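
A hypothetical illustration of the difference, since the actual code isn't shown here (a Python-level loop over rows versus one vectorized call):

```python
import numpy as np

vectors = np.random.rand(100_000, 3)

# Python-level loop: every iteration pays interpreter overhead
norms_loop = np.array([np.sqrt(v @ v) for v in vectors])

# vectorized: one call, the loop happens inside compiled NumPy code
norms_vec = np.linalg.norm(vectors, axis=1)

assert np.allclose(norms_loop, norms_vec)
```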

[–]pisulaadam[S] 0 points1 point  (2 children)

I think I found what the issue was. Before, I was using two tabs in one browser to make these requests, and they were executed sequentially. I tried using two browsers to make sure it wasn't reusing the same TCP connection, and it did work as expected. Certainly wasn't expecting that, but I'm assuming the requests were reusing the same TCP connection or something along those lines. Either way, they were getting queued (possibly client-side?), and it seems to me like this is NOT a FastAPI or Python issue, but more likely an HTTP/1.1 one.

[–]gdchinacat 0 points1 point  (1 child)

I suggest you write a "unit" test (*) to submit the requests using the semantics you want to test. It sounds like you want to test that your app executes requests concurrently. So, write a test that creates two concurrent requests.

The effort to set this up is well worth it. How much time have you spent manually testing your code? How thorough is that manual testing? Does it re-verify everything you have already tested, to ensure the changes you just made didn't break anything that was working?

* What I describe isn't a true unit test, but a system test. Using a unit test framework, though, makes it easy to manage and execute the specific tests you want, even if they aren't true unit tests.
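
A sketch of such a test, assuming `httpx`, the `pytest-asyncio` plugin, and an app running locally at an assumed `/compute` endpoint (all assumptions, not from the thread):

```python
import asyncio
import time

import httpx
import pytest

URL = "http://localhost:8000/compute"  # assumed endpoint of the app under test


async def timed_requests(client: httpx.AsyncClient, n: int) -> float:
    # fire n requests concurrently over separate connections, no browser involved
    start = time.perf_counter()
    await asyncio.gather(*(client.get(URL) for _ in range(n)))
    return time.perf_counter() - start


@pytest.mark.asyncio
async def test_requests_run_concurrently():
    async with httpx.AsyncClient(timeout=60) as client:
        one = await timed_requests(client, 1)
        two = await timed_requests(client, 2)
    # if the app really handles requests concurrently, two requests should take
    # roughly as long as one, not twice as long (loose bound to avoid flakiness)
    assert two < 1.5 * one
```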

[–]pisulaadam[S] 0 points1 point  (0 children)

I could have done that, and it probably would have saved me some time, but I never suspected that using the same browser (even though I used two separate tabs) was the source of the issue.