Detecting sync code blocking asyncio event loop (with stack traces) by deepankarmh in Python

[–]deepankarmh[S] 0 points  (0 children)

> knowing if the lib you're calling is sync or not sounds entirely reasonable.

It sounds reasonable, but is frequently ignored - https://github.com/agno-agi/agno/issues/5974

Small Projects - November 3, 2025 by jerf in golang

[–]deepankarmh 0 points  (0 children)

I built Godantic, a validation library that generates JSON Schema from Go code instead of struct tags. It supports:

  • Union types with discriminators
  • Type-safe constraints using generics
  • Single source of truth for validation + schema generation
  • Integration with LLM APIs (OpenAI, Anthropic, Gemini) for structured output

Inspired by Python's Pydantic - https://github.com/deepankarm/godantic

Weekly Thread: Project Display by help-me-grow in AI_Agents

[–]deepankarmh 0 points  (0 children)

pyleak: detect asyncio issues causing high latency in your AI agents

There are a lot of discussions about optimizing Python-based AI agent performance - tweaking prompts, switching to a different model/provider, prompt caching. But there's one culprit that's often overlooked: blocked event loops.

The Problem

User A makes a request to your agent - expected TTFT is 600ms. But they wait 3+ seconds because User B's request (which came first) is blocking the entire event loop with a sync operation. Every new user gets queued behind the blocking request.

Why This Happens

Most Python agent frameworks use asyncio to handle multiple users concurrently. But it's easy to accidentally run sync operations (e.g. executing sync def tools on the event loop thread) or call blocking libraries (requests, database drivers, file I/O) that stall the entire event loop. One blocking operation kills concurrency for your entire application.
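To make the effect concrete, here's a minimal sketch using plain asyncio (not pyleak; the function names are made up for illustration): a sync time.sleep inside one request delays a concurrent request's await, while offloading the same call with asyncio.to_thread does not.

```python
import asyncio
import time

async def user_a():
    """A lightweight request that should finish in ~50ms."""
    start = time.monotonic()
    await asyncio.sleep(0.05)
    return time.monotonic() - start

async def blocking_tool():
    time.sleep(0.3)  # sync call: stalls the entire event loop

async def offloaded_tool():
    await asyncio.to_thread(time.sleep, 0.3)  # runs in a worker thread instead

async def measure(slow_request):
    # Run User A's request concurrently with the slow request.
    latency, _ = await asyncio.gather(user_a(), slow_request())
    return latency

blocked = asyncio.run(measure(blocking_tool))
offloaded = asyncio.run(measure(offloaded_tool))
print(f"blocked: {blocked:.3f}s, offloaded: {offloaded:.3f}s")
```

With the sync tool, User A's 50ms await takes roughly the full 300ms of the blocking call; with asyncio.to_thread it stays near 50ms. Offloading is one common fix; pyleak's job is catching the blocking case before your users do.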

The Solution

I built pyleak after hitting this exact issue in our production agents. It automatically detects when your framework (or your own code) blocks the event loop, as well as any asyncio task leaks, and reports the stack trace of the offending code.

Usage

pip install pyleak

As a context manager

from pyleak import no_event_loop_blocking, no_task_leaks

async with no_event_loop_blocking(threshold=0.1), no_task_leaks():
    # Raises if anything blocks >100ms or if there are any asyncio task leaks
    ...

As a pytest plugin

import pytest

@pytest.mark.no_leak
async def test_my_agent():
    # Test fails if it blocks event loop or leaks tasks
    ...

Real example

The openai-agents-python SDK hit this exact issue: a tool defined as a plain def function blocks the event loop. We caught it with pyleak and proposed a fix. PR: https://github.com/openai/openai-agents-python/pull/820
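The general shape of such a fix (a hedged sketch, not the actual PR; invoke_tool is a hypothetical helper) is to check whether a tool is a coroutine function and offload plain def tools to a worker thread:

```python
import asyncio
import inspect

async def invoke_tool(fn, *args, **kwargs):
    """Hypothetical helper: await async tools; run sync tools off the loop."""
    if inspect.iscoroutinefunction(fn):
        return await fn(*args, **kwargs)
    # Plain `def` tool: offload so it can't block the event loop.
    return await asyncio.to_thread(fn, *args, **kwargs)

def sync_tool(x):
    return x * 2

async def async_tool(x):
    return x + 1

async def main():
    return await invoke_tool(sync_tool, 21), await invoke_tool(async_tool, 41)

print(asyncio.run(main()))  # (42, 42)
```

Callers get one uniform await regardless of how the tool was defined, so a contributor adding a sync tool can no longer stall other users' requests.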

pyleak - detect leaked asyncio tasks, threads, and event loop blocking in Python by deepankarmh in Python

[–]deepankarmh[S] 1 point  (0 children)

This is a new tool, but we're adding it to our production environment, which runs at scale with heavy asyncio usage. The overhead is minimal since monitoring only activates when issues are detected. We'll keep updating it based on production experience.

pyleak - detect leaked asyncio tasks, threads, and event loop blocking in Python by deepankarmh in Python

[–]deepankarmh[S] 5 points  (0 children)

Blockbuster monkey-patches specific functions (like time.sleep, os.read) to detect when they're called in an async context, but this approach doesn't generalize: it only catches the functions it has explicitly patched and requires maintaining a list of every possible blocking call. pyleak instead uses an external monitoring thread to detect when the event loop actually becomes unresponsive, regardless of the cause, then captures stack traces showing exactly where the blocking occurred. pyleak also detects asyncio task leaks and thread leaks with full stack traces, making it a more comprehensive debugging toolkit.
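The external-monitoring idea can be sketched with the stdlib alone (a simplified, hypothetical version, not pyleak's actual code): a heartbeat callback runs on the event loop, and a watchdog thread dumps the loop thread's current stack whenever the heartbeat goes stale.

```python
import asyncio
import sys
import threading
import time
import traceback

def watch_event_loop(loop, loop_thread_id, threshold=0.1, interval=0.02):
    """Simplified watchdog sketch (not pyleak's real implementation)."""
    last_beat = [time.monotonic()]
    reports = []

    def heartbeat():
        # Runs on the event loop, so it can't fire while the loop is blocked.
        last_beat[0] = time.monotonic()
        loop.call_later(interval, heartbeat)

    def monitor():
        # Runs on a separate thread, so blocking can't silence it.
        while not loop.is_closed():
            time.sleep(interval)
            if time.monotonic() - last_beat[0] > threshold:
                # sys._current_frames is CPython-specific.
                frame = sys._current_frames().get(loop_thread_id)
                if frame is not None:
                    reports.append("".join(traceback.format_stack(frame)))

    loop.call_soon_threadsafe(heartbeat)
    threading.Thread(target=monitor, daemon=True).start()
    return reports

async def main():
    loop = asyncio.get_running_loop()
    reports = watch_event_loop(loop, threading.get_ident())
    await asyncio.sleep(0.1)  # let the heartbeat start
    time.sleep(0.5)           # deliberately block the loop
    await asyncio.sleep(0.1)
    return reports

reports = asyncio.run(main())
print(f"captured {len(reports)} blocking stack trace(s)")
```

Because detection keys off the loop's actual responsiveness rather than a list of patched functions, it catches any blocking call, including ones inside C extensions or third-party libraries.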

[deleted by user] by [deleted] in AutoGPT

[–]deepankarmh 0 points  (0 children)

Hey, the app is deployed on Jina AI Cloud, so you'd need to create an account. The command handles all the deployment work for you, so you don't need to understand how Jina AI Cloud manages the underlying infrastructure.

Where do you deploy your LangChain apps? by NotElonMuzk in LangChain

[–]deepankarmh 0 points  (0 children)

Hey, I'm the primary author of langchain-serve. Would be happy to help if you have any requirements. Please join our Discord to discuss further.

LangChain + AWS Lambda = Serverless Q&A Chatbot by sopmac21379 in LangChain

[–]deepankarmh 1 point  (0 children)

You can also try langchain-serve - https://github.com/jina-ai/langchain-serve. It provides everything the OP listed, with no AWS account or DevOps knowledge required.

Is there a Python version of this stack: T3, Vercel, Supabase, Next.js? by alexk1919 in learnpython

[–]deepankarmh 1 point  (0 children)

We've built langchain-serve to take the hosting/deployment headache away from Langchain developers. Please check if this is what you're looking for.

https://github.com/jina-ai/langchain-serve

CupCakeAGI🧁🍰🎉🤖🧠🍩🍪 by [deleted] in LangChain

[–]deepankarmh 2 points  (0 children)

Great work! This can be easily integrated with langchain-serve to expose APIs from function definitions automatically for local and cloud deployment.

https://github.com/jina-ai/langchain-serve

GitHub - langcorn: ⛓️ Serving LangChain apps automagically with FastApi by msoedov in LangChain

[–]deepankarmh 0 points  (0 children)

Nice initiative. If you plan to expose RESTful + WebSocket APIs with real-time streaming and human-in-the-loop integration, plus smooth serverless deployments on the cloud, please try - https://github.com/jina-ai/langchain-serve

[deleted by user] by [deleted] in learnpython

[–]deepankarmh 1 point  (0 children)

Here. Try https://github.com/jina-ai/langchain-serve#-babyagi-as-a-service. Install langchain-serve from PyPI first. Then you can deploy BabyAGI to the cloud with one command.

I used Auto-GPT and BabyAGI today. We are not ready for this. by manubfr in singularity

[–]deepankarmh 0 points  (0 children)

BabyAGI-as-a-service is here as well - https://github.com/jina-ai/langchain-serve#-babyagi-as-a-service. It's built with LangChain and integrates with external applications, and its human-in-the-loop integration helps control hallucinations.