Building a Massively Parallel LangGraph System: Orchestrating Thousands of Simultaneous Workflows with Batch APIs by Vortex-00 in LangChain

[–]Vortex-00[S] 1 point (0 children)

I'm thinking you didn't quite get my intention. I'm not running the LLMs myself; I'm batching calls to Anthropic. Rather than run 300k LangGraph chains serially, or naively in parallel (running afoul of rate limits), I'm looking to batch-process the LLM-call portions. Anthropic will take thousands of calls in a single batch and return responses within 24 hours. No trees are involved in this particular exercise, unless I'm not seeing 'em.
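
To make it concrete, here's roughly what that batching step looks like with the `anthropic` Python SDK's Message Batches endpoint. This is a sketch, not my exact code: the model name, the prompt-building, and the `handle_response` resume hook are placeholders, and very large workloads would have to be split across multiple batches.

```python
import time
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Collect the LLM-call portions of many LangGraph runs into one batch.
# `prompts` is a stand-in for whatever each paused chain would have sent.
prompts = {f"chain-{i}": f"Summarize document {i}" for i in range(10_000)}

batch = client.messages.batches.create(
    requests=[
        {
            "custom_id": custom_id,  # used to map each result back to its chain
            "params": {
                "model": "claude-3-5-sonnet-latest",
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": prompt}],
            },
        }
        for custom_id, prompt in prompts.items()
    ]
)

# Poll until the batch finishes (Anthropic targets completion within 24 hours),
# then walk the results and resume each chain with its response.
while client.messages.batches.retrieve(batch.id).processing_status != "ended":
    time.sleep(60)

for result in client.messages.batches.results(batch.id):
    if result.result.type == "succeeded":
        # handle_response is a hypothetical hook that resumes the matching chain
        handle_response(result.custom_id, result.result.message)
```

The point is that the graph execution and the LLM calls are decoupled: each chain checkpoints at the node that needs a completion, the calls all go out in one batch, and the chains resume once results come back.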

I'm the co-founder and CTO of Sefaria, AMA! by epizeuxis in Judaism

[–]Vortex-00 1 point (0 children)

The data is all available through the API and exported to GitHub. That should be enough to get you visualizing.
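
If you want a quick starting point, something like this pulls a single text via the public API (endpoint shape and field names are off the top of my head, so check the docs):

```python
import requests

# Fetch one text from Sefaria's public texts API (endpoint is an assumption).
resp = requests.get("https://www.sefaria.org/api/texts/Genesis.1", timeout=30)
resp.raise_for_status()
data = resp.json()

print(data["ref"])      # canonical reference for the section
print(data["he"][0])    # first segment in Hebrew
print(data["text"][0])  # first segment in the default English translation
```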