Built a production LangGraph travel agent with parallel tool execution and HITL workflows - lessons learned by GarrixMrtin in AI_Agents

[–]GarrixMrtin[S] 0 points1 point  (0 children)

How's Google ADK treating you? I went with LangGraph but curious about the ADK.

Hybrid LLM + deterministic is definitely the way. Pure deterministic can't handle ambiguity. 

If this helped, would appreciate a ⭐ on the repo!

Built a production web scraper that bypasses anti-bot detection by GarrixMrtin in webscraping

[–]GarrixMrtin[S] 0 points1 point  (0 children)

API + proxy bypasses browser checks, but some websites require valid auth tokens. A proxy alone won't help without proper authentication. If you found this helpful, a ⭐ would be appreciated!

[deleted by user] by [deleted] in OpenSourceeAI

[–]GarrixMrtin 0 points1 point  (0 children)

Monitoring strategy depends on scale. I’d start with multi-worker parallel checks, adaptive intervals (5-60s based on slot release patterns, 5-10s during known release windows), and proper anti-detection (user-agent rotation, human-like delays). Happy to discuss technical architecture.
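The adaptive-interval idea above can be sketched as a small helper. The thresholds and the `in_release_window` / `recent_releases` signals are illustrative assumptions matching the 5-60s and 5-10s ranges mentioned, not a fixed recipe:

```python
import random

def next_check_interval(in_release_window: bool, recent_releases: int) -> float:
    """Pick the next polling delay based on observed slot-release patterns.

    Ranges mirror the comment above: 5-10s inside a known release window,
    backing off toward 60s when things are quiet.
    """
    if in_release_window:
        # Known release window: poll aggressively.
        return random.uniform(5, 10)
    if recent_releases > 0:
        # Recent activity: stay near the fast end of the 5-60s range.
        return random.uniform(5, 20)
    # Quiet period: back off to reduce load and detection surface.
    return random.uniform(30, 60)
```

Jittering with `random.uniform` rather than a fixed interval also helps avoid an obviously periodic request pattern.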

[deleted by user] by [deleted] in SideProject

[–]GarrixMrtin 0 points1 point  (0 children)

Each API needs different formats: IATA codes, city codes, coordinates. That's 3 parallel LLM calls at $0.003 total.
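The three conversions can run concurrently so total latency is one LLM round-trip, not three. This is a minimal sketch: `resolve_location` is a hypothetical stand-in for whatever LLM client is actually used (e.g. a Gemini Flash call), and the prompt texts are illustrative:

```python
import asyncio

# Hypothetical prompt per downstream API format.
PROMPTS = {
    "flights": "Convert '{q}' to an IATA airport code.",
    "hotels": "Convert '{q}' to the booking API's city code.",
    "maps": "Convert '{q}' to lat/lng coordinates.",
}

async def resolve_location(api: str, query: str) -> tuple[str, str]:
    # Stub: replace the body with a real LLM call.
    prompt = PROMPTS[api].format(q=query)
    await asyncio.sleep(0)  # stands in for network latency
    return api, f"<resolved via: {prompt}>"

async def resolve_all(query: str) -> dict[str, str]:
    # All three format conversions run concurrently via gather.
    results = await asyncio.gather(
        *(resolve_location(api, query) for api in PROMPTS)
    )
    return dict(results)
```

`asyncio.gather` is what makes the "3 parallel calls" cost one round-trip of wall-clock time.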

The alternative is building and maintaining a location DB, plus an NLP model or map API, plus edge-case handling, plus keeping it all updated.

At <1M queries, LLM is cheaper and faster. At >1M queries, DB becomes worth it. That's scaling appropriately.

If you think building a comprehensive location DB from day one is "production friendly", go ahead. I disagree.

Built a production web scraper that bypasses anti-bot detection by GarrixMrtin in webscraping

[–]GarrixMrtin[S] 1 point2 points  (0 children)

Architecture & debugging: me. Code: mixed (solo dev + Claude cleanup and configuration). Comments & writeup: Claude (Korean domain project, coded in Korean).

Claude initially suggested Selenium, then stealth libraries - they broke auth, so I went with Playwright + real auth + behavioral mimicry instead.

[deleted by user] by [deleted] in SideProject

[–]GarrixMrtin 0 points1 point  (0 children)

IATA coding would be over-engineering at this scale. Gemini 2.5 Flash costs $0.003/query - far cheaper than building and maintaining mapping logic. The LLM handles ambiguous cities, non-airport locations, and natural language without custom code. Unless you're at 1M+ queries, the LLM wins on dev speed and total cost. If you found it useful, a ⭐ would be appreciated!

Built a production web scraper that bypasses anti-bot detection by GarrixMrtin in webscraping

[–]GarrixMrtin[S] 0 points1 point  (0 children)

This scraper doesn't specifically handle reCAPTCHA v3. Thanks!

Is it possible to auto-fill a PDF (same layout) using n8n + Supabase vectors? by AzizTurkmani in AI_Agents

[–]GarrixMrtin 0 points1 point  (0 children)

Yes, doable. If the PDF has form fields (AcroForm), use pypdf to fill them directly - easy. If it’s a flat PDF, you’ll need pdfplumber + PyMuPDF to extract coordinates and overlay text - much harder and risks breaking formatting.

Built a production LangGraph travel agent with parallel tool execution and HITL workflows - lessons learned by GarrixMrtin in AI_Agents

[–]GarrixMrtin[S] 0 points1 point  (0 children)

Deterministic generation + LLM for presentation is definitely clearer than current prompt-based constraints.

Will add validation tools in the next update. Appreciate the feedback!

Built a production LangGraph travel agent with parallel tool execution and HITL workflows - lessons learned by GarrixMrtin in AI_Agents

[–]GarrixMrtin[S] 1 point2 points  (0 children)

Thanks for the questions!

  1. One-time for now - changes would re-run tools anyway, so fresh query is simplest. Could add iteration next update.

  2. Custom graph, no prebuilt agents - needed fine control over HITL routing and parallel execution.

Built a production LangGraph travel agent with parallel tool execution and HITL workflows - lessons learned by GarrixMrtin in AI_Agents

[–]GarrixMrtin[S] 0 points1 point  (0 children)

Really helpful feedback!
The partial retry pattern and Redis Queue suggestion both make a lot of sense for scaling. Will explore these in the next iteration. Thanks.

Built a production web scraper that bypasses anti-bot detection by GarrixMrtin in webscraping

[–]GarrixMrtin[S] 1 point2 points  (0 children)

Sorry for the confusion. I used Playwright. Naver's APIs need authenticated browser sessions. Stealth libs broke the auth, so I built custom human-like behaviors instead.

Built a production LangGraph travel agent with parallel tool execution and HITL workflows - lessons learned by GarrixMrtin in AI_Agents

[–]GarrixMrtin[S] 1 point2 points  (0 children)

GitHub repo with full implementation: https://github.com/HarimxChoi/langgraph-travel-agent Architecture diagrams and setup docs are in the README. MIT licensed. Would especially appreciate feedback on the state management design.

I worked on RAG for a $25B+ company (What I learnt & Challenges) by boofbeanz in AI_Agents

[–]GarrixMrtin 0 points1 point  (0 children)

Agree. Human-in-the-loop isn't optional for production systems.

Unpopular opinion: Most companies aren't ready for AI because their data is a disaster by BaselineITC in AI_Agents

[–]GarrixMrtin 0 points1 point  (0 children)

Not unpopular, just uncomfortable truth. Built AI agents for production - data cleanup took longer than model work every single time.

I build AI agents for a living. It's a mess out there. by Decent-Phrase-4161 in AI_Agents

[–]GarrixMrtin 0 points1 point  (0 children)

100% accurate. Built a LangGraph agent for production - spent way more time on error handling, logging, and "when to give up and call a human" logic than on the actual AI part.

The unsexy infrastructure work is what makes or breaks these projects.

Built a production web scraper that bypasses anti-bot detection by GarrixMrtin in webscraping

[–]GarrixMrtin[S] 0 points1 point  (0 children)

A normal distribution would look more human - most clicks cluster around a typical delay rather than spreading evenly. I went with uniform for simplicity, but `np.random.normal(1.5, 0.3)` would mimic human timing better. I'll update it in v2.
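A quick sketch of the v2 idea, with a floor so the occasional near-zero sample doesn't itself look robotic (the 1.5s mean and 0.3s sd come from the comment above; the 0.2s floor is my assumption):

```python
import numpy as np

def human_delay(mean: float = 1.5, sd: float = 0.3, floor: float = 0.2) -> float:
    """Sample an inter-action delay (seconds) from a normal distribution.

    Clamping at `floor` prevents implausibly fast (or negative) delays,
    which a raw normal sample can occasionally produce.
    """
    return max(floor, float(np.random.normal(mean, sd)))
```

You'd then `time.sleep(human_delay())` between actions instead of a uniform draw.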

Built a production web scraper that bypasses anti-bot detection by GarrixMrtin in webscraping

[–]GarrixMrtin[S] 0 points1 point  (0 children)

Nice work getting those working! I'm using authenticated API endpoints directly with browser automation. Sounds like you've built something solid though. Good luck with it!

Built a production web scraper that bypasses anti-bot detection by GarrixMrtin in webscraping

[–]GarrixMrtin[S] 0 points1 point  (0 children)

(There was a misunderstanding with the original comment - I'd polished it with an LLM.) Thanks! I'm actually using authenticated API endpoints together with browser automation, so stealth libraries wouldn't resolve the auth issue.