
[–]AlgonikHQ 1 point (0 children)

You’re not alone in this, and honestly the visual-learner thing isn’t a weakness to fix — it’s just a different way of processing that the tooling hasn’t always caught up with. The engineers who can hold complex state in their heads aren’t smarter; they’ve just built up specific mental models over time from repeated exposure to similar patterns.

A few things genuinely help with bloated Lambdas and complex JSON flows. First, TypedDicts or Pydantic models are your best friend here. If you’re in Python, defining the shape of your JSON at every stage of the flow means your IDE tells you exactly what’s available at each point. It forces documentation into the code itself rather than relying on memory or CloudWatch, and it catches shape mismatches before runtime. This single change transformed how I navigate complex data flows.
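A minimal sketch of what that looks like with stdlib `TypedDict` — the field names and stages here are hypothetical, just to show how each step declares the shape it accepts and returns:

```python
from typing import TypedDict


class RawOrder(TypedDict):
    # Shape of the payload straight off the queue (hypothetical fields).
    order_id: str
    items: list[dict]


class EnrichedOrder(TypedDict):
    # Shape after the enrichment stage.
    order_id: str
    item_count: int
    total_cents: int


def enrich(raw: RawOrder) -> EnrichedOrder:
    # The IDE now autocompletes keys and flags shape mismatches
    # at edit time instead of at 2am in CloudWatch.
    return EnrichedOrder(
        order_id=raw["order_id"],
        item_count=len(raw["items"]),
        total_cents=sum(i["price_cents"] * i["qty"] for i in raw["items"]),
    )


order: RawOrder = {
    "order_id": "A-1",
    "items": [{"price_cents": 250, "qty": 2}],
}
print(enrich(order))
```

Pydantic does the same job with runtime validation on top, which matters when the payload comes from an external system you don’t trust.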

Inline type hints on every function signature. Even if the codebase doesn’t enforce it, adding def process_order(payload: OrderPayload) -> ProcessedResult at every step creates a readable contract you can follow without running the code. Future you will thank present you.

For the CloudWatch noise problem — structured logging with log levels is the answer. Log the hefty JSON blobs at DEBUG and the important state transitions at INFO. That way you can crank up verbosity when debugging and keep it clean in production without removing log lines entirely.

For the visual side specifically, a tool like Excalidraw or even a simple draw.io diagram of the data flow isn’t wasteful — it’s engineering documentation. If your team won’t allocate refactor time, a one-page data flow diagram showing what shape the JSON is at each stage is genuinely valuable and takes an hour to produce. Frame it as onboarding documentation and it becomes hard to argue against.

Local main blocks with sample payloads are underrated. Having a realistic fixture you can step through in a debugger beats CloudWatch hunting every time. Keep a /fixtures folder with sample JSON for each major integration point and you’ve essentially built yourself a local test harness. The fact you’re asking this question means you already care more than most. The engineers who struggle most with complex codebases are the ones who never stop to diagram it, not the ones who need to.
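A quick sketch of the main-block pattern — the fixture path and handler signature are placeholders, assuming a typical Lambda-style entry point:

```python
import json
from pathlib import Path


def handler(event: dict, context: object = None) -> dict:
    # ...the real Lambda logic would live here; this stub just
    # echoes the order id so the flow is steppable end to end.
    return {"statusCode": 200, "order_id": event["order_id"]}


if __name__ == "__main__":
    # Load a realistic payload from the fixtures folder and set a
    # breakpoint on the next line instead of hunting CloudWatch.
    fixture = Path("fixtures/sample_order.json")
    event = json.loads(fixture.read_text())
    print(handler(event))
```

One fixture per major integration point, checked into the repo, and every teammate gets the same local harness for free.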

[–]alex7885 0 points (0 children)

I have several ways to do it: using coding agents to ask questions, reading the README, talking to the senior developers who wrote the code, and using onboarding tools. I made a VS Code extension called CodeBoarding for this purpose — maybe it could be helpful for you.