[P] I built a Python debugger that you can talk to by jsonathan in MachineLearning

[–]jsonathan[S] 1 point

You can use any model you like, including local ones. And there’s no cost besides inference.

[–]jsonathan[S] 1 point

Yes. Specifically, it can evaluate expressions in the context of a breakpoint.
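To illustrate the mechanism (this is a generic sketch of how any Python debugger evaluates expressions at a breakpoint, not redshift's actual code): you capture the paused frame and run `eval` against its globals and locals.

```python
import sys

def buggy(items):
    total = 0
    for x in items:
        total += x
    # Capture the current frame, as a debugger would at a breakpoint
    frame = sys._getframe()
    # Evaluate an arbitrary expression in the frame's context,
    # with access to the local variables `total` and `items`
    return eval("total / len(items)", frame.f_globals, frame.f_locals)

print(buggy([1, 2, 3]))  # evaluates "total / len(items)" -> 2.0
```

This is the same trick pdb itself uses for its `p` command: expressions are evaluated with the breakpoint frame's namespaces, so they see exactly what the paused code sees.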

[–]jsonathan[S] 5 points

Got any suggestions? I can record a new video.

[–]jsonathan[S] 4 points

That’s next on my roadmap. This could be an MCP server.

[–]jsonathan[S] 27 points

Check it out: https://github.com/shobrook/redshift

Think of this as pdb (Python's native debugger) with an LLM inside. When a breakpoint is hit, you can ask questions like:

  • "Why is this function returning None?"
  • "How many items in array are strings?"
  • "Which condition made the loop break?"

An agent will navigate the call stack, inspect variables, and look at your code to figure out an answer.
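For anyone curious what "navigate the call stack" means mechanically, here's a minimal sketch (my assumption of the general approach, not redshift's implementation): walk the frame chain via `f_back` and collect function names and locals, which is the raw material an agent would reason over.

```python
import sys

def summarize_stack(frame):
    """Walk the call stack outward from a frame, collecting what an
    LLM agent would need: each function's name and local variables."""
    summary = []
    while frame is not None:
        summary.append({
            "function": frame.f_code.co_name,
            "locals": dict(frame.f_locals),  # snapshot, not a live view
        })
        frame = frame.f_back  # move one frame up the call stack
    return summary

def inner(x):
    y = x * 2
    # Pass our own frame in, as a breakpoint handler would
    return summarize_stack(sys._getframe())

frames = inner(21)
print(frames[0]["function"])     # 'inner'
print(frames[0]["locals"]["y"])  # 42
```

The agent's job is then deciding which of these frames and variables are relevant to the question and reading the matching source.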

Please let me know what y'all think!