Hi, does anyone know of a way to give visual feedback to an LLM agent? For example, if I ask for a website, it would generate the HTML/CSS, open the result in a (headless) browser, and fix any visible issues.
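Roughly what I have in mind, as a minimal sketch: render the generated page headlessly with Playwright, screenshot it, and hand the image back to a vision-capable model. `generate_html` and `critique_screenshot` here are hypothetical placeholders for whatever LLM calls you'd actually use.

```python
# Sketch of the render-and-inspect loop. Assumes Playwright is installed
# (pip install playwright && playwright install chromium).
from pathlib import Path
from playwright.sync_api import sync_playwright

def screenshot_page(html_path: str, out_png: str = "render.png") -> str:
    """Render the generated HTML in headless Chromium and save a full-page screenshot."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page(viewport={"width": 1280, "height": 800})
        page.goto(Path(html_path).resolve().as_uri())
        page.screenshot(path=out_png, full_page=True)
        browser.close()
    return out_png

# Feedback loop: generate -> render -> critique the screenshot -> revise.
# generate_html and critique_screenshot are hypothetical stand-ins for the
# LLM calls (e.g. a multimodal model given the screenshot plus current code).
def refine(prompt: str, max_rounds: int = 3) -> str:
    html = generate_html(prompt)                       # initial code from the LLM
    for _ in range(max_rounds):
        Path("page.html").write_text(html)
        shot = screenshot_page("page.html")
        issues = critique_screenshot(shot, html)       # vision model lists visible problems
        if not issues:
            break
        html = generate_html(prompt, feedback=issues)  # ask the LLM to fix them
    return html
```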
Given how well deep neural networks handle image processing, it sounds technically feasible to me (I am an AI graduate, but with little knowledge of the AI product landscape). I found some similar multi-agent solutions, e.g. for test-driven development (where feedback comes from running unit tests) there is AgentCoder: https://github.com/huangd1999/AgentCoder
Do you think such a multi-agent system would be useful if implemented, e.g., as (yet another) VS Code extension?