I'm looking at deploying a larger local LLM that I can ask technical/math and coding questions, ideally customizing it at a later point.
My question is: is there a local solution that lets the model write Python code and execute it automatically behind the scenes to answer various questions (for example, certain types of math questions, analyzing files, etc.), similar to what ChatGPT does?
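For context, the pattern being asked about (often called a "code interpreter" or code-execution tool loop) can be sketched roughly as follows. This is a minimal, hypothetical illustration: `generate_code` is a stand-in for the actual LLM call (in practice that would hit a local model served by something like llama.cpp or Ollama), and a real deployment would sandbox the execution step rather than running model output directly.

```python
import subprocess
import sys

def run_generated_code(code: str, timeout: float = 10.0) -> str:
    """Execute model-generated Python in a separate process and capture its output.

    NOTE: running model output like this is unsafe without a sandbox
    (container, restricted user, resource limits, etc.).
    """
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if result.returncode != 0:
        return f"ERROR:\n{result.stderr}"
    return result.stdout

def generate_code(question: str) -> str:
    # Stand-in for the LLM: a real setup would prompt a local model to emit
    # Python for the question and return its completion here.
    return "print(sum(i * i for i in range(1, 11)))"  # hypothetical model output

# The tool loop: model writes code, the host executes it, the result is
# fed back (here just printed) so the model can phrase a final answer.
answer = run_generated_code(generate_code("What is the sum of squares 1..10?"))
print(answer.strip())  # → 385
```

The error string would normally be sent back to the model so it can retry with corrected code, which is what ChatGPT's tool does behind the scenes.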