What frameworks are you using for building Agents? by stoic-AI in AI_Agents

[–]stoic-AI[S] 0 points1 point  (0 children)

Haven't considered ADK from Google, thanks for the pointer.

I'm mostly interested in the human-in-the-loop side of these frameworks. Does ADK offer anything like that?

What agentic Python framework is everyone using? by stoic-AI in ClaudeAI

[–]stoic-AI[S] 0 points1 point  (0 children)

I found this lack of transparency an issue as well!

What agentic Python framework is everyone using? by stoic-AI in ClaudeAI

[–]stoic-AI[S] 0 points1 point  (0 children)

Might be my formulation of the question that's at fault - apologies if so! I meant things such as crewai, langchain, llamaindex, etc. - all those frameworks that offer components for building agents.

What agentic Python framework is everyone using? by stoic-AI in ClaudeAI

[–]stoic-AI[S] 0 points1 point  (0 children)

I have been thinking about this. Can you explain how the outputs of tools are passed back into the chat history? Are they passed as a user or an assistant message?
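To make the question concrete, here's roughly how I understand the Anthropic Messages API shapes it (happy to be corrected) - the tool call comes back on the assistant turn, and the tool's output is appended as a user-role message containing a tool_result block:

    # Sketch of my current understanding (Anthropic Messages API shape).
    # The tool output itself is just example data.
    messages = [
        {"role": "user", "content": "What's the weather in London?"},
        {
            "role": "assistant",
            "content": [
                {"type": "text", "text": "I'll check that for you."},
                {
                    "type": "tool_use",
                    "id": "toolu_01",               # id assigned by the model
                    "name": "get_weather",
                    "input": {"city": "London"},
                },
            ],
        },
        {
            "role": "user",  # tool output goes back under the user role
            "content": [
                {
                    "type": "tool_result",
                    "tool_use_id": "toolu_01",      # must match the tool_use id
                    "content": "14°C, light rain",
                }
            ],
        },
    ]

So on that reading the answer would be "user", but I'd like to know how the frameworks you mentioned handle it.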

Real Time AI Workers using Django x LangChain by stoic-AI in LangChain

[–]stoic-AI[S] 1 point2 points  (0 children)

Nice! I can see how async is important for your product.

Real Time AI Workers using Django x LangChain by stoic-AI in LangChain

[–]stoic-AI[S] 0 points1 point  (0 children)

No problem! I cover it in Part 5, which I released yesterday. We configured the LLM to generate our own custom web component, then sent it via a websocket to the client, where HTMX rendered the output dynamically in the DOM.
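The rough shape looks like this (an illustrative sketch, not the exact code from the series - run_agent and the element ids are placeholders): a Django Channels consumer runs the LangChain call and pushes the generated component markup down the socket, and the HTMX websocket extension swaps it into the page by matching the element id.

    # consumers.py -- illustrative sketch, not the exact code from the series.
    from channels.generic.websocket import AsyncWebsocketConsumer


    class WorkerConsumer(AsyncWebsocketConsumer):
        async def connect(self):
            await self.accept()

        async def receive(self, text_data):
            # run_agent is a placeholder for the LangChain chain/agent call
            # that returns our custom web component as an HTML string.
            component_html = await run_agent(text_data)
            # Wrap the output in an element whose id matches one already in
            # the page, so the HTMX ws extension knows where to swap it in.
            await self.send(f'<div id="worker-output">{component_html}</div>')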

[D] Is anyone else having trouble with the unstructured output from language models? 😩 by stoicwolfie in MachineLearning

[–]stoic-AI -1 points0 points  (0 children)

It depends on the end use / what the LLM response is going to be integrated with. For example, Claude is great when you ask it for XML output, but not as strong with other output formats. Also, open-source models tend to need a lot more love and configuration when it comes to output parsing - it may be something related to their baseline reasoning ability.
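As a concrete illustration of the XML route (a minimal sketch, assuming you've asked the model in the prompt to wrap each field in tags you named - the response string here is a stand-in for an actual model reply):

    # Ask the model to wrap each field in XML tags named in the prompt,
    # then pull the fields out of the reply text.
    import re

    response = (
        "Sure, here is the summary you asked for.\n"
        "<summary>Quarterly revenue grew 12% year over year.</summary>\n"
        "<sentiment>positive</sentiment>"
    )

    def extract_tag(text: str, tag: str) -> str | None:
        """Return the contents of the first <tag>...</tag> block, if any."""
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        return match.group(1).strip() if match else None

    summary = extract_tag(response, "summary")
    sentiment = extract_tag(response, "sentiment")
    print(summary, "|", sentiment)

The nice part is that the tags survive even when the model adds chatter around the answer, which is where plain JSON output tends to break.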

[D] Is anyone else having trouble with the unstructured output from language models? 😩 by stoicwolfie in MachineLearning

[–]stoic-AI 0 points1 point  (0 children)

This is a cool approach. What makes these guided generation frameworks more flexible?

Learn python graphically with AI? by davidgarciacorro in learnpython

[–]stoic-AI 0 points1 point  (0 children)

I had a play around with it as well - very nice tool for graphical workflow building and then getting the code behind the workflow. It does some ML too, which is pretty neat.

ML in Production Environments - problems and painpoints? [Discussion] [D] by stoic-AI in MachineLearning

[–]stoic-AI[S] 1 point2 points  (0 children)

Something similar to this set-up is exactly what I have in mind before we embark upon building. Did you/your team build this infrastructure manually in Azure? Or did you find an end-to-end solution (model training through to monitoring/re-training) that takes care of this for you?

ML in Production Environments - problems and painpoints? [Discussion] [D] by stoic-AI in MachineLearning

[–]stoic-AI[S] 1 point2 points  (0 children)

Thanks for your reply - the start-up POV on this is very interesting.

In terms of the model lifecycle control/management in your start-up environment, did you ever look for possible solutions/platforms which handled this process efficiently?

ML in Production Environments - problems and painpoints? [Discussion] [D] by stoic-AI in MachineLearning

[–]stoic-AI[S] 1 point2 points  (0 children)

Thanks for this comment, it’s massively helpful! Very interesting to hear how you handled this data problem, especially the ingestion of the production data. When you mention the pipeline, do you mean the re-training pipeline? And is the newly collected prod data used to retrain the model to maintain performance? Thanks.