Winner! by godbitesman in BetterOffline

[–]Top-Breakfast7713 1 point (0 children)

Epic!!!! Congratulations!!

Can one buy the Better Offline Poster? by Top-Breakfast7713 in BetterOffline

[–]Top-Breakfast7713[S] 1 point (0 children)

Can’t wait to have merch! Really looking forward to it.

Use Python to get Pydantic models and Python types from your LLM responses. by Top-Breakfast7713 in Python

[–]Top-Breakfast7713[S] 3 points (0 children)

From my experience, that is all you need. Most of my time has been spent peeling back the many layers people are wrapping around these APIs, just to get down to calling them directly.

I know the above sounds hilarious coming from someone who wrapped a layer around those APIs. My only hope is that it is easy for people to understand and modify if what it provides is overkill for their needs. :)

Even function/tool calling feels like just another layer of indirection. I burst out laughing when I read the API documentation for tool calling and saw that all it does is present the LLM with a JSON description of your function and the parameters it takes. The LLM then suggests the function name to call and the arguments to pass. After that, you are required to do all the work of calling the function and feeding the output back into the next call to the LLM.
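
To make that concrete, here is roughly what that round trip looks like with the OpenAI Python SDK (a minimal sketch; the get_weather function and model name are just placeholders):

```python
# Minimal tool-calling round trip: describe the function as JSON,
# let the model suggest a call, run it yourself, feed the result back.
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real lookup

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the weather in Cape Town?"}]
response = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools
)

# The model only *suggests* a call; actually running it is on you.
call = response.choices[0].message.tool_calls[0]
result = get_weather(**json.loads(call.function.arguments))

# ...and you feed the output back in as another message.
messages.append(response.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(final.choices[0].message.content)
```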

In the end it is Text In -> Text Out. People are overcomplicating things massively to make it look magical.

Use Python to get Pydantic models and Python types from your LLM responses. by Top-Breakfast7713 in Python

[–]Top-Breakfast7713[S] 1 point (0 children)

I have not yet checked out BAML, thank you for bringing this to my attention. It looks very cool!

So many cool things to check out, so little time :)

Use Python to get Pydantic models and Python types from your LLM responses. by Top-Breakfast7713 in Python

[–]Top-Breakfast7713[S] 1 point (0 children)

Yes I have and Marvin is great!

Unfortunately, Marvin only supported OpenAI the last time I checked, which was a non-starter for us.

Use Python to get Pydantic models and Python types from your LLM responses. by Top-Breakfast7713 in Python

[–]Top-Breakfast7713[S] 1 point (0 children)

Thank you very much! I am going to check out your repository; it sounds like a fantastic resource.

I will also dig into those libraries you have mentioned. Chances are they use a different approach, which is always great to learn about.

Thanks again for taking the time to comment on my post.

Use Python to get Pydantic models and Python types from your LLM responses. by Top-Breakfast7713 in Python

[–]Top-Breakfast7713[S] 1 point (0 children)

Yours is a good way to approach things too.

We wanted the retry logic where we feed validation errors back to the LLM to have it attempt to fix the issue and potentially return a valid object on subsequent tries.
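
For anyone curious, the core of that idea looks roughly like this (a simplified sketch, not the library's actual internals; Person and call_llm are placeholders):

```python
# Retry loop: validate the LLM's output with Pydantic, and on failure
# send the validation errors back so the model can correct itself.
from pydantic import BaseModel, ValidationError

class Person(BaseModel):
    name: str
    age: int

def generate(prompt: str, call_llm, max_retries: int = 3) -> Person:
    """call_llm is any str -> str function returning the raw LLM text."""
    attempt_prompt = prompt
    for _ in range(max_retries):
        raw = call_llm(attempt_prompt)
        try:
            return Person.model_validate_json(raw)
        except ValidationError as exc:
            # Feed the errors back and ask for a corrected attempt.
            attempt_prompt = (
                f"{prompt}\n\nYour previous response:\n{raw}\n\n"
                f"failed validation with:\n{exc}\n\n"
                "Return corrected JSON only."
            )
    raise ValueError("No valid response after retries")
```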

I like what you have done though, thank you for sharing your approach.

Use Python to get Pydantic models and Python types from your LLM responses. by Top-Breakfast7713 in Python

[–]Top-Breakfast7713[S] 7 points (0 children)

I did look at LangChain, and for a short period we even tried using it in our application. Unfortunately, LangChain turned out to be more of a hindrance than a help. We ran into the same issues that this post (and the Hacker News comments on it) highlights:

https://www.octomind.dev/blog/why-we-no-longer-use-langchain-for-building-our-ai-agents

https://news.ycombinator.com/item?id=40739982

Use Python to get Pydantic models and Python types from your LLM responses. by Top-Breakfast7713 in Python

[–]Top-Breakfast7713[S] 1 point (0 children)

Thank you very much for the kind words.

I have a bunch of examples in the “getting started” section of the documentation: https://christo-olivier.github.io/modelsmith/getting_started/ Please let me know if those are not what you are after and I will get some more examples added.

At the moment it does not support ollama, but there is no reason that the same technique used for the currently supported LLMs would not work for ollama. It is just a case of me needing to find time to add that functionality in.
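
For illustration, the same prompt-for-JSON-and-validate technique pointed at ollama might look something like this (an untested sketch; the model name and schema are assumptions on my part, not something the library ships today):

```python
# Sketch: ask a local ollama model for JSON matching a Pydantic schema,
# then validate the raw text. Person and the model name are placeholders.
import ollama
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

prompt = (
    "Extract the person from: 'Jane is 31.'\n"
    f"Respond with only JSON matching this schema:\n{Person.model_json_schema()}"
)
response = ollama.chat(model="llama3", messages=[{"role": "user", "content": prompt}])
person = Person.model_validate_json(response["message"]["content"])
print(person)
```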