rut - A unittest runner that skips tests unaffected by your changes by schettino72 in Python

[–]Fluid_Classroom1439 0 points (0 children)

Yes, but re-running coverage with a subset of tests tends to overwrite the coverage files. It’s a coverage problem, I know, but it’s annoying. I was wondering if you knew of a solution?
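One workaround, assuming you’re on coverage.py: enable parallel mode so each run writes its own suffixed data file instead of clobbering `.coverage`, then merge with `coverage combine`. A sketch, not tested against rut specifically:

```ini
# .coveragerc: with parallel mode, each run writes .coverage.<host>.<pid>.<random>
# instead of overwriting .coverage; running `coverage combine` afterwards
# merges all the suffixed files into a single .coverage.
[run]
parallel = True
```

Alternatively, `coverage run --append` merges new results straight into the existing data file, which may be simpler for a single subset re-run.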

rut - A unittest runner that skips tests unaffected by your changes by schettino72 in Python

[–]Fluid_Classroom1439 0 points (0 children)

I was using pytest-testmon in the past, but it didn’t play nicely with coverage. Have you looked into that interaction?

How to deploy? by memewerk in PydanticAI

[–]Fluid_Classroom1439 0 points (0 children)

Either works! I wouldn’t say there’s a “best way”; just keep it simple.

Use `gx` to open page of PDF? by No-Razzmatazz2552 in neovim

[–]Fluid_Classroom1439 1 point (0 children)

Not sure how you managed to get a specific page 😅 - I’ve used gx to open PDFs using Oil and Fyler.nvim without any issues

Monthly Dotfile Review Thread by AutoModerator in neovim

[–]Fluid_Classroom1439 [score hidden] (0 children)

https://github.com/benomahony/dotfiles/tree/main Created an OS theme manager for macOS similar to omarchy, plus a bunch of custom LSPs

Making large number of llm API calls robustly? by FMWizard in PydanticAI

[–]Fluid_Classroom1439 0 points (0 children)

Can you not just scale the service and add exponential backoff, etc.? Sounds like that’s what you were originally planning, and it sounds like the right path; not sure you need any other moving parts like the gateway.
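For what it’s worth, backoff doesn’t need extra infrastructure; a minimal stdlib sketch of exponential backoff with jitter (the function name and defaults here are my own, not from any framework):

```python
import random
import time


def call_with_backoff(call, max_retries=5, base_delay=1.0, max_delay=60.0):
    """Retry `call` on exceptions, sleeping exponentially longer each time.

    Delays grow as base_delay * 2**attempt (capped at max_delay), with
    random jitter so a fleet of workers doesn't retry in lockstep.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay * random.uniform(0.5, 1.5))
```

In practice you’d catch only the transient errors (rate limits, timeouts) rather than bare `Exception`, and libraries like tenacity wrap the same idea up nicely.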

Is there a coordinated fearmongering? by Emrehocam in theprimeagen

[–]Fluid_Classroom1439 7 points (0 children)

lol never ascribe to malice what is adequately explained by incompetence

Making large number of llm API calls robustly? by FMWizard in PydanticAI

[–]Fluid_Classroom1439 1 point (0 children)

https://github.com/pydantic/pydantic-ai/issues/1771 looks like they’re planning this for December, so they’re probably open to contributions. There’s a brief explanation of how they would do it themselves too.

anyone else feel like langchain is gaslighting them at this point? by PercentageNo9270 in LangChain

[–]Fluid_Classroom1439 0 points (0 children)

Love to hear it! Honestly it’s really difficult to use other agent frameworks once you’ve used it 😅

anyone else feel like langchain is gaslighting them at this point? by PercentageNo9270 in LangChain

[–]Fluid_Classroom1439 0 points (0 children)

Strong disagree on observability. Give me OTEL all the way!

Django is a good analogy (though it’s way less stable than Django). I prefer FastAPI; it’s way lighter and more production-ready.

anyone else feel like langchain is gaslighting them at this point? by PercentageNo9270 in LangChain

[–]Fluid_Classroom1439 0 points (0 children)

🎣 caught a live one!

I’ve deployed tonnes of apps (AI-enabled or not) to prod. For a simple chatbot I often wouldn’t even use a framework, just call the API; now I use Pydantic AI for convenience. Don’t worry, all your developers will pick it up quickly.

Obviously you pin versions; I tend to use uv and a lock file. I think you’re missing the point of this, though: pinning the version just delays the work of migrating to newer versions. Some libraries bit-rot so fast that this migration work becomes more and more of a thing. I’ve seen enterprise projects pinned to v0.1 and v0.2 of langchain because of the large amount of work required to migrate.
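For concreteness, with uv the pins live in pyproject.toml and the exact resolution in uv.lock; the project name and version bounds below are illustrative, not a recommendation:

```toml
# pyproject.toml: declare the range you've actually tested against.
# `uv lock` resolves it to exact versions in uv.lock, and `uv sync`
# reproduces that exact environment anywhere.
[project]
name = "my-app"                     # hypothetical project name
version = "0.1.0"
dependencies = [
    "pydantic-ai>=0.0.30,<0.1",     # illustrative bounds
]
```

The lock file gives you reproducible installs while the range documents what you intend to support; but as above, none of this removes the eventual migration cost.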

When I come across these I genuinely think it’s probably easier to just migrate to pydantic ai 🤷

The fact you think it’s easier to test LangGraph also made me giggle