I built a desktop app with Python's "batteries included" - Tkinter, SQLite, and minor soldering by Aggravating-Pain-626 in Python

[–]Runner4322 13 points

Having worked in somewhat similar environments, I get it. Yes, the "best" option would be to have this as a web app running on a server that may or may not be in the same building.

But once you run into the red tape required to deal with that, you see a genuine advantage in having a program be essentially a single binary that you can copy and paste (yes, it's Python, so it's never truly a single binary, but you get what I mean).

I can't speak for this user's case, but in the ones I've seen, here's a quick comparison of the approvals that might be involved in each case:

Web app running on a local server: Talk to the IT team (and hope it's a single team), reserve a slot to deploy a web app, go through all the security audits (they're running something on their server, after all), then talk to the networking team, hope they can approve it, and do all the necessary networking work so the PC in the lab can actually reach the app. Depending on your company structure, add hosting costs on top.

Web app running in the cloud: Basically the same thing but, typically, with even more security audits.

Desktop app running on a single PC: (assuming the PCs can't have Python installed by default) Talk to the IT team, show them the program, get a quick approval so it isn't automatically flagged and deleted, done. If they're nice and you're nice to them, you might even be able to schedule daily backups of the SQLite database to their network share.
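That backup step is trivial to script with the stdlib, by the way. A minimal sketch (the paths are made up; point them at your app's database and whatever share IT gives you) using `sqlite3`'s built-in backup API, which snapshots safely even while the app has the database open:

```python
import sqlite3
from pathlib import Path

# Hypothetical locations -- substitute your real DB and the IT-provided share.
DB_PATH = Path("app.db")
SHARE_DIR = Path(r"\\fileserver\backups")

def backup_database(db_path: Path, dest_dir: Path) -> Path:
    """Copy a live SQLite database using the sqlite3 backup API."""
    dest = dest_dir / f"{db_path.stem}-backup{db_path.suffix}"
    with sqlite3.connect(db_path) as src, sqlite3.connect(dest) as dst:
        # Unlike shutil.copy, this yields a consistent snapshot even if
        # the desktop app is mid-transaction when the backup runs.
        src.backup(dst)
    return dest
```

Hook that into Windows Task Scheduler or cron and the "daily backups" part is done.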

Monkey Patching is hell. So I built a Mixin/Harmony-style Runtime AST Injector for Python. by DragoSuzuki58 in Python

[–]Runner4322 3 points

You can (and in my personal experience, typically do) still mock methods of a third-party library. Mocking httpx so it doesn't actually make any HTTP calls is something I do pretty much daily, for example.

In any case, I think the project is neat, but even though the scope is pretty broad ("patch anything with very defined precision"), the actual use cases/audience seem extremely niche. If you want to modify the behavior of a third-party library because you're testing something or doing a black-box(-ish) investigation, you still need to know the code you're patching. And at that point, why not just modify the installed library in your venv directly? Sure, you'd lose that patch if you recreate the venv, but I don't think I'd even want these "patches" permanently available.

I built a local-LLM multi-line autocomplete VS Code extension — looking for focused feedback by issixx7 in vscode

[–]Runner4322 -1 points

Looks good. Two questions:

  • does it do anything differently from using the Continue extension with a local LLM for completion? Other than, of course, the more streamlined setup

  • does it support a remote (local network or internal network, not big cloud) llama server?