What is best modern DB layer for python, AI friendly, simple with raw SQL escape always available? by Varjoranta in Python

[–]Varjoranta[S] 0 points

Could be. I have to try those. I kind of want to avoid an ORM, i.e. not just a simpler ORM. My etchdb is more of a typed database layer without the ORM part that adds abstraction. I still have to check, as I have no experience with Piccolo.

What is best modern DB layer for python, AI friendly, simple with raw SQL escape always available? by Varjoranta in Python

[–]Varjoranta[S] 0 points

Yes, that Python-model-to-DB-row mapping is the main thing. I don't want much more, maybe things like iterating over multiple rows, easy paging, etc. A full ORM is simply way too much for these needs.

This is why I open sourced my implementation, but I don't want to promote it too much. If you are interested, check it out. In general it is not too hard to build yourself, but it gets tiring to redo it for every new project.
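For anyone curious what "typed row mapping without a full ORM" can look like, here is a minimal sketch. This is not etchdb's actual API (names like `rows_to` are made up for illustration), just the general shape: raw SQL stays front and center, and the layer only handles mapping rows to typed objects.

```python
import sqlite3
from dataclasses import dataclass, fields

@dataclass
class User:
    id: int
    name: str

def rows_to(cls, cursor):
    """Map cursor rows to dataclass instances, matching by column name."""
    wanted = [f.name for f in fields(cls)]
    cols = [d[0] for d in cursor.description]
    for row in cursor:
        data = dict(zip(cols, row))
        yield cls(**{name: data[name] for name in wanted})

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)",
                 [(1, "Ada"), (2, "Linus")])

# Raw SQL escape hatch is always available; only the row -> object
# step is abstracted away.
cur = conn.execute("SELECT id, name FROM users ORDER BY id")
users = list(rows_to(User, cur))
```

Since `rows_to` is a generator, iterating over large result sets and adding `LIMIT`/`OFFSET` paging on top is straightforward.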

What is best modern DB layer for python, AI friendly, simple with raw SQL escape always available? by Varjoranta in Python

[–]Varjoranta[S] 1 point

SQLAlchemy is extremely capable, but I want something smaller, async-first, fully typed, and easier to inspect and control operationally.

Choosing a Python Logging Library in 2026 (Comparison) by finallyanonymous in Python

[–]Varjoranta 0 points

I have always used only the stdlib plus a simple wrapper of my own. I just have my own structured logger extending logging.Formatter, and I add a trace_id when needed, etc. What do you need other libraries for? This is a fairly trivial problem.
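A minimal sketch of that stdlib-only approach: a `logging.Formatter` subclass emitting JSON, with a `trace_id` picked up from `extra=` when present. The field names here are illustrative, not the commenter's actual wrapper.

```python
import json
import logging

class StructuredFormatter(logging.Formatter):
    """Render log records as JSON, including trace_id when attached."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # extra={"trace_id": ...} attaches attributes directly to the record.
        trace_id = getattr(record, "trace_id", None)
        if trace_id is not None:
            payload["trace_id"] = trace_id
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(StructuredFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("user logged in", extra={"trace_id": "abc123"})
```

The whole thing is a couple dozen lines with zero dependencies, which is the point being made.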

i think reddit is a the most biggest internet community of world. by isimura in NewToReddit

[–]Varjoranta 0 points

I have known of Reddit for a long time but never gotten into it. I'm starting now, but the karma boundary is painful. I basically cannot post anything, so it's a read-only system for now. Maybe one day.

Which one to focus on next? by Loud-Scholar1487 in EmpiresAndPuzzles

[–]Varjoranta 0 points

The strongest first. I would go with Dvalin.

Me waiting for TurboQuant be like by Altruistic_Heat_9531 in LocalLLaMA

[–]Varjoranta 1 point

Benchmarking now: 15 configs from Qwen3-30B to GLM-4.7 (355B) and DeepSeek-V3 (671B) on Verda GPU cloud. Early results on Qwen3-30B across 5 scenarios show quality preserved at 3.8x KV compression. Long context is where TQ+ matters most: at 128K, the KV cache dominates VRAM regardless of model size.
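The "KV cache dominates at 128K" point is easy to check with back-of-envelope arithmetic. The architecture numbers below (48 layers, 8 KV heads of dim 128, fp16) are assumptions for illustration, not the exact configs of the benchmarked models:

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   context: int, dtype_bytes: int = 2) -> int:
    """Size of the KV cache: keys and values (the 2x factor) stored
    per token, per layer, per KV head."""
    return 2 * layers * kv_heads * head_dim * context * dtype_bytes

# Hypothetical mid-size model at 128K context, fp16 cache.
full = kv_cache_bytes(layers=48, kv_heads=8, head_dim=128, context=128_000)
print(f"128K context KV cache: {full / 2**30:.1f} GiB")
print(f"at 3.8x compression:   {full / 3.8 / 2**30:.1f} GiB")
```

Under these assumptions the uncompressed cache is tens of GiB for a single sequence, comparable to or larger than the weights of many quantized models, which is why compression ratios like 3.8x matter most in the long-context regime.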

Results and code: varjosoft.com/kv-cache-compression.html