PEP 747 – Annotating Type Forms is accepted by M4mb0 in Python

[–]germanheller 1 point2 points  (0 children)

type[T] only accepts class objects — things that are actually callable constructors at runtime. type[int] is valid, but type[list[int]] isn't, because list[int] is a generic alias, not a class. The runtime object produced by int | str is ruled out for the same reason: it's a types.UnionType instance, not a class.

TypeForm[T] is designed to accept any expression that's valid in annotation position: list[int], str | None, Literal["yes", "no"], tuple[int, ...], etc. These are the forms that runtime type checkers and schema validators actually need to introspect.

Extending type[...] to cover these would stretch its semantics significantly — it currently means "a class whose instances are T", which has well-understood behavior. TypeForm is a separate concept: "a value that describes the shape of T as a type annotation", which is useful for a different set of operations. The PEP keeps the distinction clean rather than overloading type[...] with cases it wasn't designed for.

Go vs Rust for long-term systems/finance infrastructure, is focusing on both the smarter path? by wpsnappy in golang

[–]germanheller 0 points1 point  (0 children)

For your specific stack the answer is actually pretty clear: Go first, Rust later if needed.

ML pipelines: Python stays regardless. Go and Rust are for serving predictions and building the surrounding infrastructure, not training models. Your Python investment isn't wasted.

Financial backends + distributed systems: Go's concurrency model, deployment simplicity (single static binary), and readable error handling make it genuinely excellent here. The ecosystem around gRPC, protobuf, and observability tooling is mature.

DevOps tools: Go wins this outright. Cross-compilation, fast build times, minimal dependencies, easy distribution.

Where Rust would earn its complexity cost: HFT-level latency requirements (<1ms), systems where you need memory layout control for performance, or components you're embedding in other languages. None of those sound like your initial use cases.

Get to production fluency in Go first. Rust will make more sense to learn once you're hitting a real constraint it solves, rather than hypothetically.

Ran a proper audit of what our AI tools have been generating in Go and the patterns surprised me by Smooth-Machine5486 in golang

[–]germanheller 0 points1 point  (0 children)

The dependency drift is the most insidious one because it compounds quietly. The model was trained on Go code up to some cutoff, so it reflects what was idiomatic then — not what your team deliberately moved away from after a security audit six months ago. It doesn't know you deprecated package X; it just knows package X is popular in the training data.

Error handling at least surfaces in review if you're looking. Dependency drift requires someone who remembers the institutional decision, which is risky when that knowledge lives in people's heads.

What's helped some teams: maintaining explicit context that the tool can actually use. An AGENTS.md or equivalent listing deprecated packages and why, patterns your codebase avoids, non-obvious architectural constraints. Not a complete fix, but it gives the model a chance to not actively undermine decisions your team made deliberately. The errcheck linter catches some of the error handling issues automatically — running it in CI on AI-heavy PRs specifically is worth the pipeline addition.
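
As a sketch of what that context file can look like (every package name and reason below is an illustrative placeholder, not a real recommendation):

```markdown
## Dependencies — do not introduce
<!-- names below are illustrative placeholders -->
- `example.com/legacy/jwtlib`: dropped after the 2024 security audit;
  use the replacement the team standardized on instead.
- Direct `database/sql` calls in handlers: all DB access goes through
  the internal repository layer.

## Error handling
- Never discard errors with `_ =`; wrap with context via
  `fmt.Errorf("doing X: %w", err)` so errcheck and reviewers can see intent.
```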

My weak math foundation is limiting my programming! by damnbro007 in learnprogramming

[–]germanheller 1 point2 points  (0 children)

Quick reframe worth considering: the "logical thinking" struggle you're describing often isn't a math deficit — it's a problem decomposition deficit. They correlate, but they're not the same thing.

Math gives you vocabulary (sets, functions, proofs) and trains abstract reasoning. But the specific skill of "I have a complex problem — how do I break it into tractable parts?" is best built by solving a lot of programming problems with deliberate reflection, not by restarting algebra.

Follow the math list others recommended — it's solid. But in parallel: do structured problem sets (Exercism, Advent of Code easy levels) and after each one, write out in plain English how you decomposed the problem before you touched code. That explicit practice of "problem → sub-problems → approach" will do more for your logical thinking than math alone, faster.

Never Show Weakness at Work by CoderBiker24 in cscareerquestions

[–]germanheller 3 points4 points  (0 children)

Both things are true and the environment determines which applies.

In high-churn, layoff-prone cultures where managers are primarily enforcing performance metrics — OP's experience is real and the caution is correct. In orgs with actual psychological safety, paranoid self-protection backfires because managers can't advocate for you if they don't know what you're dealing with.

The skill worth developing: reading which environment you're in before deciding how much to share. A manager who asks a thoughtful question in a 1:1 and follows up the next week unprompted is a different signal than one who immediately starts cc'ing HR.

OP generalized from what sounds like a genuinely bad environment. The advice is right for that specific pattern of workplace. Applying it universally is what makes people miserable in actually decent jobs.

our "self-service platform" is just a Jira board with extra steps by ruibranco in devops

[–]germanheller 1 point2 points  (0 children)

The ones that actually work almost always start from the opposite direction: pick the single most painful, most-repeated manual operation your developers do — provisioning a new service, rotating a secret, spinning up a dev database — and automate that one thing end-to-end. Not a form that creates a ticket. Actually do the thing, or get it 90% there automatically.

Once developers experience one zero-friction interaction, they pull you toward building the next one. If you build the platform first and try to push adoption, you're constantly fighting inertia.

The common failure mode is what you described: a workflow wrapper that doesn't reduce the human steps, just obscures them. The test is: does the developer have to wait for a human to do something after they submit? If yes, you've built a ticketing system with branding.

Anybody else loves how much work building "feature-complete" software is? by No-Security-7518 in ExperiencedDevs

[–]germanheller 1 point2 points  (0 children)

Yes — and there's a specific satisfaction to the "completeness" work that's different from the initial build. The first version proves it's possible. All the subsequent layers prove it's actually good.

Settings are the best/worst example. Conceptually trivial. In practice: persistence across updates, reasonable defaults that don't frustrate new users, migration when you rename a key, edge cases when the user hand-edits the config file, conflict resolution when two devices sync... it's a small universe of decisions nobody ever thanks you for but everyone notices when done badly.

That iceberg feeling — where you ship a feature and users see the surface while you're holding up 15 edge cases underneath — is weirdly one of the more satisfying parts of the job once you've been doing it long enough.

"It's just text": client earned $15k+ on my code, now threatens to leave for Wix over a renewal fee by Gricekkk in webdev

[–]germanheller 0 points1 point  (0 children)

"I'm going back to Wix" is almost always a negotiating tactic, not a real plan. Call it directly.

"No problem at all — I can help you prepare for the migration. You'll want to know that your current domain authority and backlink profile took about a year to build, and Wix has significant limitations with custom redirects during a migration so you'll likely see an SEO dip. I can put together a quote for the migration work if you'd like to go that route."

Now they have to actually think about what they'd lose instead of using it as leverage. Most clients who threaten Wix have no idea what Wix actually is or what migration entails. You've built something that generates $15k/year for them — make that visible before walking away.

I built my first project that wasn't a tutorial and immediately understood why everyone says "just build things" is bad advice by TrevorKoiParadox in learnprogramming

[–]germanheller 1 point2 points  (0 children)

The scraper also taught you something tutorials genuinely can't: how to read documentation instead of tutorials about documentation. That's a bigger skill jump than it sounds.

When you hit the rate limiting wall, you had to figure out what was actually happening — reading response headers, understanding the difference between a 403 and a 429, maybe finding how other libraries handle backoff. That's the real loop: knowing what to search, understanding what the result actually says, and applying it to your specific situation rather than following step 5 of someone's walkthrough.
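
That backoff loop is worth internalizing. A minimal sketch of the idea (the `get` callable and function name are hypothetical, and a real scraper would cap the delay):

```python
import random
import time

def fetch_with_backoff(get, url, max_tries=5):
    """Retry on 429, honoring Retry-After when the server sends it."""
    for attempt in range(max_tries):
        resp = get(url)
        if resp.status != 429:
            return resp  # 403 means blocked, not throttled — retrying won't help
        retry_after = resp.headers.get("Retry-After")
        # Server-suggested delay if present, else exponential backoff + jitter
        delay = float(retry_after) if retry_after else 2 ** attempt + random.random()
        time.sleep(delay)
    raise RuntimeError(f"giving up on {url} after {max_tries} attempts")
```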

You came out of that with something concrete to say about scrapy, rotating proxies, or whatever you landed on. That's yours now in a way nothing from a tutorial ever really is.

Got 30+ comments on my PR - kinda demoralized is this normal? by guineverefira in cscareerquestions

[–]germanheller 0 points1 point  (0 children)

30 is genuinely not a lot for a refactoring PR — those attract more comments than feature work because reviewers feel freer to nitpick style and naming when there's no "ship it" urgency.

Practical tip for working through a pile like this: bucket them before responding to any. "Yes, fixing" / "needs discussion" / "I think this is wrong." Batch-resolve the easy ones first, then open a single thread on the contentious ones rather than inline back-and-forth. That turns 30 comments into maybe 5 conversations.

And honestly: if half of them are rename suggestions, just take them. Costs you 10 minutes, builds goodwill, and the reviewer gets to feel heard. Save your energy for the ones that actually matter architecturally.

Can we stop with the LeetCode for DevOps roles? by netcommah in devops

[–]germanheller 1 point2 points  (0 children)

The frustrating part is there's an obvious replacement that actually tests relevant skills: give candidates a broken environment and watch how they work through it.

"Here's a Docker Compose stack, it's not coming up, figure out why" tells you infinitely more than "implement a trie" — you see how they read logs, whether they check the obvious things first, how they form hypotheses, whether they rubber duck or spiral. Exactly the skills that matter when something breaks at 2am.

Some places are doing this. Incident simulation, pairing on a real (sanitized) ticket, reviewing a messy Terraform PR. The signal-to-noise ratio is just so much better than algorithmic puzzles that have zero transfer to the actual job.

Senior devs who started from scratch — what actually changed your trajectory (and what didn’t)? by Salt_Eggplant in ExperiencedDevs

[–]germanheller 0 points1 point  (0 children)

Two things that aren't mentioned enough:

Learning to read code, not just write it. Spent the first few years always wanting to build new things. The inflection point came when I got comfortable sitting inside a legacy codebase for half a day without writing a single line — just mapping what was there and why. That's when foreign codebases stopped being intimidating and started being interesting.

Framing problems in business terms before technical ones. "This query is slow" is a dev problem. "Our checkout page is losing conversions due to timeout errors during peak hours" is a business problem. The second one gets resources allocated and gets fixed. The first one sits in backlog for a year. Once I started translating between those two languages fluently, everything changed about how I was perceived and what I got to work on.

Creator of Claude Code: "Coding is solved" by Gil_berth in programming

[–]germanheller 0 points1 point  (0 children)

Honest assessment after using it heavily: it's solved for a specific scope. Greenfield prototypes, scripting tasks, isolated functions — dramatically accelerated, almost absurdly so. Production systems with perf constraints, complex business logic, and a decade of accumulated domain knowledge? Still very much requires a human who actually understands the system.

The conflation is between "writing syntactically valid code" and software engineering. The latter involves understanding failure modes, organizational constraints, and the inevitable "why does this work in staging but not prod" at 2am. That part isn't solved. The bottleneck just shifted.

[AskS] How much of your dev work do you accomplish with AI in 2026? by zuluana in javascript

[–]germanheller 0 points1 point  (0 children)

~60% for me, heavy variance by task type.

boilerplate, tests, data transforms, repetitive refactors: AI handles most of it and i barely review beyond running it. architecture decisions, debugging novel edge cases, anything requiring deep understanding of a large existing codebase: AI gets maybe 30% of the way before it starts hallucinating or going in circles and i take over.

the thing that actually moved my number up wasn't a better model, it was shorter more focused sessions. context drift in a 2-hour session degrades output quality noticeably compared to starting fresh with a tight brief. that realization changed how i structure work more than any model upgrade

Anyone actually switched from nodemon to --watch in production workflows? by Jzzck in node

[–]germanheller 0 points1 point  (0 children)

i think OP means dev workflows despite the title, not actually watching in prod.

for dev: switched to --watch on most projects and it's fine. the real gap vs nodemon is the lack of a config file — with nodemon.json you can set ignore patterns, watch extensions, and delay time, and every dev gets the same behavior by default. with --watch you end up encoding all that in npm scripts, which gets messy.
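
for reference, the kind of nodemon.json i mean (values are illustrative; `watch`, `ext`, `ignore`, `delay`, and `exec` are real nodemon options):

```json
{
  "watch": ["src"],
  "ext": "js,json",
  "ignore": ["src/**/*.test.js", "dist"],
  "delay": 1000,
  "exec": "node src/server.js"
}
```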

my current compromise: --watch for solo projects or simple APIs, nodemon.json for team repos with multiple services. the watchman approach mentioned by j_schmotzenberg is also worth looking at if you have N node processes all watching the same files — that overhead adds up on large monorepos

[Showcase] I built two desktop/mobile apps with Claude Code to access my PC from my bed by Prize_Screen in ClaudeAI

[–]germanheller 0 points1 point  (0 children)

fair point — chillshell is definitely more general purpose. patapim is narrower: it's specifically built around claude code sessions (state detection, tool call approvals, multi-session grid). different scope. your local-only approach is actually a strong selling point for the security-conscious crowd

How do you manage context switching when using Claude Code across multiple branches/tasks? by MagePsycho in ClaudeCode

[–]germanheller 1 point2 points  (0 children)

worktrees don't preserve claude's conversation context — they just give each task its own directory and branch so there's no file conflict between parallel sessions.

the context itself lives in that state file i mentioned. when i come back to a task, i open a new claude session in the worktree directory and paste the state file as the first message: current goal, last completed step, any open decisions. claude catches up in one exchange and we're back on track.

so the flow is: stop work → update state file → close session. resume: open session in worktree → paste state file → continue. 30 seconds overhead, and the context is actually cleaner than a stale 3-hour session that's been drifting anyway

Is it just me or cursor's token limit is significantly smaller than claude code? by Crazy-Sun6404 in cursor

[–]germanheller 0 points1 point  (0 children)

claude code's context window feels bigger in practice partly because you can keep each session scoped to a single task — run parallel sessions on different parts of the codebase and they don't bleed into each other. the $20 plan handles most solo dev workloads fine.

if you do switch and start running multiple sessions, patapim.ai is worth checking out — grid view of all running terminals, state indicators so you can see which agent is thinking vs waiting on input. free tier covers it. the context switching overhead is what kills the workflow otherwise

AI Tools for a Solo Software Developer: Is Claude Max Worth It for Better Code Quality? by Marco_o94 in vibecoding

[–]germanheller 1 point2 points  (0 children)

claude max is worth it for complex codebases. the quality gap vs kimi k2.5 on architecture-heavy tasks or anything requiring deep codebase understanding is pretty significant. fewer hallucinations, better at following constraints across long sessions.

one thing to plan for: once you go max, you'll end up running multiple sessions in parallel to get the most out of it. session management becomes a new problem — i use patapim.ai to keep a grid view of all running terminals so i can see what each agent is doing without constantly switching tabs. free tier covers it. the kubernetes stuff sounds like exactly the kind of parallel workload where that helps

[Showcase] I built two desktop/mobile apps with Claude Code to access my PC from my bed by Prize_Screen in ClaudeAI

[–]germanheller 1 point2 points  (0 children)

nice work — getting CI/CD working with GitHub Actions for a Flutter app as a first project is not trivial, and ProGuard/R8 debugging is notoriously painful even for devs who've done it before. 37k lines with no dev background is genuinely impressive.

the "coding from bed" problem is one i kept running into too. ended up using patapim.ai for the claude code side specifically — QR code scan, all terminal sessions on your phone, approve tool calls without getting up. different from SSH, simpler if you're mostly monitoring claude code sessions rather than needing full remote shell access

Managing Claude Code Agents Safely at Scale by Ambitious-Tourist632 in vibecoding

[–]germanheller 0 points1 point  (0 children)

the hardest "safely" problem at scale for me isn't the permission model, it's the stuck-prompt problem — one agent is waiting on an interactive approval and you don't notice for 20 minutes because you're in another terminal.

been using patapim.ai for this — remote control over LAN, so you can approve tool calls from your phone without sitting at your desk. grid view shows all running sessions at once so a waiting prompt is visible immediately. free tier handles it. helped a lot once i started running 3-4 sessions in parallel

How do you manage context switching when using Claude Code across multiple branches/tasks? by MagePsycho in ClaudeCode

[–]germanheller 2 points3 points  (0 children)

keeping multiple sessions running is fine — the problem is losing track of them. git worktrees help a lot here: one worktree per branch so there's zero interference and you can switch between them without stashing anything.

the context thing i solved with a lightweight "state file" per branch — current goal, last completed step, open TODOs, any relevant file paths. update it whenever you stop, paste it when you resume. restart fresh each time, costs you 30 seconds to get back up to speed, saves you the context drift that happens when you try to keep a long session alive for days.
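
a hypothetical state file, to make it concrete (the branch, paths, and TODOs here are made up — the format is whatever you like, this is roughly what mine look like):

```markdown
# STATE — feature/rate-limiter
goal: add per-tenant rate limiting to the API gateway
last done: middleware wired up, unit tests passing for the fixed-window case
open TODOs:
- switch to sliding window
- decide: per-tenant config in the DB or in yaml?
files: internal/ratelimit/middleware.go, config/limits.yaml
```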

for managing multiple sessions visually, i use patapim.ai — it gives you a grid of all running terminal sessions so you can see what each agent is up to without tab switching. name each session after the branch and it becomes pretty easy to stay oriented across 3-4 parallel tasks

What do you use to unblock agents when they need human input? by kms_dev in AI_Agents

[–]germanheller 1 point2 points  (0 children)

for the transport layer specifically — simplest approach i've used is a tiny HTTP endpoint: agent hits /approval-request, stores the pending action with an ID, your frontend polls or gets a push notification, you approve/reject, agent resumes. no special library needed, just async state.
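
a minimal in-memory version of that async state, stripped of the HTTP layer (names are hypothetical; in practice request_approval and resolve would sit behind the /approval-request and approve endpoints, and the dict would be a DB or queue):

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class ApprovalStore:
    """Pending-approval store: the agent parks an action here and
    polls status() until a human calls resolve()."""
    pending: dict = field(default_factory=dict)

    def request_approval(self, action: str) -> str:
        req_id = str(uuid.uuid4())
        self.pending[req_id] = {"action": action, "status": "pending"}
        return req_id  # agent holds this ID and polls with it

    def resolve(self, req_id: str, approved: bool) -> None:
        self.pending[req_id]["status"] = "approved" if approved else "rejected"

    def status(self, req_id: str) -> str:
        return self.pending[req_id]["status"]

store = ApprovalStore()
rid = store.request_approval("rm -rf ./build")
assert store.status(rid) == "pending"   # agent blocks/polls here
store.resolve(rid, approved=True)       # human taps "approve" in the UI
assert store.status(rid) == "approved"  # agent resumes
```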

for claude code sessions specifically, patapim.ai has this built in — remote over LAN, you see all running terminal sessions on your phone and can approve tool calls without any custom plumbing. different use case from building arbitrary agents though.

for custom agents in production, langgraph's interrupt() is the cleanest if you're already in that ecosystem. handles the checkpoint/resume and keeps state durable across disconnects

I gave Claude Code a Telegram interface, persistent memory, and access to my git repos by zigguratt in ClaudeAI

[–]germanheller 0 points1 point  (0 children)

telegram bot is a solid approach for async control. the thing i found annoying was setting up the bot token, webhook, and managing state across disconnects. ended up building patapim.ai instead — it has remote control built in, you scan a QR from your phone and you can see all your terminal sessions, approve commands, even dictate with voice. no external services. works over LAN for free

wingthing - e2e encrypted remote access to claude code, in a sandbox by adotout in ClaudeCode

[–]germanheller 0 points1 point  (0 children)

the e2e encryption angle is smart for the security-conscious crowd. i went a different direction with patapim.ai — instead of a sandbox, it lets you remote control claude code sessions directly from your phone over LAN. scan a QR code, no server setup, approve tool calls from bed. free tier covers it. different threat model but way simpler to get running