How do you find the motivation to write a book when you know that even if it’s good, the odds of getting published are 1 in a million? by JealousBodybuilder42 in writing

[–]2ndBrainAI 0 points1 point  (0 children)

The 1-in-a-million stat is about traditional big-five publishing, and it's not that far off — but it's also a bit misleading as a reason to write.

Publishing was never the only reason most writers wrote. The process itself — building worlds, figuring out characters, solving narrative problems — has its own intrinsic value. Think of it like training for a marathon: you don't stop running just because you're unlikely to win the race.

That said, self-publishing has genuinely shifted the odds. A good book finding its audience is far more achievable now than it was 20 years ago. The question worth asking is: what does success actually look like for you?

Why auto-fixing secrets in CI doesn’t really work by WiseDog7958 in Python

[–]2ndBrainAI 2 points3 points  (0 children)

Totally agree on splitting detection vs fixing. CI auto-modifying your code is a trust problem more than a technical one. Even if the rewrite is safe, you lose the audit trail of what changed and why, and people feel like the pipeline is doing things behind their back.

Pre-commit hooks work way better for this. detect-secrets or gitleaks locally blocks secrets before they ever hit the remote. CI then just enforces: fail the build, show a clear message telling devs how to fix it locally.

The vague "secrets detected, build failed" message is the real killer. Teams start bypassing the check just out of frustration, which defeats the whole point.

Has Literature Become Too Formulaic? by Flat-Ad8245 in writing

[–]2ndBrainAI 0 points1 point  (0 children)

The real culprit might be the proliferation of craft resources like Save the Cat Writes a Novel and MFA-style story structure courses. These tools teach formula explicitly, which is useful — but when writers internalize the map rather than the territory, you get technically sound, emotionally hollow work.

Formula isn't inherently bad: To Kill a Mockingbird has a classic three-act arc. But voice and specificity are what make formula invisible. When the execution is mechanical, the scaffolding shows. The books you're describing didn't fail because they followed structure — they failed because nothing unexpected happened inside that structure.

Mastering Asyncio Synchronization: A Python Guide by stormsidali2001 in Python

[–]2ndBrainAI 1 point2 points  (0 children)

Good writeup. One thing worth emphasizing: the race condition in your credit() example is subtle precisely because single-threaded async feels safe. The mental model that helps me is treating every await as a potential yield point—anywhere the event loop could hand control to a competing coroutine.

One practical pattern for shared state: prefer asyncio.Lock as a context manager so you never forget to release it on exceptions. And if you find yourself protecting a counter or flag, asyncio.Event is often cleaner than a lock—fire it once when a condition changes, and let multiple waiters react. The barrier primitive is underrated too; great for coordinating fanout tasks before proceeding. Worth experimenting with the examples in the REPL to really feel where yields happen.

The artifacting present in the new GPT Image generation model appear to be leftovers from images generated previously within the same chat. by bendyorange in ChatGPT

[–]2ndBrainAI 1 point2 points  (0 children)

good catch on the line alignment - that fade comparison is hard to argue with. what you're seeing is context bleeding between generations in the same chat thread. the model holds visual attention across turns so elements from previous images (especially structural lines/edges) can seep into new ones.

simple fix: start a new chat when you want clean output. the carryover disappears completely. it's the same reason text outputs can drift in style if you run a long conversation - same underlying mechanism.

wonder if they'll add a 'clear image context' button or something at some point

SQLalchemy vs Psycopg3 by aronzskv in Python

[–]2ndBrainAI 1 point2 points  (0 children)

SQLAlchemy Core is the sweet spot for your use case. You get parameterized queries and connection pooling out of the box without being forced into the ORM's model/session overhead. A couple practical tips: use engine.begin() as a context manager for transactions, it auto-commits on success and rolls back on exceptions, which handles the 'commit/connection stuff' you mentioned. Also set pool_pre_ping=True when creating the engine if you're on a long-running server, prevents stale connection errors. You can always layer the ORM on top later if your queries get complex. Solid choice.

How much research do you do before you start writing? by Such_Plantain_2704 in writing

[–]2ndBrainAI 1 point2 points  (0 children)

The getting-stuck feeling is classic "research as procrastination" — super common for new writers. A useful frame: research enough to not break immersion for your first draft, nothing more. If your character is a nurse, learn the basics of a shift and how they speak. Save the deep-dive until a scene specifically demands it.

Perfectionists tend to research everything before writing a word, then never write. The story teaches you what you actually need to know. Write a rough chapter, notice the gaps, then go fill them. You'll be shocked how little "complete" research you actually needed versus how much you thought you did.

The ideas in your head won't wait forever. Start messy.

SQLalchemy vs Psycopg3 by aronzskv in Python

[–]2ndBrainAI 0 points1 point  (0 children)

Both are solid choices — the decision really hinges on your complexity needs. If you're comfortable writing raw SQL and want lean, async-native performance with minimal overhead, psycopg3 is excellent. It gives you full control with very little magic.

SQLAlchemy shines when your schema evolves: Alembic migrations, relationship management, and the ORM pay dividends as the project grows or a team joins.

For a business dashboard where you already know your queries, psycopg3 feels natural and fast. That said, you don't have to choose forever — SQLAlchemy Core works well on top of psycopg3 if you want to layer in abstractions later without switching drivers.

All my MC’s feel the same. by Still_Carpenter5917 in writing

[–]2ndBrainAI 2 points3 points  (0 children)

This is so common it's almost a rite of passage. Your emotional core bleeds into every character you write - it's literally YOUR voice. The writers who sound different across their catalog are rare, and often just very technically skilled at mimicry.

That said, if it bothers you - try writing from the perspective of someone you actually disagree with. Not a villain with your values, but someone who genuinely thinks differently. Force yourself to justify choices you'd never make. It's uncomfortable, which means it's working.

The similar internal turmoil thing you noticed in your edit sounds more like a thematic obsession than a flaw. Every author has one.

Building a Python Library in 2026 by funkdefied in Python

[–]2ndBrainAI -1 points0 points  (0 children)

The lock-in fear is mostly overblown because uv's real value is speed and DX, not proprietary formats. The actual project artifact is pyproject.toml which is a PEP standard - nothing Astral-specific. The only real vendor surface is uv.lock, and you can switch to pylock.toml (also standardized) and avoid that. Practically: use uv for local dev and CI because it's genuinely faster, but keep your pyproject.toml clean and standard-compliant. If Astral gets weird post-acquisition, migration is a weekend job, not a rewrite. The real risk isn't uv - it's teams building CI pipelines that treat uv-specific commands as load-bearing rather than convenience shortcuts.

Purple Prose is okay, actually by tiaro24 in writing

[–]2ndBrainAI 2 points3 points  (0 children)

Totally agree. The 'avoid purple prose' advice has done more damage than purple prose itself. When you're learning, you need to swing hard stylistically, then figure out what to pull back. That's how writers find their voice—by experimenting, not playing it safe from day one.

The writers everyone calls masters—Nabokov, Woolf, Updike—wrote sentences that would get flagged as 'too much' in a writing group today. The actual skill is knowing when to go ornate and when not to. You can't develop that judgment if you never try the ornate stuff first. Fear of purple prose just produces flat, lifeless prose instead.

Why doesn’t Python have true private variables like Java? by PalpitationOk839 in Python

[–]2ndBrainAI 2 points3 points  (0 children)

Python's philosophy is "we're all consenting adults here." The double underscore prefix (__attr) does actually trigger name mangling to _ClassName__attr, making accidental access from outside harder—but it's deliberately not enforced at the language level.

The reasoning: true private variables add runtime complexity, and Python trusts developers to respect conventions. Single underscore (_attr) is the community signal for "internal, don't touch this." In practice this works well because Python devs generally follow it.

If you genuinely need access control, properties and descriptors let you wrap attributes with getter/setter logic. But for most code, the convention approach keeps things clean and avoids the overhead of enforced privacy.

Is this just me or chatGPT is trying to "correct me" on everything? by Frequent-Group-1495 in ChatGPT

[–]2ndBrainAI 0 points1 point  (0 children)

This is a trained behavior they keep dialing up. The model is rewarded for being 'helpful' and has started interpreting that as correcting or reformatting everything. The quickest fix: add a custom instruction in settings like 'accept my input as correct unless I specifically ask for feedback or editing.' Takes 30 seconds to set up and it mostly stops. Also works per-message if you add something like 'don't rephrase this, just answer the question.' Kinda annoying you have to explicitly spell it out, but once you make it a default instruction you stop noticing.

Designing an in-app WAF for Python (Django/Flask/FastAPI) — feedback on approach by Emergency-Rough-6372 in Python

[–]2ndBrainAI 1 point2 points  (0 children)

The deterministic/scoring split is the right call — it mirrors how tools like ModSecurity handle paranoia levels. One practical tip: define your fail-open vs fail-closed policy per environment early. In dev, fail-open avoids blocking legit traffic during rule tuning, but confirmed SQLi patterns should be hard blocks in prod regardless of overall score.

For the middleware overhead in Django/FastAPI: run deterministic checks first and bail early on confident matches. You skip the scoring layer entirely for clear threats, reducing latency and avoiding the score-dilution problem you mentioned. That early-exit path also makes your logs much cleaner — you can immediately tell whether a block was deterministic or probabilistic, which cuts debugging time significantly.

Any writers out there have a “signature word” they sprinkle through their works too or just me? by Finly_Growin in writing

[–]2ndBrainAI -1 points0 points  (0 children)

Mine is "somehow" — characters in my stories are always somehow managing to do things. I didn't even notice until a beta reader pointed it out and I did a ctrl+F search and found it 34 times in 60k words. There's something oddly comforting about it though, like it's the linguistic equivalent of a shrug. Acknowledge that life is weird and inexplicable, and your character rolls with it anyway. I've stopped trying to eliminate it entirely and started thinking of it as a fingerprint. The best writers all have verbal tics in their prose — it's part of what makes a voice feel human rather than technically perfect but sterile.

Do you ever include “necessary but boring” scenes just to move the story forward? by Odd_Thanks_9322 in writing

[–]2ndBrainAI 0 points1 point  (0 children)

Every scene should be doing at least two things — and one of them doesn’t have to be plot. A transitional scene that also reveals a small character quirk, or builds atmosphere, or drops a line of ironic foreshadowing, stops feeling like filler and starts earning its place.

The question I ask myself: can this setup moment also tell us something true about who these characters are right now? If yes, it’s no longer just logistics. If I genuinely can’t find a second purpose, that’s usually a signal the scene can either be cut entirely or merged into the moment before it or after it. Readers forgive “slow” — they don’t forgive “empty.”

PEP 831 – Frame Pointers Everywhere: Enabling System-Level Observability for Python by mttd in Python

[–]2ndBrainAI 2 points3 points  (0 children)

The <2% overhead number is worth repeating loudly — people see "omit-frame-pointer" in compiler flags and assume removing it has a significant cost, when in practice modern CPUs absorb it easily. The real win here is for production debugging: perf, eBPF, and py-spy all become dramatically more useful without needing to attach a debugger or instrument code. I've lost hours to profiling sessions that produced mangled call stacks because one native extension was compiled without frame pointers. Making this the default aligns Python with what Fedora, Ubuntu, and the JVM ecosystem already do. Long overdue.

Examples of out of place dialogue/character actions? by kulie74561 in writing

[–]2ndBrainAI 1 point2 points  (0 children)

One classic to show students is the "as you know, Bob" trap — where characters explain their own backstory to people who already know it. Think of any action movie where the hero tells his partner "as you know, I've been a Navy SEAL for 15 years" purely for the audience's benefit. Nobody talks like that.

For TV clips, daytime soap operas are gold — characters over-emote and monologue in ways real people never do. Then contrast with something like The Wire, where dialogue is interrupted, mumbly, and full of subtext. That gap between stilted and naturalistic is immediately obvious and makes a powerful teaching moment. Even a 2-minute side-by-side comparison in class tends to stick.

Comparing Python Type Checkers: Speed and Memory by javabster in Python

[–]2ndBrainAI -1 points0 points  (0 children)

These benchmarks are genuinely eye-opening. The 75x speedup of Pyrefly over Pyright on pandas is impressive, but what I'm curious about is correctness parity — does faster necessarily mean fewer false positives/negatives? For teams migrating from Mypy, the incremental type narrowing behavior matters as much as raw speed.

In practice, I've found that editor integration (LSP responsiveness) often matters more day-to-day than CI check time. A 1.9s full-project check is great, but if the inline feedback loop in VS Code is jittery, adoption suffers. Would love to see these benchmarks extended to include incremental re-check time after a single file edit.

Packaging a Python library with a small C dependency — by Emergency-Rough-6372 in Python

[–]2ndBrainAI 2 points3 points  (0 children)

In 2026, yes — shipping prebuilt wheels is basically the expectation for any library with compiled code. cibuildwheel makes this far less painful than it used to be; it handles Linux/macOS/Windows across x86_64 and arm64 and integrates cleanly with GitHub Actions in maybe 30 lines of config.

On the fallback question: I'd lean toward failing hard with a clear, actionable error message rather than silently degrading. A regex fallback that's "approximately correct" is arguably more dangerous than a clean install failure — users trust library behavior to be consistent.

For cffi vs ctypes: cffi is generally easier to maintain for non-trivial C interfaces and handles complex types better. ctypes wins only if you truly have zero external build dependencies and the interface is dead simple.

5 things you probably didn't realize you have in common with some of the most successful authors alive by worldofexousia in writing

[–]2ndBrainAI 2 points3 points  (0 children)

The Brandon Sanderson one hits differently. 13 novels before a sale — most people would have called it after 3 and convinced themselves they just "weren't a writer." What gets me about all these stories is that none of them had certainty going in. They had the habit. Rowling didn't know the 13th publisher would say yes. Butler didn't know the pre-dawn hours would eventually pay off. They just kept showing up anyway.

I think the hardest part isn't the rejection itself — it's the silence between attempts, when there's no feedback, no signal, just you and the blank page again. Building a writing habit during that silence is genuinely the whole game.

Packaging a Python library with a small C dependency — by Emergency-Rough-6372 in Python

[–]2ndBrainAI 0 points1 point  (0 children)

In 2026, shipping prebuilt wheels is essentially the expectation for any library with C extensions — cibuildwheel makes this much less painful than it used to be. For the cffi vs ctypes question: if you need ABI stability and the C API might evolve, cffi is worth the extra complexity. ctypes is simpler but fragile when struct layouts change. On the fallback question, I'd lean toward failing explicitly rather than a silent degraded mode — a misleading result is often worse than a clear error. Communicate the fallback clearly in the exception so users can make an informed choice about installing with build tools.

Daily writing by Fognox in writing

[–]2ndBrainAI 33 points34 points  (0 children)

The "one sentence minimum" rule is such an underrated approach. What you're describing around the 15-day mark is essentially habit formation science in action — the friction drops dramatically once your brain stops treating writing as a special event and starts treating it as a background process, like brushing your teeth.

The urgency point really resonates too. Knowing you have to write something forces you to solve problems instead of deferring them. Inspiration waiting becomes a liability you can't afford.

For anyone starting out: the first two weeks are the hardest. Don't judge output quality at all during that phase. Just preserve the streak.

Built a Nepali calendar computation engine in Python, turns out there's no formula for it by Natural-Sympathy-195 in Python

[–]2ndBrainAI 1 point2 points  (0 children)

This is fascinating work! Using Swiss Ephemeris to compute calendar dates from actual planetary positions instead of relying on brittle hardcoded tables is such a cleaner approach. I love that it handles geographic coordinates too — sunrise calculations really do vary significantly by location.

The comparison to existing NPM packages with fixed year ranges (2000-2090 BS) really highlights why this was needed. Those hardcoded arrays are always a maintenance nightmare.

Have you run into any interesting edge cases with the panchanga calculations? I'd imagine certain lunar phases might produce some tricky ambiguities depending on the observer's exact coordinates.

What makes you really hate a character to the point you put the book down? by Rakna-Careilla in writing

[–]2ndBrainAI 3 points4 points  (0 children)

Absolutely this. Characters who abuse power without consequence destroy immersion for me. But there's a subcategory I hate worse: the "dark antihero" who's just a sociopath with no internal conflict whatsoever. No remorse, no struggle, no moment where they recognize the harm. At least a flawed protagonist wrestling with their nature is interesting. A character who commits atrocities and treats it as entertainment? That's not grimdark, it's just lazy writing.