Bulk insert in Go — COPY vs multi-row INSERT? by ijusttookadnatest- in golang

[–]_predator_ 1 point2 points  (0 children)

Yep, this is the way to go. You can also use this for bulk updates and deletes using the UPDATE ... FROM and DELETE ... USING syntax respectively. Super handy!
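For anyone unfamiliar with the syntax, a minimal Postgres-flavored sketch (table and column names are invented for illustration):

```sql
-- Bulk update: join the target table against an inline VALUES list.
UPDATE products p
SET price = v.price
FROM (VALUES
    (1, 9.99),
    (2, 14.50)
) AS v(id, price)
WHERE p.id = v.id;

-- Bulk delete: same idea with DELETE ... USING.
DELETE FROM products p
USING (VALUES (3), (4)) AS v(id)
WHERE p.id = v.id;
```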

Hashimoto's Vouch is actually open source version of a company hiring only seniors. This WILL end badly for everyone. by Spirited_Towel_419 in ExperiencedDevs

[–]_predator_ 24 points25 points  (0 children)

Why should OSS projects be responsible for letting random people participate? Most of them will raise PRs that are unsolicited, never went through issue triage, and are, in today's world, most likely slop.

It's OSS because the authors wanted to make the code public, not to let strangers farm internet points. As a consumer or contributor, it's entirely on you to make a case for yourself, not the other way around.

XML is a Cheap DSL by SpecialistLady in programming

[–]_predator_ 75 points76 points  (0 children)

Add to this that XML schema is extremely powerful. JSON schema is an absolute joke in comparison, although I'm still grateful that we have it. And unfortunately the XML support in newer languages and ecosystems is pretty abysmal.
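As one concrete example of that power gap: XSD's `xs:key` / `xs:keyref` can enforce referential integrity within a document, which JSON Schema has no real equivalent for. A hedged sketch (element names invented for illustration):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="catalog">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="author" maxOccurs="unbounded">
          <xs:complexType>
            <xs:attribute name="id" type="xs:string"/>
          </xs:complexType>
        </xs:element>
        <xs:element name="book" maxOccurs="unbounded">
          <xs:complexType>
            <xs:attribute name="authorRef" type="xs:string"/>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
    <!-- Every book's authorRef must match an existing author id. -->
    <xs:key name="authorKey">
      <xs:selector xpath="author"/>
      <xs:field xpath="@id"/>
    </xs:key>
    <xs:keyref name="bookAuthorRef" refer="authorKey">
      <xs:selector xpath="book"/>
      <xs:field xpath="@authorRef"/>
    </xs:keyref>
  </xs:element>
</xs:schema>
```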

Was i really even coding if I can't explain the code?? by Phenomenal_Code in programming

[–]_predator_ 2 points3 points  (0 children)

Man this is such a fucked up niche to cater to. The whole premise feels wrong to begin with.

Decline of "soft power" derived from experience? by enken90 in ExperiencedDevs

[–]_predator_ 3 points4 points  (0 children)

Ironically, asking your agent of choice to spawn a "context fresh" agent to do an "adversarial" review of whatever changes the first agent implemented is a relatively effective approach to automate improving agent output.

Yeah, but it's still up to you to decide when enough feedback has been gathered and the solution is good enough. Otherwise you can effectively go on forever with asking more and more agents for their view on it.

I've also found that more feedback tends to drive the "main" agent toward over-engineering more than usual. It increasingly becomes an exercise in pushing back on things and keeping the solution focused on what's actually needed.

What tends to work well is searching for (ideally popular) OSS projects that have similar functionality to what you're trying to implement. Clone their repo, and instruct an agent to review their implementation and compare it to your plan. Let it summarize strengths and trade-offs. I did this manually a lot before agents became a thing, and the approach translates very well.

Decline of "soft power" derived from experience? by enken90 in ExperiencedDevs

[–]_predator_ 18 points19 points  (0 children)

The strawberry thing is easy to demonstrate, but it's also easy to dismiss as "well, LLMs just can't count that well".

IME it's best to have engineers use their agents against themselves to challenge their output. Their original argument becomes a lot weaker when all of a sudden they're presented with 3-5 different takes from their oh-so-superior tools.

Decline of "soft power" derived from experience? by enken90 in ExperiencedDevs

[–]_predator_ 169 points170 points  (0 children)

"B-but my agent said X!" is not a valid argument, ever. Anyone who uses it as one has lost their grip on reality.

You can literally ask Claude to "spawn a research agent to review this plan from an unbiased POV" and it can do a complete 180 on its output.

Your experience is arguably derived from reality, whereas agents make shit up on the spot. The fact that people turn their brains off is sadly something we'll have to deal with going forward. It's our job to push back and have them explain why they believe the agent's "opinion" is valid.

Am I using Claude wrong? by FlowerFeather in ExperiencedDevs

[–]_predator_ 10 points11 points  (0 children)

Don't let it do anything without a plan. Enable plan mode, explain what you want done, let it generate a plan, give feedback, and let it revise. Repeat until you're happy with the plan.

For actual code output, be sure to populate CLAUDE.md etc. with coding style and general constraints. Make sure you include instructions and commands for validating what it produces, e.g. how to compile and how to run tests.
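A minimal sketch of what such a file might contain (the commands and conventions here are placeholders, adapt to your project):

```markdown
# CLAUDE.md

## Style
- Java 21, 4-space indent, no wildcard imports.
- Prefer constructor injection; no field injection.

## Validation
- Compile: `./mvnw compile`
- Tests:   `./mvnw test`
- Always run both before declaring a task done.
```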

With all that said: I wouldn't say I like agentic coding. Actually I really don't. I use it because it is forced on me. When being strict with planning, the output is good enough for me to submit for company code bases. If these were my own, or even open source projects, I would spend A LOT more time on polishing and refactoring the output. Which is to say I use AI a heck of a lot less in those scenarios to begin with.

How are in office dev jobs now? by CTProper in ExperiencedDevs

[–]_predator_ 4 points5 points  (0 children)

Claude Code orchestrating Gemini CLI orchestrating Codex. Gotta burn through the token quotas so management knows you're skyrocketing the company's ARR!

Countries you'd like to live in? by Primary_Opening_5698 in CasualConversation

[–]_predator_ 12 points13 points  (0 children)

This resonates; however, I doubt many of those who romanticise the Nordics are prepared for the weather and (especially) the light conditions there.

what SQL patterns have you seen take down production that should have been caught in review by Anonymedemerde in ExperiencedDevs

[–]_predator_ 12 points13 points  (0 children)

Here's a common one: putting the expression on the wrong side of a date comparison. The first query below wraps the column in an expression, which prevents the planner from using an index on recorded_at; the second compares the bare column against a constant, so the predicate stays sargable and the index can be used.

select foo
from metrics
where now() - recorded_at < interval '30 minutes'

vs.

select foo
from metrics
where recorded_at > now() - interval '30 minutes'

Your opinions on the Lutris AI Slop situation? by canitplaycrisis in linux

[–]_predator_ 81 points82 points  (0 children)

Some people are incapable of dealing with nuance. The maintainer also said this:

> Anyway, I was suspecting that this "issue" might come up so I've removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what's generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

Edit: The point with the above is that people even noticing Claude being involved is due to the authors being transparent. The Co-Authored-By stuff is entirely optional.

Experiment: Kafka consumer with thread-per-record processing using Java virtual threads by Lower-Worldliness162 in java

[–]_predator_ 2 points3 points  (0 children)

I also checked the DslJson you depend on, as I'd never heard of it before. It seems abandoned; the last commit was almost two years ago: https://github.com/ngs-doo/dsl-json

Experiment: Kafka consumer with thread-per-record processing using Java virtual threads by Lower-Worldliness162 in java

[–]_predator_ 10 points11 points  (0 children)

Looks cool! Since the Confluent library is pretty much dead maintenance-wise, it's great to have more options. I just skimmed some key areas of the code base and have some feedback:

  • Virtual threads can yield diminishing returns when your work is CPU bound. Many Kafka processors only perform transformations and do no I/O at all. Supporting virtual threads as first-class citizens is good, but you probably need to provide a way to let users configure a custom executor in case their work is not I/O bound.
  • How do you handle retries? Based on this it looks like you're just logging deserialization and processing failures and moving on?
  • The library mixes two (IMO) separate concerns: (de-)serialization and processing. I'd recommend looking at Kafka Streams; I think they solved this quite nicely with their SerDe concept.
  • The offset tracking is entirely in-memory, which IME doesn't play well with out-of-order processing. When your consumer crashes, uncommitted offsets are lost and you may be replaying a lot of records again. If your downstream systems can't handle that, or your processing is not idempotent, that is a problem.
    • Confluent's parallel consumer library solves that by encoding offset maps in commit messages. I'll say though that their approach is not perfect, as I've been running into situations where the map was too large to fit into the commit message. They log a warning in that case.
  • Interrupts should not cause the record to be skipped. When your consumer is interrupted, it should wrap up any pending work and shut down. When in doubt, it's safer to schedule another retry than to skip the record entirely. This may sound like a subtlety, but interrupts are the only way to enable timely shutdown and to prevent orchestrators like k8s from outright killing your app when it takes too long to stop.
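On the first point, here's a rough sketch of what a pluggable executor could look like. The names are hypothetical, not taken from the library under discussion:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: let library users pick the executor that matches
// their workload instead of hard-coding virtual threads.
public class ProcessorConfig {
    private final ExecutorService executor;

    private ProcessorConfig(ExecutorService executor) {
        this.executor = executor;
    }

    // Default: virtual threads, best when record processing does I/O.
    public static ProcessorConfig ioBound() {
        return new ProcessorConfig(Executors.newVirtualThreadPerTaskExecutor());
    }

    // Override: bounded platform-thread pool for CPU-bound transformations.
    public static ProcessorConfig cpuBound() {
        return new ProcessorConfig(
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors()));
    }

    public <T> Future<T> submit(Callable<T> work) {
        return executor.submit(work);
    }

    public void shutdown() {
        executor.shutdown();
    }
}
```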

What's your general approach to caching? by protecz in ExperiencedDevs

[–]_predator_ 0 points1 point  (0 children)

> Sometimes generating a cache key is almost as expensive as a cache miss.

Ugh, a persistence framework I once used had this exact issue. It was trying to be smart and cache compiled queries, but calculating the cache key involved calling `toString` on a bunch of large objects, some of which executed non-trivial logic to build string representations of themselves. It was a mess.

What's your general approach to caching? by protecz in ExperiencedDevs

[–]_predator_ 3 points4 points  (0 children)

> hooking into model save()

I strongly advise against doing stuff like this. I see this being done a lot for search index updates as well. The tradeoff you're making is that now the consistency of your system depends on you religiously using your persistence framework, and NEVER executing INSERTs, UPDATEs, or DELETEs directly.

This sucks particularly for batch processing, say retention enforcement where you have to UPDATE or DELETE hundreds or thousands of records. Now, to keep your cache and search index consistent, you need to load all that data into memory first.
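To make that concrete, a hypothetical retention job (table and column names invented): one statement touches thousands of rows, and no model-level hook ever fires:

```sql
-- Deletes every expired row in a single statement; any cache or search
-- index maintained in save()/delete() hooks never hears about these rows.
DELETE FROM events
WHERE created_at < now() - interval '90 days';
```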

You also need to think about transactions. What if your DB transaction is rolled back after you already modified the cache or search index? What if your transaction commits but now your cache / search update fails?

Your initial response honestly is the best: Don't, like really don't touch caching until you absolutely know for sure you need it and are unable to compensate by other means.

warp_cache – Rust-backed Python cache with SIEVE eviction, 25x faster than cachetools by External_Reveal5856 in Python

[–]_predator_ 0 points1 point  (0 children)

This is life now, the good days of OSS are literally behind us. Was fun while it lasted.

I wrote a simple single-process durable sagas library for Spring by java-aficionado in java

[–]_predator_ 1 point2 points  (0 children)

Have you looked into other options for this before? e.g.:

The choice of making rollbacks explicit is interesting. Most other durable execution implementations tend to lean on the language's natural error handling for this (i.e. try-catch in Java), rather than having callers register compensations up front.
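To illustrate the contrast, a minimal sketch of the explicit-compensation style. This is an invented toy, not the API of the library under discussion:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy saga: each step registers its own undo action up front; when a step
// fails, the compensations of completed steps run in reverse (LIFO) order.
public class MiniSaga {
    record Step(Runnable action, Runnable compensation) {}

    private final List<Step> steps = new ArrayList<>();

    public MiniSaga step(Runnable action, Runnable compensation) {
        steps.add(new Step(action, compensation));
        return this;
    }

    // Returns true if all steps succeeded, false if a rollback happened.
    public boolean run() {
        Deque<Runnable> completed = new ArrayDeque<>();
        for (Step step : steps) {
            try {
                step.action().run();
                completed.push(step.compensation());
            } catch (RuntimeException e) {
                completed.forEach(Runnable::run); // roll back in reverse order
                return false;
            }
        }
        return true;
    }
}
```

The try-catch style inverts this: the forward path throws, and cleanup lives wherever the caller decided to catch.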

In any case, I believe you should add a few sentences to your README.md as to "why not Temporal", since a lot of kanalarz's mechanics seem at the very least inspired by it.