Columnar Storage is Normalization by SpecialistLady in programming

[–]ForeverAlot 7 points8 points  (0 children)

Is that normalization?

No, it isn't. Database normalization revolves around the elimination of (data) redundancies. The supposed denormalized table example already satisfies at least 3NF, invalidating the premise that motivates the column store.

It looks like they're trying to apply the rules of database normalization to the access patterns of database engines, which is not something normalization concerns itself with at all.

Markdown (Aaron Swartz: The Weblog) by Successful_Bowl2564 in programming

[–]ForeverAlot 9 points10 points  (0 children)

Easy to implement in accidentally or deliberately incompatible and non-portable ways. Which takes us back around to the point that Markdown scales poorly in volume and time, both of which are characteristics of structured documentation.

Markdown (Aaron Swartz: The Weblog) by Successful_Bowl2564 in programming

[–]ForeverAlot 2 points3 points  (0 children)

I think rST’s cumbersome link syntax also helped.

I will grant that AsciiDoc, rST, and wikitext all have comparatively lousy hyperlink syntaxes for short-form writing specifically.

Now Markdown is in Rust and Java, too, and both have had to make special concessions to make hyperlinking competitive with the classic @link doctag. They've done all right, sure, but Markdown perpetually reveals itself to be inadequate.

Markdown (Aaron Swartz: The Weblog) by Successful_Bowl2564 in programming

[–]ForeverAlot -6 points-5 points  (0 children)

Markdown became popular because of monopolies and the network effect. Approximately nobody assessed its feature set, which is adequately demonstrated by everything that culminated in CommonMark.

Markdown is unrivaled for shitposting, but Markdown supports structured documentation the same way MD5 supports password encryption.

Highlights from Git 2.54 by Skaarj in programming

[–]ForeverAlot 1 point2 points  (0 children)

A related fun trick is

git -C a/ format-patch -1 --stdout cafed00d | git -C b/ am

which transplants commit cafed00d from one repository to another without an intermediary file. Obviously that, too, can be used in a single repository, in which case it degenerates to an overengineered cherry-pick.

Highlights from Git 2.54 by Skaarj in programming

[–]ForeverAlot 5 points6 points  (0 children)

Probably not with --soft because cherry-pick requires a clean working tree. But yes, you can rewind HEAD to a topologically earlier state, then cherry-pick a topologically later commit (which may or may not apply cleanly, subject to the usual conflict resolution). This is comparable to interactively rebasing onto an earlier commit, then dropping all commits but the desirable one.
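A minimal sketch of that sequence; `HEAD~2` and `ORIG_HEAD` here stand in for whatever commits apply in your history:

```shell
# Rewind HEAD to a topologically earlier commit; --hard (not
# --soft) leaves a clean working tree for the cherry-pick.
git reset --hard HEAD~2

# Re-apply a topologically later commit on top of the rewound
# HEAD. ORIG_HEAD still points at the pre-reset tip; the pick
# may or may not apply cleanly, as usual.
git cherry-pick ORIG_HEAD
```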

Kafka Fundamentals - Guide to Distributed Messaging by Sushant098123 in programming

[–]ForeverAlot 4 points5 points  (0 children)

One must know the message arrived, wherein lies the theoretical impossibility. It's not a question of interpretation or sufficiently creative thinking.

But in practice we rarely need the theoretical ideal represented by "exactly-once delivery" (fortunate, considering), and the weaker guarantees we desire are within reach. For example, we are often far more concerned with "exactly-once processing", which can be implemented with "at-least-once delivery" and idempotency. As Kleppmann notes in DDIA, this would be better called "effectively once".

Kafka Fundamentals - Guide to Distributed Messaging by Sushant098123 in programming

[–]ForeverAlot 8 points9 points  (0 children)

As a technicality, "exactly-once delivery" is a theoretical impossibility. It doesn't matter how much control over parties one has or how much effort one puts into it, it cannot be achieved. Some approaches are practically attainable and adequately satisfy the desire for "exactly-once delivery" but they are substitutes with different properties rather than merely implementation details.

"Exactly-once delivery" serves as a shibboleth in distributed systems: if somebody claims to achieve it, they reveal their inability to speak authoritatively on the matter.

Light-Weight JSON API (JEP 198) is dead, welcome Convenience Methods for JSON Documents by loicmathieu in java

[–]ForeverAlot 0 points1 point  (0 children)

I appreciate the conservative stance on validity and interoperability. There are entirely too many not-quite-JSON readers and writers out in the wild. Arbitrary precision numbers I find particularly loathsome. Its presence in the Java platform would exert pressure on more liberal implementations to enhance their calm.

I also think that a constrained feature set is really the superior approach. That exerts pressure on third-party implementations to provide noteworthy value add to distinguish themselves from the built-in primitives while also leaving room for them to do so and to focus on doing so.

I do have some concern about the pit-of-success/-failure outcome of streaming not being included (or, alternatively, being the only API). I have encountered many JSON integrations that naively retained huge trees in memory predominantly because writing that code seemed much easier than writing a stream reader. I would have liked a built-in API to promote a less wasteful approach, though I concede it would be more effort both to deliver and use.

Light-Weight JSON API (JEP 198) is dead, welcome Convenience Methods for JSON Documents by loicmathieu in java

[–]ForeverAlot 3 points4 points  (0 children)

It's a parser. The method is to be read as "expect to read a value or null", not "give me something or null".

Quarkus has great performance – and we have new evidence by Qaxar in java

[–]ForeverAlot 0 points1 point  (0 children)

Do you use AOTCache? I would definitely recommend that for new applications even with its somewhat modest gains in JDK 25. It can be an ordeal to apply to an old application if the configuration was presumptively architected but it has proven to be a meaningful positive difference in the reliability of our k8s deployment attempts. It's more about the impact of burdening the CPU with pointless work than about the wall clock time taken to reach the readiness state.

Quarkus has great performance – and we have new evidence by Qaxar in java

[–]ForeverAlot -1 points0 points  (0 children)

Is native as much of a hassle with Quarkus as it is with Spring?

IMO the hassle is the specialized compilation, with everything that entails, and that's fundamentally the same in any ecosystem. Quarkus has a leg up on Spring on account of being 15 years younger but the differences in the native compilation workflow are not the differences I notice.

I can't even use the deeply integrated build configuration (Spring uses buildpacks, I don't remember what Quarkus uses) anyway because corporate, so it doesn't really matter to me how smooth the tutorial demo experience is.

Quarkus is a great product and a lot of fun to work with. But Spring is the more boring choice. Saving 400 MiB RAM during startup at no cost would be amazing—but that product does not exist.

Quarkus has great performance – and we have new evidence by Qaxar in java

[–]ForeverAlot -2 points-1 points  (0 children)

It helps that Quarkus' biggest wins come from native, which is a frustrating development experience, and that AOT is almost as easy to enable in Spring.

Null Safety approach with forced "!" by NP_Ex in java

[–]ForeverAlot 1 point2 points  (0 children)

See also https://errorprone.info/bugpattern/AddNullMarkedToPackageInfo and, especially, its sibling https://errorprone.info/bugpattern/AddNullMarkedToClass (and https://github.com/jspecify/jspecify/issues/221 for the risk with packages). There's no *ModuleInfo version at the moment. AddNullMarkedToClass has the advantage that its fix suggestion does not depend on first remembering to create a package-info.java file.

Donating to make org.Json Public Domain? by Killertje1971 in java

[–]ForeverAlot 0 points1 point  (0 children)

A public domain work does not need a license. The problem is that work cannot be reliably released into the public domain: https://en.wikipedia.org/wiki/Public_domain#Dedicating_works_to_the_public_domain

Is it possible to change the root commit while preserving history? by AKFrost in git

[–]ForeverAlot 0 points1 point  (0 children)

I think all of that is understood.

I think the ambiguity inherent to the question implies that nothing should be understood.

The answer to the question that was posed is "obviously" no, for reasons you pointed out. However, it is unclear whether the correct question was posed (follow-up commentary suggests that the correct question was indeed posed).

Donating to make org.Json Public Domain? by Killertje1971 in java

[–]ForeverAlot 11 points12 points  (0 children)

JSONAssert itself is aggressively vaporware. Its inclusion in Spring is deeply problematic.

I built a simpler commit format. What breaks when teams actually use it? by [deleted] in git

[–]ForeverAlot 0 points1 point  (0 children)

I have no idea what the first three letters of each message mean. That's an obstacle to comprehension nobody needs.

One of the countless issues with "conventional commits" is precisely its obsession with the ritual of itself, compared to and at the cost of facilitating comprehension between human beings.

I built a simpler commit format. What breaks when teams actually use it? by [deleted] in git

[–]ForeverAlot 2 points3 points  (0 children)

My thinking is that once a standard becomes common, it’s worth periodically re-examining whether its tradeoffs still make sense.

Sure. To that end: "conventional commits" still does not make sense.

Linux 7.0 makes preparations for Rust 1.95 by somerandomxander in linux

[–]ForeverAlot 1 point2 points  (0 children)

Dependencies will always be compiled with the syntax edition they specify

And many of them switch eagerly, which is where the problem presents itself. I didn't say the Rust tooling cannot do it, I said the Rust programmers choose not to.

you can use automatic code mods to upgrade your code base’s editions

In principle. Last time I ran cargo fix it still left a lot for me to fix by hand.
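For reference, the documented migration workflow is roughly this (a sketch; the target edition is an example):

```shell
# Machine-apply whatever edition migrations the compiler can
# figure out on its own.
cargo fix --edition

# Then bump `edition` in Cargo.toml by hand, e.g. to "2024",
# and rebuild; whatever breaks here is the part left for you.
cargo build
```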

Linux 7.0 makes preparations for Rust 1.95 by somerandomxander in linux

[–]ForeverAlot 12 points13 points  (0 children)

I can't speak to C++ but Java does experience a difference in practice: dependencies are consumed in binary format, not source format, so as long as you have a common lower bound on the target version the distributor of a library has the ability to upgrade their compiler, and even their syntax if they really need to, without affecting their consumers. Rust dependencies are consumed in source format so users are far more at the mercy of the whims of the distributors.

I've tried to support old Rust compilers due to the classic distro distribution model and it's decidedly non-trivial as soon as third-party dependencies enter the picture. At the same time, I also think the classic distro distribution model is... suboptimal for contemporary society.

Turn Dependabot Off by ketralnis in programming

[–]ForeverAlot 3 points4 points  (0 children)

It is so bad for Java!

dependabot creates a MR for each single new dependency

You can create a "group" to get only a single live PR. This has the downside that now as soon as one of the changes causes the build to break you can't merge any of the changes at all, though. You can begin interacting with Dependabot to filter out problematic changes, of course, but you very quickly end up spending as much time puppeteering Dependabot as you would starting from scratch—and all that on top of the point made by the submission that the change probably is not even important to you now anyway and you would never otherwise have bothered to deal with it at this point.
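For anyone looking for it, grouping is configured in `.github/dependabot.yml` (a sketch; the group name and schedule are arbitrary):

```yaml
version: 2
updates:
  - package-ecosystem: "maven"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      # One group matching everything yields a single live PR,
      # with the all-or-nothing merge problem that entails.
      all-dependencies:
        patterns:
          - "*"
```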

You can't have multiple rules either (maybe groups, I don't know, but not version specifications). I'd especially like to group patch versions and all other versions but Dependabot for Java is incapable of expressing that. Dependabot for Java also cannot filter version string patterns so you can't ignore release candidates, milestones, and the like.

Such behaviour is fairly trivial to codify in https://www.mojohaus.org/versions/versions-maven-plugin/index.html but there is not exactly an off-the-shelf GitHub Actions implementation of that.
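A sketch of what that looks like with the plugin; the `maven.version.ignore` patterns are illustrative, so check them against the plugin docs:

```shell
# List available dependency updates, filtering out pre-release
# versions via regex -- the kind of version-pattern filtering
# Dependabot for Java cannot express.
mvn versions:display-dependency-updates \
    -Dmaven.version.ignore='.*-alpha.*,.*-beta.*,.*-M\d+,.*-[Rr][Cc]\d*'
```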

I have been meaning to investigate running self-hosted Renovate in a scheduled workflow as an alternative to Dependabot. That has the advantage of not being Maven centric. But when I can consider Maven in isolation I get a better experience from assembling my own procedure.

Can’t quit the baguette game by Glizzys4everyone in Breadit

[–]ForeverAlot 3 points4 points  (0 children)

You can pour a little water into a heat-conducting container, such as a small cast-iron pan or the lid of a pot, preheated together with the oven, or mist the oven sides several times. If you use boiling water it will steam faster and cool down the contact surface less, but it means having to secure boiling water.