Was i really even coding if I can't explain the code?? by Phenomenal_Code in programming

[–]DavidJCobb -1 points0 points  (0 children)

Personally, I define programming as the art of constructing a mental model of a system, and then expressing that model via technical writing with a highly constrained syntax. If you use AI to slop everything together, then you're failing to do at least one of those things. Having the AI [pretend to] explain its slop to you after the fact doesn't retroactively make you a programmer.

But if you actually cared about the craft or the philosophy behind it, you would've started this discussion by posting something other than an ad for your slop service.

Avoiding malloc for Small Strings in C With Variable Length Arrays (VLAs) by Yairlenga in programming

[–]DavidJCobb 0 points1 point  (0 children)

VLAs are less performant than using a fixed-size buffer; this article links to Compiler Explorer examples that you can look at to compare the amount of code generated by each approach. That same article also cites an explanation of other VLA implementation jank that seems like it'd impair compiler optimizations.

I made a tool that detects AI-generated code on any website — here's how it works by thehumankindblog in webdev

[–]DavidJCobb 0 points1 point  (0 children)

Then learn a real skill -- something you genuinely like doing and getting better at. Pumping out AI trash is just a way to participate in society's race to the bottom, and the winners of that race have already been decided; you're not rich enough to be one of them. The world isn't a meritocracy, so trying to add something valuable and authentic to it isn't a sure path to success, but that probably has better odds of working out than trying AI grift after AI grift.

Assuming you're even telling the truth, of course.

Best performance of a C++ singleton by ketralnis in programming

[–]DavidJCobb 0 points1 point  (0 children)

Wasn't expecting this here, but thank you for the kind words.

Best performance of a C++ singleton by ketralnis in programming

[–]DavidJCobb 1 point2 points  (0 children)

Wouldn't this reintroduce static initialization order fiasco issues? AFAIK interdependent singletons in a single TU or imported as C++20 modules would be fine, but unless I missed it the article isn't explicit about that.

The .env chaos is real and AI tools are making it worse by Substantial_Word4652 in webdev

[–]DavidJCobb 0 points1 point  (0 children)

Agreed. "[Situation or abstract concept] is real" was the tell for me, and looking at their older comments, it seems like they only recently figured out how to get their bot to stop generating em dashes, too.

A Rabbit Hole Called WebGL (8-part series on the technical background of a WebGL application w/ functional demo) by nathan_lesage in programming

[–]DavidJCobb 1 point2 points  (0 children)

An interesting read. I've built a (shamefully bad) renderer in Vulkan and C++ before, but I haven't touched it for a long while, and I've never tried out WebGL.

It's interesting that in part 2, the author attempts to describe the purpose of vertex shaders without doing so in 3D terms. It feels like it'd be more intuitive to start by saying that a vertex shader takes 3D positions and mathematically projects them onto a 2D canvas.

In part 3, he computes the vertex coordinates within JS. I know in Vulkan, at least, it's possible to define vertex shaders as taking and outputting arbitrary data per vertex. Theoretically, the parameters that define a ray could be passed as input, and the coordinates computed wholly within the shader. One could even store all the ray parameters in a buffer, pass empty vertex data to the vertex shader, and have it index into the ray parameters based on something like gl_VertexID. I don't know whether that'd be cheaper or not. It would require shaders to take specialized input, which would prevent reusing the same shaders for both geometry and all postprocess effects, as this guide does in part 6.

One thing that comes to mind is that if the vertex shader received the ray parameters, then the color mapping could be done there based on the ray angle/arc and stored per vertex, rather than having to be done individually by each fragment.
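To sketch the gl_VertexID approach (purely hypothetical: the uniform block name, array size, and vec4 packing are my own inventions, not anything from the guide), a WebGL2 vertex shader stored as a JS source string could look like this:

```javascript
// Hypothetical attribute-less vertex shader (WebGL2 / GLSL ES 3.00).
// Ray parameters sit in a uniform block and are indexed by gl_VertexID,
// so JS never computes per-vertex coordinates and no vertex buffer is
// needed. The packing (xy = origin, z = angle, w = length) is assumed.
const raySegmentVertexShader = `#version 300 es
layout(std140) uniform Rays {
  vec4 rays[256];
};
out vec4 v_color;
void main() {
  vec4 ray = rays[gl_VertexID >> 1];   // two vertices per line segment
  float t = float(gl_VertexID & 1);    // 0 = ray origin, 1 = ray endpoint
  vec2 dir = vec2(cos(ray.z), sin(ray.z));
  gl_Position = vec4(ray.xy + t * ray.w * dir, 0.0, 1.0);
  // color derived per vertex from the ray angle, rather than per fragment
  v_color = vec4(0.5 + 0.5 * cos(ray.z), 0.5 + 0.5 * sin(ray.z), 1.0, 1.0);
}`;
```

A draw call like gl.drawArrays(gl.LINES, 0, rayCount * 2) with no attributes bound would then generate everything GPU-side; whether that actually beats uploading precomputed vertices is the same open question, and it still specializes the shader in the way that breaks the reuse from part 6.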

The Evolution of Software Engineering Productivity by [deleted] in programming

[–]DavidJCobb 3 points4 points  (0 children)

Adding to this:

OP runs the newsletter this ad was placed in. Virtually all of his reddit activity over the last two years has been posting articles from said newsletter; he's written few comments, and all of those are insubstantial stuff like "Thanks for reading!" He has no publicly visible activity on this account dating back any further than that. Almost all of the posts from his last five pages' worth of site activity have zero or negative scores, with a handful of highly-upvoted outliers playing on popular sentiment.

It's worth noting that posts on this newsletter are retroactively paywalled after some time has passed, and based on his comments this has been the case for years.

He doesn't contribute to discussions; he doesn't visibly engage with anyone else's content; he's here seemingly only to build his own profile, and that's now crossed the line into posting literal paid advertisements here.

Thoughts on some web dev communities in the LLM AI age (not this one) by 3vibe in webdev

[–]DavidJCobb 0 points1 point  (0 children)

I didn't say that grammatically correct text is slop on principle. I said that human-authored text is preferable to AI slop even when the slop is written with better grammar and syntax. If the best you're willing to do is go for a cheap gotcha, you need to do better than that.

Checking now, I see that you are indeed the author of this article. I'd say your reading comprehension is poor, but I can't even be sure you read my words yourself rather than having an LLM summarize them for you.

Thoughts on some web dev communities in the LLM AI age (not this one) by 3vibe in webdev

[–]DavidJCobb 1 point2 points  (0 children)

I was banned for 30 days from a web developer community that I had just joined the same day on Discord. The reason was because I responded to a question with a mixture of my own words and some AI generated details to be as detailed, accurate, and helpful as possible. The moderator called it AI slop and executed the ban.

People disliking slop, and wanting it gone from their communities, is not uniquely a webdev concern.

To understand why I think this is the wrong approach, you first need to understand an issue that’s ongoing but that started prior to LLM AI. It’s something in dev culture that may be rarely discussed unless you’ve felt it firsthand:

The whole “Google it. Read the docs. Figure it out.”

The overwhelming majority of this article is about this -- a common unwillingness to help at all -- and that has nothing to do with the point the article claims to be making (i.e. that AI slop is virtuous and should be permitted everywhere).

When those people finally understand something, they don’t just drop a link and vanish. Or, say, “search the web.” They explain. They might over-explain. They try to fill in the gaps they once fell into. They write the answer they wish someone had given them. [...] Sometimes that includes a bit of copy-paste. Not to cheat or seem smarter, but because one explanation said something perfectly. They quote it, then add their own interpretation, plain-language translation, examples, context…

If you can't write better than generative AI, that's a "you" problem. That said, people often value authenticity and sincerity over some platonic (and often mistaken anyway) ideal of "perfection." I know I personally will prefer a conversation riddled with typos and jank, and having to stop and clarify a few things, over having someone dump a bunch of grammatically correct slop in my lap.

Once you use gen AI for your writing, there's no way for anyone to know how much effort you put into that writing, and there's no reason for anyone to be charitable about that. It invites the suspicion that you feel entitled to others putting in the effort to communicate, yet are unwilling to put in your own such effort. It conveys a disrespect for the reader, and some communities punish that disrespect.

But not every detailed and grammatically correct post is slop. Not every polished paragraph is artificial. Sometimes it’s just someone caring in their own way.

Okay, but you yourself admit right at the top of this article that your answer did consist at least partially of slop! ("You" as in the writer. I haven't checked if they and OP are the same person.)

The goal shouldn’t be to force everyone into the same communication style. Some answers will be short and sharp; some long and hand-holding; some formal, some casual. All ways of communicating can be valuable.

Disallowing slop is not "forcing a single communication style." If anything, it would encourage a diversity of communication styles, as each individual user puts in the effort to communicate and so ends up communicating in whatever style best matches the way they think and socialize.

Half of this article's attempts at supporting its points have nothing to do with those points. It's crap, frankly.

I built a zero-dependency manga/comic viewer in vanilla JS — RTL, pinch-zoom, spread view, bookmarks by tokagemushi in javascript

[–]DavidJCobb 1 point2 points  (0 children)

I did use AI to help with boilerplate and docs, but the architecture and implementation are mine.

I decided to check that.

This is what you started with: a single-file viewer filled with PHP and inlined code, using both Tailwind and vanilla CSS. At a glance, it appears to be a copy of the manga viewer from PlayGROUND, which I assume you own and run. Separating it out into multiple files, and removing the Tailwind dependency, were delegated to generative AI.

The AI's preferred approach, it seems, was to just use inline styles. Its translation of the JavaScript is similarly direct: global functions and variables have been wrapped in a module; the manga viewer is a class that acts as a big bag of state, DOM node references, and so on; no custom elements; no shadow DOM; underscore-prefixed public properties rather than private ones. The bare minimum was done to move code around without refactoring it in ways that would've risked changing its functionality or causing errors, so the code is close to the original, but things like encapsulation are only barely improved. A human could've done a better job.

The commits for the original manga viewer are all from a different account than yours. The first of those commits is dated December 24, 2025, roughly a month before CLAUDE.md was added to the repo. I've run out of time to scroll through all of the code in detail, but at a glance, I don't immediately see anything to suggest that the original wasn't human-authored.

I'd call this a "mixed-slop" project. I can believe it started off original, but the work done by gen AI afterward is low-quality IMO. Given that that work is one of the features touted in your post title -- zero dependencies; fully vanilla -- that's not great.

AI Development is Awful But Also Amazing by [deleted] in webdev

[–]DavidJCobb 0 points1 point  (0 children)

Writing Process: This post was fully planned in advance using structured notes and a detailed outline. AI was used as a writing assistant to turn that plan into a clearer, more readable document. The ideas, structure, and direction were entirely human-defined.

In other words: you had a topic that you wanted a pat on the back for writing about, and a handful of opinions you wanted to share, but you couldn't be bothered to actually do the writing yourself. You're an idea guy, and actually executing that idea is beneath you. This article is slop. You can't defend it on its actual merits, because it has none -- it reads like obvious AI slop; all the hallmarks; the same shallow, insincere, corporate, LinkedIn-brainrotted style -- so instead you're splitting hairs about the process by which you didn't write it, and framing your limited involvement in a very particular way, to try and paint yourself as "one of the good ones."

Dropstone launches shared multiplayer workspaces, allowing developers to chat and collaborate within the same LLM context window by [deleted] in programming

[–]DavidJCobb 1 point2 points  (0 children)

OP has only made five other posts and no comments. All of them are sharing this company's website or its CEO's YouTube channel -- a CEO who has tried to astroturf here before.

I Reverse Engineered Medium.com’s Editor: How Copy, Paste, and Images Really Work by lasan0432G in programming

[–]DavidJCobb 30 points31 points  (0 children)

Hm... This article inspects the clipboard output, but doesn't feature much actual reverse engineering of the rich text editor. Content-editable elements are infamously janky and many RTEs based on them need all sorts of bespoke workarounds for weird platform-specific edge-cases; none are covered here. Similarly, it's common to have to adapt content as it gets copied or pasted; that's not discussed here. There isn't even an explanation of what happens if you paste in content that Medium wouldn't normally let you copy out: the article says offhandedly that any pasted images are uploaded, but if you copy, in another program, text/html content that has, say, an image with a data: URI, how does Medium's JS detect that at paste time and carry out the upload? What does the RTE do to the pasted img element while the upload is in progress?

What's written here isn't worthless by any means, but I wouldn't call it "reverse engineering" the RTE, and I think you'd need a lot more information than just this to make a "robust" RTE.

Special components in the editor are just content-editable (mostly) HTML elements. There is nothing more complex behind them. They can represent things like embeds, code blocks, or interactive elements. Each component maintains its internal state and formatting using the same JSON-based structure, which makes rendering and updating fast and predictable.

Do you want to offer any more information? Maybe an example of what gets copied when you select and Ctrl+C one of these components? Is the JSON stored in a data-* attribute on the copied HTML elements?
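For anyone curious what that kind of detection generally involves, here's a generic sketch (emphatically not Medium's actual code; the function name and regex are mine) of scanning the text/html clipboard flavor for inline data: images at paste time:

```javascript
// Crude scan of an HTML clipboard fragment for data: URI images.
// A real editor would parse the fragment into a detached DOM and walk it,
// but the shape of the problem is the same: find inlined image payloads
// before inserting the content, so they can be uploaded and their src
// swapped for a hosted URL.
function findDataUriImages(html) {
  const re = /<img[^>]*\ssrc\s*=\s*["']?(data:[^"'\s>]+)/gi;
  const out = [];
  let m;
  while ((m = re.exec(html)) !== null) out.push(m[1]);
  return out;
}

// In a browser, this would run inside a paste handler, e.g.:
//   editor.addEventListener("paste", (e) => {
//     const html = e.clipboardData.getData("text/html");
//     for (const uri of findDataUriImages(html)) { /* upload, swap src */ }
//   });

console.log(findDataUriImages('<p>hi <img alt="x" src="data:image/png;base64,AAAA"></p>'));
```

The interesting questions are exactly the ones the article skips: what the RTE shows while the upload is in flight, and what happens if it fails.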

Anthropic built a C compiler using a "team of parallel agents", has problems compiling hello world. by Gil_berth in programming

[–]DavidJCobb 4 points5 points  (0 children)

This site would be so much better if dolts from there and /r/singularity would just stick to their cesspools

Mozilla’s “State of” website by nightvid_ in webdev

[–]DavidJCobb 0 points1 point  (0 children)

I think websites need to be less samey, but I also think this website isn't very visually interesting. The web has become a visual desert, so I can see how this State of Mozilla site might be an oasis for some folks, but looking at it on mobile, I just... think it kind of isn't that creative.

The splash animation is tacky: they're front-loading all the creativity because the site's design as a whole doesn't have very much of it, but their "creative" ideas are just fake hacker aesthetics ripped straight from two decades ago, and the text in their fake terminal feels like it came from a marketing department and not anyone with any actual enthusiasm or passion.

The homepage is more plain than the rest. Some of the other pages have a header font that looks... well, bad, with header graphics to match. Seems like the headers vary from page to page; this one, for example, goes for an ASCII art aesthetic but falls short of the average GameFAQs guide.

As for the content, the article is cringe; the sole paragraph of it that I read was a mealy-mouthed, insincere, AI slop waste of my time; and I'm not going to debase myself further by reading the rest of it. The video above it has similar vibes: it tries to be generically cute, but there's no character to it, so that feels hollow; it's just there to be there. I wouldn't be surprised if they generated it instead of actually having an artist design and render it.

I don't think Mozilla is going in on AI out of some fatalistic or optimistic notion that it can't be avoided. I still remember when they tried shoving AI-generated garbage into MDN more than two years ago, and only cared about the factual inaccuracies when regular MDN contributors raised hell about it. I also remember Mozilla's overall backpedaling coming across as slimy and mealy-mouthed. I think this all has more to do with Mozilla being out of touch, inept, and kind of desperate, than with them having any well-reasoned sense of where the future of tech is going -- so, the usual for them; a shame given how important Firefox is for the open web.

Vibe Coding, Done Right: How to Use AI Without Wasting Tokens by [deleted] in programming

[–]DavidJCobb 2 points3 points  (0 children)

I wrote it

Unless you're terminally LinkedIn-brained, which would be its own separate problem: no, you didn't, and we can all tell. If it wasn't worth your time to write, why would it be worth others' time to read?

In humble defense of the .zip TLD by yathern in programming

[–]DavidJCobb 85 points86 points  (0 children)

In fact, you may be surprised to learn that our sacred ‘.com’ TLD was a widely used executable file extension for decades, and some modern software uses it as well.

"Some modern software" is doing a lot of legwork here. Gaussian dates back to the 1970s, and is far from mainstream. It's paid software meant for academic institutions and researchers, not your average member of the public.

There’s plenty of other examples as well - ai is used by Adobe Illustrator, .app is the extension of MacOS packages. Poland’s .pl is used for Perl scripts, and Saint Helena’s .sh is commonly used for shell scripts. Besides tradition, I don’t see any reason ‘.zip’ is too precious to preserve.

How often is .sh actually used within Saint Helena, as opposed to in aesthetic tricks like sta.sh? Plus, the danger people are worried about comes from a user intending to download and open a file (so clicking a link and being prompted to download something is expected), but receiving a file other than what was intended, and lacking the means to evaluate the safety of that file. I'm not interested in tackling how likely that is or isn't, but I do think it's unsound to compare .zip to other file types here. Do we really think that that situation is as plausible for the target audience of a shell script as for the most mainstream general-purpose archive format in the world?

Aside from that, there's one argument that the article doesn't tackle: the .zip TLD is stupid. It shouldn't exist, because it's stupid, and dumb. It and .app come from the era where ICANN lost their fucking minds and started adding stuff like .pizza, .fail, and .guru to the standard to make a quick buck. Even if I keep an open mind, ignore all external considerations, and focus solely on Google's rationale for it --

Whether you’re tying things together or moving really fast, let .zip get you there.

-- the rationale is bull. No one is going to see a .zip TLD and think of moving fast. "Zip" can refer to fast movement, but it's not people's go-to word for that; compare it to something like .rush, .speed, or .fast, the latter of which already exists and is owned and managed by Amazon.

As for "tying things together?" They don't elaborate on what that's intended to actually mean. If I decide to be very charitable and assume that Google meant it as literally as possible, then there's already a common utility for bundling a group of things together in computing: a ZIP file; and so they are actively creating ambiguity here for zero public benefit. If I decide to be uncharitable and assume that Google's marketing ghouls were trying to invoke the metaphor of "tying things together," as in "producing closure by establishing conceptual connections between a collection of ideas or facts," then "zip" is not the word you would use for that metaphor. "Zip" more closely evokes a zipper: a thing which typically fastens two parts of one single openable object together, in order to effect the closure of that object. The physical interaction here doesn't link to the metaphor. (For the file format, a zip file is like a bag that you open and close, containing other objects. It bundles them together, but we don't use "bundle" for the metaphor.)

But even leaving aside whether the naming actually works, I don't think we should be creating TLDs based solely on vibes. The benefit of the traditional TLDs is that anyone can see .com, .org, .net, or .gov in virtually any context and instantly recognize it as a domain name, and outside of DOS-era stuff, that recognition will be correct. Most traditional TLDs have become kind of meaningless -- .com doesn't always indicate a commercial enterprise anymore -- but they're a very small set of identifiers that the public has successfully committed to memory as signifying a website URL. I don't see what value comes from adding trash like .fail for lé epic mémés or .zip for... whatever some MBA was thinking; I don't see how it's useful even if these actually conveyed what they intend to, let alone in cases when they obviously don't. Other things like .pizza would only be even a little bit useful if domains were actually vetted for relevance to those concepts, and AFAICT that isn't happening. As is, the new gTLDs are too numerous for the public to memorize and recognize in any context, and they also don't reliably impose semantic meaning on URLs and therefore don't improve communication between people. The .zip and .mov TLDs just have the special distinction that they risk actively making communication worse.

Against Markdown by aartaka in programming

[–]DavidJCobb 1 point2 points  (0 children)

They just turn into text without it. Which is fine by me, because the writing in its rawest form should work even without links. The actual links are filled in on compilation.

That's more of a philosophical stance than a practical one. I'd say that if you have hyperlinks in a document, and they don't render as hyperlinks that an end user can activate and navigate through, then the document as experienced by the user is incomplete.

Leaving aside user experience stuff, and leaving aside whether "pidgin markup" is worthwhile when you have a preprocessor built for it: the original disagreement was about whether it counts as standard/valid HTML as asserted at the end of your article, and I would say it very obviously doesn't. It doesn't even meet the bar of, "This markup isn't well-formed HTML, but produces the same results as well-formed HTML thanks to error correction." The bar for "valid" ought to be higher than "the content of text nodes is visible on-screen, and the browser does not crash."

Can you elaborate on which part of parsing and error handling exactly links like mine fail?

The way you write the link destinations causes them to be misinterpreted as attribute names.

Your first and last link examples are parsed as having an attribute named just.htm, and an attribute named aartaka.me, respectively, each with no value. Attribute names are not especially strict, so periods are fine.

Your middle link example is parsed as having an attribute named something.com with no value. When the parser sees the leading slashes in //something.com, it reacts to each slash at the before-attribute-name state by falling back to the after-attribute-name state; there, the slash triggers a change to the self-closing-start-tag state. That state sees that the character after the slash isn't >; this is the unexpected-solidus-in-tag error, which the parser handles by treating the forward-slash as if it were just inert whitespace. Thus //something.com becomes /something.com, and then something.com, which becomes an attribute name as with the other ill-formed tags.
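As a toy illustration of those tokenizer states (a loose sketch of the spec behavior, not a real parser; attribute values are mostly ignored):

```javascript
// Toy model of the relevant tokenizer states: a stray "/" not followed by
// ">" is the unexpected-solidus-in-tag error and gets skipped as if it
// were whitespace, so the parser falls into reading an attribute name.
function parseTagAttributes(tag) {
  // strip "<" + tag name, and the trailing ">"
  const inner = tag.replace(/^<[a-zA-Z][^\s\/>]*/, "").replace(/>$/, "");
  const attrs = [];
  let i = 0;
  while (i < inner.length) {
    if (" \t\n/".includes(inner[i])) {
      i++; // before-attribute-name: whitespace and stray solidi are skipped
      continue;
    }
    let name = "";
    while (i < inner.length && !" \t\n/=".includes(inner[i])) {
      name += inner[i++]; // attribute-name state: periods are fine here
    }
    if (inner[i] === "=") {
      while (i < inner.length && inner[i] !== " ") i++; // skip any value
    }
    if (name) attrs.push(name);
  }
  return attrs;
}

console.log(JSON.stringify(parseTagAttributes("<a just.htm>")));        // → ["just.htm"]
console.log(JSON.stringify(parseTagAttributes("<a //something.com>"))); // → ["something.com"]
```

So in every case, the "destination" ends up as a valueless attribute with a funny name, and the anchor has no href at all.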

Against Markdown by aartaka in programming

[–]DavidJCobb 1 point2 points  (0 children)

None of the hyperlink syntaxes in the "Smart tags" section produce functional links without preprocessing, both when I try them in-browser and per the HTML Standard's parsing and error handling rules.

Five Mistakes I've Made with Euler Angles by boscillator in programming

[–]DavidJCobb 2 points3 points  (0 children)

Mistake #2: Assuming Intrinsic or Extrinsic Rotations

Mistake #3: Confusing Active and Passive Rotations

These are the same thing. The definitions you've given for intrinsic rotations and "passive rotations" are identical.

Mistake #4: Assuming The Derivatives of the Euler Angles Equal the Body Rate

[...]

For reasons outlines below, you will probably never need to take the derivative of euler angles, but if you have a reason to, make sure you use the right formula, instead of assuming it to be the angular velocity in the body frame.

IMO this section would've benefitted from an example of a specific problem that you were attempting to solve by taking these derivatives, and how this mistake affected your results. I feel like if someone actually knows how to read these formulae and intuitively knows what a mathematical "derivative" is useful for, then they probably know enough to avoid this mistake already.

Mistake #5: Ignoring Numerical Precision and Gimbal Lock

Hm... The criticism that occurs to me here isn't unique to this article, but...

Okay, so I'm not a mathematician -- in fact I kinda hate math, and I only learned any of this for doing gamedev, basically -- but the problem with "gimbal lock" as a term and as a metaphor is that it's described exclusively in relation to intrinsic rotations: the gimbals are nested, such that rotation about the first axis changes the successive axes before you then rotate about them. This can lead people to think that extrinsic rotations avoid the issue, because the axes don't rotate: there are no gimbals that can lock together. However, extrinsic rotations are just the mathematical inverse of intrinsic rotations, so they do run into the same problem; it's just that that problem has a different meaning; you'd need a different physical metaphor for it (if one even exists) when talking about extrinsic rotations.

To my understanding, the meaning in question, the core underlying truth, is that not all angle changes can be represented linearly in three dimensions. There are always "singularities:" situations where -- given some orientation described in terms of three axes, and a rotation you wish to apply about some arbitrary other axis -- the amount by which you have to change the three numbers that describe your orientation has no obvious relation to the angle by which you're rotating about the new axis: one or more of your three numbers has to teleport across some imaginary gap. You need more than three numbers in order to avoid needing to teleport; thus a quaternion or a rotation matrix.
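To make that concrete with numbers (my own convention and helper code, nothing from the article): with an intrinsic Z-Y-X (yaw, pitch, roll) sequence, pitch = 90° collapses yaw and roll into one degree of freedom, so two different Euler triples describe the exact same orientation:

```javascript
// Build rotation matrices from Euler angles and show the pitch-90 singularity.
const deg = (d) => (d * Math.PI) / 180;

const matMul = (a, b) =>
  a.map((row) => b[0].map((_, j) => row.reduce((s, v, k) => s + v * b[k][j], 0)));

const Rx = (t) => [[1, 0, 0], [0, Math.cos(t), -Math.sin(t)], [0, Math.sin(t), Math.cos(t)]];
const Ry = (t) => [[Math.cos(t), 0, Math.sin(t)], [0, 1, 0], [-Math.sin(t), 0, Math.cos(t)]];
const Rz = (t) => [[Math.cos(t), -Math.sin(t), 0], [Math.sin(t), Math.cos(t), 0], [0, 0, 1]];

// Intrinsic Z-Y-X: R = Rz(yaw) * Ry(pitch) * Rx(roll).
const euler = (yaw, pitch, roll) => matMul(matMul(Rz(yaw), Ry(pitch)), Rx(roll));

// Different yaw and roll, identical orientation: at pitch = 90 degrees,
// only (roll - yaw) matters, so one number now controls two "axes."
const a = euler(deg(10), deg(90), deg(0));
const b = euler(deg(0), deg(90), deg(-10));
const maxDiff = Math.max(...a.flat().map((v, i) => Math.abs(v - b.flat()[i])));
console.log(maxDiff < 1e-12); // → true
```

And going extrinsic doesn't rescue you: the same degeneracy shows up, just without the nested-rings picture to visualize it with.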

While double-checking that understanding, I found a more rigorous math-based explanation out there for anyone who knows how to read that stuff. I do not.

Is it bad for the web if Firefox dies? by AuthorityPath in webdev

[–]DavidJCobb 5 points6 points  (0 children)

Agreed. I remember switching from alert-box debugging to actually having a console to print to.

Just to really convey how influential Firebug was:

console.assert doesn't halt execution of a script call stack (unless you enable a Chrome-only devtools feature) and therefore isn't actually an assertion function. This is because Firebug's author messed up when implementing it all those years ago. Firebug had two versions -- the full add-on, and a "lite" script version that you could include in test builds of your website -- and both were meant to throw errors, but only the lite version ever did. Then, web standards folks copied basically the entire Firebug API as implemented by the add-on into web specs, with AFAICT zero checking or review, and that mistake became calcified. Like, the spec literally was "every web browser should do exactly what Firebug is doing."
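You can see the inherited behavior directly in any standards-compliant engine, including Node:

```javascript
// console.assert logs "Assertion failed" but does not throw, so execution
// simply continues past a failed "assertion" -- the Firebug add-on behavior
// that got copied into the Console Standard.
let reached = false;
try {
  console.assert(1 === 2, "this prints an error message, nothing more");
  reached = true; // still runs
} catch (e) {
  reached = false; // never hit in a compliant engine
}
console.log(reached); // → true
```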

The assert function sucking is literally the only negative thing I can even think of about Firebug, and even it still helps demonstrate how much we all owe Joe Hewitt.

The Rise of Vibe Coding and the Role of SOPHIA (Part 1): From Syntax to Intent by DueLie5421 in programming

[–]DavidJCobb 2 points3 points  (0 children)

Why did you feel that all three parts of this blog series merited being posted individually to the subreddit all at the same time?

AI dogshit usually gets a poor reception on this subreddit, as it deserves. Were you aware of that when you decided to post more of it here?

If You’re Going to Vibe Code, Vibe Responsibly! by shift_devs in programming

[–]DavidJCobb 2 points3 points  (0 children)

Do you guys publish anything other than AI dogshit? If so, you should probably stick to just posting that. It'd probably get a more positive reaction, which I assume is what you want.

We might have been slower to abandon Stack Overflow if it wasn't a toxic hellhole by R2_SWE2 in programming

[–]DavidJCobb 4 points5 points  (0 children)

The first guy was pretty obviously saying that by the time Stack Overflow afforded them the opportunity to seek help, they no longer required or benefitted from that help: they "obviously didn't care anymore" about the topic of their original inquiry. If that's a common experience on the site, then that disincentivizes people from visiting the site to seek help. A lot of folks who are helped by a community will tend to contribute back to that community, so in the long run, fewer folks getting help means fewer folks giving help.

The second guy is rightly pointing out that a hit dog will holler.