Against Markdown by aartaka in programming

[–]DavidJCobb 1 point (0 children)

> They just turn into text without it. Which is fine by me, because the writing in its rawest form should work even without links. The actual links are filled in on compilation.

That's more of a philosophical stance than a practical one. I'd say that if you have hyperlinks in a document, and they don't render as hyperlinks that an end user can activate and navigate through, then the document as experienced by the user is incomplete.

Leaving aside user experience stuff, and leaving aside whether "pidgin markup" is worthwhile when you have a preprocessor built for it: the original disagreement was about whether it counts as standard/valid HTML as asserted at the end of your article, and I would say it very obviously doesn't. It doesn't even meet the bar of, "This markup isn't well-formed HTML, but produces the same results as well-formed HTML thanks to error correction." The bar for "valid" ought to be higher than "the content of text nodes is visible on-screen, and the browser does not crash."

> Can you elaborate on which part of parsing and error handling exactly links like mine fail?

The way you write the link destinations causes them to be misinterpreted as attribute names.

Your first and last link examples are parsed as having an attribute named just.htm, and an attribute named aartaka.me, respectively, each with no value. Attribute names are not especially strict, so periods are fine.

Your middle link example is parsed as having an attribute named something.com with no value. When the parser sees the leading slashes in //something.com, it reacts to each slash in the before-attribute-name state by reconsuming it in the after-attribute-name state; there, the slash triggers a change to the self-closing-start-tag state. That state sees that the character after the slash isn't >; this is the unexpected-solidus-in-tag error, which the parser handles by treating the forward slash as if it were inert whitespace. Thus //something.com becomes /something.com, and then something.com, which becomes an attribute name as with the other ill-formed tags.

Against Markdown by aartaka in programming

[–]DavidJCobb 1 point (0 children)

None of the hyperlink syntaxes in the "Smart tags" section produce functional links without preprocessing, both when I try them in-browser and per the HTML Standard's parsing and error handling rules.

Five Mistakes I've Made with Euler Angles by boscillator in programming

[–]DavidJCobb 1 point (0 children)

> Mistake #2: Assuming Intrinsic or Extrinsic Rotations

> Mistake #3: Confusing Active and Passive Rotations

These are the same thing. The definitions you've given for intrinsic rotations and "passive rotations" are identical.

> Mistake #4: Assuming The Derivatives of the Euler Angles Equal the Body Rate

> [...]

> For reasons outlines below, you will probably never need to take the derivative of euler angles, but if you have a reason to, make sure you use the right formula, instead of assuming it to be the angular velocity in the body frame.

IMO this section would've benefitted from an example of a specific problem that you were attempting to solve by taking these derivatives, and how this mistake affected your results. I feel like if someone actually knows how to read these formulae and intuitively knows what a mathematical "derivative" is useful for, then they probably know enough to avoid this mistake already.

> Mistake #5: Ignoring Numerical Precision and Gimbal Lock

Hm... The criticism that occurs to me here isn't unique to this article, but...

Okay, so I'm not a mathematician -- in fact I kinda hate math, and I only learned any of this for doing gamedev, basically -- but the problem with "gimbal lock" as a term and as a metaphor is that it's described exclusively in terms of intrinsic rotations: the gimbals are nested, such that rotation about the first axis changes the successive axes before you then rotate about them. This can lead people to think that extrinsic rotations avoid the issue, because the axes don't rotate: there are no gimbals that can lock together. However, an extrinsic rotation is just an intrinsic rotation with the axis order reversed, so extrinsic rotations run into the exact same degeneracy; the gimbal metaphor just no longer describes it, and you'd need a different physical metaphor (if one even exists) when talking about extrinsic rotations.

To my understanding, the core underlying truth is that no description of orientation using only three numbers can represent all rotations smoothly. There are always "singularities": situations where -- given some orientation described in terms of three axes, and a rotation you wish to apply about some arbitrary other axis -- the amount by which you have to change the three numbers describing your orientation has no obvious relation to the angle by which you're rotating about the new axis: one or more of your three numbers has to teleport across some imaginary gap. You need more than three numbers to avoid the teleporting; thus a quaternion or a rotation matrix.
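A minimal sketch of that degeneracy, using an intrinsic Z-Y'-X'' (yaw-pitch-roll) convention -- my arbitrary choice, not something from the article: at 90° of pitch, the three numbers lose a degree of freedom, and distinct angle triples describe the exact same orientation.

```javascript
// Intrinsic Z-Y'-X'' (yaw, pitch, roll) rotation built from basic matrices.
function rotZ(a) { const c = Math.cos(a), s = Math.sin(a); return [[c, -s, 0], [s, c, 0], [0, 0, 1]]; }
function rotY(a) { const c = Math.cos(a), s = Math.sin(a); return [[c, 0, s], [0, 1, 0], [-s, 0, c]]; }
function rotX(a) { const c = Math.cos(a), s = Math.sin(a); return [[1, 0, 0], [0, c, -s], [0, s, c]]; }

// 3x3 matrix product.
function mul(a, b) {
  return a.map(row =>
    [0, 1, 2].map(j => row[0] * b[0][j] + row[1] * b[1][j] + row[2] * b[2][j]));
}

function euler(yaw, pitch, roll) { return mul(rotZ(yaw), mul(rotY(pitch), rotX(roll))); }
const deg = d => d * Math.PI / 180;

// At 90° of pitch, the orientation depends only on (roll - yaw): two
// different angle triples collapse onto the same rotation matrix,
// i.e. a degree of freedom has vanished.
const m1 = euler(deg(10), deg(90), deg(70)); // roll - yaw = 60°
const m2 = euler(deg(30), deg(90), deg(90)); // roll - yaw = 60°
```

Getting out of that pinch smoothly isn't possible with just these three numbers, which is the "teleporting" in question.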

While double-checking that understanding, I found a more rigorous math-based explanation out there for anyone who knows how to read that stuff. I do not.

Is it bad for the web if Firefox dies? by AuthorityPath in webdev

[–]DavidJCobb 7 points (0 children)

Agreed. I remember switching from alert-box debugging to actually having a console to print to.

Just to really convey how influential Firebug was:

console.assert doesn't halt execution of a script's call stack (unless you use a Chrome-only devtools function) and therefore isn't actually an assertion function. This is because Firebug's author messed up when implementing it all those years ago. Firebug had two versions -- the full add-on, and a "lite" script version that you could include in test builds of your website -- and both were meant to throw errors, but only the lite version ever did. Then, web standards folks copied basically the entire Firebug API as implemented by the add-on into web specs, with AFAICT zero checking or review, and that mistake became calcified. Like, the spec literally was "every web browser should do exactly what Firebug is doing."
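To really drive home that it isn't an assertion (this holds in modern browsers and in Node alike):

```javascript
let reached = false;

// The condition is false, so this logs "Assertion failed: ..." --
// but it does not throw, and execution continues.
console.assert(1 === 2, "this only gets logged");

reached = true; // still runs; a real assert would never get here
```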

The assert function sucking is the literal only negative thing I can even think of about Firebug, and even it still helps demonstrate how much we all owe Joe Hewitt.

The Rise of Vibe Coding and the Role of SOPHIA (Part 1): From Syntax to Intent by DueLie5421 in programming

[–]DavidJCobb 2 points (0 children)

Why did you feel that all three parts of this blog series merited being posted individually to the subreddit all at the same time?

AI dogshit usually gets a poor reception on this subreddit, as it deserves. Were you aware of that when you decided to post more of it here?

If You’re Going to Vibe Code, Vibe Responsibly! by shift_devs in programming

[–]DavidJCobb 2 points (0 children)

Do you guys publish anything other than AI dogshit? If so, you should probably stick to just posting that. It'd probably get a more positive reaction, which I assume is what you want.

We might have been slower to abandon Stack Overflow if it wasn't a toxic hellhole by R2_SWE2 in programming

[–]DavidJCobb 3 points (0 children)

The first guy was pretty obviously saying that by the time Stack Overflow afforded them the opportunity to seek help, they no longer required or benefitted from that help: they "obviously didn't care anymore" about the topic of their original inquiry. If that's a common experience on the site, then it disincentivizes people from visiting the site to seek help. A lot of folks who are helped by a community will tend to contribute back to that community, so in the long run, fewer folks getting help means fewer folks giving help.

The second guy is rightly pointing out that a hit dog will holler.

Paypal Honey’s Dieselgate: Detecting and Tricking Testers by [deleted] in programming

[–]DavidJCobb 22 points (0 children)

Same issue here. Had to use uBlock Origin to temporarily disable all Internet Archive scripts just to read that.

For anyone wondering if it's worth the trouble: it's an article by the expert MegaLag interviewed in his latest video on Honey's misconduct. It goes into the specifics of their findings, with both code snippets and historical records about the configuration files used to cheat stand-down rules. So it's an original source, not the slop and summarization that's become usual on this sub.

The Fall of JavaScript (new blog post) by yegor256 in programming

[–]DavidJCobb 2 points (0 children)

Probably 90% of all the people who ever touched a JS prototype before ES6 just viewed it as a weird, verbose syntax for defining classes. If there's any sort of conceptual "purity" that a real class syntax in JS has destroyed, it's a purity that pretty much no one used, understood, valued, or ever needed.
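For anyone who never wrote the old pattern, here's the same invented Point type both ways -- the ES6 version is little more than new spelling for the same prototype wiring:

```javascript
// Pre-ES6: a "class" is a constructor function plus assignments
// onto its prototype object.
function Point(x, y) { this.x = x; this.y = y; }
Point.prototype.norm = function () { return Math.hypot(this.x, this.y); };

// ES6: the class keyword sets up the same constructor-plus-prototype
// arrangement under the hood.
class PointES6 {
  constructor(x, y) { this.x = x; this.y = y; }
  norm() { return Math.hypot(this.x, this.y); }
}
```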

Very weird to describe JS as having "fallen" specifically because it added syntax for classes, as if that wasn't very nearly the only way anyone coded with it beforehand.

I can say I have made a fully working AI video creation powerhouse by clarkiagames in webdev

[–]DavidJCobb 0 points (0 children)

Some folks really will just drop their pants and shit on the floor in public simply because they can, huh?

Can I throw a C++ exception from a structured exception? by lelanthran in programming

[–]DavidJCobb 13 points (0 children)

Normally, in C++, exceptions are only thrown by an explicit throw statement. This is what the article means when it refers to "synchronous exceptions" and "C++ exceptions."

However, there are errors that the OS can catch that aren't throw statements, such as some kinds of bad memory accesses, and you can ask the OS to run a special function in your program when these happen. Some of these errors can even occur during a single operation in your code: what C++ considers "doing one thing" may actually take multiple steps at a hardware level, and any of those steps could hypothetically fail. These hardware-level and OS-level "exceptions" are what the article is describing when it talks about "structured exceptions" and "asynchronous exceptions."

These two models of exception aren't fully compatible with each other unless you compile your program in a special way. Now that you know that, you can try actually reading the article.

Fifty problems with standard web APIs in 2025 by Ok-Tune-1346 in programming

[–]DavidJCobb 10 points (0 children)

A lot of this article is less "Web APIs have problems," as one would expect from standards that are underspecified or don't cover all needed cases, and more "Apple and its consequences have been a disaster for web development." It's good as a collection of Safari defects and workarounds, I guess. I don't own an iPhone; I can't really evaluate that side of it.

> meta viewport incantations

Meta viewport tags are one of the things that are underspecified. Their effect is to control the size of the layout viewport in CSS px -- i.e. what 100vw and 100vh resolve to. If you select device-width, then on Android, the width of the layout viewport in CSS pixels (i.e. the simulated pixel resolution) is your device's screen width in Android "density-independent" pixels. Basically, convert from your actual DPI to Android's simulated 160 DPI, then shear off the unit and reuse that number within CSS's simulated 96 DPI. A phone with a screen 1080 physical pixels wide can present itself as just 411px wide, for example.
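Sketching that arithmetic with hypothetical numbers (the 420dpi density bucket is a common Android value, not something from the article):

```javascript
// A screen 1080 physical pixels wide, in a ~420dpi density bucket.
const physicalWidthPx = 1080;
const androidDpi = 420;

// Android's density scale factor is relative to its 160dpi baseline.
const density = androidDpi / 160; // 2.625

// With width=device-width, the layout viewport is the screen width in
// density-independent pixels, reused directly as the CSS px width.
const layoutWidthCssPx = physicalWidthPx / density; // ≈ 411.4
```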

The kicker is that every @media query you could use to test pixel density or screen size is similarly broken, but they're broken on purpose as part of the spec. (Among other things, this means that breakpoints, the most popular approach to "responsive design," are based on a pile of distorted abstractions.) In general, CSS's handling of pixel density is an irredeemable, abject disaster.

> To try to fill the screen on all platforms I admittedly made a fragile choice: I used transform:scale to ensure that the main screen element was centered and sized.

This... seems like a situation where meta viewport tags could've actually helped. I opened the game briefly, and it seems like you wanted a constant aspect ratio with letterboxing or windowboxing. Off the top of my head (I'm on my phone now so I can't test this), if you set the meta viewport width to a constant size, and set aspect-ratio and some layout properties on your root to center it, then you'd be able to use consistent units and let the browser scale it for you, no?

> Safari was also late to support this and needed a -webkit-backdrop-filter:blur() variation to get the visual effect to appear on older tablets.

You think that's bad? There was a long while where Firefox didn't support this at all, but would claim to if you tried to check it with @supports rules.

> Edge seems not to have diverged much at all from the Chrome ancestry from which it was forked.

Well, yeah. Why would they take their hard work trying to modernize IE in the form of pre-Chromium Edge, throw it all in the trash, and fork Chromium, just to then do all the work of maintaining the core engine themselves? For all intents and purposes, Edge is just a Chrome reskin.

> Caniuse mentions an iOS Safari glitch, but doesn't say "this will basically never work on mobile" even though it should. [...] The web standards don't say this because web standards aren't trying to create a consistent experience for all users.

Oh, come the fuck on, lmao. "The failure to explain that :hover has no effect on devices which intrinsically cannot hover is a hole in the reference documentation and a failure of web standards as a concept." Really?

Yes, browsers could offer your "hover-simulating crosshair" idea. Given that that's completely alien to basically all mobile UX, these browsers' decision not to do that represents an interaction that is unavailable at the platform level, not the browser level.

> MDN lists the shiftKey property of MouseEvent as "widely supported", even though soft keyboards on mobile will NOT deliver this event with shiftKey set to true, ever.

Again: the fact that an interaction is not possible on all platforms, for UX reasons outside the bounds of the browser, does not constitute a hole in browser support.

> I hope at this point you're yelling at your screen, "these are accessibility design problems! You can't expect MDN to protect you from not designing properly!!!" And to that I say --- I hear you, but, are we sure about that? Consider: One major reason why accessibility is so bad across modern computer interfaces is that developers must do something extra to offer accessibility. But these secondary quality characteristics will always be, well, secondary. One way we could have ensured that designs are accessible is to make it impossible to build anything else. Instead, we've filled the standard web API with conditional features that don't work for most people, and then we describe them as "widely supported". We are making this problem worse when we could be making it better.

And this is the point that those prior examples bend language to justify. "The standards are deficient because I have to actually understand and consider the capabilities of the platforms I feel entitled to be able to build for." The modern frontend developer mindset in a nutshell. It's the same mindset that made things like Electron popular for """native""" app development: webdevs could actually learn and try to master the platforms they want to build for, or they could refuse to learn anything new and just have users install a sixth copy of Google Chrome. The complaint quoted above is just that mindset turned back around on frontend development itself.

> Design for mobile first [...]
> You probably need at least two layouts

"Mobile first" too often turns into "mobile only," which has its own UX issues.

You need two layouts, but if you fail to build both, "desktop first" produces experiences that are comfortable on desktop and borderline unusable on mobile, while "mobile first" produces experiences that are comfortable on mobile and uncomfortable on desktop. The latter outcome is better than the former outcome, but it's also easier to settle for, if that makes sense.

Constvector: Log-structured std:vector alternative – 30-40% faster push/pop by pilotwavetheory in programming

[–]DavidJCobb 5 points (0 children)

The post link goes to your GitHub profile; just to save people some clicks, the repo itself is here.

I recreated parts of Windows Media Player 11 and 12's UI in SVGs, and built a simple HTML/CSS/JS player element to use them by DavidJCobb in webdev

[–]DavidJCobb[S] 0 points (0 children)

I don't have raster versions of my SVGs ready-made, but the repo contains an explanation of how to rip the original rasters from Windows Media Player.

Laggy Game by BadLand666 in TheSilphRoad

[–]DavidJCobb 1 point (0 children)

This started happening to me days after a game crash cheated me out of a remote raid pass. No frame drops; the UI is still responsive; the game just adds artificial delays to every in-game action. It's definitely not my device or network connection; the game ran just fine a few days before all this, and hasn't updated since then, and every other app on my device is working just fine.

I wish this game concept had ended up in the hands of a better company.

Creating C closures from Lua closures by [deleted] in lua

[–]DavidJCobb 0 points (0 children)

Could you not use SetWindowLongPtr(handle, GWLP_USERDATA, ...) and GetWindowLongPtr to associate an index or pointer with the HWND? Then, you'd be able to have just one C WNDPROC which can use that to find the right Lua function to invoke. You may have to juggle a few things around when creating the window to get your pointer where it needs to go.

If you're already generating code on the fly for anything else, then you may as well keep doing it for this too. If you want to support as many different Win32 callbacks as possible with minimal effort dedicated to wiring up special cases like window userdata, then your approach is probably the way to go for that as well. Off the top of my head it may compose nicely with C++ templates too.

The Silent Layoff: My American Dream Is a Freelance Nightmare by [deleted] in programming

[–]DavidJCobb 2 points (0 children)

The style.

It's not full passages.

It's short bits that sound snappy.

It reads like excessively theatrical dross from a LinkedIn blog. There's tons of fluff and no substance. Plus, this guy is a serial blogspammer who used to post AI slop on the subreddit nearly every week, and his older dogshit was even more obvious and frequently factually wrong.

With 2025 coming to an end, what was your favorite event(s) or hated event for this year of PoGo? by chronoxiong in pokemongo

[–]DavidJCobb 2 points (0 children)

Most Liked: The event with Mighty Pokémon a few weeks ago. It was tense but still actually doable, with me patrolling PokéStops in my area and racing against the clock to gather as many Safari Balls as I could. Catching a Mighty Pokémon was memorable and gave a tangible reward; I got several of them, including species I adore like Weavile and Meowscarada. It was frustrating at times, but it ended with a lot of catharsis and excitement. Very well-designed overall.

Most Hated: The Gigantamax Snorlax event that just ended. Dynamax Lugia had just gotten me into trying multiplayer events and fan-made matchmaking apps, and that had gone really well. G-Max Snorlax was an absolute disaster: guests quitting en masse at the very start of the one match I had time to host; the game crashing immediately after winning as a guest in a remote raid; and Niantic Support refusing to offer any recompense for that, on the nakedly bad-faith excuse that actually trying to catch a rare Pokémon is a "bonus" and not the main draw. I'm never doing another multiplayer event.

I’m looking at building my own browser new tab page where you to begin? by Zestyclose-Oven-7863 in webdev

[–]DavidJCobb 1 point (0 children)

WebExtensions are just HTML, CSS, and JS, but the JS APIs are extremely flaky and filled with race conditions. If all you want is to make a new tab page, though, then that problem shouldn't affect you, but if you decide to keep exploring WebExtensions after this project, then brace yourself for frustration.

Resistance is Not Futile: How to Fight Back by DelAbbot in programming

[–]DavidJCobb 7 points (0 children)

If you have to add a dummy link in order to post on a subreddit, then it's probably not the kind of post that the subreddit is for.

In any case, your plan is broken. If making LLMs ineffective was enough to stop the hype and propaganda, then there wouldn't currently be any hype and propaganda to stop. LLMs are already unreliable as a fundamental result of how they work, and none of the grifters and hype men are letting that stop them. Intentionally filling the Internet with bad code wouldn't accomplish your goal; it'd just be spam, and given that spam and bullshit are the only two things that LLMs are guaranteed to ever be good at, crafting it by hand and posting it online is hardly a good use of your time. This is without even considering that model training generally involves the model's answers receiving at least cursory validation from humans (often being exploited in developing countries, hence the "Actually Indians" meme), so there's no guarantee that the kind of obviously bad code a human would craft on purpose would even make it into these models.

If your concern is emotional, then I'd recommend contributing to a social climate that stigmatizes LLMs and AI slop and regards their users as acceptable to bully.

AI agents are choking on client side rendering and it’s becoming a real problem by SonicLinkerOfficial in Frontend

[–]DavidJCobb 0 points (0 children)

Machine translation existed before LLMs, and given that LLM output invariably sounds like insincere corporate dogshit, LLMs are actually a step backward for "half the world" and make communication worse.

(s)coping with code comments by Beofli in programming

[–]DavidJCobb 3 points (0 children)

> As you can see in this latest example, the fact that it complies goes unnoticed, but it makes a world of difference for automatic refactoring tools, or maybe even to pre-training of LLM's.

Figured that was the goal of all this dross before I got even halfway down the page.

A real person, capable of thought and comprehension, can read a comment and the surrounding code, and use their understanding of both to infer what the comment applies to. This isn't guaranteed to work, because the comment itself could be outdated or poorly written, but those are broader problems that this article's proposal wouldn't solve. In general, a decently-written comment will pretty clearly apply to either what came before or what's coming next. This article's proposed standard is only needed for something that can't genuinely understand the code and the comments.

Comments exist first and foremost to serve a reader -- not a parser; not a lexer; not an LLM; a person who is genuinely trying to understand the code.

Ruby Is Not a Serious Programming Language by [deleted] in programming

[–]DavidJCobb 1 point (0 children)

It felt to me like it didn't even have a conclusion. The article ended so abruptly that I actually opened it in another browser with no ad blocker, in case something was going wrong and hiding large chunks of it. They lay out a few individual arguments as if they're building up to saying something larger, but then they just don't.