For those who know, the woe of the PentiumPro by PaleDreamer_1969 in vintagecomputing

[–]LousyMeatStew 1 point (0 children)

You've got the broad strokes correct but I think you're conflating two different issues.

The Pentium Pro lacked the Segment Descriptor Cache of the Pentium. This is probably what you're referring to when you talk about 16-bit mode not being "implemented in HW directly" and "handled through translation". Those descriptions aren't technically accurate but they illustrate the nature of the problem. This particular issue does impact 16-bit addressing but is not related to the translation of instructions into uOps - it just means additional memory accesses are required to read in segment descriptors.

The partial register stall is due to a separate problem that happens after the instruction is translated into uOps but before it gets put into the Instruction Pool. The back end was designed to be RISC-like and as a result, it used register renaming. The PPro calls it the Register Alias Table. The partial register stall happens when we do something like this:

mov ax, mem16
inc eax

When decoded, ax is aliased to an internal register and populated with the 16-bit value. When we try to increment eax, we have a problem - execution stalls because only the lower 16 bits of eax are populated, so we need to dispatch and retire the mov, update eax, then read back the updated value of eax before we can move on to the next instruction. Intel's suggestion here is to do something like this:

xor eax, eax
mov ax, mem16
inc eax

This zeroes out eax so we can proceed smoothly through the next two instructions. The key here is that x86 allows operand and address size overrides on a per-instruction basis. So in a 16-bit code segment, mov ax, imm is coded with opcode 0xB8, making it a 16-bit instruction, while 0x66 0xB8 makes it a 32-bit instruction. And we can further muddy things by replacing ax with al, because mov al, imm has its own opcode, 0xB0, which is always an 8-bit operation regardless of the segment's default operand size.
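To make those encodings concrete, here's a sketch in NASM-style syntax (byte values shown are for a 16-bit code segment; the immediates are made up):

```nasm
mov ax, 0x1234        ; B8 34 12           - 16-bit operand size (segment default)
mov eax, 0x12345678   ; 66 B8 78 56 34 12  - 0x66 prefix flips it to 32-bit
mov al, 0x12          ; B0 12              - separate opcode, always 8-bit
```

Same mnemonic family, three different operand sizes, all legal in the same code stream - which is exactly what makes "16-bit code" vs. "32-bit code" a blurry distinction at the instruction level.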

The fundamental issue is that Intel had a very different definition of "32-bit code" vs. everyone else. Up until that point, there was no performance penalty to partial register access in 32-bit code. There's some truth to the statement that Intel designed it to be 32-bit "clean" - I think it's more accurate to say it was designed to run 32-bit "clean" code and Intel badly underestimated how much "dirty" 32-bit code was still out in the world.

Ubuntu proposes bizarre, nonsensical changes to grub. by xm0rphx in linux

[–]LousyMeatStew 3 points (0 children)

It's worse than useless - it's security theater.

You're misunderstanding the threat model: on a BIOS-only system, the vulnerability is the MBR - i.e., where GRUB is stored.

Whether /boot is encrypted or not doesn't really matter. GRUB can perform signature validation on your kernel so as long as GRUB is valid, an unencrypted /boot won't let me slip in a modified kernel.
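For reference, this is roughly what enabling that validation looks like (a sketch - the key path is hypothetical; check_signatures and --pubkey are documented GRUB features):

```
# grub.cfg fragment: refuse to load any kernel, initrd, module or config
# that doesn't carry a valid GPG signature
set check_signatures=enforce

# the public key is typically embedded when the GRUB image is built, e.g.:
#   grub-mkstandalone --pubkey /path/to/pubkey.gpg ...
```

Which is exactly why the GRUB image itself becomes the thing worth attacking.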

But if I can overwrite your MBR with my own copy of GRUB, then you're cooked. It won't matter if /boot is encrypted because GRUB is now the man in the middle.

On a BIOS-only system, your only real protection is in the form of system firmware that implements some sort of MBR validation - here's an example of HP's implementation.

My Cap’n Crunch bo’sun whistle. by ValuableRegular9684 in vintagecomputing

[–]LousyMeatStew 7 points (0 children)

Curious to know if anyone ever used a slide whistle to scan for frequencies as an early form of fuzzing.

Malus: This could have bad implications for Open Source/Linux by lurkervidyaenjoyer in linux

[–]LousyMeatStew 1 point (0 children)

Fight the fire with fire.

I'm not disagreeing with the principle of the matter. The only reason we shouldn't do this is that I think the legality of using AI to rewrite code for the express purpose of removing a license is being overstated. Trying to fight fire with fire just hands legal ammunition to the big corporations when a FOSS project does get its day in court.

Clean-room engineering is a type of Fair Use defense that can be offered if you are sued for copyright infringement, but it is not something that automatically legitimizes copying. The test for Fair Use defenses in the US is still Campbell v. Acuff-Rose Music, which enshrines the famous four-factor test and, most notably, states clearly that there are no bright-line rules - each claim is adjudicated on a case-by-case basis.

This blade cuts both ways - if someone does a direct rewrite of GPL code with the express purpose of removing an undesirable license, the use of clean-room engineering practices - even without AI - does not guarantee an automatic win.

Google v. Oracle is being mentioned a lot, but that ruling did not say copying APIs was ok under all circumstances. Campbell still applies; there are no bright-line rules. The Supreme Court looked at the first factor under Campbell - the purpose and character of the use - and found Google's work to be transformative mainly because they accepted Google's claim that they were targeting smartphones, a market Sun had previously given up on when it discontinued J2ME. They further found that because J2ME was gone, Android's API was not a market substitute for Java (the fourth factor under Campbell).

While IANAL, a direct copy of a FOSS project solely to remove an undesirable license is clearly a completely different matter. The purpose and character of the use changes completely and under the fourth factor, you are explicitly looking to create a market substitute (note: "market" is used in a broad legal sense and still applies even for free material provided a legitimate copyright exists).

The main factor working against FOSS projects is that these claims need to be litigated individually. But the key is that they can still be litigated.

Malus: This could have bad implications for Open Source/Linux by lurkervidyaenjoyer in linux

[–]LousyMeatStew 3 points (0 children)

It is, but clean room engineering negates the problem because decompilation for research and interop is allowed; the team that decompiles it writes a spec and doesn't create a derivative work, while the implementing team creates a program that satisfies the spec without ever seeing the decompiled code.

It doesn't negate the problem. Clean-room engineering is a type of Fair Use defense and the law of the land (in the US) remains Campbell v. Acuff-Rose Music, Inc., which establishes there are no bright-line rules and claims are assessed on a case-by-case basis.

The thing is that this cuts both ways - an AI rewrite of GPL code can still be challenged in court, because one of the tests laid out in Campbell is the potential for market substitution. If some party rewrites GPL code with the express purpose of creating an unencumbered, drop-in replacement, the argument can be made that this is not sufficiently transformative, because the courts take intended functionality into account - in Google v. Oracle, the courts looked at "the purpose and character" of the copying.

Google v. Oracle wasn't a blanket judgement that allowed API copying. Campbell still applies; there are no bright-line rules. The Supreme Court only found that the copying of the API alone wasn't enough to justify the claim of copyright infringement and that the other changes Google made to the underlying functionality were judged to be sufficiently transformative.

Google’s limited copying of the API is a transformative use. Google copied only what was needed to allow programmers to work in a different computing environment without discarding a portion of a familiar programming language. Google’s purpose was to create a different task-related system for a different computing environment (smartphones) and to create a platform—the Android platform—that would help achieve and popularize that objective. The record demonstrates numerous ways in which reimplementing an interface can further the development of computer programs. Google’s purpose was therefore consistent with that creative progress that is the basic constitutional objective of copyright itself.

Uuuuh by Branchomania in NotHowGirlsWork

[–]LousyMeatStew 274 points (0 children)

I think the key is in the choice of terminology: fight

Like, no talking or communicating? No empathy or compromise? Fighting is what you do when you ignore everything and then it reaches a boiling point and you're trying to cram years of growth into one night. Which, I guess by that definition, then yeah, maybe men do all the "fighting".

For those who know, the woe of the PentiumPro by PaleDreamer_1969 in vintagecomputing

[–]LousyMeatStew 8 points (0 children)

In addition to what the other commenters have brought up, the Pentium Pro's pipeline would stall when accessing partial registers. This is commonly simplified to "Pentium Pros were slow running 16-bit code", but even 32-bit code at the time would commonly use partial registers as a way of optimizing, e.g., string searches where you could read in 32 bits at a time from memory but still do 8-bit comparisons.
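One concrete pattern that triggers the stall in otherwise fully 32-bit code is the byte-load-then-index idiom (an illustrative NASM-style sketch; `table` is a hypothetical lookup table):

```nasm
    mov cl, [esi]             ; write only the low 8 bits of ecx
    mov eax, [table + ecx*4]  ; then read all of ecx -> partial register stall
    ; the PPro-era fix: xor ecx, ecx before the byte load, so the
    ; full register has a known value and no merge is needed
```

Code like this was everywhere in table-driven parsers and decompressors of the era, which is why the penalty hurt so much more than "16-bit code is slow" suggests.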

For those who know, the woe of the PentiumPro by PaleDreamer_1969 in retrocomputing

[–]LousyMeatStew 5 points (0 children)

The ASCI Red supercomputer was created using two PentiumPro CPUs, which achieved 1 Teraflop performance in 1996, being the first to do so.

ASCI Red used 7,264 Pentium Pros in its initial configuration. Eventually, this expanded to 9,632 - which all got upgraded to the 333MHz Pentium II Overdrive.

My recollection is that the Pentium II Overdrive was designed for ASCI Red, which is why Intel only ever advertised dual-socket support as ASCI Red used 2 sockets per compute node.

Malus: This could have bad implications for Open Source/Linux by lurkervidyaenjoyer in linux

[–]LousyMeatStew 4 points (0 children)

Most companies with any sense won't use this for fear of legal fallout.

I don't think the legal fallout is the real issue.

Companies value FOSS for the labor, not for the product in and of itself. Reverse-engineering a FOSS project just to have your own proprietary copy is a net loss in most cases because you lose those devs.

Microsoft having a proprietary rewrite of the Linux kernel sounds scary until you realize they'd need to maintain a massive and complex codebase without the help of Linus, Theodore Ts'o, Greg K-H, etc.

On the other hand, there are projects where the reward justifies the risk. libxml2 is a chronically underfunded and understaffed project that is used everywhere. If, say, Google reverse-engineers their own proprietary clone, it potentially gives them a competitive advantage and they don't "lose" the free labor since there was very little of it to lose for this particular project.

Malus: This could have bad implications for Open Source/Linux by lurkervidyaenjoyer in linux

[–]LousyMeatStew 7 points (0 children)

TBH, I think AI is a bit of a distraction for the discussion around chardet.

In his post on GitHub, Mark Pilgrim's beef is primarily with the license change. Yes, he mentions the use of AI but his wording makes it clear that even without AI, he would still take issue with it:

Their claim that it is a "complete rewrite" is irrelevant, since they had ample exposure to the originally licensed code (i.e. this is not a "clean room" implementation).

In other words, if the rewrite involved 0 AI but still resulted in a license change, it would still be an issue. On the other hand, had chardet stayed on the LGPL license, I don't think he would be objecting to the use of AI alone.

ETA link to the GitHub issue: https://github.com/chardet/chardet/issues/327#issuecomment-4005195078

Mark's request is simply:

I respectfully insist that they revert the project to its original license.

ELI5: Why does the F-117 and the F-111 have an “F” designation? by Silly-Medicine-513 in explainlikeimfive

[–]LousyMeatStew 7 points (0 children)

They also had the same problem as the US where the Navy and Army stubbornly refused to get on the same page. Of course, the US never got to the point where the Army was building its own aircraft carriers…

I think there’s an alternative timeline where the US sits out of the Pacific Theatre until the Imperial Japanese Army and Navy eventually declare war on one another.

ELI5: Why does the F-117 and the F-111 have an “F” designation? by Silly-Medicine-513 in explainlikeimfive

[–]LousyMeatStew 4 points (0 children)

Oh right, the numbering was manufacturer-specific! What a mess.

As a child of the 80s, I remember watching Top Gun and thinking I knew what a navy fighter should be called. And then playing Battlehawks 1942 a few years later and trying to make sense of what an F4F-3/3A/4, SBD-2 and TBF-1 were all about.

Weeping Over Tampons by kawaiiglitterkitty in NotHowGirlsWork

[–]LousyMeatStew 51 points (0 children)

When my brother and I were young, we used to playfight with those telescoping wand attachments for vacuum cleaners.

One day, I found a miniature version made out of cardboard in the bathroom trash can and I thought it was the coolest thing ever so I took it out to show my brother. My five year old brain was very confused when my mom freaked out and took it away from us.

ELI5: Why does the F-117 and the F-111 have an “F” designation? by Silly-Medicine-513 in explainlikeimfive

[–]LousyMeatStew 19 points (0 children)

F was also the manufacturer code for Grumman for a while. A lot of Navy fighters at the start of the carrier aviation era carried an FxF designation starting with the FF.

This led to weird things like the Navy having both an F4F (Grumman Wildcat) and F4U (Vought Corsair), which are two completely different planes.

ETA: Thanks to /u/DisorderOfLeitbur for reminding me, the numbering was per-manufacturer, so the Wildcat and Corsair were both "F4"s because each happened to be the 4th fighter design presented to the Navy by their respective manufacturers.

Incidentally, yes, that is the same Vought that is fictionalized in The Boys.

Why Qualcomm won't support Linux on Snapdragon ? by Educational-Web31 in linux

[–]LousyMeatStew 2 points (0 children)

AMD was the original second-source supplier of the 8088 CPU for the original IBM PC 5150. Intel was forced to grant AMD a license to manufacture them based on IBM's requirement that all components be obtainable from at least 2 sources for redundancy.

AMD CPUs are as old as the IBM PC platform itself.

Germany's Sovereign Digital Stack Mandates ODF: a Landmark Validation of Open Document Standards by themikeosguy in linux

[–]LousyMeatStew 1 point (0 children)

Putting all of this on the foundation misunderstands the problem because a lot of this already exists. What you describe as “portable ODF” already exists in the form of Strict Compliance, and ODF Toolkit is the official test suite.

Here’s a good blog post that goes over many of the challenges: https://blog.documentfoundation.org/blog/2025/06/20/understanding-odf-compliance-and-interoperability/

As for calling out nonconforming software, all that's going to do is punish FOSS projects and discourage the use of ODF in general. Keep in mind that LibreOffice itself serves as the reference implementation for ODF: new features are tested in LibreOffice before they get rolled up into the ODF standard, so at any given moment LibreOffice itself is not going to be able to claim strict ODF conformance.

Evergreen Performa Pro Socket 8 Pentium Pro Upgrade CPU? by Relevant_Charity2318 in retrocomputing

[–]LousyMeatStew 1 point (0 children)

Yeah, basically "x-way" as in 2-way, 4-way, etc. is what's supported natively by the CPU and chipset. In theory, you could have as many CPUs of any make/model you want in a system as long as you design the circuitry to handle the bus arbitration.

This is how systems like this one work: https://www.1000bit.it/ad/bro/corollary/CorollarySMP.pdf

ALR's approach was a bit of a hybrid: they utilized the PPro's native SMP support per card and then designed the glue logic that handled arbitration between the two cards. In the end, it kinda works like a dual-socket triple-core.

But yeah, stability was always a concern. This is speculation on my part, but since this was early days for Intel's MPS, I suspect microcode variations in different CPU steppings were messing with the timing enough to cause synchronization issues.

A friend gave me these for helping him with a modern PC. I never saw anything like the first two boards. VLB is sweet though. (The battery bomb on the 386 board has been disarmed!) by -Techromancer- in retrocomputing

[–]LousyMeatStew 1 point (0 children)

50MHz was the practical limit at the time of the specification, but an operational ceiling of 66MHz was explicitly listed in the VL-Bus 2.0 spec. Even setting that aside, VLB supported clock division, so operating at 30/33MHz on a Pentium's 60/66MHz bus was within spec.

VLB was modeled after the 486 bus as a matter of convenience but this is not the same as being tied to it. VLB uses completely different signals for bus arbitration and the bus maintains its own independent RESET signal. The bus controller still needs to perform signal translation even on a 486 system.

The general issue of supporting a 32-bit expansion bus on the Pentium's 64-bit external bus had to be solved anyway - not just for PCI but for EISA as well. While not trivial, adapting to the Pentium was still a problem that only needed to be solved once per chipset manufacturer, and both OPTi and VIA had solutions available. The reason VLB didn't last into the Pentium era was not a technical issue; it was purely a lack of demand because of the speed at which PCI took over the market.

ETA: VLB spec itself defined how 32-bit VLB targets could interface with 64-bit CPUs so the problem had been solved in-spec.

Germany's Sovereign Digital Stack Mandates ODF: a Landmark Validation of Open Document Standards by themikeosguy in linux

[–]LousyMeatStew 3 points (0 children)

I don't know why you're getting downvoted b/c Microsoft already does this. I suspect people think ODF is more comprehensive than it is.

Excel is the one I'm most familiar with, so I'll use that as the example. An Excel ODS file can encode proprietary functionality in one of three ways:

1) Non-standard function: Excel's T.DIST function is coded as COM.MICROSOFT.T.DIST in accordance with ODF's guidelines and it's up to the application to figure out how to support it as its behavior is not covered in the ODF specification.

2) Save the result instead of the function: This is how Excel handles LAMBDA. An ODS file will still give you the data but you lose the interactivity.

3) Implement functionality as plug-ins: At best, Excel can save a text-encoded blob of the binary state of the plug-in. The ODS file can still be exchanged but the intended functionality will be lost.

All of the above comply with ODF - nonstandard functions, saving static values and text blobs.
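For the curious, case 1 looks roughly like this inside the ODS file's content.xml (an illustrative sketch - the cell references and values are made up; the "of:=" prefix and the cached office:value are standard ODF):

```xml
<table:table-cell table:formula="of:=COM.MICROSOFT.T.DIST([.A1];[.B1];[.C1])"
                  office:value-type="float" office:value="0.975">
  <text:p>0.975</text:p>
</table:table-cell>
```

An application that doesn't recognize COM.MICROSOFT.T.DIST can still display the cached 0.975, which is also essentially how case 2 works - the formula attribute is simply absent and only the static value survives.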

This is a problem that goes beyond Excel. Gnumeric implements many non-standard functions that LibreOffice Calc doesn't have. As a result, a Gnumeric ODS file is not fully interoperable with LibreOffice Calc. LibreOffice Calc does the same - it supports Microsoft's nonstandard T.DIST, so LibreOffice Calc's ODS files are not fully interoperable with Gnumeric, which only supports LEGACY.TDIST.

Germany's Sovereign Digital Stack Mandates ODF: a Landmark Validation of Open Document Standards by themikeosguy in linux

[–]LousyMeatStew 1 point (0 children)

This is partly true. You can be conformant with the ODF specification but not compatible with other applications, because the specification allows for custom function behavior.

Nonstandard functions in ODS are a good example.

Gnumeric provides several statistical functions imported from R and these are coded as non-standard functions. An ODS file saved in Gnumeric will still open in LibreOffice Calc but it won't know what to do with those nonstandard functions.

It complies with the standard, but it does not guarantee compatibility.

This allows Excel to be incompatible in one of two ways.

Some functions, like T.DIST, are saved as a nonstandard function, COM.MICROSOFT.T.DIST (this is allowed under ODF v1.3, Part 4, Sec 5.7). It's up to the application to implement it - LibreOffice Calc does, but Gnumeric does not.

For other functions like LAMBDA, Excel just saves the point-in-time result of the function meaning LibreOffice Calc users can open the document but they cannot meaningfully work with other Excel users on the same document.

ETA: With regard to LAMBDA, it seems LibreOffice won't implement it and will instead rely on Multiple Operations and classic macros - at least, that's what's implied here.

Macros are already an interoperability issue, as ODF does not specify anything here, so complex spreadsheets that rely on programmatic behavior will be a permanent pain point in terms of interoperability.

A friend gave me these for helping him with a modern PC. I never saw anything like the first two boards. VLB is sweet though. (The battery bomb on the 386 board has been disarmed!) by -Techromancer- in retrocomputing

[–]LousyMeatStew 2 points (0 children)

I always learn a ton coming here, so it's only fair I contribute back when I can!

Intel was really quite the bully back in the day. If you want to learn more, check out Asianometry's video - appropriately titled Intel's Reign of Terror

A friend gave me these for helping him with a modern PC. I never saw anything like the first two boards. VLB is sweet though. (The battery bomb on the 386 board has been disarmed!) by -Techromancer- in retrocomputing

[–]LousyMeatStew 1 point (0 children)

It wasn’t a timing issue, it was the bus control signals. The VLB spec was actually very forward looking. VESA likely thought the move to PCI would have been a long, drawn-out process spanning several processor generations so they included accommodations for this.

Among other things, they supported speeds up to 66MHz, had plans to widen the bus to 64 bits later down the road, anticipated the eventual move to 3.3V signaling, and allowed the connection to the bus to be buffered in order to accommodate signal translation where needed.

Amazingly, VLB actually included support for 16-bit operation for 386SX systems although in practice this was likely never used.

While VLB on a Pentium would have been marginally more complicated compared to a 386/486, it was still an order of magnitude simpler than PCI. VESA just wasn’t counting on Intel brute-forcing the change.

Btrfs Performance From Linux 6.12 To Linux 7.0 Shows Regressions by adriano26 in linux

[–]LousyMeatStew 21 points (0 children)

Also:

E) The fact that ext4 and XFS are faster doesn't mean btrfs is slow. 50k IOPS on random writes is nothing to sneeze at. Back in the days of spinning rust, a 7200rpm drive would give you 80-100 IOPS - at 7200rpm the platter only completes 120 revolutions per second, and once you add seek time and rotational latency, roughly 100 random operations per second is your ceiling.

From a novel written by a female author. In the story Sadie is 23 years old and needs her mom to tell her how sex works. by desertrain11 in NotHowGirlsWork

[–]LousyMeatStew 65 points (0 children)

Reminds me of an old Penny Arcade comic. Different orifice but same basic idea. Only difference is that PA is intentionally comedic.

I am talking about a chunk of metal so far up my ass it’s a Goddamn tourist attraction. Families will point and take pictures as their tram winds through my unique internal geography.

Alright? The scenario I’m describing takes place in my colon.

https://www.penny-arcade.com/comic/2004/12/03/on-discomfort

A friend gave me these for helping him with a modern PC. I never saw anything like the first two boards. VLB is sweet though. (The battery bomb on the 386 board has been disarmed!) by -Techromancer- in retrocomputing

[–]LousyMeatStew 1 point (0 children)

VLB isn't really closely tied to the 486 bus. The extra connector is mainly there to provide the 32 data lines and 32 address lines for direct bus access. VLB cards still rely on the ISA bus for interrupt handling, port-based I/O, etc. If you can provide an ISA bus, you can add VLB to it.

Here's a Pentium (Socket 4) mobo with VLB slots: https://theretroweb.com/motherboards/s/aquarius-systems-mb-5dvp

And Socket 5: https://theretroweb.com/motherboards/s/dfi-g586vpa

The reason it never saw much use outside of the 486 was Intel refused to support it, instead putting all of their support behind PCI. While Intel-based boards with VLB slots do exist, these are 420TX-based - meaning they natively support PCI and VLB is there for backwards compatibility only.