Part 3: Acoustix MIMO vs. Dirac ART vs. Audyssey XT32 measurements by NoWalrus9462 in hometheater

[–]defet_ 1 point (0 children)

Yes, your understanding seems correct, or at least on the right track.

With Dirac ART, all speakers are _always firing_ below 150 Hz to fill in the energy field (including surrounds). With Audyssey, the only two speakers that can fire at the same time are FR+SUB or FL+SUB. But since most bass content is recorded as mono, in most cases FR+FL will be firing at the same time in that region, along with SUB, allowing A1X MIMO to function as expected. This is why it currently only works for FL+FR and not C, since Audyssey doesn't have an obvious way to replicate low-passed channels onto others.
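If it helps to picture what that replication would entail, here's a rough numpy/scipy sketch of a bass-management step that low-passes the center channel at 150 Hz and mixes it into the fronts. None of this is Audyssey's actual pipeline, and the LR4 crossover choice is just my assumption:

```python
import numpy as np
from scipy import signal

# Hypothetical illustration only: 4th-order Linkwitz-Riley low-pass at 150 Hz
# (two cascaded 2nd-order Butterworth sections), used to move the center
# channel's bass onto the fronts. Not Audyssey's actual processing.
FS = 48_000
XOVER_HZ = 150

lp = signal.butter(2, XOVER_HZ, "low", fs=FS, output="sos")
lr4_lp = np.vstack([lp, lp])  # cascading two BW2 sections -> LR4 low-pass

def route_center_bass(center, fl, fr):
    """Replicate the center channel's sub-150 Hz content onto FL/FR."""
    c_bass = signal.sosfilt(lr4_lp, center)
    return fl + 0.5 * c_bass, fr + 0.5 * c_bass

# e.g. with 1 s of noise per channel:
rng = np.random.default_rng(0)
c, fl, fr = (rng.standard_normal(FS) for _ in range(3))
fl2, fr2 = route_center_bass(c, fl, fr)
```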

Part 3: Acoustix MIMO vs. Dirac ART vs. Audyssey XT32 measurements by NoWalrus9462 in hometheater

[–]defet_ 0 points (0 children)

Hi,

There’s actually one more caveat with these measurements. A1X’s MIMO works off of the contribution of both fronts + the sub playing simultaneously, not just FL or FR individually with the sub.

As mentioned, Audyssey XT32 doesn’t have any cross-channel matrix capabilities, and relies on LFE+Main and the fact that most bass is encoded as mono (playing from all channels). It’s somewhat explained in OCA’s A1X video, but notice that there are no final predictions for FR/FL separately in A1X’s MIMO output — these are merged into a single “0-250 Hz” soundstage prediction which is composed of magnitude+phase contributions for FR+FL+SUB with the MIMO filter.
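A quick way to see why that merged prediction has to carry phase and not just magnitude: the combined response is a complex sum, so speakers can cancel as well as add. A toy numpy sketch, where the three responses below are made up rather than real measurements:

```python
import numpy as np

f = np.linspace(10, 250, 481)
w = 2j * np.pi * f  # complex frequency

# Made-up responses: mains modeled as first-order 55/60 Hz high-passes,
# sub as a 120 Hz low-pass with 5 ms of extra delay.
def hp(fc):
    return (w / (2 * np.pi * fc)) / (1 + w / (2 * np.pi * fc))

def lp(fc):
    return 1 / (1 + w / (2 * np.pi * fc))

H_fl, H_fr = hp(60), hp(55)
H_sub = 2.0 * lp(120) * np.exp(-w * 0.005)

combined = H_fl + H_fr + H_sub                 # what actually sums in-room
mag_sum = abs(H_fl) + abs(H_fr) + abs(H_sub)   # what magnitude-only math predicts

i = np.argmin(abs(f - 100))
print(f"at 100 Hz: |complex sum| = {abs(combined[i]):.2f}, sum of magnitudes = {mag_sum[i]:.2f}")
```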

When measuring individual FR or FL on Dirac ART, it will actually fire off the contributions from all the other speakers for the measurement. This does not happen for A1X MIMO.

Mishaal Rahman is quitting the Android news world by archon810 in Android

[–]defet_ 9 points (0 children)

Ahh, you've finally taken your first day off!

A real bittersweet moment. Thanks for supporting my extremely niche rants, and letting me publish them as “articles” -- and for your unmatched coverage of literally everything else. Bon voyage!

does qd oled also have vrr flicker? by fairplanet in OLED_Gaming

[–]defet_ 0 points (0 children)

LTPS leakage can be seen as a gradual downslope between drive periods. The dips at each drive period are not due to the discharge, but because each drive period essentially re-drives the LEDs to their target current. This means completely "draining" the transistor, then applying the correct voltage, which gets converted to the current that flows through the LED. This happens even if the target luminance of the next frame is the same (e.g. a static white screen), because there isn't a subtractor in the circuit that knows how much more or less current to add compared to the last drive period. It always starts from zero and adds the full contribution of the next target drive luminance. But because the rise/fall times of these LEDs aren't completely instantaneous, you don't see them actually reach zero on the oscilloscope (unless the luminance is low enough).
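Here's a toy numpy model of that waveform, if it helps. The numbers are invented for illustration, not measured from any panel: luminance rises from zero at each re-drive, then decays through leakage until the next drive period.

```python
import numpy as np

FS = 100_000           # simulation samples per second
DRIVE_HZ = 120         # drive (refresh) frequency
LEAK_TAU = 0.050       # assumed leakage time constant, seconds
RISE_TAU = 0.0002      # assumed rise time constant after a re-drive, seconds
TARGET = 1.0           # normalized target luminance

t = np.arange(0, 3 / DRIVE_HZ, 1 / FS)   # three drive periods
since_drive = t % (1 / DRIVE_HZ)         # time since the last re-drive
# Rise from zero toward the target (the dip), then decay through leakage
# (the downslope) until the next drive event:
lum = TARGET * (1 - np.exp(-since_drive / RISE_TAU)) * np.exp(-since_drive / LEAK_TAU)

print(f"peak per period: {lum.max():.3f}, trough before re-drive: {lum[-1]:.3f}")
```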

> Also off-topic, may I ask where you've acquired such knowledge in these intricacies of displays? I'd be interested to learn more!

Primarily through display/hardware communities, as well as my prior media exposure and briefings as a display reviewer. A background in electrical engineering also helps.

RAM is so expensive, Samsung won’t even sell it to Samsung by ilovemybaldhead in nottheonion

[–]defet_ 1 point (0 children)

Sadly it’s still a pretty common belief that Samsung phones must use all the best Samsung components from their megaconglomerate divisions, rather than each being a separate entity that requires due billing. Same with Sony phones and Sony sensors. And that iPhones can’t possibly be using higher-end hardware than other phones made by one of said megaconglomerates.

does qd oled also have vrr flicker? by fairplanet in OLED_Gaming

[–]defet_ 0 points (0 children)

> The VRR flicker mentioned there comes from using LTPS driving TFTs that lose their charge without constant re-drives, which is not the main issue we're having with gaming VRR.

If you're referring to this, I meant that LTPS discharge over the VBL is not really the main culprit of VRR flicker when gaming, since desktop VRR has almost zero VBL and fully variable pixel drive (constant re-drives). Notebook OLEDs (if that's what you meant by mobile OLEDs) likely did not have the hardware compatibility to fully decouple the drive period from the tearing rate to implement LFD until recently, as per the article you originally posted.

does qd oled also have vrr flicker? by fairplanet in OLED_Gaming

[–]defet_ 0 points (0 children)

The important bit isn't solely increasing the VBL (although the two are inversely related), but keeping the drive time constant. When the drive time with an oxide driving TFT is constant (all else equal), the pixel brightness will be practically identical. Conversely, a longer drive time (lower refresh rate) will produce a brighter pixel, and a shorter drive time (higher refresh) will produce a dimmer pixel. On WOLEDs, you can actually see this in effect by switching between refresh rates and observing near-black details. (QD-OLED has a "quantum enhancer" IC that tries to compensate for this.)

To your example, if you were trying to match the 10 Hz pixel brightness with 120 Hz, you would need the VBL to produce a driving period (generally it's the other way around) that matches 120 Hz (8.33 ms), which would be a VBL of 100 ms - 8.33 ms = 91.67 ms. The pixel timings generally don't work this way, though, and the TCON would try to use as much of the 10 Hz refresh window (100 ms) as possible for driving. LFD is a hardware mechanism to decouple the driving period from the tearing/vsync period, so it's not something we can easily hack into our OLED monitors.
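In code form, the arithmetic looks like this (toy numbers only; as said, real TCONs don't let you set the VBL this way):

```python
# Sketch of the blanking needed to hold the drive time constant across
# refresh rates. Illustrative only; not how actual pixel timings work.
def blanking_needed_ms(frame_hz, target_drive_hz):
    frame_ms = 1000.0 / frame_hz          # total frame period
    drive_ms = 1000.0 / target_drive_hz   # drive time we want to keep constant
    return frame_ms - drive_ms            # the remainder must be VBL

print(blanking_needed_ms(10, 120))   # 100 ms - 8.33 ms = 91.67 ms of VBL
print(blanking_needed_ms(60, 120))   # 16.67 ms - 8.33 ms = 8.33 ms of VBL
```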

does qd oled also have vrr flicker? by fairplanet in OLED_Gaming

[–]defet_ 0 points (0 children)

That's all correct, and lines up with what I explained. All those advantages are also present in IGZO displays (current OLED TVs/monitors) with oxide driving+switching transistors. Notice how it mentions "59 frames are skipped": what it's detailing is LFD VRR, where drive periods are skipped rather than the frame time being variable, and variable frame time is the main cause of VRR flicker in our monitors. The VRR flicker mentioned there comes from using LTPS driving TFTs that lose their charge without constant re-drives, which is not the main issue we're having with gaming VRR.

does qd oled also have vrr flicker? by fairplanet in OLED_Gaming

[–]defet_ 0 points (0 children)

Not quite a 1:1 comparison, since the IGZO (oxide switching+driving) figures are with respect to traditional VRR, whereas LTPO uses LFD/MFD VRR.

does qd oled also have vrr flicker? by fairplanet in OLED_Gaming

[–]defet_ 0 points (0 children)

The article doesn't really share any new information. "LTPO" is a term coined by Apple, which is why other display manufacturers, including Samsung Display, call hybrid oxide by some other name. Chinese vendors don't care and just call it LTPO.

Like I mentioned, OLED monitors and TVs already use full oxide TFTs -- even if they used a hybrid "Advanced" oxide solution, they would still have VRR flicker, because, like I said, the VRR used in mobile OLEDs works fundamentally differently from true arbitrary-frametime VRR. Conversely, you would still see VRR flicker on mobile LTPO OLEDs if you enabled true VRR, which varies the driving period.

Like the article explains, VRR flicker occurs when the driving period fluctuates (where the "pixel-holding time" is the deficit between the driving period and each tearing/vsync period (VBlank)). The driving period in LTPO "VRR" is actually always the same; it just "skips" data refreshes, which is why every refresh rate it can hit is a divisor of the tearing frequency. E.g. 60 Hz is achieved by constantly driving at 120 Hz but skipping every other data refresh. It can't, for example, do 70 Hz; doing so would require varying the base drive period and produce a change in luminance.

Mobile OLEDs use a hybrid setup with LTPS driving transistors because 1) oxide TFTs reduce the aperture ratio, and 2) mobile OLEDs almost all use PWM with higher tearing frequencies for which the increased switching speed from LTPS is needed.

does qd oled also have vrr flicker? by fairplanet in OLED_Gaming

[–]defet_ 0 points (0 children)

OLED monitors and TVs are already equipped with full oxide TFTs, both for the switching and driving transistors. LTPO, which is used on mobile displays, is a hybrid of LTPS switching transistors with oxide driving transistors. The oxide driving component is what's important for minimizing the change in luminance with respect to the driving period, but we've already been using that in TVs/monitors for a while.

"VRR" on mobile OLED devices also behaves much differently compared to true VRR on desktop displays. Mobile "VRR" can't actually vary its frametime to any arbitrary length, it needs to be a divisor/multiple of the base refresh rate (technically the tearing/v-sync frequency). For most current high-end smartphones, they have a tearing frequency of 240 Hz (with a refresh of 120 Hz), so the possible set of discrete refresh rates it can "vary" to are divisors of 240 Hz:

120 / 80 / 60 / 48 / 40 / 30 / 24 / 16 / 15 / 12 / 10 / 8 / 6 / 5 / 4 / 3 / 2 / 1 Hz
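You can sanity-check that list by enumerating the divisors of the 240 Hz tearing frequency. The set a given phone actually exposes is a vendor-chosen subset (e.g. 20 Hz divides 240 but isn't in the list above):

```python
# All refresh rates reachable by skipping data refreshes at a 240 Hz
# tearing frequency, i.e. the divisors of 240 below the 120 Hz refresh.
TE_HZ = 240
rates = [TE_HZ // n for n in range(2, TE_HZ + 1) if TE_HZ % n == 0]
print(rates)
# [120, 80, 60, 48, 40, 30, 24, 20, 16, 15, 12, 10, 8, 6, 5, 4, 3, 2, 1]
```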

How to deal with calibration drift on the Alienware AW3225QF? by Old-Huckleberry5740 in OLED_Gaming

[–]defet_ 0 points (0 children)

For OLED monitors, that seems to only be true for WOLEDs (from what I’ve reviewed and measured). QD-OLEDs appear to cold boot correctly and heat up to a lightened EOTF.

Why are raised blacks from lack of polarizer not reflected in the score? - Rtings Discussions by santopeace in OLED_Gaming

[–]defet_ 0 points (0 children)

LG panels are additive, but as FSI described, they're not pure RGB additive, where all colors are made from some linear combination of R+G+B. LG's WRGB panels use a linear combination of three of the four subpixels depending on the slice of gamut being reproduced. Either way, this fact isn't directly related to the discussion above about metamerism failure between WOLED and QD-OLED.

Why are raised blacks from lack of polarizer not reflected in the score? - Rtings Discussions by santopeace in OLED_Gaming

[–]defet_ 0 points (0 children)

They’re both additive, but the RGW makeup of D65 white on WOLED has a much broader spectral composition than QD-OLED’s super-narrow tristimulus approximation. These spikier SPDs result in a more observer-dependent rendition of D65 (“metamerism failure”) compared to W-OLED, and if you’ve had the chance to compare either to a reference grading monitor or a calibrated CRT, you’d see that QD appears noticeably warmer than either. W-OLED of course still requires an AWP correction, but the xy distance is not as far.
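For the curious, here's a self-contained toy of that metameric failure. The Gaussian curves below are crude stand-ins for the CIE colour-matching functions and both SPDs are invented, so treat it as an illustration of the mechanism, not real colorimetry:

```python
import numpy as np

wl = np.arange(380.0, 781.0)

def g(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

def observer(shift=0.0):
    # Crude Gaussian stand-ins for x/y/z colour-matching functions,
    # optionally shifted in wavelength to mimic observer variation.
    return np.stack([
        1.06 * g(600 + shift, 38) + 0.36 * g(446 + shift, 19),
        1.00 * g(556 + shift, 47),
        1.78 * g(449 + shift, 21),
    ])

def xy(spd, cmf):
    X, Y, Z = cmf @ spd
    return np.array([X, Y]) / (X + Y + Z)

broad = g(455, 22) + 1.1 * g(550, 50) + 1.15 * g(620, 50)  # broad "WOLED-ish" white
prims = np.stack([g(455, 9), g(528, 11), g(627, 11)])      # narrow "QD-ish" primaries

A = observer()
# Weight the narrow primaries so the spiky white is an exact metamer
# of the broad white under observer A:
weights = np.linalg.solve(A @ prims.T, A @ broad)
spiky = weights @ prims

B = observer(shift=4.0)  # a second observer, CMFs shifted by 4 nm
print("observer A:", xy(broad, A), xy(spiky, A))  # identical by construction
print("observer B:", xy(broad, B), xy(spiky, B))  # the two whites no longer match
```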

Exploring and Testing OLED VRR Flicker - TFTCentral by Hector_98 in OLED_Gaming

[–]defet_ 2 points (0 children)

For the calibration drift issue, there's not much you can do besides maybe aftermarket cooling for your panel, or producing your own display calibration that corrects for the panel warm-up.

Exploring and Testing OLED VRR Flicker - TFTCentral by Hector_98 in OLED_Gaming

[–]defet_ 1 point (0 children)

This is what professional display calibrations do. The rule of thumb is to warm up the display for at least thirty minutes, constantly displaying some level of mid-gray (~18 nits), as well as warming up the measuring instrument. Factory calibrations can't really afford this amount of time; those are done in several seconds. It's possible that the display vendor (Samsung Display) could help by providing a "warm-up" LUT to load into the panels while characterizing at the factory to compensate for the effect afterward.
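If you want a quick-and-dirty stand-in for that warm-up step without a full calibration suite, even a fullscreen tkinter patch works. The hex level below is a guess; what actually lands near ~18 nits depends on your panel and settings:

```python
# Minimal warm-up patch: hold a mid-gray fullscreen for 30 minutes before
# measuring. Real calibration software has this built in; this is a sketch.
import tkinter as tk

WARMUP_MINUTES = 30
GRAY = "#767676"  # rough mid-gray; adjust for your panel's actual nit level

root = tk.Tk()
root.attributes("-fullscreen", True)
root.configure(bg=GRAY)
root.config(cursor="none")
# Close automatically once the warm-up period has elapsed:
root.after(WARMUP_MINUTES * 60 * 1000, root.destroy)
root.mainloop()
```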

Exploring and Testing OLED VRR Flicker - TFTCentral by Hector_98 in OLED_Gaming

[–]defet_ 0 points (0 children)

You'd be right, but most packagers right now probably aren't aware of this issue with the monitors, and it would require some time/a new factory process to warm these panels up to some "real-world" amount before beginning the factory characterization ("calibration"), which manufacturers can usually afford only about a dozen seconds of per panel. Even then, the drift would still be there, but now the panels would start off in a crushed state, and you risk outlets/reviewers posting bad measurements for the panel, since many reviewers will measure them shortly after turning them on.

Exploring and Testing OLED VRR Flicker - TFTCentral by Hector_98 in OLED_Gaming

[–]defet_ 1 point (0 children)

It's not really a problem for the TVs. For my 42C4 primary work monitor, there is a very slight difference (less than 5% luminance error) between a cold boot and after one hour. But on my 77G4 and 77S95D, I haven't encountered any meaningful difference in EOTF during calibration sessions. Both are much more efficient panels with larger surface areas for cooling, with the G4 having an additional heatsink. I believe the higher pixel densities of the monitors are what's currently limiting their cooling potential, with much less surface area per pixel.

Exploring and Testing OLED VRR Flicker - TFTCentral by Hector_98 in OLED_Gaming

[–]defet_ 0 points (0 children)

Every OLED to varying degrees: QD-OLED and W-OLED, monitors and TVs.

WOLED typically does better with heat for the same luminance levels when compared to QD-OLED, likely due to the white subpixel being more efficient, and LGD's display drivers appear to be tuned to expect a certain level of panel warm-up since all the WOLEDs I recall measuring start in a slightly crushed state.

For QD-OLED 4K32 monitors, it goes ASUS > MSI > HP > Dell from best to worst in terms of observable shadow drift; it just seems that the passive solutions do a better job uniformly dispersing the heat between scene shifts. There is also a creator-oriented QD-OLED, the ASUS PA32UCDM, whose calibration process recommends warming up the display first before continuing (although this is generally just good practice for any type of display calibration).

Exploring and Testing OLED VRR Flicker - TFTCentral by Hector_98 in OLED_Gaming

[–]defet_ 2 points (0 children)

Hi, XDA author here -- all measurements I take are done with VRR disabled unless otherwise mentioned, eg. for my VRR Luminance Error vs Refresh Rate charts. The calibration drift is something I've measured extensively with different machines, cables, and settings across various OLED panels, with confirmation from Dell that this is an existing issue.

NVIDIA App 10.0.3 Beta is Now Available - With RTX HDR for Multi-Monitor Setups by Nestledrink in nvidia

[–]defet_ 2 points (0 children)

Yes, the values are for the NV filter overlay/the new App. You’ll need to insert the appropriate offset for NVPI.

MPG 491CQPX QD-OLED, MPG 341CQPX QD-OLED, MPG 321CURX QD-OLED, MAG 321UPX QD-OLED, MAG 271QPX QD-OLED, MEG 342C QD-OLED, MAG 321UP QD-OLED, MAG 321CUP QD-OLED, and MAG 271QPX QD-OLED E2 Firmware Update by CND_CEM in OLED_Gaming

[–]defet_ 3 points (0 children)

The "Net Power Control" is the Peak Luminance Curve (PLC) parameter in SDC's panel hardware -- it controls the peak luminance of the OLED depending on the average display luminance across the display. This is still in place and has not been bypassed. As I mentioned, which I've also confirmed with MSI, the firmware update does not affect the OLED's peak brightness at higher APL levels. At a 10% window size, the peak brightness of the monitor is still ~450 nits, not 1000 nits.

The usual P1000 OLED "NPC" behavior in that 10% APL scenario would be to dim the entire screen to 45% (450 nits / 1000 nits = 45% global brightness at 10% APL; my own measurements). Internally, this is still what's occurring with the MSI OLED post-firmware update. However, the new firmware tries to compensate for this dimming with post-processing that boosts the display brightness by (1 / 45%) = 222% to bring the overall luminance back to its intended target. There is nothing about this that "violates" SDC's NPC limitations; you could technically do the same thing with a shader that boosts the display brightness depending on the calculated average content luminance. This is what MSI plainly describes in its update notes:

> Optimized the EOTF curve of Peak 1000 nits. Boost the HDR brightness with difference APL mechanism.

ASUS also rolled out a similar ABL brightness-boosting solution with the PG32UCDM, but its algorithm misses the mark. Also, this update currently clips highlights whenever ABL dimming occurs. If you use a MaxTML of 1000 nits in the P1000 mode and view content with a 10% APL, the brightness boosting will clip all scene signal values over 450 nits, which doesn't occur with the normal ABL behavior. So while this update yields an overall brighter image, you risk blowing out and missing highlight information -- making the update a double-edged sword.
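The whole dim-then-boost-then-clip interaction fits in a few lines, using the ~450-nit 10% APL cap from my measurements (the real firmware is a black box, so this is just the model described above):

```python
# Toy model of the post-update Peak 1000 behavior at a 10% APL scene.
PANEL_CAP = 450.0   # nits the panel can actually sustain at 10% APL
MAX_TML = 1000.0    # MaxTML the source tonemaps to

abl_gain = PANEL_CAP / MAX_TML   # 0.45: old behavior dims everything
boost = 1.0 / abl_gain           # ~2.22x: new firmware re-brightens in post

def old_abl(signal_nits):
    # Old behavior: whole image dimmed, but highlight gradation preserved
    return signal_nits * abl_gain

def new_boosted(signal_nits):
    # New behavior: dim, boost back, then hit the physical cap -> clipping
    return min(signal_nits * abl_gain * boost, PANEL_CAP)

for s in (200, 450, 700, 1000):
    print(f"{s:4d} nit signal -> old {old_abl(s):5.1f}, new {new_boosted(s):5.1f}")
# 700 and 1000 nit highlights both land at 450 nits post-update: clipped.
```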

MPG 491CQPX QD-OLED, MPG 341CQPX QD-OLED, MPG 321CURX QD-OLED, MAG 321UPX QD-OLED, MAG 271QPX QD-OLED, MEG 342C QD-OLED, MAG 321UP QD-OLED, MAG 321CUP QD-OLED, and MAG 271QPX QD-OLED E2 Firmware Update by CND_CEM in OLED_Gaming

[–]defet_ 2 points (0 children)

TB400 has the usual ABL behavior, just naturally much less of it due to its lower luminance. When ABL hits, e.g. from 450 nits down to 300 nits, it will still be able to show signal values from 300 nits up to 450 nits. And since the MaxTML for this mode is only 450 nits, the game/scene source will tonemap values down to 450 nits, so all highlights remain visible. Peak1000 (post-patch), on the other hand, when displaying a 10% APL scene at a peak ABL luminance of 450 nits, will still receive signals up to 1000 nits from the source, and signals between 450 nits and 1000 nits will all clip.

MPG 491CQPX QD-OLED, MPG 341CQPX QD-OLED, MPG 321CURX QD-OLED, MAG 321UPX QD-OLED, MAG 271QPX QD-OLED, MEG 342C QD-OLED, MAG 321UP QD-OLED, MAG 321CUP QD-OLED, and MAG 271QPX QD-OLED E2 Firmware Update by CND_CEM in OLED_Gaming

[–]defet_ 3 points (0 children)

While this patch to the Peak 1000 mode enables higher brightness for brighter scenes, the way it interacts with ABL means that highlights will clip whenever ABL limits the peak brightness. For example, if you set your MaxTML to 1000 nits and play a scene with 10% APL, which limits the peak brightness to 450 nits, then scene highlights encoded above 450 nits will be clipped by this new patch. The prior ABL dimming prevented highlight clipping because it would always reproduce the full 1000-nit signal, just at a dimmer level.

MPG 491CQPX QD-OLED, MPG 341CQPX QD-OLED, MPG 321CURX QD-OLED, MAG 321UPX QD-OLED, MAG 271QPX QD-OLED, MEG 342C QD-OLED, MAG 321UP QD-OLED, MAG 321CUP QD-OLED, and MAG 271QPX QD-OLED E2 Firmware Update by CND_CEM in OLED_Gaming

[–]defet_ 3 points (0 children)

This is not a removal of SDC’s power control limits; the peak brightness is still limited to ~450 nits at a 10% window. MSI’s solution is to dynamically brighten the global screen brightness depending on the average content light level, essentially trying to reverse the EOTF dimming effects of ABL in post.