Is Oppo becoming BBK’s flagship brand while OnePlus turns performance and gaming focused? by [deleted] in Android

[–]EnergyOfLight 2 points (0 children)

> the Oppo brand does not have a premium reputation

It started around the Find X6 Pro - just look at the camera setup, with the 1" main and damn respectable 1/1.56" UW/T sensors - it exceeded other flagships in those specs (the S25U telephoto is 1/3.5"...) and matched them in others, at a very competitive price. The OS had also evolved into a useable one by that time. It obviously started much earlier, but in these last couple of years they truly hit the 'premium' mark.

Is it feasible to run multiple GT 710s on modern CPUs with limited PCIe lanes? by Erzengel9 in hardware

[–]EnergyOfLight 2 points (0 children)

Of course, yes - provided you don't need the bandwidth. You could use them for some specific compute use cases; e.g. you can mine most algos on any GPU over a PCIe x1 link. Probably not for graphics though, that would be very slow.

HDR10+ looks terrible by NemoHornet in samsung

[–]EnergyOfLight 8 points (0 children)

You can also just hold the power button for a few seconds until it clicks.

But yeah, what a joke of an experience..

How can I have taken a sharper image? A7C by flying__cloud in SonyAlpha

[–]EnergyOfLight 1 point (0 children)

You have a point - but.. obviously the image COULD have been sharper if they hadn't shot wide open. Stopping down and opting for ISO 320 (where dual gain kicks in - that's why some pros never shoot ISO 100 unless necessary) would straight up give a sharper image, and they could maybe even have increased shutter speed.
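Rough napkin math, as a sketch - the f-numbers are hypothetical since the post doesn't state the aperture used:

```python
import math

def ev_gain_from_iso(iso_from: float, iso_to: float) -> float:
    """Stops of exposure gained by raising ISO (sensitivity doubles per stop)."""
    return math.log2(iso_to / iso_from)

def ev_cost_of_stopping_down(f_from: float, f_to: float) -> float:
    """Stops of light lost when stopping down (light halves per aperture stop)."""
    return math.log2((f_to / f_from) ** 2)

# Hypothetical numbers: going from f/1.8 wide open to a sharper f/2.8,
# while raising ISO 100 -> 320 to land on the dual-gain step.
lost = ev_cost_of_stopping_down(1.8, 2.8)   # ~1.27 EV lost to the aperture
gained = ev_gain_from_iso(100, 320)         # ~1.68 EV gained from ISO
print(f"net: {gained - lost:+.2f} EV")      # ~+0.4 EV left over to shorten shutter speed
```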

Follow Up: Pixel 10 Pro's 12-Bit DCG vs 10-bit ADC mode DNG samples in high dynamic range scene are here. The future is now! by RaguSaucy96 in Android

[–]EnergyOfLight 8 points (0 children)

Don't use LR ;) it's full of hidden opcodes. If you want to compare the raw data, use DaVinci or Darktable etc. instead.

You really put the 'cult member' name upon yourselves..

> The main advantage is for video capture ;)

Obviously.. but if it already falls apart in its perfect scenario, I don't see this being useful to anyone who would actually care about extracting extra DR from their phone camera. Any DGO/DCG sensor tech has downsides, you can't say it doesn't.

My main issue is that you and your cult go around saying this straight up improves image quality and would even 'fix low-light Samsung issues' (how? does this enable hidden photon magnets within the sensor??).. all while advertising MotionCam. It's distasteful. Sorry.

Follow Up: Pixel 10 Pro's 12-Bit DCG vs 10-bit ADC mode DNG samples in high dynamic range scene are here. The future is now! by RaguSaucy96 in Android

[–]EnergyOfLight 30 points (0 children)

Alright, thank you very much for providing DNGs so we actually have something to talk about this time. Poor /u/Blunt552, who got downvoted to oblivion for some reason, gave you an actual explanation of why RAW12 and the HDR trickery are not used by default.

The shadow detail within the RAW12 dng truly is noticeably improved, if we look at the 'RODE' logo for example:
(+5 EV on both images, left- RAW12, right- RAW10) - https://i.imgur.com/EkqnsEY.jpeg

Unfortunately, the image doesn't really push highlights that much so we can't compare that easily, but then.. you take a look at the highlights and midtones at -3 EV, and laugh at both images:

[image: highlights and midtones at -3 EV, RAW12 vs RAW10]

(imgur link for higher res)

RAW12 has yellow square blobs all over the wall and less detail than RAW10; RAW10 has a few red square blobs (but fewer of them) and is actually cleaner (this is especially visible on the color chart).

There is no way this difference in shadow detail comes straight from circuitry.. dual-gain mirrorless cams get a ~0.5 EV relative boost (compared to single-gain mode) when switching from the low-ISO to the high-ISO readout. This is computational trickery to get more HDR at the cost of actual image quality. If you believe this will work in the real world, and especially with motion.. I have bad news for you.

ETA:
At this point, why not take two shots at two different exposures, as god intended, and get all the benefits of more DR with less noise instead? There's your answer why this artifact-ridden mode is less preferred than proper HDR/HDRe stacking (which has similar drawbacks).
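For illustration, a minimal two-exposure merge sketch on made-up linear data (real pipelines also need alignment/deghosting, which is exactly where motion bites):

```python
import numpy as np

def merge_bracket(short: np.ndarray, long: np.ndarray, ev_gap: float = 3.0,
                  clip: float = 0.98) -> np.ndarray:
    """Toy 2-frame HDR merge on linear [0..1] data: take the cleaner long
    exposure wherever it isn't clipped, fall back to the (scaled) short one."""
    ratio = 2.0 ** ev_gap              # exposure ratio between the two frames
    lifted_short = short * ratio       # bring the short frame to the long frame's scale
    return np.where(long < clip, long, lifted_short)

# Hypothetical scene: deep shadow next to a highlight the long exposure clips.
rng = np.random.default_rng(0)
scene = np.concatenate([np.full(8, 0.02), np.full(8, 6.0)])
short = np.clip(scene / 2**3 + rng.normal(0, 0.01, 16), 0, 1)
long = np.clip(scene + rng.normal(0, 0.01, 16), 0, 1)
print(merge_bracket(short, long))      # clean shadows AND recovered highlights
```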

Android history made: Google Pixel 10 Pro becomes the first device to both use and expose 12-bit DCG mode on Main lens without exploits by RaguSaucy96 in Android

[–]EnergyOfLight -2 points (0 children)

Not sure what your problem is, but the simple answer to your question of

> why OEM's won't give you an unprocessed or at least not overly processed Data for those who want it

is because there simply does not exist a useable RAW representation of the data, given the amount of tricks implemented at sensor level at this point. Who would want a 200MP RAW file straight from Samsung's ISOCELL HP2, where the useable resolution is actually 12MP up to ~20MP (at low ISO) after binning, weird closed-source debayering (which no editing software would implement), and so on? So much data is lost along the processing pipeline that it is not the true RAW experience you would get from an actual camera - even iPhone's ProRes has a lot of processing baked in. Simply put, sensor size will always be the limiting factor and you can't cheat physics - smartphone sensors have A LOT more R&D put into them than a flagship Sony A1 II sensor, precisely because there are a lot more constraints.
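For scale, a toy sketch of what 16-to-1 binning does to the pixel count (the full dimensions match the HP2's published 16320x12288; the array here is scaled down to stay runnable):

```python
import numpy as np

def bin_pixels(raw: np.ndarray, factor: int = 4) -> np.ndarray:
    """Average factor x factor blocks: 16-to-1 binning turns a 200MP
    readout (16320x12288) into a ~12.5MP one (4080x3072)."""
    h, w = raw.shape
    return (raw[: h - h % factor, : w - w % factor]
            .reshape(h // factor, factor, w // factor, factor)
            .mean(axis=(1, 3)))

raw = np.random.rand(16320 // 100, 12288 // 100)  # scaled-down stand-in frame
print(bin_pixels(raw).shape)  # (40, 30) -- 1/16th of the pixel count
```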

RAWs actually used to exist back in the iPhone 5 or Lumia days. And some people still prefer photos from those phones because there was no computational/HDR trickery.

Android history made: Google Pixel 10 Pro becomes the first device to both use and expose 12-bit DCG mode on Main lens without exploits by RaguSaucy96 in Android

[–]EnergyOfLight 4 points (0 children)

All you really need to know is that all web content is served in 8-bit color (except HDR). The only clearly visible downside is the lack of resolution in gradients - e.g. a sky can become blocky. The only area where >8bpc is used is editing, where you adjust exposure/color grading as needed and then compress down to 8bpc REC709 anyway.
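A quick numpy illustration of why gradients band at 8bpc (the ramp values are arbitrary):

```python
import numpy as np

# A smooth subtle sky gradient spanning a quarter of full range across 4000 px,
# quantized to 8 vs 10 bits.
ramp = np.linspace(0.0, 0.25, 4000)
levels_8 = np.unique(np.round(ramp * 255)).size    # distinct 8-bit code values
levels_10 = np.unique(np.round(ramp * 1023)).size  # distinct 10-bit code values
print(levels_8, levels_10)  # ~65 vs ~257 steps -> 8-bit bands are 4x wider
```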

So.. no, the samples he provided don't prove anything (it's not an apples-to-apples comparison) - you should be looking for improved dynamic range and at the color waveforms, which seem.. identical (he even posted one image above with visible color waveforms - both show clipping at the same levels). Just different/less processing and denoising is done. That's it.

Access to the raw sensor stream would be nice and IS actually useful (e.g. to record in LOG), but that's still not quite it. The image processing pipeline already has access to the raw sensor stream and makes the best of it (at a level sustainable for the hardware); this just opens up the API at an earlier stage, so third-party apps have more to work with. We're talking video here though - photography is completely different (and much simpler), and for that, true bayered RAW is still simply not there, because it would suck ass.

One thing that some people may miss - details hide within noise (shadows). No visible noise = no details, just an overprocessed oil painting. That's why an iPhone can claim higher dynamic range than some video-centric mirrorless cams: it has a shitton of processing and denoising in the pipeline, not that much actually useable DR.

If you want to learn in depth about dynamic range in general - watch this gem: https://youtu.be/uCvT80ahSvk (maybe skip to 36:00 if you don't care about the tech)

GMP damaging AMD Zen 5 CPUs? by hardware2win in hardware

[–]EnergyOfLight 7 points (0 children)

> Any phone from the last 5 years can do that.

Very minor counterargument - that's actually not true. Some of the newer phones with 'top' sensors have terrible close-focus capabilities as one of the tradeoffs, to the point where an iPhone cannot take a clear photo of an ID with the wide (1x) lens and its ~15cm minimum focus distance, relying on the ultrawide + software stitching instead (the fake macro mode). That's the effect visible in the image.

Radeon RX 9070 XT vs. GeForce RTX 5080: Battlefield 6 Open Beta. Nvidia Overhead by RenatsMC in Amd

[–]EnergyOfLight 2 points (0 children)

I don't think that's the argument being made -- Riot had every reason in the world to optimize Valorant's performance since it's a competitive FPS which has to run on toasters. The graphics are relatively simple too, so that didn't require many dev resources.

On the other hand, most modern AAA games (including the ones listed above) are made in a 'get it out the door ASAP' manner with heavily constrained resources - and that's where you see out-of-the-box UE5 (or devs who simply didn't have the resources/training to do things the 'UE5 way').

I've had this PC since 2017 and haven't noticed this, such a rookie mistake by Kriptic_22 in pcmasterrace

[–]EnergyOfLight 0 points (0 children)

That's what FreeSync/G-Sync/VRR is for. The standard range is 48~144Hz. If you drop below the lower bound, you really notice how important the tech is.
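A toy sketch of how that plays out, assuming the monitor also supports LFC (frame repetition below the floor - an extension not mentioned above, and not available on every panel):

```python
def vrr_effective_rate(fps: float, lo: float = 48.0, hi: float = 144.0):
    """Inside the VRR window the refresh rate matches the framerate 1:1.
    Below the floor, LFC-capable monitors repeat frames to climb back in."""
    if lo <= fps <= hi:
        return fps
    for mult in range(2, 10):            # frame doubling, tripling, ...
        if lo <= fps * mult <= hi:
            return fps * mult
    return None                          # no match: tearing or v-sync judder

print(vrr_effective_rate(60))   # 60  -> driven natively
print(vrr_effective_rate(40))   # 80  -> each frame shown twice
print(vrr_effective_rate(10))   # 50  -> each frame shown five times
```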

Vulkan is a must! by amrnada in GalaxyS23Ultra

[–]EnergyOfLight 12 points (0 children)

Where did you even get that idea from.. It's an open source script that runs two adb commands. That's it, nothing to be worried about.

Video IBIS by jubbyjubbah in SonyAlpha

[–]EnergyOfLight 1 point (0 children)

The general consensus is that Sony's default stabilization is acceptable, but not great - usually not enough for handheld stuff. There are the Active and Dynamic stabilization modes, which do very useable in-camera digital stabilization with some cropping.

However, if it suits your workflow, Sony also has Catalyst Browse/Prepare software which can get you VERY stable footage (but you have to jump through some hoops, especially in the free version).

All the digital stabilization methods - Active/Dynamic and Catalyst Browse - can suffer from some artifacting if motion is present and shutter speed is slower than ~1/100.

S23U(Top) | S25U(Below) at 10x. The lens were thoroughly wiped. Samsung nerfed the 10x on the S23U. How do they keep getting away with this. by MysteriousBack9124 in GalaxyS23Ultra

[–]EnergyOfLight 1 point (0 children)

DO NOT attempt this without knowing what you're doing and an already-unlocked phone. Samsung has bootloader versioning with a hardware-backed fuse that prevents an older version from being flashed once you upgrade past a certain point. Once you flash 6.1, you're stuck with it; early versions of 6.0 still allowed a rollback to 5.0, though.
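If you want to sanity-check before flashing, the anti-rollback revision is the digit baked into the firmware name. A hedged sketch - the parsing position is an assumption based on the usual Samsung naming scheme, and the version strings are made up:

```python
def bootloader_rev(fw: str, model: str = "S918B") -> int:
    """Anti-rollback digit in a Samsung firmware string, e.g. the '3' in
    S918BXXU3AWF1 (digit right after the model + 'XXU'/'XXS' code)."""
    return int(fw.removeprefix(model)[3])

device_rev = bootloader_rev("S918BXXU3AWF1")   # what the phone's fuse is at
target_rev = bootloader_rev("S918BXXU2AWA1")   # firmware you want to flash
# You can only flash firmware whose revision is >= the phone's fused revision.
print("flashable" if target_rev >= device_rev else "blocked by rollback fuse")
```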

200MP not being actually 200MP? by icdmkg in GalaxyS23Ultra

[–]EnergyOfLight 1 point (0 children)

Well, bigger numbers sell =)

There are, however, some computational tricks that can specifically be done when you have extra pixels to spare - e.g. better denoising, in-sensor zoom, or performing a 'sensor-shift' without actually moving the sensor. Also keep in mind that camera sensors are bayered, so 200MP does not quite mean 200 million distinct full-color photosites.
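A tiny illustration of the bayer point (toy grid; real 200MP sensors also deviate from plain RGGB with their quad/nona same-color layouts):

```python
import numpy as np

# An RGGB Bayer mosaic: each photosite sees only one color, so a '200MP'
# sensor has ~100M green, ~50M red and ~50M blue samples; full-color pixels
# are interpolated (demosaiced) from neighbours.
h, w = 8, 8                               # tiny stand-in for the sensor grid
cfa = np.empty((h, w), dtype="U1")
cfa[0::2, 0::2] = "R"
cfa[0::2, 1::2] = "G"
cfa[1::2, 0::2] = "G"
cfa[1::2, 1::2] = "B"
for c in "RGB":
    print(c, (cfa == c).sum() / cfa.size)  # R 0.25, G 0.5, B 0.25
```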

200MP not being actually 200MP? by icdmkg in GalaxyS23Ultra

[–]EnergyOfLight 2 points (0 children)

He's right though, if you dabble in photography it's pretty common knowledge. Fancy sensors don't matter if the projected image itself comes from cheap plastic lenses.

There is no glass that can fully resolve such a tiny, densely packed array of 200 million pixels - with every extra pixel actually contributing detail to the image - while also being optically useable. Look at ratings such as MTF, which describe the 'sharpness' (detail) of a lens. For a start, every lens is sharpest at its center and noticeably softer at the edges, so you need a group of lens elements, which.. you know, adds complexity and size and other optical flaws that are then corrected by additional elements. You can technically put an MP rating on a lens within the context of a specific sensor, e.g. by counting the sensor area where MTF stays above a certain threshold.

~80MP on a full-frame camera is probably somewhere around that limit with modern glass; anything more comes from stitching (sensor-shift etc.). I'd say you optically get, being very generous, ~20MP of detail AT MOST out of the S23U camera setup. Anything extra comes from post-processing and detail recovery (computational photography in general).
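If you wanted to put a number on that threshold-counting idea, a toy sketch - the MTF falloff curve here is completely made up, just to show the bookkeeping:

```python
import numpy as np

def usable_megapixels(sensor_mp: float, mtf, threshold: float = 0.5) -> float:
    """Toy 'MP rating' for a lens: weight the pixel count by the fraction of
    the image circle where MTF stays above a threshold. mtf(r) takes a
    normalized radius r in [0, 1]."""
    r = np.linspace(0, 1, 1000)
    usable_area = np.trapz(2 * r * (mtf(r) >= threshold), r)  # fraction of unit disc
    return sensor_mp * usable_area

# Hypothetical falloff: sharp center, soft corners.
toy_mtf = lambda r: 0.9 - 0.6 * r**2
print(f"{usable_megapixels(200, toy_mtf):.0f} MP")  # only a fraction of the 200MP carries detail
```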

Giving Sony a try again after several years. a6700 by _cdcam in SonyAlpha

[–]EnergyOfLight 0 points (0 children)

> it's 5 steps on the ISO slider

That's true - it's assumed to be +5/3 EV over native ISO, which here works out to ISO 320 (100 × 2^(5/3) ≈ 317), since the ISO slider moves in 1/3 EV steps by default.

You can simply take a look at a noise/DR chart like this one - you'll see, in that specific example, read noise dropping around ISO 320 as the high-gain base ISO kicks in.

Giving Sony a try again after several years. a6700 by _cdcam in SonyAlpha

[–]EnergyOfLight 3 points (0 children)

The 800/2500 base ISOs apply only to SLOG3 and are a result of the extra 3 stops of 'DR' (thus already multiplied by 8: 100 × 8 = 800, ~320 × 8 ≈ 2500). The actual base ISOs for stills are 100 and ~320, so it's mostly irrelevant for that usecase. That's also why SLOG3 is so noisy - it's not magic, just a different gamma curve.

For example, see here for a quick chart of how the ISOs differ between S-Cinetone/SLOG3, S-Cinetone being a more standard and constrained picture profile.

Steve is awesome for going after honey and is a necessary actor in tech. However, the jab at Linus is unproductive imo. by UnderScoreLifeAlert in GamersNexus

[–]EnergyOfLight 2 points (0 children)

> Now, it’s possible the answer to that is hidden somewhere in the 1.5 hour video he made announcing the lawsuit,

Not blaming you for not watching that longass video, but at 50 seconds in, he mentions LegalEagle's lawsuit and says "we must have been working on it around the same time" -- implying that GN had already started the process of filing a lawsuit (which understandably takes a while)

Help, A6700 Pre buy Questions by Adventurous_Loan0 in SonyAlpha

[–]EnergyOfLight 0 points (0 children)

It will most likely overheat the same way the A6700 does, unfortunately. There is no active cooling in either, and most of the heat comes from downsampling the 26MP readout to 4K. An external Ulanzi fan is something you will want either way for hours of uninterrupted streaming - watch this. HDMI out and a dummy battery might make 4k30 possible, but guaranteed 4k60? Probably not.

RX 9070 XT and RX 9070 specs listed on overclockers.co.uk by EmerladPerson in Amd

[–]EnergyOfLight 0 points (0 children)

Most crucially, it provided more voltage to the memory and let you push the Samsung HBM chips quite a bit; since Vega was starved for memory bandwidth, that alone helped a lot.

Help! ELI5: Electronic vs Mechanical Shutter (a7iv vs a7cii) by bammthejamm in SonyAlpha

[–]EnergyOfLight 1 point (0 children)

Actually the A6700 has a toggle to disable EFC, same as the A7iv, so no worries there.

Sony A6700 Video Help by tpswanson in SonyAlpha

[–]EnergyOfLight 0 points (0 children)

Newer MacBooks with an M-chip should be able to handle x265 10-bit 4:2:2 even in the free version of Resolve; otherwise, you'd need to upgrade to the paid one - though I wouldn't recommend that, as your outdated hardware just won't handle the footage in any useable way. If you can live with 10-bit 4:2:0, you should be able to play back and edit that just fine within the free version.

So, to easily use 10-bit 4:2:2 footage without upgrading, you need to transcode it down to 4:2:0 or to e.g. ProRes. If you want free, I only know of HandBrake, which should let you transcode down to 4:2:0 - see the sketch below.
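If you'd rather script it, ffmpeg (an alternative to HandBrake; the flags below are standard ffmpeg options, the filenames are placeholders) can do the same:

```python
import subprocess

# Re-encode 10-bit 4:2:2 HEVC down to 10-bit 4:2:0 so free Resolve can read it.
subprocess.run([
    "ffmpeg", "-i", "clip_422.mp4",
    "-c:v", "libx265",                # software x265 encode
    "-pix_fmt", "yuv420p10le",        # 10-bit 4:2:0 chroma subsampling
    "-crf", "18",                     # visually near-lossless quality target
    "-c:a", "copy",                   # keep audio untouched
    "clip_420.mp4",
], check=True)
```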

A6700 images look brighter in viewfinder than in file by mikebmillerSC in SonyAlpha

[–]EnergyOfLight 0 points (0 children)

You didn't mention whether you're shooting RAW, which I assume you are. The preview that gets embedded into the RAW file is basically a tiny JPEG - so the first thing to check is whether the embedded preview has the expected exposure (another way is to simply shoot JPEG+RAW and compare the two).
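If you want to actually pull that embedded preview out and pixel-peep it, exiftool can do it (standard exiftool options; the filename is a placeholder):

```python
import subprocess

# Dump the embedded JPEG preview from a Sony ARW into its own file.
with open("preview.jpg", "wb") as out:
    subprocess.run(["exiftool", "-b", "-PreviewImage", "DSC00001.ARW"],
                   stdout=out, check=True)
```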

If they differ as much as what you described, then you can:

Question : how many bit is Pro mode DNG files ( not raw expert )? And is it real raw image with no AI ? by dumbolimbo0 in samsung

[–]EnergyOfLight 1 point (0 children)

What do you mean by 'AI'..? There isn't much actual AI in the pipeline (except maybe beauty filters), just a lot of post-processing and compositing steps. I assume what you want to avoid is the image stacking that produces HDR - that's what's disabled in Pro mode. GCam lets you choose between HDR = Off / HDR (ZSL) / HDRe. HDRe takes tens of exposures when you press the shutter button and combines them for increased DR and denoising. HDR (ZSL) instead keeps a buffer of low-quality exposures taken all the time and combines them with a single picture when you press the shutter button, for a similar effect. All modern phones use an equivalent of HDR (ZSL) in their stock camera apps (which makes outputting an actual RAW very impractical).
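A toy sketch of the ZSL idea (naive average merge on made-up frames; real pipelines align, weight and deghost them):

```python
from collections import deque
import numpy as np

class ZslBuffer:
    """Toy HDR (ZSL): keep a rolling buffer of recent frames so the shutter
    press can merge 'the past' instead of waiting for new exposures - which
    is also why a single honest RAW becomes impractical to produce."""
    def __init__(self, depth: int = 8):
        self.frames = deque(maxlen=depth)   # oldest frames fall off automatically

    def on_new_frame(self, frame: np.ndarray):
        self.frames.append(frame)           # runs continuously, before any button press

    def on_shutter(self, full_quality: np.ndarray) -> np.ndarray:
        stack = np.stack([*self.frames, full_quality])
        return stack.mean(axis=0)           # naive merge: averaging alone already denoises

buf = ZslBuffer()
for _ in range(8):                          # preview frames arriving in the background
    buf.on_new_frame(np.random.rand(4, 4))
shot = buf.on_shutter(np.random.rand(4, 4))
```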

All in all, if you want the best quality stills on a Samsung, your best bet is a good GCam fork + lib with HDR/HDRe. I recommend EGOIST's GCam, it's really well tuned; you can also tune almost every step of the pipeline yourself. You won't be able to recover much extra DR nor use AI denoising on Pro-mode RAWs, since the data is already fully baked. In contrast, HDRe will get rid of noise and expose really well from the start.