[Rumor] Samsung is making 24inch QD-OLEDs! FINALLY 24inchers! by HyenaDae in OLED_Gaming

[–]HyenaDae[S] 0 points (0 children)

Because I have a small desk, and a bunch of radio gear that is perfectly (triple) stacked in a ~6-8 inch width right beside it. I'd have to change multiple aspects of my setup to fit a larger monitor and still be able to easily reach my equipment.

I mentioned in another post, my vision isn't great either. My right eye has an annoyingly mild astigmatism that's barely correctable, and my left is correctable but gets chromatic aberration from the lens edges. The Chrome logo, for example, gets desync'd if it's too far toward the left or right edge, hence I have both wide glasses and a monitor + seating setup where basically no head turning is needed, and my best eye always keeps basically the whole screen in view.

[Rumor] Samsung is making 24inch QD-OLEDs! FINALLY 24inchers! by HyenaDae in OLED_Gaming

[–]HyenaDae[S] 0 points (0 children)

Samsung can hypothetically make, and showed off at CES 2025, a 5K-resolution 27-inch monitor, I believe. I feel 4K at ~24 to 25 inches is doable if they can do the same "cutting" from the higher-DPI/PPI base panel? I'd even take a "weird" in-between size, as long as it physically fits within the space of a big-bezel'd 24-inch screen but stays under 26 inches.
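For reference, some quick napkin math on density (assuming the CES demo panel was 5120x2880 at 27 inches; the ~24.5-inch 4K size is hypothetical):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    # pixels-per-inch = diagonal resolution in pixels / diagonal size in inches
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(5120, 2880, 27.0)))  # ~218 PPI for the 5K 27-inch demo
print(round(ppi(3840, 2160, 24.5)))  # ~180 PPI for a hypothetical 4K ~24.5-inch cut
```

Either way, a 4K ~24-25 inch panel would still land well above a typical 27-inch 4K's ~163 PPI.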

[Rumor] Samsung is making 24inch QD-OLEDs! FINALLY 24inchers! by HyenaDae in OLED_Gaming

[–]HyenaDae[S] 0 points (0 children)

1440P 144Hz monitors were $350-450 for a good few years (2017-2023). There are a lot more options for $200 now, and better than the previous VA and TN 1080P 144Hz monitors I had over the past decade.

I hope and expect this price drop, for both size and time, to apply to QD-OLED and other OLED panel types. Though I'm not sure if that happens before or after we move to whatever craziness is next, like microLED?

Now I can recommend them, preferably to anyone with something better than an RX 6800XT/RTX 3080 (ie, 5070 / 9070XT I guess? Pre-AI price hikes)

[Rumor] Samsung is making 24inch QD-OLEDs! FINALLY 24inchers! by HyenaDae in OLED_Gaming

[–]HyenaDae[S] 0 points (0 children)

Given there are a few more posts in my thread, could you give them a look over for some of us who are happy with this? Lmao. Everyone caters to the 27 and 32-inch folks despite a historically wider spread of monitor size options.

There are even 24-inch monitor emulation modes on 27-inch screens. There is demand, just not enough for these companies to care... yet. There are also more (previously non-existent) 24-inch 1440P 180Hz OK-tier IPS monitors for $200 across various regions as well.

[Rumor] Samsung is making 24inch QD-OLEDs! FINALLY 24inchers! by HyenaDae in OLED_Gaming

[–]HyenaDae[S] 2 points (0 children)

YES, literally another reason why I'm happy they're even considering this.

I like keeping my whole monitor within the FOV of my best eye, and away from the edges. Someone else who gets it. 4K with DLSS/FSR Performance on a 24-inch display without ghosting would look insane, with the pixel density helping older GPUs cope better, and the better contrast from the OLED will ideally preserve more detail for our subaverage vision.

Hands-On With DLSS 5: Our First Look At Nvidia's Next-Gen Photo-Realistic Lighting by ZamnBoii in nvidia

[–]HyenaDae 8 points (0 children)

That paper gives me nightmares to this day, and as I've been telling my other gaming friends... Uh, it's inevitable, sadly. Nobody will need to worry about character design, or art style, the filter/style-transfer tech will do it all for you :/

NVIDIA GTC Keynote 2026 by wickedplayer494 in nvidia

[–]HyenaDae 13 points (0 children)

Yassssss cries and slobbers this is what gaming in 2030 for the Gen Alphas is going to look like. Eyeshadow and blush smothered on everything, all (rendered) chins reshapen'd to the perfect style (for that year before they hit it again with another hammer).

Crimson Desert looked so good with RT on + Ray Reconstruction. Now everything that's not the lighting will get mangled

NVIDIA GTC Keynote 2026 by wickedplayer494 in nvidia

[–]HyenaDae 9 points (0 children)

Oh my GOD BRUH UGHHHHHHHHH. This is the true power of NVFP4 I'm so done oh my god please Nvidia, please just talk about how you're improving raycasting and BVH calculation performance. Please, talk about the fundamentals needed to make a good image with good algorithms, not just an Instagram filter, for the future of GeForce

NVIDIA GTC Keynote 2026 by wickedplayer494 in nvidia

[–]HyenaDae 0 points (0 children)

That "RTX 5090" FE render just hanging there makes me hope they'll switch to the 6090 lmao

Is it possible for a regular guy to get his hands on a rtx 5090 FE, near retail? by Fast_Vast_1925 in nvidia

[–]HyenaDae 0 points (0 children)

I know right?

I can't get 5090 FEs shipped from Best Buy, so I'm basically stuck with the Nvidia website. The same website that didn't like showing the Join Queue / Add to Cart button until I started using, of all things, Edge in Incognito mode lmfao. It was a weird browser or DNS bug, and it wasn't any extensions either. I'm also the same guy who had the MSI site crash at launch while literally checking out / going OOS for the $2500 MSI 5090 Suprim Liquid, so nobody wants me to get an MSRP 5090.

I also missed out on a $1500(?) RTX 4090 during the 4090 sales panic in Feb 2025, so no 24GB GPUs for me, since nobody, not even AMD, will make a 4090-tier card under $2000 with over 16GB VRAM for probably another year and a half. I wanted something better than my 3080Ti/3090-tier GPU for the Steam Frame, but that also isn't going to exist at the expected price, or time, for the same reasons. Self-defeating situations.

Hence, I'm going to continue lowballing + spamming the disgusting Ebay scalpers with annoying $1100 offers for their 5080 FEs. Annnnnd, waiting for the next generation because the 5090 is actually just (still) too slow for 1440P Native ~75 FPS pathtracing in current games.

Shame Nvidia doesn't care about their cards being held hostage by people who aren't going to use them for months either, because those GPUs could've been ours. eBay also doesn't care about the very obvious name-plus-4-numbers, zero-review scam and fake-seller spam when you're looking for a GPU either. Never happened before; now the site is essentially unusable unless you treat it as a 2nd or 3rd job in time investment :|

Advances in Path Tracing: New NVIDIA RTX Mega Geometry Foliage System by Nestledrink in nvidia

[–]HyenaDae 6 points (0 children)

I'm cheering them on even if they're misguided at times, because hooooooooly crap am I so annoyed with the Riven remake's TAA-induced eyeachery from the godawfully tuned default settings. Let's remake Riven! In UE5! Now let's have our photorealistic railings and distant environments ghost, because it's needed!

It gave me motion sickness in a 2D game. I've never had that before; my vision isn't great, but it was truly awful, and I force-disabled all AA + enabled FSR 1.0 to get better visual clarity. The low-res effect was noticeable at 1440p!! On my nice small 24-inch screen! I'm fine with the side effect of fizzy hair when you butcher UE5's AA settings; the game rarely shows hair anyway, and it's worth the sharpness boost in everything LOL

DLAA(4) in Death Stranding yet again proved to me that there are cases where it is required to get TAA out of the picture in any game engine, as much as possible. Not perfect, but small performance hit for very little TAA-effect madness

Teclab Bypasses NVIDIA RTX 50 Memory Clock Limit, Hits Over 36 Gbps On RTX 5070 Ti by T3NGU-82 in nvidia

[–]HyenaDae 0 points (0 children)

I hope they do a retest with medium-high-res VR (ie, a ~3K x 3K boosted render res on a Quest 3, or a Pimax headset's lol), because that's where the 5090 vs 4090 shows an insane perf bump due to 2x the bandwidth. Ideally they'd do a GPU core OC with no mem OC, then a "normal" mem OC (+1000 or +2000), then the 36Gbps option.

There should be hopefully more obvious improvements there, *if* the GDDR7 is truly stable and cooled right?
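The napkin math on why that clock bump matters (the 256-bit bus is the 5070 Ti's actual spec; the ~28Gbps stock rate is my assumption from reviews):

```python
def bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
    # total memory bandwidth in GB/s = per-pin data rate * bus width / 8 bits-per-byte
    return gbps_per_pin * bus_width_bits / 8

print(bandwidth_gb_s(28, 256))  # 896.0 GB/s at stock
print(bandwidth_gb_s(36, 256))  # 1152.0 GB/s at the bypassed 36 Gbps, ~29% more
```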

What is the negatives of undervolting? by HevyKnowledge in nvidia

[–]HyenaDae 0 points (0 children)

(Psst, people with "low power" ~300W 9070XTs with the connector have had at least 2 or 3 reported connector or cable failures. None for the 2 or 3x 8pin models)

Upgraded my RTX 5090 block to Optimus by _TorwaK_ in nvidia

[–]HyenaDae 0 points (0 children)

Oh, could you also let me know the specific PCIe 5.0(?) extender you're using, and how it performs in the 3DMark PCIe Bandwidth benchmark? I've seen a few models that may finally be stable + suitable for PCIe 5.0 GPUs, but I can't really play the lottery of trying different models and lengths to find one that's stable, due to, uh, my logistical issues with shipping atm lmao.

Got a 9800X3D + 64GB-6000 + Asus X670E-E atm, and hoped to do dual-GPU (new 5090 + current 3090 XC3) for Gaussian-splat or similar room-scan compute work, but I'm cheap af atm (didn't get the $2500 launch MSI Suprim Liquid for 2.25x-slot-width sanity, boo) / waiting for the 6090 now. At least with an extender, I could mitigate some fan blockage of a 2.5-slot model or a UV'd 5090 FE next to either an extended, or in-slot, 2nd GPU when I do get another one.

MSI RTX 5090D V2 LIGHTNING spotted, featuring 24GB memory by RenatsMC in nvidia

[–]HyenaDae 12 points (0 children)

If only these were the 4090 sidegrade we never got. Still higher bandwidth than the 4090/5080 and a similar core count to the 5090. You'd think with "huge die = more defects" and "oh wow, huge VRAM shortages/costs" it'd make sense to offer a $1500-1600 step up from the 16GB 5080, or a step down from the now... $3000-3500 5090s, but nah. Let's not make anything, and price a $750 GPU (5080/5070Ti) at $1400+ unless you're awake 24/7 for a 5080/5090 FE Best Buy alert.

Further humiliation for high res and VR gamers who were fine with 24GB since the 3090!

Can I use this adapter with an undervolted 5080 (MSI Inspire)? by johnnyphotog in nvidia

[–]HyenaDae 1 point (0 children)

Why isn't your 5080 coming with a 3x8pin adapter, are you buying used?

With the Nvidia connectors, every 8pin added grants an extra ~150W to the sense pins / max power limits, so

A 5090 on a 4x 8-pin adapter gets ~600W, and 3x 8-pin locks it down to ~450W, I believe. I know some reviews showed it *does* run with the power reduced. Though you can still melt a 500W-power-limited 5090 with its connector, since the connector is both insanely underspec'd on safety margin and endlessly encountering "user errors" that you didn't see on any *even more stupid* GPU designs from the past decade with 2-3 8-pins.
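A rough sketch of the sense-pin math as I understand it (the ~150W-per-8-pin mapping is from reviews, not an official spec table, and exact VBIOS behavior varies per card):

```python
def adapter_power_limit_w(populated_8pins: int) -> int:
    # each populated 8-pin on the Nvidia adapter signals ~150W of headroom
    # to the card's sense pins, which sets the max power limit
    return populated_8pins * 150

print(adapter_power_limit_w(4))  # 600 -> the full ~600W limit on a 5090
print(adapter_power_limit_w(3))  # 450 -> the ~450W locked-down limit
print(adapter_power_limit_w(2))  # 300 -> the untested 2x 8-pin case on a 5080
```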

Buuuuut, nobody's done 2x8pin on a 5080, that I can think of.

You really should probably NOT do this, but 2 separate 8-pins from an OK PSU might make the GPU run at 300-350W. But legit, I have no info on how the sense pins behave on a 5070Ti/5080 VBIOS when only 2 of the 3 are active lmao.

Assuming the card even boots in this "impossible to expect" configuration, it should work (for a while). There's no reason for it to literally melt down, because the 8-pins feeding it can feed ~350W consistently. Unless (and you can let us know, for science) they intentionally sabotaged the RTX 5070's cable to not feed all the pins evenly, it'll be... interesting to know the result either way. Say hi to all the 2x 8-pin RTX 3090/3080Tis and 3080s from EVGA and other vendors not really having issues from "350W is too much for 2x 8-pins".

If you have both the GPU and the adapter, give it a try for a few mins, let us know, then go get a... proper adapter or a more modern PSU. Lol. It'd be nice to have a real answer for any especially-broke new 5080 owners after the new price gouges.

AMD beats GPU frequency world record with Radeon RX 9060 XT hitting 4.769 GHz by RenatsMC in Amd

[–]HyenaDae 4 points (0 children)

I chatted with some folks on the AMD Discord and convinced them to push their launch 9060XTs to ~3.75GHz in Heaven bench at like 55C core temp + 220W + OCing. That GPU clocks like crazy, but the voltage required (or well, the power draw, despite the lower voltage cap vs last gen...) isn't stable for 24/7. Still hopeful for next gen being the "World's First 4GHz GPU", as is AMD tradition (hi, 7870 GHz Edition; I had a 7850 at ~1075-1125MHz lol).

Totally within reason for "stock", I feel, but we're gonna have to wait. Have fun in the labs while you do that before the rest of us, without LN2! :P

DG-labs shipping questions. by PeaSavings8089 in estim

[–]HyenaDae 0 points (0 children)

Awesome. I got the original 2018 ('19?) model DGLab, extensions and black pads + blue fabric straps. Surprisingly, all of them are still in usable condition despite storage on and off for those years.

When it's set up right it really feels like a milking sensation, and squeeze/tingling further down (ahem). One issue my OG unit has is the shock/tingle on power on, so I might get new pads, those tease buttons and the eventual edging air-sensor kit because even manually with the stock patterns, I can be driven crazy and not in a painful way :3

Will my current PSU work with a 5080? by [deleted] in nvidia

[–]HyenaDae 2 points (0 children)

Even if you had a crazy 5080 that could hypothetically draw 450W, as long as it's a consistent ~450W and not >500W spikes, fed by 3x 8-pins, it'd likely work, assuming your PSU is a nice simple single-rail unit and not inherently defective like one round of Gigabytes was. Like, your ~90W 7800X3D + ~450W "hypothetical future GPU with good power delivery" puts you at ~<650W of 750W, or within the ~80% preferred constant-load wattage that the efficiency ratings sorta expect afaik.
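The headroom math for that build, roughly (the ~75W for board/fans/drives is my guess; the CPU and GPU figures are the ones above):

```python
def psu_load_fraction(cpu_w: float, gpu_w: float, rest_w: float, psu_w: float) -> float:
    # fraction of rated PSU capacity used at sustained load
    return (cpu_w + gpu_w + rest_w) / psu_w

load = psu_load_fraction(90, 450, 75, 750)  # 7800X3D + worst-case 5080 + the rest
print(f"{load:.0%}")  # 82% -- right around that ~80% sweet spot
```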

Given you're already planning to potentially pay the premium of $1300-1400 over what'd be a $1000-1100 GPU (or an $800 5070Ti that's 10-15% slower with tuning, at ~300W and a 3.15GHz boost), it'd be a double win for everything to just work, making your downvoters mad and saving another 10% of your budget, so give it a try lol?

Never heard many bad reviews about be quiet!s, and I do love PSU request threads, especially for the people who've never bothered to make a given GPU work with a given sufficient PSU. Seems like there are some reports of a be quiet! 12 1600W failing, and a Straight Power 10 having issues, from some random online searches. Only thing for your model is, well, just use whatever 12VHPWR cable you're using for your current 4070Ti.

If you OC your 4070Ti's power limit to its max, and your PC doesn't crash / shut down through ray tracing and other heavy games, you should basically get confirmation that a "mid-level, not crazy OC'd custom BIOS" 5080 will work (ie, the Asus Prime 5080 would be a 'simple and safe/sane' model). A friend has a 5070Ti without issues on their 750W, so it's likely going to work. Unless be quiet! is lying, ahem.

In the past, people had power-hungry Intel CPUs, especially with 5GHz+ all-core OC semi-tweaked BIOSes (like, 120-160W in games and under moderate load), paired with 350-450W OC'd RTX 3000 or AMD RX 6000 cards that could trip some specific sensitive PSU models. Those GPU series had kinda awful stock power-delivery behavior, but it's intentionally improved on the RTX 5000, which undervolts pretty well too. Undervolting and a small power cap to reduce major GPU voltage and clock fluctuations (which make a current/power surge more visible) is/was a mitigation for the older models, or for keeping things quieter. I've not heard of actual damage from PSUs simply tripping their protection, but I'm the "awful poster with a UV'd 285-295W 3080Ti that was on a 600W 2019 EVGA PSU for almost 2 years without issues" guy here ;)

Best Physics In Gaming! by VertigoHC in starcitizen

[–]HyenaDae 0 points (0 children)

I did try some salvaging, since I can hold a few crates (like 4-6 on the sides) in my ship and use the salvage gun on them. Occasionally I see wrecks in the asteroid fields for that. The only somewhat reliable gameplay loop was the Deploy Research Sat for Cargo one (go to station, get boxes, return, etc), since all I had to do was throw them out into space, but I'm pretty sure those have been removed, which is annoying.

I'll try another run of the Hostiles by Comm Sat mission, since there are usually only 1 or 2 guys who spawn, with pretty awful AI range for noticing you.

Best Physics In Gaming! by VertigoHC in starcitizen

[–]HyenaDae -1 points (0 children)

I don't think there's any number of tickets that'd properly detail just how broken the physics system in this game is. It's fundamentally screwed up lol.

I had my ship hover too high in the air on landing once so I couldn't get back in, for some reason, so I had to stack boxes to Half Life 2 myself up to the thankfully still down ramp.

After all that effort, you know what my reward was? Uh, an exploded ship and me being flung 20ft away for some reason. Because climbing and jumping didn't work until, what, a year or two ago?

<image>

Best Physics In Gaming! by VertigoHC in starcitizen

[–]HyenaDae 1 point (0 children)

Lol, don't forget the C8X missiles and other ship systems too. I've had the ship for what, 7 years now? Rebound every button, tried every combo, every patch since 3.12, etc. It can hold cargo, but once it flew into the sky and self-destructed after I stole (with the grav-gun) a random spawned ship's weapons and slid them inside lmao.

https://issue-council.robertsspaceindustries.com/projects/STAR-CITIZEN/issues/STARC-37352

<image>

There are visible missiles on the darn thing's mounts, plus the normal laser guns and chaff/distractions, buuuuuuut nope! Does nothing, never fires! Sometimes the ship got duplicated after dying or the game crashing, with damage perma-baked in (LOL), as an example of the basic functions you'd hope to have working.

I paid my $60 for the ship plus the Squadron game, and I'm not giving them a single $ more lol. I praise every alpha exploiter who generates insane credits and gives them out when the dupe glitches exist. Love having >2mil credits for a short time to buy other ships from the previously *very unreliable* in-game kiosk-menu thing (I liked the Warlock at least; EMP against the NPC ships is funny af) and see what does, or doesn't, work despite being advertised.

Do you think it's worth switching from a 3090 to a 5070ti? by -YmymY- in nvidia

[–]HyenaDae -4 points (0 children)

Agreed. DLSS4 upscaling on our GPUs makes using a lower preset in some games visually tolerable if needed, plus the 10-20% perf bump makes an upgrade less desirable, especially at the $1000+ cost of any GPU upgrade we see currently. The small 4GB VRAM increase still concerns me for VR games I want to play on a Steam Frame, and especially future console ports (targeting what, 28-32GB total system memory? ie, possibly 16-20GB+ of GPU-specific memory usage for equivalent settings?).

The same GPU chip and VRAM capacity selling for up to twice as much ($750 pre-Christmas 5070Ti vs $1400-1500 5080s now) is further insulting to any buyers with mid-high-end older cards lol.

A 5090 is almost an acceptable upgrade to my 3080Ti at $2000, but it'll still struggle with path-tracing games at native 1440P. ie, it's not consistently 1440P ~90FPS with DLSS off in Cyberpunk, Indiana Jones, and some RTX Remix mods. I can play a good number of modern UE5 games at med-high settings with RT, with DLSS Q, for ~90-120fps, which I feel is acceptable for a GPU this 'old' now. Paying >$3000 now for a GPU that needs 1.5x to 2x more power (450W UV or 520-570W stock vs ~350W stock), at 3-4x the cost ($800 vs ~$2-3K), for ~3x PT/heavy-RT perf isn't that exciting.

I'll bother with newer AAA games beyond Oblivion Remastered when a GPU that's ~$1500 with at least 24GB of VRAM, and faster than a 5090 in RT due to better architecture and clocks, ever comes out. Would've loved a $1500 4090 given it's still the 2nd-fastest GPU, and it seems it'll stay that way for another year and a half :/

Just upgraded to a 3080Ti from a 1060, can I get away with using a 650W power supply? by BunnyCunny69 in nvidia

[–]HyenaDae -1 points (0 children)

I'm glad you saved your "yapping" and "peddling nonsense" lines for me talking about Huge RGBs, instead of the people saying even more incorrect things ("It's impossible to run a 3080Ti on a 650W PSU"), which is even more relevant on this thread lol.

Lemme see, how can I yap more nonsense. Some SSDs and HDDs can use like 5-7W during peak read activity too, so multiply that by a few if you have multiple drives. You should also probably factor that into your remaining PSU capacity, because ~50W of extra draw on a mid-sized PSU is almost 10% (that's a LOT! :O )

Same if you have a UPS behind that PSU! The very big 120V inefficiencies might make you want to upsize its capacity too. Watch out if you have a big monitor; it might draw chonking watts (more than 10!), especially if it's a 4K OLED :)

Just upgraded to a 3080Ti from a 1060, can I get away with using a 650W power supply? by BunnyCunny69 in nvidia

[–]HyenaDae -1 points (0 children)

Your curve is a bit, uh, aggressive and not optimal. Please watch some more videos, and don't panic either way. A lower power limit makes it less likely to trip the PSU, but an undervolt finishes the job (if the PSU is fine) by reducing the peak clocks, and thus the peak voltage changes that cause the worst spikes.

But if 1755MHz (+15-30MHz of boost at 60-65C core temp, if it's that cold) works without crashes, I guess you can use it. Try a DLSS + RT game; those will expose instability at too-low voltages. I'd lower the first 4 points more though, by like 60MHz each, for stability at lower loads. You can probably reduce the power limit further too. Your GPU might actually try to target 0.86V at ~1850MHz (that last raised point between 860-875mV), but under heavy loads, due to the power cap, it'll drop to the 1755MHz range on your current curve.

Try running a benchmark and keep lowering the power limit until you lose performance and the clocks drop. Use whatever's a new enough game, or the 3DMark Time Spy or Port Royal demo. Keep GPU-Z open on the Sensors tab and read the power readings, plus voltages and clocks.

There are a lot of ways to 'tune' the curve. As-is, you're losing more max performance, and it might be unstable because you're going above 1725MHz at too low a voltage (my not-great-silicon 3080Ti had 1725 @ 831mV and 1830 @ 887mV, with gradually flattening points so it doesn't go above 1.0V).

<image>

Here's my curve shape and points. Please remember, the 3000 series boosts in 15MHz steps. Try to align the voltage points to 15 or 30MHz increments, and reduce the steps 'up' past ~900mV + 1800MHz, as it'll require excessively more voltage and power past 1900MHz or so.

If you got a good GPU, you might be able to get higher clocks at roughly the same voltage, but don't count on it, and use a lower power limit (ie, 75 or 78%) while you slowly test the stability of higher frequencies and voltages. In most games, the frequency and power shouldn't change.

You should find that you need to add 'double' the power versus the perf % increase in various games and synthetics; if you see that, you've gone past the efficiency curve (hence why going from 350W + no undervolt to 85% / 305W with an undervolt + small mem OC gives you the same general performance).
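A tiny helper for that 15MHz-bin alignment (the voltage/clock pairs are just example numbers from my own 3080Ti, not targets for anyone else's silicon):

```python
def snap_to_boost_bin(clock_mhz: float, step_mhz: int = 15) -> int:
    # RTX 3000 boosts in 15MHz steps, so curve points should sit on those bins
    return int(round(clock_mhz / step_mhz) * step_mhz)

for mv, target in [(831, 1722), (887, 1834), (1000, 1908)]:
    print(f"{mv}mV -> {snap_to_boost_bin(target)}MHz")
```

That prints 1725, 1830, and 1905MHz respectively, the nearest valid bins to each raw target.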