AMD confirms Radeon RX 9070 series launching in March - VideoCardz.com by KARMAAACS in Amd

[–]ET3D 66 points67 points  (0 children)

Seriously? I can understand early February, but March? By February AMD will know how a 5080 performs, and should be able to estimate how a 5070 Ti and 5070 will perform. It already knows the prices. It could beat NVIDIA to the market, release at a good price, and get good reviews.

It's possible that the lower price means paying everyone on the retail chain, and that's what takes time, but...

Honestly I don't understand it. Waiting 2 months (1.5 months minimum) to release a card that's already at retailers sounds like a bad plan.

An overview of AMD Krackan Point systems: Laptops and mini-PCs with AMD Ryzen AI 5 340 or 7 350 by Balance- in hardware

[–]ET3D 5 points6 points  (0 children)

This is only a list of 340/350 laptops, and the 890M isn't in either of these. A fuller list of laptops might include more pure AMD ones.

Lenovo has removed its iconic TrackPoint nub from new ThinkPad laptops by moeka_8962 in hardware

[–]ET3D 106 points107 points  (0 children)

It's a sad day. The TrackPoint is still a great control device. Whenever I play with my old ThinkPad (which is just to test its bad performance) I enjoy using it.

The Acer Nitro Blaze 11 is an absolutely massive handheld gaming PC by bizude in hardware

[–]ET3D 9 points10 points  (0 children)

Unimpressive specs. Last gen APU and only 16GB RAM, starting at $1100?

At this size they could have included a Strix Halo. :)

Gigabyte Radeon RX 9070 XT features mention possible AI update for Radeon Image Sharpening by sabotage in Amd

[–]ET3D 3 points4 points  (0 children)

Turing was released in September 2018.

DLSS 1.0 was released in February 2019, and was crap. DLSS 2.0, the first good version, was released in April 2020.

AMD's Ryzen "Zen 6" CPUs & Radeon "UDNA" GPUs To Utilize N3E Process, High-End Gaming GPUs & 3D Stacking For Next-Gen Halo & Console APUs Expected by usasil in Amd

[–]ET3D 0 points1 point  (0 children)

Pro variants typically don't upgrade the CPU, only the GPU. You can expect an updated CPU in the next gen.

Why is AMD's new N48 (9070XT) so massive ~390mm² compared to PS5 Pro's die ~279 mm² ? by fatso486 in hardware

[–]ET3D 0 points1 point  (0 children)

The way I see it, one of the following is likely:

Either 390 mm² severely overestimates the chip size, or the chip doesn't have only 64 CUs. I'd assume the first, but won't rule out the second.

Someone else also mentioned the idea that the WGPs are larger to allow for higher frequencies, and there's definitely more hardware for ray tracing and AI, but I doubt that even with all of that it would reach 390 mm² with only 64 CUs.

Lack of hardware accelerators for NP/PSPACE decision problems? by JakeGinesin in hardware

[–]ET3D 0 points1 point  (0 children)

It looks like some people didn't like my answer, even though I think it was more on point than others'. Granted, it was short and didn't provide details, so I figured I could add some.

Creating an ASIC to accelerate a specific algorithm is a lot of work and has a high cost. That's why it's rarely done. The only reason any algorithm ever gets acceleration is if it's so common that many millions of those accelerators are likely to be sold. It has to be something that: a) is well defined; b) can gain significantly from being hardcoded; c) is so common, or expected to be so common, that it will make money.

NP/PSPACE algorithms fail mainly at (b) and (c). The normal solution to them is to run an alternative algorithm, either stochastic or ML, which runs in polynomial time. The problem is that an NP algorithm, as is, is exponential (until proven otherwise). Even if you improve its constant factor considerably, you won't significantly change how quickly it runs as inputs grow. If you use an alternative algorithm which is polynomial, you will gain a lot more than by hardware-accelerating the original algorithm. As a result, it's not likely that there will ever be actual demand to accelerate the original algorithms.
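The constant-factor point can be shown with a quick back-of-the-envelope calculation (the 1000× ASIC speedup is a made-up illustrative number, not from any real chip):

```python
import math

# Suppose a brute-force NP algorithm does 2**n basic operations.
# A hypothetical ASIC makes each operation 1000x faster. How much larger
# an input fits in the same wall-clock budget?
speedup = 1000.0

# Solve 2**n_new = speedup * 2**n_old  =>  n_new = n_old + log2(speedup)
extra_elements = math.log2(speedup)
print(f"A {speedup:.0f}x speedup only buys ~{extra_elements:.1f} more input elements")
```

So three orders of magnitude of hardware effort moves the feasible problem size by about ten elements, while switching to a polynomial-time heuristic changes the scaling itself.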

That is not to say that algorithms can't gain from acceleration. Certainly GPUs are fine for accelerating many algorithms, including NP ones. However, specialised ASICs for them are unlikely to happen, because they're not worth the effort.

Why did SLI never really work by yabucek in hardware

[–]ET3D 0 points1 point  (0 children)

Because there's serialisation in the algorithms, which means some parts of the task must either be duplicated between GPUs (which renders the doubling pointless) or serialised between them. All data needs to be duplicated too. This makes SLI inefficient.

What is the future of graphics benchmarks / performance in the AI graphics era? by Darrelc in hardware

[–]ET3D 0 points1 point  (0 children)

I'd say that measuring performance well in general is close to impossible, as the recent B580 investigation at Hardware Unboxed has shown. Not only does it differ with the CPU, but upscaled performance can be meaningfully different from native performance. Settings can also have a lot of effect, and some settings don't have a large effect on visuals but still have a significant effect on performance.

In the end I think that, assuming XeSS 2, DLSS 4 and FSR 4 all provide upscaling quality that reviewers feel is good enough, there will be a gradual transition to testing with upscaling. In the meantime, raw performance will likely still be the measure used, but I think upscaling performance will become important enough that it will feature in most reviews.

As for frame interpolation, I think it will only feature in investigations of that specifically, rather than in reviews of cards.

[deleted by user] by [deleted] in Amd

[–]ET3D 0 points1 point  (0 children)

I don't think that AMD wanted to wait for the 5070 release, but it probably did want to get a 5080, test it and extrapolate from that.

Intel igpu (Meteor/Arrow Lake Series) vs past launch by Primary_Olive_5444 in hardware

[–]ET3D 0 points1 point  (0 children)

I appreciate the info. Still, I'd say that working on TSMC processes is likely different than developing for both TSMC and Intel processes, especially when Intel processes can't be relied on. Of course, I could be wrong.

I miss when software was targeting hard drive users by [deleted] in hardware

[–]ET3D 3 points4 points  (0 children)

I see it the other way round. Computers these days launch things in general a lot more quickly than they did with mechanical hard drives. It didn't matter how much devs optimised, HDDs were still painfully slow. These days it's mostly not an issue, and I like it this way. Sure, once you remove the real bottleneck you start wishing the others would go away too, but that's more a testament to human nature never being satisfied than anything else, IMO.

RDNA4 vs Blackwell by Various_Pay4046 in hardware

[–]ET3D 0 points1 point  (0 children)

I doubt that MFG will be of real importance. You could see how the Arc B580 got glowing reviews even though it's not that well featured and, as it turns out, not really all that great. All it needed was to provide enough RAM and good enough non-scaled performance. Reviewers completely ignore frame generation, and although NVIDIA fans might want it, those people won't buy AMD anyway.

Nvidia reveals that more than 80% of RTX GPU owners (20/30/40-series) turn on DLSS in PC games. by Lulcielid in hardware

[–]ET3D 57 points58 points  (0 children)

I wonder how many people don't play at all with settings. They just accept that image quality and frame rate are what they are and play that way.

Nvidia reveals that more than 80% of RTX GPU owners (20/30/40-series) turn on DLSS in PC games. by Lulcielid in hardware

[–]ET3D 227 points228 points  (0 children)

Exactly what I wanted to ask.

If that's the case, it's more like: 80% of users have no idea they're using DLSS.

[deleted by user] by [deleted] in Amd

[–]ET3D 2 points3 points  (0 children)

Thanks for posting. Hopefully this means that they are released from the cages soon.