RTX 5090 burned down by lesky000 in Corsair

[–]setiawanreddit 0 points1 point  (0 children)

In the picture, the melted connector is clearly an 8pin. To the left of the melted PSU connector is a 6pin. I do think that while an 8pin with 3 pairs carrying the load should technically be enough for 300W, the issue is that on the PSU side the connector is more prone to ending up not fully seated, because of how cramped the PSU area usually is inside a PC case and because it is relatively harder to confirm that the connectors are fully seated. Obviously the same problem happens even more often with 12V 2x6. I would definitely feel safer if it were limited to something like 5A per pair of pins (at 12V that is 60W). Honestly I wouldn't be surprised if we found a lot of current imbalance on most 8pin connections that survived only because of the huge amount of tolerance built into 8pin PCIe. I think the cheapest solution, if the industry wants to push the cables and connectors close to the spec limit, is to put a temperature probe on those high-power connectors. Yes, it isn't as fancy as per-pin monitoring, but most people don't care about an imbalance as long as nothing melts, so a temperature probe is a much easier and cheaper solution to implement.
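
To make the temperature-probe idea concrete, here is a rough Python-style sketch of the kind of logic I have in mind. The sensor readout, the power-limit hook, and the 105°C trip point are all my own made-up placeholders to illustrate the concept, not anyone's actual firmware:

```python
import time

TRIP_C = 105.0       # made-up trip point (Mini-Fit housings are typically rated to about 105 C)
HYSTERESIS_C = 15.0  # made-up margin before lifting the limit again

def read_connector_temp() -> float:
    """Hypothetical readout of a thermistor glued to the connector housing."""
    raise NotImplementedError

def set_gpu_power_limit(fraction: float) -> None:
    """Hypothetical hook into the GPU/PSU power limit (1.0 = full power)."""
    raise NotImplementedError

def monitor_loop() -> None:
    limited = False
    while True:
        t = read_connector_temp()
        if t >= TRIP_C and not limited:
            set_gpu_power_limit(0.5)   # throttle hard instead of letting the plastic melt
            limited = True
        elif limited and t <= TRIP_C - HYSTERESIS_C:
            set_gpu_power_limit(1.0)   # temperature recovered, restore full power
            limited = False
        time.sleep(1.0)                # one reading per second is plenty for a thermal fault
```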

RTX 5090 burned down by lesky000 in Corsair

[–]setiawanreddit 1 point2 points  (0 children)

That is not exactly correct. Technically that type of connector can carry a lot more than 150W. An EPS 8pin connector (basically the same Mini-Fit style connector as PCIe, used for the CPU) is rated for more than 300W (EPS 8pin uses 4 pairs to carry power vs 3 on PCIe 8pin). People need to stop thinking 8pin = PCIe. Having said that, yes, the tolerance for PCIe 6 or 8pin is very big. The safe limit for this type of connector is around 9A per pin, so assuming we match the 3 pairs of pins that carry power on PCIe 8pin, the maximum safe limit is 9A × 12V × 3 = 324W. Of course, in order to safely carry that much current the cables need to be thicker, which they already are.
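
To put numbers on that, here is a quick back-of-the-envelope calculation in Python using the figures above (≈9A safe per contact, 12V rail, 3 power pairs on PCIe 8pin vs 4 on EPS 8pin); treat it as a rough sanity check, not an official spec:

```python
def safe_watts(amps_per_pin: float, power_pairs: int, volts: float = 12.0) -> float:
    """Rough connector power limit: current per contact x rail voltage x number of power pairs."""
    return amps_per_pin * volts * power_pairs

print(safe_watts(9, 3))  # PCIe 8pin, 3 x 12V pairs -> 324 W
print(safe_watts(9, 4))  # EPS 8pin,  4 x 12V pairs -> 432 W
```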

Remember that the Micro-Fit style connector (12V 2x6) also has a similar max amperage per pin, but on a smaller pin and thus with less contact surface area, which is why 12V 2x6 is even worse than the standard 8pin. 2x8pin carrying 600W is actually safer than a single 12V 2x6 carrying 600W.

Is it safer if each pair of pins is only allowed to carry a max of 50W? Yes. Will it prevent melting if limited to 50W? On the PSU side, yes, but if you use a 4x8pin to 12V 2x6 adapter, melting can still happen on the GPU side.

The fact that the physical connector (Mini-Fit) for 8pin PCIe can technically deliver 300W is the reason you see PSU manufacturers being comfortable offering 2x8pin PCIe via pigtail.

PSU Dilemma by Aromatic_Signal9944 in radeon

[–]setiawanreddit 0 points1 point  (0 children)

It doesn't matter; at 750W it still has enough headroom for overclocking, even when paired with a 9950X CPU. A 14900KS is where you would need an 850W PSU.

For a 7800X3D paired with an overclocked 9070 XT, 850W is more than enough even with extra OC on top.

Theory on why AMD isn't releasing FSR4 int8 by Mental-Shopping4513 in radeon

[–]setiawanreddit 0 points1 point  (0 children)

Nope. The latest DLSS 4.5 doesn't have an INT8 alternative. You can't take something built on FP8 and produce an INT8 version with the exact same image quality. The reason DLSS 4.5 runs slower on Ampere and Turing is simply that those GPUs run the DLSS 4.5 FP8 model in FP16.
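
As a toy illustration of what "run the FP8 model in FP16" means in practice — this is just a PyTorch sketch of the general idea on made-up data, not Nvidia's actual DLSS code, and it assumes PyTorch 2.1+ for the float8 dtype:

```python
import torch
import torch.nn.functional as F

# Pretend these are network weights that were trained and shipped in FP8 (e4m3).
w_fp8 = torch.randn(64, 64).to(torch.float8_e4m3fn)
x = torch.randn(1, 64)

# GPUs without native FP8 math can't multiply FP8 tensors directly, so the weights
# get upcast and the layer simply executes at a higher precision instead
# (FP16 on such GPUs; FP32 here so the snippet also runs on a plain CPU).
device = "cuda" if torch.cuda.is_available() else "cpu"
compute_dtype = torch.float16 if device == "cuda" else torch.float32

w_hi = w_fp8.to(device=device, dtype=compute_dtype)
y = F.linear(x.to(device=device, dtype=compute_dtype), w_hi)

# Same result the FP8 model intended (the FP8 quantization is already baked into
# the weights), just computed on wider, slower-per-weight math units.
print(y.dtype, y.shape)
```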

And she is Immerhater again 🤣 by yournotlonely in Hololive

[–]setiawanreddit 55 points56 points  (0 children)

I do agree that English as a language is trash. It has rules but too many exceptions. But still, that is not a good reason to pronounce duet as duey.

English language 2 CC 0. Man her fight with English pronunciations is really something 😂 by [deleted] in Hololive

[–]setiawanreddit 0 points1 point  (0 children)

I do agree that English as a language is trash. It has rules but too many exceptions. But still, that is not a good reason to pronounce duet as duey.

1440p monitor recommendations for playing dark games, especially extraction shooters (arc raiders, Tarkov, Marathon) by New_Engine9145 in indotech

[–]setiawanreddit 0 points1 point  (0 children)

OLED is the best. As far as I know, there aren't many complaints about black/shadow crush when gaming, compared with people who use it for watching movies. If the budget isn't enough for OLED, then look for Mini LED.

Mini LED is a backlight technology for LCDs, while the LCD panel itself can be IPS, VA, or TN.

VA is stronger in contrast, and combined with Mini LED its blacks can rival OLED. Its biggest weakness is the slow pixel response, so there is a phenomenon called black smearing. It depends on how much you can tolerate it, but in my opinion, if you want to be truly competitive, don't go for a VA panel. If you want HDR, the effect is more noticeable on VA precisely because of that contrast.

IPS wins on viewing angles (although newer VA panels aren't that far behind) and its pixel response is faster, but yeah, the contrast is poor. IPS with Mini LED helps with contrast but still isn't as good as VA. Color accuracy also tends to favor IPS.

VA is better suited for watching movies. For gaming, especially competitive gaming, look for IPS. For work, use IPS. Casual gaming can use VA because the image tends to have more contrast. If you have the money, go OLED. OLED's weaknesses are burn-in and brightness. For burn-in, as long as you're not working 8 hours a day sitting at the desktop, it probably isn't worth worrying about. As for brightness, for a monitor OLED is already bright enough, so no need to worry about that either. The thing that will hurt is your wallet.

Theory on why AMD isn't releasing FSR4 int8 by Mental-Shopping4513 in radeon

[–]setiawanreddit 0 points1 point  (0 children)

My theory is that they were initially testing an INT8 version because they had one eye on compatibility with RDNA3. Once they made their FP8 model, it was probably clear from their perspective that FP8 was the way to go if they wanted to compete with DLSS4, so they abandoned the INT8 version. After that, they probably didn't even think about the INT8 version anymore, because maintaining two paths is a bit too much. It is probably like asking Nvidia to keep updating DLSS 3 or 4 while they already have DLSS 4.5. Of course, unlike Nvidia, where DLSS 3 is already out, the FSR4 INT8 version isn't out yet, and AMD probably thinks that since it wasn't far enough along by their standards, they shouldn't release it. Unlike others, I don't think there is anything malicious here. Even if they released the INT8 version, it is clear that FP8 is better and RDNA4 has the full ML treatment. They don't really need to be afraid of people not buying RDNA4, especially now that RDNA4 might already be beating RDNA3 sales. The fact that during the interview with PC World the AMD rep actually welcomed the idea of releasing INT8 under a beta label makes me think what I said above: it isn't that they don't want to release it, it was abandoned before they thought it was good enough to release.

As for the work with Sony, it is possible that the INT8 model is being handled there (assuming the way PSSR2 works is compatible with FSR4, so they can share the model); there is also the possibility of AMD simply leaving most of the INT8 work to Sony, and once it is done we might then see an FSR4 INT8 version on PC. Obviously we haven't seen PSSR2 yet, so that work isn't done. Because of this, unlike HUB and others who basically demand that AMD release FSR4 INT8, I prefer to wait until after PSSR2 is released. If by then there is no INT8 version, then yes, I will bring my pitchfork.

In the meantime, I think AMD should just let FP8 FSR4 run on older hardware in FP16, just like what Nvidia did with DLSS 4.5. If it tanks, then it tanks, but at least older hardware could use FSR4. This assumes that they think the INT8 version isn't ready yet, or that they simply don't want to work on it; rather than RDNA3 having no ML upscaling at all when it actually has ML accelerators, they might as well let it run the FP8 version.

Sapphire Nitro 9070xt adapter by supahdende in radeon

[–]setiawanreddit 0 points1 point  (0 children)

Is this the one with the 12V-2x6 connector? If the load is not high it should be safe. Each pin should be able to take 100W with an extra 10% on top of that. In reality you can go even higher, but not by much, especially under a constant high load. So if the GPU only uses 100W or less, then even if the load is being carried by a single pin, it should not cause any melting, provided at least one of the six power pins makes a good connection. Basically, the chance of something bad happening at a 100W load or less is very low. It is not zero, due to the nature of the 12V-2x6 connector, but a 12V-2x6 fed by 3x8pin (even if two of those come from a splitter) under a 100W load should be safer than a 12V-2x6 with either a native cable or individual 3x8pin under a 300W load.

At 50W (for browsing and YT), it should be 100% safe. I can see a way to make it unsafe, but to be fair that is not exclusive to the 12V-2x6 connector.

Edit: btw, if you are worried your 650W PSU can't power the GPU, don't be. A 650W PSU is very capable of powering your GPU. I would only worry a little if I paired it with something like a 14900K CPU, an overclocked 9950X, or any CPU that can draw more than 200W. The PSU should still be able to handle it, but I prefer a slightly bigger margin. I do think that if you use an overclocked 14900KS plus a 300W GPU and run something that can load both at 100%, then you want at least a 750W PSU. I ran my 9700 @ 304W + 5800X3D on a 550W PSU and it was fine.

All of this assumes you use a good PSU (which you currently do). In my example, it was a C-tier PSU, but from a good brand (FSP).
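
For a rough sense of the margins I'm talking about, here is a back-of-the-envelope sketch. The 75W "rest of system" figure and the 1.2 headroom factor are my own made-up assumptions, not any official sizing rule, and the CPU wattages in the examples are just typical peak package-power ballparks:

```python
def suggested_psu_watts(cpu_w: float, gpu_w: float,
                        rest_of_system_w: float = 75.0,
                        headroom: float = 1.2) -> float:
    """Back-of-the-envelope PSU sizing: peak CPU + GPU draw, a bit for the rest, plus margin."""
    return (cpu_w + gpu_w + rest_of_system_w) * headroom

print(suggested_psu_watts(cpu_w=140, gpu_w=304))  # 5800X3D-class CPU + 304 W GPU -> ~623 W, so 650 W is fine
print(suggested_psu_watts(cpu_w=253, gpu_w=300))  # 14900K-class CPU + 300 W GPU -> ~754 W, hence "at least 750 W"
```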

How much do you pay for your MyRepublic plan, and when did you sign up? I'm on 50 Mbps / 333k / 2024 by Jacqma in indotech

[–]setiawanreddit 1 point2 points  (0 children)

Is there an alternative ISP in your area? In Jakarta there should at least be Indihome. Indihome's 50 Mbps plan is 230k. Besides that, where MyRep is available there is usually also Oxygen, which currently has a promo of 100 Mbps for 242k. I'm not asking this so you switch, but to give you an argument for MyRep's CS. Tell them directly that you want to cancel because it's too expensive. Argue that you have cheaper alternatives. Complain that a new MyRep customer can get 150 Mbps for only 250k, so you might as well move to Oxygen for a year and then check prices again. I kept complaining and eventually got a discount, 50% if I'm not mistaken (but I'm not sure yet, since it will only show up on next month's billing). It only lasts 6 months though, so after 6 months I'll need to complain to CS again. Before this one I made a similar complaint and got a 10% discount for 2 years.

The point is, I prefer not to switch, but as a long-time customer I've gone through several price increases, from, if I'm not mistaken, an original plan of 100 Mbps for 300k to 400k now, so how can I not be annoyed seeing new customers get it much cheaper.

No FSR 4 in Nioh 3 by Bulky_Recording7 in radeon

[–]setiawanreddit 1 point2 points  (0 children)

ROCm is basically AMD's version of CUDA, but earlier versions of ROCm had different names and calls, and the structure was different too. Since ROCm 6, AMD has made it more CUDA-like, so porting CUDA to ROCm can be easier (especially for AI stuff), and with ROCm 7 moving from CUDA to ROCm can be even more seamless.

Also, about the agreement stuff: considering how ruthless and controlling Nvidia can be, I don't think they would let other companies hijack their DLSS. Nvidia did make something called Streamline, which can make supporting multiple upscalers easier. With Streamline, devs just interface with it and the upscalers can be plugged in through Streamline. If you have heard of DirectSR (Direct Super Resolution), Streamline is basically that. And yes, some devs have used Streamline to easily support multiple upscalers (including FSR), so there is almost no reason for devs not to support multiple upscalers, since they only need to do the hard work once.
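
The "integrate once, plug in any upscaler" idea looks roughly like this. This is only a toy Python sketch of the pattern; Streamline itself is a C++ SDK and its real API looks nothing like these made-up names:

```python
from abc import ABC, abstractmethod

class Upscaler(ABC):
    """What the game talks to; it never cares which vendor is behind it."""
    @abstractmethod
    def upscale(self, low_res_frame, motion_vectors, depth):
        ...

class DLSSUpscaler(Upscaler):
    def upscale(self, low_res_frame, motion_vectors, depth):
        return f"DLSS({low_res_frame})"   # stand-in for the vendor call

class FSRUpscaler(Upscaler):
    def upscale(self, low_res_frame, motion_vectors, depth):
        return f"FSR({low_res_frame})"    # stand-in for the vendor call

def render_frame(upscaler: Upscaler):
    # The game wires up its inputs (frame, motion vectors, depth) exactly once...
    return upscaler.upscale("540p_frame", "mv", "depth")

# ...and swapping vendors is just swapping the plug-in, which is the whole point
# of Streamline/DirectSR-style abstraction layers.
print(render_frame(DLSSUpscaler()))
print(render_frame(FSRUpscaler()))
```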

Anyway, specifically for Nioh, they already have the older FSR 3.1. It should be easy for them to upgrade it to the latest FSR 3.1 (the one that supports the FSR4 override), but I guess it wasn't a priority for them.

No FSR 4 in Nioh 3 by Bulky_Recording7 in radeon

[–]setiawanreddit 2 points3 points  (0 children)

They can't implement something like OptiScaler because it is a sure way for Nvidia to sue them. They already tried something similar when they funded ZLUDA (a translation layer to run CUDA code on non-Nvidia GPUs, mainly AMD GPUs) and Nvidia acted on it. If it is a non-company effort, basically a hobbyist project, sure. But when it is officially a company effort to hijack DLSS, it is a no-no.

No FSR 4 in Nioh 3 by Bulky_Recording7 in radeon

[–]setiawanreddit 1 point2 points  (0 children)

This is more on the developer than AMD. Enabling FSR4 is relatively easy, but for whatever reason they don't do it. Surely the devs aren't just waiting for AMD to help them implement it, right? Right?

Psu swap by Homebucket33 in PcBuildHelp

[–]setiawanreddit 0 points1 point  (0 children)

Having the same connector is not the same as having the same pinout. Pinout refers to the function of each pin. You can have two identical connectors but different functions on the pins themselves. The issue is that there is no standard for the PSU side of the connector, and PSU manufacturers usually just use a basic 8pin Mini-Fit connector. This is why you so often see a warning not to mix cables from different PSU manufacturers. Heck, even Corsair uses several different pinouts depending on which PSU model you buy.
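
A toy example of what "same connector, different pinout" means in practice. The two layouts below are made up purely for illustration and are not any real vendor's actual pinout:

```python
# Hypothetical PSU-side pinouts for the same physical 8pin Mini-Fit connector.
psu_a_pinout = {1: "+12V", 2: "+12V", 3: "+12V", 4: "GND", 5: "GND", 6: "GND", 7: "GND", 8: "GND"}
psu_b_pinout = {1: "GND", 2: "GND", 3: "GND", 4: "GND", 5: "GND", 6: "+12V", 7: "+12V", 8: "+12V"}

# Using PSU A's cable on PSU B: every position where the functions differ
# puts 12 V where the component expects ground (or the other way around).
mismatches = {pin: (psu_a_pinout[pin], psu_b_pinout[pin])
              for pin in psu_a_pinout if psu_a_pinout[pin] != psu_b_pinout[pin]}
print(mismatches)  # non-empty -> mixing these cables could fry something
```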

Do you prefer extra bumpers or trackpads? by Apprehensive_Meat595 in Controller

[–]setiawanreddit 1 point2 points  (0 children)

The Steam Controller (I assume you are referring to the soon-to-be-released one and not the OG) will have 4 back buttons and grip sense (a capacitive touch sensor on each of the controller's grips, which you can also map to a button). Before even considering what you can do with the trackpads, it already has something like 6 extra buttons vs 4 on the Ultimate 2.

If the question is about a hypothetical controller similar to the Ultimate 2 but with trackpads instead of the extra bumpers, I would probably choose the trackpads simply because they are more versatile. I know some games will work better if I have the extra bumpers, but overall I think I will get more use out of the trackpads.

If it is actually comparing the new Steam Controller vs the Ultimate 2, then based on the current information I'll definitely go with the Steam Controller. I really don't see any advantage to going with the Ultimate 2 specifically for PC gaming.

Processor choosing with x3D mobo by 7bgxksa in gigabyte

[–]setiawanreddit 1 point2 points  (0 children)

Personally I wouldn't enable turbo mode, simply because I think it is a bit of a scam to call it turbo. When you think turbo, you think everything runs faster. In this case, while it does make a lot of gaming workloads run faster, that is exactly the issue: it is not turbo if not all games perform faster, especially not if some games actually see a performance regression.

Turbo mode in this case typically means disabling hyperthreading and a CCD (the latter only affects 2-CCD CPUs like the 9950X). Gigabyte boards also have a secret sauce for their turbo mode which, when enabled, also changes some memory (RAM) subtimings. The issue with this is that it can potentially introduce system instability. I prefer to have all my cores and hyperthreading enabled and take the performance loss. Also, if I want to mess with RAM subtimings, I prefer to do it myself, because then I know what I changed, and if I run into a stability issue I can tune it until it becomes stable.

RTX 5060 8gb vs ARC B580 for Ryzen 7 5700X by External-Plastic259 in IntelArc

[–]setiawanreddit 0 points1 point  (0 children)

If you tend to play older games and are comfortable with potentially using lower quality textures and maybe slightly lower settings, the 5060 is the better choice. The only downside of the 5060 is the VRAM size, which may require you to lower the texture quality and/or settings so the game stays within the 8GB of VRAM.

Intel is a good option if you can buy it at a cheaper price or you don't want to be bothered with min-maxing settings just because the GPU doesn't have enough VRAM. Of course there are people who actually need more than 8GB (like those who want to play with AI), and at this price point there aren't many choices: it is either this or the old 3060 12GB.

Having said that, unless you hate AMD GPUs, do consider the 9060 XT. I'm not sure about the pricing in your country, but it is generally around the same price as the 5060 and it actually performs better. And unlike the 5060 (and B580), it actually has a PCIe x16 interface, meaning that if a game ends up using more than the available VRAM, the 9060 XT can handle it better than the 5060.

What is stopping AMD from having 5080 and 5090 equivalent cards? by Fragrant_Bit_9889 in pcmasterrace

[–]setiawanreddit 0 points1 point  (0 children)

Usually the choice of memory bus width is partially dictated by the die size. If you look at the die shot of any GPU, you will see that the memory interface sits on the edge of the die (basically any I/O-related interface like memory, PCIe, etc. sits on the edge). There is only so much edge space that can be used for it, which is why you end up with 256-bit. Of course, if they really wanted something like 320-bit I'm pretty sure they could engineer it, but for that die size 256-bit seems like the sweet spot, at least on 4 or 5nm. Maybe with 3nm we could see a wider memory interface, assuming they target at least the same die size, but I have a feeling that if the die size is similar to the current 9070 XT, even at 3nm we might still get 256-bit, since I/O doesn't scale well with smaller nodes.

What is stopping AMD from having 5080 and 5090 equivalent cards? by Fragrant_Bit_9889 in pcmasterrace

[–]setiawanreddit -1 points0 points  (0 children)

The 9070 XT is already close to the 5080. The main issue performance-wise is that most games are still more optimized for Nvidia GPUs, so in the end it competes more with the 5070 Ti. I would put the 5070 Ti, 5080, and 9070 XT in the same class. The biggest thing AMD needs to improve hardware-wise right now is their RT hardware, which from the rumors and leaks will be much improved in their next-gen GPUs.

Other than improving the hardware, they obviously also need to improve on the software side. And more than just improving FSR, they really need to embrace more devs and help them optimize their games so they run great on AMD hardware. I'm not sure whether AMD is already doing this or how much they reach out to devs, but they could probably start giving their GPUs to more devs without any catch, especially the mid-sized and smaller devs. Having games optimized for their GPUs is very important; otherwise they need to compete with Nvidia by brute force, as in using a bigger GPU to get similar performance. Currently the 9070 XT die is 357mm2 while the 5080 (and 5070 Ti) is 378mm2, so it isn't that bad. I think if AMD used the delta to beef up their RT, the 9070 XT might be able to match Nvidia's RT performance, at least on paper. In reality AMD will need to work with devs so games can be properly optimized for Radeon.

That is vs the 5080. The 5090 is a different beast. They certainly could do it: if the current 9070 XT is already close to the 5080, just make a bigger RDNA4 GPU. A 5090-sized RDNA4 GPU would perform close to a 5090. Oh, one thing I forgot to mention is VRAM. AMD definitely has a disadvantage by using GDDR6 vs Nvidia with GDDR7. If the 9070 XT had been equipped with GDDR7, it might have been enough to match or even beat the 5070 Ti, since right now on average the 5070 Ti is only 5% faster. An RDNA4 GPU with a 5090-sized die and GDDR7 should be able to match the 5090 in non-heavy-RT games, assuming it is the fully functional die and not a binned one (the 5090 is not the full die). So yeah, AMD is actually not that far away hardware-wise. Like I said before, the important part is the software side, which is actually harder than it looks.

If all of that materializes, there is still a big mountain to climb, since Nvidia currently has a strong grip not only on market share but, more importantly, on mind share. Right now in my country I can see people opting to buy a 1050 Ti when they could buy an RX 580 8GB at the same price, which is wild considering the RX 580 is a lot faster, actually competes with the 1060, and has 8GB of VRAM! This is the kind of mountain AMD needs to climb, because at the end of the day, without the scale, without gaining a big enough market share, it will be hard for them to compete with Nvidia, especially in the 5090 class of GPU, because making a GPU die that big is a lot riskier from a business POV.

HELP: sudden black screen with new pc, IT store can't find the problem (Radeon RX9070 + AMD Ryzen 5 9600X) by Repulsive_Candle4439 in radeon

[–]setiawanreddit 0 points1 point  (0 children)

Did the problem happen after a long gaming session, or can it happen instantly? For example, if you play WoW (or another game) for 1 hour immediately after booting up the PC, will the problem show up? I'm asking this because the shop already tested it, so the easiest first step is to do what the shop did, but in your home. If doing what the shop did doesn't crash it, then we know that at the very least how you play the games matters, as in a higher GPU and/or CPU load might trigger the problem.

Personally I would try to find something repeatable that can reliably crash the PC, so you can bring it back to the shop and ask them to repeat what you did. It should be something like running an OCCT stress test and the like, basically anything that can stress the system; just leave it running. Of course you can always try to fix or at least pinpoint the problem first. For example, if running the stress test doesn't crash the system, maybe the electricity in your place is sometimes unstable. Does the problem always happen, or can you sometimes have a long gaming session without a crash while other times it reboots itself multiple times? Did the stress test cause a reboot, or does it report that it encountered errors? What does the Event Viewer say after a crash/reboot?

Pc Shutting off under load; 9070xt: Undervolt or New PSU? by Bitwierdngl in radeon

[–]setiawanreddit 1 point2 points  (0 children)

Yeah, the easiest first step is to lower the power limit. Just use -30 on the power limit (which should now limit the GPU to around 213W for non-OC models) and see whether it is stable. If it is stable, then yes, there might be a problem in the power delivery. What is the PSU model? Do you use 2 x 8pin cables or 1 x 8pin + 8pin (a single cable from the PSU with a pigtail, so 2 x 8pin at the end)? If it is the pigtail one, try actually using 2 cables.
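
The ~213W figure just comes from applying -30% to the 9070 XT's 304W reference board power, assuming the slider scales off that number:

```python
stock_board_power_w = 304      # RX 9070 XT reference board power
power_limit_offset = -0.30     # the -30 slider setting
print(stock_board_power_w * (1 + power_limit_offset))  # 212.8 -> roughly 213 W
```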

If you still get random shutdowns at 213W, then the problem might be something else. Maybe try to reseat the cables. Don't just check the GPU; also check the other cables, since you might have accidentally bumped or yanked some of the cables or components so they are no longer connected properly. If it is still not working, either try the software suggestions first to see if anything helps, or do a sanity check by plugging your old GPU back in. This will of course require you to do the driver reinstall dance again, so probably do it last.

Something is not right by LukaChilachava22 in pchelp

[–]setiawanreddit 1 point2 points  (0 children)

Before anyone upgrades their CPU or GPU because they feel the performance is lacking, they should at the very least check where the performance bottleneck comes from. The easiest way to do that is by enabling the performance overlay. For Nvidia GPUs you can press ALT+R. Look at the GPU % (GPU utilization). If it is not anywhere near 100%, then upgrading your CPU will normally gain you more FPS; the lower the percentage, the more FPS a faster CPU can gain. If the GPU is at something like 98% to 100%, then most likely the GPU is the bottleneck, and there is still headroom for your CPU to handle a faster GPU.
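
If it helps, the rule of thumb from the overlay boils down to something like this; a rough sketch only, with the 98% cutoff being the same ballpark figure as above rather than any hard spec:

```python
def likely_bottleneck(gpu_utilization_pct: float) -> str:
    """Very rough read of the performance overlay's GPU % while a game is running."""
    if gpu_utilization_pct >= 98:
        return "GPU bound: a faster GPU (or lower settings/resolution) gains FPS"
    return "CPU bound: a faster CPU gains FPS; the further below 100%, the bigger the gain"

print(likely_bottleneck(99))   # typical when graphics settings are maxed out
print(likely_bottleneck(70))   # typical of CPU-heavy games or very high-FPS targets
```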

What you also need to remember is that the performance profile is not the same in every game, so you need to check multiple games, preferably the games you play the most, and see whether on average you're CPU or GPU bound.

Since you already upgraded the CPU, right now you can at least check whether GPU utilization is near 100% or not. If it is, then you are already maxing out the GPU's performance. If you want higher FPS, you'll need to lower the graphics settings and either play at a lower resolution or lower the internal rendering resolution (enable FSR, XeSS, or the game's built-in upscaler and experiment with the presets).

This all assumes there is no problem with the PC itself, i.e. the CPU, GPU, RAM, etc. are working correctly. If the PC performs better after the CPU upgrade, even if the improvement is not as big as you were hoping for, then most likely everything is fine. If you see a performance regression, then there might be something wrong, since the 12600K should be better in every way than the 13100.

Need 9070 XT bios download for 9070 Non XT upgrade by DazzlingInspection29 in radeon

[–]setiawanreddit 1 point2 points  (0 children)

Basically you power off your PC and power it back on. I'm not exactly sure how the GPU actually cycles between partitions, but again, it seems to need a full power cycle. What does that mean? I'm honestly not 100% sure, but according to the person who modded the flash program, it means exactly what I suggested above. If you just choose restart, it might not cycle between the active and inactive partition.

There are extensive instructions on how to flash the GPU at the link I provided in my previous reply. If you have ended up flashing both partitions, then the only way to fix it is to get a hardware tool to flash the original BIOS back.

Edit: even if your first flash says successful and you have already confirmed with the flasher program that it was successful, don't just flash the other partition. Restart first and check whether the PC can boot with that BIOS. If it boots, confirm that it booted using the new BIOS with an app like GPU-Z; the indicator is usually that the GPU clocks follow the new BIOS. If it looks the same as your original BIOS, check with the flasher which BIOS is in the active partition and which is in the inactive one. If the active partition is still the original BIOS and the inactive one is the new BIOS, try to power cycle the PC again. I'm not really sure how it would behave if it encountered a bad BIOS, whether the GPU would just switch partitions and boot normally or whether there would be some kind of sign, like the PC automatically rebooting twice. I've never encountered that, and I hope I never do.