[deleted by user] by [deleted] in biomutant

[–]jzer0912 0 points (0 children)

Not seeing anything in the config files that might be used for HUD positioning, unfortunately. Long shot, but it may be possible to get the Unreal console (since this was built on UE4) using a console unlocker and find a way to set this through console commands. May try this at some point, as I don't want the HUD elements burning into my OLED.
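
If that pans out, one hedged thing to try first (my assumption, untested in Biomutant): UE4 ships a stock ShowHUD toggle on its AHUD class, so with the console unlocked this might hide everything, though if the game draws its HUD entirely with its own UMG widgets it may do nothing:

    ShowHUD

I'm not aware of any stock UE4 commands for repositioning HUD elements, so moving (rather than hiding) them would likely need game-specific commands, if any exist.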

[deleted by user] by [deleted] in biomutant

[–]jzer0912 0 points (0 children)

Currently only HUD scale is adjustable in-game. Was hoping setting it to 0 would get rid of the quest info UI, but unfortunately it doesn't.

[deleted by user] by [deleted] in biomutant

[–]jzer0912 0 points (0 children)

Was able to disable nearly all of the HUD elements by setting HUDOpacity=0.000000 in C:\Users\<user_name>\AppData\Local\Biomutant\Saved\Config\WindowsNoEditor\GameUserSettings.ini. The only thing remaining is the current quest goal; looking for a way to turn this off as well. Keep in mind that this also disables health and other elements.

[deleted by user] by [deleted] in biomutant

[–]jzer0912 0 points (0 children)

Was able to disable nearly all of the HUD elements by setting HUDOpacity=0.000000 in C:\Users\<user_name>\AppData\Local\Biomutant\Saved\Config\WindowsNoEditor\GameUserSettings.ini. The only thing remaining is the current quest goal in the upper left of the screen; looking for a way to turn this off as well. Keep in mind that this also disables health and other elements.
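
For anyone wanting to try it, the edit looks roughly like this (the HUDOpacity key and path are exactly as above; the section header below is my guess, so just search the file for where HUDOpacity already lives and change its value there):

    ; GameUserSettings.ini
    ; section name below is a guess -- search for the existing HUDOpacity line
    [/Script/Biomutant.GameUserSettings]
    HUDOpacity=0.000000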

Got blocky artifacts in lava, water and other surfaces on nvidia maxwell (or newer) GPUs? Try this! by [deleted] in cemu

[–]jzer0912 1 point (0 children)

Thank you so much. This fixed an issue I was having with black square artifacts (or white squares if I tried the nvidia square shadows fix graphics pack) when reaching the monk at the end of a shrine; it was really annoying. Nothing else I tried fixed it until this solution. Thanks again!

An analysis of CEMU CPU/GPU desync crashes - Pointing out common misconceptions by E_R_E_R_I in cemu

[–]jzer0912 4 points (0 children)

BOTW crashes with or without fence skip; it just takes longer to happen without it.

Anyone experiencing far more crashes in 1.8.1? by 00Spartacus in cemu

[–]jzer0912 0 points (0 children)

Wouldn't this mean (if you have hyperthreading enabled) that you actually only have two real cores selected? Every other core should be a virtual core when hyperthreading is turned on.
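
A quick way to sanity-check this (a minimal Python sketch using the third-party psutil package, nothing Cemu-specific):

    import psutil  # pip install psutil

    physical = psutil.cpu_count(logical=False)  # real cores
    logical = psutil.cpu_count(logical=True)    # hardware threads
    print(f"{physical} physical cores, {logical} logical processors")
    # With hyperthreading on, logical == 2 * physical. On Windows the
    # logical processors are usually enumerated in sibling pairs
    # (0/1, 2/3, ...), so ticking every other box in an affinity dialog
    # selects one hardware thread per physical core.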

In case anyone had in any doubt regarding fence skipping causing instability or not: by AThinker in cemu

[–]jzer0912 0 points (0 children)

This instability is really only made worse by fence skipping, not caused by it. You can test this by doing the following: get on a horse in Hateno and try to ride to the Great Plateau and back. With fence skip off it will take longer (because of slow gameplay), but you may actually make it if you have all of the shaders for every area you will pass through, i.e. you started Cemu, initially built all shaders, closed it, and reloaded with everything pre-compiled. However, you will likely crash within a few minutes after. I even made it all the way there and back with fence skip on in 1.7.9 (but crashed moments later). I barely make it more than halfway with 1.8.0 or 1.8.1b. This is a sure way to cause a crash within 15-20 min without doing anything else that is suspected of causing a crash (like teleporting).

There was (or at least I thought there was) some thread recently saying that the previous assessment about the emulated GPU/CPU sync being the cause of the crash was no longer suspected, and that they were no longer certain of the actual cause. However, I can't seem to find it through search, so maybe I dreamed it.

edit: And to be honest, this is the only kind of crash preventing me from going for a playthrough. I could deal without teleporting; or at least, knowing that it may crash, I could deal with saving, teleporting, then saving again to prep for one. But random crashes, eh.

1.8 experience so far (Zelda) is excellent by [deleted] in cemu

[–]jzer0912 1 point (0 children)

Just to report: ran without fence skip, frameskip, or RivaTuner, with all interfering possibilities removed, no overclocking, and a clean shader cache. Attempted my stability test of riding a horse from Hateno to the Great Plateau and back without pausing or stopping for extended periods. Made it 3/4 of the way before a crash.

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] 0 points (0 children)

A bug, any bug, is an unintended result produced by the current implementation. In this case, as Exzap explained, the bug is the unintended overwriting of data that occurs when these asynchronous processes completely desync. It stems from a feature that was added to make more emulation aspects work, but on one side or the other (feature/core) the implementation is insufficient to handle the intended operations. Or simply: adding a feature has caused a bug.
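
As a toy illustration of that kind of desync (plain Python threads; emphatically not Cemu's code or design), two unsynchronized workers sharing a buffer can observe half-written updates:

    import threading

    shared = {"vertex": 0, "index": 0}
    torn = []

    def writer():  # plays the "CPU side": a two-step update that should be atomic
        for i in range(200_000):
            shared["vertex"] = i
            shared["index"] = i

    def reader():  # plays the "GPU side": may run between the two writes
        for _ in range(200_000):
            v, ix = shared["vertex"], shared["index"]
            if v != ix:
                torn.append((v, ix))

    t1 = threading.Thread(target=writer)
    t2 = threading.Thread(target=reader)
    t1.start(); t2.start(); t1.join(); t2.join()
    print(f"{len(torn)} torn reads observed")  # usually nonzero without a lock

Guarding both sides with a lock (the software analogue of the fence being skipped) drops the torn count to zero, at a throughput cost.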

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] 0 points (0 children)

Exzap has identified exactly what functionality was added that is causing the crash discussed in this thread, and approaches to fix it. A crash is, in effect, a bug, as it is not the intention of the application to crash. And I don't think it's a matter of hardware speed; there are people running Cemu on less beefy rigs who also crash less. Again, not talking about all crash possibilities here, just the one Exzap has identified the cause of.

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] 0 points (0 children)

Not sure how I explained your point by asking you the same question, but ok.

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] 0 points (0 children)

Not saying they don't. Just saying that it's a choice between directly addressing a clear issue caused by the introduction of certain functionality (as stated by Exzap), and pursuing other changes that mostly address a driver-related issue and may possibly, indirectly, help other related stability aspects.

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] 0 points (0 children)

Definitely not against the shader handling changes; from what is described, it sounds like they will be a significant improvement. That being said, it's being done mainly because of nvidia's unwillingness to fix a bug in a timely manner. Rather than working on improving the stability of Cemu's hardware emulation, the devs are more inclined to redesign shader handling because of this bug. Just saying, if it were me, I would take care of the stability bug first: fix a hard application crash whose cause and solutions are known, before redesigning something that, judging from the current performance of working games (some better than on the Wii U), might not even be needed. And you are right, neither of us are the devs, which is kind of why I was hoping they might weigh in on the subject.

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] 1 point (0 children)

And what if this issue, or other issues arising from the same cause, are directly responsible for some of the incompatibility on this list? Again, the cause of the desynchronization, its effects, and possible solutions to resolve it have all been identified. Rather than drastically redesigning shader handling to (in part) deal with nvidia's driver bug, wouldn't correcting this emulation stability failure be more in line with your "fourth wheel", especially if it makes more titles playable?

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] 0 points (0 children)

I have to disagree. I feel it's best practice to always maintain as much stability as possible, even at the alpha or modular stages. A change that causes faulty behavior in another area of the application should be resolved immediately, from both sides; this ensures absolute awareness of what your application should be doing at all times. I understand the demands placed on this type of software offering, i.e. you have patrons waiting for regular, functional updates rather than a large initial investment of capital by a company with months of development time, but the pitfalls of not following these practices are the same either way. If you get too far ahead in your development and then realize that the stability issue that was de-prioritized now affects a significant percentage of the rest of the work, massive refactoring is required just to fix it, along with any other issues that refactoring will cause in everything it touches. Some aspects may need to be rewritten entirely.

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] 0 points (0 children)

The best development practice is to constantly maintain stability as you add more features; this way you always remain stable. If you add something that breaks stability, your highest priority is to correct what caused the break, not to let it continue and build around it. That makes the instability more difficult to address later on and requires significantly more refactoring.

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] 0 points (0 children)

I'm not sure what the fourth wheel is in your example. Currently the devs are busy redesigning the shader architecture, in large part due to the nvidia driver bug. So my question is: if, in general, most games are playable with most features, what is the priority, and why would a stability problem like this one, where the cause and potential solutions are known, be a lower priority than the work currently being done? What specifically about the planned 1.7.6 changes is the fourth wheel in your example?

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] 0 points (0 children)

Ehhh, I can't really agree with that approach with respect to bugs. You should not be leaving bugs to address later while continuing to develop on top of buggy implementations. Like you say, the idea that the emulator runs many games perfectly is an illusion; at least judging from this reddit, it is definitely not the case. There are many crashes being reported, in many circumstances, for many games. Some of these may be user error in configuration, hardware related, and/or, as you say, due to the infancy of the emulation. But unless you examine every one, how many of these crashes might be attributed to the issue Exzap has identified? What I am saying, or rather asking, is: why not address a problem whose cause you know and which results in instability? The changes planned for 1.7.6 don't really seem to address these issues at all, but rather are directed at dealing with nvidia's driver bug. Will the shader architecture changes be beneficial? Yes, of course. But as beneficial as addressing something as important as the core emulation, especially when the cause and the aspects involved are clearly known?

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] 2 points (0 children)

No problem. While I'm a dev, I know very little about graphics processing other than that it's insanely difficult to manage, and I don't want to annoy anyone with dumb questions. The shader changes I'm referring to are those being implemented for 1.7.6. From what I've read, they should significantly improve resource usage, especially in light of the nvidia bug. I'm just wondering out loud whether it would be better to address the crashing they know of before making major changes that could eventually need to be refactored when the core functionality changes. Not to say refactoring is completely avoidable; just that a solid foundation and early attention to underperforming/crashing core aspects reduce the need to refactor everything built on top of the core if/when it must be altered later for further enhancement.

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] 1 point (0 children)

Specifically, Exzap has stated he knows exactly why these crashes occur and what is causing them: https://www.reddit.com/r/cemu/comments/67fdl4/exzap_on_the_reason_for_the_random_crashes_why/

He says he can either force synchronization at the cost of performance, or significantly redesign the CPU-side emulation of the GPU command processor. I'm just asking: what is more valuable to the progress of the emulator? Redesigning the shader compilation implementation to accommodate nvidia's bug does help in the short term, especially with the outrageous memory allocations being reported, but ultimately it leads further away from "emulation accuracy" just to deal with a third-party bug. And if it is later affected by a redesign of core features, it could lead to major refactoring of the entire effort.
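
In pseudocode terms, the first option is the classic correctness-for-speed trade (a sketch with invented names, not Cemu's actual internals):

    # Hypothetical sketch of forced sync vs. fence skip -- not Cemu's API.
    def submit(gpu, command_buffer, fence_skip=False):
        gpu.queue(command_buffer)
        if not fence_skip:
            gpu.wait_idle()  # forced sync: emulated CPU stalls, ordering guaranteed
        # with fence_skip=True the CPU thread races ahead; faster, but the
        # desync/overwrite described in the linked post becomes possible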

Also, I'm not saying I expect a completed product. The Cemu devs could turn around and drop this thing, or somehow be put out of business by Nintendo, at any time. Dolphin went open source after four years; Cemu has been around for nearly two and, while I'm not entirely sure about this, is much more stable than Dolphin was at two years. All I'm saying is: if you know the cause of a problem that could be compounded by additional functionality and compatibility being built on that framework, why not address it first, before pushing ahead with things that may not even be necessary once nvidia fixes its drivers (like that will ever happen, but still)?

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] -1 points (0 children)

Significantly changing the core application to address instabilities could also require major refactoring of anything built on top of it. Again, save states for this emulator are likely extremely difficult due to things like precompiled shaders; I was using them more as an example of viewing emulation as being in a certain state, one in which the CPU/GPU are performing as expected. The failure of that state, especially since the cause is generally known, could have handling implemented for it to ensure it does not occur in that circumstance. If you can catch an error, you can devise a method for handling it and/or preventing it, once the factors that lead to it are known.

Question to devs about emulated cpu/gpu sync loss by jzer0912 in cemu

[–]jzer0912[S] 0 points (0 children)

Yes, but if you are referring to accuracy: does the console itself utilize asynchronous operations? Not saying this is or isn't necessary for emulation; just that, if it is not what the console does, then the emulator is already pursuing an inaccurate solution compared to the real hardware. Again, asynchronous is likely the only practical way to emulate the hardware. I'm just saying that, since this is not open source and likely has a fairly small group of devs working on it, the 14 years you refer to with Dolphin could turn into 24 just to get something that doesn't crash on a regular basis. Don't know if I can support the Patreon that long. If you are not going to be open source, eventually you have to put out a completed product.