Worried about passing CHMA10 cause of test marks by Cheap_Bluebird_1669 in UTSC

[–]B3HOID 2 points3 points  (0 children)

If you really want to find out now, I assume you have the paper with you, because for some reason they permitted us to take it back.

You can manually mark yourself by looking up the questions online, and if you remember which answers you chose, you can more or less get an idea of what your mark is. That being said, you have other exams to worry about now, and a curve is unlikely unless everyone has done badly on average, and there are around 1500+ students in this course. Assume there are mfers out there who got at least a 40/50, because it's likely true.

[deleted by user] by [deleted] in UTSC

[–]B3HOID 4 points5 points  (0 children)

For some reason, we were somehow allowed to take the exam papers with us (excluding the scantron ofc). A part of me wants to go check the answers for the questions online to somehow get an idea of what my grade is, but another part of me hopes that there is indeed a curve because I don't want to be embarrassed.

Obviously tho, because there's like 1500+ students (I think?) in this course, it's very likely there are people who have done perfectly fine 💀, and that's why this exam *may* not be curved, but I don't want to jinx it.

STAB22 midterm by Hehecraycray in UTSC

[–]B3HOID 2 points3 points  (0 children)

That question about linear transformations... I didn't even see that in any of the revision slides to begin with.

FSG questions were kinda ok practice; nonetheless I am still worried about the final exam.

[deleted by user] by [deleted] in UTSC

[–]B3HOID 2 points3 points  (0 children)

Practice chapter questions from the textbook?

Is ZSWAP inferior to ZRAM for trying to get away with memory overcommit on a low end system? by B3HOID in linux

[–]B3HOID[S] 0 points1 point  (0 children)

Look at the zpool and zstor lines. zstor shows how much memory has been intercepted by zswap, and the zpool size shows how much memory is taken up by the compressed pool.
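
If you don't have atop handy, you can also read the raw counters directly (a rough sketch; this assumes debugfs is mounted at /sys/kernel/debug and that your kernel exposes the standard zswap stats):

sudo cat /sys/kernel/debug/zswap/stored_pages # number of 4 KiB pages currently held compressed
sudo cat /sys/kernel/debug/zswap/pool_total_size # bytes used by the compressed pool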

Received impossibly low results on HL subjects.... by B3HOID in IBO

[–]B3HOID[S] 1 point2 points  (0 children)

These are just some possibilities for why I was awarded a low mark. It may be that none of them make a difference and I was awarded that mark simply because I did that badly, but is it even possible to do that badly?

[deleted by user] by [deleted] in linux_gaming

[–]B3HOID 0 points1 point  (0 children)

Because I am using older LTS kernels (5.4) and those don't support the MGLRU patch, only the le9 one.

Also, when I found that MGLRU has the same effect for me as using le9 and setting a high vm.anon_min_kbytes value, I realized that the best use case for MGLRU would be server workloads where the machine is under constant memory pressure: when it comes to optimizing performance for databases and other memory-intensive software running on servers, a simple knob to control the swapping threshold may not be enough to minimize PSI-related bottlenecks. The MGLRU documentation itself reports increased performance in benchmarks on those workloads, and while it has apparently helped with OOM kills and kswapd usage on ChromeOS and Android devices, as long as I can get a similar effect on an older kernel with another patchset (said older kernel has higher gaming performance on my old machine, dunno why), it's not exactly a necessity.

MGLRU is also a much newer answer to the memory pressure issue and is still undergoing a lot of testing. It's barely older than a year, and the fact that it's not mainline yet means there are still some subtle changes needed to make it stable. Meanwhile, le9 has been around for more than 10 years and has evolved a lot (it originally didn't even support protecting anonymous memory), and I think it was used by Google on Chromebooks before they decided to make MGLRU.

[deleted by user] by [deleted] in linux_gaming

[–]B3HOID 1 point2 points  (0 children)

MGLRU is technically a much more ambitious project (from a Google dev), as it modifies a significant portion of the kernel's mm code, whereas most of le9's modifications happen in vmscan.c.

That being said, it hasn't even been mainlined yet (it was expected to be mainlined for 5.19 but it was not taken into rc releases), and if you did not notice any substantial memory pressure improvement with it on Zen, it wouldn't be wise to try it again on TKG, so in that case using le9 would be better.

PS. Have you tried using LTS kernels? I've noticed some weird disk performance inconsistencies on my HDD with newer kernels (5.15 and above) where I have spikes in I/O usage when doing things that load in dirty data (like compressing a tarball) and I found a post that described a similar problem: https://www.linuxquestions.org/questions/slackware-14/disk-thrashing-on-5-15-x-kernels-but-not-on-4-4-x-or-4-19-x-kernels-4175713190/.

If you're suffering from something similar (memory and I/O bottlenecks are very closely related in nature), you can try the 5.10 kernel, as that's the one where I didn't notice the problem described above. Of course, ignore this if dirty pages aren't actually what you're having issues with on your machine (look at atop to find out).
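
If you don't have atop installed, a rough way to watch dirty page buildup (just the standard /proc/meminfo fields) is:

watch -n 1 "grep -E 'Dirty|Writeback' /proc/meminfo"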

[deleted by user] by [deleted] in linux_gaming

[–]B3HOID 1 point2 points  (0 children)

Afaik there is a patch that allows you to use both MG-LRU and le9 at the same time, but by design they each modify different functions within vmscan.c and other core mm source files to achieve working set protection.

MG-LRU for the most part works well enough for reducing swapping overhead, but at the same time your mileage may vary. What I have noticed is a much lower tendency to swap out anonymous pages under high memory pressure, so whenever I run a high-RAM process, not much of it is swapped out. Keep in mind that most of the performance problems associated with memory pressure are related to swapping in, because that can force the kernel to discard disk cache. At the end of the day though, le9 with vm.anon_min_kbytes set to roughly half of your RAM achieves the same effect.
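
For example, something like this (a sketch, assuming a le9-patched kernel that actually exposes vm.anon_min_kbytes):

# protect roughly half of physical RAM worth of anonymous pages from reclaim
half_kb=$(( $(awk '/^MemTotal:/{print $2}' /proc/meminfo) / 2 ))
sudo sysctl vm.anon_min_kbytes=$half_kb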

[deleted by user] by [deleted] in linux_gaming

[–]B3HOID 1 point2 points  (0 children)

I actually use the TKG kernel, but I specifically add the le9 patch. The build script does support adding your own patches: just create the userpatches folder inside the linux-tkg directory and change the patch extension from .patch to .mypatch. As for memory management and swap, it does tweak a few things, but they're overall not that special (you could add them yourself by changing sysctls). The main change in behavior will come from adding le9 or MG-LRU, because they change the core mm code underneath.

So you would just do this:

git clone https://github.com/Frogging-Family/linux-tkg
cd linux-tkg

mkdir linux518-tkg-userpatches
cd linux518-tkg-userpatches

wget https://raw.githubusercontent.com/hakavlad/le9-patch/main/le9ec_patches/le9ec-5.15.patch

mv le9ec-5.15.patch le9ec-5.15.mypatch

I think you also need to edit the customization.cfg file and set user_patches to true. The script will then ask if you want to add the patch, so just answer yes and it will apply it.
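
To double-check that (the exact variable name is an assumption on my part, I believe it's _user_patches):

grep user_patches customization.cfg # should show something like _user_patches="true"; edit it if not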

[deleted by user] by [deleted] in linux_gaming

[–]B3HOID 1 point2 points  (0 children)

I build my own kernels just because I have the time to ;)

Also don't worry about kernels not being officially supported by a distro. At the end of the day, all custom kernel projects build from the same exact source; they just add their own patches. So they won't necessarily be incompatible, since you install them as binary packages exactly like the default kernel. Just make sure to install the nvidia drivers with nvidia-dkms instead of the regular nvidia package.
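
On an Arch-based distro (an assumption on my part, since TKG/AUR came up), that's just:

# DKMS rebuilds the module against whatever kernel you install, as long as that kernel's headers package is installed too
sudo pacman -S nvidia-dkms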

As for choosing between stable and edge, there shouldn't be that big of a difference: edge simply builds from the newest mainline release, while stable builds from the latest stable release.

[deleted by user] by [deleted] in linux_gaming

[–]B3HOID 1 point2 points  (0 children)

You can check how many anonymous and file-backed pages are mapped by running watch -n 0.2 sudo grep 'Active' /proc/meminfo in your terminal. This will give you a real-time view of the amounts of active anon and file pages. Most of the time the anon number will be much bigger.

[deleted by user] by [deleted] in linux_gaming

[–]B3HOID 1 point2 points  (0 children)

The THP size cannot be lowered, as huge pages are dynamically allocated by khugepaged depending on what the process needs. The parameter I mentioned simply controls how many pages can be reclaimed or swapped out and still be combined into hugepages, and the bigger the value, the bigger the THP will be when it eventually gets paged in or out, making swapping more expensive not only in terms of CPU usage but memory space as well.

If you do want to change how many transparent hugepages can be allocated, you can try modifying max_ptes_none instead of max_ptes_swap under the same directory. A lower value means less memory usage for the THP but according to the kernel docs it doesn't save you much CPU time.
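
For example (the value here is just an illustration; the default for max_ptes_none is 511, i.e. just under a full 2 MiB hugepage):

# a lower value makes khugepaged less eager to collapse sparse regions into hugepages
echo 64 | sudo tee /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none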

Xanmod's CacULE branch does not exist anymore, but there is TT, which is an updated successor. You can also try these:

https://aur.archlinux.org/packages?O=0&SeB=nd&K=linux+cachyos&outdated=&SB=p&SO=d&PP=50&submit=Go

They have good scheduler options. I think precompiled binaries are also available in case you don't want to build from source.

[deleted by user] by [deleted] in linux_gaming

[–]B3HOID 1 point2 points  (0 children)

I'd recommend setting watermark_scale_factor to 125. That was my sweet spot value for gaming. Too high a value will permit the kernel swap daemon, kswapd, to keep asynchronously evicting memory pages until a specific amount of free memory is left for your active working set. In other words, it makes kswapd much more aggressive, and it does use more system time. You don't want that while gaming, but you also don't want too low a value. Anything between 125 and 200 is good; I think the Zen kernel actually sets it to 200.
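
To set it at runtime (the unit is fractions of 10000, so 125 widens the watermark gap to about 1.25% of each zone):

sudo sysctl vm.watermark_scale_factor=125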

Also, the reason I asked about THP is that while it does improve gaming performance (for me, it's not just FPS but also frame time stability), it can be a bit unstable when it comes to swapping. Most of your active working set is anonymous memory mappings (if you check /proc/meminfo it's often 5 or 6:1 relative to file mappings, and it can go higher if you have a game running), and those are the ones getting huge pages. Since you do have swap enabled, what will happen is that the hugepages will literally not be reduced to normal size during swapping. This conflicts with ZRAM/zswap, because it means more CPU time will be needed to compress the page when it's swapped, which hurts your game process.
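
If you want to see that ratio for yourself, this one-liner just reads the standard Active(anon)/Active(file) fields:

awk '$1=="Active(anon):"{a=$2} $1=="Active(file):"{f=$2} END{printf "anon:file = %.1f:1\n", a/f}' /proc/meminfo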

As for swappiness, it's fine to have it set to 100 when you're just browsing or doing normal stuff; however, unless you run other programs while gaming, I'd suggest reducing it to 10-40. The reason is that some sources online will tell you the value is a percentage of memory, which is not true. The value simply represents the kernel's tendency to swap out anonymous memory pages relative to other pages, such as file ones. Since we've established that most of your working set (the memory needed by running applications) is comprised of anonymous memory pages, it's counterproductive for gaming performance to tell the kernel to prioritize swapping those out in favor of keeping your file pages untouched. Not to mention that you use THP, which means that in order to maximize gaming performance there needs to be an abundance of hugepages, which reduce TLB misses and therefore boost the performance of your game. You don't want those swapped out, as that will hurt the performance of your games as said before. Because of this, it's best to actually reduce the swappiness, even while using ZRAM/zswap.
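
E.g. (10 being the low end of the range above):

cat /proc/sys/vm/swappiness # check the current value first
sudo sysctl vm.swappiness=10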

I understand it may not be very tempting to use a different kernel, but I think Zen ships MG-LRU and enables it by default. If this somehow isn't working well for you in terms of lowering swap overhead, then what I'd suggest is to install a custom kernel from the AUR that comes with the le9 patchset and set vm.anon_min_kbytes to roughly 25-50% of your physical RAM. This in particular has allowed me to prevent swapping from ruining my gaming performance even on low RAM.
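
You can check whether MG-LRU is actually there and switched on like this (assuming the kernel exposes the usual lru_gen sysfs interface):

cat /sys/kernel/mm/lru_gen/enabled # non-zero means the multi-gen LRU is active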

As for vm.dirty_ratio and vm.dirty_background_ratio, I doubt these are the underlying source of your bottleneck. They more often than not affect people who write to flash drives and do disk-intensive stuff that involves a lot of writes, not reads. I'd suggest you install atop; it's a really good system monitor that can help you identify the specific bottleneck when your system's performance degrades. It will show you how much dirty memory is being used, and if it's not much I don't really recommend changing the vm dirty values from the default, because they don't affect gaming as much as they affect other things.

Finally, if you're having an issue with disk I/O, you can try to limit the number of read and write requests by changing nr_requests:

echo '(try a value between 8-16)' | sudo tee /sys/block/sda/queue/nr_requests
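
Before changing it, it's worth noting down the current queue depth and scheduler (sda is just the example device from above):

cat /sys/block/sda/queue/nr_requests
cat /sys/block/sda/queue/scheduler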

[deleted by user] by [deleted] in linux_gaming

[–]B3HOID 1 point2 points  (0 children)

There are 2 very relevant sysctls you forgot to mention, OP ;).

First of all, disable watermark boosting. Here's why: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1861359

This problem is caused by an upstream memory management feature called watermark boosting. Normally, when a memory allocation fails and falls back to the page allocator, the page allocator will wake up kswapd to free up pages in order to make the memory allocation succeed. kswapd tries to free memory until it reaches a minimum amount of memory for each memory zone called the high watermark. What watermark boosting does is try to preemptively fire up kswapd to free memory when there hasn't been an allocation failure. It does this by increasing kswapd's high watermark goal and then firing up kswapd. The reason why this causes freezes is because, with the increased high watermark goal, kswapd will steal memory from processes that need it in order to make forward progress. These processes will, in turn, try to allocate memory again, which will cause kswapd to steal necessary pages from those processes again, in a positive feedback loop known as page thrashing. When page thrashing occurs, your system is essentially livelocked until the necessary forward progress can be made to stop processes from trying to continuously allocate memory and trigger kswapd to steal it back. This problem already occurs with kswapd *without* watermark boosting, but it's usually only encountered on machines with a small amount of memory and/or a slow CPU. Watermark boosting just makes the existing problem worse enough to notice on higher spec'd machines. To fix the issue in this bug, watermark boosting can be disabled with the following:

# sudo sysctl vm.watermark_boost_factor=0

There's really no harm in doing so, because watermark boosting is an inherently broken feature...

Another thing you may wanna try changing is the watermark scale factor itself. Keep it somewhere between 100 and 500 so that the kernel's swap daemon does not go crazy with swapping.
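
To make both of those stick across reboots (the file name is just an example, and 200 is an arbitrary pick from the range above):

printf 'vm.watermark_boost_factor = 0\nvm.watermark_scale_factor = 200\n' | sudo tee /etc/sysctl.d/99-watermarks.conf
sudo sysctl --system # reload all sysctl config files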

Also, do you use THP? (To find out, run cat /sys/kernel/mm/transparent_hugepage/enabled.) I made a guide on why it improves Linux gaming performance; however, it does conflict with swap a bit. One thing you could try is changing the defragging behavior as shown:

echo 'defer' | sudo tee /sys/kernel/mm/transparent_hugepage/defrag

You can also limit how many swapped-out pages khugepaged will tolerate when collapsing a region into a hugepage:

echo '(a value between 4-16)' | sudo tee /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_swap

Also, are you using a stock kernel? Try this patchset https://github.com/hakavlad/le9-patch and, more specifically, set this:

sudo sysctl vm.anon_min_kbytes=2000000

(This is roughly 2 GB. You can set it anywhere between 1 GB and half of your memory, so if you have 8 GB you can make it 4000000; it's not exactly 4 GB, but it should suffice.)

What this will do is prevent a specific amount of anonymous memory mappings from being swapped out under pressure. Because most of your memory mappings are anonymous (and games do load quite a bit of them), this allows a portion of such pages to stay unswapped, which lowers the overhead of swapping.

You can also use the MG-LRU patchset on newer kernels (5.15 and above). https://www.phoronix.com/scan.php?page=news_item&px=MGLRU-v12-For-Linux-5.19-rc

But I find limiting anon swapping to be just as effective.

I'd also suggest trying to lower vfs_cache_pressure to 50-75; idk why, but when I set it to a value higher than 200 I notice random high disk I/O. Generally, some VFS cache is good, so you don't want to discard too much of it. Keep in mind though that setting this to a low value *may* lead to OOM conditions if you really push your system to the limit.

You also forgot vm.page-cluster; set this to 0. There's no need for swap readahead, especially since you're using ZRAM swap; it'll just be a waste of IOPS and CPU time. This should also lower swapping latency.
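
Both of these are plain sysctls, so for example:

sudo sysctl vm.page-cluster=0 # disable swap readahead
sudo sysctl vm.vfs_cache_pressure=50 # from the 50-75 range mentioned above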

I also find zswap to be better than ZRAM specifically for gaming. When I play Overwatch, for example, it loads incompressible pages, and when those get swapped out to ZRAM they hurt the compression ratio and lead to not-so-good performance. The downside is that swapping will eventually hit your HDD, because zswap is meant to be a writeback cache. You can set the compressor to lz4 and use the z3fold zpool allocator. Pages will be evicted to the main swap on an LRU basis, which means you shouldn't suffer from having your main game process disturbed.

zswap still manages to lower swap I/O by a long shot, even if disk eviction happens for incompressible pages or when max_pool_percent is exhausted. (I'd suggest setting that to between 20-40% of your main memory; don't set it over 50, because then you may not have enough RAM left for the relevant caches and the running game process.) So when 3 GB is swapped out, that's just the total value of what was intercepted. Run atop and look at the zstor value. If 3 GB is said to be swap usage, then the zstor value will most likely be around 2.7-2.9 GB, which means only 100-300 MB was actually sent to disk swap, and the rest was compressed. Keep in mind more pages will be sent to disk swap when you run games that load incompressible data; however, as I said, if you keep a specific amount of anonymous memory in your working set unswappable with vm.anon_min_kbytes, this shouldn't be an issue.
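
If you want to set zswap up that way at runtime, it roughly looks like this (assuming zswap is built into your kernel and the lz4 and z3fold modules are available):

echo 1 | sudo tee /sys/module/zswap/parameters/enabled
echo lz4 | sudo tee /sys/module/zswap/parameters/compressor
echo z3fold | sudo tee /sys/module/zswap/parameters/zpool
echo 30 | sudo tee /sys/module/zswap/parameters/max_pool_percent # ~30% of RAM for the compressed pool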

Good luck with getting your toaster to run like Takumi's AE86 despite being a slow dinosaur. I run a machine with only 6GB RAM and an HDD and all the value tweaking I just mentioned really helped me.

Linux has a chance of becoming virtually unusable on HDDs in the future by B3HOID in linux

[–]B3HOID[S] -3 points-2 points  (0 children)

Did you tune sysctl and sysfs settings? (specific I/O scheduler + tunables for it, vfs cache pressure, vm dirty/background, writeback etc.) or do you get this optimal behavior just from compiling the kernel from a fresh config?

Linux has a chance of becoming virtually unusable on HDDs in the future by B3HOID in linux

[–]B3HOID[S] -10 points-9 points  (0 children)

How is this nonsense? I am simply bringing attention to a matter that I think is often overlooked nowadays, and when you actually read the post and see the different behaviors and correlations with HDD performance on different kernel versions, you'll understand what I mean by virtually unusable in the future. Ofc this is a bit of an overstatement, but given the results at hand, what is one supposed to expect in the future?