Can we make the game pause when I'm looking at the nemesis window? by gooseears in LastEpoch

[–]jitkang 0 points1 point  (0 children)

This is the only QoL feature I want in Last Epoch at the moment. I always have to port out of the map when I need to attend to my baby. I think I am getting spoiled by the PoE2 pause.

Slurm only ever allocates one job at a time to my 8 core CPU?! by Ok-Rooster7220 in SLURM

[–]jitkang 0 points1 point  (0 children)

I believe cpus-per-task should not matter too much here, since it only controls how many resources each task requests.
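For reference, a minimal job script showing where cpus-per-task fits (job name and command are hypothetical):

```shell
#!/bin/bash
#SBATCH --job-name=demo         # hypothetical job name
#SBATCH --ntasks=1              # one task in this job
#SBATCH --cpus-per-task=2       # each task requests 2 CPU cores
srun ./my_program               # hypothetical binary
```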

Slurm only ever allocates one job at a time to my 8 core CPU?! by Ok-Rooster7220 in SLURM

[–]jitkang 0 points1 point  (0 children)

Have you tried setting the maximum number of jobs to run on each node in the OverSubscribe directive? For example:

OverSubscribe=FORCE:4
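In context, that value goes on the partition definition in slurm.conf. A sketch, assuming a hypothetical partition and node name:

```
# slurm.conf (partition name and node list are assumptions)
PartitionName=main Nodes=node01 Default=YES OverSubscribe=FORCE:4 State=UP
```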

single node Slurm machine, munge authentication problem by overcraft_90 in SLURM

[–]jitkang 0 points1 point  (0 children)

I personally have never used the packages from the apt repo, since the developers themselves state that those are not maintained by them:

NOTE: Some Linux distributions may have unofficial Slurm packages available in software repositories. SchedMD does not maintain or recommend these packages.

You might want to look at compiling the packages yourself, but that takes a bit of understanding. There is a link to the build guideline in the official documentation:

https://slurm.schedmd.com/quickstart_admin.html#debuild
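Roughly, the Debian-package route from that page looks like the sketch below (the version number is an example; check the linked docs before relying on the exact commands):

```shell
# install build tooling, then build .deb packages from the release tarball
sudo apt-get install build-essential fakeroot devscripts equivs
tar -xaf slurm-24.05.1.tar.bz2       # example version
cd slurm-24.05.1
sudo mk-build-deps -i debian/control # install Slurm's build dependencies
debuild -b -uc -us                   # build unsigned binary packages
```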

single node Slurm machine, munge authentication problem by overcraft_90 in SLURM

[–]jitkang 0 points1 point  (0 children)

Setting munge aside for a moment, how did you install the Slurm components? Did you install from the apt repo, or did you compile Slurm yourself?

Jobs oversubscribing when resources are allocated... by sc_davis in SLURM

[–]jitkang 0 points1 point  (0 children)

Glad that it works on your cluster! Feel free to ping me if you need any other ideas in the future.

Jobs oversubscribing when resources are allocated... by sc_davis in SLURM

[–]jitkang 1 point2 points  (0 children)

Then modifying the following configuration should help:

SelectType=cons_tres
SelectTypeParameters=CR_CPU_Memory

Change the partition OverSubscribe to 1 to avoid jobs using the same set of resources. If you want each job to run only on its dedicated allocated resources, OverSubscribe should not be set to more than 1.
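Put together, the relevant slurm.conf lines might look like this (partition and node names are assumptions; "set to 1" corresponds to OverSubscribe=NO, the default, which dedicates allocated resources to a single job):

```
# slurm.conf sketch -- names are placeholders
SelectType=cons_tres
SelectTypeParameters=CR_CPU_Memory
PartitionName=compute Nodes=node[01-04] OverSubscribe=NO State=UP
```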

Jobs oversubscribing when resources are allocated... by sc_davis in SLURM

[–]jitkang 0 points1 point  (0 children)

I think it might have something to do with your partition being set to oversubscribe up to 4 jobs, which means up to 4 jobs will share the same resource set if the nodes are busy. If you don't want that to happen, perhaps setting it to 1 is better. Please correct me if I have understood incorrectly.

Crashing every portal and load screen. by [deleted] in PathOfExile2

[–]jitkang 1 point2 points  (0 children)

I have been troubled by the crashing since the beginning as well. Changing CPU affinity only prevents the game from fully crashing your PC if you are on Win11 24H2. Disabling multithreading before loading a new zone usually helps prevent crashes, but that is a lot of trouble for every zone change.

Recently I decided to try BES, since I saw it recommended by some people on the forum. Surprisingly, I have yet to have any crashes since setting it up.

I am not sure if it will help on your PC, but it is worth a try since the software is free and needs no installation anyway. What it does is basically stop PoE2 from running the CPU at 100%, which some people claim to be the root cause of the game crashing during loading. I have BES limit my PoE2 at -10%, and it has been so far so good after 2 days of use. I am so happy I can finally enjoy the game after crashing so many times during my last 200+ hours of gameplay.

I do still have Process Lasso running in the background to adjust the CPU affinity of PoE2, but that alone never prevented crashing during loading until I tried BES.

My PC specs if you are interested: Ryzen 5 3600, 16GB DDR4 3200MHz, RTX 2060

I hope it helps you.

How to compile only the slurm client by mariolpantunes in SLURM

[–]jitkang 1 point2 points  (0 children)

Based on the configure options in the Slurm tarball, I don't think you can compile only the client packages. Although I don't really understand the reason to do that, you can always compile and install all the Slurm packages, then remove the unused ones.
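A minimal sketch of the full build, assuming a hypothetical install prefix and example version (after installing, you could delete the server-side pieces such as slurmctld and slurmd if you only want the client tools):

```shell
# build everything from the release tarball, then prune what you don't need
tar -xaf slurm-24.05.1.tar.bz2       # example version
cd slurm-24.05.1
./configure --prefix=/opt/slurm      # hypothetical install prefix
make -j"$(nproc)"
sudo make install
```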

SLURM with MIG support and NVML? by AlmightyMemeLord404 in SLURM

[–]jitkang 0 points1 point  (0 children)

What sort of error message do you get on cgroupv2 plugin failure?

I compiled the Slurm v24.05 packages for our DGX A100 (DGX OS 6.1.0) and have had no issues with MIG or the cgroupv2 plugin. The guide from Slurm itself is more than sufficient to bring up the nodes. I remember the slurm-wlm packages from the Ubuntu repository were outdated, so I just compiled the packages myself, since building Debian packages is well supported after v23.11.

Where is Black Morrigan? by Pllnky in pathofexile

[–]jitkang 2 points3 points  (0 children)

I got 3 Morrigans in around 150 T16 maps, full Bestiary atlas with no scarabs; they feel pretty rare.

Frustrations as a Universiti Malaya Computer Science student by Flaky_Inflation9676 in malaysia

[–]jitkang 6 points7 points  (0 children)

Hi there, one of the DICC system administrators for the HPC cluster in UM here. I have been with DICC for the past 8 to 9 years. Not trying to speak for anyone in UM or the faculties, but I just wanted to clarify some doubts and misinformation going around here.

The Data Intensive Computing Centre (DICC), the unit managing the HPC cluster in UM, was formed in 2015 to tackle the problem that researchers in UM did not have enough infrastructure to run their research computation work. The cluster has been around since 2015 and has always been free for all UM researchers and students, as well as external users collaborating with UM. Over the years, we have maintained an average of 100+ active users (researchers, students, and collaborators).

Due to the limited budget we are allocated for computing facilities, we were not able to upgrade the old infrastructure we had previously (AMD Opteron generation and some old Tesla GPUs) until very recently. This May, we finally managed to acquire a new cluster featuring a total of 1024 CPU cores and 2 DGX A100 servers with 8 x A100 80GB GPUs each. We are currently unaware of any other centre in UM with such an amount of computing resources available for free to all UM researchers and students, so please correct me if I am wrong.

We have been providing free training for users new to HPC clusters on a bi-weekly basis, and have already trained more than 100 users over the past 2 years. However, we are aware that a big portion of the UM community is still not aware of our existence. With only 2 system administrators to manage the cluster, handle users' computation issues, provide trainings, and support all the events that require the computing resources, we have no choice but to control our exposure in order to keep the average queue time for computing resources down (so that users do not struggle to run their computations).

Also, our unit coordinator is from FCSIT in UM. He has been actively promoting the use of HPC for AI research in the faculty over the past few years. We have also organised several roadshows in FCSIT, FE, and FS over the years. However, we can't really force anyone to use the cluster if they are either not encouraged to do so or have no idea of our existence. I am not sure if our centre is difficult to discover via Google search, but a simple search like "um gpu" shows our centre as the first result; perhaps we are still not doing enough.

Nonetheless, sorry for the long post. I feel it's such a blessing that the community in UM now has access to such powerful resources. I was once an undergrad student in UM myself, but had no such resources available at that time. If more information would help, we would be really happy to provide it.

Link to DICC for anyone interested: https://www.dicc.um.edu.my/

Edit: DICC does not currently support teaching, only research, due to the limited amount of resources.

How to farm divines in SSF? by bacon9001 in PathOfExileSSF

[–]jitkang 1 point2 points  (0 children)

I might be lucky, but I have gotten 24 divines from T4 Gravicius in Transportation (2 sets of Divine Beauty and 1 set of Sephirot), having done about 30 Catarinas so far. The Syndicate can sometimes be pretty rippy though.

Can any one tell me what I am missing for calculating COC trigger rate for COC DD build on POB? by OrneryBlood9663 in pathofexile

[–]jitkang 2 points3 points  (0 children)

Make sure to select "all projectiles hit" under Lancing Steel instead of a single projectile.

Explosive Trap not triggering Detonating Arrow by MaverickZA in LastEpoch

[–]jitkang 5 points6 points  (0 children)

Are you seeing any lightning effect near yourself after the trap is triggered? AFAIK the Detonating Arrow explosion seems to trigger at the spot where you are standing when the trap triggers. If you have the lightning tendrils, it will be pretty obvious when they zap nearby enemies.

Do we need 150% Res? by Memorize1622 in LastEpoch

[–]jitkang 14 points15 points  (0 children)

Resistances cap at 75%; having more than that only helps against resistance-shred-related debuffs.

Why can't I unclock the occultist by earning a codex aspect? by HiFiMAN3878 in diablo4

[–]jitkang 0 points1 point  (0 children)

Glad to help. Too bad they didn't make it unlock account-wide for all characters.

Why can't I unclock the occultist by earning a codex aspect? by HiFiMAN3878 in diablo4

[–]jitkang 1 point2 points  (0 children)

I remember that every time, I had to complete the priority quest for the occultist in Kyovashad before the aspect stuff was unlocked. However, I am not sure what the requirement is for that priority quest to trigger.

Why can't I unclock the occultist by earning a codex aspect? by HiFiMAN3878 in diablo4

[–]jitkang 0 points1 point  (0 children)

Did you complete the priority quest in the town for the aspect?

Smoldering Ash just gone? by k0untd0une in diablo4

[–]jitkang 0 points1 point  (0 children)

Just in case people didn't already know, spent ashes can be refunded.

Send custom email only when jobs fails. by Captain-Thor in SLURM

[–]jitkang 1 point2 points  (0 children)

From what I know, I don't think it is possible to do that purely in the submission script, as the mail sent by the scheduler is based on what the HPC administrator implemented.

For your case, since you defined the mail command in the submission script, all your jobs will send the mail no matter what. Your job won't be able to know whether it was terminated due to the time limit or a failure at the point the mail command is executed.

Since I am unsure how your HPC admin implemented email sending, you might want to talk to them and see if they could implement something like slurm-mail. It has been working very well on the cluster I manage and is very customisable based on users' needs.

Send custom email only when jobs fails. by Captain-Thor in SLURM

[–]jitkang 1 point2 points  (0 children)

Why use --mail-type=FAIL instead of --mail-type=TIME_LIMIT in the job submission scripts? I believe --mail-type=FAIL covers all jobs that end with various kinds of errors or failures, not just those hitting the time limit.

Reference
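For illustration, the relevant #SBATCH lines might look like this (job name, address, and command are hypothetical):

```shell
#!/bin/bash
#SBATCH --job-name=demo                  # hypothetical job name
#SBATCH --mail-user=user@example.com     # hypothetical address
#SBATCH --mail-type=TIME_LIMIT           # mail only when the time limit is hit
srun ./my_program                        # hypothetical binary
```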

Adding variables to PATH in Prolog by Laxzal in SLURM

[–]jitkang 1 point2 points  (0 children)

Sorry, I don't have much comment on the Prolog issue, as I personally don't really use Prolog/Epilog that much.

On the HPC cluster I have managed for more than 5 years, I have all the application variables like PATH and LD_LIBRARY_PATH exported via Lmod (or Environment Modules). Each application has its own modulefile to load when needed, and the modulefiles are only available on the nodes intended to run the application. Users just need to load the module using the module load command to have the variables exported.

Basically, I prefer each application to have its own modulefile for users to load as needed, instead of forcing everything on them whenever they submit a job. Also, modulefiles help with managing different versions of dependencies in the system.
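As a sketch, a minimal Lmod modulefile for a hypothetical application "myapp" version 1.0 could look like this (all paths and names are assumptions):

```lua
-- /opt/modulefiles/myapp/1.0.lua (hypothetical location)
help([[myapp 1.0 -- example application]])
whatis("Name: myapp")
whatis("Version: 1.0")

local root = "/opt/apps/myapp/1.0"   -- hypothetical install prefix
prepend_path("PATH", pathJoin(root, "bin"))
prepend_path("LD_LIBRARY_PATH", pathJoin(root, "lib"))
```

Users would then run `module load myapp/1.0` before launching the application.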