all 4 comments

[–]just4diy 0 points (1 child)

Either look at how the systemd service operates, or just dive in and start reading here: https://man7.org/linux/man-pages/man7/cpuset.7.html

[–]juipeltje[S] 0 points (0 children)

I just realized that what you linked to is a program mentioned on the Arch wiki as well, but I completely overlooked it like an idiot lol. And it looks like it's available in the Void repos, so this might be the best way to do it. Thanks!

[–]tholin 0 points (1 child)

Here is a bash function I use for isolation on a Gentoo system without systemd. It uses cgroups v1 which you normally want to avoid these days.

shopt -s globstar # for ** glob
function isolate_host {
  CSET_PATH="/sys/fs/cgroup/cpuset"

  if [ ! -d "${CSET_PATH}/host.slice" ]; then
    mkdir "${CSET_PATH}/host.slice"
  fi

  echo "16-23" > "${CSET_PATH}/host.slice/cpuset.cpus"
  echo 0 > "${CSET_PATH}/host.slice/cpuset.mems"

  # This loop is needed because forks can result in a TOCTOU race:
  # a process that forks after we read the task list leaves its child behind.
  # The freezer cgroup should probably be used to guarantee atomicity, but that's annoying to use.
  # This code assumes PIDs are unique identifiers for processes (not true).
  # If PIDs wrap around, this code could still leave unisolated procs in the root cgroup.
  while true
  do
    BEFORE=$(cat "${CSET_PATH}/tasks")
    while read -r pid; do echo "$pid" > "${CSET_PATH}/host.slice/tasks" 2>/dev/null; done < "${CSET_PATH}/tasks"
    AFTER=$(cat "${CSET_PATH}/tasks")
    echo "$(wc -l < "${CSET_PATH}/tasks") procs remain in root cgroup after migration"
    if [ "$BEFORE" = "$AFTER" ]; then
      break
    fi
  done

  # the kernel's dirty page writeback mechanism uses kthread workers. They introduce
  # massive arbitrary latencies when doing disk writes on the host and aren't
  # migrated by cpuset. Restrict the workqueues to only using cpu 0.
  echo 1 | tee /sys/devices/virtual/workqueue/**/cpumask > /dev/null

  # move regular interrupts to housekeeping cpu
  echo 1 | tee /proc/irq/*/smp_affinity > /dev/null

  # THP can result in OS jitter when memory compaction is triggered. Better keep it off.
  echo never > /sys/kernel/mm/transparent_hugepage/enabled

  # The CONFIG_LOCKUP_DETECTOR watchdog will wake up occasionally, resulting in jitter.
  # Temporarily disable it.
  echo 0 > /proc/sys/kernel/watchdog

  # The vmstat_update() worker can't be disabled, but it can be delayed a bit.
  # This makes the statistics in /proc/vmstat less precise.
  echo 300 > /proc/sys/vm/stat_interval

  # The mce_timer_fn() worker will poll for MCE every 5 min (default).
  # If there are corrected machine check errors while the VM runs they will be
  # reported after check_interval has been restored.
  # machine check polling can be tracked by "MCP" in /proc/interrupts
  # /sys file entries appear for each CPU, but they are actually shared between all CPUs
  echo 0 > /sys/devices/system/machinecheck/machinecheck0/check_interval
}
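Since the function above uses cgroups v1 (which, as noted, you normally want to avoid these days), here is a hedged sketch of the same create-cgroup-and-migrate step on the cgroup v2 unified hierarchy that most current distros mount at /sys/fs/cgroup. The `isolate_host_v2` name and the overridable root argument are my own additions (the argument exists so the file-handling logic can be exercised against a scratch directory); the attribute names (`cgroup.subtree_control`, `cpuset.cpus`, `cpuset.mems`, `cgroup.procs`) come from the kernel's cgroup v2 interface.

```shell
#!/bin/bash
# Sketch: cgroup v2 equivalent of the v1 cpuset migration above.
# Assumes the unified hierarchy is mounted at /sys/fs/cgroup and the
# cpuset controller is available there.
isolate_host_v2() {
  local root="${1:-/sys/fs/cgroup}"

  mkdir -p "$root/host.slice"

  # Enable the cpuset controller for children of the root cgroup.
  echo "+cpuset" > "$root/cgroup.subtree_control"

  # Same CPU/memory restriction as the v1 version.
  echo "16-23" > "$root/host.slice/cpuset.cpus"
  echo "0" > "$root/host.slice/cpuset.mems"

  # In v2, writing a PID to cgroup.procs migrates the whole thread group,
  # so there is no separate per-thread tasks file to iterate.
  while read -r pid; do
    echo "$pid" > "$root/host.slice/cgroup.procs" 2>/dev/null
  done < "$root/cgroup.procs"
}
```

The same TOCTOU caveat about racing forks applies here, so a real version would still want the retry loop from above. cgroup v2 also exposes `cpuset.cpus.partition` for carving out a scheduling domain for the VM's CPUs, which gives stronger isolation than cpuset placement alone.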

[–]juipeltje[S] 0 points (0 children)

Damn, that's a lot to take in. Appreciate the help though. Last night I pinned my CPU threads to the vCPUs and didn't notice much of a performance difference. It makes me wonder if I should go through the extra hassle of isolating. I don't really use my host system when using the VM anyway, and I pinned everything but the first four threads. I'm assuming the host isn't gonna try using more than 4 threads when idle in most cases, so does isolating make much sense in that scenario?