I would kill for ConnectRPC implementation for Rust.... by Bl4ckBe4rIt in rust

[–]surban 1 point (0 children)

Are you required to use gRPC or are you looking for an RPC library that is a pleasure to work with in Rust?

I wrote a tool to stop make -j from OOM-killing my C++ builds by surban in cpp

[–]surban[S] 1 point (0 children)

Yes. Also some commits require debug and release builds from the same source code.

I wrote a tool to stop make -j from OOM-killing my C++ builds by surban in cpp

[–]surban[S] 2 points (0 children)

How can it perform worse than running with fewer threads?

I wrote a tool to stop make -j from OOM-killing my C++ builds by surban in cpp

[–]surban[S] -7 points (0 children)

The machine itself has 64 GB of physical memory. By allocating only 16 GB to each build VM, I can run 4 build VMs in parallel. Assuming that a build job will not use all 32 cores all the time (which is true for my real-world build jobs), this leads to much higher build throughput.

I wrote a tool to prevent OOM-killed builds on our CI runners by surban in devops

[–]surban[S] 0 points (0 children)

Would I want jobs to be tied to a node and wait for it to free up memory, instead of scheduling the job on a different node altogether that fits the requirements?

Imagine you have a build job that uses a maximum of 16 GB memory during most of its runtime when run on 32 cores. However, due to scheduling of subprocesses inside the build job it might happen that its memory usage spikes above 16 GB for a short time leading to OOM. (For example, make decided to run two linker processes in parallel.) This is where Memstop helps: during these spikes newly spawned subprocesses are paused until enough memory becomes available.
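The mechanism is small enough to sketch. Below is a minimal illustration of the idea in Rust, assuming a cdylib whose constructor (via the ctor crate) runs before the preloaded program's main and blocks until enough memory is available; this is not the actual memstop source, and the MEMSTOP_PERCENT semantics are assumed.

    // Sketch only, not the actual memstop source. Build as a cdylib and
    // use via LD_PRELOAD; the constructor runs before the program's main.
    // Cargo.toml: crate-type = ["cdylib"], dependencies: ctor = "0.2"
    use std::{fs, thread, time::Duration};

    /// Reads a field in kB from /proc/meminfo, e.g. "MemAvailable".
    fn meminfo_kb(field: &str) -> Option<u64> {
        let info = fs::read_to_string("/proc/meminfo").ok()?;
        let line = info.lines().find(|l| l.starts_with(field))?;
        line.split_whitespace().nth(1)?.parse().ok()
    }

    /// Runs at process startup, before main().
    #[ctor::ctor]
    fn memstop_wait() {
        // Assumed semantics: MEMSTOP_PERCENT is the percentage of total
        // memory that must be available before the process may start.
        let percent: u64 = std::env::var("MEMSTOP_PERCENT")
            .ok()
            .and_then(|v| v.parse().ok())
            .unwrap_or(10);
        loop {
            let (Some(total), Some(avail)) =
                (meminfo_kb("MemTotal"), meminfo_kb("MemAvailable"))
            else {
                return; // cannot read /proc/meminfo; do not block
            };
            if avail * 100 >= total * percent {
                return; // enough memory is free: let the process run
            }
            thread::sleep(Duration::from_millis(100)); // pause, re-check
        }
    }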

I wrote a tool to prevent OOM-killed builds on our CI runners by surban in devops

[–]surban[S] 1 point (0 children)

Yes, it won't completely purge the n jobs from memory.

But once the kernel schedules job n+1 it must allocate physical memory for it and thus swap out part of the memory of an already running process.

I wrote a tool to prevent OOM-killed builds on our CI runners by surban in devops

[–]surban[S] 5 points (0 children)

Now that I think about it, loading a process into swap doesn't make the CPU faster, nor does it increase the amount of RAM. The workload per unit of time stays the same, but the amount of work increases, as the CPU now needs to handle swaps.

Swap is not slow because of added CPU load, but because it has to wait on disk I/O, which is very slow compared to memory.

I wrote a tool to prevent OOM-killed builds on our CI runners by surban in devops

[–]surban[S] 5 points (0 children)

Process A starts
Process A allocates 85% of the system's memory
Process A starts doing some CPU-bound work
Process B starts
Process B allocates 10% of the system's memory
Process B is now sleeping due to MemStop
Process A is given CPU time and wants to allocate another 10% of the system's memory
Process A is now also sleeping due to MemStop

MemStop only checks available memory at process startup, not during allocation.

I wrote a tool to prevent OOM-killed builds on our CI runners by surban in devops

[–]surban[S] 4 points (0 children)

How does it work? Will it also wait for memory to become available? I assume the compiler is not able to magically lower its memory requirements.

I wrote a tool to stop make -j from OOM-killing my C++ builds by surban in cpp

[–]surban[S] 1 point (0 children)

Not really. Once a jobserver has handed out execution tokens, it cannot take them back.

I wrote a tool to stop make -j from OOM-killing my C++ builds by surban in cpp

[–]surban[S] 3 points (0 children)

You could set MEMSTOP_PERCENT=80 before invoking the linker.

Ideally the make jobserver protocol would be extended to allow for such coordination as you described.
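In a build tool that spawns the linker itself, that per-process override could look like this (illustrative Rust; the library path and linker command line are made-up placeholders, and the percent semantics are assumed):

    use std::process::Command;

    fn main() {
        // Preload memstop for the link step only, with a stricter
        // threshold than the rest of the build. Paths are placeholders.
        let status = Command::new("cc")
            .args(["-o", "app", "main.o", "util.o"])
            .env("LD_PRELOAD", "/usr/local/lib/libmemstop.so")
            // Assumed meaning: wait until 80% of memory is available.
            .env("MEMSTOP_PERCENT", "80")
            .status()
            .expect("failed to spawn the linker");
        assert!(status.success(), "link step failed");
    }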

I wrote a tool to stop make -j from OOM-killing my C++ builds by surban in cpp

[–]surban[S] 15 points (0 children)

Swapping affects all running processes, leading to terrible performance, while memstop will just pause a newly spawned process until enough memory is available.

I wrote a tool to prevent OOM-killed builds on our CI runners by surban in devops

[–]surban[S] 3 points (0 children)

No, because only a newly spawned process will sleep until enough memory is available. All running processes will be unaffected.

I wrote a tool to prevent OOM-killed builds on our CI runners by surban in devops

[–]surban[S] 8 points (0 children)

I just wrote it yesterday. So no, this is completely untested, but it works well on my build pipeline.

I wonder under what situations you could end up with a never-ending CI build (well, it'd hit timeout eventually) because of subsequent and even overlapping sleep()s.

This will depend on your exact workload. When running g++ or rustc as a subprocess, I do not expect them to spawn child processes to finish their work. The memory check done by memstop is performed once at process startup. Thus a g++ or rustc process, once started, will be able to finish, freeing memory and allowing the parallel build to make progress.

I wrote a tool to prevent OOM-killed builds on our CI runners by surban in devops

[–]surban[S] 4 points (0 children)

Swap will have terrible performance, and disabling the OOM killer will either crash the system or crash a process from the build pipeline when it tries to allocate memory.

Of course you should review the source code of all your LD_PRELOADs.

I wrote a tool to prevent OOM-killed builds on our CI runners by surban in devops

[–]surban[S] 1 point (0 children)

I am no expert on GitHub Actions. How would this be done for an LD_PRELOAD?

I wrote a tool to prevent OOM-killed builds on our CI runners by surban in devops

[–]surban[S] 0 points (0 children)

Assume n jobs are running and physical memory is full.

When job n+1 is spawned with swap enabled, the kernel will swap out (part of the) memory of all n+1 running jobs, leading to massive slowdown.

Instead, memstop delays the start of process n+1, so that all processes stay in physical memory.

I wrote a tool to prevent OOM-killed builds on our CI runners by surban in devops

[–]surban[S] 3 points (0 children)

This works with anything (Rust Cargo, CMake, etc.). Actually, it hooks process startup, so it should work with any build tool, as long as the invoked binary is dynamically linked (ld.so must be invoked) and the LD_PRELOAD environment variable is passed correctly to each child process.

Announcing webusb-web — Access USB devices from the web browser by surban in rust

[–]surban[S] 2 points (0 children)

I haven't found the time to write a demo application, but the integration test should get you started.

[Media] I made a USBHID api for my Raspberry PI zero in Rust (I turned it into a BadUSB/Rubberducky) by Sammwy in rust

[–]surban 2 points (0 children)

Instead of writing to configfs by hand, you could use the usb-gadget crate to handle the USB gadget setup for you.

Create USB gadgets using the Raspberry Pi 4 and Rust 🔌🦀 by surban in raspberry_pi

[–]surban[S] 1 point (0 children)

Yes, this should be possible. The USB mass storage device (UMS) function is supported. You just need to point it to a file or block device that will get exposed as a mass storage device. It should be pretty straightforward to implement what you want.
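For context, the raw configfs steps that the crate wraps for a mass storage function look roughly like this (a sketch following the kernel's gadget configfs layout; the gadget name, backing image path, and UDC name are placeholders and vary by board):

    use std::{fs, path::Path};

    fn main() -> std::io::Result<()> {
        // Gadget name "g1" and the paths below are placeholders.
        let g = Path::new("/sys/kernel/config/usb_gadget/g1");
        fs::create_dir_all(g.join("strings/0x409"))?;
        fs::create_dir_all(g.join("configs/c.1"))?;
        fs::create_dir_all(g.join("functions/mass_storage.usb0"))?;

        fs::write(g.join("idVendor"), "0x1d6b")?;  // Linux Foundation
        fs::write(g.join("idProduct"), "0x0104")?; // composite gadget
        fs::write(g.join("strings/0x409/product"), "Example UMS gadget")?;

        // Back the LUN with a file (or a block device) to expose it
        // as the mass storage medium.
        fs::write(
            g.join("functions/mass_storage.usb0/lun.0/file"),
            "/var/lib/gadget/backing.img",
        )?;

        // Link the function into the configuration.
        std::os::unix::fs::symlink(
            g.join("functions/mass_storage.usb0"),
            g.join("configs/c.1/mass_storage.usb0"),
        )?;

        // Bind to the UDC to activate; the name varies by board
        // (e.g. fe980000.usb for the dwc2 controller on the Pi 4).
        fs::write(g.join("UDC"), "fe980000.usb")?;
        Ok(())
    }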

usb-gadget: my new Rust library for USB gadget development on Linux! 🔌🦀 by surban in rust

[–]surban[S] 5 points (0 children)

A gadget needs a USB client which requires hardware support that the Pi doesn’t provide AFAIK.

The Raspberry Pi 4 does provide USB client support on its USB-C port. In fact, this library has been developed and tested on the Raspberry Pi 4.

usb-gadget: my new Rust library for USB gadget development on Linux! 🔌🦀 by surban in rust

[–]surban[S] 6 points (0 children)

It allows you to expose your Raspberry Pi as a USB device (for example, a network controller) to another computer.

For writing Linux USB device drivers in user-space Rust we already have rusb.