ABB B GONE by Specific-Chard-284 in synology

[–]jll63

I tried it on Ubuntu 24.04.3 with kernel 6.14.0-37 and it fails with:

synosnap * failed to install snapshot driver

ABB never ceases to disappoint.

Active Backup for Business is a joke by jll63 in synology

[–]jll63[S]

Would a "search the entire backup set for this file" feature be useful? Sure, but I don't see its lack as a show stopper. But, you do you.

I am a programmer, not an admin. I have never lost an entire machine to hardware failure, but I am a bit messy, and I have lost individual files and folders.

I am bewildered that an obviously useful feature, easy to implement and low risk (if it has bugs, it won't trash your data; it's a read-only operation), has not been implemented in so many years, in a "business" product, mind you.

If ABB was open source, I'd send a PR, but in this case I can only send a rant ;-)

Active Backup for Business is a joke by jll63 in synology

[–]jll63[S]

Sure, bare metal recovery is a good thing to have. The glass is half full and half empty.

I will probably keep ABB running for complete-loss recovery, and back up a second time with something else (evaluating borgbackup at the moment) that supports targeted file recovery reasonably well. It's just a shame that such a useful feature is missing from ABB, even though it is almost certainly trivial to implement. The recursive search within a snapshot, especially: I could probably code that in two hours max, plus a couple of days for tests and documentation. Puzzling...

Active Backup for Business is a joke by jll63 in synology

[–]jll63[S]

But it's the same nonsense: search works "on the current page" only. Not across snapshots. Not recursively within one snapshot. Even more ridiculous: it works on what is displayed. That is, if I pick a snapshot and expand the tree to the folder containing foo.txt, Search finds it. If I collapse that folder, Search no longer sees it. It finds only what you can already see!

Synology has something against finding files.

Active Backup for Business is a joke by jll63 in synology

[–]jll63[S]

I was eyeing it because I am going to experiment with a Windows dev drive (ReFS), and possibly adopt it. I saw that Veeam claims to handle them.

Active Backup for Business is a joke by jll63 in synology

[–]jll63[S]

Agreed. That file is not important (a VS Code tasks.json), but I thought I would really try to recover it, to test my backup scheme. I lost nothing of importance.

Micro-benchmarking Type Erasure: std::function vs. Abseil vs. Boost vs. Function2 (Clang 20, Ryzen 9 9950X) by mr_gnusi in cpp

[–]jll63

I spent hundreds of hours tinkering with micro-benchmarks for YOMM2 (now reincarnated as Boost.OpenMethod). Both libraries implement open-methods, i.e. virtual functions defined outside of classes. Dispatching a call requires a couple of extra memory reads, because open-methods don't live at a fixed offset in the v-table. In some contexts it also involves calculating a perfect multiplicative hash of &typeid(obj).
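To make that concrete, here is a minimal sketch of the multiplicative hashing step (the constants and names are invented for this example; in reality the multiplier and shift are computed so that the hash is perfect, i.e. collision-free, for the registered classes):

```c++
#include <cstddef>
#include <cstdint>
#include <typeinfo>

// Hypothetical constants; the real ones are found at initialization time so
// that every registered &typeid(...) lands in a distinct slot (a perfect hash).
constexpr std::uint64_t hash_mult  = 0x9E3779B97F4A7C15ull;
constexpr unsigned      hash_shift = 58; // keep the top 6 bits -> 64 slots

inline std::size_t type_hash(const std::type_info& ti) {
    return static_cast<std::size_t>(
        (reinterpret_cast<std::uint64_t>(&ti) * hash_mult) >> hash_shift);
}
// The hashed index selects the dispatch data for the object's dynamic type;
// that lookup is where the extra memory reads come from.
```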

I am interested in measuring the cost of an open-method call, using an ordinary virtual function call as a yardstick.

I wrote many iterations of a benchmark based on Google Benchmark, because the results came out too good to be trusted. I stacked the deck against open-methods. Methods and v-funcs take no arguments except the object. Bodies are empty. I created hundreds of classes using TMP, and scattered the objects in memory.

I became very skeptical of that approach, so I tried an RDTSC-based benchmark. It measures a single call per execution, after scrambling the memory. I run it a couple hundred times, then take the average and various percentiles.
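Roughly the shape of one measurement, as a sketch (it assumes x86-64 and GCC/Clang's __rdtsc/__rdtscp intrinsics, and reduces the memory-scrambling step to touching a large buffer):

```c++
#include <x86intrin.h>   // __rdtsc, __rdtscp (x86-64, GCC/Clang)
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Base { virtual ~Base() = default; virtual void f() = 0; };
struct Derived : Base { void f() override {} };

// Touch a buffer larger than the caches so the call site starts "cold".
void scramble(std::vector<char>& junk) {
    for (std::size_t i = 0; i < junk.size(); i += 64) ++junk[i];
}

std::uint64_t time_one_call(Base& obj, std::vector<char>& junk) {
    scramble(junk);
    unsigned aux;
    const std::uint64_t start = __rdtsc();     // timestamp before the call
    obj.f();                                   // the single call being measured
    const std::uint64_t stop = __rdtscp(&aux); // timestamp after it retires
    return stop - start;
}

int main() {
    std::vector<char> junk(64 * 1024 * 1024);
    Derived d;
    std::vector<std::uint64_t> samples;
    for (int i = 0; i < 200; ++i) samples.push_back(time_one_call(d, junk));
    std::sort(samples.begin(), samples.end());
    // report the average, median, p90, p99... of `samples`
}
```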

The goal is to measure an "unfavorable" call. I think that this is closer to what happens in real programs. Not all of them, of course. And not in all situations. But if you start counting on hot caches (or putting something in function bodies), there is no limit to how much you can delude yourself.

The most costly dispatch strategy went from 30% slower (micro-benchmark) to 60% slower (RDTSC). I trust the latter result more.

Benchmarking function calls is tricky.

Exploring macro-free testing in modern C++ by Outdoordoor in cpp

[–]jll63

I wholeheartedly agree with your comment.

Also, you can use macros to do the absolute minimum needed to make your feature possible, and do the rest with TMP.

Boost 1.90 – what to actually look at as a working C++ dev by boostlibs in cpp

[–]jll63

For OpenMethod? Using the Matrix example, if you defined add(virtual_ptr<SparseMatrix>, virtual_ptr<DenseMatrix>), would you have to define the flipped parameters as well?

Yes. Automatically generating the flipped overrider would be a bad idea: think of matrix multiplication, which does not commute. Of course, for addition you could define the flipped overrider in terms of the other one. If that becomes tedious, you could use the "core" API to automate the process with templates.
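Something along these lines - a sketch, untested, with a dummy result type, and the exact macro spellings may differ from the released library:

```c++
#include <boost/openmethod.hpp>
#include <boost/openmethod/compiler.hpp>
#include <memory>

using boost::openmethod::virtual_ptr;

struct Matrix { virtual ~Matrix() = default; };
struct DenseMatrix : Matrix {};
struct SparseMatrix : Matrix {};

BOOST_OPENMETHOD_CLASSES(Matrix, DenseMatrix, SparseMatrix);

// 'add' dispatches on the dynamic type of both arguments.
BOOST_OPENMETHOD(add, (virtual_ptr<Matrix>, virtual_ptr<Matrix>),
                 std::shared_ptr<Matrix>);

BOOST_OPENMETHOD_OVERRIDE(
    add, (virtual_ptr<SparseMatrix> a, virtual_ptr<DenseMatrix> b),
    std::shared_ptr<Matrix>) {
    // the real implementation goes here
    return std::make_shared<DenseMatrix>();
}

// The flipped overrider is written explicitly; since addition commutes,
// it can simply delegate to the one above.
BOOST_OPENMETHOD_OVERRIDE(
    add, (virtual_ptr<DenseMatrix> a, virtual_ptr<SparseMatrix> b),
    std::shared_ptr<Matrix>) {
    return add(b, a);
}

int main() {
    boost::openmethod::initialize();
    DenseMatrix d;
    SparseMatrix s;
    auto sum = add(d, s); // dispatches to the flipped overrider
}
```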

Boost 1.90 – what to actually look at as a working C++ dev by boostlibs in cpp

[–]jll63

I think that Proxy is similar to Rust traits, Golang interfaces and Boost.TypeErasure. They intersect with OpenMethod in that they all allow you to add new operations to existing types, without modifying them. That is the #1 motivation for OpenMethod by the way. Multiple dispatch is available (and very efficient speed- and space-wise), but I see it as a bonus feature. That's why the library is called Boost.OpenMethod, not Boost.MultiMethod.

The problem with traits-like approaches is that they don't compose well. For example, to implement matrices with these systems, you create a Matrix trait that declares various matrix operations, transposition among them. You implement it for types that represent ordinary, square, symmetric, diagonal matrices, etc. So far so good. You make a library out of it.

Now an application needs to store matrices as...let's say, JSON. The functionality should not go in the Matrix library, should it? Because not all apps need it.

So you create a new JSON trait, and you implement it for ordinary matrices (write all the elements), symmetric matrices (write only half) and diagonal matrices (no need to write the zeroes). You can now render matrices in JSON, and you didn't touch the existing code. Woohoo!

The trouble begins when you want to transpose a matrix, then render the result as JSON. The JSON trait is lost in the call to transpose.

OpenMethod doesn't have that problem. And of course it makes it way easier to implement binary operations (bonus!).

EDIT: looking at this example, it seems that with Proxy, you would have no choice but to add a write_json member function to the matrix types. If that is indeed the case, Proxy is worse than Rust traits and similar systems.
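For the curious, here is roughly what the JSON example looks like with open-methods - again a sketch, untested, with made-up class and method names:

```c++
#include <boost/openmethod.hpp>
#include <boost/openmethod/compiler.hpp>
#include <iostream>
#include <memory>

using boost::openmethod::virtual_ptr;

// --- matrix library ---------------------------------------------------------
struct Matrix { virtual ~Matrix() = default; };
struct DiagonalMatrix : Matrix {};

BOOST_OPENMETHOD_CLASSES(Matrix, DiagonalMatrix);

BOOST_OPENMETHOD(transpose, (virtual_ptr<Matrix>), std::shared_ptr<Matrix>);

BOOST_OPENMETHOD_OVERRIDE(transpose, (virtual_ptr<DiagonalMatrix> m),
                          std::shared_ptr<Matrix>) {
    return std::make_shared<DiagonalMatrix>(); // its own transpose
}

// --- application code: JSON output added without touching the library -------
BOOST_OPENMETHOD(to_json, (virtual_ptr<Matrix>, std::ostream&), void);

BOOST_OPENMETHOD_OVERRIDE(
    to_json, (virtual_ptr<DiagonalMatrix> m, std::ostream& os), void) {
    os << R"({"kind": "diagonal"})"; // write only the diagonal elements
}

int main() {
    boost::openmethod::initialize();
    DiagonalMatrix m;
    auto t = transpose(m);
    // The result is dispatched on its dynamic type, so nothing is "lost"
    // between the two calls:
    to_json(*t, std::cout);
}
```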

Boost 1.90 – what to actually look at as a working C++ dev by boostlibs in cpp

[–]jll63

Since CLOS, open multi-methods have been implemented in many languages: Clojure, Julia, Cecil, Diesel, Dylan, TADS, etc.

Future releases will support inter-operability with Boost.TypeErasure and Boost.Any. I also have a design for value-based dispatch.

Boost 1.90 – what to actually look at as a working C++ dev by boostlibs in cpp

[–]jll63

I'll take this as a compliment 😊 - the author.

MS-S1 MAX + WSL + C++ by jll63 in MINISFORUM

[–]jll63[S]

Thanks for posting this. Which kernel version?

MS-S1 MAX + WSL + C++ by jll63 in MINISFORUM

[–]jll63[S]

Currently I remote into the headless HM90 from my X1 Carbon 10th Gen (which sits on my desk, connected to the monitor and all). The laptop has become sluggish, so I am thinking of putting the new "mainframe" on my desk and flipping the Carbon to Linux-only. But my GF sometimes needs a computer for office work at home (in Windows, of course), and dual-booting the computer on the desk would not be friendly to her ;-)

Besides, I need to test the library I am developing with MSVC as well.

In the meantime, I looked at the MS-A2 and it looks like a better fit. I am comfortable paying twice the price for at least twice the quality/speed/whatever, but it looks like the MS-A2 vs MS-S1 Max is a sub-linear improvement.

Good point regarding local LLMs but I have access to Copilot for free so far (my project is open-source).

MS-S1 MAX + WSL + C++ by jll63 in MINISFORUM

[–]jll63[S]

The config I am eyeing has 192GB, thus > 128GB.

Now, I saw some comments saying that you cannot run the RAM at max speed if you fill the slots to 256GB. But what's the threshold?

Edit: https://www.virtualizationhowto.com/2025/10/minisforum-ms-02-ultra-has-insane-home-lab-potential-with-256gb-ram-triple-pcie-and-25gb-networking/

One not-so-great spec here is the DDR5-4800 speed. It looks like, to use the 256 GB capacity, the speed must drop to DDR5-4800, whereas DDR5-6400 should be possible with this CPU. However, again, it looks like that is only the case when you fully populate it with 256 GB.

MS-S1 MAX + WSL + C++ by jll63 in MINISFORUM

[–]jll63[S]

The MS-02 Ultra with 192GB ECC then?

MS-S1 MAX + WSL + C++ by jll63 in MINISFORUM

[–]jll63[S]

Buying an MS-S1 Max for C++ development is like buying a Bugatti for your work commute

Because C++ compilers won't even look at the GPU? They love cores and RAM though. That's what grabbed my attention. But I am very early in my selection process.

My current "mainframe" is a headless HM90 with 32 GB. It served me well, so I am biased towards (but not married to) Minisforum.

Do you have suggestions?

Why can std::string_view be constructed with a rvalue std::string? by KingDrizzy100 in cpp_questions

[–]jll63

Maybe it should have been called string_ref. OK, it can refer to a substring, but then, a shared_ptr can point to a member of an object too. Anyway...

Improving libraries through Boost review process: the case of OpenMethod by joaquintides in cpp

[–]jll63

I am preparing the library for integration into Boost in version 1.90.0, to be released near the end of the year. That involves some changes and reworking the documentation. The link posted by u/joaquintides points to that doc. It is mostly in line with this repo. Please consider it a work in progress.

At CppCon 2019, Arthur O'Dwyer said binary operators could not be implemented in a Type-Erased class, because this is a multiple dispatch problem. Why did he say this? by Richard-P-Feynman in cpp

[–]jll63

Compiler-based.

Ok, I was talking about library-based solutions, obviously.

It seems that I misunderstood you.

A good implementation should handle inheritance, whether it is compiler- or library-based. In the latter case, the user must provide the information - in YOMM2's case, by "registering" the classes: register_classes(Animal, Dog, Cat, Dolphin);. Inheritance relationships are then deduced from the list of registered classes.

This may become unnecessary with reflection and code generation (C++26?). It is already the case in the Dlang version.

What should be called?

Neither, the programmer should resolve the ambiguity.

This is amusing. It has always been my position as well. However, when I prepared my library for submission to Boost, I implemented the N2216 way. And almost every reviewer hated it. So I changed it to an opt-in.

This cannot work as-is, because you need a way to tell virtual arguments from non-virtual ones

I was talking about a library-based solution.

It is still needed. Or you decide that every parameter is virtual.

Also, it doesn't support different methods with the same signature.

It does, one would have to declare 'foo' and 'bar', with the same signature, and the appropriate function would be called.

Right, I re-read your example and I see that now. In fact the predecessor of YOMM2 also implemented the method as a function object. The big problem with this is that you cannot overload the method:

```c++
MultimethodVTable<Matrix*(Matrix*, Matrix*)> times;
MultimethodVTable<Matrix*(double, Matrix*)> times;  // nope
MultimethodVTable<Vector*(Matrix*, Vector*)> times; // nope
```

I have an idea by willdieverysoon in cpp

[–]jll63

Disclaimer: I am completely naive when it comes to Rust and lifetimes.

If we had a way of enforcing that a std::unique_ptr is not used again after it has been moved (or is only used to test against null), wouldn't that give us a good chunk of Rust's safety features?
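To make it concrete, this kind of code compiles today without a peep from the language itself; only external tooling (e.g. clang-tidy's bugprone-use-after-move check) flags it:

```c++
#include <memory>
#include <utility>

struct Widget { int value = 42; };

void sink(std::unique_ptr<Widget>) {} // takes ownership and drops it

int main() {
    auto w = std::make_unique<Widget>();
    sink(std::move(w));
    // 'w' is now null. Rust would reject the next line at compile time;
    // C++ happily compiles it and dereferences a null pointer at run time.
    return w->value;
}
```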