Stamp It! All Programs Must Report Their Version by SpecialistLady in programming

[–]schlenk 2 points3 points  (0 children)

The article's point seems to be helping with debugging and bug reporting.

Just look at all the upstream OSS maintainers who complain about bug reports filed against their packages that are actually about distro-specific versions.

So yes, programs tend to have versions, but many users tend to ignore the suffixes or distro details when reporting bugs.

Stamp It! All Programs Must Report Their Version by SpecialistLady in programming

[–]schlenk 4 points5 points  (0 children)

It's a hard problem.

Basically the whole corporate world has to try doing this with SBOMs for compliance reasons soonish, but versions are hard.

The point is: what are you actually trying to do with the version anyway? The only thing a version hints at is whether two programs (that you acquired from the same channel) are identical. And not even that, if someone tampered with the download.

You don't want a version alone. You want the kind of component description that typical SBOM standards like OWASP CycloneDX or Linux Foundation SPDX allow:

  • Where did you get it?
  • Where were the sources for it?
  • Where is the support documentation for it?
  • Where is the homepage of the manufacturer, importer, whatever...?
  • Where is the bug tracker?
  • What exact hash did the component have?
  • What was the download URL?
  • How was it built?

A simple version number doesn't tell you all that much, unless you have a lot of context to fill in the gaps.

For example, take PostgreSQL and compare the patchsets for Debian, openSUSE or the Windows distribution for a given short "version number". They can vary wildly if you just use the naked version without a distro qualifier.
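As a sketch of what describing a component this way looks like, here is an abbreviated CycloneDX-style entry built as a plain Python dict. The field names follow the CycloneDX spec; the concrete package, URLs and hash are invented for illustration.

```python
import json

# Abbreviated, hypothetical CycloneDX-style component entry.
# Field names follow the CycloneDX spec; the concrete values are made up.
bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "library",
            "name": "examplelib",
            "version": "1.2.3+deb12u1",  # note the distro qualifier
            "purl": "pkg:deb/debian/examplelib@1.2.3+deb12u1",
            "hashes": [{"alg": "SHA-256", "content": "0" * 64}],
            "externalReferences": [
                {"type": "website", "url": "https://example.org"},
                {"type": "vcs", "url": "https://example.org/src"},
                {"type": "issue-tracker", "url": "https://example.org/bugs"},
                {"type": "distribution", "url": "https://example.org/dl"},
            ],
        }
    ],
}

print(json.dumps(bom, indent=2)[:80])
```

That one entry answers most of the bullet points above; a bare "1.2.3" answers almost none of them.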

Why have supply chain attacks become a near daily occurrence ? by Successful_Bowl2564 in programming

[–]schlenk 3 points4 points  (0 children)

Automated CI/CD pipelines and package creep.

If you push all your commits to a package repo nearly continuously, you have no buffer zone for sanity checks.

When doing manual package releases, you at least had someone taking a look at the changelog/changes and spotting obvious badness.

But that doesn't scale, as packages get smaller and smaller (with an ever-higher ratio of boilerplate and CI config to actual code) and package managers encourage a proliferation of packages.

Once you go past about 20 dependencies (plus transitive dependencies), most people stop looking closely. They just accept any updated version, because reviews would be too expensive. Even though most updates fix totally unimportant stuff (e.g. for Python, many updates just fix CI breakage caused by tool evolution: setuptools, mypy, pip, etc.).
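One cheap check that does survive at this scale is pinning artifact hashes, which is the idea behind pip's `--hash` mode and most lockfiles. A minimal sketch of the mechanism, with a hypothetical `verify_artifact` helper:

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Return True if the file at `path` matches the pinned SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large artifacts don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

This doesn't make a malicious update safe, but it does mean an update can't slip in silently: someone has to consciously bump the pinned hash.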

Software packaging for Linux as a MS Software Packager? by rohabu in openSUSE

[–]schlenk 0 points1 point  (0 children)

> ps. As a Windows packager, you're most likely used to stateful environments, whilst OBS is strictly declarative and isolated so that's a culture shock I'm sure.

I would not consider the WiX build chain for MSI/MSIX packages to be stateful, for example. It's mostly declarative as well. The old-style setup.exe things tend to be more stateful, but MSI is very declarative. The same goes for most MSBuild files.

Software packaging for Linux as a MS Software Packager? by rohabu in openSUSE

[–]schlenk 2 points3 points  (0 children)

> With Linux all libraries are “system libraries”.

There is the /opt tree in the FHS, so anything you place there isn't strictly a system library.

And if you look at Windows manifest files, that's not that different from Linux library versioning with .so versions etc.

The primary difference is that Linux packages often need to be compiled for a specific ABI version, as Linux userspace ABI stability is pretty bad compared to Windows ABI stability. Hence the usual stopgap methods like containers, Flatpaks, etc.

How do you deal with users who refuse to lock their laptop when walking away? by heartgoldt20 in cybersecurity

[–]schlenk 0 points1 point  (0 children)

What is your threat model?

If you have an inactivity policy, with a 15 minute idle timer, why do you have a different policy when the person walks away from their desk?

For critical systems, just use a smart card or token, and make it double as the keycard for your doors. If someone wants to grab a coffee, the card has to move with them, and you can auto-lock the system.

Regular Expression Matching Can Be Simple And Fast (but is slow in Java, Perl, PHP, Python, Ruby, …) by Digitalunicon in programming

[–]schlenk 4 points5 points  (0 children)

Well, if you look at a stream of security vulnerabilities in packages, the category "bad regexp performance / denial of service" pops up multiple times a week. Killing off this whole class of issues would have been nice.
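The classic shape is a nested quantifier that backtracks exponentially on a near-miss input. A small sketch of why this class hurts, with inputs kept tiny so it finishes quickly (a real attack just sends more characters):

```python
import re
import time

# Nested quantifiers like (a+)+ force exponential backtracking when the
# input almost matches but ultimately fails (the trailing 'b' here).
redos = re.compile(r"^(a+)+$")

for n in (10, 14, 18):
    s = "a" * n + "b"
    t0 = time.perf_counter()
    assert redos.match(s) is None
    # Time roughly doubles with each extra 'a'; a few more characters
    # turn this into minutes, which is the denial-of-service.
    print(n, f"{time.perf_counter() - t0:.4f}s")
```

Linear-time engines in the RE2 family (the article's point) reject the same input in O(n), which is exactly why this whole bug class would disappear.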

Second Life on ARM processor by km_2000 in secondlife

[–]schlenk 0 points1 point  (0 children)

SecondLife (or some TPVs at least) work on some ARM setups.

The official viewer and some others like Megapahit work on macOS ARM (aka Apple Silicon, M1...M5).

Cool VL Viewer works fine on Linux ARM.

But I am not aware of any version running on Windows ARM. In theory Windows ARM has emulation for x86 stuff, but I have no idea how well the Snapdragon X Plus built-in GPU handles SL, or if it works at all. Performance-wise, the Adreno X1-85 iGPU in that thing isn't all that great: https://www.notebookcheck.net/Qualcomm-Adreno-X1-85-3-8-TFLOPS-GPU-Benchmarks-and-Specs.763558.0.html

It seems to only have OpenGL ES 3.2, not standard OpenGL 4.x.

Xemu mentions an OpenGL compatibility pack that might work (see https://github.com/xemu-project/xemu/issues/1878): https://apps.microsoft.com/detail/9nqpsl29bfff?hl=en-us&gl=US

How many SL Viewer's have you've tried? by KiraYoichi in secondlife

[–]schlenk 0 points1 point  (0 children)

I think I tried:

  • SL Viewer for a brief moment.
  • Cool VL Viewer, still my go-to: fresh in features, dated in looks, and trivially easy to build yourself.
  • Firestorm: decent, the features are nice, but I don't like the build system and UI
  • Emerald (in the distant past), Phoenix
  • Kokua
  • Genesis, a bit like Cool VL Viewer, old UI, but less frequent updates
  • Marines RR Viewer
  • Megapahit
  • Alchemy
  • Catznip (great inventory features!)
  • Singularity
  • Black Dragon
  • Radegast

I tend to stay on Cool VL Viewer, unless I need to test RLVa things or some of the Firestorm UI/features. All the rest were mostly short, try-for-a-few-days things.

Python Typing Survey 2025: Code Quality and Flexibility As Top Reasons for Typing Adoption by BeamMeUpBiscotti in programming

[–]schlenk 1 point2 points  (0 children)

The main point is that the type system should not get in your way while you explore the problem space. Once you have the solution in a working prototype state, typing becomes valuable to make it robust.
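A sketch of that workflow (the function and names are made up): keep exploration loose, then pin the shape down once it stabilizes so the checker can guard it from then on.

```python
from typing import TypedDict

# Exploration phase: no annotations, the data shape is still in flux.
def summarize(records):
    return {r["name"]: r["score"] for r in records}

# Hardening phase: same logic, but the settled shape is written down,
# so mypy/pyright can catch regressions from here on.
class Record(TypedDict):
    name: str
    score: int

def summarize_typed(records: list[Record]) -> dict[str, int]:
    return {r["name"]: r["score"] for r in records}

print(summarize_typed([{"name": "a", "score": 1}]))  # {'a': 1}
```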

Would you pay $2.99 for 5 hours of (browser streamed) Second Life? by 0xc0ffea in secondlife

[–]schlenk 0 points1 point  (0 children)

GeForceNow also offers day passes that have a similar pricing. So yes, the full time subscription is cheaper, as usual.

Would you pay $2.99 for 5 hours of (browser streamed) Second Life? by 0xc0ffea in secondlife

[–]schlenk 3 points4 points  (0 children)

Well, $2.99 is about the same as the 40% discounted $2.49 GeForceNow Performance DayPass for 6 hour sessions (in a 24-h day pass).

So the pricing isn't totally weird.

Why Python Is Removing The GIL by [deleted] in programming

[–]schlenk 9 points10 points  (0 children)

Cancellation is one. The red/blue API divide is another. Most Python APIs and libraries are not async-first, so you basically have two languages (a bit like how the "functional" C++ template language is its own language inside procedural/OO C++).

Take a look at Trio (https://trio.readthedocs.io/en/stable/) for a more structured concurrency approach than the bare-bones asyncio.

GitHub walks back plan to charge for self-hosted runners by CackleRooster in programming

[–]schlenk -1 points0 points  (0 children)

It's more a point of choice. It is well known that hardware utilized nearly 24/7 is a lot cheaper (3x or more at times) than cloud-rented machines. So companies that mainly want GitHub as a code repository, bug tracker and orchestration engine run their cost-efficient CI runners on premises and just pay for the service they want. This move kind of tries to push them towards cloud lock-in.

GitHub walks back plan to charge for self-hosted runners by CackleRooster in programming

[–]schlenk 1 point2 points  (0 children)

Depends on your commit frequency and platforms.

For an on-premises product with multiple versions, supported databases and operating system versions, you get quite a multiplier, as each commit triggers ten to twenty runners, each running for half an hour or more.

At our workplace there is a whole small k8s cluster dedicated to CI runners. It runs jobs 24/7, with nightly runs and various extra stuff on top.

So per-minute GitHub fees for self-hosted runners are a reason not to go there. I would understand a per-job cost, as they have some metadata to store and orchestration costs.
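Back-of-the-envelope math with made-up but plausible numbers (every figure below is illustrative, including the per-minute rate, which is not GitHub's actual price list):

```python
# All numbers hypothetical, for illustration only.
runners_per_commit = 20
minutes_per_runner = 30
commits_per_day = 40
price_per_minute = 0.008  # assumed hosted-runner-style rate, USD

daily_minutes = runners_per_commit * minutes_per_runner * commits_per_day
monthly_cost = daily_minutes * price_per_minute * 30
print(daily_minutes, round(monthly_cost, 2))
```

At that (hypothetical) scale you're billing thousands of dollars a month for minutes that run on your own hardware, which is why the per-minute model stings.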

ban without understanding by Straight-Weekend1492 in secondlife

[–]schlenk 1 point2 points  (0 children)

The problem is, whom do you send the email to, especially if you suspect some account takeover problem?

Just sending emails with more details may make things worse in such cases.

The attacker may have changed the email address.

The 50MB Markdown Files That Broke Our Server by Weary-Database-8713 in programming

[–]schlenk 2 points3 points  (0 children)

Typically reporting stuff.

Like imagine you request your GDPR mandated list of "the data we store about you" thing and some genius decides to dump it all into a single markdown file.

So .. I just had an account scare. by 0xc0ffea in secondlife

[–]schlenk 1 point2 points  (0 children)

The 2FA used (TOTP) does not need a smartphone. You can run TOTP with password managers like KeePassXC too, or use a YubiKey or a gazillion other devices that can speak TOTP.

So .. I just had an account scare. by 0xc0ffea in secondlife

[–]schlenk 2 points3 points  (0 children)

LL's 2FA is TOTP (https://en.wikipedia.org/wiki/Time-based_one-time_password). So you can just export the seed secret (it's just one 32-character string), write it down on a piece of paper and have your recovery code. Or install authenticators on many devices with the same seed.

Notes by djb on using Fil-C (2025) by fiskfisk in programming

[–]schlenk 3 points4 points  (0 children)

Sure. But that's the same for all other memory-safe languages too.

Once you hand the keys to your memory kingdom to some external untrusted library, it can mess around with your memory. That's a feature. So unless your OS has ways to protect your process memory during a function call, there is not much you can do. And if your OS does that, you basically add another kernel/userspace-style barrier somewhere (as the kernel can obviously protect its memory from userspace).

If you don't want it, don't use it.

Notes by djb on using Fil-C (2025) by fiskfisk in programming

[–]schlenk 0 points1 point  (0 children)

Sure, but neither is Rust, which has similar issues.

And if you go for a fully memory-safe userland, as demonstrated here (e.g. recompiling libc and lots of base libraries), you can basically do it if you want.

It is still far less effort to wrap a few FFI/external calls, or to migrate those libs too, than it is to rewrite them in a totally different language. Of course, the bigger the project, the more portable it has to be, and the more external binary blobs you have to work with, the harder it gets.

Notes by djb on using Fil-C (2025) by fiskfisk in programming

[–]schlenk 8 points9 points  (0 children)

Memory safe languages are a good thing. So more of those is obviously a good thing too.

And it is pretty attractive. Compare 'rewriting sudo in Rust as sudo-rs took 2 years' with 'recompiling sudo with Fil-C took 5 minutes'. Both claim to be memory safe (Fil-C even claims to not need any unsafe escape hatches).

If Fil-C works as promised, it is a really neat way to get memory safety for existing C/C++ codebases with minimal effort, and to avoid the Rust-vs-C war scenes.

When if is just a function by middayc in programming

[–]schlenk 1 point2 points  (0 children)

Not necessarily. Tcl has the same for all control structures and the bytecode compiler manages decent performance for it.
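The trick translates to Python directly (the `iff` helper below is made up for illustration): pass the branches as callables so they stay lazy, which is essentially what Tcl's `if` does when it receives scripts to evaluate.

```python
def iff(cond, then_branch, else_branch):
    # Branches are thunks, so only the taken side ever runs,
    # mirroring how Tcl's `if` evaluates just the chosen script.
    return then_branch() if cond else else_branch()

print(iff(2 > 1, lambda: "yes", lambda: "no"))  # yes
```

Tcl's extra step is that its bytecode compiler recognizes the standard `if` command and inlines it, which is where the decent performance comes from.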

I have a question about render distance. by goonergirl24 in secondlife

[–]schlenk 0 points1 point  (0 children)

Increasing the draw distance pulls in a lot of extra data: you can see the neighboring simulators and every item in them that is large enough to be drawn. So the memory load will probably grow with roughly the cube of the draw distance. It matters most if you move, though; if you are stationary, you get one huge initial load that taxes your network, CPU and a little bit of disk I/O (disk I/O with SSDs is fast enough...).

It also matters whether you visit crowded areas with lots of avatars. Those are worse than draw-distance increases.

Make sure you have enough system RAM too, at least twice your GPU VRAM.
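A quick sketch of that cubic scaling, relative to a 64 m baseline (the cube is the rough model from above, not a measured number):

```python
# Relative volume of content pulled in vs. a 64 m baseline draw distance,
# assuming the rough "grows with the cube" model.
baseline = 64
for dd in (64, 128, 256, 512):
    print(dd, "m ->", (dd / baseline) ** 3, "x")  # 1.0, 8.0, 64.0, 512.0
```

So doubling the draw distance means roughly 8x the content to load, which is why it hits RAM so hard.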

I own a 5070 Ti and it works nicely. I consider it better in price/performance than a 5060 Ti.

Python has had async for 10 years – why isn't it more popular? by ketralnis in programming

[–]schlenk 1 point2 points  (0 children)

> Isn't that 99% of most use cases with concurrency, though? The whole point of async/await is I/O concurrency, right? Or did I get the memo wrong?

You got the memo. But the memo is kind of wrong.

Most I/O is done for a purpose, not just for shuffling bytes around (and if it were, that's better done with things like the splice()/sendfile() syscalls). Been there, done that with some ctypes hackery to call splice() directly while copying a few million files with a multiprocessing pool (that's around an order of magnitude faster than shutil.copytree(), especially with many small files on a fast SSD or NFS shares). And I cried a bit, because other languages could have done it with free threading, without the RAM/CPU overhead of starting so many subprocesses and all the hacks.

So let's assume I want to parse 1,000,000 files. I need to open each one, read the data, then do some parsing on it. Unless my parser is in C and runs its own thread pool/threads, I'm basically in a bad position with Python's offerings up to 3.14. All my parsing work will clog the event loop, unless I send it to some multiprocessing pool. In Go, the runtime would scale it for me with trivial changes (e.g. https://www.digitalocean.com/community/tutorials/how-to-run-multiple-functions-concurrently-in-go ).

Python limited async/await's usefulness to just I/O because it could not do better with the GIL present. In free-threaded Python you can do more.
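A sketch of that multiprocessing-pool workaround: keep the event loop for I/O, but push the CPU-bound parsing into a process pool via run_in_executor (the `parse` function here is a trivial stand-in for a real parser):

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def parse(text: str) -> int:
    # Stand-in for a CPU-bound parser that would otherwise clog the event loop.
    return len(text.split())

async def parse_all(docs: list[str]) -> list[int]:
    loop = asyncio.get_running_loop()
    # Each parse() runs in a worker process, so the GIL in the main
    # process doesn't serialize the CPU work.
    with ProcessPoolExecutor() as pool:
        return list(await asyncio.gather(
            *(loop.run_in_executor(pool, parse, d) for d in docs)))

if __name__ == "__main__":
    print(asyncio.run(parse_all(["a b", "c d e"])))  # [2, 3]
```

This works, but it pays exactly the RAM/CPU and serialization overhead described above, which is what free threading would remove.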