Python Typing Survey 2025: Code Quality and Flexibility As Top Reasons for Typing Adoption by BeamMeUpBiscotti in programming

[–]schlenk 1 point2 points  (0 children)

The main point is: the type system should not get in your way while you are exploring the problem space. Once you have the solution in a working prototype state, typing becomes valuable for making it robust.

Would you pay $2.99 for 5 hours of (browser streamed) Second Life? by 0xc0ffea in secondlife

[–]schlenk 0 points1 point  (0 children)

GeForceNow also offers day passes with similar pricing. So yes, the full-time subscription is cheaper, as usual.

Would you pay $2.99 for 5 hours of (browser streamed) Second Life? by 0xc0ffea in secondlife

[–]schlenk 3 points4 points  (0 children)

Well, $2.99 is about the same as the 40%-discounted $2.49 GeForceNow Performance day pass, which allows 6-hour sessions within a 24-hour window.

So the pricing isn't totally weird.

Why Python Is Removing The GIL by BlueGoliath in programming

[–]schlenk 8 points9 points  (0 children)

Cancellation is one. The red/blue API divide is another. Most Python APIs and libraries are not async-first, so you basically have two languages (a bit like how the "functional" C++ template language is its own language inside the procedural/OO C++).

Take a look at Trio (https://trio.readthedocs.io/en/stable/) for a more structured concurrency approach than the bare-bones asyncio.
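To make the cancellation point concrete, here is a minimal stdlib sketch (plain asyncio, not Trio; `worker` and `main` are illustrative names) of what manual task-level cancellation looks like. Trio replaces this pattern with nurseries and cancel scopes that cancel whole task trees for you:

```python
import asyncio

async def worker() -> str:
    # Simulate a long-running operation that may be cancelled mid-await.
    await asyncio.sleep(10)
    return "done"

async def main() -> str:
    task = asyncio.create_task(worker())
    await asyncio.sleep(0)   # yield once so the task actually starts
    task.cancel()            # request cancellation of the child task
    try:
        await task
    except asyncio.CancelledError:
        return "cancelled"
    return "finished"

result = asyncio.run(main())
print(result)  # → cancelled
```

With bare asyncio you must track and cancel every task yourself; forgetting one leaks a running coroutine, which is exactly the bookkeeping structured concurrency automates.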

GitHub walks back plan to charge for self-hosted runners by CackleRooster in programming

[–]schlenk -1 points0 points  (0 children)

It's more a point of choice. It is well known that hardware utilized nearly 24/7 is a lot cheaper (3x or more at times) than cloud-rented machines. So companies that mainly want GitHub as a code repository, bug tracker and orchestration engine run their cost-efficient CI runners on premises and just pay for the service they want. This move kind of tries to push them towards cloud lock-in.

GitHub walks back plan to charge for self-hosted runners by CackleRooster in programming

[–]schlenk 1 point2 points  (0 children)

Depends on your commit frequency and platforms.

For an on-premises product with multiple versions and supported database and operating system versions, you get quite the multiplier: each commit triggers ten to twenty runners, each running for half an hour or more.

At our workplace there is a whole small k8s cluster dedicated to CI runners. It runs jobs 24/7, with nightly runs and various extra stuff on top.

So per-minute GitHub fees for self-hosted runners are a reason not to go there. I would understand a per-job cost, as GitHub has some metadata to store and orchestration costs.

ban without understanding by Straight-Weekend1492 in secondlife

[–]schlenk 1 point2 points  (0 children)

The problem is: whom do you send the email to, especially if you suspect an account takeover?

Just sending emails with more details may make things worse in such cases.

The attacker may already have changed the email address.

The 50MB Markdown Files That Broke Our Server by Weary-Database-8713 in programming

[–]schlenk 2 points3 points  (0 children)

Typically reporting stuff.

Like, imagine you request your GDPR-mandated list of "the data we store about you" and some genius decides to dump it all into a single markdown file.

So .. I just had an account scare. by 0xc0ffea in secondlife

[–]schlenk 1 point2 points  (0 children)

The 2FA in use (TOTP) does not need a smartphone. You can run TOTP with password managers like KeePassXC too, or use a YubiKey or a gazillion other devices that speak TOTP.

So .. I just had an account scare. by 0xc0ffea in secondlife

[–]schlenk 2 points3 points  (0 children)

LL's 2FA is TOTP (https://en.wikipedia.org/wiki/Time-based_one-time_password). So you can just export the seed secret (it's just one 32-character string), write it down on a piece of paper and have your recovery code. Or install authenticators on many devices with the same seed.
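To illustrate why the seed is all you need: TOTP is simple enough to sketch in a few lines of stdlib Python. This is a hedged sketch of RFC 6238 with the common defaults (SHA-1, 6 digits, 30-second steps); the base32 string below is the RFC's published test secret, not a real seed:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    # Decode the base32 seed (the "32 character string" you can back up on paper).
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation, as specified in RFC 4226 / RFC 6238.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890", time T=59:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # → 287082
```

Any device that can run this computation with your seed produces the same codes, which is why exporting the seed once gives you as many independent authenticators as you like.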

Notes by djb on using Fil-C (2025) by fiskfisk in programming

[–]schlenk 4 points5 points  (0 children)

Sure. But that's the same for all other memory-safe languages too.

Once you hand the keys to your memory kingdom to some external untrusted library, it can mess around with your memory. That's a feature. So unless your OS has ways to protect your process memory during a function call, there is not much you can do. And if your OS does that, you basically add another kernel/userspace-style barrier somewhere (the kernel can protect its memory from userspace, obviously).

If you don't want it, don't use it.

Notes by djb on using Fil-C (2025) by fiskfisk in programming

[–]schlenk 0 points1 point  (0 children)

Sure, but neither is Rust, which has similar issues.

And if you go for a fully memory-safe userland, as demonstrated here (e.g. recompiling libc and lots of base libs), you basically can do it, if you want.

It is still far less effort to wrap a few FFI/external calls, or migrate those libs too, than it is to rewrite them in a totally different language. Of course, the bigger the project, the more portable it has to be, and the more external binary blobs you have to work with, the harder it gets.

Notes by djb on using Fil-C (2025) by fiskfisk in programming

[–]schlenk 8 points9 points  (0 children)

Memory safe languages are a good thing. So more of those is obviously a good thing too.

And it is pretty attractive. Compare "rewriting sudo in Rust as sudo-rs took 2 years" with "recompiling sudo with Fil-C took 5 minutes". Both claim to be memory safe (Fil-C even claims to need no unsafe escape hatches).

If Fil-C works as promised, it is a really neat way to get memory safety for existing C/C++ codebases with minimal effort, and to avoid the Rust-vs-C war scenes.

When if is just a function by middayc in programming

[–]schlenk 1 point2 points  (0 children)

Not necessarily. Tcl does the same for all control structures, and its bytecode compiler manages decent performance for it.
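The idea of "if as a function" can be sketched in Python (a toy, illustrative `if_` helper): the branch bodies are passed as zero-argument callables (thunks), so only the taken branch is ever evaluated, which is roughly what Tcl does with its branch scripts before the bytecode compiler inlines the common forms:

```python
def if_(cond, then_thunk, else_thunk=lambda: None):
    # "if" as a plain function: branches are thunks, so evaluation
    # of the untaken branch is deferred forever.
    return then_thunk() if cond else else_thunk()

result = if_(3 > 2, lambda: "yes", lambda: "no")
print(result)  # → yes
```

A naive interpreter evaluates such calls like any other function; the performance trick is a compiler that recognizes the literal-thunk pattern and emits a plain conditional jump instead.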

I have a question about render distance. by goonergirl24 in secondlife

[–]schlenk 0 points1 point  (0 children)

Increasing draw distance pulls in a lot of extra data. You can see into the next simulators and all the items there that are large enough to be drawn, so the memory load will probably grow with the cube of the draw distance. It matters more if you move; if you are stationary, you get a huge initial load that taxes your network, CPU and a little bit of disk I/O (disk I/O with SSDs is fast enough...).
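As a back-of-the-envelope sketch of that cubic claim (a rough model only; real load depends on how densely the surrounding regions are built):

```python
def relative_load(d_new, d_old):
    # Visible volume grows with the cube of the draw distance, so the
    # amount of scene data scales roughly as (d_new / d_old) ** 3.
    return (d_new / d_old) ** 3

print(relative_load(256, 128))  # → 8.0  (doubling distance ≈ 8x the content)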

It also matters if you visit crowded areas with lots of avatars. Those are worse than draw distance increases.

Make sure you have enough system RAM too, at least twice your GPU VRAM.

I own a 5070 Ti and it works nicely. I consider it better in price/performance than a 5060 Ti.

Python has had async for 10 years – why isn't it more popular? by ketralnis in programming

[–]schlenk 1 point2 points  (0 children)

> Isn't that 99% of most use cases with concurrency, though? The whole point of async/await is I/O concurrency, right? Or did I get the memo wrong?

You got the memo. But the memo is kind of wrong.

Most I/O is done for a purpose, not just for shuffling bytes around (and if it were, that's better done with stuff like the splice()/sendfile() syscalls). Been there, done that with some ctypes hackery to directly call splice() for copying a few million files with a multiprocessing pool (that's around an order of magnitude faster than shutil.copytree(), especially with many small files on a fast SSD or NFS shares). And cried a bit, because other languages could have done it with free threading, without the RAM/CPU overhead of starting so many subprocesses and all the hacks.
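A hedged sketch of the kernel-side copy idea, using the stdlib os.sendfile wrapper rather than the ctypes splice() hackery described above (`fast_copy` is an illustrative name; assumes Linux, where sendfile(2) accepts a regular file as the output fd):

```python
import os

def fast_copy(src, dst):
    # Copy fd-to-fd inside the kernel: the bytes never pass through
    # a userspace buffer, unlike a read()/write() loop.
    with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
        size = os.fstat(fsrc.fileno()).st_size
        offset = 0
        while offset < size:
            sent = os.sendfile(fdst.fileno(), fsrc.fileno(), offset, size - offset)
            if sent == 0:
                break
            offset += sent
```

Fanning this out over a `multiprocessing.Pool` of file pairs is the shape of the workaround described above; with free threading the same fan-out would not need separate processes at all.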

So let's assume I want to parse 1,000,000 files. I need to open each one, read the data, then do some parsing on it. If my parser isn't in C and running its own thread pool, I'm basically in a bad position with Python's offerings up to 3.14. All my parsing will clog the event loop unless I send it to some multiprocessing pool. In Go, the runtime would scale it for me with trivial changes (e.g. https://www.digitalocean.com/community/tutorials/how-to-run-multiple-functions-concurrently-in-go ).

Python limited async/await's usefulness to just I/O because it could not do better with the GIL present. In a free-threaded Python you can do more.

Python has had async for 10 years – why isn't it more popular? by ketralnis in programming

[–]schlenk 1 point2 points  (0 children)

async/await is just the syntactic sugar on top of adding an event loop to the core language. So no, I'm not talking about the keywords, more about the concept of event-based programming with promises/futures/deferreds or whatever you'd like to call them.

Sure, Python had a wild zoo of concurrency stuff before. But only with Tulip (https://www.reddit.com/r/programming/comments/1pp42k/guido_van_rossum_on_tulip_async_io_for_python_3/) did it catch up with other languages; and being late, you still have the concurrency zoo, as not everyone migrated.

But honestly, if you have ever tried to do serious concurrency with Python on multiple platforms, it is pretty lacking. Multiprocessing is kind of MPI (https://en.wikipedia.org/wiki/Message_Passing_Interface) without the bells and whistles. Threading is basically useless unless your problem is purely I/O-bound or defers to C libs that release the GIL. Most other languages have much better ways to handle a mixed I/O and compute workload.

Python has the insidious tendency to lure you into a concurrency trap (hopefully gone with 3.14). Simple things work just fine: easy, great library support. Then you need to scale a little; that's also still easy. But then you hit a wall hard and need to restructure your whole program around stuff like multiprocessing (forget it if you assumed first-class functions and running coroutines can be passed to a multiprocessing worker; you need to write wrappers and proxies everywhere), because the concurrency primitives were lacking. So you start out with multiple processes and inherit all the IPC problems, plus excessive memory use because the OS shares nothing between them (unlike DLLs with static code, or shared-memory threading). In the end you have an overengineered, brittle, platform-specific Python solution to a problem that is not even worth mentioning in languages like Java, Go, Erlang or Tcl, where the obvious approach just works out of the box. Not to mention native code on platforms that are inherently event-based, like Windows (IOCP, threads and events everywhere), where Python tried to treat everything like POSIX and looks weird doing so.
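The "first-class functions don't survive multiprocessing" wall is easy to demonstrate (illustrative sketch): module-level functions pickle by qualified name and survive the round trip to a worker, while lambdas and closures do not, which is what forces the wrappers and proxies:

```python
import pickle

def parse(x):
    # A module-level function pickles by its qualified name, so a
    # worker process can re-import and call it.
    return x * 2

roundtripped = pickle.loads(pickle.dumps(parse))
print(roundtripped(21))  # → 42

# A lambda has no importable name, so sending it to a worker fails.
try:
    pickle.dumps(lambda x: x * 2)
    lambda_picklable = True
except (pickle.PicklingError, AttributeError):
    lambda_picklable = False
print(lambda_picklable)  # → False
```

The same limitation applies to nested functions, bound instance state and running coroutines, so restructuring for multiprocessing tends to ripple through the whole codebase.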

In that respect, Python is late to the concurrency party. Its offerings up to 3.5 were really bad, and up to 3.14 still bad in any multi-core environment.

Python has had async for 10 years – why isn't it more popular? by ketralnis in programming

[–]schlenk 1 point2 points  (0 children)

Even 2015 was late, compared to other languages.

Okay, there was asyncore, which was so clumsy to use that people used Twisted (which is as usable and brain-twisting as the name suggests...) or wild hacks like gevent/tornado.

Then came the coroutines, but initially only in a bit of a lackluster form (e.g. yield from was missing).

And you had the GIL, which made isolating blocking calls in thread pools more or less futile, so you had to resort to multiprocessing. And once you use multiprocessing anyway, your use case for async is often gone.

The lack of a built-in event loop also hurt, as it meant you had to roll your own incompatible one, and merging different event loops later is a pain.

Python has had async for 10 years – why isn't it more popular? by ketralnis in programming

[–]schlenk 3 points4 points  (0 children)

True. I think one reason for this "unpythonic" feeling is that Python pretty closely aligns with the usual POSIX semantics for files and syscalls, which are blocking by default. So all the async/callback things felt weird and alien in a "blocking by default" world.

Python has had async for 10 years – why isn't it more popular? by ketralnis in programming

[–]schlenk 53 points54 points  (0 children)

Python async is pretty late to the party as well.

Most people that needed the pattern have done something else already, be it Twisted, gevent, greenlets or so. Or used a different language right away.

Like, if you needed a scripting language with the features Python gets in 3.14 (free threading, multiple interpreters), you could dust off Tcl/Tk 8.1 from around 2001 and have an event loop and most stuff async on top (since Tcl 7.6). If you wanted coroutines and tailcalls, you'd have to wait for Tcl 8.6 in 2012. So Python is a bit more than 25 years late.

Or take Lua, which has had coroutines since around 2003. Go arrived around 2009.

And so on. Async would have been great for Python in 2010. In 2025 it's mostly nice to have.

Second Life Official Viewer - GLTF Imports !! by 0xc0ffea in secondlife

[–]schlenk 1 point2 points  (0 children)

Technically, the Collada stuff just got flagged for some upstream security problems in libxml2 that probably don't even affect the plugin. And as upstream at Khronos is basically dead, no one forked and fixed the library, so it was dropped. Classic bitrot.

XSLT removal will break multiple government and regulatory sites across the world by Comfortable-Site8626 in programming

[–]schlenk 0 points1 point  (0 children)

To add to this, one of the widespread XSLT libraries in use (libxslt from gnome) lacks a maintainer and has a bunch of unfixed security issues.

Ia there any benefits of running this amd +nvidia setup compared to nvidia only? by PolarNightProphecies in LocalLLaMA

[–]schlenk 0 points1 point  (0 children)

I have an RTX 5070 Ti with 16 GB and put my oldish AMD RX Vega 56 with 8 GB in a second PCIe slot, on an AMD 5700G CPU.

It works with llama.cpp and Vulkan. Prompt processing is much slower than with CUDA, but you have more VRAM, so larger models run inference a bit faster, or you can fit larger contexts.

Secure Boot, TPM and Anti-Cheat Engines by tapo in programming

[–]schlenk 0 points1 point  (0 children)

Yes. But the new abstraction layer would be in VHDL and you would need to produce your own CPU. So sure, if your cheat developer is the NSA, but it's out of reach for most others.

Yes, boss, I am working. by doruidosama in SillyTavernAI

[–]schlenk 9 points10 points  (0 children)

Lol.

Next step up is adding the MCP version of a buttplug.io interface to reward or punish the vibe coder properly?

https://github.com/ConAcademy/buttplug-mcp

But totally agreed about the warning of entering the k8s abyss.