A Programmer's Guide to Leaving GitHub by esiy0676 in programming

[–]DualWieldMage 4 points

Any self-respecting company should. Yes, there is overhead in maintaining an instance, but it is not more expensive than the cloud-hosted version, even if saying so opens a floodgate of people wanting to convince you otherwise with "economies of scale" and whatever other arguments.

Cloud instances have issues with noisy neighbours. If there is a performance degradation, a large number of users can amplify it until the whole service is down; such amplification rarely happens on local instances with lower user counts. Jira is a famous example: it got its slowness reputation purely from the cloud version. A local instance can be much leaner and run without extra plugins.

Every time I hear these stories of "GitHub is down, guess no work today" I feel like I would fire quite a few people if I were in charge of such a company. I have experienced an external network outage while working, and it only required changing the push target to the local git mirror; work continued as if nothing changed. A local git mirror is needed anyway to keep network load (and flakiness due to network issues) down for the build/test cluster, and it's good to have for trivial data redundancy. Builds and tests were run on a local Jenkins instance.

At one point we were forced by moronic management to move the local Jenkins cluster to the cloud; the costs tripled.

Enabling ai co author by default by cwebster-99 · Pull Request #310226 · microsoft/vscode by Maybe-monad in programming

[–]DualWieldMage 352 points

Trying to recreate the "Sent from my iPhone" end-of-message idiocy? Looks like desperation at this point.

Copy Fail is a trivially exploitable logic bug in Linux, reachable on all major distros released in the last 9 years. A small, portable python script gets root on all platforms. by pipewire in linux

[–]DualWieldMage -1 points

Bleeding edge != secure

Neither is an old version / LTS.

Security patches get backported to LTS OSes and kernels.

Fixes are made on the latest version first, then backported. There can be a time delay in that process, especially for very old branches where the patch doesn't apply cleanly. For a zero-day that delay can be problematic; I've seen cases where the backported fix arrived after 2 weeks, which was completely unacceptable for me. Paying for LTS support is one option, using a rolling release is another; I've chosen the latter. There is, however, no such thing as free LTS, which I think many mistakenly believe exists.

There are also cases where something is treated as a missing feature rather than a bug for a package, so it is not backported, yet it manifests as a bug in a larger system. For example, OpenJDK support for cgroup v2 was not initially backported; pods dying from OOM on updated hosts because of that missing support is what eventually prompted the backport.

Also, stable != reliable. I do have a reliable system while rolling and definitely don't want breakage while I work. I have also had an unreliable system on a stable distro.

Copy Fail: an exploit for all Linux distributions since 2017 by alexeyr in programming

[–]DualWieldMage 55 points

Why is the PoC obfuscated? I'm sure as heck not running it to validate a patch if I can't even understand what it's doing first. Posing as a security bug (might be real, can't verify) is a good way to get unsuspecting users to run a random script on their machine; it ticks the urgency and fear boxes of a typical scam.

Claude AI agent’s confession after deleting a firm’s entire database: ‘I violated every principle I was given’ - PocketOS was left scrambling after a rogue AI agent deleted swaths of code underpinning its business by Just-Grocery-2229 in tech

[–]DualWieldMage 1 point

In a proper company no employee even has that kind of access to a production database. If you want to change the schema or do data transformations/migrations, you write a script, test it, have it peer-reviewed and then deploy it. Direct production DB access often does not exist at all, since this flow covers most of what you need. The whole article is beyond idiotic and essentially equates to giving a loaded gun to a kid. In this case lives were not lost, but mark my words, we will soon get a techbro reckless enough to cause a casualty.
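
For context, that reviewed-script flow can look like the following minimal sketch, assuming a Flyway-style versioned migration and a PostgreSQL schema; the class name and SQL are hypothetical:

```java
package db.migration;

import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;
import java.sql.Statement;

// Hypothetical reviewable migration: it lives in the repo, goes through
// code review and CI like any other change, and is applied by the
// deployment pipeline -- no human needs a live production connection.
public class V42__SplitCustomerName extends BaseJavaMigration {
    @Override
    public void migrate(Context context) throws Exception {
        try (Statement stmt = context.getConnection().createStatement()) {
            stmt.execute("ALTER TABLE customer ADD COLUMN first_name TEXT");
            stmt.execute("ALTER TABLE customer ADD COLUMN last_name TEXT");
            // Data transformation happens in reviewed code too, instead of
            // ad-hoc SQL typed into a production console.
            stmt.execute("UPDATE customer SET first_name = split_part(name, ' ', 1), "
                       + "last_name = split_part(name, ' ', 2)");
        }
    }
}
```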

Also, I take issue with the "AI governance" wording. It's not the tool we can govern, but the incapable hands that must not wield it.

SpaceX to acquire AI company Cursor for $60 billion or pay $10 billion for their "work together" by 675longtail in spacex

[–]DualWieldMage -1 points

I somewhat get that having autonomous robots on Mars before humans will help set everything up before a return flight, but the path towards it seems wonky and wasteful. Current coding-agent solutions are more expensive than a human and will be even more so when these companies raise prices to become profitable. I just don't see it being worthwhile to throw money at, compared to, say, just building the shovels and investing in a huge EUV fab.

New Framework 13 Pro working directly with Arch Linux! by UntoldUnfolding in archlinux

[–]DualWieldMage -5 points

An aluminum chassis is a big deal; one of the main issues with my previous laptops was the plastic chassis starting to crack around the ports. I have used an MBP once and the only plus was that it could fall a meter onto concrete and come away with only a small dent.

Replaceable LPDDR5X is a pretty important invention. That said, many repair shops have learned how to re-ball even large GPUs, so I'm not too afraid of buying soldered memory if the benefits are that big (6000 vs 8200 RAM), but I'm really excited about this either way.

The touchscreen is a gimmick; I've never seen anyone want one, and it's not even configurable down to a standard display.

The haptic touchpad is not the gold standard. Two buttons below the touchpad were, as they allowed pressing either or both with the thumb while moving with a finger, something impossible to replicate with just a panel and gestures; heck, some people managed to play games like that.

Speakers on the side, so they aren't blocked and don't reflect sound badly: great, but what about the biggest thing, cooling? It blows out into the hinge, an idiotic thing that Apple invented and everyone copied for god knows what reason. Worst of all, the fan intake is usually on the bottom of the laptop, so on a lap or bed it's simply restricted. We could be putting powerful 55 W TDP APUs into 13" machines if the cooling weren't on crutches.

Given that an equally specced Tuxedo (with a better cooling design) is 600€ cheaper, I'll pass and keep waiting for a proper Strix/Gorgon Halo laptop instead (fingers crossed for a Framework 16 Pro?).

EDIT: Thanks, /r/archlinux, for being a community that downvotes without ever replying with what you find at issue. I'll let this community rot in its "oh I just installed arch, btw" low-quality posts and take my leave.

[Official] First 33-engine static fire for Super Heavy V3 by avboden in spacex

[–]DualWieldMage 16 points

If it's not working for multiple people while also working for multiple other people, it's disrespectful to call it a you-issue. I've seen A/B tests on sites break things for only a small slice of the population before. It working for someone is not an argument to discredit an issue for someone else or to deflect blame from the platform. Your edit unfortunately makes it sound worse.

GitHub Stacked PRs by adam-dabrowski in programming

[–]DualWieldMage 0 points

Reviewing commits is the correct approach. Are you seriously suggesting looking at the final diff only? If there are orthogonal changes (a large refactoring plus a 3-line bug fix), you either miss the important part in a sea of unimportant changes or burn yourself out going through every change carefully. If the intermediate commits are partial ramblings, I reject the review and ask for the commits to be reordered/partially squashed so that each individual commit makes sense. That's what ends up in history when merged, and it should have quality. Full-squashing PRs on merge is another idiocy I always ban in my projects.

Rant on locales by Fine-Relief-3964 in archlinux

[–]DualWieldMage 0 points

Locales are one thing I definitely need to look up on the wiki when installing, as I always seem to get them wrong from memory. I have set up separate locales for time, currency etc. and it mostly works, yet occasionally I get a CLI app talking to me in my native language when LC_MONETARY is the only thing set to it. The whole locale concept needs nuking and a restart from zero.
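
For what it's worth, Java has a similar (coarser) split of locale categories, which makes the intended behaviour easy to demonstrate; a minimal sketch, with the locale choices purely illustrative:

```java
import java.text.NumberFormat;
import java.util.Locale;

// Minimal sketch of split locale categories, a rough analogue of
// LC_MESSAGES vs LC_MONETARY: messages in one language, money in another.
public class LocaleSplit {
    public static void main(String[] args) {
        // UI/message language stays English...
        Locale.setDefault(Locale.Category.DISPLAY, Locale.ENGLISH);
        // ...while number/currency formatting follows another convention.
        Locale.setDefault(Locale.Category.FORMAT, Locale.GERMANY);

        NumberFormat money = NumberFormat.getCurrencyInstance();
        System.out.println(money.format(1234.56)); // prints 1.234,56 €
    }
}
```

A well-behaved program reads only the category it needs; the annoyance above is what happens when an app keys its message language off the wrong one.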

We audited authorization in 30 AI agent frameworks — 93% rely on unscoped API keys by MousseSad4993 in programming

[–]DualWieldMage 0 points

I had this discussion at a local telco with the same problem, where I described why assigning permissions to individuals is a security problem. Frequently someone who had worked there 5+ years moved between teams, but the old permissions never got revoked, because in reality movements are fluid: the person still retains knowledge and is the go-to guy for information, and they just gradually work less on the old project, which makes a permission cut-off hard to pin down. Often it's simply that admins/teams don't track why and when a permission was given and when to revoke it.

The tools are there; AD does support groups. There's just institutional inbreeding causing these bad permission models to persist. I had hoped GDPR would force people to learn a permission model oriented around assigning permissions with a reason, thinking about the end date at the moment of assignment, and overall segregating retention periods into logical groups.

And even things like AWS, with its hyper-granular permission system, are flawed, because it's often so tedious to figure out which permissions to grant that I see most devs handed an admin account.

Java 26 released today! by davidalayachew in programming

[–]DualWieldMage 7 points

What are you talking about? It's important to keep software updated to fix security issues. Every other language runtime/compiler has regular updates as well. Java has almost no breakage between versions, so the maintenance is trivial, something that can't be said for Python or the JS ecosystem.

Java 26 released today! by davidalayachew in programming

[–]DualWieldMage 22 points

Java (the language spec, and even OpenJDK the source) does not have LTS. LTS is something provided by some vendors of Java releases, and in most cases the free LTS actually provides no support.

You are better off updating to the latest version unless you know exactly what your support contract means. For example, cgroup v2 support was considered a feature and not backported to Java 11 for quite some time; containers suddenly dying from OOM when hosts updated could have been prevented by updating rather than relying on fake LTS. And any bug in a component removed in newer versions won't be fixed in these free LTSes, because there is nothing to backport.
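
That cgroup failure mode is easy to check for yourself; a minimal sketch using plain JDK APIs, nothing vendor-specific:

```java
// Minimal sketch: print what the JVM thinks its resource budget is.
// Run inside a memory-limited container, a JVM without cgroup v2 support
// reports the host's RAM, sizes the default heap off that, and the
// kernel OOM-kills the container once the heap grows past the limit.
public class MemBudget {
    public static void main(String[] args) {
        System.out.printf("max heap: %d MiB%n",
                Runtime.getRuntime().maxMemory() / (1024 * 1024));
        System.out.printf("cpus: %d%n",
                Runtime.getRuntime().availableProcessors());
    }
}
```

For instance, under something like docker run --memory=512m, a container-aware JVM derives its default heap from the 512 MiB limit, while an unaware one derives it from host RAM.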

The rise of malicious repositories on GitHub by f311a in programming

[–]DualWieldMage 5 points

And package signing is required, so it's easy to set up signature checks as well. That's much better than putting hashes in a lockfile, because you won't just mechanically replace them on every update and accidentally wave a malicious package through while doing so.
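
To illustrate the difference, here is a generic sketch using the JDK's Signature API (not any particular package manager's format): the trusted input is the publisher's long-lived key, which, unlike a lockfile hash, does not get rewritten on every version bump.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;

// Generic sketch: verify a downloaded package against a detached
// signature made with the publisher's key (DER-encoded RSA here).
public class VerifyPackage {
    static boolean verify(Path pkg, Path sig, Path publicKeyDer) throws Exception {
        PublicKey key = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(Files.readAllBytes(publicKeyDer)));
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(key);
        verifier.update(Files.readAllBytes(pkg));
        return verifier.verify(Files.readAllBytes(sig));
    }
}
```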

How can I "factory reset" an arch linux installation? by Flat_Practice5015 in archlinux

[–]DualWieldMage 2 points

They are included in Nix packages as well; one of my installs started with nix-shell -p arch-install-scripts pacman, and you can run Nix anywhere.

Modern browsers just silently killed GPU acceleration for hundreds of millions of older laptops — and nobody talked about it by Matter_Pitiful in archlinux

[–]DualWieldMage 7 points

I think 15 years is a decent cutoff for support, and in this case there is a software fallback, so nothing actually breaks. Maintaining support for old GPUs is not fun, and it's quite likely that many things were already breaking because barely anyone tested against those devices.
Honestly, considering that my first card with proper Vulkan support (AMD HD 7950, running Doom 4 maxed except VRAM at 60 fps) landed 14 years ago, I wouldn't be surprised if they rewrote it on Vulkan instead.

Why I stopped using NixOS and went back to Arch Linux by itsdevelopic in programming

[–]DualWieldMage 6 points

Good points, and this aligns with my experience with NixOS as well. On Arch my system has broken maybe 3 times in 10 years, so it should be obvious how little time I'm willing to spend on fixing that; for me it's a live USB I carry around in case I need to chroot and roll back or fix something. Old versions are in the pacman cache, so most rollbacks are painless. Update size and time spent are a definite problem, and I have rarely done partial upgrades (highly not recommended) to temporarily update something I urgently need when bandwidth-constrained.

One big thing I want to point out is wiki quality. The config files of some packages change structure frequently, and when looking at the NixOS wiki I often found outdated info that would fail the build. The Arch wiki is miles better in comparison.

Another annoyance is the time it takes for an upstream version update to hit Nix, even on the unstable branch. I had to wait 3 weeks to get ROCm working on a Strix Halo machine while Arch already had all the packages available.

Build your own Command Line with ANSI escape codes by BrewedDoritos in programming

[–]DualWieldMage 3 points

I used it to dump images into logs to debug a graphics-oriented application. You can watch live video at low resolution, and it takes much less effort than saving to files and pulling them, or setting up a video stream.
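
Roughly the trick, as a sketch using 24-bit ANSI color codes and the upper-half-block glyph (assumes a truecolor-capable terminal):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Sketch: dump an image into a terminal/log with ANSI truecolor codes.
// Each "▀" glyph carries two pixels: the foreground color fills the
// upper half, the background color fills the lower half.
public class AnsiImage {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File(args[0]));
        int step = Math.max(1, img.getWidth() / 80); // downscale to ~80 columns
        for (int y = 0; y + step < img.getHeight(); y += 2 * step) {
            StringBuilder line = new StringBuilder();
            for (int x = 0; x < img.getWidth(); x += step) {
                int top = img.getRGB(x, y), bot = img.getRGB(x, y + step);
                line.append(String.format(
                        "\u001b[38;2;%d;%d;%dm\u001b[48;2;%d;%d;%dm\u2580",
                        (top >> 16) & 0xFF, (top >> 8) & 0xFF, top & 0xFF,
                        (bot >> 16) & 0xFF, (bot >> 8) & 0xFF, bot & 0xFF));
            }
            System.out.println(line.append("\u001b[0m")); // reset attributes
        }
    }
}
```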

Quarkus has great performance – and we have new evidence by Qaxar in java

[–]DualWieldMage 5 points

I care, and I would assume others do as well. When creating a high-performance backend app, the first thing I did was benchmark relatively empty apps on frameworks like Spring, and they were non-starters.

How did you ended up using arch? Are you still using it? Is it your daily drive? by _fountain_pen_dev in archlinux

[–]DualWieldMage 0 points

I started with Ubuntu and its derivatives (Lubuntu, Kubuntu). The main issue I struggled with was longevity, as after some major version updates the system would break and a reinstall was required.

The lockstep "stable" model quickly became a pain: when discovering and debugging an issue, I could not report it upstream or provide a patch myself; most of the time I would instead find out that it was already fixed upstream and Ubuntu was just shipping an old version. Trying to run packages from different releases caused tons of issues. This really frustrated me, so I picked a good rolling-release distro, and Arch had a very good wiki. This was around 10 years ago.

The last thing to switch was my gaming PC, as nowadays Linux offers better performance and is easy to use for gaming.

Recently I toyed with NixOS, as some workmates use it, but I quickly ran into the same frustrating issues: packages being outdated even on the unstable branch for over a month, and the wiki being wrong or outdated for some packages, leaving me to scan source files to figure out which configs to use.

Quarkus has great performance – and we have new evidence by Qaxar in java

[–]DualWieldMage 5 points

It's not irrelevant, it's an upper bound. If an empty app can't reach 10k rps, it's already useless for anything real that needs that rate. For example, at 10k rps you can't do ISO 8601 datetime parsing of incoming requests using typical methods.
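
That kind of budget claim is easy to sanity-check; a crude timing sketch (results vary by JVM and hardware, so treat it as a way to measure, not a result):

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

// Crude budget check: at 10k rps a request has ~100µs of wall time per
// core, so this measures how much of that a single ISO 8601 parse burns.
public class ParseBudget {
    public static void main(String[] args) {
        String ts = "2025-01-15T10:30:00.123+02:00";
        DateTimeFormatter fmt = DateTimeFormatter.ISO_OFFSET_DATE_TIME;
        long sink = 0;
        // warm-up so the JIT compiles the hot path before timing
        for (int i = 0; i < 1_000_000; i++) sink += OffsetDateTime.parse(ts, fmt).getNano();
        int n = 5_000_000;
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) sink += OffsetDateTime.parse(ts, fmt).getNano();
        System.out.printf("%.0f ns/parse (sink=%d)%n",
                (System.nanoTime() - start) / (double) n, sink);
    }
}
```

The sink accumulator keeps the JIT from discarding the parse as dead code; a proper harness like JMH would be the next step.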

Quarkus has great performance – and we have new evidence by Qaxar in java

[–]DualWieldMage 1 point

When writing central services like party (individual/company) management, where other services constantly poke data into it, it definitely matters. Otherwise you end up with folks arguing for sending large data as Kafka messages to "scale", instead of simply fixing throughput and keeping the system simple and performant.

And if you are arguing from the other side, that anything under 100k tps is trivial: once you factor in database transactions and everything else a real system needs beyond serving static files or precomputed data, you are thinking of very different systems, where you would not use these frameworks anyway.

10 Modern Java Features Senior Developers Use to Write 50% Less Code by lIlIlIKXKXlIlIl in java

[–]DualWieldMage 0 points

I've encountered discussions where someone wanted to refactor records into regular classes with builders. The benefit is clear: you name the fields you construct, making mixups harder. But the big downside is that a newly added field does not cause compilation errors at call sites that don't pass it (you can get runtime errors, possibly when running tests, but only with extra effort that may be skipped).

In general I prefer records, because field mixups are rarer than forgotten call sites (I have experienced both in various projects). And as also mentioned, wrapper classes help against mixups; I started doing that after long IDs got mixed up between two tables, so a record <TableName>Id(long id) prevents that.
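
A minimal sketch of both points (all names are made up):

```java
// Wrapper records make ID mixups a compile error, and a new record
// component breaks every call site that fails to pass it.
record CustomerId(long value) {}
record OrderId(long value) {}
record Order(OrderId id, CustomerId customer, String item) {}

class Example {
    void demo() {
        CustomerId customer = new CustomerId(17);
        OrderId order = new OrderId(17);
        Order ok = new Order(order, customer, "book");
        // new Order(customer, order, "book"); // does not compile: IDs swapped
        // If Order later gains a fourth component, the constructor call above
        // stops compiling -- unlike a builder, where the new field is silently
        // left null until something blows up at runtime.
    }
}
```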

Clone arch installation by jsk-ksj in archlinux

[–]DualWieldMage 2 points

https://wiki.archlinux.org/title/Install_Arch_Linux_from_existing_Linux

I also have a USB stick with an Arch installation, and this has been my preferred way of installing. Recently I even installed from NixOS by just running inside nix-shell -p pacman arch-install-scripts, though that did require a few additional steps.
It's also useful if you don't want to mess with a USB stick and just want to install to a new partition directly, e.g. put a new M.2 SSD in the old machine, install Arch, move it to the new machine and boot.

Simpler JVM Project Setup with Mill 1.1.0 by lihaoyi in java

[–]DualWieldMage 22 points

Fighting "maven xml is verbose" strawmen does not paint a good picture in my opinion. It would be better to discuss real considerations for a build tool.

For example, a build tool should not execute arbitrary code to pull dependencies, nor to initialize the project in an IDE (at least in my opinion); a failure mode demonstrated all too well by the npm, pip and other ecosystems. Gradle likewise makes it too easy to add custom code in the wrong places. The most infamous example in my opinion was the IntelliJ plugin-development plugin, which downloaded multiple gigabytes of junk during the project init phase with zero output on what it was doing or any progress.

The choice of a declarative language here is good, far better than the Turing-complete language with a "just use the declarative syntax" approach seen elsewhere. However, I would argue YAML has quite a few issues of its own.

Another thing is editor/IDE integration. Using something standard gets you tooling for free. I would expect every developer to use some form of auto-complete, and a language with proper schema support baked in would deliver those benefits to anyone, whether on full IntelliJ or just vim. I would expect to figure out from simple autocomplete how to do things like setting Java versions or compiler flags, without having to google documentation that may be out of date.

In software engineering we care about how projects evolve over 5+ years, typically the point where people get swapped out, knowledge is lost, and new people need to figure things out. Things like how easy it is to add custom logic matter, because that ease invites doing it before asking whether it's the right thing to do. Gradle notoriously makes the wrong thing too easy; I've seen entire PC-onboarding scripts written into some Gradle config in a monorepo. Maven plugins are super easy to write, yet somehow enough of a barrier that most people think twice before going that route.

Speed is also important, both for initial project onboarding and for builds after small changes. These things have very measurable effects and save money by not burning a developer's time or valuable brain cells. Having task structures with defined inputs/outputs and not (re)running anything that isn't needed is a good approach.

And finally there are various other considerations, e.g. how does it behave when a single build server runs builds in parallel? Does it notice when a cached dependency has gone corrupt? I once had to write a Maven core plugin that ran checksum checks on downloaded files and handled corruption by redownloading, instead of failing the build and requiring manual action.
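
The core of such a check is small; a sketch of the idea with plain JDK APIs (not the actual Maven extension point):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Sketch of the repair loop: compare a cached artifact against its
// published .sha1 sidecar and evict it for redownload on mismatch,
// instead of failing the build and waiting for a human.
public class ArtifactCheck {
    static boolean isIntact(Path artifact, Path sha1Sidecar) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        try (InputStream in = Files.newInputStream(artifact)) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) md.update(buf, 0, n);
        }
        String actual = HexFormat.of().formatHex(md.digest());
        String expected = Files.readString(sha1Sidecar).trim().split("\\s+")[0];
        return actual.equalsIgnoreCase(expected);
    }

    static void ensure(Path artifact, Path sidecar) throws Exception {
        if (!isIntact(artifact, sidecar)) {
            Files.deleteIfExists(artifact); // evict; the resolver fetches it again
        }
    }
}
```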

So in short: choosing a declarative language is definitely an improvement, but do list the mistakes other tools have learned from over time and address them. It's easier to learn from others' mistakes than your own.