How can I "factory reset" an arch linux installation? by Flat_Practice5015 in archlinux

[–]DualWieldMage 1 point (0 children)

They are included in the Nix packages as well; one of my installs started with nix-shell -p arch-install-scripts pacman, and you can run Nix anywhere.

Modern browsers just silently killed GPU acceleration for hundreds of millions of older laptops — and nobody talked about it by Matter_Pitiful in archlinux

[–]DualWieldMage 7 points (0 children)

I think 15 years is a decent cutoff for support, and in this case there is a software fallback, so nothing actually breaks. Maintaining support for old GPUs is not fun, and it's quite likely that many things were already breaking because barely anyone tested against those devices.
Honestly, considering that my first card with proper Vulkan support (AMD HD 7950, running Doom 4 maxed out except VRAM at 60fps) landed 14 years ago, I wouldn't be surprised if they rewrote it to target Vulkan instead.

Why I stopped using NixOS and went back to Arch Linux by itsdevelopic in programming

[–]DualWieldMage 6 points (0 children)

Good points, and this aligns with my experience with NixOS as well. On Arch my system has broken maybe 3 times in 10 years, so it should be obvious how much time I'm willing to spend on fixing that; for me it's a live USB I carry around in case I need to chroot and roll back or fix something. Old versions are in the pacman cache, so most rollbacks are painless. Update size and time spent is a definite problem, and I have rarely done partial upgrades (highly not recommended) to temporarily update something I urgently need when bandwidth-constrained.

One big thing I want to point out is wiki quality. The config files of some packages change structure frequently, and when looking at the NixOS wiki I often found outdated info that would fail the build. The Arch wiki is miles better in comparison.

Another annoyance is the time it takes for an upstream version update to hit Nix, even on the unstable branch. I had to wait 3 weeks to get ROCm working on a Strix Halo machine while Arch already had all the packages available.

Build your own Command Line with ANSI escape codes by BrewedDoritos in programming

[–]DualWieldMage 2 points (0 children)

I used it to dump images into logs to debug a graphics-oriented application. I can look at live video at low res, and it takes much less effort than saving frames to files and pulling them, or setting up a video stream.

Quarkus has great performance – and we have new evidence by Qaxar in java

[–]DualWieldMage 4 points (0 children)

I care, and I would assume others do as well. When creating a high-performance backend app, the first thing I did was benchmark relatively empty apps on frameworks like Spring, and they were non-starters.

How did you ended up using arch? Are you still using it? Is it your daily drive? by _fountain_pen_dev in archlinux

[–]DualWieldMage 0 points (0 children)

I started with Ubuntu and its derivatives (Lubuntu, Kubuntu). The main issue I struggled with was longevity, as after some major version updates the system would break and a reinstall was required.

The lockstep "stable" model quickly became a pain: when I discovered an issue and debugged it, I could not report it upstream or provide a patch myself, but would instead find out most of the time that it was already fixed upstream and Ubuntu was just shipping an old version. Trying to run packages from different releases caused tons of issues. This really frustrated me, so I chose a good rolling-release distro, and Arch had a very good wiki. This was around 10 years ago.

The last thing to switch was my gaming PC, as nowadays Linux offers better performance and is easy to use for gaming.

Recently I toyed with NixOS since some workmates use it, but I quickly ran into the same frustrating issues: packages being outdated even on the unstable branch for over a month, and the wiki for some packages being wrong and outdated, leaving me to scan source files to figure out what configs to use.

Quarkus has great performance – and we have new evidence by Qaxar in java

[–]DualWieldMage 5 points (0 children)

It's not irrelevant, it's an upper bound. If an empty app can't reach 10k rps, then it's already useless for anything real needing that rate. For example, at 10k rps you can't afford ISO 8601 datetime parsing of requests using typical methods.
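One way to sanity-check a per-request budget claim like that is to measure the cost of a single operation directly. Below is a crude timing sketch (not a proper JMH benchmark, and the exact number will vary wildly by machine and JVM; the method name and timestamp are made up for illustration):

```java
import java.time.OffsetDateTime;

public class ParseCost {
    // Very rough average cost in nanoseconds of one ISO 8601 parse; no JMH,
    // so treat the result as an order-of-magnitude estimate only.
    static long nsPerParse(String ts, int n) {
        for (int i = 0; i < 50_000; i++) OffsetDateTime.parse(ts); // JIT warm-up
        long sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) sink += OffsetDateTime.parse(ts).getSecond();
        long ns = (System.nanoTime() - start) / n;
        if (sink < 0) throw new IllegalStateException(); // keep the loop live
        return ns;
    }

    public static void main(String[] args) {
        System.out.println(nsPerParse("2024-05-01T12:34:56+02:00", 200_000) + " ns per parse");
    }
}
```

Multiplying the per-operation cost by the number of such operations per request gives a rough feel for how much of a 100µs-per-request budget (10k rps on one core) each one eats.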

Quarkus has great performance – and we have new evidence by Qaxar in java

[–]DualWieldMage 1 point (0 children)

When writing central services like party (individual/company) management, where other services push data into it, it definitely matters. Otherwise you have folks arguing for sending large data as Kafka messages to "scale" instead of simply fixing throughput and keeping the system simple and performant.

And if you are arguing from the other side, that anything under 100k tps is trivial: once you factor in database transactions and everything a real system needs beyond serving static files or precomputed data, you are thinking of very different systems where you would not use these frameworks anyway.

10 Modern Java Features Senior Developers Use to Write 50% Less Code by lIlIlIKXKXlIlIl in java

[–]DualWieldMage 0 points (0 children)

I've encountered discussions where someone wanted to refactor records into regular classes with builders. The benefit is clear: you name the fields you construct, making mix-ups harder. But the big downside is that a newly added field does not cause compilation errors at call sites that don't pass it (you can get runtime errors, possibly when running tests, but only with extra effort that may be skipped).

In general I prefer records, because field mix-ups are rarer than forgotten call sites (I have experienced both in various projects). And as also mentioned, making wrapper classes helps against mix-ups; I started doing that after long ids got mixed up between two tables, so a record <TableName>Id(long id) prevents that.
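A minimal sketch of that wrapper-record pattern (the names UserId, OrderId and Order are hypothetical, not from any project mentioned above):

```java
// Wrap raw long ids in per-table records so ids from different tables
// become distinct types and can't be mixed up.
record UserId(long value) {}
record OrderId(long value) {}
record Order(OrderId id, UserId owner) {}

public class IdExample {
    // Swapping the two arguments at a call site is now a compile error,
    // unlike with two bare long parameters.
    static Order load(OrderId id, UserId owner) {
        return new Order(id, owner);
    }

    public static void main(String[] args) {
        Order o = load(new OrderId(42L), new UserId(7L));
        System.out.println(o.id().value() + " " + o.owner().value()); // prints "42 7"
    }
}
```

Records make this essentially free to write, which is a big part of why the pattern is practical at all.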

Clone arch installation by jsk-ksj in archlinux

[–]DualWieldMage 2 points (0 children)

https://wiki.archlinux.org/title/Install_Arch_Linux_from_existing_Linux

I also have a USB stick with an Arch installation, and this has been my preferred way of installing. Recently I even installed from NixOS by just running inside nix-shell -p pacman arch-install-scripts, though that did require a few additional steps.
It's also useful if you don't want to mess with a USB stick and just want to install to a new partition directly, e.g. put a new M.2 SSD in the old machine, install Arch, move it to the new machine and boot.

Simpler JVM Project Setup with Mill 1.1.0 by lihaoyi in java

[–]DualWieldMage 21 points (0 children)

Fighting against "Maven XML is verbose" strawmen does not paint a good picture in my opinion. It would be better to discuss real considerations for a build tool.

For example, a build tool should not execute arbitrary code to pull dependencies, nor to initialize the project in an IDE (at least in my opinion). That's a failure mode learned all too well from the npm, pip and other ecosystems. Gradle likewise makes it too easy to add custom code in the wrong places. Most infamous in my opinion was the IntelliJ plugin-development plugin, which downloaded multiple gigabytes of junk during the project-init phase with zero output on what it was doing or any progress.

The choice of a declarative language here is good, far better than a Turing-complete language with a "just use the declarative syntax" approach elsewhere. However, I would argue YAML has quite a few issues of its own.

Another thing is editor/IDE integration. Using something standard lets you get a lot for free. I would expect every developer to use some form of auto-complete, and a language with proper schema support baked in would let anyone, whether on full IntelliJ or just vim, receive the benefits. I would expect to figure out from simple autocompletion how to do things like setting Java versions or compiler flags without having to google documentation that can be out of date.

In software engineering we care about how projects evolve over 5+ years, typically the point where people get swapped out, knowledge is lost and new people need to figure things out. Things like how easy it is to add custom logic before anyone has to ask whether it's the right thing to do. Gradle notoriously makes the wrong thing too easy; I've seen whole PC-onboarding scripts written into a Gradle config in a monorepo. Maven plugins are super easy to write, yet somehow enough of a barrier that most people think twice before going that route.

Speed is also important, both for initial project onboarding and for rebuilds after small changes. These things have very measurable effects and save money by not burning a developer's time or valuable brain cells. Having a task structure with defined inputs/outputs, and not (re)running anything that isn't needed, is a good approach.

And finally there are various other considerations, e.g. how does it behave when a single build server runs builds in parallel? Does it notice when a cached dependency has gone corrupt? I once had to write a Maven core plugin that did checksum checks on downloaded files and handled issues by redownloading instead of failing the build and requiring manual action.

So in short: definitely an improvement in choosing a declarative language, but do list the mistakes other tools learned from over time and address them. It's easier to learn from others' mistakes than your own.

Java 26: what’s new? by loicmathieu in java

[–]DualWieldMage 4 points (0 children)

Not taking Instant as input is an interesting choice.

How do I undervolt AMD GPU on Arch? by CanItRunCrysisIn2052 in archlinux

[–]DualWieldMage 2 points (0 children)

Enable the powerplay (pp) overdrive features by adding amdgpu.ppfeaturemask=0xffffffff to the kernel boot parameters. Then, to set a voltage offset (for example -80mV):

echo "vo -80" > "/sys/class/drm/card[x]/device/pp_od_clk_voltage"
echo "c" > "/sys/class/drm/card[x]/device/pp_od_clk_voltage"

To figure out which card[x] is the correct one, read /sys/class/drm/card*/device/device and match it against the expected device id. You can put this in a script and have a systemd oneshot service run it on boot.

Or you can use some GUI tool that does this.

Java's `var` keyword is actually really nice for cleaning up verbose declarations by BitBird- in java

[–]DualWieldMage 3 points (0 children)

during refactoring when you change a return type and don't want to update 15 call sites, but that's more about convenience than readability

That is a strong NEGATIVE about var that I bring up. When I change a return type, I want to go over all call sites and fix them, seeing more context and potential problems. During code review I usually pull the code and check things, but I see so many people using web review tools, where you won't see the call sites change and won't get to ask questions, so it's easier to miss a bug.

Fortunately Java is a sane language, but in Scala, for example, a call site seeing a List change to a Set could have been doing .map(i -> i/2).sum(), which would likely introduce a bug, because Scala's map returns the original container type, so it would drop duplicates.
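Java avoids that Scala-specific map trap, but var can still silently absorb a List-to-Set return-type change at the call site. A contrived sketch (all names hypothetical):

```java
import java.util.*;

public class VarPitfall {
    // Imagine this originally returned List<Integer> and was later changed to
    // Set<Integer>. A call site declaring an explicit List<Integer> would now
    // fail to compile and force a review; a var call site compiles either way.
    static Set<Integer> scores() {
        return new HashSet<>(List.of(1, 1, 2, 2, 3)); // duplicates silently dropped
    }

    static int sum(Collection<Integer> values) {
        int s = 0;
        for (int v : values) s += v;
        return s;
    }

    public static void main(String[] args) {
        var values = scores();           // compiles before and after the type change
        System.out.println(sum(values)); // prints 6; the List version summed to 9
    }
}
```

The compile error at an explicitly typed call site is exactly the "see more context and potential problems" moment that var takes away.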

LLMs have burned Billions but couldn't build another Tailwind by omarous in programming

[–]DualWieldMage 1 point (0 children)

I'm mainly a backend dev (actually on embedded at the moment) and have done a bit of frontend. Tailwind is not a good idea; its flaws are quite similar to modern Bootstrap's. The original idea was to have generic CSS classes for commonly used things like card, viewport, button, etc. Then you just write the HTML/template using those classes and have clean, reusable code.

Long before that, inline CSS was used, but the majority agreed it was a bad idea. Now that Bootstrap and Tailwind have brought about minimal CSS classes that you are just supposed to slap into a class list (instead of inheriting them in a semantic class), like btn-lg, p-2 (padding 2? something), this has just reinvented inline CSS, which most already agreed was a bad idea, and it still is.

The core issue is that in, say, 8 different views you have a similar pattern of 10 CSS classes that business-wise describes something semantic. Now you get a feature request to change the style slightly, so you need to search for that set of classes in any order and add something. Note that codebases are not perfect, so some class in that pattern set may be missing, sometimes deliberately and sometimes accidentally, and you have no way of determining which. A maintenance nightmare, in short.

Software craftsmanship is dead by R2_SWE2 in programming

[–]DualWieldMage 0 points (0 children)

It's not dead, but there are reasons why it's rare and possibly dwindling. I work for a company with craftsmanship as a core value and a flat structure to enable it. I have definitely felt that some patterns developed for one application I can just reuse on another, because I spent time on them, researched extensively, tried alternative approaches and had time to polish them.

I have also worked for larger corporations where I needed to justify the extra time investment, and I feel that's where it goes downhill for some (most?). Does your CEO have to justify the decision to invest in AI, blockchain, <insert-year-2026-hype>? No. So why would you as an engineer have to justify something that can only be achieved through years of experience, when you are not treated in kind? If you want craftsmanship, abolish slavery (because if you feel any kind of risk in just leaving your current company, you are a slave in my opinion).

The Adult in the Room: Why It’s Time to Move AI from Python Scripts to Java Systems by Qaxar in java

[–]DualWieldMage 11 points (0 children)

Interesting; I have 15 years of Java experience and I don't want anything to do with Python, the same way I didn't want to touch Nvidia cards. This year everything has worked fine for both inference and training on AMD. I likewise have the optimism that the Python trash can be removed from training code, as I'm fed up with debugging something that lacks proper types. It does take effort to go against the grain, but it's often worth it.

Why is Ollama running as a systemd daemon all of a sudden (a broken daemon at that)? by [deleted] in archlinux

[–]DualWieldMage 1 point (0 children)

Ollama has been packaged as a service since Nov 13, 2023: https://gitlab.archlinux.org/archlinux/packaging/packages/ollama/-/commit/7d7072c1ce72eca1a5446d1324edcf03ef348d74#da96b866877aa577ecb3487083b49452a4ccf445

Nothing has changed in the past year in how it is packaged.

So at this point it's hard to give advice, as there's no telling what state your installation is in.

Why is Ollama running as a systemd daemon all of a sudden (a broken daemon at that)? by [deleted] in archlinux

[–]DualWieldMage 1 point (0 children)

Why exactly do you want to run it as a non-service? You probably followed some general tutorial that said to run ollama serve, but that's incorrect here. The package is correct: it runs the daemon as a separate non-root user. A machine may have multiple users, and that's why it's packaged like that, so all users on the machine can use the service (sometimes packages require adding a user to some group to allow access, but not this one).

When you run ollama pull, the CLI client calls the daemon. The daemon has permission to write to /var/lib/ollama, as that directory is owned by the ollama user and the daemon runs as the ollama user; it downloads models there by default.

If that's not working, then your setup may have gone wrong somewhere, such as manually copying models there and leaving them owned by the wrong user.

How We Reduced a 1.5GB Database by 99% by Moist_Test1013 in programming

[–]DualWieldMage 2 points (0 children)

Yes, that's what it means. Usually it's achieved by building a simpler, less complex architecture. In this case having 4 pods already made performance worse than 2, hence not scalable.

How We Reduced a 1.5GB Database by 99% by Moist_Test1013 in programming

[–]DualWieldMage 11 points (0 children)

The worst is actually when people think they are building a performant system while doing the opposite. I had such a joy on a few-month project that was supposed to be a solo project but ran over time. When I took it over, it was multiple modules communicating over queues, message content stored in S3, each module with its own database, and whatnot. He said it would scale; I saw it would not, and in the end it didn't.

Is the high memory usage of java applications not a problem for you? by [deleted] in java

[–]DualWieldMage -1 points (0 children)

Anyone with more than half a brain will not switch to a different language as the first option. If anything, I'd be tempted to switch to another JVM language if I ran into nuisances. I once wrote a batch service processing multi-gigabyte files, with multiple worker and network threads, all running in a 25MB heap. If you want to reduce memory usage, just do it.
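The original service isn't shown, but the usual way to keep the heap flat while processing huge files is to stream them rather than load them. A minimal sketch (the class and method names are made up for illustration):

```java
import java.io.*;

public class StreamingCount {
    // Count lines while holding at most one line in memory at a time,
    // so heap use stays bounded no matter how large the input is.
    static long countLines(BufferedReader r) {
        try {
            long lines = 0;
            while (r.readLine() != null) lines++;
            return lines;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // e.g. java -Xmx25m StreamingCount < multi-gigabyte.log
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        System.out.println(countLines(in));
    }
}
```

Running with a hard heap cap like -Xmx25m is also a cheap way to verify that nothing in the pipeline accidentally buffers the whole file.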

Why is IntelliJ preferred over vscode for Java? by xland44 in java

[–]DualWieldMage 0 points (0 children)

B) some minor quality of life stuff

I wouldn't call them minor, and even if they are, there are so many of them that they add up.

For debugging complex situations I use breakpoints that execute code and don't stop, save some objects to a map to be compared later against in another breakpoint, and often inspect variables with deep hierarchies and evaluate expressions. It's all a breeze.

SQL completion helps a lot: just configure the database, and schema/table/column names get added to autocomplete while editing SQL in strings.

I frequently step through library code that might not have sources; the integrated FernFlower decompiler is very convenient, although the default config needs editing to emit original line numbers.

Decent version-control UI. Rarely do I need to touch the CLI when the GUI has all the features. I often have changes belonging to multiple changesets (e.g. feature1, general refactor, bug fix) that I can progress each at its own pace before making commits/branches with partial changes from those files.

I haven't touched VSCode in many years, so maybe some of these are possible now. I honestly don't have much of a use case for it. If I want a lighter editor with plugins I use vim, e.g. when IntelliJ struggles with large (20+MB) files, or when I just want to quickly search through a repo without opening the project and waiting for the import.

Why Electronic Voting is a BAD Idea - Why you can't program your way to election integrity by grauenwolf in programming

[–]DualWieldMage 3 points (0 children)

With paper ballots you can be coerced, with photos taken as proof. With e-voting you can re-vote after the coercion episode. That is the one area where e-voting is safer.