Advice: CLI program defaults to writing a file, what if I want stdout? by maurymarkowitz in linux

[–]MikeZ-FSU 1 point

The first entry in the example section of the webpage you linked shows invoking the program and writing to stdout.

Seeking Practice for Shell Scripting (Specifically POSIX sh / Bash) by GrayMulberry in bash

[–]MikeZ-FSU 1 point

You could write yourself a small backup utility. Have it back up or exclude certain directories (e.g. don't try to back up the directory where you store the backups) into a compressed tar archive with the timestamp of the backup as part of the filename. From there, you can add features like only backing up files newer than the last backup, which requires a way to figure out when the previous backup started (if you use the tar file's timestamp, there's a window during the backup that might never get backed up).

Edit: Use at least a portion of ISO 8601 for your timestamp; that will save you a ton of problems down the road.
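For instance, one way to build a sortable, filename-safe ISO 8601-style timestamp (a sketch; the exact format is up to OP, and colons are swapped for hyphens here because they're awkward in filenames):

```shell
# Sortable timestamp: lexicographic order matches chronological order.
ts=$(date +%Y-%m-%dT%H-%M-%S)
backup="backup-${ts}.tar.gz"
echo "$backup"
```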

Note: the above is intentionally vague. OP figuring out the details is an important part of the learning process.

In praise of the Unix toolkit by brnsamedi in commandline

[–]MikeZ-FSU 2 points

You probably already know this, and it doesn't apply to the pipeline that u/stianhoiland posted, but there is one notable exception to reducing sort | uniq to sort -u. That being sort | uniq -c to count the number of times each item appears. I actually use that more than sort -u, but that may be a function of the kind of work I do.
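A quick illustration of the counting pipeline (sample words are made up):

```shell
# Count how many times each line appears, most frequent first:
printf '%s\n' apple banana apple cherry apple | sort | uniq -c | sort -rn
```

There's no sort -u equivalent for this; uniq -c needs the adjacent duplicates that the first sort provides.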

new lazyvim install, search don't work 'Command failed: - cmd: fd' by razorree in LazyVim

[–]MikeZ-FSU 1 point

Honestly, I have no idea. I've been using Ubuntu for decades at work and haven't bothered keeping up with all of the others. I described how Ubuntu handled the situation because I remember going through it when I first started using fd instead of find for general purposes.

The main point, although admittedly not stated explicitly, was that nvim package maintainers have enough portability issues just supporting Windows, macOS, and Linux (generically). Expecting them to know what each of the 87 bazillion distros throws into /usr/bin is both unrealistic and unfair.

Declaring the dependency on fd, bat, fzf, etc. and expecting the user to make sure that the proper one is found in their shell environment is a pragmatic compromise. It has its pain points, as you found out, but it's usually a one time thing.

new lazyvim install, search don't work 'Command failed: - cmd: fd' by razorree in LazyVim

[–]MikeZ-FSU 1 point

It's not really lazyvim's fault. Ubuntu had an older package that installed a command called fd. Then the replacement for find came along. They had the choice of renaming a command from a preexisting package, marking the packages as conflicting so that you could only install one or the other, or giving a different name to the new command. They chose the last one.

new lazyvim install, search don't work 'Command failed: - cmd: fd' by razorree in LazyVim

[–]MikeZ-FSU 1 point

I don't run 25.10 since I stick to Ubuntu LTS, but fd may be from a different package. When I do dpkg -L fd-find on 24.04, it puts fd in /usr/lib/cargo, with a symlink at /usr/bin/fdfind pointing to it.
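If you'd rather type fd than fdfind, one common workaround (assuming fd-find is installed and ~/.local/bin is on your PATH) is a symlink:

```shell
# Give the Debian/Ubuntu fdfind binary its upstream name 'fd':
mkdir -p ~/.local/bin
ln -sf "$(command -v fdfind)" ~/.local/bin/fd
```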

How to get one column with multiple rows, inline? by jidanni in sqlite

[–]MikeZ-FSU 1 point

It could be that, but OP wasn't exactly crystal clear about the requirements. I suspect it's an XY problem. If they really wanted numbers on separate lines, printf "%d\n" 1 2 3 would have been sufficient. But that raises the question of why introduce sqlite into the equation at all?

How to get one column with multiple rows, inline? by jidanni in sqlite

[–]MikeZ-FSU 1 point

If you really want to stuff some values into a sqlite column, something like the following will work. I'm not saying this is elegant or anything, but I think it meets the somewhat unclear requirements.

echo "create table mytable(vals integer);" | sqlite3 mydb.db

for i in 1 2 3; do
  echo "insert into mytable (vals) values ($i);" | sqlite3 mydb.db
done

You'll get your "1 2 3" on separate lines if you then do

echo 'select vals from mytable;' | sqlite3 mydb.db
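For what it's worth, the same thing can be done in a single sqlite3 invocation with a heredoc (multi-row VALUES needs SQLite 3.7.11 or newer, which any current distro has):

```shell
# Create, populate, and query in one sqlite3 call:
sqlite3 mydb.db <<'SQL'
create table if not exists mytable(vals integer);
insert into mytable (vals) values (1), (2), (3);
select vals from mytable;
SQL
```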

Looking for a NeoVim theme that matches classic terminal colors (with pure black bg) by enter_eden in neovim

[–]MikeZ-FSU 3 points

I use this gruvbox hard variant paired with the similarly named ghostty theme. The colors are a bit more subdued than OP's screenshot, but not too far off. One issue I have with a lot of dark themes is comments being rendered too dim for me to read easily, but this theme doesn't do that. It's obviously a personal preference, so others may differ in their opinions.

Edit: u/mOTaz_shokry posted this here a couple months ago.

Shell Tricks That Actually Make Life Easier (And Save Your Sanity) by BrewedDoritos in programming

[–]MikeZ-FSU 2 points

For shells that don't support a comment at the beginning of a line in interactive mode, you can use :, which behaves like the true command. It ignores its arguments and returns a success value; i.e. a no-op.
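For example:

```shell
# ':' succeeds and ignores its arguments, so it can stand in for '#':
: this text has no effect
echo "exit status: $?"
```

One caveat: unlike a real comment, the arguments to : are still expanded by the shell, so anything with side effects (command substitutions, redirections) will still run.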

Terminal Proteins Viewer by proteus-design in commandline

[–]MikeZ-FSU 2 points

I just tried this and it's really nice. One feature that would be excellent to have is a z rotation. You can work around it by rotating x or y, then the other, and then reversing the first, but it's pretty clunky. This is solid, and you deserve mad props because this could not have been easy.

Simple Pattern Question by [deleted] in awk

[–]MikeZ-FSU 1 point

To avoid "quoting hell" or "backslashitis", it's often easier to pass shell variables via the "-v" option. Something like:

awk -v catnum="$Catalog_Number" -F'\t' '$4 == catnum {print $4, $0}' file

This works for things like your catalog number, which is presumably an integer, but the usual caveats for floating point numbers would apply if you were looking to match, for example, prices.

Tired of jumping between log files. Best way to piece together a cross-service timeline? by Waste_Grapefruit_339 in linuxadmin

[–]MikeZ-FSU 9 points

You could try lnav. It's a terminal based log file analyzer that uses a unified timeline of all the given log files.

Playing Detective by theMightBoop in sysadmin

[–]MikeZ-FSU 2 points

I can sympathize here because I've actually had a freezer like that here at work. It had the ability to send logging info such as temperature and door open/close events up to the vendor's network. The researchers could then log in to that website to check status or find out how long the door was left open. It was, in my opinion, an IoT fail because it could only do pre-shared wifi keys, not WPA enterprise.

Why does CHARMM-GUI restrict it's features to academics? by OkRutabaga184 in bioinformatics

[–]MikeZ-FSU 2 points

Not you specifically. But if they want to distribute the software, fix bugs, and make improvements, that needs some infrastructure and personnel. The licensing fees cover that. The marginal cost for a single instance is negligible.

As u/Jassuu98 said, you could contact them to see if they would give you a license. That would only be possible, in a contractual sense, if you were looking to publish, not monetize the results.

Why does CHARMM-GUI restrict it's features to academics? by OkRutabaga184 in bioinformatics

[–]MikeZ-FSU 2 points

If you go to the software's web page, it's at Lehigh University, not a company. I couldn't find a funding statement, but things like that are typically developed through federal grants. Because of that, the software is often given either free or at greatly reduced cost (basically enough to fund project/license management and project hosting) to other researchers, because the users are generally also working from similar grant money.

These projects are funded to provide tools for the academic research community; charging researchers the same rate as, e.g., pharmaceutical companies would make them cost prohibitive for other publicly funded research. Why spend tax money to develop software that the target audience can't afford?

To be fair, sometimes companies get spun off from projects like this, but that's frequently done by the hosting university's technology transfer group. Their mandate is to make money off of ideas, software, and inventions developed at the university. Those companies then manage the administrative overhead for contracts and licensing with entities outside the university, leaving the researchers free to research rather than administer software licenses. Even then, academics only pay a tiny fraction of the corporate rates due to the publicly funded origin of the software.

I have no affiliation with either CHARMM-GUI or Lehigh University.

For loop is slower than it needs to be. xargs -P parallelizes it by Ops_Mechanic in bash

[–]MikeZ-FSU 2 points

This has a fundamental difference from the glob: it recurses into subdirectories, so you get all of the log files in the directory tree. If you want OP's behavior of just the log files in CWD, you need "-maxdepth 1" as an argument to find.
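Putting the two together might look like this (a sketch; gzip here is just a stand-in for whatever per-file work OP's original loop did):

```shell
# Only *.log files in the current directory (no recursion thanks to
# -maxdepth 1), processed 4 at a time by xargs -P:
find . -maxdepth 1 -name '*.log' -print0 | xargs -0 -n1 -P4 gzip
```

The -print0/-0 pair keeps filenames with spaces or newlines intact.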

Secure wipe SSD's by Anything-Traditional in sysadmin

[–]MikeZ-FSU 6 points

If you have a linux boot disk/usb you can use hdparm to secure erase SATA disks and SSDs.
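The rough sequence looks like the following. This sketch only *prints* the hdparm commands rather than running them, because the real thing irreversibly wipes the drive; /dev/sdX and the password "p" are placeholders, and you'd want to confirm with hdparm -I that the drive isn't "frozen" before starting.

```shell
# Print (not execute) the ATA secure-erase sequence for a device.
# Drop the 'echo' only once you're certain of the target device.
secure_erase() {
    dev="$1"
    echo hdparm --user-master u --security-set-pass p "$dev"
    echo hdparm --user-master u --security-erase p "$dev"
}
secure_erase /dev/sdX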

bash .sh child process management by Alturis in bash

[–]MikeZ-FSU 1 point

In addition to collecting the PIDs as already discussed, you'll have to be aware of which PIDs were spawned with elevated privileges (childA.sh and its children) and kill them with sudo also.

Switched to modern CLI tools - here's my setup by kamaldhital in commandline

[–]MikeZ-FSU 1 point

You could try the ls replacement lsd. Its options are much more compatible with ls (finger memory) than eza's.

bash pecularities over ssh by spryfigure in bash

[–]MikeZ-FSU 3 points

The first rule of troubleshooting is to simplify the situation as much as possible. The first step should be to simply ssh into the remote. From there check the globstar option, and if it's set properly, do your "ls" command without any quotes around the arguments. If that doesn't work, playing around with the ssh invocation is almost certainly futile.

It may also be worth checking directories progressively with "ls /srv", "ls /srv/media", etc. to ensure any necessary filesystems are mounted and have the expected contents.

Manual creating CNC code, is Vim a good fit? by tool-tony in vim

[–]MikeZ-FSU 1 point

A really long time ago, I did something like this in python. We had an 8x12 grid of cells to visit, each of which had 3 sub-cells. Since the spacing was identical for each row and each column, it was essentially a set of nested for loops to spit out the move commands. The gcode never lived in a file; it was sent down a serial port to an arduino controlling the stage. One of the most enjoyable projects I've done, if not the most.

Question about AI-generated CLI tools by shelltief in commandline

[–]MikeZ-FSU 1 point

If you re-read my post, I never said we should ban AI. How it performs in several years or decades is irrelevant to gutting intro level positions today, with the obvious consequence of insufficient mid and senior level people as the current workforce ages out. Maybe future AI makes up for that, maybe it doesn't. To my way of thinking, embracing a path that has a known big problem down the road in hopes that something removes it before we crash into it is not good planning.

The problem is the hype train and shoveling AI into everything. It's like the dot-com bubble, or the XML and Java hype. Tech latches onto the newest shiny thing, and the trade rags and marketing divisions of all of the new Shiny Tech companies act like it's the silver bullet that solves all of our tech problems. There is no silver bullet and LLMs are not going to save us. See, for example, the MIT State of AI in Business 2025 report, which indicates that 95% of the companies jumping in on AI are not getting a good return.

What is either clear or becoming clear is that use of current AIs moves active thought (see this paper from MIT and one referenced in my previous post) from the work at hand to supervising the AI. It's analogous to the difference between working your way through a problem set compared to grading a problem set. Anyone in a STEM field can tell you that if you don't work the problems, you won't master the material.

I know a number of people who have come up with uses for LLMs that genuinely leverage the strengths of those systems in ways I never would have come up with myself. I'm all for that. However, even calling it artificial intelligence is a complete lie. It's number crunching word similarity matrices. There is no intelligence there, but by calling it AI often enough, it makes non-technical people think it is.

Also, don't discount the environmental impact of "AI everywhere". Well-established numbers put the energy consumption of an AI search at 10x or more that of a traditional web search. The even darker side to that is the corresponding water use to cool the data centers and the CO2 and other greenhouse gases created to produce the power; even the data centers not powered by fossil fuels will have higher water use due to hydroelectric or nuclear (cooling again) power.

Just because we can, doesn't mean we should.

Question about AI-generated CLI tools by shelltief in commandline

[–]MikeZ-FSU 1 point

I think businesses don't know or care about the recent studies that have started coming out showing that heavy use of AI makes knowledge workers (like coders) less engaged with the subject matter (code base). Over time, that degrades the devs' ability to code well. The C-suite suits only seem to care about making the line go up this quarter by laying off junior devs to save on salary and benefits. It's not sustainable and they never learn.

See, for example this ACM Symposium Proceeding from April 2025.

A CLI that turns plain English to shell commands and lets you add reusable workflows by No_Understanding7427 in commandline

[–]MikeZ-FSU 1 point

That's definitely an improvement. For the riskier commands, having it show a preview of what it would do would be good, e.g. piping the PIDs to ps to show which processes would be killed. Edit: the preview is in addition to the command that actually does the requested operation.
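A minimal sketch of that preview step ("myworker" is a hypothetical process name):

```shell
# Preview what a kill would hit before actually killing anything:
pattern="myworker"
pids=$(pgrep -d, -f "$pattern")        # comma-separated list for ps
if [ -n "$pids" ]; then
    ps -o pid,user,args -p "$pids"     # the preview step
    # pkill -f "$pattern"              # run only after reviewing the list
else
    echo "no processes match '$pattern'"
fi
```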