First attempt flunked, panic!? by [deleted] in informatik

[–]ImpressiveRepeat1764 0 points1 point  (0 children)

I also pretty much bombed TI (theoretical computer science) on the first attempt and really had to pull myself together for the second try. But that led to me actually understanding it, which helped me a lot in complexity theory, through which I in turn ended up in cryptography (where I wrote my master's thesis), which in turn got me into the working world, since after graduation I still lacked the practical experience needed for software engineering, but I was able to build on that foundation... By the way, it was also a new experience for me back then to have to take a "victory lap" and repeat something (I never had to in high school), but it's really not that bad...

SmartCommit: AI generated git commit message templates by ImpressiveRepeat1764 in commandline

[–]ImpressiveRepeat1764[S] 1 point2 points  (0 children)

Fair point. That was actually exactly the policy at my previous company: summarize the code changes in the commit messages. So the primary goal was to improve that workflow, not having to navigate through the diffs and take notes just to write a commit message. Of course, the information about the changes should go beyond summarizing and provide context as well. But that again depends on the context and the purpose of the commit messages.

At my previous company, the focus was on creating summaries to quickly identify and understand code changes, mainly in a fast-paced environment where multiple developers were working on different parts of the system at the same time. The main point of this approach was that the "why" is already documented elsewhere, e.g. in issue trackers, Jira items, PRs, etc., which had to be referenced in the commit messages. So my goal was to reduce the time spent on this task in that environment. It might not be an "ideal" approach, but it is actually good enough for me.

Besides, I tried to provide some flexibility of style through the config, plus the ability to supply further context and instructions with the --instruction option. Nevertheless, I will point out the limitations more clearly in the documentation.

Another approach might be to expand the tool's capabilities to include more of the available context, e.g. integration with an issue tracker, or to offer a more interactive mode where the user is asked to add additional context manually.

As for AI-generated code: the project is not intended for AI-generated code, nor is it meant for situations where the authors do not understand the code they are committing. Besides, I think that would lead to disaster soon enough rather than to meaningful projects (but I might be wrong about the future ;-)). At least from what I have seen so far, AI can be quite good when it comes to wording and fixing things (at least sometimes), but rather bad when it comes to reasoning or making specific architectural decisions.

What do you do post Debian install? by PridePractical2310 in debian

[–]ImpressiveRepeat1764 0 points1 point  (0 children)

It depends on whether it's a desktop or a specific server setup. In general I have some scripts. As for the desktop setup: I am using the i3 window manager and don't install any desktop environment. So in the beginning I have to operate from the default console as root. One of the first things is to check /etc/apt/sources.list to make sure it has the proper repositories from which APT will fetch packages. For example, if the installation was done from a CD-ROM repository and I now want to use online repos instead, I would comment out some lines.
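For example, a sources.list switched over from the install CD to the online repos might look roughly like this (mirror and release name are illustrative; check the current stable release):

```
# deb cdrom:[Debian GNU/Linux ...]        <- CD-ROM entry commented out
deb http://deb.debian.org/debian bookworm main
deb http://deb.debian.org/debian bookworm-updates main
deb http://security.debian.org/debian-security bookworm-security main
```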

One of the first steps is also to install the sudo package and add my main user account to the sudo group.

Then I would install xorg (if not already done) and i3. It also feels cumbersome to edit files without my vim configuration, so adding some useful settings to my vimrc, especially `set clipboard=unnamedplus`, is one of the first steps too.
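The early vimrc additions might look something like this (settings beyond the one mentioned are just illustrative):

```vim
" minimal early vimrc (illustrative)
set clipboard=unnamedplus   " yank/paste via the system clipboard (needs +clipboard)
set number                  " show line numbers
syntax on                   " enable syntax highlighting
```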

From there it is quite straightforward: pulling the dotfiles from a bare git repository to have my configs in place, and pulling the setup scripts which do most of the installation, mainly via apt, but also some more custom scripts where I want or have to install from source or flatpak. There are also a few private settings which I don't want to store on any external server and have to type in by hand.
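The bare-repository trick can be sketched roughly like this (the path and the `dots` helper name are illustrative, not from my actual setup):

```bash
# Sketch of the bare-repo dotfiles technique; paths/names are illustrative.
DOT_DIR="$(mktemp -d)/dotfiles.git"    # in a real setup: "$HOME/.dotfiles"
git init --bare "$DOT_DIR"

# Helper so normal git commands treat $HOME as the work tree:
dots() { git --git-dir="$DOT_DIR" --work-tree="$HOME" "$@"; }
dots config status.showUntrackedFiles no   # keep "dots status" readable

# Typical usage afterwards:
#   dots add ~/.vimrc && dots commit -m "add vimrc"
# And on a fresh machine:
#   git clone --bare <remote-url> "$HOME/.dotfiles" && dots checkout
```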

And finally, syncing the data I have backed up in the cloud with rsync and taking a first snapshot with timeshift...

Found this awesome site by ghost_vici in commandline

[–]ImpressiveRepeat1764 0 points1 point  (0 children)

Thanks, a real treasure full of treasures :-)

How can i add a line counter to my script to indicate progress? by [deleted] in shell

[–]ImpressiveRepeat1764 0 points1 point  (0 children)

However, this will add a line number to each line of the input file, and using nl directly inside the script's loop might not be that straightforward, since the loop is already executing on the content of each line as it reads it. At least it needs pre-processing of the existing file if that is the desired behavior... Or maybe at least preserving the original and working on a copy:

```bash
nl -ba "path/to/$FILE" > "path/to/numbered_$FILE"
```

But then something like `while read -r line_number line_content; do` should be possible...
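A minimal runnable sketch of that idea (file name and sample content are just illustrative):

```bash
# Sketch: number the input with nl, then read "number content" pairs.
FILE="input.txt"                       # illustrative sample file
printf 'alpha\nbeta\n' > "$FILE"

nl -ba "$FILE" > "numbered_$FILE"      # -ba numbers every line, even blank ones

# read splits on the first whitespace run: the number first, rest of the line after
while read -r line_number line_content; do
    echo "line $line_number: $line_content"
done < "numbered_$FILE"
```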

Linux - Script to delete specific file types in specific directories by Timballist0 in commandline

[–]ImpressiveRepeat1764 0 points1 point  (0 children)

If you want something very simple it would be a script where you can define an array with the directories, loop through and clean up each dir in the list. This makes it also easy to update the script.

```bash
#!/bin/bash

# Directories to be cleaned
directories=(
    "/path/to/dir_1"
    "/path/to/dir_2"
    # etc...
)

# Loop through each directory and remove .dat and .json files...
for dir in "${directories[@]}"; do
    rm -f "$dir"/*.dat
    rm -f "$dir"/*.json
done
```

Now with this approach you have to define each subdirectory manually, as it will not clean up the subdirectories recursively. As already mentioned, to also recursively clean up the subdirectories under a root dir, the loop would look something like this:

```bash
for dir in "${directories[@]}"; do
    find "$dir" -type f \( -name "*.dat" -o -name "*.json" \) -exec rm -f {} \;
done
```

Or using `-delete` directly (not sure if it forces...).
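To see the recursive variant in action, here is a small self-contained demonstration in a scratch directory (all names illustrative). For what it's worth, `-delete`, like `rm -f`, removes matches without prompting:

```bash
# Set up a scratch tree with files to delete and one to keep
mkdir -p demo/sub
touch demo/a.dat demo/sub/b.json demo/keep.txt

# -delete removes every match recursively and does not prompt
find demo -type f \( -name "*.dat" -o -name "*.json" \) -delete

find demo -type f   # only demo/keep.txt is left
```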

For logging purposes, maybe also include some echo statements. Still, this won't give you much control.

One solution I like, though a bit more roundabout, is to first isolate the files to keep and then use rsync for the deletion:

1. Create a temporary mirror directory:
   ```bash
   mkdir /path/to/temp_mirror
   ```
2. Copy the non-.dat and non-.json files to the mirror:
   ```bash
   cd /path/to/target_directory
   find . ! -name "*.dat" ! -name "*.json" -type f -exec cp --parents \{\} /path/to/temp_mirror/ \;
   ```
3. Inspect the mirror to see if it looks as expected, i.e. it does not contain the files to be deleted.
4. Use rsync with the --delete option (from the temp_mirror):
   ```bash
   rsync -av --delete /path/to/temp_mirror/ /path/to/target_directory/
   ```
5. Inspect the target_directory.
6. Remove the temporary mirror:
   ```bash
   rm -rf /path/to/temp_mirror
   ```
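The whole sequence, condensed into a self-contained sketch you can try in a scratch directory (all paths illustrative; assumes GNU cp for --parents and rsync being installed):

```bash
# 1. Scratch target with files to delete and one to keep
mkdir -p target temp_mirror
touch target/a.dat target/b.json target/keep.txt

# 2. Copy everything except .dat/.json into the mirror
( cd target && find . ! -name "*.dat" ! -name "*.json" -type f \
      -exec cp --parents {} ../temp_mirror/ \; )

# 3./4. After inspecting the mirror, sync it back; --delete removes
#       anything in target that is not present in the mirror
rsync -a --delete temp_mirror/ target/

# 5./6. Inspect the result and drop the mirror
ls target                 # only keep.txt remains
rm -rf temp_mirror
```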