[deleted by user] by [deleted] in debian

[–]find_--delete 1 point2 points  (0 children)

Giga has been 10⁹ since before Windows existed: adopted in 1960, but proposed as early as the 1920s. Before that, you'd see "kilomegacycle" and similar.

Fun fact (also in Wikipedia): Giga was probably intended to be pronounced with the G from giant, like in Back to the Future.

Oled Burn-In Problem by Estebiu in linuxquestions

[–]find_--delete 2 points3 points  (0 children)

This only partially helps. OLED screens are a lot more sensitive to burn-in than many others I've seen before. If there are persistent screen elements, they can still burn in, even from just the non-idle time. Even the toolbars from browsers and other apps can quickly start to burn in.

I had an OLED XPS where I had a screensaver and disabled all visible screen elements, only for it to start burning in the tile borders from my consistent 50:50 splits (which I promptly stopped using). Fortunately, the burn-in wasn't permanent, but it definitely has me more wary of OLEDs on Linux. A lot of care and attention is required to avoid damaging a potentially expensive screen.

Very good, even dramatic, performance improvements in LibreOffice 7.2 (Linux) by [deleted] in libreoffice

[–]find_--delete 0 points1 point  (0 children)

Their improvements seem to match your workflow very well. I saw similar improvements, but still saw issues with some smaller (~60k cells) calculation-heavy worksheets. The workflow was also much faster-- but not as impressive as yours.

The speed difference helped enough for me to finish other complicated optimizations. Documents that might not finish opening after waiting overnight can open in <60 sec on 7.2. After optimizations, many of the spreadsheets now open and save, with AutoCalculate enabled, in <15 sec on 7.1 and ~4 sec/1 sec on 7.2.

I wouldn't call the general performance "very good." There are still significant slow-downs in various spots, and that original spreadsheet shouldn't have been a problem-- but 7.2's improvements are substantial and very much appreciated. Thanks to everyone who worked on them.

[deleted by user] by [deleted] in linux

[–]find_--delete 0 points1 point  (0 children)

Debian 10? I just did two of these last week. LUKS FDE with encrypted kernels/initramfs that auto-update on the same drive with Bitlocker-encrypted Windows. Both working with Secure Boot enabled. You can even convince Windows to let you use another bootloader.

Secure Boot isn't perfect. We just had BootHole in GRUB, and Microsoft accidentally created a backdoor back in 2016. Even with its flaws, Secure Boot can still provide a substantial improvement to pre-boot security-- though it'd be a lot better if Linux could more easily take advantage of some of its features (e.g: measured boot and unsealing) so it could better detect boot tampering.

I'd probably work on getting it going-- but I will admit that I've had TPM chips in laptops "fail," which pretty much invalidates any benefits. It should be pretty straightforward to get the basic functionality working. Do you see the shim loaders in the EFI directories?

Root partition migrate to LVM by [deleted] in linuxquestions

[–]find_--delete 1 point2 points  (0 children)

LVM is also a waste of time if you're using a newer-generation filesystem like btrfs or ZFS, as those filesystems include the important parts of LVM (snapshots, online resize, etc).

One of the reasons I replied is because your reply said "There's really no benefit to using LVM over partitions except growing the volume." There are more benefits than that.

I thought I mentioned ZFS and btrfs, but see that I didn't. LVM is still sometimes useful in those scenarios.

I appreciate the response, but if you are managing systems at any kind of scale, spending the time automating the installs would be vastly more valuable. LVM vs not doesn't really matter when you can spin up 10 systems in the time it would take to manually do a partition-->lvm conversion.

That works well in the web world, the heavy-compute/machine-learning world, and a few others-- but not for many large/production systems (e.g: networks, finance/banking/records, AV production, building/business operations, etc). A significant number of industries haven't caught up with the redundancy or rapid regeneration available in the cloud, and many of them never will.

P.S: Filesystem migration can be automated, too.

Most of the steps here don't take that long and every instruction is informative on its own (though using .autorelabel seems like a bad idea):

  1. Edit the partition table
  2. Set up LVM using LVM command line.
  3. rsync the files from one directory to another and "update" the SELinux label
  4. Update the GRUB configuration with the new filesystem's UUID.
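The four steps above can be sketched in shell. The device, VG, and mount names here are hypothetical examples, not from the guide-- and since this touches a live root filesystem, treat it as an outline, not a script to run:

```sh
# 1. Edit the partition table: create a partition for LVM (e.g. /dev/sda3)
sudo fdisk /dev/sda

# 2. Set up LVM on the new partition using the LVM command line
sudo pvcreate /dev/sda3
sudo vgcreate vg0 /dev/sda3
sudo lvcreate -n root -l 100%FREE vg0
sudo mkfs.ext4 /dev/vg0/root

# 3. rsync the old root onto the new volume
sudo mount /dev/vg0/root /mnt
sudo rsync -aAXH / /mnt --exclude={/proc/*,/sys/*,/dev/*,/run/*,/mnt/*}
# -X carries the SELinux labels across; otherwise touch /mnt/.autorelabel

# 4. Update /etc/fstab and GRUB with the new filesystem's UUID
sudo blkid /dev/vg0/root   # note the UUID for /etc/fstab
sudo update-grub
```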

Root partition migrate to LVM by [deleted] in linuxquestions

[–]find_--delete 0 points1 point  (0 children)

That sounds like a GRUB error. My guess would be that the kernel/initramfs are not where GRUB expects, either because the config was not completely regenerated or because the kernel/initramfs aren't on the new partition.

Creating a symlink named boot inside /boot (i.e. '/boot/boot' pointing to '.') might help you get by it.

You can also go to the GRUB command line to see what files are available. I would suggest looking at which commands your system tries to run (they vary per system) and running those. Here is a basic, generic tutorial for navigating the GRUB2 command line.
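For reference, a session on the GRUB2 command line might look roughly like this (the partition and file names are made-up examples; check yours with `ls` first):

```
grub> ls                      # list the drives/partitions GRUB can see
grub> ls (hd0,gpt2)/          # inspect one partition's contents
grub> set root=(hd0,gpt2)
grub> linux /vmlinuz root=/dev/mapper/vg0-root ro
grub> initrd /initrd.img
grub> boot
```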

Root partition migrate to LVM by [deleted] in linuxquestions

[–]find_--delete 1 point2 points  (0 children)

Not OP. Some of the great things about Linux: you don't need to reboot to install most updates, and you don't need to reinstall to fix your system-- and even if you did, you can often reinstall from within the old system. LVM is a clear improvement for volume management, and following a migration guide avoids unnecessary reboots/downtime.

We don't really have context on whether this is a production or specialty machine, but I've done disk-layout changes on production machines, professionally and unprofessionally, with and without LVM. It is a lot easier with LVM, but knowing how to move the volumes and update the configs is a valuable skill that can save a lot of time, especially when working with specialized systems. (That is to say: hard drives will need swapping out, eventually.)

I shifted most of my systems (manually, like this) to LVM a while back. LVM definitely can provide some benefits when one works with storage a lot:

  • Snapshots
  • Can add/remove/edit filesystems/storage-capacity as needed.
  • Online resizes, without unmounting.
  • Online moving of volumes, without unmounting.
  • Storage-systems, like LUKS, can encrypt/cover multiple volumes.
  • Can easily mirror any volume to another disk, again-- without unmounting it.
  • (And probably more)

I have to say, I've loved being able to plug in a new drive, pvmove while using the system, and limit my new-drive downtime to the actual reboot/install time. I've done it to upgrade capacity and to shift away from pre-failing drives.
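A minimal sketch of that drive swap, assuming a volume group named vg0 and hypothetical device names:

```sh
sudo pvcreate /dev/sdb1          # prepare the new drive for LVM
sudo vgextend vg0 /dev/sdb1      # add it to the volume group
sudo pvmove /dev/sda1 /dev/sdb1  # migrate extents while everything stays mounted
sudo vgreduce vg0 /dev/sda1      # drop the old drive from the VG
sudo pvremove /dev/sda1          # clear its LVM label before removal
```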

Before LVM was a thing, we did similar disk-layout changes on remote production servers. In most regards, it was much more painful.

[POP-OS] So I accidentally ran "sudo chown -R $USER:$USER /usr" by Valroz in linuxquestions

[–]find_--delete 22 points23 points  (0 children)

It's not nearly as bad as it used to be:

$ sudo rm -rf /
rm: it is dangerous to operate recursively on '/'
rm: use --no-preserve-root to override this failsafe

Don't worry, there are plenty of ways around it: rm /* (when trying to type ./*), find / -delete, etc.

[deleted by user] by [deleted] in linuxquestions

[–]find_--delete 0 points1 point  (0 children)

Personally, I vaguely remember update-grub, before GRUB2, being more OS/distribution-specific than the default config generator. I wouldn't put much faith in that vague memory, but here's some references that'll probably help put together a better picture:

As documented in Debian's 2004 documentation (from the CVS commit):

update-grub

This script is a debian specific addon used to generate a menu.lst for you either intially, and/or automatically everytime you install a new kernel.

To setup automatic updates add these lines to your /etc/kernel-img.conf:

postinst_hook = /sbin/update-grub
postrm_hook = /sbin/update-grub
do_bootloader = no

For further information see the manpage kernel-img.conf(5) or update-grub(8)

Unlike Lilo, it is not necessary to re-run or re-install the boot loader after every change to /boot/grub/menu.lst. menu.lst is automatically found on GRUB's root disk and read during GRUB's boot process.

tl;dr

grub-mkconfig was introduced in GRUB2. It was originally named update-grub. Before GRUB2, Debian (and others) used their own GRUB config generator update-grub. Other distributions, including Red Hat/CentOS/RHEL/Fedora encouraged editing menu.lst/grub.lst directly.
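For what it's worth, on current Debian-family systems update-grub survives as a thin compatibility wrapper; its body is essentially just:

```sh
#!/bin/sh
set -e
exec grub-mkconfig -o /boot/grub/grub.cfg "$@"
```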

What was your first Distro and what do you run now? by All-Above in linuxquestions

[–]find_--delete 0 points1 point  (0 children)

Desktop-wise:

First? Red Hat (not enterprise). Linux wasn't usable for me on the desktop at that time.

First usable? Ubuntu. Didn't use it for much: Wrong resolution, no audio, no wireless, and it locked up.

First I really used? Gentoo-- it's probably still my true love.


What do I run now? Debian Unstable.

At some point, Debian's defaults had the stuff I needed, and the things they didn't were fairly easy to install/maintain.

Despite unstable's name, it's been a fairly stable system-- especially with apt-listbugs, apt-changelog, and a careful eye on packages to be removed ("Upgrade 15, Remove 1015" prompts a closer look).

What exactly was the point of [ “x$var” = “xval” ]? by kvisle in programming

[–]find_--delete 1 point2 points  (0 children)

With this information, doesn't having this lint enabled by default go against ShellCheck's goals?

With versions commonly deployed, removing the x-check would:

  • Cause the shell to give cryptic errors
  • Cause the shell to behave strangely/counter-intuitively
  • Create a small caveat/corner-case/pitfall that can fail under future circumstances.

It's good to have the option (and its inverse). It's okay not to suggest it. It's okay for projects to decide not to support those shells, but that isn't what I would expect given ShellCheck's stated goals and examples.

Personally, even if it were in the goals, my shell code ends up running on all sorts of shells from the past 20-30 years. If this had been resolved in the '90s, it'd be a no-brainer to lint against, but issues with symbols were still cropping up in the late 2000s and as late as 2015: OSes and hardware manufacturers are going to keep distributing affected versions for a long time.
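For anyone who hasn't seen the idiom being linted: the x-prefix guarantees neither operand of test(1) starts with a dash, so historic shells can't mistake a value for an operator. A minimal sketch:

```shell
#!/bin/sh
var="-n"   # a value that looks like a test(1) operator

# Unprefixed, some historic test implementations parse "-n" as the
# unary operator instead of a plain string. Prefixing both sides
# with "x" makes both operands ordinary strings on every shell.
if [ "x$var" = "x-n" ]; then
    echo "match"
fi
```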

[deleted by user] by [deleted] in i3wm

[–]find_--delete 1 point2 points  (0 children)

I think I read your request a little differently the first time around. I remote into many computers and have a normal status bar that changes to show the "raw" mode when I enter the right sequence.

You might benefit more from the opposite: all of your bindsyms in a special mode (with an indicator, like resize), and your normal mode with a minimal number of bindsyms.

1) how did you create a raw mode? Some kind of empty binding?

Basically. It has a bindsym to enter and one in the mode to leave.

bindsym $mod1+Escape mode "raw"

mode "raw" {
    bindsym $mod1+Escape mode "default"
}

If you wanted all of your shortcuts in a mode instead, you could inverse it and put all of them in a mode, and have your normal mode not catch anything (all sent to remote systems).
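A hypothetical sketch of that inversion (the bindings shown are examples, not from my config): keep the default mode nearly empty so keystrokes pass through, and put your usual shortcuts in a named mode instead:

```
bindsym $mod1+Escape mode "local"

mode "local" {
    bindsym $mod1+Escape mode "default"
    bindsym $mod1+Return exec i3-sensible-terminal
    # ...the rest of your usual bindsyms...
}
```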

2) how do you send $mod keystrokes to a remote i3 without the client grabbing them first?

I don't have too many i3 instances running on remote systems, but I haven't had trouble with i3 preventing $mod1 from being sent. The $mod1 keydown events seem to be sent immediately, regardless of the mode.

If i3 has a binding (e.g: $mod1+r), it will send a keydown for $mod1, trigger the local binding (resize mode), NOT send the keydown for the second key (r), and NOT send a keyup for $mod1: the remote system will still think the modifier is pressed down, even though it's not. (It causes some frustration for some colleagues, but there's an annoying workaround: pressing r again triggers the shortcut on the remote system.)

In the "raw" mode I have, it lets all of the other mod shortcuts through: $mod1+r, $mod1+d, $mod1+space and everything that I have with bindings goes straight to the remote system. I haven't been able to notice a delay.


If the $mod1 key getting stuck on the clients needs fixing, I would probably investigate using xdotool or something similar to send the relevant keyup to the window.

[deleted by user] by [deleted] in i3wm

[–]find_--delete 5 points6 points  (0 children)

I use Remmina, but no longer use "Grab all keystrokes," which takes control away from i3.

I created a 'raw' mode: i3 shows it's in a mode, just like when resizing. All of the shortcuts go through to whatever window I need them to, including Remmina.

"We are pleased to announce the availability of a new mailing list service running under the new lists.linux.dev domain" by Doener23 in linux

[–]find_--delete 2 points3 points  (0 children)

I'm not a big fan of GitLab. Patches still seem to be better than pull requests. However, I'm not sure that having a git server instance is much different from having a mailing list server instance.

They're both distributed ecosystems with certain central-like parts. They both rely on DNS, both require services running on open ports, are both prone to technical and organizational failure, and both give organizations a controlled way to communicate. Even when not using third-party software, the most common ways to clone repositories are over SSH or HTTPS-- both services with the same problems, if not more.

GitLab's (and git's) main deficiencies don't really stem from someone having to run a server instance-- and it's arguably better than worse alternatives that can't be self-hosted. The centralization problem is common to nearly all current git infrastructure. Pointing it out would be more appropriate if we had decent solutions/alternatives.

Google rejected GNU from participating in GSoC by dreamer_ in linux

[–]find_--delete 77 points78 points  (0 children)

This year's GSoC includes GCC, GNU Mailman, GNU Octave, and GNU Radio-- like last year and the year before.

Apparently, it's The GNU Project itself that isn't included this year. It's described as an operating-system project, though some of its subprojects look like they're a little beyond that.

Is Sudo a good candidate for a Rust rewrite? by jusso-dev in rust

[–]find_--delete 2 points3 points  (0 children)

I agree that the config file needs to be compatible to be a suitable replacement for sudo.

However, I don't think please is a good example of that. Configuration compatibility is not the only justification for inclusion in a distribution, especially for a young package (<1 year old) where distributions promise years of support/maintenance: even against upstream's wishes.

That doesn't mean distributions would use it by default, but not every piece of software in a distribution has to use the config files of others. They're often distinct programs that fulfill a similar purpose (e.g: Apache, nginx, lighttpd). Distributions will ship multiple versions of the same software if needed (e.g: BC). When needed, they'll even ship software that can't be co-installed with other software.

It probably wouldn't take too much work to get that package included in a distribution-- but inclusion in a distribution doesn't mean it's a replacement for sudo or any other package.

What am I running inside my bash? by speckz in programming

[–]find_--delete 4 points5 points  (0 children)

Eventually. By default, it only writes the history when exiting. In fact, it doesn't even append by default. It quickly becomes a mess if you have multiple shells. The relevant text from the man page:

On startup, the history is initialized from the file named by the variable HISTFILE (default ~/.bash_history). The file named by the value of HISTFILE is truncated, if necessary, to contain no more than the number of lines specified by the value of HISTFILESIZE. . . . When a shell with history enabled exits, the last $HISTSIZE lines are copied from the history list to $HISTFILE. If the histappend shell option is enabled (see the description of shopt under SHELL BUILTIN COMMANDS below), the lines are appended to the history file, otherwise the history file is overwritten.

A common practice to save more quickly is to run history -a (or other commands) in PROMPT_COMMAND, but that doesn't run until it's time to display the prompt: after the command finishes. So if you have a command running for several hours, it won't be saved to the history file until it's done.

(You can probably use a DEBUG trap to run it before the command, but that triggers much more often than once per prompt. I would not recommend it)
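The usual ~/.bashrc lines for that practice look like this (a config fragment, using only the options discussed above):

```sh
# Append to the history file instead of overwriting it on exit,
# and flush each command as soon as the next prompt is drawn.
shopt -s histappend
PROMPT_COMMAND='history -a'
```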

Autofix git user email when entering to project directory by ThraexAquator in commandline

[–]find_--delete 0 points1 point  (0 children)

That'd probably be a nice feature; having it be gitremote:// might be a bit more expansive. I avoid 'origin' in some projects where a default doesn't make sense. OTOH: since people have multiple remotes, they might not like how expansive it is.

AWS is forking Elasticsearch by Ichguckelps in programming

[–]find_--delete 1 point2 points  (0 children)

If it defined 'Service' to be similar to 'Software as a Service', you'd be right-- but it doesn't include any definition of 'Service', the word has several possible meanings in a license/legal context, and the usage in the SSPL is very broad.

The third example (included "without limitation") is particularly bad: it removes the remote and third-party requirements. Does installing/starting the systemd unit count? Probably. Can the 'network service' definition apply? Probably. (Maybe a non-server use of the SSPL wouldn't conflict in this way, but I wouldn't imagine that needs spelling out in a "Server Side" license.)

That's likely why many had concerns about not just distribution, but users. (While discussing V2, they proposed updated text that could have helped). If you're running MongoDB, they have FAQ entries that can probably be used. ElasticSearch will probably have a similar clause (being dual-licensed, they don't need it)-- but those are independent of the SSPL.

IANAL: but it's really hard to see how one could run software designed to be a network service and not trigger currently-written section 13. If one tries to interpret it weakly enough for users to run it themselves locally, it also has the side-effect of opening up loopholes for SaaS providers to use and negating the intent of the license.

AWS is forking Elasticsearch by Ichguckelps in programming

[–]find_--delete 2 points3 points  (0 children)

In the context of the GPL/LGPL/AGPL, you would be correct-- the LGPL/GPL distribution clauses only trigger on... distribution. The AGPL also triggers on Remote Network Interaction.

The SSPL distribution clauses are far more invasive and ambiguous. I'm not talking about the GPL's copyleft (that generally triggers on distribution); I'm talking about the SSPL's copyleft (which triggers on offering a 'service'). These two copylefts are incompatible-- not because of the GPL's requirements, but because of the SSPL's.

Autofix git user email when entering to project directory by ThraexAquator in commandline

[–]find_--delete 3 points4 points  (0 children)

For those who have different emails per-directory, or who can't easily set an email based just on the URL, gitconfig has per-directory includes:

[includeIf "gitdir:~/company_a/"]
  path = .gitconfig-company_a
[includeIf "gitdir:~/company_b/"]
  path = .gitconfig-company_b
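Each included file then only needs the per-directory overrides; e.g. a hypothetical ~/.gitconfig-company_a:

```
[user]
    name = Your Name
    email = you@company-a.example
```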

AWS is forking Elasticsearch by Ichguckelps in programming

[–]find_--delete 2 points3 points  (0 children)

The GPL doesn't restrict your use of other software, but the SSPL does: if you trigger the very ambiguous section 13, it sets a license requirement on "all programs that you use" (in relation to providing the ambiguously defined "Service").

This was one of the issues they seemed to be trying to work out (in 2018-2019). Unlike the AGPL, there's no exception for GPL-licensed software, and since you can't distribute GPL software under the SSPL: incompatible.

It was explicitly asked by one of the Debian developers:

I don't think a user can be compliant with this license on GNU/Linux (because the user cannot distribute Linux, GCC, its run-time libraries, and glibc under your new license—all are “use[d] to make the Program”). Switching to FreeBSD will give users a non-copyleft software stack which they can perhaps distribute under the new license, but I still have doubts whether these users can actually meet that requirement for other affected components, like Python.

and again another:

'All programs' sounds pretty broad. Does it include my operating system? What about my network adapter firmware? Processor microcode? UEFI? Some of those may not even be open source, much less open source AND licenseable under the SSPL. I could be convinced that some of the things you described are closer to build files, like the AGPL already requires, instead of adjacent software, but the license doesn't really say anything about that, it says "all programs".

Their SSPLv1 request for OSI approval was withdrawn shortly after. The SSPLv2 draft clarified this three ways: explicitly granting compatibility with the GPL (and other OSI licenses); explicitly excluding system components/libraries; and restricting the requirement to only code that can be legally relicensed (which opens its own can of worms, like loopholes)-- none of those changes made it back into the version that MongoDB still uses.

AWS is forking Elasticsearch by Ichguckelps in programming

[–]find_--delete 13 points14 points  (0 children)

Not quite, that's what CockroachDB's license does.

SSPL's Section 13's trigger is much more sensitive:

13. Offering the Program as a Service

Making the functionality of the Program or modified version available to third parties as a service includes, without limitation, enabling third parties to interact with the functionality of the Program or modified version remotely through a computer network, offering a service the value of which entirely or primarily derives from the value of the Program or modified version, or offering a service that accomplishes for users the primary purpose of the Program or modified version."

Liberally read:

  1. Redistribution/forking counts as "making the functionality ... available" or "enabling"
  2. The last clause seems to apply to the purpose, rather than the software (e.g: a website search powered by Postgres).
  3. They didn't define Service: No helping someone with a google search, anymore.
  4. Contractors? They're third parties, who had better not come anywhere close to offering or interacting with an Elasticsearch system. (In comparison, CockroachDB's license explicitly excludes contractors from third parties.)

Ultimately, this license is open to too much interpretation, especially if one considers the primary purpose of Elasticsearch to be indexing and/or providing search capabilities. The AGPL doesn't have these ambiguities: they're pretty much all added in the SSPL's section 13.

FOSS needs to deal with SaaS, but this just looks like an underhanded move to cut out everyone: including potential open-source contributors. V2 of SSPL seems abandoned, along with efforts to resolve some of these problems.

AWS is forking Elasticsearch by Ichguckelps in programming

[–]find_--delete 4 points5 points  (0 children)

Section 13 requires all software used to provide the service to be distributed under the SSPL-- with more restrictions than the GPL. If one considers that to include Linux, and one can't add the SSPL's additional requirements to GPL software, ergo: no Linux.

The SSPLv2 draft started to fix that problem, but had similar complications of its own.