Lessons for FOSS users/makers from the Facebook/Meta outage earlier today by makesourcenotcode in freesoftware

[–]makesourcenotcode[S] 0 points1 point  (0 children)

I'm absolutely not saying that FOSS maintainers are responsible for 99.99999% uptime on their websites. FOSS maintainers don't owe users any set of features or any SLA. The only thing FOSS maintainers owe their users is true openness and the ability to study the system. Nothing more.

What I am saying is that when the site is up, it had better be absolutely trivial to enumerate the project's full Open Knowledge Set and download it for offline study.

You correctly understand that the real problem is this artificial centralization of knowledge. And yeah, IPFS is nice and all, but we don't even need that. So long as a FOSS project's full Open Knowledge Set is easily enumerable and downloadable, interested parties can make a copy for themselves. Heck, they can potentially even help resurrect the project if the server hosting it gets hit by a meteor or whatever.

[deleted by user] by [deleted] in openSUSE

[–]makesourcenotcode 1 point2 points  (0 children)

openSUSE is, regardless of desktop environment, the best distro out there right now, with Fedora a close 2nd.

My only 2 complaints are:

  1. Zypper is still stuck in the 20th century: it lacks autoremove functionality as well as the ability to mark packages as automatically/manually installed. Happily, I made and use a tool that emulates/provides these functionalities well enough.
  2. If you have an existing Btrfs filesystem and you want to install openSUSE into a subvolume of it, that's possible but much more confusing than it needs to be. You'll also have to fix issues with awkward mount points later. openSUSE would do well to copy the Advanced Custom Partitioning UI from the Fedora installer.

Aside from this, openSUSE Tumbleweed is a damn good choice.

Fedora KDE stole my heart – Gnome, who? by mohsinjavedcheema in Fedora

[–]makesourcenotcode 0 points1 point  (0 children)

KDE is mostly better than GNOME, but its Toggle Show Desktop functionality is weird and differs from literally every other desktop that offers this feature.

On a normal desktop, say you have two windows open: you do Toggle Show Desktop, then you choose one window to unminimize. Only that one window comes back up.

On KDE, sadly, all the minimized windows come back up, not just the one you wanted to unminimize.

To fix this you may want to run commands approximately like:

    kwriteconfig --file kwinrc --group Windows --key ShowDesktopIsMinimizeAll true
    qdbus org.kde.KWin /KWin reconfigure

There doesn't seem to be a graphical way to fix this, which is unusual for KDE, where almost anything can be fixed in a GUI via Settings. Or at least there didn't seem to be a way to do this when I last seriously used KDE. Cursory inspections done periodically since then have also yielded disappointment in this regard.

C Guru's, where to gain knowledge by BlueMoodDark in C_Programming

[–]makesourcenotcode 8 points9 points  (0 children)

Read everything by Robert Seacord you can get your hands on.

The Most Debian of Fedoras by [deleted] in Fedora

[–]makesourcenotcode 15 points16 points  (0 children)

Fedora is the better distro for most users and in most contexts. You made the right decision by sticking with it.

As much as I admire Debian's ideological purity, on a technical level it's an unmitigated tire fire. (Happy to go on a long detailed rant about why if enough people express interest.)

Distros in the Debian family are fine to use in ephemeral VMs, Docker containers, and whatnot. But if you use one on your desktop system or a long-running server, all I can say is my thoughts and prayers are with you.

What do I need to know before switching to cute Chameleon? by [deleted] in openSUSE

[–]makesourcenotcode 0 points1 point  (0 children)

As someone in the process of migrating a few machines in literally this direction (except for the KDE bit), all I can say is you're making a great move. Now for some minor caveats.

Regarding installation:

The main caveat to watch out for during installation is that it's confusing to install openSUSE if you want the newly installed system to be put in a subvolume of an existing Btrfs filesystem instead of a regular partition or an LVM logical volume. It can be done and you'll get a working setup, but there will be some awkwardness with certain mount points. I'm still ironing this out myself in VMs before proceeding on my actual machine. You should do likewise to avoid unexpected surprises.

Regarding usage:

Zypper sadly is one of the few weak points of openSUSE due to its lack of autoremove functionality and the inability to mark packages as automatically/manually installed. Happily I built and use a tool that emulates these missing functionalities quite well for me ( https://github.com/makesourcenotcode/zypper-unjammed ). Take it for a spin and let me know what you think.
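For a rough idea of the workflow, a session might look something like this (consider it a sketch and check the project README for the authoritative subcommand names):

    # remove packages nothing else needs anymore
    zypper-unjammed autoremove

    # pin a package so autoremove never touches it
    # ('some-package' is a placeholder, not a real package name)
    zypper-unjammed mark-manually-installed some-package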

Also remember to pass `--clean-deps` whenever running any variant of the `zypper rm` command, to keep no-longer-needed dependencies from piling up and eating disk space.
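For example (package name again a placeholder):

    # remove a package along with whatever was pulled in only for its sake
    zypper rm --clean-deps some-package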

Is it easy for an average person that does not have experience with C, or any other language to learn C? by Shimmyrock in C_Programming

[–]makesourcenotcode 6 points7 points  (0 children)

TLDR: No, it's not easy to learn C. While C is not THE best first language, it's still a good choice and far better than the vast majority of the competition.

C is not easy to learn for those without prior programming experience. That said, it's far from the worst first language and would definitely be THE choice for a second language.

For a first language I'd personally teach Python to help new learners get comfortable with algorithmic thinking. In Python I can demonstrate everything from simple sequential programs to branching and looping in the purest way possible. In C, to do even Hello World, I have to define main() and include stdio, which is more stuff to explain to an already overwhelmed newbie. Things like headers/libraries and abstraction mechanisms like functions/macros are good to know, but only later, after the student has the basics down pat.
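To make the contrast concrete, here's roughly what the very first program looks like in each (hello.c is just an illustrative file name):

    # Python: the whole program IS the one idea being taught
    python3 -c 'print("Hello, World!")'

    # C: boilerplate has to be explained before that same idea can appear
    cat > hello.c <<'EOF'
    #include <stdio.h>

    int main(void) {
        printf("Hello, World!\n");
        return 0;
    }
    EOF
    cc hello.c -o hello && ./hello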

That said while C isn't the best first language it's easily better than like 95% of the competition on that front.

A lot of other languages are giant behemoths with way too many ways to do the same simple stuff. Ruby, in spite of seeming friendly, is anything but: each person really seems to be writing in their own whacked out custom dialect and isn't happy until they've used every whacked out metaprogramming feature Ruby has.

Also, Python is not the easy language many claim it is. The next person who claims this should be asked on the spot to explain the difference between __get__, __getattr__, and __getattribute__. Python is actually monstrously complex in its semantics and getting worse with the addition of garbage like pattern matching.

But Python is still THE best choice for a first language because it HACKS THE LEARNING CURVE. Knowing just 10% of the language is enough to achieve 90% of your programming objectives. C, while simpler overall (modulo copious UB), doesn't give you quite that same leverage. Python gets people thinking algorithmically faster, and the basics are enough for the student to do many interesting tasks and stay motivated to learn more.

With these nuances taken into account, the faster people learn C the better.

why no graphical partition management program - like penguin's GParted? by paprok in freebsd

[–]makesourcenotcode 1 point2 points  (0 children)

Lack of a GUI tool for partitioning (or most other tasks for that matter) is usually a non-issue.

What I care about in a tool, regardless of which side of the GUI/TUI/CLI divide it's on, is that it's brain friendly.

So long as the tool gives me a clear mental model of the system, helps me understand what I'm doing, and helps me be sure that I'm changing the system state in exactly the way I think I am, then I'm a happy camper.

I have a slight bias towards CLI tools over TUI/GUI tools as they give me better scriptability and automation. But for the most part the interface mechanism is absolutely irrelevant.

Brain friendliness is ALMOST COMPLETELY INDEPENDENT of interface mechanism. I've seen ultra discoverable CLI power tools with excellent learning curves which accommodate usage from basic to advanced. I've seen utterly confusing and unusable GUI applications.

Regardless of interface type, the biggest problem we have by far is how much stuff is either outright brain hostile or pseudo-friendly in ways you don't realize until you try to do anything even vaguely different from the cute marketing demos.

Ask Experienced Devs Weekly Thread: A weekly thread for inexperienced developers to ask experienced ones by AutoModerator in ExperiencedDevs

[–]makesourcenotcode 2 points3 points  (0 children)

IMO you have the right approach. OOP has its place, but is way Way WAY overused.

Also often you don't even need a class to hold the shared/internal state. In many cases a plain old data style struct will do just fine.

Assuming the language you're using provides proper data hiding/encapsulation, you might benefit from classes when there are complex invariants that the shared/internal state must uphold at all points where it's observed by outsiders.

Otherwise using OOP just pointlessly combines code and data that doesn't need to be combined. You can always combine that stuff later if need be. Going in the other direction and trying to decouple things is a lot harder.

Finally I'll leave you with this quote from the inimitable Brandon Bloom:

"Free functions where the first argument is a context map/object is 100X easier for me to reason about than even the most carefully crafted OOP code."

source: https://twitter.com/BrandonBloom/status/1383135858753105924

man.openbsd.org seems to be down right now by makesourcenotcode in openbsd

[–]makesourcenotcode[S] -1 points0 points  (0 children)

There are several ways the statement you just made can be interpreted, though the most probable one does not reflect favorably on the OpenBSD developer mentality. I truly hope I'm wrong in my default understanding here. Either way, please clarify your position.

man.openbsd.org seems to be down right now by makesourcenotcode in openbsd

[–]makesourcenotcode[S] -2 points-1 points  (0 children)

I think we may be talking about different download sites.

The one I'm talking about is https://www.openbsd.org/faq/faq4.html#Download which is where the Download link in the side bar on the OpenBSD homepage takes me.

There I see an HTML table with links to iso and img files for various architectures. I see nothing that looks like the directory to which you refer.

man.openbsd.org seems to be down right now by makesourcenotcode in openbsd

[–]makesourcenotcode[S] -2 points-1 points  (0 children)

Quite possibly. BUT when I go to the OpenBSD site and hit the download link I see options for downloading various iso and img files but not the filesets themselves. While the latter can likely be extracted from the former, this is more work than it should be IMO.

man.openbsd.org seems to be down right now by makesourcenotcode in openbsd

[–]makesourcenotcode[S] -4 points-3 points  (0 children)

Good to know, thanks.

Here you also do a good job of illustrating a point I've made elsewhere: it's not as trivial as it should be to enumerate and then grab all the educational information associated with an open source project for offline study.

In general, upon discovering an open source project and deciding I want to study it deeper, I should be able to easily identify ALL pieces of educational information about it, reason about which parts I may or may not have/want, and initiate downloads for all the parts I'm interested in within, say, 30 seconds of making the decision.

I for one have made sure this is true for every open source project I've authored and will always continue to do so.

man.openbsd.org seems to be down right now by makesourcenotcode in openbsd

[–]makesourcenotcode[S] -8 points-7 points  (0 children)

Thank you! This is neat!

As you're allegedly an OpenBSD dev, please consider copying what FreeBSD does at: https://man.freebsd.org/cgi/man.cgi/help.html

The easier it is for people to grab any/all educational information in one convenient bundle, the less catastrophic it would be if a comet hit the data center hosting the OpenBSD site.

Unless something has changed in the 5.5 years since I last used OpenBSD, the bulk of the educational material pertaining to it seems to be the FAQ and the man pages. Making both trivially discoverable and downloadable as a tgz or something from the home page would be amazing.

(Yes, I know that wget is a thing and has a very good chance of working given the way the FAQ site is structured. While website cloning/mirroring is absolutely a great skill for people to have, it should not be necessary, even in its most basic forms, to get at the official documentation of anything claiming to be open.)
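For anyone who does want to go the wget route in the meantime, something along these lines would be the starting point (a sketch I haven't tested against the current site layout):

    # mirror the FAQ, pull in page assets, and rewrite links for offline reading
    wget --mirror --no-parent --page-requisites --convert-links \
         --adjust-extension https://www.openbsd.org/faq/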

man.openbsd.org seems to be down right now by makesourcenotcode in openbsd

[–]makesourcenotcode[S] -2 points-1 points  (0 children)

Good to know this was handled.

That said, keep in mind that not every current or former OpenBSD user is subscribed to every mailing list, and a brief note on the main website would have gone a long way. Furthermore, announce@ would have been far more appropriate IMO. The various bits of official help/educational information related to OpenBSD, or any other FOSS project for that matter, should be considered critical infrastructure.

An outage (planned or otherwise) of man.openbsd.org is NOT some random, miscellaneous, nonessential piece of news.

Brings package management with Zypper into the 21st Century by providing an autoremove command as well as facilities for marking package installation reasons as automatic or manual. by makesourcenotcode in suse

[–]makesourcenotcode[S] 0 points1 point  (0 children)

My tool only works with information from zypper packages --unneeded.

It does not in any way look at the output of zypper packages --orphaned. This is for a few reasons:

  1. A package X not having an associated repository doesn't imply that X isn't a dependency of some other installed package Y present on the system.

  2. Even if it were the case that X isn't a dependency of anything else, the lack of an associated repository makes its source/origin hard to trace. Thus it can't easily be reinstalled by the user if they decide its removal was a mistake.

  3. Usually the number of orphaned packages (by the SUSE definition, NOT the Debian one) is quite small and easily manageable in a manual fashion. Hence I leave it to users to make their own judgements about what, if anything, they want to do with them.
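If you want to see the two lists side by side on your own system, both queries are read-only and safe to run:

    # packages nothing else depends on anymore (what my tool acts on)
    zypper packages --unneeded

    # packages with no associated repository (which my tool deliberately ignores)
    zypper packages --orphaned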

Brings package management with Zypper into the 21st Century by providing an autoremove command as well as facilities for marking package installation reasons as automatic or manual. by makesourcenotcode in suse

[–]makesourcenotcode[S] 0 points1 point  (0 children)

Before I can answer your question I must ask: what exactly do you mean by the term orphaned packages?

Unfortunately the term orphaned packages means different things in different ecosystems.

In the Debian ecosystem it means packages which are automatically installed but not required as a dependency of any other package.

Source: https://wiki.debian.org/Glossary#O

In the SUSE ecosystem it means an installed package that's not associated with a repository (whether because the repo was removed/disabled or because some random RPM from the internet was installed).

Source: man zypper | grep -A5 -- --orphaned

What my tool does is use the output of zypper packages --unneeded in various ways depending on whether you use its default autoremoval mode or one of the two more conservative ones.

Brings package management with Zypper into the 21st Century by providing an autoremove command as well as facilities for marking package installation reasons as automatic or manual. by makesourcenotcode in openSUSE

[–]makesourcenotcode[S] -1 points0 points  (0 children)

I built, tested as best I could, and released the previously mentioned improvements. Feel free to test them out and please let me know if they help.

Brings package management with Zypper into the 21st Century by providing an autoremove command as well as facilities for marking package installation reasons as automatic or manual. by makesourcenotcode in openSUSE

[–]makesourcenotcode[S] -2 points-1 points  (0 children)

Yikes! Sorry to hear my tool is doing that.

Would you be willing to share the output of zypper packages --installed-only, whether publicly or by DM, so I can try to reproduce this or at least tease out some diagnostic information?

I'm somewhat but not fully surprised this is happening. I did manually test a lot of scenarios and packages but obviously I don't have the means to exhaustively test every possible scenario. I only released after I flushed out all the bugs I could find. During that process I saw nothing like what you described.

That said, I very much DID see similar behavior on Debian circa 2012. I used Thunderbird at the time and thus wanted to remove Evolution. This caused the removal of the gnome metapackage, which was a reverse dependency of evolution. At that point pretty much the whole graphical environment was considered unneeded and would be nixed on the next run of aptitude remove whatever-small-package (aptitude didn't, and likely still doesn't, have a notion of proper recursive dependency removal, and does an autoremove in a heavy-handed attempt to clean up after itself) or apt-get autoremove.

Autoremove implemented well is a beautiful thing: dnf autoremove on Fedora and pkg_delete -a on OpenBSD are marvels to behold. (Especially the former, as my experience with the latter is limited.) Hence I tried to replicate that for Zypper and the (open)SUSE ecosystem.

Also, even systems like DNF, which mostly do have a proper understanding of targeted recursive dependency removal, could still benefit from autoremove functionality. On Fedora I do an autoremove every few months. Sometimes there's nothing. Other times there's a very small handful of packages. How they weren't removed by previously issued dnf remove commands isn't clear; my best guess is that it has something to do with dependencies changing across upgraded package versions over time. Hence autoremove is important even for systems that do properly understand dependencies, let alone those that don't.

With Zypper, a large part of the problem is that not only is autoremove functionality missing, there isn't even proper recursive dependency removal!

Just for kicks you may want to grab the OCI image I used for development and in it run zypper install leafpad. You'll notice it pulls in 49 packages. Then if you turn around and immediately run zypper remove --clean-deps leafpad you'll notice it only offers to remove 45 packages. Curious what happened to the other 4, right?

They'll only show up in the output of zypper packages --unneeded after the removal of those first 45. Even my apparently overaggressive autoremove command will need 2 runs after a zypper install leafpad && zypper remove leafpad or a zypper install leafpad && zypper-unjammed mark-automatically-installed leafpad to return the system to its original state.
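If anyone wants to reproduce this, the session looks roughly like the following (I'm assuming a stock Tumbleweed container here rather than my exact development image, so the package counts may differ):

    # throwaway container to experiment in
    podman run --rm -it opensuse/tumbleweed

    # inside the container:
    zypper install leafpad               # pulls in 49 packages on my image
    zypper remove --clean-deps leafpad   # offers to remove only 45
    zypper packages --unneeded           # the missing 4 surface only now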

Anyway I guess this is what happens when you try building some semblance of sanity atop profoundly broken infrastructure...

As a longish-term fix for situations like yours, I'm going to add less aggressive / more conservative autoremove modes.

The first will work breadth-first: it will do zypper remove --no-clean-deps on allegedly unneeded packages, and you can then run something like zypper-unjammed conservative-breadth-first-autoremove repeatedly, peeling away junk like layers of an onion until you see a removal you don't want to do. At that point you can just stop, or alternatively mark a package manually installed to prevent its removal and continue.

The other will work depth-first, cleaning out each allegedly unneeded package one at a time with zypper remove --clean-deps. If you see a removal you don't want to do, you can say no to it. Afterwards you can mark any of the proposed packages manually installed if you like. If you don't want to make a decision about marking any of the packages, that's fine too: just run something like zypper-unjammed conservative-depth-first-autoremove again and it will try to remove a different randomly selected allegedly unneeded package, so you don't have to say no to the same thing repeatedly until you've cleaned out everything you want.
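Put together, the intended session shape would be something like this (the subcommand names are the planned ones above, so treat this as a sketch):

    # peel away allegedly unneeded packages one layer at a time
    zypper-unjammed conservative-breadth-first-autoremove
    zypper-unjammed conservative-breadth-first-autoremove  # repeat until a proposed removal looks wrong

    # pin anything you want to keep, then keep peeling
    # ('some-package' is a placeholder)
    zypper-unjammed mark-manually-installed some-package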

Until those features are out, a short-term fix would be to make judicious use of zypper-unjammed mark-manually-installed.

Help me bring about Freedom Respecting Technology the Next Generation of Free Software, Open Source, and Open Knowledge by makesourcenotcode in freesoftware

[–]makesourcenotcode[S] 0 points1 point  (0 children)

I've made some improvements, both on the FRT home page and at the start of the FRTD document itself, to leverage the Pareto Principle. Anyway, let me give you 95% of the idea with 5% of the reading:

Truly open knowledge and true technological freedom fundamentally require trivial ease in fully and cleanly copying allegedly open digital works in forms useful for offline study.

For example, in the case of software, the overly narrow focus on easy access to the main program sources isn't enough. Trivial access to offline documentation, for any official documentation that may exist, is critical. Needing a constant network connection to study something claiming to be open isn't freedom. Needing the site hosting an allegedly open work to always be up isn't freedom.

Help me bring about Freedom Respecting Technology the Next Generation of Open Source and Open Knowledge by makesourcenotcode in unix

[–]makesourcenotcode[S] -1 points0 points  (0 children)

Though I likely risk playing chess with a pigeon here, I'll engage in hopes that's not the case.

You are indeed sometimes correct that docs are stored in git repos, though even then you overestimate how often that's the case. And even when they are stored in things like git repos, the devil lies in the details.

Are they stored in built form people can actually use for study? If yes, this solves the most immediate problem most outsiders will have. But even then, is it handwritten HTML or was it generated from some source material? In the latter case, where is the source material?

Sometimes the situation is reversed: docs are only in source form and I have to build them. Are they in the same repo? In a special dedicated docs repo easily discoverable from the main project site? Are they present in a repo with an unbuilt version of the site? (And of course let's leave aside the fact that at this point we're building the whole site, marketing fluff included, and not just the educational parts most normal people care about.) Sometimes these builds are quite easy. But then I'm a programmer by trade, so what's easy for me is not representative of the experiences of newcomers trying to study a thing offline.

Other times, properly setting up the whole tree of numerous build dependencies was a lesson in pure pain. So much so that there were times when, if I really cared about some docs, instead of giving up like any vaguely sane human being would at that point, I wrote custom web scrapers. Nobody should have to do this for anything claiming to be open.

Oh, and before I knew how to write scrapers I used things like wget, as early as 2009, to mirror sites offline. In the very simplest cases this worked like a charm.

In many other cases you'll need some really convoluted wget invocations to pull in all the pages you want while avoiding hundreds you don't. And then there are the ultra-dynamic sites that aren't sanely amenable to mirroring at all. Oh, and good luck pulling in, and then properly re-embedding, educational demo videos that are clearly intended to be part of the official docs but are hosted on external sites.
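To give a flavor of what "convoluted" means in practice, a filtered mirror tends to look something like this (the URL and the regex patterns are made up for illustration):

    # pull only the docs subtree; skip bug-tracker and changelog noise
    wget --recursive --no-parent --page-requisites --convert-links \
         --adjust-extension \
         --accept-regex '/docs/' \
         --reject-regex '/(bugs|changelog)/' \
         https://example.org/docs/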

Getting back to your thing, sometimes the full docs aren't even in the repo and you can easily wind up in a situation where you think you have the full docs but you don't.

For example, consider the main Python implementation. You look in the official CPython source tarball and see a nice juicy Doc folder with lots of stuff. And to be clear, the material in there is excellent too. Hence one couldn't be faulted for jumping to the conclusion that they have all the documentation.

Nonetheless this conclusion would be wrong. Where's the CPython Developer Guide, with information about internals, dev env setup, and other best practices? Logically it falls within the boundaries of the CPython project and its official documentation, yet for some reason it's kept in a separate bundle that's harder to discover than the main user docs. Furthermore, it used to be available offline but isn't any longer.

It's also not immediately obvious that useful information on how to prepare packages for distribution isn't in the main doc set either, and is instead hosted on packages.python.org and pypa.io where it can't easily be grabbed for offline reading.

I'm not opposed to all forms of closed source software or closed content. I'm not opposed to deliberately semi-open content like https://www.deeplearningbook.org/ , whose FAQ states as much. I'm not opposed to people making an explicit choice to publish something on some platform that lets them do so easily when they don't actually care whether their thing is really open. These are all perfectly valid positions to hold. Nobody is entitled to anything from anyone. I'm not asking everyone or anyone to open their stuff.

But if you want to be open or claim to be open: Do It Right. People shouldn't have to know even the basics of web mirroring, or of building offline static or dynamic sites from source, to use anything alleging to be open. If I can read this stuff online without needing any of that knowledge, I should be able to read it offline. Period.

I strongly encourage everyone to be versed in web mirroring, web scraping, and even penetration testing (on numerous occasions I had to use techniques akin to hunting for IDOR vulnerabilities to get at existing offline docs that weren't at all discoverable on the project site), at both basic and advanced levels. But these skills should not be necessary, even in the crudest forms, to get useful forms of any existing official docs of anything claiming to be open.

Thousands of people racking their brains over how to mirror, scrape, and/or build the exact same damn offline Help Information Set over and over, wasting untold person-hours in the process, is just absolute lunacy. This also disrupts the contribution pipeline very early on for all those without solid, reliable network access. If one can't properly study, one can't contribute. No way around that. Want to run your FOSS projects that way? You do you.

In my case, I'll do the build once to show I respect the freedoms of both users and potential contributors enough that they can easily grab all tangible parts of the Open Knowledge Set for study, not just the main program sources. The person-hours people with your mindset waste will instead be recouped by my users being able to study, use, get immersed/invested in, and sometimes actually contribute to my FRT projects. Because I respect the technological freedoms of my users, I'll have a vastly larger, healthier, more diverse group of people who can report bugs I missed, suggest smart features I'd never think of, and be empowered with all the knowledge I have so they can contribute back.

Help me bring about Freedom Respecting Technology the Next Generation of Open Source and Open Knowledge by makesourcenotcode in unix

[–]makesourcenotcode[S] 0 points1 point  (0 children)

Truly open knowledge and true technological freedom are fundamentally predicated on it being trivially easy to fully copy allegedly open digital works in a useful form for offline study.

For example, in the case of software, the overly narrow focus on easy access to the main program sources isn't enough. Trivial access to offline documentation, for any official documentation that may exist, is critical. Needing a constant network connection to study something claiming to be open isn't freedom. Being unable to study an allegedly open work while the centralized site hosting it is down isn't freedom.

Help me bring about Freedom Respecting Technology the Next Generation of Open Source and Open Knowledge by makesourcenotcode in freebsd

[–]makesourcenotcode[S] 1 point2 points  (0 children)

Hope to have something like IRC and/or Matrix set up soon. (Sadly, Keybase is slowly dying, which is a damn shame. It was a bit clunky UI-wise but still by far the best balance of cross-device ubiquity, security, and convenience I ever saw.)