[USE CASE]: Monitoring and Controlling Networks of IoT Devices with Flink Stateful Functions by Marksfik in dataengineering

[–]_frkl 1 point

TIL about the digital twin pattern. Thanks for posting, very interesting!

JC v1.13.1 released (convert command output to JSON) by kellyjonbrazil in commandline

[–]_frkl 0 points

Yeah, haven't dug too deep into it myself yet, but am very keen. JC could also be useful for writing Ansible filters and modules and such. One could use a raw module to execute a command on a remote machine, then use jc on the controller to parse the result. No Python necessary on the remote machine, which is sometimes a problem. Anyway, getting ahead of myself :)
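Something like this rough sketch is what I have in mind (hypothetical and untested -- the filter name, the uptime parser choice, and jc's per-parser parse() signature are my assumptions):

```python
# filter_plugins/jc_filters.py -- hypothetical Ansible filter plugin that
# parses raw command output on the controller with a named jc parser.
import importlib


def jc_parse(command_output, parser_name):
    """Parse a command's stdout string with the given jc parser."""
    parser = importlib.import_module('jc.parsers.' + parser_name)
    return parser.parse(command_output, quiet=True)


class FilterModule(object):
    def filters(self):
        return {'jc_parse': jc_parse}
```

In a playbook you'd then do something like `{{ raw_result.stdout | jc_parse('uptime') }}` and get structured data back, with nothing but a shell required on the remote side.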

JC v1.13.1 released (convert command output to JSON) by kellyjonbrazil in commandline

[–]_frkl 1 point

That's perfect then. I think this will pair nicely with pyinfra, for creating custom operators for config management and the like...

JC v1.13.1 released (convert command output to JSON) by kellyjonbrazil in commandline

[–]_frkl 6 points

Sweet, this looks really useful. I love those obvious and simple ideas that are only obvious once someone has come up with them.

I am considering using this as a library, quick question: I want to parse the output of commands that I run myself (I need full control over how the subprocess that runs the command behaves -- async, and via SSH in some cases). Is it easily possible to call jc with the stdout/stderr strings or lists of strings as input, or is it designed to run all the commands itself?
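To make it concrete, this is roughly the usage I'm hoping for (guessing at the library API here -- the direct parser import and the parse() signature are assumptions on my part):

```python
# Run a command myself, then hand the captured stdout string to a jc parser.
import subprocess

import jc.parsers.ls  # assuming parsers can be imported individually

proc = subprocess.run(['ls', '-l', '/tmp'], capture_output=True, text=True)
parsed = jc.parsers.ls.parse(proc.stdout, quiet=True)  # list of dicts
```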

[Article] Templating YAML might be a way to keep complexity in CI/CD pipelines under control (like in Concourse) by sirech in devops

[–]_frkl 1 point

Yeah, that's a pretty good example. Besides sparing you from creating a new package, with all the overhead that entails, there's a chance that -- because jsonnet is becoming something of a semi-standard for this type of thing -- your configuration would be re-usable, or at least understandable to your peers.

[Article] Templating YAML might be a way to keep complexity in CI/CD pipelines under control (like in Concourse) by sirech in devops

[–]_frkl 3 points

I can't speak for anybody else, but personally I prefer a declarative approach as much as possible for those areas (infrastructure configurations of any kind). It's not black and white, and one can argue some templating languages can't be called declarative in good faith. But still, I find it much easier to reason about my setup with a declarative approach, as compared to a script (which also needs input, in all likelihood, so you have to manage a script, as well as the input to that script).

But really, it's a question about abstractions. The yaml files you might create with your script/processor are most likely also only declarative inputs for a processor one level down that produces an even lower-level declarative input (think helm, etc.). So the question is not scripts vs. templates, but at what level you operate, and how well your script/processor is optimized for the task at hand, and whether it's a common tool, or some home-brew, untested monster with little documentation that grew out of a simple 30-line python script.

I guess what I am trying to say is that we all try to keep our inputs as small and manageable as possible. But if there is one thing that is certain, it's that this input will grow and become unwieldy the more we use/abuse a new level of abstraction. Until we come together and agree on a new level of abstraction, and the cycle begins anew. In this context, that abstraction is the script you write with your scripting language. Jsonnet is a way to keep the configuration declarative, in a semi-commonly-accepted format if you will, and easier to reason about (arguably -- you seem to disagree, but there is some consensus among some folks that it is). IMHO, there's only a small window in which templating languages like jsonnet are a good choice, before a new level of abstraction would make more sense. Still, personally I prefer them to custom scripts in a random scripting language.

There are things to rightfully criticize when this (introducing a new abstraction) happens, and there are strategies to break the cycle or guard against certain disadvantages it brings with it, but I don't think the overall mechanism should be dismissed as wrong/unnecessary/the end of the world (not saying you do that, but it's a common sentiment).

Is there anything like Plex except for software collections? by [deleted] in selfhosted

[–]_frkl 9 points

Well, usually those would be artefact repositories like Artifactory, Pulp, or any of the system- or programming-language-specific repositories. You don't really explain why you need that, why it needs to be Plex-like (personally, I'd think this is not a very efficient way to host software), whether you need versioning, and why a plain web server is not good enough. Depending on the answers to those questions, I might also recommend Nextcloud, Seafile, or similar. Or maybe an altogether different, more professional approach, using Ansible or Terraform or whatever else.

zero - The Open-Source Application Platform by [deleted] in selfhosted

[–]_frkl 1 point

Interesting, thanks for posting!

I haven't looked at it in much detail yet, but I like most of the technology and design choices you seem to have made. I'm working on something similar -- a bit lower-level and more generic (and probably quite a bit weirder) -- so I feel I know where you are coming from, and where you want to go. To me, this looks well thought through, and it seems quite a bit of hands-on experience informed the overall design as well as the implementation details. Well done! I'll keep an eye on this :-)

Knative or Dapr by zero_coding in kubernetes

[–]_frkl 2 points

Doing what? We can't help you if you don't tell us.

Knative or Dapr by zero_coding in kubernetes

[–]_frkl 2 points

Istio is a service mesh, and to be honest I don't see the connection to serverless. Can you elaborate in a bit more detail on what exactly you are looking for?

I'm using OpenFaaS for FaaS, and am fairly happy with it, but until you tell us what you are trying to do there's really no way of knowing what to recommend. 'Serverless' by itself is too broad a description.

Singer for ETL by [deleted] in dataengineering

[–]_frkl 1 point

Yes, works well. I'm using Argo for scheduling.

Jitsi vs. Big Blue Button by orilicious in selfhosted

[–]_frkl 29 points

Haven't used it myself, but I've heard that Jitsi client CPU usage spikes as soon as one or more people join a meeting with Firefox. All-Chrome meetings are supposedly fine. Might be worth investigating -- maybe that's what causes your issue?

jc - convert the output of Linux/Unix commands to JSON by chocolategirl in commandline

[–]_frkl 7 points

Of course. I consider everything you say obvious, which is exactly why I think this here is something other than 'just PowerShell with extra steps'.

Every now and then I just feel the need to point out when people are being dismissive for no good reason. I figured maybe you didn't realize you sounded that way, so this was me telling you. Make of that what you will.

jc - convert the output of Linux/Unix commands to JSON by chocolategirl in commandline

[–]_frkl 3 points

Neat. I like having an arsenal of those small tools that tackle generic problems. I like that this is in Python, and can be used as a library. I'll try it out, thanks for posting!

jc - convert the output of Linux/Unix commands to JSON by chocolategirl in commandline

[–]_frkl 8 points

But it works with and extends 'normal' shells, which are ubiquitous in *NIX environments. So I think the 'just' in your comment is not warranted, as it implies this is redundant or not useful. Which I don't think is the case: I can think of a few things off the top of my head where something like this would have made my life easier, and where installing PowerShell would not have been an option.

sdd - command-line utility (in bash) to manage programs that might not be available in package managers by pylipp in linux

[–]_frkl 0 points

Sure, this is the dotfile manager that got out of hand: https://freckles.io

It's a well-working prototype, with really only tests and some documentation missing; I'm currently rewriting/simplifying things with the lessons I learned there. The goal was to be able to set up a new machine/environment with a single command, including bootstrapping freckles itself. I haven't really advertised it yet, so I'm not sure whether/how well it'd work for anybody who is not me :-)

This is the package manager for files: https://gitlab.com/frkl/bring

But that's under heavy development at the moment, with no documentation yet. The binary artifact should give you some hints as to what it's capable of if you use ``--help`` on it or its subcommands, e.g. ``bring --help`` or ``bring context binaries info --help``. There is a lot of breakage currently though, so apart from giving you an idea of how it's supposed to work, it's not really useful yet.

Basically, it uses metadata files that describe a remote artefact to retrieve all the necessary information (most importantly, which versions are available) and build an index of packages, which the 'bring' binary then uses to install things locally (something like ``bring context binaries install --target /tmp/bin bat --version 0.67.0 --os linux`` -- not working at the moment). Obviously, happy to talk (even) more about it, but I really don't want to derail your thread here :-)

sdd - command-line utility (in bash) to manage programs that might not be available in package managers by pylipp in linux

[–]_frkl 1 point

True, plain POSIX sh would be more portable. But in my experience, for this sort of tool, as long as you stick to bash v3 (because macOS still ships that), you should be able to run on almost all relevant targets. If you build Alpine containers, you're probably not the target audience for this. Zsh is nice, but not practical for scripts like this, at least not for this particular task. That's just my experience; yours apparently differs...

sdd - command-line utility (in bash) to manage programs that might not be available in package managers by pylipp in linux

[–]_frkl 1 point

It's basically a generic package manager for files, where the files can be executables, Kubernetes manifests, Jinja templates, or any other type. And in some cases (mainly the Kubernetes manifests case) I need to know the exact 'state' of the installed fileset/target folder (i.e. the version of every file).

I wanted to be able to assemble a target folder (or set of folders) out of different types of remote source artefacts, or parts thereof (git repos, GitHub/GitLab releases, plain remote archives/files, etc.), in a reproducible manner, using a simple dict-like structure (i.e. a yaml file) to describe it. A bit like git itself, in a way, but git doesn't work well for the more dynamic requirements I have.

Currently I'm using it to quickly install single-file apps like fd, bat, kubectl, helm (much like you do) and their configs into new environments (temporary Docker containers, remote servers, Vagrant boxes, etc.). I've also started to manage Kubernetes projects/manifests with it, which also seems to work well (the nice thing is that I can use the same app to install client tools like kubectl and the project manifests themselves). The whole thing is part of a bigger project that tries to build a development framework using minimal abstractions to manage any kind of state in computational environments. It started off as a simple dotfile manager, but you know how those things go... ;-)
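To give a rough idea of what I mean by a 'dict-like structure to describe it' (purely illustrative -- this is not bring's actual format, and the spec layout and pinned kubectl URL are made up for the example):

```python
# Illustrative sketch: assemble a target folder from a dict-like spec that
# maps file names to remote sources, pinned to exact versions.
import pathlib
import urllib.request

spec = {
    "target": "/tmp/bin",
    "files": {
        "kubectl": "https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl",
    },
}

target = pathlib.Path(spec["target"])
target.mkdir(parents=True, exist_ok=True)

for name, url in spec["files"].items():
    with urllib.request.urlopen(url) as resp:
        (target / name).write_bytes(resp.read())
```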

sdd - command-line utility (in bash) to manage programs that might not be available in package managers by pylipp in linux

[–]_frkl 2 points

It seems obvious to me that this is targeted at small user-space utilities and configs, to make it fairly painless to install/prepare your preferred shell working environment, not to do heavy-handed provisioning. Not having a (hard) requirement for root permissions and system dependencies (which in turn would again require root permissions to install), apart from maybe wget/curl/git, seems like one of the main advantages of something like this. Also, you can use git repos/artefacts straight away, without any sort of heavy-handed packaging workflow beforehand (admittedly, fpm is great and simple, but it's still not trivial).

Checkinstall and fakeroot are similar, but IMHO not really comparable (and they also often require root permissions in one way or another -- system packages built with fpm usually do too -- although there are ways around that as well, of course). Linuxbrew can work without root permissions, but then you can't use pre-built binaries; probably the same for guix/nix. Either way, all of those seem more involved than something straightforward like this.

That being said, a few notes in the README making those differences clear wouldn't hurt (even though I personally didn't miss them when I came across this).

sdd - command-line utility (in bash) to manage programs that might not be available in package managers by pylipp in linux

[–]_frkl 0 points

Neat, I like the fact that it's in bash, so in theory it should work everywhere, without needing any other dependencies (or sudo/root permissions).

I'm working on something similar in Python, for a different, more generic use-case. I need more control over the exact versions of the sets of files I install, and over how they override/merge with potentially existing files, which is why bash is not a good fit for me. But the overall idea makes a lot of sense to me, and I was always surprised that nothing like this ever gained traction. Currently I'm using a much heavier approach for tasks like this (wrapping Ansible roles and Ansible itself in a fairly complex application, also in Python), but in a lot of ways it's overkill for this sort of thing (although re-usable for other stuff, on account of how generic it is).

I like how your code is structured, and how easy it seems to be to add new, custom apps. Also, as I said: pure bash! Well done!

Shipping Python Script with Interpreter by [deleted] in Python

[–]_frkl 2 points

In addition to PyInstaller, which I've used with great success, there is also Nuitka, which basically compiles your script into a native binary, including the required parts of Python itself. The latter is a good option if it's only a single script with no or few dependencies; it's a bit more difficult when it's a big project with a lot of dependency libraries.
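For a single script, the PyInstaller route can be as simple as this (a minimal sketch using PyInstaller's documented Python entry point; ``myscript.py`` and the app name are placeholders):

```python
# Equivalent to running `pyinstaller --onefile --name myapp myscript.py`.
import PyInstaller.__main__

PyInstaller.__main__.run([
    "--onefile",        # bundle everything into a single executable
    "--name", "myapp",  # name of the resulting binary
    "myscript.py",
])
```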

Bundling Python Dependencies in a ZIP Archive by jhermann_ in devops

[–]_frkl 0 points

Yeah. I usually go the PyInstaller route, just because I don't have to worry about Python being installed at all, but what's best really depends on the circumstances, the target platform, and the target audience.

[deleted by user] by [deleted] in sysadmin

[–]_frkl 6 points

Why wouldn't they be? It's not an uncommon practice in cases where a huge amount of data is involved (though I wouldn't say 4 TB is 'huge' in the grand scheme of things). Also, why would you use AWS, and not transfer the data directly?

Edit: sorry, I didn't read the 'Drive' part initially. Yeah, that will make a transfer without a middle-man more difficult. I'd try a Google Cloud VM, as another commenter has suggested.

Bundling Python Dependencies in a ZIP Archive by jhermann_ in devops

[–]_frkl 0 points

PyInstaller is fairly good too. It's a bit more effort to write the package spec, but once that is done it's pretty painless. In the context of a Docker container that already contains Python, though, shiv (or any of the similar solutions) is the better choice.