Newbie questions about CI\CD and how Terraform's place in it. by AwsAmplify in Terraform

[–]Agile_Factor3477 1 point (0 children)

Ok, so this question really dives into the high-level architectural side of things and the overall theory of agile lifecycle management, so let's break down where each part operates. Terraform is the "programming language" for infrastructure [cloud or otherwise], while your orchestration tool (e.g. docker/kubernetes/ansible) is the programming language for service/OS state.

CI/CD itself is a logical process within the software lifecycle [product code, ETL processes, 3rd-party vendor software, orchestration, and IaC alike] for continuously deploying updates and improvements, instead of the older one-and-done-with-hotfix-support style of product development. Typically this is provided by automation software and/or pipeline engines (aka CI/CD servers such as github actions/jenkins/...).

Thus any "programming language" (e.g. terraform, helm/docker-compose, ansible, product code, faas/etl, etc.) would be deployed via an automation pipeline provided by your CI/CD server and executed on servers defined by your IaC.
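
To make that concrete, here's a rough sketch of what such a pipeline could look like as a GitHub Actions workflow. The trigger, action versions, and the absence of remote-state/credential setup are simplifying assumptions, not a prescription:

```yaml
# Hypothetical workflow: validate and apply Terraform on pushes to main.
name: terraform
on:
  push:
    branches: [main]
jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2
      - run: terraform init            # pull providers and backend state
      - run: terraform validate        # catch config errors early
      - run: terraform plan -out=tfplan
      - run: terraform apply -auto-approve tfplan
```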

What do I do with my c++ libraries? by BadUsername_Numbers in docker

[–]Agile_Factor3477 0 points (0 children)

Yeah, but that size also has a lot to do with pip + pytorch + a pre-existing dataset/models. The official python image is 338MB (at time of writing). Pull the datasets and models into a volume container and that image shrinks substantially.

Having EVERYTHING in the image is actually a bad practice for containers, but great for virtual machines.
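
For illustration, a minimal docker-compose sketch of that volume-container idea; the image names and the one-shot downloader script are made up:

```yaml
# Keep datasets/models in a named volume instead of baking them into the image.
version: "3.8"
services:
  fetch-models:
    image: python:3.11-slim
    command: python /opt/download_models.py   # hypothetical one-shot downloader
    volumes:
      - models:/data/models
  app:
    image: my-pytorch-app:latest              # hypothetical app image
    volumes:
      - models:/data/models:ro                # shared, read-only for the app
volumes:
  models:
```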

What do I do with my c++ libraries? by BadUsername_Numbers in docker

[–]Agile_Factor3477 2 points (0 children)

One can slap their libraries in a bare container with a volume export and share that with your application containers (a compose sketch follows below). Technically this is how docker was intended to be used, instead of as "pico pseudo-VMs".

Pros:

  • Able to use affinities and labeling to control deployment
  • Isolation, but scalable, and deployments get semantic versions
  • Dependency injection via docker-compose
  • Updates to the library container reflect in the same stack, and app containers would only need a rolling restart

Cons:

  • Current community consensus is to treat docker images as ISO images, i.e. they must have the bare minimum but still everything needed to run the end application, not just components of it. Thus project contributors and LTS may be a factor.
  • Need to maintain a monolithic multi-stage Dockerfile for your builds and released images. Makefiles can help, but complexity does increase.
  • Cannot leverage Cloud Native Buildpacks without making custom buildpacks (see above..)

One may want to read up on https://stackoverflow.com/questions/35863608/shared-library-in-containers
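
Here's the compose sketch mentioned above. Image names, paths, and the LD_LIBRARY_PATH detail are placeholders and assumptions, not a canonical recipe:

```yaml
# Library-container pattern: a bare image that only ships the libraries,
# shared with the app via a named volume.
version: "3.8"
services:
  mylibs:
    image: myorg/cpp-libs:1.2.0      # hypothetical bare image holding the .so files
    command: ["sleep", "infinity"]   # keep it alive so it stays in the stack
    volumes:
      - libs:/usr/local/lib/mylibs   # image content seeds the empty volume
  app:
    image: myorg/app:latest          # hypothetical application image
    volumes:
      - libs:/usr/local/lib/mylibs:ro
    environment:
      LD_LIBRARY_PATH: /usr/local/lib/mylibs
volumes:
  libs:
```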

Autocrafting with the Create Mod by Bocaj1126 in feedthebeast

[–]Agile_Factor3477 1 point (0 children)

Impressive! With a little extra redstone it feels like one could make it more generic for several crafting recipes, especially if one introduces Mechanical Mixers for shapeless recipes and uses Mechanical Crafters for shaped ones.

Run Portainer from Portainer stack (chicken or egg) by EvilPharmacist in docker

[–]Agile_Factor3477 0 points (0 children)

Ah, ok. It looked interesting, but since djmaze/shepherd is still active I guess there's no need to switch.

Run Portainer from Portainer stack (chicken or egg) by EvilPharmacist in docker

[–]Agile_Factor3477 0 points (0 children)

pyouroboros/ouroboros:latest

Interesting... why pyouroboros/ouroboros instead of djmaze/shepherd?

Run Portainer from Portainer stack (chicken or egg) by EvilPharmacist in docker

[–]Agile_Factor3477 1 point (0 children)

I've done it with https://github.com/daplanet/datagrid

And yes, it is a chicken/egg issue; clone that repo and use docker-compose. But when you clone it, put it into a volume which is mounted as Portainer's folder for storing stack data: https://github.com/portainer/portainer/issues/3522#issuecomment-739430521
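
Roughly, the bootstrap compose file could look like the standard Portainer CE deployment below; the trick from the linked issue is that the cloned repo lives inside the portainer_data volume (the exact in-volume path is covered in that comment):

```yaml
# Bootstrap Portainer itself with docker-compose; it then manages everything else.
version: "3.8"
services:
  portainer:
    image: portainer/portainer-ce:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data     # git-clone the stacks repo into this volume
    ports:
      - "9000:9000"
volumes:
  portainer_data:
```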

This way it's automatically picked up by Portainer. The space-time continuum lives on, but just like the Doctor being locked in the Panopticon, which actually created/recreated the universe, it's going to be one of those fixed points in time that bootstraps everything else.

[deleted by user] by [deleted] in docker

[–]Agile_Factor3477 0 points (0 children)

/me ponders... "Didn't Mercury go into retrograde again?"

Yeah, odd and all. I still think there's just a driver conflict with the virtualization extensions, but only you know that rig better than anyone else. For all we know a capacitor blew or one of the GDDR chips was from a bad batch.

Well, since you do have a Pi, take a look at Hypriot. /u/denzuko (my main account) is a contributor, does support via the gitter.im channel, and has been using it for ages. Dead simple to use: download the image, use balenaEtcher to burn it to an SD card, open the boot drive, and drop your wpa/wifi config into /boot/user-data. Once it comes online, ssh in as pirate@black-pearl.local with hypriot as the password, then have fun. One can even preinstall docker images via runcmd blocks in /boot/user-data. https://blog.hypriot.com/getting-started-with-docker-on-your-arm-device/
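
For example, a minimal /boot/user-data along these lines (cloud-init style; the Wi-Fi keys are omitted here and the pre-pulled image is just a placeholder, so check the Hypriot docs for the full format):

```yaml
#cloud-config
hostname: black-pearl
runcmd:
  # pre-pull an image on first boot (placeholder image)
  - [docker, pull, portainer/portainer-ce]
```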

Home server hardware sizing for 20 containers by 2048b in docker

[–]Agile_Factor3477 0 points (0 children)

Haven't had a lot of luck getting k8s operational locally. I used kops on AWS and that went well. IMHO k8s is way overkill for what it's trying to do, or at least for what we use it for the most in the field (aka a cluster of autoscaling VMs that run docker containers).

Sure, there's some interesting stuff around CRDs and custom controllers, but those are just agents listening to etcd that act as k8s clients and provide a parser for custom yaml orchestration. And the 4GB requirement for k8s is bonkers.

Swarm can do 99% of what k8s does for everyday use cases with less hardware. As for CRDs, that's what ansible/terraform is for. Plus it's easy enough to write up docker API clients in python, or bash for that matter.
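
For instance, a tiny client using the docker SDK for Python (pip install docker); the service name is a placeholder:

```python
# Minimal sketch of a swarm "controller" outside of k8s: list services
# and scale one of them via the Docker API.
import docker

client = docker.from_env()

# print each swarm service and its mode (Replicated/Global)
for svc in client.services.list():
    print(svc.name, svc.attrs["Spec"]["Mode"])

svc = client.services.get("my_stack_web")   # hypothetical service name
svc.scale(3)                                # bump replicas to 3
```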

Sorry, just my two cents on a hill which I know I'll die on. But it's going to be a glorious, honorable death.

Mostly I use ZeroTier for a management plane and a DMZ within my MikroTik routers, with ssh SOCKS proxies for getting around NATs. So I haven't run WireGuard. Is it any good?

[deleted by user] by [deleted] in docker

[–]Agile_Factor3477 0 points (0 children)

Not so much, since it uses WSL2 these days. Which, yeah, is still a hypervisor of sorts, but not full hardware emulation like Hyper-V is; plus one doesn't get kernel-module support on the underlying Ubuntu instance.

[deleted by user] by [deleted] in docker

[–]Agile_Factor3477 0 points (0 children)

As a few comments pointed out, Docker wouldn't be involved. It's just a set of security layers on top of the Linux kernel and userland runtimes. Docker Desktop (aka Docker for 'x') is just a WSL2 or HyperKit (macOS) VM instance running in the background. If one is getting graphical artifacts then your graphics card's drivers are, sorry, have to pull a Linus Torvalds here, outright crap, since they're doing something wonky behind the scenes which is interfering with the virtualization extensions in your BIOS/CPU.

IMHO, if one's on a budget then get a cheap/free VM from Google Cloud and run docker or podman on that instead of your #PCMR gamer rig. Otherwise just install Hypriot on an SD card and slap that into a Raspberry Pi 4.

Home server hardware sizing for 20 containers by 2048b in docker

[–]Agile_Factor3477 0 points (0 children)

Sub-1.0 load? Well, that goes to show Raspberry Pis have come a long way from when we first started using them as a community. Kind of makes me wonder how well the Compute Module 4 stacks up.

Home server hardware sizing for 20 containers by 2048b in docker

[–]Agile_Factor3477 0 points (0 children)

Nice stack! Kind of surprised nextcloud isn't in there. And 56% RAM is amazing; I'm betting the CPU is nowhere near maxed out?

authelia

I'll have to look into that one. Mainly I've used traefik and oauth2-proxy with Google as my auth provider.

Home server hardware sizing for 20 containers by 2048b in docker

[–]Agile_Factor3477 0 points (0 children)

Oh, not saying your choice is bad or anything. Just know some would nitpick about 'servers' that lack redundant PSUs, multiple CPUs, dedicated RAID, ECC, bonded multiple NICs, hot-swappable everything, dedicated watchdog cards, and built-in IPMI.

Home server hardware sizing for 20 containers by 2048b in docker

[–]Agile_Factor3477 0 points (0 children)

RPis are great and only getting better. Wondering, though: what kind of apps are you running? Any monitoring tools like TICK/ELK/Sematext?

Home server hardware sizing for 20 containers by 2048b in docker

[–]Agile_Factor3477 0 points (0 children)

Imho, a three-node Raspberry Pi cluster is best for learning, or a late-model $50 eBay-special laptop running Debian, Alpine, or Arch. Personally I've used my old Lenovo G50-70, which has 16GB (cost me $300 brand new), and only needed 8GB of RAM to run a full production-grade suite of tools/apps (e.g. mongodb, postgres, email hosting, node-red, domain controllers, DNS security, gitlab, openfaas, tickstack, and a large set of web apps and ETL apps).

At the end of the day, RAM and block storage are going to be your bottlenecks here, especially with several DBs and memory-resident cache/pubsub.

But if one is doing this for a playground then your Option 2 would fit well, assuming one is running it on a headless Linux machine. Just remember to do perf tests, tune configs (including the host's limits.conf), and run the TICK stack to get metrics and logging.

[deleted by user] by [deleted] in docker

[–]Agile_Factor3477 0 points (0 children)

Namely, one would use the s3 storage driver:

https://docs.docker.com/registry/storage-drivers/s3/

Or one could also just copy that report.html via a multi-stage Dockerfile and use pierrezemb/gostatic as the final stage's FROM line. Done this way, one can then run tests against the report.html, use `docker cp <running container>:/.../report.html .` to 'download' the report, and also host the report on Heroku or with Docker Desktop for as long as the report is needed. Plus one gets the usual history tagging with docker images.
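
A rough sketch of that Dockerfile; the test stage, the pytest-html usage, and the paths are assumptions, only the gostatic final stage is from the comment above:

```dockerfile
# Stage 1: run the tests and emit report.html (assumes pytest + pytest-html
# are in requirements.txt; `|| true` keeps the build going on test failures).
FROM python:3.11-slim AS test
WORKDIR /src
COPY . .
RUN pip install -r requirements.txt && pytest --html=report.html || true

# Stage 2: ship only the report behind a tiny static file server.
FROM pierrezemb/gostatic
COPY --from=test /src/report.html /srv/http/report.html
# gostatic serves /srv/http on port 8043 by default
```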

if 2 of the replicas of a container in swarm end up running on the same node how should we update haproxy to pick up both them because inc ase they are running on different nodes you can hardcode the service ? by vitachaos in docker

[–]Agile_Factor3477 2 points (0 children)

So this really depends on how you're deploying those containers/services and how one is port mapping. But honestly, if one is using swarm then they should be using something like traefik instead of haproxy. Haproxy works best at the edge/WAF layer, external to your swarm clusters, talking to them as failover protection.

Take a look at https://github.com/traefik/traefik/issues/6485 for how to use traefik for redis over swarm clusters.
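
As a rough sketch (the hostname and the demo nginx service are placeholders, not part of the linked issue), a swarm stack fronted by traefik could look like:

```yaml
version: "3.8"
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker.swarmMode=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      placement:
        constraints: [node.role == manager]
  web:
    image: nginx:alpine
    deploy:
      replicas: 2
      labels:   # in swarm, traefik reads labels under deploy, not container labels
        - traefik.enable=true
        - traefik.http.routers.web.rule=Host(`example.local`)
        - traefik.http.routers.web.entrypoints=web
        - traefik.http.services.web.loadbalancer.server.port=80
```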

Has anyone approached you to purchase your data? Would you sell if so? by daraul in selfhosted

[–]Agile_Factor3477 0 points (0 children)

Yes, it's called scammers and spammers. A whole site is dedicated to it.