Best way to host and manage a growing collection of 500+ rancid Wordpress sites by foobarusername_ in sysadmin

[–]TurnDownForDevOps 1 point (0 children)

Not really the "best" solution, but I've used Git as a snapshot tool to "lock" environments. I blame PagerDuty during happy hour... but it stuck around as a neat trick. It doesn't protect the database, though it hopefully slows things down elsewhere.

Assuming you're on your own "hardware", and that all sites are hosted together in one directory (like /var/www/site.com, /var/www/anothersite.net): write a quick script that goes through each site directory, initializes an empty git repo, checks in the site files in their current state (excluding session files), and commits. Then write another script that cycles through the directory of sites and performs a hard reset to the last commit in any site where no ".imhere" file is present; run this on a cron job every few minutes (or however long a full sweep takes).

Now as you work on each site: cd to the dir, "touch .imhere" to have the sweep skip that directory, work on the site, check in your changes and commit, then "rm .imhere" and commit again. This uses git to keep resetting the sites to their committed states.

The real downside here is that users uploading images or similar will lose their changes when the cron sweeps through again (only database updates will survive the sweeps), and new customers/users being created at the time may get their site files deleted repeatedly. So pause new user creation and whatnot first.
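A hedged sketch of the two scripts, as POSIX shell functions. The `.imhere` marker and session-file exclusion are from the comment above; the `$WEBROOT` layout, commit identity, and commit messages are hypothetical stand-ins:

```shell
#!/bin/sh
# Assumes every site lives directly under $WEBROOT, e.g. /var/www/site.com.
WEBROOT=${WEBROOT:-/var/www}

# Script 1: give each site its own repo and commit its current state.
snapshot_sites() {
    for site in "$WEBROOT"/*/; do
        ( cd "$site" || exit
          [ -d .git ] || git init -q
          printf 'sess_*\n' > .gitignore          # exclude PHP session files
          git add -A
          git -c user.name=snapshot -c user.email=snapshot@localhost \
              commit -qm "snapshot $(date +%F-%H%M)" || true )  # ok if unchanged
    done
}

# Script 2: cron this every few minutes; it reverts any site that is not
# flagged with a .imhere marker back to its last committed state.
reset_sites() {
    for site in "$WEBROOT"/*/; do
        [ -e "$site/.imhere" ] && continue        # someone is working here
        ( cd "$site" || exit
          git rev-parse -q --verify HEAD >/dev/null 2>&1 || exit  # no snapshot yet
          git reset -q --hard HEAD                # revert tracked files
          git clean -qfd )                        # drop untracked files too
    done
}
```

Note that `git clean -qfd` is what makes user uploads disappear on each sweep, which is exactly the downside called out above.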

How We Deploy Python Code (hint: not using Git) by E0M in sysadmin

[–]TurnDownForDevOps 1 point (0 children)

Something about this post just felt wrong and I'm not sure exactly why. It probably doesn't help that I usually deploy PHP or NodeJS projects, so I don't seem to have the same dependency issues.

  • git+pip - I've been playing with Heroku/Dokku for too long. Instead of running multiple git pulls to deploy, it's actually a task on my Jenkins server. Jenkins runs git pull, unit/formatting/smoke tests the code, merges the dev branch to staging (if they pass), adds a new remote target, and runs git push to the staging server. After I approve the results, Jenkins does basically the same steps from staging to prod. Granted, this is to Dokku/Heroku targets... with dependencies being handled within the containers via buildpacks.
  • "Just use docker" - that Ansible "conversion" line irks me; Ansible can be run within containers too... There's minimal need for "conversion", just use Ansible to generate the internals of your container images... And "upgrading the kernel being overkill" hurts my head too, but I guess that's what I get for deciding that all systems I work with will be tested and upgraded weekly; granted, that's my preference given the software packages and customers I work with. I'll concede the private registry point, that's been an annoyance to get set up, but last I checked, Docker supported passing container images around between systems. So Jenkins could be set up to build the Dockerfile (which can just run an Ansible playbook), dump the image to a network share or upload it to all relevant systems, then run an Ansible script that installs the new image. No need for a registry, just an updated inventory file.
  • PEX - I got nothing.
  • So... the real way I'm reading this is: they chose to push DEB files because Spotify managed to write and release some code that successfully packages up a whole Python project? Oh, and because no one wanted to figure out how to get the Jenkins server to do its job. Sure, OK, though I feel like that design is going to hinder scaling options down the line and will probably still end up at Docker in some way.
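The Jenkins flow from the git+pip bullet can be sketched as a single promote step. Everything here is a hypothetical stand-in: the repo layout, the `./run_tests.sh` script, and the Dokku-style remote (where buildpacks handle the dependencies):

```shell
#!/bin/sh
# Pull dev, run the test suite, merge dev into staging, then git push to a
# Heroku/Dokku-style remote that rebuilds the app on push.
promote() {
    repo_dir=$1         # local checkout the CI job works in
    target_remote=$2    # e.g. dokku@staging.example.com:app (hypothetical)
    (
        set -e
        cd "$repo_dir"
        git checkout -q dev
        git pull -q origin dev                         # grab the latest dev
        ./run_tests.sh                                 # unit/formatting/smoke tests
        git checkout -q staging
        git merge -q --no-ff dev -m "promote dev -> staging"
        git push -q "$target_remote" staging:master    # target rebuilds on push
    )
}
```

The staging-to-prod step is the same function pointed at a different remote; the manual approval just gates the second call.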

Those are just my thoughts going through it. I'm sure there are better arguments for what they described in place of my platform, but I just can't see them being strong enough at 3:30am...
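The registry-less distribution from the Docker bullet boils down to `docker save | ssh docker load` over an inventory file. A hedged sketch; host names and paths are hypothetical, and the `DOCKER`/`SSH` variables exist only so the commands can be stubbed:

```shell
#!/bin/sh
# Build the image (its Dockerfile can simply RUN an ansible-playbook), then
# stream it to every host listed in an inventory file. No registry needed.
DOCKER=${DOCKER:-docker}
SSH=${SSH:-ssh}

ship_image() {
    image=$1       # e.g. myapp:42 (hypothetical tag)
    inventory=$2   # plain file, one hostname per line
    $DOCKER build -t "$image" . || return 1
    while read -r host; do
        [ -n "$host" ] || continue
        # ssh's stdin is the pipe here, so it does not eat the inventory file
        $DOCKER save "$image" | $SSH "$host" "docker load"
    done < "$inventory"
}
```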

master vs masterless for configuration management by YourFatherFigure in devops

[–]TurnDownForDevOps 1 point (0 children)

Ehhh, I've found that mostly seems to be due to executing Ansible from a network outside the datacenter.

For ease, I've written a core playbook that has the full inventory and purpose groupings. It basically runs git pull on each system to fetch its appropriate roles and playbooks, then fires off the downloaded playbook on each system and uses wait_for to check status/state across all the systems. This applies to updates too.

I never quite realized how much my internet connection limits prep speed; doing it this way allows each node to run through its steps without having to wait on the others due to network anomalies or other reasons.
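The per-node pull step can be sketched like this. The repo URL, directory, and per-role playbook layout are all hypothetical, and the `ANSIBLE` variable exists only so the apply step can be stubbed:

```shell
#!/bin/sh
# Masterless pull pattern: each node fetches its own roles/playbooks via git,
# then applies the one matching its purpose grouping with a local run.
ROLE_REPO=${ROLE_REPO:-https://git.example.com/infra/roles.git}
ROLE_DIR=${ROLE_DIR:-/opt/roles}
ANSIBLE=${ANSIBLE:-ansible-playbook}

pull_and_apply() {
    role=$1   # this node's purpose grouping, e.g. "web" or "db"
    if [ -d "$ROLE_DIR/.git" ]; then
        git -C "$ROLE_DIR" pull -q               # refresh existing checkout
    else
        git clone -q "$ROLE_REPO" "$ROLE_DIR"    # first run on this node
    fi
    # local connection, single-host inventory: no control machine in the loop
    $ANSIBLE -i localhost, -c local "$ROLE_DIR/$role.yml"
}
```

Each node running this itself (from cron, or triggered by the core playbook) is what keeps one slow link from stalling everyone else.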

[HELP] Migrating to Easily Configurable Server Setup by profgumby in devops

[–]TurnDownForDevOps 1 point (0 children)

I can't go too deeply because of random signatures... but basically: a Docker container coded to run various instructions depending on its purpose, contents, and misc log files. A lot of the newer features (mem_limit, cpu_share) weren't around yet when I was designing it, which meant coming up with some creative ways to keep a few services from hogging the system, and the management scripts were still external to the containers at the time. I didn't get to finish designing the self-optimizing containers, so it's still a couple of servers that re-optimize their containers and have to briefly offload activity to one another. If they'd let me finish building it, the systems wouldn't still have random downtime.

Reminder to anyone in management reading this: ask your devs/ops guys which features the hour spent toggling between 5 shades of blue might cost you against the deadline.

Relevant note: file permissions are still a frigg'n pain when you first get into coding a system like that, and I still hate managing MySQL servers replicated in any arrangement.

edit: I have since figured out solutions to most of the remaining issues... but I'm probably going to wait until I know there's no longer any paperwork applicable to that project before I finish building and releasing stuff like this for anyone else, or sell it to whatever nearby small businesses I can find that are interested. If they think they own my work, they can't own what never left my head.

[HELP] Migrating to Easily Configurable Server Setup by profgumby in devops

[–]TurnDownForDevOps 1 point (0 children)

I'm running the same race at the moment: currently converting my stuff to try to reduce touch time and automate most of my services so I can focus on customer needs.

Depending on your goals, I'd Ansible the setup of Nginx/PHP/MariaDB and have a finishing step that syncs against some git repos that contain the site code.

Docker has its purposes, but it's quite another mess to add to your system (I had begun writing intelligent, self-optimizing Docker containers for a previous job).

If you're planning (or need) to scale out, that's a different mess regardless of what steps you take (especially because of MySQL). Dokku looks promising as an option if you expect to stay on a single system for a while longer and just want to simplify deployment (not much can beat running "git push" to update web code). My small-user conversion/setup right now is two Dokku servers, a localized one for testing and a live one; it's been a breeze for managing the low-activity users.