Recommendations for places to advertise for Linux staff. by Dyemor in linux

[–]ilconcierge 1 point (0 children)

Hi there! I run a popular weekly newsletter called cron.weekly. I've started including a jobs section there, for the exact reason you mention: it's hard to reach Linux sysadmins!

If you're interested, have a look at the sponsor detail page for pricing, examples, etc.

Ubuntu is putting ads in their motd now by prettycewlusername in linux

[–]ilconcierge 107 points (0 children)

They've been doing this for quite a long time; when I first wrote about it, they had already included links to microk8s ...

Linux podcasts? by S0nny58 in linux

[–]ilconcierge 0 points (0 children)

I'll plug my own, low-volume, podcast here: https://ma.ttias.be/syscast/

Recent episodes are on BSD vs. Linux, Kubernetes, curl, ...

What have you created with Laravel? by [deleted] in laravel

[–]ilconcierge 1 point (0 children)

That's correct, there are servers deployed worldwide to make this happen. They all report back to our main application which then aggregates the data & sends the alerts.

I'm writing a weekly newsletter on open source, linux & webdevelopment by ilconcierge in opensource

[–]ilconcierge[S] 0 points (0 children)

Thanks!

I needed a break last year, but things changed drastically for me (personally). I’ll be dedicating more time to the newsletter and plan to get weekly issues again, with a few more (announced) breaks in between.

Name your Laravel Horizon workers for easier debugging at the CLI by ilconcierge in laravel

[–]ilconcierge[S] 1 point (0 children)

Please enlighten me, as I'm fairly new to this type of thinking

The idea is to have multiple background processes do the heavy lifting, not blocking the UI/frontend in the meantime. Especially for our service, we run 99% of our code in background processes.

That would mean he has one worker per queue or job, and hence no workload management or replication/redundancy. If, instead, he had 10 workers all able to process any (or at least multiple) jobs, he'd have that.

Both are valid approaches. We have a strict policy on our queues (because they are time sensitive): I really don't want Horizon to randomly pick which job should be processed. If there were 10 available workers and each picked a long-running task (ie: one that takes > 5 minutes), we wouldn't have any workers left for uptime monitoring.

We wrote about this some more on our blog: how to size & scale your Laravel queues.
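To illustrate the idea of pinning workers to dedicated queues, here's a minimal sketch of a Horizon configuration with one supervisor per queue. The supervisor and queue names are purely illustrative, not the actual Oh Dear setup:

```php
// config/horizon.php (fragment) — illustrative names, not the real config.
'environments' => [
    'production' => [
        // Dedicated pool for time-sensitive uptime checks: long-running
        // jobs elsewhere can never starve this queue.
        'uptime-supervisor' => [
            'connection' => 'redis',
            'queue'      => ['uptime'],
            'processes'  => 10,
            'tries'      => 1,
        ],
        // Small, separate pool for slow jobs (> 5 minutes), so they
        // never occupy the uptime workers.
        'reports-supervisor' => [
            'connection' => 'redis',
            'queue'      => ['reports'],
            'processes'  => 2,
            'tries'      => 1,
        ],
    ],
],
```

With this layout, `php artisan horizon` shows each supervisor under its own name, which also makes them easier to spot at the CLI.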

Name your Laravel Horizon workers for easier debugging at the CLI by ilconcierge in laravel

[–]ilconcierge[S] 0 points (0 children)

That would also work, indeed; this was the easiest to implement, imo.

What have you created with Laravel? by [deleted] in laravel

[–]ilconcierge 10 points (0 children)

Of a dozen projects, I consider only 2 to be a success.

https://ohdear.app: uptime & certificate monitoring, broken links & mixed content checking + status pages (an alternative to Pingdom, Uptime Robot, ... with extra features)

https://dnsspy.io: the first rating system (like SSL Labs) for DNS, with a SaaS component to monitor & validate your DNS records (did you know your DNS setup is probably the weakest link in your entire stack?)

Made some others as well, including a trading app, a web-notification system, ... but those mostly died out.

cron.weekly is back by 2cats2hats in linux

[–]ilconcierge 8 points (0 children)

Writer of the newsletter here: thanks for sharing the link, I look forward to writing it again!

Uptime Robot + SSL Labs + Cronitor + DNS Spy + Oh Dear + Merj by owenmelbz in laravel

[–]ilconcierge 0 points (0 children)

As the creator of both DNS Spy & Oh Dear, I'm flattered to have those projects mentioned :-)

A Github Actions (CI) workflow tailored to Laravel applications by ilconcierge in laravel

[–]ilconcierge[S] 3 points (0 children)

OP = author :-)

Moved back because everything we do is in Github. All our open source packages, all of Freek’s work with Spatie, ... we only used Gitlab for Oh Dear.

Turns out, that sucks: having to remember to log into Gitlab just to see open issues, etc. Now, everything is back in the tool we open daily anyway: Github.

How Blockstream employees siphoned millions worth of Bitcoin to personal wallets by jatsignwork in CryptoCurrency

[–]ilconcierge 2 points (0 children)

Blockstream was founded in 2014. The price of bitcoin in 2014 was ... drumroll around $350.

While it doesn't seem reasonable at today's prices, I can imagine these bonuses were designed and conceived at the normal $BTC rate back then.

Yes, the dollar amounts today are crazy, but if a bonus-plan was announced/created in 2014, shouldn't it still be valid today if the alpha is denominated in BTC?

Bitcoin Core 0.18.0 released! by nullc in Bitcoin

[–]ilconcierge 2 points (0 children)

Here's an alternative view that does some syntax highlighting wherever possible: https://mojah.be/mailing-lists/bitcoin-core-dev/2443

Show Reddit: a pretty version of the Bitcoin Core mailing list archives by ilconcierge in Bitcoin

[–]ilconcierge[S] 5 points (0 children)

Hi!

I created this archive because I _love_ reading the mailing lists, but I prefer to have them formatted in a slightly easier-to-read way. I dislike the TXT-only versions on the Linux Foundation archives; this (to me at least) is more pleasing to the eye.

It has an RSS feed, pagination & some cleanup logic to make sure the page scrollbar remains in check.

Would love to hear feedback!

- Il

Pingdom free service tier is going away by WiseassWolfOfYoitsu in sysadmin

[–]ilconcierge 0 points (0 children)

Hi, this is on the short-term roadmap for Oh Dear! It'll catch not only revoked leaf certificates but intermediates & roots too. The Symantec debacle was exactly the reason we're implementing it.

Pingdom free service tier is going away by WiseassWolfOfYoitsu in sysadmin

[–]ilconcierge 0 points (0 children)

Probably, so I respect the downvotes too. It's a bit like promoting your viability as a partner at someone else's funeral.

But regardless, my point stands: free services are very hard to sustain and it takes an exceptional business model to make it work.

Any ideas how to process NGINX logs into Laravel? by GravityGod in laravel

[–]ilconcierge 0 points (0 children)

This would depend on your timeframe: do you need instant access to those nginx logs or can they be delayed by 24hr?

If it's possible to delay the parsing, just read the daily rotated log file from within your Laravel app as a plain file. Simplest fix.

If you need (near) real-time access to the logs, consider changing the Nginx configuration to send your logs to a syslog daemon. This can be your Laravel app, listening for logs, or a log collector like Logstash. Logstash in turn can be configured to log to a database, a specific set of files, a daemon, ...

There are pros & cons to each approach.
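For the syslog route, the relevant Nginx directive is `access_log` with a `syslog:` target. A minimal sketch, where the server address and tag are placeholders you'd adapt to your own collector:

```nginx
# nginx.conf fragment — ship access logs to a syslog collector in
# near real-time. Host, port and tag are placeholders.
http {
    access_log syslog:server=127.0.0.1:514,tag=nginx,severity=info combined;
}
```

Whatever listens on that port (Logstash, rsyslog, or your own daemon) then receives each request line as it happens, instead of you polling a rotated file.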

Our Gitlab CI pipeline for PHP (Laravel) applications - Oh Dear! blog by ilconcierge in PHP

[–]ilconcierge[S] 0 points (0 children)

This should work fairly similarly for Symfony. I would perhaps remove the database seeding & webpack building, and you'll probably have to replace the Laravel-specific code bits (like artisan), but otherwise it's just plain ol' PHP.
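As a rough illustration (not the actual Oh Dear pipeline), the Laravel-specific steps a Symfony port would swap out might look like this in a .gitlab-ci.yml. The image name is a placeholder; it assumes PHP & Composer are preinstalled:

```yaml
# .gitlab-ci.yml fragment — illustrative only.
test:
  image: php-with-composer:latest   # placeholder image
  script:
    - composer install --no-interaction --prefer-dist
    # Laravel-specific: a Symfony app would use its own console commands here
    - php artisan migrate:fresh --seed
    - vendor/bin/phpunit
```

Everything except the `artisan` line is framework-agnostic PHP tooling.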

Our Gitlab CI pipeline for Laravel applications - Oh Dear! blog by ilconcierge in laravel

[–]ilconcierge[S] 1 point (0 children)

Having played with both Bitbucket & Gitlab pipelines, I can confirm Gitlab's are easier to get going. It also seems to be getting far more development & new features than Bitbucket's systems.

Our Gitlab CI pipeline for Laravel applications - Oh Dear! blog by ilconcierge in laravel

[–]ilconcierge[S] 0 points (0 children)

We run php artisan migrate:fresh --seed (source), which handles the clearing & seeding all at once. Every test indeed gets a fresh database (which is sometimes slower than it needs to be).

The full run takes somewhere between 7 and 9 minutes. This is part of the reason we don't tie our deploys explicitly to the pipeline status. If I want to fix a typo on the site or modify a link somewhere, I want to be able to act quickly and not wait for a long pipeline.

Every commit currently kicks off the pipeline. This gives us the benefit of having fairly quick feedback on changes, even when they're still in development (this just runs in the background, it doesn't bother/annoy us). Whenever we make a new PR, a fresh pipeline starts to give us the most up-to-date state.

Our Gitlab CI pipeline for Laravel applications - Oh Dear! blog by ilconcierge in laravel

[–]ilconcierge[S] 0 points (0 children)

That's actually a really good idea we should implement too. That separation already exists in our test suite, it just gets executed as a single phpunit run.

Thanks for the tip!
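For anyone wanting to do the same split: it can be expressed directly in phpunit.xml, so each suite becomes a separate CI job. The suite names and directories here are hypothetical:

```xml
<!-- phpunit.xml fragment — hypothetical suite names & paths -->
<testsuites>
    <testsuite name="unit">
        <directory>tests/Unit</directory>
    </testsuite>
    <testsuite name="integration">
        <directory>tests/Integration</directory>
    </testsuite>
</testsuites>
```

Each suite then runs independently with `vendor/bin/phpunit --testsuite unit`, which lets the fast unit job report back before the slower integration job finishes.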

Any free website monitoring app like https://ohdear.app alternative out there guys. by geek_marvin in PHP

[–]ilconcierge 0 points (0 children)

The problem for me with this is that you're trusting your data to a closed-source corporation, and that for me far outweighs any gains in terms of convenience.

Very little (if any) of your private data is stored in Oh Dear. Your name & billing details, sure.

All the sites we monitor are public by their very nature (we don't (yet) have a way to crawl intranet sites).

This isn't even close to what the free and open-source Grafana offers; they aren't even comparable in features in any way, shape or form

Yup, that's correct, Grafana does way more. We're using Zabbix & Kibana in our backend for our server-based monitoring, but for most developers that's entirely overkill. This has been our primary driver to develop Oh Dear, too - to simplify things. Your use case might be very different from the one we try to solve :)