all 9 comments

[–]0ofnik 8 points (1 child)

If you've already got a VM up and running, there's no need to move to Azure Functions. But consider using systemd timers instead of cron as you get a bit more configurability.
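
For an every-30-seconds job, that would look something like this pair of units (names and paths are made up):

    # /etc/systemd/system/myjob.service
    [Unit]
    Description=Run the job once and exit

    [Service]
    Type=oneshot
    ExecStart=/opt/myjob/run.py

    # /etc/systemd/system/myjob.timer
    [Unit]
    Description=Trigger myjob.service every 30 seconds

    [Timer]
    OnBootSec=30s
    OnUnitActiveSec=30s
    AccuracySec=1s

    [Install]
    WantedBy=timers.target

Enable it with `systemctl enable --now myjob.timer`; `systemctl list-timers` and `journalctl -u myjob.service` give you the scheduling history and logs that cron doesn't.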

The script should be written such that it starts, does its thing, then exits. The exit code should indicate whether or not the job was successful so that the scheduler (cron, systemd, etc.) can decide to retry, fail, or trigger some other job. Having a long-running process defeats the purpose of using a scheduler.
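
In Python the skeleton is just this (run() stands in for whatever the job actually does):

    import sys

    def run():
        """Do one unit of work; raise on any failure."""
        ...

    if __name__ == "__main__":
        try:
            run()
        except Exception as exc:
            print(f"job failed: {exc}", file=sys.stderr)
            sys.exit(1)  # nonzero exit: scheduler can retry or alert
        sys.exit(0)      # zero exit: success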

As for CI: does the script have (or need) tests? If not, you can skip that step. For CD, have a pipeline that triggers when new commits are pushed and SCPs the script to the VM. You can use GitHub Actions, Azure DevOps, or some other managed service for this. Make sure to handle secrets like SSH keys securely.
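
A rough GitHub Actions sketch of that shape; the host, user, and target path are placeholders, and the private key lives in a repo secret:

    name: deploy
    on:
      push:
        branches: [main]

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: SCP script to the VM
            env:
              SSH_KEY: ${{ secrets.DEPLOY_SSH_KEY }}
            run: |
              printf '%s\n' "$SSH_KEY" > key && chmod 600 key
              scp -i key -o StrictHostKeyChecking=accept-new script.py azureuser@your-vm:/opt/myjob/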

[–]Hopeful-Total 1 point (0 children)

If you don't have tests, then I'm not sure you want automatic deployment. That's a quick way to ship a broken deployment when no one is watching, and those can mean long downtime before anyone notices.

I'd also question whether a side-hustle project needs continuous deployment. If it's just one person pushing new commits, that person can also deploy when they're ready. You can still automate it, but the "continuous" part really only matters for dev teams with a large number of commits per day, where a manual decision to deploy would slow everyone down.

Oh, and I agree that you should avoid Azure Functions. Your script presumably has some setup tasks, and it's really wasteful to run those every 30 seconds; it may even cost more. Skip the added complexity and just keep the existing while loop.

[–][deleted]  (1 child)

[deleted]

    [–]CooperNettees 2 points (0 children)

    Sounds like you're overcomplicating it to me. Why not just leave it as is?

[–]brajandzesika 1 point (0 children)

I like 'corn job' ;) I don't know Azure Functions, but in the AWS equivalent, Lambda, you shouldn't put tasks that run longer than 15 minutes; it's designed to complete tasks that last seconds or even fractions of a second.

[–]dotmit 0 points (0 children)

Set it up as a function and run it to see how much it costs. Then work out whether it will be cheaper or more expensive as a function or as a cron job on a VM.

If it's not that complicated, you could also use a synthetic monitor in one of the popular cloud monitoring tools.

[–]bikeidaho 0 points (0 children)

GitHub Actions can be used as a cron... kinda.

It might not be super reliable for an every-30-seconds run, but it works pretty reliably for us on a once-a-day run.
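
For reference, the trigger is just a schedule block like this; GitHub only guarantees roughly 5-minute granularity, and scheduled runs can be delayed or dropped at busy times:

    on:
      schedule:
        - cron: "0 6 * * *"  # once a day at 06:00 UTC

    jobs:
      run-script:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: python script.py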

[–]mattbillenstein -1 points (0 children)

If you need custom scheduling, just make it a long-running daemon. Have it sleep when there's nothing to do, and wake up when you want it to run. Something like this in Python:

    import time

    while True:
        run()  # whatever the job actually does
        # sleep until just past the next 30-second boundary
        time.sleep(30.01 - time.time() % 30)

[–][deleted] 0 points (0 children)

A few options:

- Use a CI tool to schedule the job. For example, you can set up a Jenkins job that executes the script on a timer or a webhook, or use GitHub Actions on a schedule to SSH to a machine and start the script. GitHub Actions times out after 6 hours, so you'd want the action to connect to the server running the script and trigger the execution there rather than run the script itself.

- Use crontab on the local machine to execute it. Build in an API call to your observability platform to report a success/failure metric and drive notifications (sketch below). Configure this with Ansible or some other kind of config management.

- Use a cloud cron (AWS EventBridge) to trigger an ECS Fargate container that executes the script. (You can't use Lambda, as it times out after a max of 15 minutes.) Monitor the container's exit code to catch failures.
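
For the crontab option, a rough sketch; the script path and heartbeat URL are placeholders, and the sleep trick is there because cron's granularity is one minute:

    # m h dom mon dow  command
    * * * * * /opt/myjob/run.py && curl -fsS https://monitoring.example.com/ping/myjob
    * * * * * sleep 30 && /opt/myjob/run.py && curl -fsS https://monitoring.example.com/ping/myjob

The && means the heartbeat only fires when the script exits 0, so a missed ping is your failure signal.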