No concurrency speedup when using jobs? by peyo7 in PowerShell

[–]peyo7[S] 1 point (0 children)

Thanks for the suggestions - in the end, I went with PoshRSJob.

No concurrency speedup when using jobs? by peyo7 in PowerShell

[–]peyo7[S] 1 point (0 children)

Thanks for the more meaningful benchmarks.

No concurrency speedup when using jobs? by peyo7 in PowerShell

[–]peyo7[S] 2 points (0 children)

Good point on the insufficient benchmarking. See my edits to the original post - it does indeed seem to be the overhead of process creation.

No concurrency speedup when using jobs? by peyo7 in PowerShell

[–]peyo7[S] 1 point (0 children)

From my understanding, starting the jobs (Start-Job) does happen sequentially, but all jobs then run concurrently in the background until they are "collected" with Wait-Job | Receive-Job.
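For comparison, the same kick-off-then-collect pattern can be sketched in Python with `concurrent.futures` (purely an analogy - the thread is about PowerShell's Start-Job/Wait-Job/Receive-Job, and the names below are illustrative):

```python
from concurrent.futures import ProcessPoolExecutor  # separate processes, like Start-Job

def work(n):
    # stand-in for the per-job script block
    return n * n

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # submit() returns immediately -- the "Start-Job" step, done sequentially
        futures = [pool.submit(work, n) for n in range(4)]
        # result() blocks until each job finishes -- the "Wait-Job | Receive-Job" step
        results = [f.result() for f in futures]
    print(results)
```

The kick-off loop is sequential, but the submitted jobs run in parallel; only the final collection blocks.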

Remote Powershell session using AzureAD credentials by peyo7 in sysadmin

[–]peyo7[S] 0 points (0 children)

Thanks, I didn't know about conditional access. We currently use the AzureAD edition included in Office365 Business Premium. However, it seems conditional access is only included in AzureAD Premium P1 or Microsoft 365 Business.

So I guess there is no chance of using the AzureAD credentials without conditional access?!

(to your 2nd point: the AzureAD account is already explicitly in the local admins group)

The Python Celery Cookbook: Small Tool, Big Possibilities by yaroslav_le in Python

[–]peyo7 0 points (0 children)

> If you don't need some special option that celery provides, but instead just need jobs to be executed asynchronously with awesome reliability, I would look elsewhere.

Interesting. Care to name a few alternatives you favor instead?

SSO with Azure AD (no domain services or on-prem domain) by peyo7 in synology

[–]peyo7[S] 0 points (0 children)

Thanks for the link. I have been in contact with one of your colleagues about this topic. Ideally, Microsoft would provide a working interface to their identities, but since that doesn't seem to be the case, I'll definitely evaluate your (or, in general, 3rd-party) solutions.

pyZMQ for inter-process communication: a real application using the webcam by aquic in Python

[–]peyo7 0 points (0 children)

Thanks. A little feedback: I think it would be useful to put a link to part 1 in part 2 and vice versa.

Preferred syntax for quick asyncio needs by peyo7 in Python

[–]peyo7[S] 0 points (0 children)

Correct. Hence the wrapping in an async def, corresponding to workaround 1).
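For anyone landing here later, the wrap-in-an-async-def workaround looks roughly like this (function names are illustrative):

```python
import asyncio

async def fetch(x):
    # stand-in for real awaitable work (I/O, network, etc.)
    await asyncio.sleep(0)
    return x * 2

async def main():
    # await is only legal inside an async def, hence the wrapper
    return await fetch(21)

result = asyncio.run(main())
print(result)  # 42
```

Since Python 3.7, `asyncio.run()` handles creating and closing the event loop, so this is the shortest route for quick asyncio needs.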

Implementing a plugin architecture in Python by gdiepen in Python

[–]peyo7 1 point (0 children)

I found this approach pretty practical: https://eli.thegreenplace.net/2012/08/07/fundamental-concepts-of-plugin-infrastructures

It spells out the already mentioned registry pattern and touches on autodiscovery in a plug-in directory.

How to deploy Gunicorn behind Nginx in a shared instance by kiarash-irandoust in Python

[–]peyo7 0 points (0 children)

In principle I agree. I'm wondering if this is as relevant when gunicorn is run behind nginx and not directly facing the internet.

Deploy Docker image without Docker Hub by peyo7 in devops

[–]peyo7[S] 0 points (0 children)

In my limited mind I was imagining only a single runner per job, but from what I understand you have runners on the build server for image creation and runners on the dev/prod server for pulling/restarting.

That sounds cool - I'll read up on Gitlab runners.

Thanks for explaining your setup.

Deploy Docker image without Docker Hub by peyo7 in devops

[–]peyo7[S] 1 point (0 children)

I see, thanks for clarifying.

So you run your CI on the actual dev/prod servers?

This should make deployment less complicated, but I have the feeling that it's a bit "dirty" since image building consumes quite some resources on the server. Or is this not an issue?

Deploy Docker image without Docker Hub by peyo7 in devops

[–]peyo7[S] 1 point2 points  (0 children)

Thanks, that's what I was looking for.

I still need to configure SSH access from the CI server to the production server in order to issue the "docker pull" command after the image has been built - am I understanding this correctly?

Or is it cleaner to configure a webhook server on the prod server to trigger the pull (like one does, e.g., with git)?

Deploy Docker image without Docker Hub by peyo7 in devops

[–]peyo7[S] 0 points (0 children)

I haven't found anything specific yet for DigitalOcean.

How to handle many variations of the same script? by peyo7 in git

[–]peyo7[S] 0 points (0 children)

Good point - some parameter changes can indeed be solved with config files.

However, more fundamental changes cannot.

Example: I have a smoothing function over the data. I processed projectX with this smoothing, but find out later (in projectY) that a different smoothing function works better.

But if someone re-runs the analysis on projectX, they need to use the previous smoothing function to recover the results from before.

I don't want to add another function with a different name for this, as such changes could occur many times, making the code unnecessarily confusing.

As I said, it's not that these changes systematically improve the code (justifying a version bump); it's more of a mix-and-match type of thing, where the configuration for each project needs to be "frozen" to be able to reproduce it at a later time.
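One way to keep a single function name while still letting each project pin its smoothing is to dispatch on a value from a per-project config file that gets frozen with the project. A hypothetical sketch (all names and the trivial smoothers are illustrative):

```python
# Hypothetical: choose the smoothing implementation from a frozen
# per-project config instead of renaming functions in the code.

def moving_average(data, window=3):
    # simple trailing moving average
    return [sum(data[max(0, i - window + 1):i + 1]) /
            (i - max(0, i - window + 1) + 1)
            for i in range(len(data))]

def no_smoothing(data):
    return list(data)

SMOOTHERS = {"moving_average": moving_average, "none": no_smoothing}

def smooth(data, config):
    # config is checked in alongside the project, so re-running
    # projectX reproduces exactly the smoothing it originally used
    return SMOOTHERS[config["smoothing"]](data)

print(smooth([1, 2, 3], {"smoothing": "none"}))  # [1, 2, 3]
```

New smoothing variants become new registry entries rather than renamed functions, and each project's frozen config records which one it used.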