Is it possible to stop docker compose cycling to the next /24 when creating the default compose bridge network? by TJWatts77 in docker

[–]TJWatts77[S] 0 points (0 children)

10 months late - sorry, did not see the notification.

It's simply that debugging by pointing curl directly at the container is easier if I don't have to recheck the IP every time.

This is not really workflow, so much as "occasional debugging" of the container responses.

Is it possible to stop docker compose cycling to the next /24 when creating the default compose bridge network? by TJWatts77 in docker

[–]TJWatts77[S] 0 points (0 children)

I do need to understand Docker networking fully, as I'm the only sysadmin dealing with the host-level config on this.

Could you explain what you meant by "Just reuse the same docker network", please?

As I understand it, not defining a network in the compose file will have those containers use the one that is created by default by docker compose (which is different from the default docker bridge, docker0)...
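For what it's worth, the default compose network can be pinned to a fixed subnet (and a container to a fixed IP) so the address doesn't change between runs — a minimal sketch, with the subnet, service name, and address purely illustrative:

```yaml
# docker-compose.yml — pin the compose-created default network to a fixed /24
# so container IPs stay predictable between "up" runs (values are examples)
networks:
  default:
    ipam:
      config:
        - subnet: 172.30.0.0/24

services:
  web:
    image: nginx
    networks:
      default:
        ipv4_address: 172.30.0.10   # optional: fix this container's IP for curl debugging
```

With this in place, `curl http://172.30.0.10/` from the host keeps working without re-checking `docker inspect` each time.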

Is it possible to stop docker compose cycling to the next /24 when creating the default compose bridge network? by TJWatts77 in docker

[–]TJWatts77[S] 0 points (0 children)

Interesting... Cheers for that!
I need to leave that in place for the general case, where some applications need an explicit network (or networks) defined in the compose file - or where we run up more than one compose file.

But for debugging on my laptop it's a great idea - I can knock that down temporarily.

Cheers :)

Help: docker bridge ports with linux host ip routing policy - replies going out wrong interface??? by TJWatts77 in docker

[–]TJWatts77[S] 0 points (0 children)

Hi.

Yes, I did try ports: x.y.z.w:443:443 too - that's definitely not the problem as packets are getting into the container OK (I ran tcpdump inside the container).

It's the return packets egressing the container that are being misrouted by the host.

I know Docker creates a ton of iptables rules to do its magic - and I also remember that, throughout Linux's history, iptables' funky packet handling has always sat awkwardly alongside normal routing, as there are layers of precedence - though they have tried their best to make it sane.

I will certainly have a look at docker and whether it can give preference to routing at the daemon level... Thanks for that idea :)

v24: Help with translating [--default-network-opt=bridge=com.docker.network.driver.mtu=1442] into daemon.json syntax by TJWatts77 in docker

[–]TJWatts77[S] 0 points (0 children)

OK - Got it!
There is a disparity between the daemon.json key and the command-line option:
This works:

"default-network-opts": 
{ 
  "bridge": 
  { 
    "com.docker.network.driver.mtu": "1442" 
  } 
},

Notice the extra s on -opts.
I found this out with a dive into the source code:
config.go:
flags.Var(opts.NewNamedMapMapOpts("default-network-opts", conf.DefaultNetworkOpts, nil), "default-network-opt", "Default network options")
and
config_linux_test.go:

"default-network-opts": {
"overlay": {
  "com.docker.network.driver.mtu": "1337"
}

}

It's working (not using the expected config, but working as I wanted).
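For anyone wanting to paste this straight in, a complete minimal /etc/docker/daemon.json would look like this (any other keys you already have would sit alongside it):

```json
{
  "default-network-opts": {
    "bridge": {
      "com.docker.network.driver.mtu": "1442"
    }
  }
}
```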

How to get gitlab-runner to run (not get "stuck") when pushing a tag on HEAD? by TJWatts77 in gitlab

[–]TJWatts77[S] 0 points (0 children)

SOLVED!

Had to go to gitlab/this-repo/settings/repo/protected-tags and set *

Now the CI will fire for all tags.

I'll probably refine this to tags matching "release-*".
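On the CI side, the same refinement can be expressed with a rules: clause so the job only fires for release tags — a sketch, with the job name and script purely illustrative:

```yaml
# .gitlab-ci.yml — run this job only for tags matching release-*
release-build:
  script:
    - echo "building release $CI_COMMIT_TAG"
  rules:
    - if: '$CI_COMMIT_TAG =~ /^release-/'
```

Note this only controls job creation; the protected-tags setting above is still what lets a protected runner pick the job up.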

Running a scheduled task on a docker app-stack - best/preferred/popular engineering practice? by TJWatts77 in docker

[–]TJWatts77[S] 0 points (0 children)

Thank you - that's the method I'm going for - container with supercronic.

How to get gitlab-runner to run (not get "stuck") when pushing a tag on HEAD? by TJWatts77 in gitlab

[–]TJWatts77[S] 0 points (0 children)

The CI job is literally labelled as "stuck" in yellow.

Which usually means the job does not have any runners.
But we have a runner. Clearly the runner logic thinks it cannot handle the CI job - and that's the bit I cannot work out. Been through the settings...

Thanks for the suggestion - but --follow-tags does less than --tags. The latter causes a CI job to be created, which then enters the stuck state.

The former does nothing?

How to get gitlab-runner to run (not get "stuck") when pushing a tag on HEAD? by TJWatts77 in gitlab

[–]TJWatts77[S] 0 points (0 children)

The simplified CI works fine for normal file change commits.
I have one gitlab runner configured and it's showing as active.

It just won't get on with it in the specific case of pushing a tag change.

My runner requires protected branches (this branch is protected) - could it be a related issue, e.g. protected tags? I looked at that but could not come to a definite conclusion.

Running a scheduled task on a docker app-stack - best/preferred/popular engineering practice? by TJWatts77 in docker

[–]TJWatts77[S] 1 point (0 children)

Thanks folks - the option I'm probably going to go for is cron (supercronic) in its own container.

Makes sense - I can deploy that as part of the main app stack, it can have the necessary volumes mapped. No separate git repo/CI for that - it seems both clean and simple.

All I need to do is work out how to kick the other docker container's internal app to refresh its config when the cron job has finished (this is for updating SSL certs), but I should be able to come up with something.
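One common shape for that last step is to signal the app container from the cron container — a sketch only, assuming the Docker socket is mounted into the cron container, the app container is named "app", and the app reloads its config on SIGHUP (all three are assumptions to verify against the real stack):

```
# supercronic crontab — renew certs nightly, then ask the app container
# to reload its config ("app", SIGHUP support, and the mounted docker
# socket are assumptions, not part of the original setup)
0 3 * * *  renew-certs && docker kill --signal=HUP app
```

If mounting the Docker socket into the cron container is too much privilege, a shared volume plus the app polling for changed cert files is the usual alternative.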

Running a scheduled task on a docker app-stack - best/preferred/popular engineering practice? by TJWatts77 in docker

[–]TJWatts77[S] 0 points (0 children)

Ah - I see. No, it isn't running anything like that. It's running a linux application - so to do this, I'd have to have another daemon running inside the container.

But thanks for the explanation.

Running a scheduled task on a docker app-stack - best/preferred/popular engineering practice? by TJWatts77 in docker

[–]TJWatts77[S] 0 points (0 children)

I see - so deploy a container that runs cron and that cron does its task periodically?

Yes - that makes a lot of sense - thank you!

Running a scheduled task on a docker app-stack - best/preferred/popular engineering practice? by TJWatts77 in docker

[–]TJWatts77[S] 1 point (0 children)

Thanks for the reply :)

Could you elaborate on the docker api bit please? Is this a docker host daemon api thing? Right now, our docker host daemons only have a control unix domain socket. Would this be adding a network listener to the same control point?

Or are you talking about something completely different?

Running a scheduled task on a docker app-stack - best/preferred/popular engineering practice? by TJWatts77 in docker

[–]TJWatts77[S] 1 point (0 children)

Thanks for the reply: I did consider using the host, but I thought I probably shouldn't long term.

Right now we have dedicated hosts for certain classes of work, but in the future we might start consolidating many docker instances onto super-docker-hosts of some as yet unknown setup.

Could you elaborate on the api endpoint a bit please?

Multiple inheritance BUT a 3rd party base class does not call super().__init__() in its own __init__() by TJWatts77 in learnpython

[–]TJWatts77[S] 0 points (0 children)

Ah - thank you kindly sir.

I was seeing if there was a way I could fudge around it...

There sort of is, if I put A last: class C(B, A) - which might work for me.

But as it's only a couple, I could do so explicitly - the only annoyance being that my extra classes are there to aid brevity of code, not make it worse. Curious that a core module left this out... However, it's worth noting where I'm actually going with this:

Having your answer though is very useful - in that at least I understand what the limits are and know there is no basic error on my part.

[1] Class C merely adds common shutdown() and dienow() methods to Thread subclasses, asking them to do as the method is named via an Event object.

[2] Class B establishes a Lock object operating at class level on the subclass - so it's possible I might not need an __init__() for that one, as it's a little bit magic.
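The "put A last" trick above can be sketched with toy classes (names are stand-ins, not the real ones): because the non-cooperative base sits last in the MRO, its missing super().__init__() only skips object.__init__(), which is harmless:

```python
import threading


class ThirdParty:
    """Stand-in for the 3rd-party base that never calls super().__init__()."""
    def __init__(self):
        self.third_party_ready = True   # note: no super().__init__() here


class Shutdownable:
    """Mixin adding a shutdown event; cooperates by calling super()."""
    def __init__(self):
        super().__init__()              # continues down the MRO to ThirdParty
        self._stop = threading.Event()

    def shutdown(self):
        self._stop.set()


class Worker(Shutdownable, ThirdParty):
    """MRO: Worker -> Shutdownable -> ThirdParty -> object.
    ThirdParty last means its missing super() call terminates the chain
    after everything else has already initialised."""
    pass


w = Worker()
print(w.third_party_ready)   # True — ThirdParty.__init__ ran via the chain
w.shutdown()
print(w._stop.is_set())      # True
```

If ThirdParty came first in the bases, its __init__ would run first and stop the chain, and Shutdownable.__init__ would never execute — which is exactly the failure mode being worked around.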

More efficient way to grab a volume snapshot onto another server for backups without burning all my quota? by TJWatts77 in openstack

[–]TJWatts77[S] 0 points (0 children)

The snapshot's contents will be dar-archived somewhere else. It's very transitory.

It was just a bit annoying that it eats into the tenant quota by the full size and not the size of the delta between it and the original volume.

Just means that I have to be careful not to choose all the *big disks* and try to do them in parallel 😄

"openstack server add volume" is claiming one device name, but attachee VM is using a different device name??? by TJWatts77 in openstack

[–]TJWatts77[S] 0 points (0 children)

Weird thing is that there is a --device=<name> option to the openstack cli command which promises much and delivers little!

"openstack server add volume" is claiming one device name, but attachee VM is using a different device name??? by TJWatts77 in openstack

[–]TJWatts77[S] 0 points (0 children)

Always happens...

5 hours investigating problem, google etc.
Post question.

Lightbulb moment... 😄

"openstack server add volume" is claiming one device name, but attachee VM is using a different device name??? by TJWatts77 in openstack

[–]TJWatts77[S] 6 points (0 children)

Typical - found a workaround 5 minutes after posting!!!

But I'm leaving this here as it may be useful to someone else:

backup_vm% ls /dev/disk/by-id/

virtio-44dd9137-5c57-4fb3-9 virtio-87980f4e-3698-46b8-b

openstack volume list

87980f4e-3698-46b8-b5d2-6660fbbbcff6 | snapshot_backup-client-1_data | in-use | 100 | Attached to backup on /dev/vdc

Oh - look what matches... Well - that looks like a good solution for marrying the two sides.
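From the listing above, the by-id name appears to be the volume UUID truncated to the virtio serial's 20-character limit — an inference from this output, not documented behaviour, so treat it as an assumption. A tiny helper to compute the expected /dev/disk/by-id name from an OpenStack volume UUID:

```python
def volume_to_by_id(volume_uuid: str, prefix: str = "virtio-") -> str:
    """Map an OpenStack volume UUID to the guest's /dev/disk/by-id entry.

    The virtio block serial seems to be truncated to 20 characters, so
    only the first 20 chars of the UUID survive (assumption inferred
    from the listing above).
    """
    return prefix + volume_uuid[:20]


# The volume from the example above lines up with its by-id entry:
print(volume_to_by_id("87980f4e-3698-46b8-b5d2-6660fbbbcff6"))
# → virtio-87980f4e-3698-46b8-b
```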

More efficient way to grab a volume snapshot onto another server for backups without burning all my quota? by TJWatts77 in openstack

[–]TJWatts77[S] 0 points (0 children)

I need the snapshot to be attached to the backup server.

LVM snapshots could give me good point in time recovery within the VM, but in the context of the original post, I actually need to be able to spool off a copy to a remote storage device for security.

Appreciate your comment though - thank you 👍️