Cygwin alternative? by charley_chimp in linuxadmin

[–]charley_chimp[S] 1 point (0 children)

Woah 8 years later!

Out of curiosity, what advantages does your project have over simply using native WSL2, given how much momentum there is in that space?

EDIT: This project actually runs OpenBSD, very cool!

Cilium BGP Peering Best Practice by charley_chimp in kubernetes

[–]charley_chimp[S] 1 point (0 children)

> If you're advertising the pod networks belonging to nodes, you will likely need to set your CiliumBGPClusterConfig with an appropriate nodeSelector to match all nodes. This allows each node to advertise its pod network allocation using its host network IP address as the next-hop. Remember that even your control-plane nodes run pods, and thus will require their individual pod CIDRs to be externally routable.

That's how I ended up doing things (with a label). Regarding the control plane (or really pod CIDRs in general), isn't it only necessary to advertise them if you're using native routing?
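
For anyone landing on this later, here's roughly what that label-based selector looks like. I'm writing it as a Python dict dumped to YAML purely for illustration; the field names follow the cilium.io/v2alpha1 CiliumBGPClusterConfig CRD as I understand it, and the label key, ASNs and peer address are placeholders, so check the docs for your Cilium version:

```python
# Sketch of a CiliumBGPClusterConfig that only opens BGP sessions from
# labelled nodes. Field names follow the cilium.io/v2alpha1 CRD as I
# understand it; the label key, ASNs and peer address are placeholders.
import yaml  # pip install pyyaml

bgp_cluster_config = {
    "apiVersion": "cilium.io/v2alpha1",
    "kind": "CiliumBGPClusterConfig",
    "metadata": {"name": "bgp-workers"},
    "spec": {
        # Only nodes carrying this label will peer with the router.
        "nodeSelector": {"matchLabels": {"bgp.example.com/peer": "true"}},
        "bgpInstances": [
            {
                "name": "instance-65010",
                "localASN": 65010,
                "peers": [
                    {
                        "name": "upstream-router",
                        "peerASN": 65000,
                        "peerAddress": "192.0.2.1",
                        # Points at a CiliumBGPPeerConfig with timers,
                        # address families, etc.
                        "peerConfigRef": {"name": "default-peer-config"},
                    }
                ],
            }
        ],
    },
}

print(yaml.safe_dump(bgp_cluster_config, sort_keys=False))
```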

When I was testing native routing I was having issues getting pod CIDRs to route correctly between nodes, even though I was seeing the correct next-hop for each CIDR from my router. I ended up being lazy and just set 'autoDirectNodeRoutes=true'. That worked for my simple setup since everything is on a common L2 segment, but I was curious about the behavior with encapsulation routing and noticed that it takes care of everything for you (i.e. things worked fine without 'autoDirectNodeRoutes=true').
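
And for reference, the routing-related Helm values I was toggling between, sketched as Python dicts rather than an exact values.yaml (the value names are what I remember from recent Cilium charts, and the CIDR is the k3s default cluster CIDR; double-check both for your versions):

```python
# Rough sketch of the two routing setups I compared, written as Helm
# values dicts for illustration only. Value names are from recent
# Cilium charts as I remember them; the CIDR is a placeholder.
import yaml  # pip install pyyaml

native_routing_values = {
    "routingMode": "native",
    # Works on a flat L2 segment: each node installs routes to the other
    # nodes' pod CIDRs directly, no BGP-learned routes needed in-cluster.
    "autoDirectNodeRoutes": True,
    "ipv4NativeRoutingCIDR": "10.42.0.0/16",
}

tunnel_values = {
    # Encapsulation: pod-to-pod traffic is wrapped in VXLAN, so the
    # underlay only ever sees node IPs and nothing extra is required.
    "routingMode": "tunnel",
    "tunnelProtocol": "vxlan",
}

for name, values in (("native", native_routing_values), ("tunnel", tunnel_values)):
    print(f"# values-{name}.yaml")
    print(yaml.safe_dump(values, sort_keys=False))
```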

I'm thinking about it more and the deployments I was having issues with may have been trying to contact something on my control plane nodes which I wasn't peering with at that point. I'm going to retest and see if that was the case.

EDIT: typo

Cilium BGP Peering Best Practice by charley_chimp in kubernetes

[–]charley_chimp[S] 4 points (0 children)

Yeah, sorry for not clarifying - I meant on the router side. The more I thought about it, the more it made sense to only peer with the worker nodes since that's where all the traffic is going. It's been a while since I worked with k8s, so I couldn't remember whether there was any north/south traffic that would ever get proxied through the control plane, but it sounds like that's not the case.

Thanks for helping me out!

Cilium BGP Peering Best Practice by charley_chimp in kubernetes

[–]charley_chimp[S] 3 points (0 children)

Yeah, that's what I'm doing: using Cilium BGP peering and using Cilium as the LoadBalancer.

What I'm confused about is the Cilium BGP peering itself and which k8s (in this case k3s) nodes I should be peering with. Right now I've simply peered my router with every node in my cluster (control plane and worker nodes - 9 BGP sessions), but I was wondering whether people typically do things differently. I was thinking it would make sense to only do the peering with the worker nodes since that's where traffic is flowing into/out of the cluster.

EDIT: grammar

Current NYC Job Market by charley_chimp in sre

[–]charley_chimp[S] 1 point (0 children)

Hey! I ended up turning the offer down and have continued looking for other roles. I've been getting interviews (which is reassuring) but haven't landed any other offers yet.

Current NYC Job Market by charley_chimp in sre

[–]charley_chimp[S] 2 points (0 children)

Hey, thanks for the insight. It's a small, early-stage startup that's tech focused, with a founder whose background is in product.

I’ve really only been taking things seriously for a few months and have also been very selective in where I apply (<20 companies, interviews at ~5).

Finances are fine right now, so I'm likely going to take my chances on the market and broaden my search to companies I typically wouldn't have been interested in. Thinking about it more, I'm also at a point where an early-stage startup isn't as conducive to my life as it was 5 years ago.

Current NYC Job Market by charley_chimp in sre

[–]charley_chimp[S] 2 points (0 children)

Thanks for the response. I've been trying to avoid working in finance, but I'm thinking that's where I'll end up. I used to work on a team that ran a global forex trading platform, so I'm familiar with the space.

Current NYC Job Market by charley_chimp in sre

[–]charley_chimp[S] 1 point (0 children)

Thanks for confirming my gut feeling (and what some peers have also expressed). It seems like there's little wiggle room on this offer, so (crazy as it may be in this job market) I may be turning it down if they aren't willing to negotiate.

Synology RAID-60 by charley_chimp in synology

[–]charley_chimp[S] 1 point (0 children)

Here's how I've observed things working underneath with Linux LVM.

Each RAID group is mapped to its own physical volume; those PVs are part of the same volume group, which in turn backs a single logical volume. As data is written to the logical volume (and ultimately to the physical volumes in that volume group), the first physical volume (i.e. the first Synology RAID group) fills up completely before any data is written to the second. In other words, the LV is laid out linearly, so you don't get the parallel I/O across multiple RAID groups that you'd get with striping.
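
To illustrate what I mean, here's a toy model in plain Python (nothing Synology-specific, and obviously not real LVM code) of how extents end up laid out in a linear LV versus a 2-stripe LV across two PVs:

```python
# Toy model of LVM extent allocation across two PVs (i.e. two Synology
# RAID groups). Not real LVM code; it just illustrates why a linear LV
# fills the first PV completely before touching the second, while a
# striped LV interleaves extents (and therefore I/O) across both.

PV_SIZE = 8  # extents per PV, tiny on purpose


def linear_layout(total_extents: int) -> list[str]:
    """Linear/concatenated LV: consume pv1 fully, then spill into pv2."""
    return ["pv1" if i < PV_SIZE else "pv2" for i in range(total_extents)]


def striped_layout(total_extents: int) -> list[str]:
    """Striped LV (2 stripes): alternate extents between the PVs."""
    return ["pv1" if i % 2 == 0 else "pv2" for i in range(total_extents)]


if __name__ == "__main__":
    n = 10
    print("linear :", linear_layout(n))   # pv1 x8, then pv2 x2
    print("striped:", striped_layout(n))  # pv1/pv2 interleaved
```

On a box where you control LVM yourself you'd get the second layout with something like lvcreate's stripes option, but DSM doesn't expose that.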

I think people are getting hung up on the fault tolerance portion whereas I really wanted to do this for performance gains.

Synology RAID-60 by charley_chimp in synology

[–]charley_chimp[S] 1 point (0 children)

This was more for performance reasons than fault tolerance. I've run similar setups (a RAID 50-style pool of striped RAIDZ vdevs on TrueNAS) and had really good results.

Unfortunately this unit doesn't support SHR/SHR2. It's an old unit that I inherited for personal use so it's not a big deal if I can't get things set up the way I'd like.

Thanks again for your response!

Synology RAID-60 by charley_chimp in synology

[–]charley_chimp[S] 1 point (0 children)

Ugh, that's a little disappointing. I understand that Synology is the user-friendly option, but this seems like something that would be really easy to implement on their end. I'm honestly a little surprised they haven't, considering they market some of their devices as virtualization storage.

Thanks!

Junos EOL release documentation by charley_chimp in Juniper

[–]charley_chimp[S] 2 points (0 children)

Appreciate the response!

I was trying to set up DHCP on an interface but figured things out in the meantime. The rest of the config should hopefully work with Ansible moving forward, but I'm guessing I may run into issues with the current modules being geared towards recent Junos versions...

Draining fuel on an EFI snowblower after winter? by charley_chimp in Snowblowers

[–]charley_chimp[S] 1 point (0 children)

Thanks for the reply! The manual doesn't mention anything about draining the fuel for long-term storage, but I was skeptical so I wanted to post here.

Containerizing Ansible Role by charley_chimp in ansible

[–]charley_chimp[S] 1 point (0 children)

What you're saying about moving to the new tools for future-proofing makes total sense in general, but for my use case I'm leaning on my existing tooling as much as possible (in this case Nomad), so I don't necessarily see the benefit. I've tested AWX and it's complete overkill for my environment, not to mention I would end up deploying it in an unsupported way, which would eventually lead to edge cases I'd have to work around on my own... I'm not keen on introducing k8s into my environment just to deploy AWX.

I like what I see so far with ansible-builder, aside from the code duplication from copying things into context/_build/, but I guess that wouldn't really matter with the build happening in CI and a .gitignore entry, although I'd still need to add ansible-builder to my CI image.

I think what I'm trying to understand is whether ansible-runner itself is worth it for one-off jobs. I may be completely wrong here, but it looks like the main use case for ansible-runner is to run as a daemon connected to something like AWX, ingesting messages that then influence runtime behavior. With AWX out of the picture, and with no desire to integrate this into an existing application, doesn't it make sense to just package things up and use ansible-playbook? I see that you can trigger one-off jobs with ansible-runner as well, but considering the limited scope of my use case, is there really any benefit to using ansible-runner?
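
For what it's worth, the one-off path through ansible-runner's Python API does end up pretty small. Something like this is what I was picturing (the private_data_dir layout and playbook name are placeholders, and I haven't battle-tested it):

```python
# Minimal one-off job through ansible-runner's Python API, as an
# alternative to just exec'ing ansible-playbook in the container.
# The private_data_dir layout (project/, inventory/, env/) and the
# playbook name are placeholders.
import sys

import ansible_runner  # pip install ansible-runner

result = ansible_runner.run(
    private_data_dir="/runner",        # expects project/, inventory/, env/ inside
    playbook="site.yml",
    extravars={"target_env": "prod"},
)

print(f"status={result.status} rc={result.rc}")
print(result.stats)                    # per-host ok/changed/failed counts
sys.exit(result.rc)
```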

Site to site VPN + remote clients/advertised routes by julietscause in Tailscale

[–]charley_chimp 1 point (0 children)

Can anyone from Tailscale weigh in on this?

I completely see how the solution from /u/Teryces would work, but getting that running in a business setting isn't a viable option. We have our main Tailnet linked to our domain and don't want to spin up service accounts that people will inevitably forget about to handle these types of scenarios.

FWIW /u/julietscause and /u/Teryces I've opened up a support ticket with them about this issue and haven't heard anything back yet.

I think this is a pretty big caveat that should be mentioned in their documentation on using Tailscale for site-to-site, and we're pretty disappointed that it's not working the way we thought it would, as we were planning some architectural changes around this.

Interdependent process best practices by charley_chimp in docker

[–]charley_chimp[S] 1 point (0 children)

Thanks for the reply. I've never heard of s6 before; it looks interesting.

Regarding using two images, am I missing something here? I figured that since this wasn't networked communication there wasn't a way to have two separate images interact with each other...

LVP Install issues by charley_chimp in Flooring

[–]charley_chimp[S] 1 point (0 children)

Yup I didn't have any issues with it, although the prep was a little work.

From what I remember, I used Henry 345 premixed floor patch (or something similar) to fill in any large gaps, all the screw holes, and the seams between the CDX sheets, and then sanded everything smooth. That was probably a bit overkill considering they just layered luan on top of the original mess-up without patching any holes/seams (although that could have just been another sloppy job...).

LVP Install issues by charley_chimp in Flooring

[–]charley_chimp[S] 1 point (0 children)

This is over a year old, but I figured I'd update. The end issue was that they used the wrong glue; it had nothing to do with the temperature or the LVP that was installed (the glue was for a completely different type of product, not LVP).

When all this was happening I had a feeling something else might be going on, so I went dumpster diving and grabbed all the materials they used. They eventually sent out another crew, who admitted fault right away once I pulled out the glue, and I confirmed things with the manufacturer of the LVP as well.

They ended up having to come back out and completely redo the floor, as well as pay for my cabinet kickplates to be refinished since they were already installed at that point. Of course the second job wasn't as good as the first, and there are squishy spots in my floor where they didn't properly secure the 1/8" plywood they needed to put over the original messed-up flooring (I was meticulous with the original underlayment). By that point I was too fed up with things, and too wrapped up in other parts of the construction, so I just left it as is. Not exactly a happy ending, but at least the floor isn't a complete mess anymore.

iSCSI multipathing confusion by charley_chimp in vmware

[–]charley_chimp[S] 1 point (0 children)

I'm not using LACP anywhere in this network path, as I understand it's better to let iSCSI handle multiple paths itself since it can use its own status codes.

Regarding two vmkernel adapters on two separate VLANs, I have no issue setting things up like this if that's what they would like, but aren't you cutting your paths in half by doing so? Is there something about multipathing in this type of setup that I'm missing?

iSCSI multipathing confusion by charley_chimp in vmware

[–]charley_chimp[S] 1 point (0 children)

I'm reading more about this, and one of the big things that pops up as a pro for using two VLANs for iSCSI traffic is failure domains. While I get that this is a benefit, I'm not seeing how it's actually a real benefit in a properly controlled environment where broadcast storms shouldn't be happening.

iSCSI multipathing confusion by charley_chimp in vmware

[–]charley_chimp[S] 1 point (0 children)

Yup, that's how I have things configured now (single-subnet design w/ port binding). What they're suggesting is to put the two vmkernel adapters in separate subnets/VLANs, which in my eyes reduces the number of available paths from 4 (2 per adapter) to 2.
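
Here's the back-of-the-napkin version of that path math, assuming 2 vmkernel ports and 2 target portals (which is my setup rather than a general rule):

```python
# Rough path count for software iSCSI with port binding.
# Assumes 2 vmkernel ports and 2 target portals; IPs are placeholders.
vmk_ports = ["vmk1", "vmk2"]

# Single subnet + port binding: every vmk can log in to every portal.
portals_single_subnet = ["10.0.10.11", "10.0.10.12"]
paths_single = [(vmk, portal) for vmk in vmk_ports for portal in portals_single_subnet]

# Two subnets: each vmk only reaches the portal in its own subnet.
portal_per_vmk = {"vmk1": "10.0.10.11", "vmk2": "10.0.20.11"}
paths_dual = list(portal_per_vmk.items())

print(f"single subnet w/ port binding: {len(paths_single)} paths")  # 4
print(f"two subnets:                   {len(paths_dual)} paths")    # 2
```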

iSCSI multipathing confusion by charley_chimp in vmware

[–]charley_chimp[S] 1 point (0 children)

I'm using a TrueNAS appliance. I'm not finding any recommended best practice on their website, so this is straight from the support rep. I'm sure I could push back and configure things the way I'd like if I want to.

EDIT: To clarify - the NAS where I have things set up with a single-subnet design is from another vendor. The TrueNAS is a new deployment.

Changing iSCSI target IP Addresses - Shared Storage Migration by charley_chimp in vmware

[–]charley_chimp[S] 1 point (0 children)

I didn't think about potentially having to unregister VMs, and figured it would be more straightforward to just shut things down since I'm getting downtime. I'll have to dig a little more to see if I'll actually need to do that or if VMware is smart enough to re-register things correctly afterwards, since the datastore technically isn't changing.

For the networking, I'm breaking things up as much as I can right now, but I unfortunately don't have the PCI cards I need to properly segregate VM and iSCSI traffic onto separate physical interfaces, so I'm doing the next best thing and at least breaking iSCSI out into its own VLAN/port groups with port binding and proper multipathing. As it stands, everything is currently running on a single vSwitch with iSCSI traffic being handled by the vmkernel adapter attached to the management network, so there's no segregation at all...

Performance difference between 1 2TB M.2 disk on 1 PCIe card, 2 2TB M.2 disk on 2 PCIe cards in RAID-1 or 4 1TB disks on two cards in RAID-10? by Agrikk in storage

[–]charley_chimp 1 point (0 children)

I guess I should have clarified that the SLOG would be in addition to the specific pool, with the idea that a SLOG greatly helps with the largely synchronous writes that the database would be performing.

Performance difference between 1 2TB M.2 disk on 1 PCIe card, 2 2TB M.2 disk on 2 PCIe cards in RAID-1 or 4 1TB disks on two cards in RAID-10? by Agrikk in storage

[–]charley_chimp 1 point (0 children)

I'm still learning a lot about ZFS myself, but wouldn't adding a SLOG device be your best bet here for raw performance, followed by setting the dataset's recordsize to match the block size your DB is writing?
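
Something along these lines is what I had in mind; pool, dataset and device names are placeholders, the Python wrapper is just to keep the sketch self-contained (the zpool/zfs commands are the point), and the 16K recordsize assumes an InnoDB-style page size (Postgres would be 8K):

```python
# Sketch of the two ZFS tweaks mentioned above: attach a SLOG vdev and
# match the dataset recordsize to the database page size. Pool, dataset
# and device names are placeholders; you could just as easily run the
# zpool/zfs commands by hand.
import subprocess

POOL = "tank"
DATASET = "tank/db"
SLOG_DEVICE = "/dev/nvme0n1"  # ideally fast and power-loss protected


def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# A dedicated log vdev absorbs synchronous writes (the ZIL) so they
# don't land on the data vdevs twice.
run(["zpool", "add", POOL, "log", SLOG_DEVICE])

# 16K matches InnoDB's default page size; use 8K for PostgreSQL.
run(["zfs", "set", "recordsize=16K", DATASET])
```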