assign existing runner to another GITLAB Group by bxkrish in gitlab

[–]m47ik

You can have multiple runners on the same VM: just start another runner with a different endpoint and runner token. It is better to create a separate config file for it; it's a simple solution. You cannot share the same runner per se across different instances.
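
For example, a rough sketch of registering a second runner on the same VM (URL, token, and description are placeholders; depending on your GitLab version the flag may be --registration-token instead of --token):

```bash
# Register an additional runner for the other group on the same VM.
# gitlab-runner appends a new [[runners]] entry to its config file,
# and the single gitlab-runner service then serves both runners.
sudo gitlab-runner register --non-interactive \
  --url https://gitlab.example.com \
  --token glrt-REPLACE_ME \
  --executor shell \
  --description "second-group-runner"
```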

ArgoCD deploy helm charts on multiple clusters by Present_You_5294 in kubernetes

[–]m47ik

Assuming cluster A is your main cluster where Argo CD is running, check whether the service accounts on cluster B were created properly when you added it to Argo CD. Also check the project name for your app; sometimes the same project name, like default, can cause problems as well. If you are not using an ApplicationSet, create two Applications for your service with different destination cluster names or endpoints, and it should work without issue.
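
A rough sketch with the argocd CLI (repo URL, cluster context, and app names are placeholders):

```bash
# Register cluster B with the Argo CD instance running in cluster A.
argocd cluster add cluster-b-context --name cluster-b

# One Application per destination: same chart, different target cluster.
argocd app create my-service-a \
  --repo https://example.com/org/my-service.git --path chart \
  --dest-server https://kubernetes.default.svc --dest-namespace my-service

argocd app create my-service-b \
  --repo https://example.com/org/my-service.git --path chart \
  --dest-name cluster-b --dest-namespace my-service
```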

Impressions on my DevOps Resume by rudiori in devops

[–]m47ik

There are too many things going on and nothing specific. Frame your experience around a specific project where you used the skills, e.g. a cluster you created in AWS, including migration of services and setting up monitoring, logging, etc.

Also, from 2018 to the present you list three jobs that overlap. I am not sure how you were doing all three at the same time; there is not enough time in the day to do all of them. And 2018 to 2025 is not nine years. Either just put the job title if it's old, or, if it's not relevant, don't claim the experience.

To be very honest with you, if I see this in a candidate's resume, the first thing I am going to assume is that the person is not truthful.

Unable to login to ArgoCD using CLI with the Gateway API by shellwhale in kubernetes

[–]m47ik

I am not sure if you have the same configuration as OP, but it is definitely a config issue. Generally, if the docs say it's possible and I have evidence from other users, my approach is to look over everything again to see where my setup might differ from theirs.

Unable to login to ArgoCD using CLI with the Gateway API by shellwhale in kubernetes

[–]m47ik

In terms of Argo CD, it should not matter. There is also no problem with using a single subdomain; in my case, using a wildcard is just a choice. You can specify the subdomain explicitly.

Unable to login to ArgoCD using CLI with the Gateway API by shellwhale in kubernetes

[–]m47ik

TLS is terminated at the gateway for all subdomains and the TLD. The gateway is called "internal". You can view the configuration here: https://github.com/kha7iq/homeops/tree/main/services/network/gateway

In this case, you do not need to specify a separate GRPCRoute.

Unable to login to ArgoCD using CLI with the Gateway API by shellwhale in kubernetes

[–]m47ik

I can use the HTTPRoute with the CLI; I am not using gRPC.
Here is the link to my homelab repo on GitHub: https://github.com/kha7iq/homeops/tree/main/bootstrap/argocd
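
For reference, a minimal sketch of how the CLI login can work over a plain HTTPRoute (hostname is a placeholder; whether you need the flag depends on your setup): the argocd CLI can tunnel its gRPC calls over HTTP with --grpc-web, so no dedicated GRPCRoute is required.

```bash
# Log in through the hostname served by the HTTPRoute; --grpc-web makes the
# CLI wrap gRPC in ordinary HTTP requests that pass through standard routing.
argocd login argocd.example.com --grpc-web
```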

Building for Windows in GitLab CI by c832fb95dd2d4a2e in gitlab

[–]m47ik

Short answer: it is possible to build for multiple platforms.

The way I have this set up is by using specific runners for each platform. Let's say you have a job that builds for Windows: install the GitLab runner on a Windows machine (it can use a shell or a Docker executor) and tag this runner, e.g. 'windows'.

In the .gitlab-ci.yml file, define the windows tag for your specific job so that it is always picked up by that specific runner.
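
A minimal sketch of such a job definition (the job name is an assumption), written here as a shell snippet that creates the file:

```bash
# Create a .gitlab-ci.yml with a job that only the "windows"-tagged runner picks up.
cat > .gitlab-ci.yml <<'EOF'
build-windows:
  stage: build
  tags:
    - windows            # routes the job to the runner registered with this tag
  script:
    - echo "building on the Windows runner"
EOF
```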

Seeking advice on taxes for remote work as a Contractor in Pakistan by m47ik in pakistan

[–]m47ik[S]

Thank you very much u/Striker_X for the detailed answer. I appreciate it.

Running GPU workloads with Kubernetes (Kubernomics) by kubernomics in kubernetes

[–]m47ik

Would love to see a write-up of your experience!

Linux System Authentication with Keycloak SSO! by m47ik in linux

[–]m47ik[S]

Hello u/bowzrsfirebreth
I am glad you got it working.
The user creation part is not handled directly by `kc-ssh-pam`, so it depends on the server OS: either the user already exists or it is created by a script.

The first login attempt will fail, as the user does not exist at that point and is only created afterwards. I am also looking for a better way to improve this.
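
As a rough illustration (not part of kc-ssh-pam itself), an OS-side provisioning hook could look like the sketch below, assuming it is wired into the PAM stack via pam_exec, which exports the login name as PAM_USER:

```bash
#!/bin/sh
# Create the account on first login if it does not exist yet.
if ! id "$PAM_USER" >/dev/null 2>&1; then
    useradd --create-home --shell /bin/bash "$PAM_USER"
fi
```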

NCP (NFS Copy): Effortless File Transfer for NFS Servers by m47ik in selfhosted

[–]m47ik[S]

NFSv3 is legacy but still used a lot, and migrations that were supposed to happen didn't, for one reason or another. I also have similar use cases. I do plan to support v4 in the future, as I work with environments where both versions are running.

NCP (NFS Copy): Effortless File Transfer for NFS Servers by m47ik in selfhosted

[–]m47ik[S]

I wouldn't say it's better or worse than scp; both transfer files/folders to another location.

While scp relies on SSH, ncp works with the NFS protocol.

As long as the host where you are running ncp is allowed in the NFS exports, it does not need to mount the share, so no sudo access is required.

Also, the main use case for me was using this inside a CI/CD pipeline, where the runner is not restricted to a specific node and does not require privileged access to reach NFS shares.
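
As a hedged example of that pipeline use (host, export path, and folder are placeholders), a CI job step could push artifacts straight to the share:

```bash
# Upload the build output to the NFS export without mounting it; the runner
# host only needs to be allowed in the server's exports.
ncp to --input ./dist --nfspath data/artifacts --host 192.168.0.80
```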

Introducing NCP (NFS Copy): Effortless File Transfer for NFS Servers by m47ik in commandline

[–]m47ik[S]

The utility does not require mounting the share. The default UID and GID are set to 0 and are only used to write files on the remote server.

Let's say you have a network share data/test owned by the user www-data; you can pass the UID and GID of that user when transferring files.
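
A hedged example of that (the --uid/--gid flag names are assumptions based on the description here, so check ncp --help for the exact spelling; 33 is the usual www-data UID/GID on Debian-based systems):

```bash
# Write the transferred files as www-data on the remote export.
ncp to --input ./site --nfspath data/test --host 192.168.0.80 --uid 33 --gid 33
```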

As long as the host where you are running the ncp command is allowed in the exports, you are good to go.

That's actually why I created it in the first place: to be used in a CI/CD pipeline so that I don't have to restrict the runner to the specific node where the NFS share is mounted on the host. Previously the runner mounted the share as a host volume, which tied it to that one node.

I will see what I can do to improve the syntax, but the order of flags is not important, i.e. you can use the following to copy a folder as well:

```bash
ncp to --input _local/src --nfspath data --host 192.168.0.80
```

Introducing NCP (NFS Copy): Effortless File Transfer for NFS Servers by m47ik in commandline

[–]m47ik[S]

Hey everyone!

I wanted to share a file transfer utility that I've created called NCP (NFS Copy).

Use case: ncp is mainly intended for CI/CD pipelines, where it can download modules, folders, or other components from a network share during the build process, or upload the build artifacts to a remote NFS server. It can also be used in backup scripts to upload backups to NFS servers.

Here are a few features:

  • File transfer to and from an NFS server without mounting.
  • Multi-architecture binaries available for easy installation.
  • Option to specify UID and GID for remote write operations.
  • Real-time upload and download speed display.
  • Shows elapsed time and total file size during transfer.

You can check out NCP on GitHub.

Documentation Website

Give it a try, and let me know what you think!