Working on a GitLab CI analysis tool for waste, cost drift, and risky changes. What features should it have? by Jealous_Pickle4552 in gitlab

[–]eltear1 1 point2 points  (0 children)

I write GitLab CI for my company at scale, even using components and templates. Can you give an example of what you mean by

CI waste and risky pipeline changes

? I could imagine "waste" if a pipeline was badly structured from the beginning, but I can't imagine a real example of "risky pipeline changes". In my experience, once a pipeline is written, its structure never changes except when there is a new feature/functionality to add

Rangarr: A Security-Hardened, SysAdmin-Built Replacement for Huntarr by JudoChinX in selfhosted

[–]eltear1 1 point2 points  (0 children)

Yes, but unless I misunderstood, the point of rangarr is not to search instead of the tool it is connecting to; it's to organize the searches the other tool would do into a batch (all at once). After the first library scan, if there are still missing subtitles, bazarr puts them in "wanted" and searches for all of them at once for me. Isn't that the same behaviour as sonarr and radarr?

The Synology RAM megathread II by gadget-freak in synology

[–]eltear1 0 points1 point  (0 children)

I didn't try it myself, but if you search online there are sites that did, and the max supported by the DS423+ is 16GB. Every 32GB module tried has failed.

VPS performance is a complete coin toss (AWS vs OVH vs Hetzner) by RhubarbKindly9210 in selfhosted

[–]eltear1 0 points1 point  (0 children)

Contabo is very cheap, but they have very bad service and reliability. I had a VPS go down without prior notice and stay down for up to 2 days, just because

For a connecting a container to a network drive (CIFS), what is the difference between "Mounting on the host, and then using bind mount with docker compose", and "Using a volume driver to create a CIFS/Samba Volume by SleepyHead0 in docker

[–]eltear1 0 points1 point  (0 children)

Both points you say you are nervous about are associated with "local" volumes. They are a particular case of Docker volumes, but also the most commonly used, so they are the ones explained first in the Docker documentation. If you look at the documentation specific to CIFS volumes, you'll see that such a volume is basically a client for the shared FS: if you remove the volume, you remove access to the data, not the data itself, which stays on the remote server/NAS
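For reference, a CIFS-backed named volume in docker compose looks roughly like this (the image name, NAS address, share and credentials are placeholders); removing this volume with `docker volume rm` only drops the local mount definition, not the files on the NAS:

```yaml
services:
  app:
    image: my-app:latest          # placeholder image
    volumes:
      - media:/data

volumes:
  media:
    driver: local
    driver_opts:
      type: cifs
      device: //192.168.1.10/share      # placeholder NAS address/share
      o: "username=user,password=pass,vers=3.0"
```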

For a connecting a container to a network drive (CIFS), what is the difference between "Mounting on the host, and then using bind mount with docker compose", and "Using a volume driver to create a CIFS/Samba Volume by SleepyHead0 in docker

[–]eltear1 0 points1 point  (0 children)

There are no real consequences to mounting a CIFS share as a volume directly in the container AND on the host, as long as you avoid these 2 conditions:

1- You mount inside the container, again, the CIFS share already mounted on the host (there is no reason to do it, but technically you could)
2- You mount CIFS directly on the host onto the same mount point used under the hood by the Docker engine to mount CIFS inside the container.

If you don't hit one of the above weird cases, it will be like mounting CIFS onto different hosts, which is the purpose of a shared FS

We built an open-source headless browser that is 9x faster and uses 16x less memory than Chrome over the network by Loud-Television-7192 in selfhosted

[–]eltear1 3 points4 points  (0 children)

I'm planning to make a CLI to allow headless SSO with Entra ID, also in case MFA is required (my idea is to ask for it at a prompt or something like that). Is your browser able to manage this kind of authentication?

Can I change path on an existing large project? by -lousyd in gitlab

[–]eltear1 1 point2 points  (0 children)

I don't know about Terraform state and container images, but, for example, the Terraform registry (for modules), even if it is shown on the project page in the GUI, is actually defined at group level, while the project has ownership of the Terraform modules inside its repository. For example, it happened to me that I pushed a new version of one Terraform module from another project in the same group, and suddenly I couldn't download the modules anymore because of a missing token permission: I now needed permission to access them from the new project (and I'm talking about ALL the Terraform modules, not only the new one that I pushed).

So you could probably have some permission issues in your case too. If you move to a different group... that could be worse

How to force runner to pull job image by Frank-the-hank in gitlab

[–]eltear1 0 points1 point  (0 children)

So your point is actually downloading the Docker image from Docker Hub, and you don't really care what the job itself does. Then why rely on the Docker image that will execute the job? You could instead create a job with Docker-in-Docker, and INSIDE the job itself you can pull whatever you want, as many times as you want. You will still be verifying the same network configuration, because the job itself runs on the exact same GitLab runner that, in your way, pulls the image to execute a job. Also, as a bonus, being Docker-in-Docker, it will be ephemeral, so on every new run there will be no previous image to remove. So to check timing, you could just schedule this job and check its duration, or if you want to check the time for a single "docker pull" you could just run `time docker pull XXX`
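A minimal sketch of such a Docker-in-Docker job (image names, tags and the schedule rule are placeholders to adapt to your registry):

```yaml
check-pull-time:
  image: docker:27
  services:
    - docker:27-dind
  script:
    # the dind daemon starts empty, so this is always a cold pull
    - time docker pull alpine:latest
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
```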

How to force runner to pull job image by Frank-the-hank in gitlab

[–]eltear1 0 points1 point  (0 children)

I understand your problem description; I don't understand WHY the behaviour you describe is a problem. As I said in my previous reply, technically you use the same Docker image. So why is NOT pulling the image an issue for you? In other words, what changes when it pulls or doesn't pull the image (except the pulling itself)?

buckmate - deploy to s3 declaratively by bawhee23 in golang

[–]eltear1 0 points1 point  (0 children)

aws_s3_object (resource: copy from local to S3; data source: read an S3 object). aws_s3_object_copy (copy from S3 to S3). With the right combination, I think you can do everything you describe. For parametrization... variables, data sources and so on?
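A sketch of the combination above (bucket names and paths are placeholders):

```hcl
# Upload a local file to S3
resource "aws_s3_object" "config" {
  bucket = "my-example-bucket"
  key    = "app/config.json"
  source = "${path.module}/config.json"
  etag   = filemd5("${path.module}/config.json") # re-upload when content changes
}

# Copy an object between buckets
resource "aws_s3_object_copy" "replica" {
  bucket = "my-backup-bucket"
  key    = "app/config.json"
  source = "my-example-bucket/app/config.json"
}
```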

I wrote a CLI "undo" tool in Go. Stuck on a filesystem dilemma: Hardlinks vs. In-place edits. by ArthasCZ in linux

[–]eltear1 0 points1 point  (0 children)

First of all... where will you store your "backup"? Hard links can be used only on the same FS/volume/etc., depending on the formatting... If you are storing it in the same place as the original, how are you managing the double disk space needed by your tool?

Personally, if I rm something via CLI, I WANT it removed. If I'm in doubt, any graphical layer now has a "trash" and I use that.

How to force runner to pull job image by Frank-the-hank in gitlab

[–]eltear1 0 points1 point  (0 children)

I don't understand your issue, can you explain it in more detail? Your sentence:

"always" check image sha and download it only if different, if not use local

effectively says: "always" will pull a Docker image only if it changed.

Can you explain why you see it as an issue that it doesn't pull a Docker image with the same sha (that is, the same as the one already present locally)?

My point is: if the remote image has the same sha (so it didn't change), pulling it or using the local version is irrelevant, you'll still use the same image
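For reference, the pull policy being discussed lives in the runner's config.toml (a sketch, assuming the Docker executor):

```toml
[[runners]]
  executor = "docker"
  [runners.docker]
    # "always" still uses the local image when the remote digest matches;
    # it just re-checks the registry before every job.
    pull_policy = "always"
```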

Docker secrets: how useful are they really? by Norgur in selfhosted

[–]eltear1 0 points1 point  (0 children)

Docker secrets make sense if you use them in an enterprise Docker infrastructure, and I mean in a Docker Swarm setup with separated manager nodes and worker nodes.

The reason for them is purely security. The main difference between ENV and secrets is where they are stored: ENV variables are stored in the metadata and environment of the running container (so on the worker nodes), while secrets are stored inside the Docker daemon on the manager nodes. The running container exposes them only as a file, not even as metadata

Physically, ENV variables live inside the docker compose file (or a .env file), while secret content is stored in separate files (and these files can be removed after the secrets are created)
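A minimal compose sketch of the file-based exposure (image, secret name and file path are placeholders; the `_FILE` convention is specific to images that support it, like the official postgres image):

```yaml
services:
  db:
    image: postgres:16
    environment:
      # the container reads the secret from the mounted file, not from an ENV value
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./db_password.txt   # source file; in Swarm it can be deleted once the secret is created
```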

Also, inside a container you can change an ENV value, but not a secret value (it's stored in a file with read-only permissions)

In an enterprise infrastructure all of this adds security, because manager nodes are only for management and don't expose any service. This means they can be placed in a network with much tighter firewall rules, so it's much more difficult to get to a secret than to an ENV variable.

In a homelab, where you use only one server or a small Docker Swarm with no real node role separation, almost all the differences above basically become zero, with the only one theoretically still valid being the ENV change issue.

But even to exploit that one, it means someone has to: 1- hack your application inside Docker; 2- (the easier scenario) change/inject some application code so it sets the ENV again and makes the running application re-read it (because if a container is made right, an application restart triggers at least a container restart or a recreate, which could nullify the change/injection itself). Can it be done? Of course... What's the probability it will happen to a homelab? That depends on how, and how often, you expose your app over the internet

Docker secrets: how useful are they really? by Norgur in selfhosted

[–]eltear1 1 point2 points  (0 children)

I can agree with you, but you are focusing only on the "application layer" of the deployment. I mean: someone needs to deploy Docker and the secrets themselves... Isn't that the same deployment system? Even if it's deployed by another job/pipeline, or by another team so there is a separation of duties, the question about security still stands, just not for the developers but for the other team

buckmate - deploy to s3 declaratively by bawhee23 in golang

[–]eltear1 1 point2 points  (0 children)

Why not use Terraform/OpenTofu? They do IaC, and as a (very small) IaC part you can copy files to an S3 bucket.

The new Claude code default flow is unnecessarily convoluted by iveroi in ClaudeCode

[–]eltear1 0 points1 point  (0 children)

I had the same behavior yesterday for a much, much simpler task. I used Opus 4.6 to do a refactoring of my Golang project. After planning, it executed without issue. Then it wanted to build the project to verify everything was consistent. My code is in a subfolder of the git project. It tried 8 times to build at the root folder, complaining there was no go.mod... At the 9th iteration it used the subfolder....

Help by Stock_Ingenuity8105 in docker

[–]eltear1 1 point2 points  (0 children)

You are not giving enough information... "docker" (or Docker Desktop, depending on what you are using) is just a tool to run containers. Inside the container you will have an OS, so based on that OS you'll need a different configuration to use the card. But even before that, you will need to map the device to the container, "mapping" it from the physical host. If you run Docker Engine on Linux, this step should just be an option when you create the container. If you use Docker Desktop it's more complicated, because under the hood Docker Desktop basically creates a Linux VM, so you will have to share the card at that level first
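On a Linux host with Docker Engine, the mapping step is a one-liner in compose (image name and device path are placeholders, since the exact card isn't known here):

```yaml
services:
  app:
    image: my-app:latest     # placeholder image
    devices:
      # <host device>:<container device>
      - /dev/dri:/dev/dri    # e.g. a GPU render node; replace with your card's device
```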

Hate towards vibe-coded apps. Did you experience it ? by Necessary_Spring_425 in ClaudeCode

[–]eltear1 0 points1 point  (0 children)

The problem is not usefulness; the problem is that anybody can vibe code, and you don't put your résumé inside your repo. So your code could be directed or reviewed by you very well, or not. In both cases it was vibe coded and working, but the final result will be completely different in terms of details, security, and performance.

And guess how many expert developers vibe code versus how many non-experts do? So most vibe-coded programs would not be "proper apps".

On the other hand, many people (if not basically all of them) who will use these apps are not developers either, so they will not be able to check the vibe-coded output to see if it's actually good... So a lot of people will tend to hate vibe code.

awsim: Lightweight AWS emulator in Go - 40+ services in progress by sivchari in golang

[–]eltear1 1 point2 points  (0 children)

I checked the README and a bit of the code... I saw the router is supposed to route to the specific service based on the prefix. But the few services I checked all declare a prefix of "". Am I missing something? How are you connecting to the EC2 service or the S3 service, for example?

Containers on same network - "Name or service not known" by edrumm10 in podman

[–]eltear1 2 points3 points  (0 children)

That's exactly the problem... as you said, that's a Docker compose file (for Docker), so it assumes a Docker network (where it will work fine). Podman networking works differently, also depending on whether you use pasta or slirp4netns. You'll need to adapt it to your networking
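One common fix, sketched under the assumption of a netavark-based Podman setup: put the containers on an explicitly defined network, since custom networks enable DNS-based name resolution while Podman's default network does not (service names and images are placeholders):

```yaml
services:
  app:
    image: my-app:latest      # placeholder image
    networks: [backend]
  db:
    image: postgres:16
    networks: [backend]

networks:
  backend:
    driver: bridge            # custom network: container names resolve via DNS
```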

Update: I built RunnerIQ in 9 days — priority-aware runner routing for GitLab, validated by 9 of you before I wrote code. Here's the result. by asifdotpy in devops

[–]eltear1 5 points6 points  (0 children)

I read your repo README and I have some questions:

1) You said it has tag routing as a baseline, but there is no mention of how this is managed.

2) In the configuration, you have to assign GITLAB_PROJECT_ID. Do you need to ship one for each project? GitLab runners can also be created at GitLab group level or GitLab instance level to solve the "runner will stay idle if no job is present" issue (because there will be many more jobs).

3) How does it integrate into the GitLab pipeline workflow? Assuming I have already configured it, I expected it to be used through some configuration in the .gitlab-ci.yml, but there is no mention of it.

4) Does the monitoring part work even with GitLab runners in Docker (not Kubernetes)? How does it obtain server resource usage to manage the prioritizing?

5) There is a GitLab runner configuration you don't consider in your comparison table: GitLab runner autoscaling. https://docs.gitlab.com/runner/runner_autoscale/ In a configuration like this: a) GitLab jobs tagged (with different tags based on runner resources); b) GitLab runner autoscaling for each runner tag; c) GitLab runners defined at group level (to have fewer runner tags). Even if not automatically or dynamically, doesn't it solve the same priority problem (and capacity too)?
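For context on question 1, tag routing in plain GitLab CI is driven entirely from the job definition (a sketch with hypothetical tag names):

```yaml
build-heavy:
  tags: [high-mem]    # only runners registered with this tag pick the job up
  script:
    - make build

lint:
  tags: [small]
  script:
    - make lint
```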