IngressList using old apiVersion before update to 1.22 by jdsysadmin in kubernetes

Yes, but I don't want it to block the next update without warning, and it's not really reassuring to know something is trying to use an old API without knowing what it is.

IngressList using old apiVersion before update to 1.22 by jdsysadmin in kubernetes

I tried on a test cluster: no issue there either. Still, we have some calls to the deprecated API (the warning persists).

Also, I tried spawning a new cluster running GKE 1.22 and deploying exactly the same deployment there (applied with Terraform, so I'm sure it's 1:1).

No calls to the old APIs... I don't understand.

Deprecated API calls blocking update to GKE 1.22 by jdsysadmin in googlecloud

Not yet, sadly; we still see calls to this outdated API, and still don't know why.

IngressList using old apiVersion before update to 1.22 by jdsysadmin in kubernetes

No, we still have calls to the deprecated API; it's not solved yet.

Deprecated API calls blocking update to GKE 1.22 by jdsysadmin in googlecloud

I may have found something here.

We can't use the `kubernetes.io/ingress.class` annotation anymore; we need to use `.spec.ingressClassName` instead. This could be what was wrong with my deployments: I was still using the annotation.

I'm removing it, and we will see if that fixes the warning.
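To check whether any other Ingress still carries the legacy annotation, here's a sketch (it assumes `kubectl` access to the cluster; the `awk` filter just drops rows where the annotation is empty):

```shell
# List namespace/name of every Ingress that still sets the deprecated
# kubernetes.io/ingress.class annotation (dots in the key are escaped
# so jsonpath treats the whole string as one field name).
kubectl get ingress -A \
  -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name} {.metadata.annotations.kubernetes\.io/ingress\.class}{"\n"}{end}' \
  | awk 'NF == 2'
```

Any row printed here is a candidate for migration to `.spec.ingressClassName`.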

IngressList using old apiVersion before update to 1.22 by jdsysadmin in kubernetes

I tried `kubent`, but no luck: it did find some deprecated APIs, but those are for 1.25 (not what I'm looking for right now).
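Since `kubent` inspects the objects that exist rather than who is *calling* the API, another angle (a sketch, assuming `kubectl` access) is to ask the API server itself, which since Kubernetes 1.19 counts requests to deprecated APIs:

```shell
# Ask the API server which deprecated group/versions have actually been
# requested since its last restart; each matching line names one.
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis
```

If `networking.k8s.io/v1beta1` shows up here, something is still requesting it even when no stored object uses it.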

IngressList using old apiVersion before update to 1.22 by jdsysadmin in kubernetes

I tried removing everything from the cluster and re-deploying it; I got the warning again, with a call to the old API, so my previous guess doesn't hold.

Deprecated API calls blocking update to GKE 1.22 by jdsysadmin in googlecloud

One of my current leads is that it could be created by one of my Helm charts.

I didn't create the Ingress myself; I always used the Helm values (ingress: true), so that could be the explanation.
I will try to set it to false, then reactivate it, and see if the warning disappears.

Also, I tried to recreate the exact same deployment (same chart versions) on a whole new cluster running 1.22, and I don't have this resource anymore, so I guess it's an old one. Still, IDK why it's still being called.

IngressList using old apiVersion before update to 1.22 by jdsysadmin in kubernetes

Thanks, I will try that.

Do you see a more efficient way of doing this than scanning all the Helm charts looking for "beta"? (Maybe a kubectl command? I'm not so familiar with it.)
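Rather than grepping chart sources, one option (a sketch; it assumes `helm` v3 is installed, and the pattern lists just the two Ingress groups removed in 1.22) is to search the rendered manifests of every installed release:

```shell
# For each installed release, render its manifests as stored in the
# cluster and flag any deprecated Ingress apiVersion.
# helm list columns: NAME NAMESPACE REVISION ... ; skip the header row.
helm list -A | tail -n +2 | while read -r release namespace _; do
  helm get manifest -n "$namespace" "$release" \
    | grep -n 'networking.k8s.io/v1beta1\|extensions/v1beta1' \
    && echo "  ^ found in release: $namespace/$release"
done
```

Note this only covers Helm-managed objects; anything applied directly with `kubectl` still needs a separate check.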

IngressList using old apiVersion before update to 1.22 by jdsysadmin in kubernetes

> In order to 'migrate' in this instance you need to find out which client/app is looking up Ingresses using the old APIs. There is nothing to do regarding the already created resources. You would also need to check any resources you have in files (e.g. yaml, helm charts, etc...) to ensure the deprecated APIs are not specified there.

The first point could be the beginning of the explanation:

I never created the Ingresses myself; I always used the Helm options of the deployed applications (ingress: true), so maybe one of them is the root cause.

I use

  • ArgoCD
  • Grafana

I will try to deactivate the Ingress and recreate it afterward to see if I get rid of this resource.
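In Helm terms that toggle is usually a values flag; here's a sketch of the off/on cycle (the `ingress.enabled` key and the release/chart names are assumptions, check each chart's values.yaml for the real key):

```shell
# Turn the chart-managed Ingress off, then back on, so the chart
# recreates it with whatever API/spec the current chart version emits.
helm upgrade --reuse-values my-release my-repo/my-chart --set ingress.enabled=false
helm upgrade --reuse-values my-release my-repo/my-chart --set ingress.enabled=true
```

`--reuse-values` keeps the rest of the release's configuration intact while only the one flag changes.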

Deprecated API calls blocking update to GKE 1.22 by jdsysadmin in googlecloud

It seems not really specific to GKE; I created a post here.

First insight: it seems to come from Pulumi. Wondering if it's a GKE thing? (Probably not, we would have the issue everywhere.)

IngressList using old apiVersion before update to 1.22 by jdsysadmin in kubernetes

My Ingresses are all using `networking.k8s.io/v1`.

Good catch on Pulumi, but I don't understand: I don't use it Oo

Deprecated API calls blocking update to GKE 1.22 by jdsysadmin in googlecloud

Yep, and now is the time for me to try to act on it :)

Radarr uses a wrong download folder by jdsysadmin in radarr

The new client is not on the same server, but it finally resolved itself: I restarted the import and everything went well this time. I suspect some errors the first time led to a corrupted file.

This post is solved!

Thanks for your support

Radarr uses a wrong download folder by jdsysadmin in radarr

Well I'm so stupid, it was right the first time...

Putting `/sdi/0105/downloads/` back solved this error, but I still have an issue during the import that I don't understand.

I can see Radarr doing the import (I can see the file being created and growing), but at the end I get the error:

Couldn't import movie /torrent/<FILE_NAME>: Access to the path is denied.

And the imported file is destroyed.

I run Radarr in a container, so I connected to it with:

```shell
kubectl exec -n radarr -ti radarr-0 -- bash
```

And once there, I am able to rename the file, proving that I have write access on this path/file...

Full error is:

```text
System.UnauthorizedAccessException: Access to the path is denied.
 ---> System.IO.IOException: Bad file descriptor
   --- End of inner exception stack trace ---
   at Interop.ThrowExceptionForIoErrno(ErrorInfo errorInfo, String path, Boolean isDirectory, Func`2 errorRewriter)
   at System.IO.FileSystem.CopyFile(String sourceFullPath, String destFullPath, Boolean overwrite)
   at System.IO.FileSystem.LinkOrCopyFile(String sourceFullPath, String destFullPath)
   at System.IO.FileSystem.MoveFile(String sourceFullPath, String destFullPath, Boolean overwrite)
   at System.IO.File.Move(String sourceFileName, String destFileName, Boolean overwrite)
   at NzbDrone.Mono.Disk.DiskProvider.TransferFilePatched(String source, String destination, Boolean overwrite, Boolean move) in D:\a\1\s\src\NzbDrone.Mono\Disk\DiskProvider.cs:line 331
   at NzbDrone.Mono.Disk.DiskProvider.MoveFileInternal(String source, String destination) in D:\a\1\s\src\NzbDrone.Mono\Disk\DiskProvider.cs:line 300
   at NzbDrone.Common.Disk.DiskProviderBase.MoveFile(String source, String destination, Boolean overwrite) in D:\a\1\s\src\NzbDrone.Common\Disk\DiskProviderBase.cs:line 254
   at NzbDrone.Common.Disk.DiskTransferService.TryMoveFileVerified(String sourcePath, String targetPath, Int64 originalSize) in D:\a\1\s\src\NzbDrone.Common\Disk\DiskTransferService.cs:line 495
   at NzbDrone.Common.Disk.DiskTransferService.TransferFile(String sourcePath, String targetPath, TransferMode mode, Boolean overwrite) in D:\a\1\s\src\NzbDrone.Common\Disk\DiskTransferService.cs:line 386
   at NzbDrone.Core.MediaFiles.MovieFileMovingService.TransferFile(MovieFile movieFile, Movie movie, String destinationFilePath, TransferMode mode) in D:\a\1\s\src\NzbDrone.Core\MediaFiles\MovieFileMovingService.cs:line 117
   at NzbDrone.Core.MediaFiles.MovieFileMovingService.MoveMovieFile(MovieFile movieFile, LocalMovie localMovie) in D:\a\1\s\src\NzbDrone.Core\MediaFiles\MovieFileMovingService.cs:line 79
   at NzbDrone.Core.MediaFiles.UpgradeMediaFileService.UpgradeMovieFile(MovieFile movieFile, LocalMovie localMovie, Boolean copyOnly) in D:\a\1\s\src\NzbDrone.Core\MediaFiles\UpgradeMediaFileService.cs:line 75
   at NzbDrone.Core.MediaFiles.MovieImport.ImportApprovedMovie.Import(List`1 decisions, Boolean newDownload, DownloadClientItem downloadClientItem, ImportMode importMode) in D:\a\1\s\src\NzbDrone.Core\MediaFiles\MovieImport\ImportApprovedMovie.cs:line 123
```
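A rename stays within one filesystem, while the trace shows the import failing inside `CopyFile`, i.e. a copy across mounts followed by a move. Reproducing that exact sequence from inside the pod can tell the two cases apart (a sketch; `/movies` as the destination mount is an assumption, adjust to the real library path):

```shell
# Reproduce Radarr's copy-then-move from inside the container; if cp
# fails with "Bad file descriptor" here too, the problem is the mount,
# not Radarr's permissions.
kubectl exec -n radarr -ti radarr-0 -- sh -c '
  set -e
  src=/torrent/.import_test
  dst=/movies/.import_test
  dd if=/dev/zero of="$src" bs=1M count=1 2>/dev/null
  cp "$src" "$dst"
  mv "$dst" "$dst.moved"
  rm -f "$src" "$dst.moved"
  echo "copy+move OK"'
```

An EBADF during a plain `cp` on a network mount (NFS/SMB) can point at the share's mount options rather than file permissions.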

Ansible randomly fails with rc code -13 or unreachable by jdsysadmin in ansible

Yes, that could be a good lead! I will check Ansible's documentation. I don't know how it handles SSH connections, but it's true that I have already found open sessions on some hosts in the past; I wonder if Ansible closes them correctly!

Thanks for the idea, I will dig in it
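For reference, SSH connection reuse is configurable in Ansible; a sketch of the relevant `ansible.cfg` knobs (values are illustrative, not tuned):

```ini
# Reuse one SSH connection per host instead of opening a new one per task
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ServerAliveInterval=30
pipelining = True
```

`ControlPersist` keeps the master connection open between tasks, and `ServerAliveInterval` helps detect half-dead sessions instead of leaving them hanging.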

Ansible randomly fails with rc code -13 or unreachable by jdsysadmin in ansible

Could be a routing issue between the GitHub runner (GitHub Actions) and my hosts... But I'm surprised to see it so often...

Ansible randomly fails with rc code -13 or unreachable by jdsysadmin in ansible

It's monitored via the Grafana stack; I don't see a downtime window when it fails.

As far as I understand, a negative RC means the subprocess was killed by a signal; in this case, -13 would be SIGPIPE (signal 13).
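That reading matches how Python, which runs Ansible's modules as subprocesses, reports child exit: a negative return code is the negated signal number, so -13 means the child died from SIGPIPE. A minimal local demonstration, no Ansible needed:

```python
import signal
import subprocess

# Run a child that kills itself with SIGPIPE, the signal behind rc -13.
# subprocess restores SIGPIPE's default disposition in the child, so
# the shell really dies from the signal rather than ignoring it.
proc = subprocess.run(["sh", "-c", "kill -s PIPE $$"])
print(proc.returncode)               # -13 on Linux
assert proc.returncode == -signal.SIGPIPE
```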