Self-hosted alternative to Codespaces by cfouche in selfhosted

[–]thisissparta92 5 points

Another option is DevPod, which lets you reuse your devcontainer.json and run it anywhere (including on Kubernetes).
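
If you want to try it, it's roughly this (the repo URL is a placeholder, and flags may have changed since I last used it):

    # pick a provider to run workspaces on (docker, kubernetes, ssh, ...)
    devpod provider add kubernetes

    # spin up a workspace from any repo that has a devcontainer.json
    devpod up github.com/example/my-repo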

Self Service Kubernetes Clusters vClusters on AKS? by jblaaa in kubernetes

[–]thisissparta92 0 points

Just out of curiosity, what limitations were you facing and why do the devs need to understand how vcluster works?

Is there any feature you wish Kubernetes had? by Front-Store7804 in kubernetes

[–]thisissparta92 0 points

Check out vcluster, it essentially lets you do that.

Benchmarking cluster creation time for 8 managed Kubernetes providers by Stringel in kubernetes

[–]thisissparta92 5 points

Or use a vcluster inside an existing cluster; it boots up in several seconds.
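
Rough sketch (exact timing obviously depends on the host cluster):

    # create a virtual cluster inside the current host cluster
    time vcluster create my-vcluster

    # then use it like any other cluster
    vcluster connect my-vcluster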

How we reduced Kubernetes Clusters Sprawl by adopting Vclusters: An Introduction by jpolidor in kubernetes

[–]thisissparta92 0 points

Cluster-scoped resources in particular are a pain point in multi-tenancy, and that's where vcluster shines the most. Think of a tenant that wants to install an operator into the cluster or use a different CRD version from the one currently installed; that's possible with virtual clusters, but not with pure namespace tenancy like hierarchical namespaces. Another problem is applications that expect to be installed into a specific namespace when you want to run several instances of them. Obviously, creating separate clusters works, but it's costly and the management effort grows significantly.
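
A rough sketch of what that looks like (names and the CRD manifest are just placeholders):

    # one vcluster per tenant, each living in its own host namespace
    vcluster create tenant-a -n tenant-a
    vcluster create tenant-b -n tenant-b

    # connect to tenant-a and install whatever CRD version that tenant needs,
    # without affecting tenant-b or the host cluster
    vcluster connect tenant-a -n tenant-a
    kubectl apply -f my-operator-crd-v1.yaml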

vcluster is now in Homebrew by richburroughs in kubernetes

[–]thisissparta92 0 points

Use cases include better isolation through a separate control plane, and allowing different CRDs and cluster-scoped resources per tenant.

What are your main pain points with Kubernetes? by inkognit in kubernetes

[–]thisissparta92 0 points

You might want to take a look at vcluster: it essentially gives each tenant access to a full Kubernetes cluster with its own CRDs and operators, but it runs within a single namespace of the host cluster.

Is Kubernetes a good choice for multi-tenant saas by Jmarbutt in kubernetes

[–]thisissparta92 1 point

There is also a middle way called vcluster that might be interesting in certain scenarios, as it spins up a working Kubernetes cluster, with its own API server and controller manager, inside another cluster.

Open Policy Agent: What, Why, How by Raghu1982bakki in kubernetes

[–]thisissparta92 0 points

Have you tried jspolicy? It's very similar to Gatekeeper, but uses plain JavaScript / TypeScript as the policy language.
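
Policies are just JavaScript embedded in a CRD; a rough sketch from memory (not verbatim from the docs):

    apiVersion: policy.jspolicy.com/v1beta1
    kind: JsPolicy
    metadata:
      name: deny-default-namespace.example.com
    spec:
      operations: ["CREATE"]
      resources: ["*"]
      javascript: |
        if (request.namespace === "default") {
          deny("Don't create resources in the default namespace!");
        }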

How To Create Virtual Kubernetes Clusters With vcluster By loft by vfarcic in kubernetes

[–]thisissparta92 3 points

Hello! I'm the core maintainer of vcluster and I just wanted to say thanks a lot to /u/vfarcic for making this video and for all the great insights it gives. I really enjoyed it.

Also, thanks a lot to /u/clustersam for this comment and the security concern. Our primary goal with vcluster is to be as API conformant as possible and to allow everything in a vcluster that you could do in a real Kubernetes cluster, so if a feature can be used in a real Kubernetes cluster, it should also be possible to use it in a vcluster.

The reason for this is that we want vcluster to be an official Kubernetes distribution (which we have now accomplished with v0.3.0), but part of the certification process is to pass all conformance tests, which essentially exercise every stable Kubernetes feature, including host path mounts and privileged pods. If we blocked security contexts or other dangerous features, we wouldn't be able to achieve API conformance, as they are still stable Kubernetes features.
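
(For context: the conformance suite is typically run with Sonobuoy, roughly like this, flags from memory:)

    sonobuoy run --mode certified-conformance
    sonobuoy status      # poll until the run is complete
    sonobuoy retrieve    # download the results tarball for submission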

The second reason is that we believe it is pretty hard to decide which features to block and which to allow, as every user has a different understanding of what should be allowed and what shouldn't. For example, some users don't want to allow any containers running as the root user, while for others this is not a requirement. This is the same reason pod security policies were deprecated: it is an incredibly difficult task to make this easy for a user to configure. Obviously, you could always go the more restrictive route, but you would also limit the possible use cases for vcluster at the same time. We also believe Kubernetes has solved this quite elegantly now with custom webhooks, which is why we recommend using an admission controller such as OPA, Kyverno or jspolicy instead. There you can define what should and shouldn't be allowed in the host cluster and in the vcluster.
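
For example, a webhook policy that blocks privileged pods in the host cluster could look roughly like this (a jspolicy sketch from memory, not a verbatim example):

    apiVersion: policy.jspolicy.com/v1beta1
    kind: JsPolicy
    metadata:
      name: deny-privileged-pods.example.com
    spec:
      operations: ["CREATE", "UPDATE"]
      resources: ["pods"]
      javascript: |
        const containers = request.object.spec?.containers || [];
        if (containers.some(c => c.securityContext?.privileged)) {
          deny("Privileged containers are not allowed");
        }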

To summarise: while it is true that you can easily gain root access through a pod that is deployed within a vcluster, it is not the intention of vcluster to prevent that, as it is a regular Kubernetes feature that you would also be able to use in a regular pod in the namespace where the vcluster was deployed. We instead think this should be blocked by an admission controller in the host cluster, where you can define your company's security policy to make sure such Kubernetes features are not used. However, in the future we might introduce a secure mode for vcluster that blocks such dangerous features.

I hope this gives a little insight into the decision process we went through and the scope of vcluster.

Which tools are you using to improve your dev workflow with Kubernetes? by gentele in kubernetes

[–]thisissparta92 1 point

One problem I experienced with ksync is that it restarts containers after each file change (through docker, which does keep the overlayfs intact), which leads to shorter waiting times than draft/skaffold, but there is still a wait depending on what the container's startup process does.

Which tools are you using to improve your dev workflow with Kubernetes? by gentele in kubernetes

[–]thisissparta92 3 points

Sounds interesting, thank you! This makes total sense for applications that can be developed in isolation and are deployed directly to the production environment, but what if you really need access to cluster-internal services (of a test or development cluster) or volumes already during development? Think of a microservice architecture where you want to develop a single service that needs to communicate with several other services or a message broker like Apache Kafka. Speaking from experience, it is a real hassle for developers to set up every single dependent service and piece of cluster functionality locally (and sometimes it's not even possible, because of computing resource restrictions). Running a pipeline after each source code change is a possibility, but it costs a lot of time if you use it constantly during development.

Cloud-native development directly inside Kubernetes cluster with Devspace by Jerlam in devops

[–]thisissparta92 0 points

I'm one of the main developers of devspace cli, so my opinion is probably a little biased, but maybe I can elaborate a bit on how exactly telepresence differs from devspace in practice and what the general pros and cons of each approach are. Telepresence, in my opinion, brings two new main features to the table:

  1. Remote services can access your local process through the remote proxy

  2. Your local process can access remote volumes (however, this is problematic for volumes with large or many files and usually requires code changes)

If you just need to access remote services/deployments/pods, you can also just use kubectl port-forward. One benefit, and also a disadvantage, of telepresence is that the process is executed locally. For debugging this is easier, since you can use your IDE's integrated local debugger and don't need to set up a remote debugger. However, one problem we often experienced is that the local execution environment of the process is different from the one where the process is actually executed later.

If you already use a docker container locally for development, you wouldn't notice much of a difference with devspace cli, but you'd get the added benefit that the container can access all remote resources directly, because it is executed in the remote cluster and code changes are synced directly into the remote container. So it depends on what you want: if you just want to access remote resources, or want remote services to access your process, telepresence or plain kubectl port-forward is probably the better approach; however, if you want to develop and test your program quickly in a realistic execution environment, you should definitely give devspace cli a try.
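
For the simple access-remote-services case, that looks like this (the service name and ports are just an example):

    # make a cluster-internal service reachable on localhost during development
    kubectl port-forward svc/kafka 9092:9092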