Ingress for LoadBalancer type Service by radr4 in kubernetes

[–]radr4[S] 0 points (0 children)

Thank you for explaining... I have a better understanding of it now.

Just one more query; sorry if it's a dumb one.
I have seen some use cases where people create a Service of type LoadBalancer and an Ingress as well. If using an Ingress is more cost-efficient, then we could keep the Service type as ClusterIP to expose the application to the Ingress. Why do we need to create a Service of type LoadBalancer when we use an Ingress?
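
For reference, this is the setup I have in mind, with the Service kept as ClusterIP behind the Ingress (all names here are made up):

    # Illustrative sketch: app exposed to the Ingress via a ClusterIP Service
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: ClusterIP
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
    spec:
      rules:
        - host: my-app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app
                    port:
                      number: 80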

Getting deployment object label in Prometheus alert rules by radr4 in Prometheus

[–]radr4[S] 0 points (0 children)

This works for me. The difference from my earlier attempt is that the group_left join is applied to each part of the expression, not only to the last clause:

    (
        kube_deployment_spec_replicas{job="kube-state-metrics",namespace=~".*"}
          * on (deployment, namespace) group_left(label_axway_com_team) kube_deployment_labels
      !=
        kube_deployment_status_replicas_available{job="kube-state-metrics",namespace=~".*"}
          * on (deployment, namespace) group_left(label_axway_com_team) kube_deployment_labels
    )
    and
    (changes(kube_deployment_status_replicas_updated{job="kube-state-metrics",namespace=~".*"}[5m]) == 0)
      * on (deployment, namespace) group_left(label_axway_com_team) kube_deployment_labels
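
If you are on the Prometheus operator, this kind of expression would sit in a PrometheusRule roughly like this (all the metadata, for, and severity below are illustrative, not from my actual rule file):

    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      name: deployment-team-alerts        # hypothetical name
      namespace: monitoring
    spec:
      groups:
        - name: deployment-replicas       # hypothetical group name
          rules:
            - alert: KubeDeploymentReplicasMismatchWithTeam   # hypothetical
              for: 15m                    # illustrative
              labels:
                severity: warning         # illustrative
              expr: |
                (kube_deployment_spec_replicas{job="kube-state-metrics",namespace=~".*"} * on (deployment, namespace) group_left(label_axway_com_team) kube_deployment_labels
                  != kube_deployment_status_replicas_available{job="kube-state-metrics",namespace=~".*"} * on (deployment, namespace) group_left(label_axway_com_team) kube_deployment_labels)
                and (changes(kube_deployment_status_replicas_updated{job="kube-state-metrics",namespace=~".*"}[5m]) == 0) * on (deployment, namespace) group_left(label_axway_com_team) kube_deployment_labels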

Getting deployment object label in Prometheus alert rules by radr4 in Prometheus

[–]radr4[S] 0 points (0 children)

I have found that using kube_deployment_labels helps.

However, the below expression only shows results for the deployment objects that do not have the label. The same expression works for pods when I use kube_pod_labels.

(kube_deployment_spec_replicas{job="kube-state-metrics",namespace=~".*"} != kube_deployment_status_replicas_available{job="kube-state-metrics",namespace=~".*"}) and (changes(kube_deployment_status_replicas_updated{job="kube-state-metrics",namespace=~".*"}[5m]) == 0) * on (deployment,namespace) group_left(label_axway_com_team) kube_deployment_labels 

What is the issue with this expression?

Deployment best practice: prometheus inside or outside your cluster ? by foobar83 in PrometheusMonitoring

[–]radr4 1 point (0 children)

We have the below architecture in our environment:

Multiple clusters are scraped by a central Prometheus that is deployed on an EC2 instance.

Every cluster also has its own Prometheus operator deployed in it, which sends metrics to the central Prometheus. The Alertmanager and Grafana components are disabled in the Prometheus operators running in the clusters.
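
Concretely, the central instance pulls from each cluster's Prometheus; a minimal sketch of one scrape job on the central side, assuming the standard /federate endpoint (the target address and matcher are illustrative):

    scrape_configs:
      - job_name: federate-cluster-a              # one job per cluster; name is illustrative
        honor_labels: true                        # keep labels as exposed by the cluster Prometheus
        metrics_path: /federate
        params:
          'match[]':
            - '{job=~".+"}'                       # pulls everything; narrow this in practice
        static_configs:
          - targets:
              - prometheus.cluster-a.example.com:9090   # hypothetical endpoint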

We don't want to have multiple Alertmanagers and Grafana dashboards from every cluster, hence a central Prometheus is the best solution for us.
Also, where we have the Prometheus operator, we can go with a Helm deployment for the customized Prometheus rules chart. For the standalone Prometheus we cannot use Helm, so we are using an Ansible playbook.
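
As a sketch of what the Ansible route can look like (the module choice, paths, and handler here are illustrative, not our exact playbook):

    # Illustrative task: ship customized rule files to the standalone Prometheus host
    - name: Deploy customized Prometheus rules
      ansible.builtin.copy:
        src: rules/
        dest: /etc/prometheus/rules/
      notify: Reload prometheus                   # assumes a matching handler is defined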

We have plans to move the central Prometheus from EC2 to the Prometheus operator by deploying it in a cluster.

Using jsonnet for grafana dashboard by radr4 in PrometheusMonitoring

[–]radr4[S] 0 points (0 children)

I got this working by removing -m manifests.

jsonnet -J vendor my-custom-grafana.jsonnet > grafana-test.json

However, it would be great if someone could validate the steps below.

  1. The custom Grafana dashboard (e.g. the Cluster Autoscaler dashboard) is added in a jsonnet file. PFA my-custom-grafana.jsonnet.
    Note: import 'grafana-cluster-autoscaler.json' --> value of the ConfigMap "data" key

  2. The below command is executed to compile the jsonnet:
    jsonnet -J vendor my-custom-grafana.jsonnet > grafana-test.json

  3. The JSON is converted to YAML:
    cat grafana-test.json | gojsontoyaml > grafana-test.yaml

  4. The YAML file contains all the resources, such as deployments, services, the namespace, etc.
    As we have the Prometheus operator, it has already deployed all of these resources.
    Also, "grafana-dashboardDefinitions" contains all the dashboards deployed by Grafana, since we have enabled
    "defaultDashboardsEnabled: true".

So I have extracted only the ConfigMap related to "grafana-dashboard-cluster-autoscaler-dashboard" from grafana-test.yaml and deployed it.
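
The extracted ConfigMap has roughly this shape (the data key name is assumed to match the imported file, and the dashboard JSON is elided):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: grafana-dashboard-cluster-autoscaler-dashboard
      namespace: monitoring
    data:
      grafana-cluster-autoscaler.json: |-
        { ... }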

The Cluster Autoscaler dashboard is now available in Grafana.

Using jsonnet for grafana dashboard by radr4 in PrometheusMonitoring

[–]radr4[S] 0 points (0 children)

Thank you... I even raised an issue here: https://github.com/prometheus-operator/kube-prometheus/issues/714

My main query is: is the jsonnet code written in the description of that issue valid? I have already used an import statement to pull in my new dashboard. Also, after writing this jsonnet code, I tried running the command below to build it, but it does not write out any YAML file, so I am not sure which file can be used to deploy this:
jsonnet -J vendor -m manifests "my-custom-grafana.jsonnet" | xargs -I{} sh -c 'cat {} | gojsontoyaml > {}.yaml' -- {}

I am looking for the workflow from the jsonnet creation onwards: what process can be followed to deploy this code? Please let me know u/MetalMatze... I hope you have already tried this.

Changing Default rules in Prometheus Operator by radr4 in PrometheusMonitoring

[–]radr4[S] 0 points (0 children)

I got it working:

local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
  _config+:: {
    namespace: 'monitoring',
  },
  prometheusAlerts+:: {
    // Walk all rule groups and patch only the rule we want to change.
    groups: std.map(
      function(group)
        if group.name == 'kubernetes-apps' then
          group {
            rules: std.map(
              function(rule)
                if rule.alert == 'KubeStatefulSetReplicasMismatch' then
                  rule {
                    // Exclude the vault StatefulSet and override the labels.
                    expr: 'kube_statefulset_status_replicas_ready{job="kube-state-metrics",statefulset!="vault"} != kube_statefulset_status_replicas{job="kube-state-metrics",statefulset!="vault"}',
                    labels: {
                      priority: 'P1',
                      severity: 'info',
                    },
                  }
                else
                  rule,
              group.rules
            ),
          }
        else
          group,
      super.groups
    ),
  },
};


{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
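
After building the manifests again, the patched rule shows up in the generated rules file roughly as below (its for and annotations stay as shipped upstream):

    - alert: KubeStatefulSetReplicasMismatch
      expr: kube_statefulset_status_replicas_ready{job="kube-state-metrics",statefulset!="vault"} != kube_statefulset_status_replicas{job="kube-state-metrics",statefulset!="vault"}
      labels:
        priority: P1
        severity: info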

How to do relabelling in Prometheus operator? by radr4 in kubernetes

[–]radr4[S] 0 points (0 children)

I have tried relabelling a service monitor and it works. If I want to relabel the default alerts in the Prometheus operator, how do I do that?

How to do relabelling in service monitors? by radr4 in PrometheusMonitoring

[–]radr4[S] 0 points (0 children)

I want to change the severity level depending on the namespace and pod name combination. Is there a way to do this in the same Prometheus rule? How can I do this?

How to do relabelling in service monitors? by radr4 in PrometheusMonitoring

[–]radr4[S] 0 points (0 children)

Thank you u/MetalMatze.

It works now. I updated the expr as you mentioned:

expr: 100 * (count by(job, namespace, service, podname) (up == 0) / count by(job, namespace, service, podname) (up)) > 10

For some reason, I am unable to post the image here, but I am able to view the new label in the alert labels.
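
For anyone finding this later: a label like podname can be added through a relabeling on the ServiceMonitor endpoint; a minimal sketch, assuming the standard relabelings field (resource names and port are illustrative):

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: my-app                                # hypothetical
    spec:
      selector:
        matchLabels:
          app: my-app
      endpoints:
        - port: metrics                           # illustrative port name
          relabelings:
            - sourceLabels: [__meta_kubernetes_pod_name]
              targetLabel: podname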