Troubleshooting upgrade issues

Consult this topic for information about troubleshooting issues that might occur during a product upgrade.

Issue with the ingress controller

If you face an issue with the ingress controller during the upgrade, list the validating webhook configurations and delete the ingress-nginx admission webhook. Because ValidatingWebhookConfiguration is a cluster-scoped resource, no namespace flag is required:

$ kubectl get ValidatingWebhookConfiguration
NAME                      WEBHOOKS   AGE
ingress-nginx-admission   1          18h

$ kubectl delete ValidatingWebhookConfiguration ingress-nginx-admission
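The two commands above can be combined into a single guarded step. The following is a minimal sketch; the `find_ingress_webhook` helper is hypothetical (it only filters the listing), and the actual kubectl calls are shown as comments because they require a live cluster:

```shell
#!/bin/sh
# Hypothetical helper: read `kubectl get ValidatingWebhookConfiguration
# --no-headers` output on stdin and print only the ingress-nginx admission
# webhook name, if present.
find_ingress_webhook() {
  awk '$1 == "ingress-nginx-admission" {print $1}'
}

# Usage sketch (not executed here):
#   webhook=$(kubectl get ValidatingWebhookConfiguration --no-headers | find_ingress_webhook)
#   if [ -n "$webhook" ]; then
#     kubectl delete ValidatingWebhookConfiguration "$webhook"
#   fi
```

Guarding on the filtered name means the delete is skipped entirely when the webhook does not exist, so the script is safe to rerun.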


The Patroni PostgreSQL Server does not work

Scope

After you deploy the Patroni PostgreSQL server, the status of the primary Patroni PostgreSQL pod might not change to Ready, and the pod continues to look for the master pod. You can see these details in the pod logs.

This issue occurs because the previous data cleanup was not successful.


Workaround

Delete the Patroni endpoints (on Kubernetes) or the Patroni configmaps (on OpenShift). Perform the following steps:

Kubernetes

  1. Check whether Patroni endpoints exist. To get a list of endpoints, run the following command:

    kubectl get ep -n <NAMESPACE> | grep bmc

    Example output:

    postgres-bmc-pg-ha                                                10.42.21.182:5432                                                       30h
    postgres-bmc-pg-ha-config                                         <none>                                                                  30h
    postgres-bmc-pg-ha-pool                                           10.42.17.71:9999,10.42.20.108:9999                                      30h
    postgres-bmc-pg-ha-repl                                           10.42.11.118:5432,10.42.41.13:5432                                      30h
  2. If the endpoints exist, delete the Patroni endpoints. Run the following command:

    kubectl delete ep -n <NAMESPACE> postgres-bmc-pg-ha postgres-bmc-pg-ha-config postgres-bmc-pg-ha-pool postgres-bmc-pg-ha-repl
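Steps 1 and 2 above can be sketched as a single pipeline, assuming the Patroni endpoint names all contain bmc-pg-ha as in the example output. The `list_patroni_endpoints` helper is hypothetical, and `NAMESPACE` is a placeholder:

```shell
#!/bin/sh
# Hypothetical helper: read `kubectl get ep --no-headers` output on stdin
# and print only the names of the Patroni (bmc-pg-ha) endpoints.
list_patroni_endpoints() {
  awk '/bmc-pg-ha/ {print $1}'
}

# Usage sketch (not executed here); NAMESPACE is a placeholder:
#   kubectl get ep -n "$NAMESPACE" --no-headers | list_patroni_endpoints \
#     | xargs -r kubectl delete ep -n "$NAMESPACE"
```

The `xargs -r` flag skips the delete entirely when no Patroni endpoints are found, so the pipeline is safe to run even when the cleanup already succeeded.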

OpenShift

  1. Get the configmaps. In OpenShift, Patroni is configured not to use endpoints (the endpoints option is set to false), so it stores its cluster state in configmaps. Run the following command to get the configmaps:

    oc get cm | grep bmc

    Example output:

    postgres-bmc-pg-ha-config          0      17h
    postgres-bmc-pg-ha-leader          0      17h
    postgres-bmc-pg-ha-pgconfig        1      17h
    postgres-bmc-pg-ha-pgpoolconfig    1      17h


  2. Delete the configmaps. Run the following command:

    oc delete cm postgres-bmc-pg-ha-config postgres-bmc-pg-ha-leader postgres-bmc-pg-ha-pgconfig postgres-bmc-pg-ha-pgpoolconfig
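The two OpenShift steps can likewise be collapsed into one pipeline. A minimal sketch; the `list_patroni_configmaps` helper is hypothetical and filters the `oc get cm` listing for the postgres-bmc-pg-ha configmaps shown above:

```shell
#!/bin/sh
# Hypothetical helper: read `oc get cm --no-headers` output on stdin and
# print only the names of the postgres-bmc-pg-ha configmaps.
list_patroni_configmaps() {
  awk '/postgres-bmc-pg-ha/ {print $1}'
}

# Usage sketch (not executed here):
#   oc get cm --no-headers | list_patroni_configmaps | xargs -r oc delete cm
```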


Upgrade fails because of a Kafka exporter issue

Scope

The upgrade to version 22.4 fails because of a Kafka exporter issue. The following error is displayed:

UPGRADE FAILED: cannot patch "kafka-exporter" with kind Deployment: Deployment.apps "kafka-exporter" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"cluster-metrics", "app.kubernetes.io/instance":"kafka", "app.kubernetes.io/name":"kafka"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable


Workaround

  1. Get the postgres endpoint by running the following command:

    kubectl get ep -n <namespace> | grep postgres
  2. Delete the endpoint by running the following command:

    kubectl delete ep postgres-xxxxx -n <namespace>
  3. Obtain the Kafka exporter deployment details by running the following command:

    kubectl get deployment -n <namespace> | grep kafka-exporter
  4. Delete the Kafka exporter deployment by running the following command:

    kubectl delete deployment -n <namespace> kafka-exporter
  5. Run the upgrade process again.
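The deployment cleanup in the steps above can be sketched as a guarded script. The `find_kafka_exporter` helper is hypothetical; it matches the name case-insensitively, and `NAMESPACE` is a placeholder:

```shell
#!/bin/sh
# Hypothetical helper: read `kubectl get deployment --no-headers` output on
# stdin and print the kafka-exporter deployment name (matched
# case-insensitively).
find_kafka_exporter() {
  awk 'tolower($1) ~ /kafka-exporter/ {print $1}'
}

# Usage sketch (not executed here); NAMESPACE is a placeholder:
#   dep=$(kubectl get deployment -n "$NAMESPACE" --no-headers | find_kafka_exporter)
#   if [ -n "$dep" ]; then
#     kubectl delete deployment -n "$NAMESPACE" "$dep"
#   fi
#   # Rerun the upgrade after the deployment is gone.
```

Deleting the deployment works around the error because a Deployment's spec.selector is immutable, so the upgrade can only succeed by recreating the deployment with the new selector.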

