Troubleshooting log collection from Kubernetes


Something went wrong error is displayed when you click Create & Download

You get the following error when you click the Create & Download button:

Something went wrong

Issue symptoms

This issue might occur when there is an error in saving the integration.

Issue scope

This issue might affect all integrations.

Resolution

Check the pod logs for the tdc-controller-service service and search for any exceptions.
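
For example, assuming the tdc-controller-service pods run in your cluster, you can locate the pods and search their logs for exceptions with commands similar to the following (pod and namespace names might differ in your environment):

kubectl get pods --all-namespaces | grep tdc-controller-service
kubectl logs <tdc-controller-service_pod_name> -n <namespace> | grep -i exception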

Error occurs when you run the command to create the bmc-logging namespace and other configurations

You get an error after running one of the following commands:

kubectl apply -f <downloaded_configuration_yaml_file>

kubectl create -f <downloaded_configuration_yaml_file>

Issue symptoms

This issue might occur if you run the command from the wrong directory or if you do not have the appropriate permissions to run it.

Issue scope

This issue might affect all integrations.

Resolution

Confirm the following conditions:

  • The downloaded file is present in the directory from which you are running the command.
  • The command is run from the controller and not from a node.
  • You have the appropriate privileges to create Kubernetes entities (see the example checks after this list).
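
For example, assuming the downloaded file is in your current working directory, you can confirm that the file exists and that your account can create the required objects with commands similar to the following:

ls <downloaded_configuration_yaml_file>
kubectl auth can-i create namespaces
kubectl auth can-i create daemonsets -n bmc-logging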

Invalid image error occurs after you apply the .yaml file configurations

You get the Invalid image error after running the following command:

kubectl apply -f <downloaded_configuration_yaml_file>

Issue symptoms

The connector is not running.

Issue scope

This issue might affect all integrations.

Resolution

Confirm the following conditions:

  • A valid image registry URL value is present at the containers : env : image path (see the sketch after this list).
  • The PARAMETERS.docker_container_registry_path string is not present in the daemonset definition in the .yaml file.
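
As a minimal sketch, the image entry in the daemonset definition should contain your registry URL instead of the placeholder string. The container name, registry host, image name, and tag below are illustrative only, and the exact field layout in the downloaded file might differ:

containers:
  - name: bmc-logging-connector
    image: <your_registry_host>/<connector_image>:<tag>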

Unable to pull image error occurs after you apply the .yaml file configurations

You get the Unable to pull image error after running the following command:

kubectl apply -f <downloaded_configuration_yaml_file>

Issue symptoms

The connector is not running.

Issue scope

This issue might affect all integrations.

Resolution

  • If you have not provided a value in the Docker Registry Path field, replace the PARAMETERS.docker_container_registry_path string at the containers : env : image path with the Docker registry URL.
  • Ensure that a valid image can be pulled by using the connector image repository URL and that the URL is entered in the .yaml file at the containers : env : image path (see the example pull command after this list).
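
For example, assuming Docker is installed on the host, you can verify that the image is reachable by pulling it manually with a command similar to the following (replace the placeholders with the registry URL and image tag from your .yaml file):

docker pull <registry_URL>/<connector_image>:<tag>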

All pods are in error status

All pods show an error status after you run the following command:

kubectl get pod -n bmc-logging

Issue symptoms

No logs are collected, and the integration status is Configured.

Issue scope

This issue might affect all integrations with incorrect configurations.

Resolution

  • Validate the .yaml file for the ConfigMap (name: bmc-config-map) definition. Make sure it is a valid .yaml file.
  • Validate that the value of the ConfigMap definition follows the fluentd configuration syntax (see the example commands after this list).
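
For example, assuming the ConfigMap was created in the bmc-logging namespace, you can inspect it with the first command below; if fluentd is installed locally, you can also validate an extracted configuration file with a dry run:

kubectl get configmap bmc-config-map -n bmc-logging -o yaml
fluentd --dry-run -c <extracted_fluentd_configuration_file>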

Error or CrashLoopBackOff status for some pods

Multiple pods show an Error or CrashLoopBackOff status after you run the following command:

kubectl get pod -n bmc-logging

Issue symptoms

Not all logs are collected.

Issue scope

This issue might affect all integrations configured for a cluster.

Resolution

  1. Run the following command:
    kubectl get pod -n bmc-logging -o wide
  2. Find the nodes to which the Error/CrashLoopBackOff pods are assigned.
  3. Check the health of those nodes (see the example commands after these steps).
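
For example, you can review pod events and node conditions with commands similar to the following, where the pod and node names come from the output of the previous steps:

kubectl describe pod <pod_name> -n bmc-logging
kubectl describe node <node_name>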

Collected logs are unavailable in Explorer

No logs are collected or shown in Explorer.

Issue symptom

No logs are present in the Explorer.

Issue scope

This issue might affect all integrations.

Resolution

  1. Find the nodes to which the service pods are assigned by running the following command:
    kubectl get pod -o wide | grep <service name> 
  2. For each node, get the connector pod by running the following command: 
    kubectl get pod -n bmc-logging -o wide
  3. Note the connector pods running on the node where service pods are also running. 
  4. For each of these connector pods, run the following command:
    kubectl exec -it <daemonset pod name> -n bmc-logging -- bash
  5. Go to the /fluentd/log path, and run the following command:
    cat fluent.log
  6. In the log file, search for the string "following tail of /var/log/containers/*<name of the service pod>".
    If the string is not present, the connector cannot find the logs of the service from the node.
  7. Ensure that the service pod logs are stored on the node (see the example commands after these steps).
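
For example, assuming the node stores container logs in the default /var/log/containers location, you can search the connector log from inside the pod and check for the service pod logs on the node with commands similar to the following:

grep "following tail of /var/log/containers" /fluentd/log/fluent.log
ls /var/log/containers/ | grep <name of the service pod>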

 
