Important

This documentation supports the releases of BMC Helix Intelligent Integrations and BMC Helix Developer Tools 22.2.00 and their patches. To view the documentation for earlier releases, see BMC Helix AIOps.

Troubleshooting log collection from Kubernetes

A "Something went wrong" error is displayed when you click Create & Download

Resolution

Check the logs of the tdc-controller-service pods and search for any exceptions.
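A minimal way to inspect those logs, assuming the service runs in the default namespace (adjust the namespace and pod name for your deployment):

```shell
# List the tdc-controller-service pods (namespace is an assumption; adjust -n)
kubectl get pods | grep tdc-controller-service

# Dump the logs of one pod and show a few lines of context around each exception
kubectl logs <tdc-controller-service pod name> | grep -i -A 5 "exception"
```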

Error while running the command to create the bmc-logging namespace and other configurations

Resolution

Ensure the following:

  • The downloaded file is present in the directory from where you are running the command.
  • The command is run on the controller and not on a worker node.
  • You have the appropriate privileges to create Kubernetes entities.
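The checks above can be sketched with the following commands; the file name bmc-logging.yaml is an assumption, so substitute the name of the file you downloaded:

```shell
# Verify the downloaded file is present in the current directory
ls -l bmc-logging.yaml

# Verify that your user may create the Kubernetes entities the file defines
kubectl auth can-i create namespace
kubectl auth can-i create daemonset -n bmc-logging
kubectl auth can-i create configmap -n bmc-logging
```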

Invalid image error after applying the .yaml file configurations

Resolution

Ensure the following:

  • A valid image registry URL is present at the containers : env : image path in the .yaml file.
  • The PARAMETERS.docker_container_registry_path string is not present in the daemonset definition in the .yaml file.
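A quick check for the unreplaced placeholder can be run before applying the file; bmc-logging.yaml is an assumed file name:

```shell
# Fail fast if the registry-path placeholder was never substituted
if grep -q "PARAMETERS.docker_container_registry_path" bmc-logging.yaml; then
  echo "placeholder still present: set the registry path before applying"
else
  echo "no unreplaced registry-path placeholder found"
fi
```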

Unable to pull image error while applying the .yaml file configurations

Resolution

  • If you did not specify a value in the Docker Registry Path field, ensure that the image URL is entered in the .yaml file at the containers : env : image path.
  • Ensure that a valid image can be pulled from the connector image repository URL and that the same URL is entered in the .yaml file at the containers : env : image path.
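To confirm the registry URL is usable, you can attempt the pull manually and inspect the pod events; the image reference below is a placeholder taken from your .yaml file:

```shell
# Try pulling the image directly with the URL from the .yaml file
docker pull <registry URL>/<image>:<tag>

# If the pull fails only inside the cluster, review the pod events for details
kubectl describe pod <pod name> -n bmc-logging
```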

All pods are in error status

The status of all the pods is Error after you run the following command:

kubectl get pod -n bmc-logging

Resolution

  • Validate the ConfigMap (name: bmc-config-map) definition in the .yaml file and make sure it is valid YAML.
  • Validate that the value of the ConfigMap definition follows the fluentd configuration syntax.
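Both checks can be approximated from the command line; bmc-logging.yaml is an assumed file name:

```shell
# Client-side validation of the manifest without applying it
kubectl apply --dry-run=client -f bmc-logging.yaml

# Inspect the ConfigMap as stored in the cluster and review the
# embedded fluentd configuration for syntax problems
kubectl get configmap bmc-config-map -n bmc-logging -o yaml
```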

Error/CrashLoopBackOff status of some pods

Resolution

  1. Run the following command:
    kubectl get pod -n bmc-logging -o wide
  2. Find the nodes to which the Error/CrashLoopBackOff pods are assigned.
  3. Check the health of those nodes.
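The steps above can be sketched as follows; the node name is a placeholder taken from the output of the first command:

```shell
# Map pods to nodes and note the nodes hosting Error/CrashLoopBackOff pods
kubectl get pod -n bmc-logging -o wide

# Check node conditions (Ready, MemoryPressure, DiskPressure) and recent events
kubectl describe node <node name>
kubectl get events --field-selector involvedObject.name=<node name>
```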

Collected logs are unavailable in Explorer

Resolution

  1. Find the nodes to which the service pods are assigned by running the following command:
    kubectl get pod -o wide | grep <service name> 
  2. For each node, get the connector pod by running the following command: 
    kubectl get pod -n bmc-logging -o wide
  3. Note the connector pods running on the node where service pods are also running. 
  4. For each of these connector pods, run the following command:
    kubectl exec -it <daemonset pod name> -n bmc-logging -- bash
  5. Go to the /fluentd/log path, and run the following command:
    cat fluent.log
  6. In the log file, search for the string "following tail of /var/log/containers/*<name of the service pod>".
    If the string is not present, the connector could not find the logs of the service on that node.
  7. Ensure that the service pod logs are stored on the node.
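The per-pod checks in steps 4 through 6 can be sketched as a single loop; the service pod name is a placeholder:

```shell
# For each connector pod, check whether fluentd picked up the
# service's container logs on that node
for pod in $(kubectl get pod -n bmc-logging -o name); do
  echo "== $pod =="
  kubectl exec -n bmc-logging "$pod" -- \
    grep "following tail of /var/log/containers/" /fluentd/log/fluent.log \
    | grep "<service pod name>" || echo "no tail entry found in $pod"
done
```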