Troubleshooting log collection from Kubernetes
Something went wrong error is displayed when you click Create & Download
Resolution
Check the pod logs for the tdc-controller-service service and search for any exceptions.
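The check above can be sketched as a short shell sequence. The namespace is not stated in this doc, so the default namespace is assumed; add `-n <namespace>` if the service runs elsewhere.

```shell
# Locate the tdc-controller-service pod and scan its logs for exceptions.
command -v kubectl >/dev/null || { echo "kubectl not found"; exit 0; }
POD=$(kubectl get pod --no-headers | awk '/tdc-controller-service/ {print $1; exit}')
kubectl logs "$POD" | grep -iE 'exception|error' | tail -n 20
```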
Error while running the command to create the bmc-logging namespace and other configurations
Resolution
Ensure the following:
- The downloaded file is present in the directory from where you are running the command.
- The command is run from the controller and not from a node.
- You have the appropriate privileges to create Kubernetes entities.
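The three pre-checks above can be sketched as follows; the file name bmc-logging.yaml is a hypothetical placeholder for your downloaded file.

```shell
# Check the downloaded file is in the current directory.
FILE=bmc-logging.yaml
[ -f "$FILE" ] && echo "file present" || echo "file missing: cd to the download directory"
# Run on the controller; confirm you may create the required objects.
if command -v kubectl >/dev/null; then
  kubectl auth can-i create namespaces
fi
```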
Invalid image error after applying the .yaml file configurations
Resolution
Ensure the following:
- A valid image registry URL value is present at the containers : env : image path.
- The PARAMETERS.docker_container_registry_path string is not present in the daemonset definition in the .yaml file.
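A quick way to verify both conditions is to grep the file before applying it; bmc-logging.yaml is an assumed file name.

```shell
# Fail loudly if the registry-path placeholder was never replaced.
YAML=bmc-logging.yaml
if grep -q 'PARAMETERS.docker_container_registry_path' "$YAML" 2>/dev/null; then
  echo "placeholder still present: replace it with a real registry URL"
fi
# Show the image lines; each should carry a valid registry URL.
grep -n 'image:' "$YAML" 2>/dev/null || true
```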
Unable to pull image error while applying the .yaml file configurations
Resolution
- If you have not given a value in the Docker Registry Path field, ensure that the URL is entered in the .yaml file at the containers : env : image path.
- Ensure that a valid image can be pulled by using the connector image repository URL, and that the URL is entered in the .yaml file at the containers : env : image path.
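One way to test the pull manually is to extract the image reference from the file and pull it outside Kubernetes. The file name and the use of docker are assumptions; substitute your container runtime.

```shell
# Pull the connector image by hand to confirm the URL is reachable.
YAML=bmc-logging.yaml
IMAGE=$(grep -m1 'image:' "$YAML" 2>/dev/null | sed 's/.*image:[[:space:]]*//')
echo "image reference: $IMAGE"
command -v docker >/dev/null && docker pull "$IMAGE" || true
```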
All pods are in error status
The status of all pods is Error after you run the following command:
kubectl get pod -n bmc-logging
Resolution
- Validate the ConfigMap (name: bmc-config-map) definition in the .yaml file and make sure it is valid YAML.
- Validate that the value of the ConfigMap definition follows the fluentd configuration syntax.
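Both validations can be done client-side before re-applying the file; bmc-logging.yaml is an assumed file name.

```shell
# Confirm the ConfigMap definition is present in the file.
YAML=bmc-logging.yaml
grep -n 'name: bmc-config-map' "$YAML" 2>/dev/null || echo "ConfigMap definition not found"
# A client-side dry run catches YAML syntax errors without touching the cluster.
command -v kubectl >/dev/null && kubectl apply --dry-run=client -f "$YAML" || true
```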
Error/CrashLoopBackOff status of some pods
Resolution
- Run the following command:
kubectl get pod -n bmc-logging -o wide
- Find the nodes to which the Error/CrashLoopBackOff pods are assigned.
- Check the health of those nodes.
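The three steps above can be combined into one pipeline. Column positions follow kubectl's default `get pod -o wide` output (STATUS in column 3, NODE in column 7).

```shell
# List the nodes hosting failing pods, then inspect each node's conditions.
command -v kubectl >/dev/null || { echo "kubectl not found"; exit 0; }
NODES=$(kubectl get pod -n bmc-logging -o wide --no-headers \
  | awk '$3 == "Error" || $3 == "CrashLoopBackOff" {print $7}' | sort -u)
for n in $NODES; do
  kubectl describe node "$n" | grep -A 6 '^Conditions:'
done
```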
Collected logs are unavailable in Explorer
Resolution
- Find the nodes to which the service pods are assigned by running the following command:
kubectl get pod -o wide | grep <service name>
- For each node, get the connector pod by running the following command:
kubectl get pod -n bmc-logging -o wide
- Note the connector pods running on the node where service pods are also running.
- For each of these connector pods, run the following command:
kubectl exec -it <daemonset pod name> -n bmc-logging -- bash
- Go to the /fluentd/log path, and run the following command:
cat fluent.log
- In the log file, search for the string "following tail of /var/log/containers/*<name of the service pod>". If the string is not present, the connector could not find the logs of the service on the node.
- Ensure that the service pod logs are stored on the node.
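The steps above can be sketched as a single loop that scans each connector pod's fluent.log for the "following tail" entry. SERVICE is a hypothetical placeholder; set it to your service pod's name.

```shell
# Check each connector pod for a tail entry matching the service pod.
SERVICE=my-service
command -v kubectl >/dev/null || { echo "kubectl not found"; exit 0; }
for pod in $(kubectl get pod -n bmc-logging --no-headers | awk '{print $1}'); do
  echo "== $pod =="
  kubectl exec -n bmc-logging "$pod" -- \
    grep "following tail of /var/log/containers/" /fluentd/log/fluent.log \
    | grep "$SERVICE" || echo "no tail entry for $SERVICE in $pod"
done
```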