When the Opstrace instance or parts of it appear to not be healthy then debugging should start with getting insights about the underlying Kubernetes cluster and its deployments.
Make sure to use the same AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY that were used for creating the Opstrace instance.
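A minimal sketch of providing those credentials to the AWS CLI via environment variables; the values shown are placeholders, not real keys:

```bash
# Placeholder values: substitute the credentials that created the instance.
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
```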
Point kubectl at the EKS cluster underlying the Opstrace instance:

```bash
aws eks update-kubeconfig --name <clustername> --region <awsregion>
```
For example:

```bash
aws eks update-kubeconfig --name testcluster --region us-west-2
```
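To confirm that kubectl now talks to the right cluster, a quick sanity check is to list the nodes:

```bash
# Should list the EKS worker nodes of the Opstrace instance.
kubectl get nodes
```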
A good first step is to get an overview of all Kubernetes deployments and individual container states. This can be obtained with the following command:
```bash
kubectl describe all --all-namespaces > describe_all.out
```
This reveals, for example, when a container is in a crash loop, or when it never started in the first place because of an error while pulling the corresponding container image.
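Since the `describe` output is large, it can help to first scan for pods that are not healthy. A minimal sketch, assuming a POSIX shell with grep available:

```bash
# List all pods, then filter out those in the Running or Completed state,
# leaving only pods that are pending, crash-looping, or otherwise unhealthy.
kubectl get pods --all-namespaces | grep -vE 'Running|Completed'
```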
If the Opstrace controller (a deployment running in the Opstrace instance) is suspected to have run into a problem, fetch and inspect its logs:
```bash
kubectl logs deployment/opstrace-controller --all-containers=true --namespace=kube-system > controller.log
```
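When the controller keeps restarting, the logs of the current container may not show the original failure. Two common variations, using standard kubectl flags and no Opstrace-specific assumptions:

```bash
# Stream the controller logs live while reproducing the problem.
kubectl logs deployment/opstrace-controller --namespace=kube-system --follow

# Fetch the logs of the previously terminated container instance;
# useful when the controller is in a crash loop.
kubectl logs deployment/opstrace-controller --namespace=kube-system --previous
```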