# Common Issues in Container Management
This page lists some common issues that may be encountered in container management (Kpanda) and provides convenient troubleshooting solutions.
- Permission issues in the container management and global management modules
- Helm installation:
    - Helm application installation failed with an "OOMKilled" message
    - Unable to pull the kpanda-shell image during Helm application installation
    - The Helm chart UI does not display the most recently uploaded chart in the Helm repo
    - Stuck in the installing state and unable to remove the application for reinstallation after a Helm installation failure
- Scheduling exceptions after removing node affinity and other scheduling policies
- Application backup
- Why do elastic scaling records still exist after uninstalling VPA, HPA, and CronHPA?
- Why does the console open abnormally in low-version clusters?
- Creating and integrating clusters
## Permission Issues
Regarding permission issues in the container management and global management modules, users often ask why they can see a certain cluster or why they cannot. The following three checks help investigate such permission issues:
- Permissions in the container management module are divided into cluster permissions and namespace permissions. If a user is bound to them, they can view the corresponding clusters and resources. For specific permission details, refer to the Cluster Permission Description.
- User authorization in the global management module: use the admin account to go to Global Management -> User and Access Control -> Users and find the corresponding user. If the Authorized User Groups tab contains roles with container management permissions, such as Admin or Kpanda Owner, the user can see all clusters even if no cluster or namespace permissions are bound in container management. Refer to the User Authorization documentation.
- Workspace binding in the global management module: log in to Global Management -> Workspaces and Hierarchies to see your authorized workspaces, then click the workspace name.

    - If the workspace is authorized to you directly, you can see your account on the Authorization tab; then check the Resource Group and Shared Resources tabs. If the resource group is bound to a namespace, or shared resources are bound to a cluster, your account can see the corresponding cluster.
    - If you are granted a global management-related role instead, you will not see your account on the Authorization tab, nor will you see the cluster resources bound to the workspace in the container management module.
## Issues with Helm Installation
- Helm application installation failed with an "OOMKilled" message

    Container management automatically creates a Job that performs the application installation. In v0.6.0, improper Job resource settings caused OOM and installations failed. The bug is fixed in v0.6.1, but upgrading only takes effect for newly created or integrated clusters; existing clusters must be adjusted manually, as described below.
Click to view how to adjust the settings

- The following commands are executed in the global service cluster.
- Find the corresponding cluster. This article uses skoala-dev as an example and retrieves its skoala-dev-setting ConfigMap:

    ```yaml
    kubectl get cm -n kpanda-system skoala-dev-setting -o yaml
    apiVersion: v1
    data:
      clusterSetting: '{"plugins":[{"name":1,"intelligent_detection":true},{"name":2,"enabled":true,"intelligent_detection":true},{"name":3},{"name":6,"intelligent_detection":true},{"name":7,"intelligent_detection":true},{"name":8,"intelligent_detection":true},{"name":9,"intelligent_detection":true}],"network":[{"name":4,"enabled":true,"intelligent_detection":true},{"name":5,"intelligent_detection":true},{"name":10},{"name":11}],"addon_setting":{"helm_operation_history_limit":100,"helm_repo_refresh_interval":600,"helm_operation_base_image":"release-ci.daocloud.io/kpanda/kpanda-shell:v0.0.6","helm_operation_job_template_resources":{"limits":{"cpu":"50m","memory":"120Mi"},"requests":{"cpu":"50m","memory":"120Mi"}}},"clusterlcm_setting":{"enable_deletion_protection":true},"etcd_backup_restore_setting":{"base_image":"release.daocloud.io/kpanda/etcdbrctl:v0.22.0"}}'
    kind: ConfigMap
    metadata:
      labels:
        kpanda.io/cluster-plugins: ""
      name: skoala-dev-setting
      namespace: kpanda-system
      ownerReferences:
      - apiVersion: cluster.kpanda.io/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: Cluster
        name: skoala-dev
        uid: f916e461-8b6d-47e4-906e-5e807bfe63d4
      uid: 8a25dfa9-ef32-46b4-bc36-b37b775a9632
    ```

- Modify clusterSetting -> helm_operation_job_template_resources to appropriate values. The values used by v0.6.1 are cpu: 100m and memory: 400Mi; a sketch of the resulting fragment follows this list.
- The change takes effect once the ConfigMap is updated.
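For reference, a minimal sketch of what the helm_operation_job_template_resources fragment inside clusterSetting looks like after the change (requests are shown equal to limits, mirroring the default above; adjust if your environment needs different requests):

```json
"helm_operation_job_template_resources": {
  "limits":   { "cpu": "100m", "memory": "400Mi" },
  "requests": { "cpu": "100m", "memory": "400Mi" }
}
```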
- Unable to pull the kpanda-shell image during Helm application installation

    Clusters integrated in an offline environment often fail to pull the kpanda-shell image when installing Helm applications.

    In this case, go to Cluster Operations -> Cluster Settings, open the Advanced Configuration tab, and change the Helm operation base image to a kpanda-shell image address that the cluster can pull normally.
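    If the page is not reachable, the same value appears to live in the cluster's `<cluster-name>-setting` ConfigMap shown in the previous issue, under addon_setting (this mapping is an assumption based on the matching field name; other addon_setting fields are omitted here):

    ```json
    "addon_setting": {
      "helm_operation_base_image": "release-ci.daocloud.io/kpanda/kpanda-shell:v0.0.6"
    }
    ```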
- The Helm chart UI does not display the most recently uploaded chart in the Helm repo

    In this case, simply refresh the corresponding Helm repository on the Helm repository page.
- Stuck in the installing state and unable to remove the application for reinstallation after a Helm installation failure

    In this case, go to the Custom Resources page, find the helmreleases.helm.kpanda.io CRD, and delete the corresponding helmreleases CR; the application can then be reinstalled.
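    The same cleanup can be done with kubectl; a minimal sketch, assuming access to the target cluster (replace the placeholder release name and namespace):

    ```bash
    # List the helmreleases CRs created by container management
    kubectl get helmreleases.helm.kpanda.io -A

    # Delete the stuck release so the application can be installed again
    kubectl delete helmreleases.helm.kpanda.io <release-name> -n <namespace>
    ```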
## Scheduling Issues
After removing node affinity and other scheduling policies from a workload, scheduling exceptions occur.

This is usually because the policies were not deleted completely. Click Edit and delete all remaining policies.
## Application Backup Issues
Kcoral is the development codename of the application backup module.
- What is the logic behind Kcoral's detection of Velero status in the working cluster?

    Kcoral considers Velero healthy in a working cluster when all of the following hold (the same checks can be run manually; see the sketch after this list):

    - The standard Velero components are installed in the velero namespace of the working cluster.
    - The Velero control-plane Deployment is running and has reached the desired replica count.
    - The Velero data-plane node agent is running and has reached the desired replica count.
    - Velero has successfully connected to the target MinIO (the BSL status is Available).
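    A minimal sketch of the manual checks with kubectl (the node agent DaemonSet is named node-agent in recent Velero releases and restic in older ones; adjust to your installation):

    ```bash
    # Control plane: the Deployment should be available with the desired replicas
    kubectl -n velero get deployment velero

    # Data plane: the node agent DaemonSet should have all pods ready
    kubectl -n velero get daemonset node-agent

    # Storage: the BackupStorageLocation pointing at MinIO should report Available
    kubectl -n velero get backupstoragelocations.velero.io
    ```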
- How does Kcoral obtain available clusters during cross-cluster backup and restore?

    When Kcoral performs cross-cluster backup and restore of applications, it filters the list of clusters that can be restored to on the restore page. The logic is as follows:

    - Filter out clusters that do not have Velero installed.
    - Filter out clusters whose Velero status is abnormal.
    - Retrieve and return the list of clusters that connect to the same MinIO and bucket as the target cluster.

    Therefore, as long as clusters connect to the same MinIO and bucket and Velero is running, cross-cluster backups (given write permissions) and restores can be performed. One way to verify the shared storage configuration is sketched below.
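    A minimal sketch of comparing the backup storage configuration of two clusters, assuming the default BackupStorageLocation is named default (adjust the name if your installation differs):

    ```bash
    # Run against each cluster and compare the output: the same s3Url and bucket
    # mean the clusters share backup storage
    kubectl -n velero get backupstoragelocation default \
      -o jsonpath='{.spec.config.s3Url}{"\n"}{.spec.objectStorage.bucket}{"\n"}'
    ```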
- Kcoral backed up Pods and Deployments with the same labels, but twice as many Pods appear after the restore

    This happens because the Pod labels were modified during the restore, so they no longer match the selector of the parent ReplicaSet / Deployment recorded at backup time. The Deployment then creates new Pods alongside the restored ones, resulting in twice the number of Pods.

    To avoid this, avoid modifying the labels of any of the associated resources.
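    A minimal sketch of verifying that restored Pods still match their Deployment's selector (replace the placeholder namespace and name):

    ```bash
    # Selector recorded on the Deployment
    kubectl -n <namespace> get deployment <name> -o jsonpath='{.spec.selector.matchLabels}{"\n"}'

    # Labels on the restored Pods; they should contain the selector's key/value pairs
    kubectl -n <namespace> get pods --show-labels
    ```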
## Elastic Scaling Issues
Why do elastic scaling records still exist after uninstalling VPA, HPA, and CronHPA?

Although the corresponding components were uninstalled through the Helm addon market, the records related to application elastic scaling remain.

This is a limitation of helm uninstall: it does not remove the corresponding CRDs, so their data is left behind. Manually delete those CRDs to complete the cleanup, for example as sketched below.
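A minimal sketch of finding and removing the leftover CRDs. The exact CRD names depend on which VPA / CronHPA implementation was installed, so list them first and only delete the ones belonging to the uninstalled components:

```bash
# Find autoscaling-related CRDs left behind by the uninstall
kubectl get crd | grep -i -E 'verticalpodautoscaler|cronhorizontalpodautoscaler'

# Delete a leftover CRD (this also removes all of its remaining custom resources)
kubectl delete crd <crd-name>
```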
## Console Issues
Why does the console open abnormally in low-version clusters?

In Kubernetes clusters below v1.18, opening the console may fail with CSR resource request errors. When the console is opened, a certificate is requested for the currently logged-in user through a CSR (CertificateSigningRequest) resource in the target cluster. If the cluster version is too low, or the controllers for this feature are not enabled, the certificate request fails and the console cannot connect to the target cluster.

For the certificate request process, refer to the Kubernetes documentation.
Solution:
- If the cluster version is above v1.18, check whether kube-controller-manager has CSR signing enabled and that the CSR-related controllers (csrapproving, csrcleaner, csrsigning) are functioning properly; a quick check is sketched after this list.
- For low-version clusters, the only solution is to upgrade the cluster.
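A minimal sketch of the check mentioned in the first item, assuming kube-controller-manager runs as a kubeadm-style static Pod in kube-system (the label and flags may differ for other installation methods):

```bash
# The CSR controllers are enabled by default when --controllers includes "*";
# signing additionally requires the cluster-signing cert/key flags to be set.
kubectl -n kube-system get pod -l component=kube-controller-manager \
  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' \
  | grep -E 'controllers|cluster-signing'
```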
## Issues with Creating and Integrating Clusters
- How to reset a created cluster?

    There are two situations for created clusters:

    - Failed cluster creation: if the creation failed because of incorrect parameter settings, you can choose Retry on the failed cluster, reset the parameters, and create it again.
    - Successfully created cluster: this cluster can be uninstalled first and then recreated. To uninstall the cluster, you need to disable cluster protection first.
- Failed to install plugins when integrating a cluster

    For clusters integrated in an offline environment, you need to configure the CRI to skip TLS verification for the proxy registry before installing plugins (this needs to be done on all nodes).
- Modify the file /etc/docker/daemon.json.

- Add `"insecure-registries": ["172.30.120.243","temp-registry.daocloud.io"]`. The content after modification should be as follows:
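    A sketch of the resulting file, assuming daemon.json previously contained no other settings; if it did, merge the key into the existing JSON object:

    ```json
    {
      "insecure-registries": ["172.30.120.243", "temp-registry.daocloud.io"]
    }
    ```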
- Restart Docker:
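    ```bash
    # Assuming Docker is managed by systemd
    systemctl restart docker
    ```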
- Modify /etc/containerd/config.toml. The content after modification should be as follows:

    ```toml
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."temp-registry.daocloud.io"]
      endpoint = ["http://temp-registry.daocloud.io"]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."http://temp-registry.daocloud.io".tls]
      insecure_skip_verify = true
    ```
- Pay attention to spaces and line breaks to ensure the configuration is correct. After modification, run:
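    ```bash
    # Assuming containerd is managed by systemd
    systemctl restart containerd
    ```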
- Why does cluster creation fail when kernel tuning is enabled in the advanced settings of a newly created cluster?
    - Check whether the conntrack kernel module is loaded by executing the following command:
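        ```bash
        # conntrack functionality is provided by the nf_conntrack module on current kernels
        lsmod | grep conntrack
        ```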
    - If the output is empty, the module is not loaded. Load it with the following command:
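        ```bash
        # Load the connection-tracking module
        modprobe nf_conntrack
        ```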
    Note: if the kernel module has been upgraded, this may also cause cluster creation to fail.
- The kpanda-system namespace remains in the Terminating state after a cluster is disconnected.

    Check whether the APIServices are in a normal state; the command to check is shown below. If any of them reports False, try to repair that APIService or delete the corresponding service.
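    A minimal check with kubectl; an APIService whose AVAILABLE column shows False (typically because its backing service is gone) blocks namespace deletion:

    ```bash
    kubectl get apiservices
    ```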