The integration uses kubectl's native authentication to connect to your cluster. You can use the $KUBECONFIG environment variable to find the location of the Kubernetes configuration file and configure any credentials required, making sure kubectl is configured correctly before any commands are run.

The $KUBECONFIG environment variable points at /mnt/workspace/.kube/config, giving you a number of options: you can use your cloud provider's CLI to generate the credentials for you, or write a configuration file of your own directly to /mnt/workspace/.kube/config.
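As a rough sketch of the second option, the snippet below writes a kubeconfig supplied through an environment variable to the location $KUBECONFIG points at, before any other commands run. The KUBE_CONFIG_DATA variable name and the base64 encoding are illustrative assumptions, not part of the integration:

```bash
# Write a base64-encoded kubeconfig, assumed to be stored in the
# hypothetical KUBE_CONFIG_DATA variable, to the path $KUBECONFIG points at.
mkdir -p "$(dirname "$KUBECONFIG")"
echo "$KUBE_CONFIG_DATA" | base64 -d > "$KUBECONFIG"

# Confirm kubectl can see the configuration before any other commands run.
kubectl config current-context
```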
For an Amazon EKS cluster, the $REGION_NAME and $CLUSTER_NAME environment variables must be defined in your Stack's environment.
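With those in place, a minimal sketch of the credential step, assuming the AWS CLI is installed and already authenticated on the runner, uses the aws eks update-kubeconfig command to write the cluster's credentials to the file $KUBECONFIG points at:

```bash
# Add the EKS cluster to the kubeconfig file referenced by $KUBECONFIG.
# $REGION_NAME and $CLUSTER_NAME are taken from the Stack's environment.
aws eks update-kubeconfig --region "$REGION_NAME" --name "$CLUSTER_NAME"

# Optional sanity check that the credentials work.
kubectl get nodes
```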
Connecting to an AKS cluster involves two commands:

az login - logs into the Azure CLI.
az aks get-credentials - adds credentials for your cluster to the kubeconfig file.

How exactly you authenticate depends on how you run the az login command. This guide outlines two main scenarios. Both scenarios assume that you have the $AKS_CLUSTER_NAME and $AKS_RESOURCE_GROUP environment variables configured, containing the name of the AKS cluster and the resource group name of the cluster respectively.

In the first scenario, you use the $ARM_* environment variables to login as the Service Principal for the integration.
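A minimal sketch of that login flow follows. The specific credential variables (ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_TENANT_ID) are assumed names following the usual ARM_* convention, so substitute whichever variables your integration actually provides:

```bash
# Log in to the Azure CLI as the integration's Service Principal.
# The ARM_* variable names below are assumptions, not guaranteed names.
az login --service-principal \
  --username "$ARM_CLIENT_ID" \
  --password "$ARM_CLIENT_SECRET" \
  --tenant "$ARM_TENANT_ID"

# Add the AKS cluster's credentials to the kubeconfig file at $KUBECONFIG.
az aks get-credentials \
  --name "$AKS_CLUSTER_NAME" \
  --resource-group "$AKS_RESOURCE_GROUP" \
  --overwrite-existing
```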
For GKE, you authenticate using the gcloud container clusters get-credentials command. For this to work, you need to use a custom runner image that has the gcloud CLI and kubectl installed, and the following environment variables defined (see the sketch after this list):

GKE_CLUSTER_NAME - the name of your cluster.
GKE_CLUSTER_REGION - the region the cluster is deployed to.
GCP_PROJECT_NAME - the name of your GCP project.
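A minimal sketch of the credential step, assuming gcloud is already authenticated on the runner:

```bash
# Fetch credentials for the regional cluster and write them to the
# kubeconfig file referenced by $KUBECONFIG.
gcloud container clusters get-credentials "$GKE_CLUSTER_NAME" \
  --region "$GKE_CLUSTER_REGION" \
  --project "$GCP_PROJECT_NAME"
```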
The get-credentials command configures your kubeconfig file to use the gcloud config config-helper command to allow token refresh. Unfortunately, this does not work when only an access token is available, so the credential script needs to work around it by manually removing and re-creating the user details in the config file.

If your cluster is zonal rather than regional, use the --zone flag instead of the --region flag in the gcloud container clusters get-credentials command.
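The sketch below combines the zonal variant with one possible version of that workaround; the GKE_CLUSTER_ZONE and GCP_ACCESS_TOKEN variable names are hypothetical stand-ins for however the zone and access token reach the runner:

```bash
# Zonal cluster: pass --zone instead of --region. GKE_CLUSTER_ZONE is a
# hypothetical variable holding the cluster's zone.
gcloud container clusters get-credentials "$GKE_CLUSTER_NAME" \
  --zone "$GKE_CLUSTER_ZONE" \
  --project "$GCP_PROJECT_NAME"

# get-credentials points the generated user at "gcloud config config-helper",
# which cannot refresh tokens when only an access token is available.
# Replace that user with one that carries the token directly.
# GCP_ACCESS_TOKEN is a hypothetical variable holding the access token.
CONTEXT="$(kubectl config current-context)"
kubectl config unset "users.${CONTEXT}"
kubectl config set-credentials "${CONTEXT}" --token "$GCP_ACCESS_TOKEN"
```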