Do you have an existing k8s config? Running
aws eks update-kubeconfig --region <region> --name <cluster name>
generates a ~/.kube/config file.
If you already have a ~/.kube/config, there could be a conflict between the newly generated file and the existing one that prevents them from being merged.
If you have a ~/.kube/config file, and you aren't actively using it, running
rm ~/.kube/config
and then attempting
aws eks update-kubeconfig --region us-east-2 --name <cluster name>
afterwards will likely solve your issue.
If you are actively using your ~/.kube/config file, rename it to something else so you can restore it later, and then run the eks command again.
See a similar issue here: https://github.com/aws/aws-cli/issues/4843
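The rename-and-regenerate steps above can be sketched as follows. KUBECONFIG_PATH is a placeholder variable introduced here for illustration; it falls back to the default kubeconfig path.

```shell
# Back up an existing kubeconfig so update-kubeconfig can write a fresh one.
KUBECONFIG_PATH="${KUBECONFIG_PATH:-$HOME/.kube/config}"
if [ -f "$KUBECONFIG_PATH" ]; then
  mv "$KUBECONFIG_PATH" "$KUBECONFIG_PATH.backup"   # keep the old config for later
fi
# Then regenerate with your own region and cluster name:
# aws eks update-kubeconfig --region <region> --name <cluster name>
```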
Answer from emh221 on Stack Overflow
Your ~/.kube/config may contain null values like the following, which would cause this error:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
Running > $HOME/.kube/config empties the existing .kube/config file; then run the following command again:
aws eks update-kubeconfig --region us-east-2 --name <cluster name>
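The empty-and-regenerate approach can be sketched like this. CFG is a placeholder variable, and the grep pattern assumes the null-valued config shown above.

```shell
# Empty a kubeconfig that only contains null entries, then regenerate it.
CFG="${CFG:-$HOME/.kube/config}"
if [ -f "$CFG" ] && grep -q 'clusters: null' "$CFG"; then
  : > "$CFG"   # same effect as: > $HOME/.kube/config
fi
# aws eks update-kubeconfig --region us-east-2 --name <cluster name>
```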
Hey guys,
I've deployed a cluster on EKS (AWS) via Terraform, and I'm wondering if it's possible to append new configuration content to the default YAML configuration file without recreating it.
Currently I'm using a provisioner to update the file, but is there a way to append content to the file without replacing it?
resource "null_resource" "merge_kubeconfig" {
  triggers = {
    always = timestamp()
  }

  depends_on = [module.eks_cluster]

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    command     = <<EOT
set -e
echo 'Applying Auth ConfigMap with kubectl...'
aws eks wait cluster-active --name '${local.cluster_name}'
aws eks update-kubeconfig --name '${local.cluster_name}' --alias '${local.cluster_name}-${var.region}' --region=${var.region}
EOT
  }
}

Thanks in advance!
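As far as I know, aws eks update-kubeconfig already merges the new cluster, context, and user entries into an existing kubeconfig rather than overwriting it, so the provisioner should append rather than recreate the file. One tweak worth considering: the always = timestamp() trigger forces the provisioner to re-run on every apply. Keying the trigger on the cluster itself re-runs it only when the cluster changes; the output name below is an assumption about what your EKS module exports.

```hcl
resource "null_resource" "merge_kubeconfig" {
  triggers = {
    # Re-run only when the cluster changes; "cluster_endpoint" is an
    # assumed output name -- check what your eks module actually exports.
    cluster = module.eks_cluster.cluster_endpoint
  }
  # ... depends_on and provisioner blocks unchanged ...
}
```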