[kubernetes] did you specify the right host or port? error on Kubernetes

I have followed the Hello Node tutorial at http://kubernetes.io/docs/hellonode/.

When I run:

kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080

I get:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Why does the command line try to connect to localhost?

This question is related to kubernetes kubectl

The answers are below.


I had the same issue after a reboot; I followed the guide described here.

So try the following:

$ sudo -i
# swapoff -a
# exit
$ strace -eopenat kubectl version

After that, it worked fine.
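For context, swapoff -a disables swap, which the kubelet refuses to run with by default, and the strace line is just a diagnostic showing which kubeconfig paths kubectl tries to open. To filter the trace down to those lookups, something like this should work (a sketch; strace prints to stderr, hence the redirect):

strace -eopenat kubectl version 2>&1 | grep kube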


I was getting an error when running

sudo kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?

For my environment, the following --kubeconfig parameter finally worked when executing kubectl as a non-root user:

sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get pods
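To avoid passing --kubeconfig on every invocation, you could set it as an environment variable for a single command instead (a sketch, assuming the same kubeadm-style admin.conf path):

sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl get pods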


If you created a cluster on AWS using kops, then kops creates ~/.kube/config for you, which is nice. But if someone else needs to connect to that cluster, they also need to install kops so that it can generate the kubeconfig for them:

export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
export CLUSTER_ALIAS=kubernetes-cluster

kubectl config set-context ${CLUSTER_ALIAS} \
    --cluster=${CLUSTER_FULL_NAME} \
    --user=${CLUSTER_FULL_NAME}

kubectl config use-context ${CLUSTER_ALIAS}

# kops export kubecfg writes the cluster and user entries into ~/.kube/config
kops export kubecfg --name ${CLUSTER_FULL_NAME} \
  --region=${CLUSTER_REGION} \
  --state=${KOPS_STATE_STORE}
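To confirm the context is wired up correctly, a quick sanity check (assuming the alias from above):

kubectl config current-context   # should print kubernetes-cluster
kubectl get nodes                # verifies kubectl can actually reach the apiserver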

Out of all the above, what actually fixed it for me was running the commands below:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
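Note that export only lasts for the current shell, which is why this error often reappears after a reboot. To make it persistent, you could append it to your shell profile (assuming bash):

echo 'export KUBECONFIG=$HOME/admin.conf' >> ~/.bashrc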

The issue is that your kubeconfig is not right. To auto-generate it, run:

gcloud container clusters get-credentials "CLUSTER NAME"

This worked for me.
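Depending on how the cluster was created, you may also need to pass the zone (or region) and project explicitly; all values below are placeholders:

gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project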


This error means that kubectl is attempting to connect to a Kubernetes API server running on your local machine, which is the default if you haven't configured it to talk to a remote API server.
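A quick way to verify this is to inspect what kubectl is actually pointed at; if no server is configured, it falls back to localhost:8080:

kubectl config view              # shows the clusters and contexts kubectl knows about
kubectl config current-context   # errors out if no context is set at all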


I ran into the same trouble after a recent release; it seems KUBECONFIG must now be set explicitly:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf


I was also getting the error below:

Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.

Then I just executed the command below and everything worked fine.

PS C:\> .\minikube.exe start

Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 150.53 MB / 150.53 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.


Regardless of your environment (gcloud or not), you need to point kubectl at a kubeconfig. By default, kubectl expects it at $HOME/.kube/config; alternatively, point it at a custom path via an environment variable (useful for scripting, etc.):

export KUBECONFIG=/your_kubeconfig_path

Please refer to: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/

If you don't have a kubeconfig file for your cluster, create one by referring to: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/

You will need the cluster's ca.crt plus the apiserver-kubelet-client key and certificate.
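For reference, here is a minimal kubeconfig sketch; every name and path is a placeholder, and the certificate paths assume a kubeadm-style install:

apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt   # the cluster's CA
    server: https://<apiserver-host>:6443
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context
users:
- name: my-user
  user:
    client-certificate: /path/to/apiserver-kubelet-client.crt
    client-key: /path/to/apiserver-kubelet-client.key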


Reinitialising gcloud with the proper account and project worked for me.

gcloud init

After this, retrying the command below succeeded and a kubeconfig entry was generated.

gcloud container clusters get-credentials "cluster_name"

Check the cluster info with:

kubectl cluster-info

Try running with sudo, for example: sudo kubectl ...


After running "kubeinit" command, kubernetes asks you to run following as regular user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

But if you run these as a regular user and then invoke kubectl as root (or vice versa), you will get "The connection to the server localhost:8080 was refused - did you specify the right host or port?". So run kubectl as the same user who executed the commands above.
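If you genuinely need kubectl as root, kubeadm's init output also offers an alternative for the root user: point it at admin.conf directly.

export KUBECONFIG=/etc/kubernetes/admin.conf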


I reproduced the same error while doing the Udacity tutorial Scalable Microservices with Kubernetes (https://classroom.udacity.com/courses/ud615), at "Using Kubernetes", Part 3 of the lesson.

Launch a Single Instance:

kubectl run nginx --image=nginx:1.10.0

Error:

Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.

How I resolved the error:

1. Log in to Google Cloud Platform.

2. Navigate to Container Engine in the Google Cloud Platform console.

3. Click CONNECT on the cluster.

4. Use the login credentials to access cluster [NAME] in your terminal.

5. Proceed with work.


Make sure your config is set to the project: gcloud config set project [PROJECT_ID]

  1. Run a checklist of the clusters in the account: gcloud container clusters list

  2. Check the output:

NAME           LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE  NODE_VERSION  NUM_NODES  STATUS
alpha-cluster  asia-south1-a  1.9.7-gke.6     35.200.254.78  f1-micro      1.9.7-gke.6   3          RUNNING

  3. Run the following command to fetch credentials for your running cluster:

gcloud container clusters get-credentials your-cluster-name --zone your-zone --project your-project

  4. The following output appears:

Fetching cluster endpoint and auth data.
kubeconfig entry generated for alpha-cluster.

  5. Check the nodes kubectl can see, with more details:

$ kubectl get nodes -o wide

Should be good to go.


I had the same issue. In my scenario the Kubernetes API server was not responding, so check your Kubernetes API server and controller manager as well.


I had the same error; this worked for me. Run:

minikube status

If the response is:

type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

run minikube start, then check the status again:

type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

You can proceed


I had this problem using a local Docker setup. The thing to do is check the logs of the containers it spins up to figure out what went wrong. For me, it transpired that etcd had fallen over:

   $ docker logs <etcdContainerId>
   <snip>
   2016-06-15 09:02:32.868569 C | etcdmain: listen tcp 127.0.0.1:7001: bind: address already in use

Aha! I'd been playing with Cassandra in a Docker container, and I'd forwarded all its ports since I wasn't sure which ones it needed exposed; 7001 is one of them. Stopping Cassandra, cleaning up the mess, and restarting fixed things.
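To find out which process is squatting on a port before digging through container logs, a quick check helps (Linux; assumes ss or lsof is installed):

sudo ss -ltnp | grep 7001    # or: sudo lsof -i :7001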


The solution is this:

minikube delete
minikube start --vm-driver none

I got this issue when using "Bash on Windows" with Azure Kubernetes Service:

az aks get-credentials -n <myCluster> -g <myResourceGroup>

The config file is auto-generated and placed at ~/.kube/config for the host OS (Windows in my case).

To solve this, run from the Bash command line: cp <yourWindowsPathToConfigPrintedFromAboveCommand> ~/.kube/config
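In "Bash on Windows" (WSL) the Windows filesystem is typically mounted under /mnt/c, so the copy usually looks something like this (the username is a placeholder):

mkdir -p ~/.kube
cp /mnt/c/Users/<yourWindowsUser>/.kube/config ~/.kube/config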