Unable to run Hyperkube (Kubernetes) locally via Docker

I followed this tutorial to run a Kubernetes cluster locally in a Docker container. When I run kubectl get nodes, I get:

The connection to the server localhost:8080 was refused – did you specify the right host or port?

I noticed that some of the containers started by the kubelet, such as the apiserver, have exited. This is the output of docker ps -a:

    CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
    778bc9a9a93c  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/hyperkube apiserver"  3 seconds ago  Exited (255) 2 seconds ago  k8s_apiserver.78ec1de_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_de6ff8f9
    12dd99c83c34  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/setup-files.sh IP:1"  3 seconds ago  Exited (7) 2 seconds ago  k8s_setup.e5aa3216_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_3283400b
    ef7383fa9203  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/setup-files.sh IP:1"  4 seconds ago  Exited (7) 4 seconds ago  k8s_setup.e5aa3216_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_87beca1b
    b3896f4896b1  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/hyperkube scheduler"  5 seconds ago  Up 4 seconds  k8s_scheduler.fc12fcbe_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_16584c07
    e9b1bc5aeeaa  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/hyperkube apiserver"  5 seconds ago  Exited (255) 4 seconds ago  k8s_apiserver.78ec1de_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_87e1ad70
    c81dbe181afa  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/hyperkube controlle"  5 seconds ago  Up 4 seconds  k8s_controller-manager.70414b65_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_1e30d242
    63dfa0fb0881  gcr.io/google_containers/etcd:2.2.1  "/usr/local/bin/etcd "  5 seconds ago  Up 4 seconds  k8s_etcd.7e452b0b_k8s-etcd-sw-ansible01_default_1df6a8b4d6e129d5ed8840e370203c11_94a862fa
    6bb963ef351d  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/hyperkube proxy --m"  5 seconds ago  Up 4 seconds  k8s_kube-proxy.9a9f4853_k8s-proxy-sw-ansible01_default_5e5303a9d49035e9fad52bfc4c88edc8_6098241c
    311e2788de45  gcr.io/google_containers/pause:2.0  "/pause"  5 seconds ago  Up 4 seconds  k8s_POD.6059dfa2_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_79e4e3e8
    3b3cf3ada645  gcr.io/google_containers/pause:2.0  "/pause"  5 seconds ago  Up 4 seconds  k8s_POD.6059dfa2_k8s-etcd-sw-ansible01_default_1df6a8b4d6e129d5ed8840e370203c11_9eb869b9
    aa7efd2154fb  gcr.io/google_containers/pause:2.0  "/pause"  5 seconds ago  Up 5 seconds  k8s_POD.6059dfa2_k8s-proxy-sw-ansible01_default_5e5303a9d49035e9fad52bfc4c88edc8_b66baa5f
    c380b4a9004e  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/hyperkube kubelet -"  12 seconds ago  Up 12 seconds  kubelet

Information

Docker version: 1.10.3

Kubernetes version: 1.2.2

OS: Ubuntu 14.04

Docker run command

    docker run \
        --volume=/:/rootfs:ro \
        --volume=/sys:/sys:ro \
        --volume=/var/lib/docker/:/var/lib/docker:rw \
        --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
        --volume=/var/run:/var/run:rw \
        --net=host \
        --pid=host \
        --privileged=true \
        --name=kubelet \
        -d \
        gcr.io/google_containers/hyperkube-amd64:v1.2.2 \
        /hyperkube kubelet \
            --containerized \
            --hostname-override="172.20.34.112" \
            --address="0.0.0.0" \
            --api-servers=http://localhost:8080 \
            --config=/etc/kubernetes/manifests \
            --cluster-dns=10.0.0.10 \
            --cluster-domain=cluster.local \
            --allow-privileged=true --v=2

kubelet container logs

    I0422 11:04:45.158370 541 plugins.go:56] Registering credential provider: .dockercfg
    I0422 11:05:25.199632 541 plugins.go:291] Loaded volume plugin "kubernetes.io/aws-ebs"
    I0422 11:05:25.199788 541 plugins.go:291] Loaded volume plugin "kubernetes.io/empty-dir"
    I0422 11:05:25.199863 541 plugins.go:291] Loaded volume plugin "kubernetes.io/gce-pd"
    I0422 11:05:25.199903 541 plugins.go:291] Loaded volume plugin "kubernetes.io/git-repo"
    I0422 11:05:25.199948 541 plugins.go:291] Loaded volume plugin "kubernetes.io/host-path"
    I0422 11:05:25.199982 541 plugins.go:291] Loaded volume plugin "kubernetes.io/nfs"
    I0422 11:05:25.200023 541 plugins.go:291] Loaded volume plugin "kubernetes.io/secret"
    I0422 11:05:25.200059 541 plugins.go:291] Loaded volume plugin "kubernetes.io/iscsi"
    I0422 11:05:25.200115 541 plugins.go:291] Loaded volume plugin "kubernetes.io/glusterfs"
    I0422 11:05:25.200170 541 plugins.go:291] Loaded volume plugin "kubernetes.io/persistent-claim"
    I0422 11:05:25.200205 541 plugins.go:291] Loaded volume plugin "kubernetes.io/rbd"
    I0422 11:05:25.200249 541 plugins.go:291] Loaded volume plugin "kubernetes.io/cinder"
    I0422 11:05:25.200289 541 plugins.go:291] Loaded volume plugin "kubernetes.io/cephfs"
    I0422 11:05:25.200340 541 plugins.go:291] Loaded volume plugin "kubernetes.io/downward-api"
    I0422 11:05:25.200382 541 plugins.go:291] Loaded volume plugin "kubernetes.io/fc"
    I0422 11:05:25.200430 541 plugins.go:291] Loaded volume plugin "kubernetes.io/flocker"
    I0422 11:05:25.200471 541 plugins.go:291] Loaded volume plugin "kubernetes.io/azure-file"
    I0422 11:05:25.200519 541 plugins.go:291] Loaded volume plugin "kubernetes.io/configmap"
    I0422 11:05:25.200601 541 server.go:645] Started kubelet
    E0422 11:05:25.200796 541 kubelet.go:956] Image garbage collection failed: unable to find data for container /
    I0422 11:05:25.200843 541 server.go:126] Starting to listen read-only on 0.0.0.0:10255
    I0422 11:05:25.201531 541 server.go:109] Starting to listen on 0.0.0.0:10250
    E0422 11:05:25.201684 541 event.go:202] Unable to write event: 'Post http://localhost:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
    I0422 11:05:25.206656 541 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
    I0422 11:05:25.206714 541 manager.go:123] Starting to sync pod status with apiserver
    I0422 11:05:25.206888 541 kubelet.go:2356] Starting kubelet main sync loop.
    I0422 11:05:25.207036 541 kubelet.go:2365] skipping pod synchronization - [container runtime is down]
    I0422 11:05:25.333829 541 factory.go:233] Registering Docker factory
    I0422 11:05:25.336920 541 factory.go:97] Registering Raw factory
    I0422 11:05:25.392065 541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
    I0422 11:05:25.392148 541 kubelet.go:1134] Attempting to register node 172.20.34.112
    I0422 11:05:25.398401 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
    I0422 11:05:25.492441 541 manager.go:1003] Started watching for new ooms in manager
    I0422 11:05:25.493365 541 oomparser.go:182] oomparser using systemd
    I0422 11:05:25.495129 541 manager.go:256] Starting recovery of all containers
    I0422 11:05:25.583462 541 manager.go:261] Recovery completed
    I0422 11:05:25.622022 541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
    I0422 11:05:25.622065 541 kubelet.go:1134] Attempting to register node 172.20.34.112
    I0422 11:05:25.622485 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
    I0422 11:05:26.038631 541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
    I0422 11:05:26.038753 541 kubelet.go:1134] Attempting to register node 172.20.34.112
    I0422 11:05:26.039300 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
    I0422 11:05:26.852863 541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
    I0422 11:05:26.852892 541 kubelet.go:1134] Attempting to register node 172.20.34.112
    I0422 11:05:26.853320 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
    I0422 11:05:28.468911 541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
    I0422 11:05:28.468937 541 kubelet.go:1134] Attempting to register node 172.20.34.112
    I0422 11:05:28.469355 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
    I0422 11:05:30.207357 541 kubelet.go:2388] SyncLoop (ADD, "file"): "k8s-etcd-172.20.34.112_default(1df6a8b4d6e129d5ed8840e370203c11), k8s-proxy-172.20.34.112_default(5e5303a9d49035e9fad52bfc4c88edc8), k8s-master-172.20.34.112_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)"
    E0422 11:05:30.207416 541 kubelet.go:2307] error getting node: node '172.20.34.112' is not in cache
    E0422 11:05:30.207465 541 kubelet.go:2307] error getting node: node '172.20.34.112' is not in cache
    E0422 11:05:30.207505 541 kubelet.go:2307] error getting node: node '172.20.34.112' is not in cache
    E0422 11:05:30.209316 541 kubelet.go:1764] Failed creating a mirror pod for "k8s-proxy-172.20.34.112_default(5e5303a9d49035e9fad52bfc4c88edc8)": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused
    E0422 11:05:30.209332 541 kubelet.go:1764] Failed creating a mirror pod for "k8s-etcd-172.20.34.112_default(1df6a8b4d6e129d5ed8840e370203c11)": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused
    I0422 11:05:30.209396 541 manager.go:1688] Need to restart pod infra container for "k8s-proxy-172.20.34.112_default(5e5303a9d49035e9fad52bfc4c88edc8)" because it is not found
    W0422 11:05:30.209828 541 manager.go:408] Failed to update status for pod "_()": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-etcd-172.20.34.112: dial tcp 127.0.0.1:8080: connection refused
    E0422 11:05:30.209899 541 kubelet.go:1764] Failed creating a mirror pod for "k8s-master-172.20.34.112_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused
    W0422 11:05:30.212690 541 manager.go:408] Failed to update status for pod "_()": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-proxy-172.20.34.112: dial tcp 127.0.0.1:8080: connection refused
    I0422 11:05:30.214297 541 manager.go:1688] Need to restart pod infra container for "k8s-master-172.20.34.112_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)" because it is not found
    W0422 11:05:30.214935 541 manager.go:408] Failed to update status for pod "_()": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-master-172.20.34.112: dial tcp 127.0.0.1:8080: connection refused
    I0422 11:05:30.220596 541 manager.go:1688] Need to restart pod infra container for "k8s-etcd-172.20.34.112_default(1df6a8b4d6e129d5ed8840e370203c11)" because it is not found
    I0422 11:05:31.693419 541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
    I0422 11:05:31.693456 541 kubelet.go:1134] Attempting to register node 172.20.34.112
    I0422 11:05:31.694191 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused

apiserver logs (exited)

    I0425 13:18:55.516154 1 genericapiserver.go:82] Adding storage destination for group batch
    W0425 13:18:55.516177 1 server.go:383] No RSA key provided, service account token authentication disabled
    F0425 13:18:55.516185 1 server.go:410] Invalid Authentication Config: open /srv/kubernetes/basic_auth.csv: no such file or directory

2 Answers

    I reproduced your problem earlier, and I have also run the kubelet container successfully a couple of times.

    Here is the exact command I run:

        export K8S_VERSION=v1.2.2
        docker run \
            --volume=/:/rootfs:ro \
            --volume=/sys:/sys:ro \
            --volume=/var/lib/docker/:/var/lib/docker:rw \
            --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
            --volume=/var/run:/var/run:rw \
            --net=host \
            --pid=host \
            --privileged=true \
            --name=kubelet \
            -d \
            gcr.io/google_containers/hyperkube-amd64:${K8S_VERSION} \
            /hyperkube kubelet \
                --containerized \
                --hostname-override="127.0.0.1" \
                --address="0.0.0.0" \
                --api-servers=http://localhost:8080 \
                --config=/etc/kubernetes/manifests \
                --allow-privileged=true --v=2

    I removed these 2 settings from the command suggested by the tutorial because DNS was not needed in my case: --cluster-dns=10.0.0.10 --cluster-domain=cluster.local

    Additionally, I started SSH port forwarding to the server in the background before starting the kubelet container, using this command:

    docker-machine ssh `docker-machine active` -f -N -L "8080:localhost:8080"
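
    A quick sanity check, assuming the port forwarding above is in place, is to hit the apiserver from the host before going any further:

        # should return the apiserver's version JSON once it is up
        curl http://localhost:8080/version

        # or, equivalently, through kubectl
        kubectl -s http://localhost:8080 version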

    I also made no changes to the SSL certificates.

    I was able to run the kubelet container with both K8S_VERSION=v1.2.2 and K8S_VERSION=1.2.3.

    On a successful run, I observe that all the processes are "Up"; none are "Exited":

    $ docker ps -a
    CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
    42e6d973f624  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/hyperkube apiserver"  About an hour ago  Up About an hour  k8s_apiserver.78ec1de_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_5d260d3c
    135c020f14b4  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/hyperkube controlle"  About an hour ago  Up About an hour  k8s_controller-manager.70414b65_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_9b338f27
    873656c913fd  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/setup-files.sh IP:1"  About an hour ago  Up About an hour  k8s_setup.e5aa3216_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_ff89fc7c
    8b12f5f20e8f  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/hyperkube scheduler"  About an hour ago  Up About an hour  k8s_scheduler.fc12fcbe_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_ea90af75
    93d9b2387b2e  gcr.io/google_containers/etcd:2.2.1  "/usr/local/bin/etcd "  About an hour ago  Up About an hour  k8s_etcd.7e452b0b_k8s-etcd-127.0.0.1_default_1df6a8b4d6e129d5ed8840e370203c11_d66f84f0
    f6e45af93ee9  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/hyperkube proxy --m"  About an hour ago  Up About an hour  k8s_kube-proxy.9a9f4853_k8s-proxy-127.0.0.1_default_5e5303a9d49035e9fad52bfc4c88edc8_b0084efc
    f6748442f2d1  gcr.io/google_containers/pause:2.0  "/pause"  About an hour ago  Up About an hour  k8s_POD.6059dfa2_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_f4758f9b
    d515c10910c4  gcr.io/google_containers/pause:2.0  "/pause"  About an hour ago  Up About an hour  k8s_POD.6059dfa2_k8s-etcd-127.0.0.1_default_1df6a8b4d6e129d5ed8840e370203c11_3248c1d6
    958f4865df9f  gcr.io/google_containers/pause:2.0  "/pause"  About an hour ago  Up About an hour  k8s_POD.6059dfa2_k8s-proxy-127.0.0.1_default_5e5303a9d49035e9fad52bfc4c88edc8_3850b11e
    2611ee951476  gcr.io/google_containers/hyperkube-amd64:v1.2.2  "/hyperkube kubelet -"  About an hour ago  Up About an hour  kubelet

    On a successful run, I also see similar log output when I run docker logs kubelet. In particular, I still see: Unable to register 127.0.0.1 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused

    But, in the end, it works:

        $ kubectl -s http://localhost:8080 cluster-info
        Kubernetes master is running at http://localhost:8080

        $ kubectl get nodes
        NAME             STATUS     AGE
        127.0.0.1        Ready      1h
        192.168.99.100   NotReady   1h
        localhost        NotReady   1h

    Other tips:

    • You may need to wait a bit for the API server to come up. For example, this guy uses a loop to wait for it:

          until $(kubectl -s http://localhost:8080 cluster-info &> /dev/null); do
              sleep 1
          done

    • On Mac OS X, I have noticed that the Docker VM can become unstable whenever I switch wireless networks or suspend/resume my laptop. I can usually fix such problems with a docker-machine restart.

    • When experimenting with the kubelet, I often want to stop the kubelet container and stop/remove all the containers in my Docker. I do that by running docker stop kubelet && docker rm -f $(docker ps -aq)

    Information about my setup, OS X El Capitan 10.11.2:

    $ docker --version
    Docker version 1.10.3, build 20f81dd
    $ kubectl version
    Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}
    Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}

    [I am not a Kubernetes expert – just following my nose here.]

    The kubelet failure is apparently just a downstream symptom of the refused connection on port 8080 that you noted at the start of your question. It is not where you need to focus.

    Recall this line from the logs you showed us:

     I0422 11:05:28.469355 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused 

    So the kubelet is trying to contact the apiserver and the connection is being refused. That is not surprising given that, as you noted, the apiserver has exited.
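
    If you want to confirm that, a minimal sketch (plain docker commands, nothing specific to this setup) is to pull the logs straight out of the exited apiserver container:

        # list the exited apiserver containers, then dump the logs of one of them
        docker ps -a --filter "name=k8s_apiserver"
        docker logs 778bc9a9a93c   # substitute the container ID from your own docker ps -a output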

    The log lines you show for the apiserver indicate that it is complaining about missing certificates. The certificates normally live in /var/run/kubernetes (noted here). That falls under the /var/run volume that is mounted in the docker command your tutorial uses to run Kubernetes. I would look closely at that volume specification to check whether you made a mistake there and whether the certificates are present as expected.
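
    For example, a minimal check (assuming the /var/run volume is mounted exactly as in your docker run command) would be to look for the generated certificates both on the host and inside the kubelet container:

        # on the host: /var/run is bind-mounted rw into the kubelet container
        ls -l /var/run/kubernetes

        # inside the running kubelet container
        docker exec kubelet ls -l /var/run/kubernetes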

    There are a few bits at https://github.com/kubernetes/kubernetes/issues/11000 that may help you figure out what is going wrong in your setup, including devurandom providing a script for creating the certs, if that turns out to be what you need.
