Kubernetes Deployment
2019-05-06 06:51:53
louyj
# Requirement

Disable swap, then configure a proxy as follows.

```bash
swapoff -a
```

## Install the shadowsocks client

```bash
#install ss client
yum install python-setuptools && easy_install pip
pip install shadowsocks

#configuration
vi /etc/shadowsocks.json
```

```json
{
    "server": "xxx.com",
    "server_port": xxxx,
    "local_port": 1080,
    "password": "xxx",
    "timeout": 600,
    "method": "aes-256-cfb"
}
```

```bash
#start ss client
sslocal -c /etc/shadowsocks.json -d start
```

## Install polipo

Polipo converts the shadowsocks SOCKS proxy into an HTTP proxy:

```bash
git clone https://github.com/jech/polipo.git
cd polipo
make all
make install
```

Configure polipo:

```bash
vi /etc/polipo.cfg
```

```
socksParentProxy = "127.0.0.1:1080"
socksProxyType = socks5
logFile = /var/log/polipo
logLevel = 99
logSyslog = true
```

Start polipo:

```bash
nohup polipo -c /etc/polipo.cfg &
```

# Install docker

```bash
# Install Docker from CentOS/RHEL repository:
yum install -y docker
```

or install Docker CE 18.06 from Docker's CentOS repositories:

```bash
## Install prerequisites.
yum install yum-utils device-mapper-persistent-data lvm2

## Add docker repository.
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

## Install docker.
yum update && yum install docker-ce-18.06.1.ce

## Create /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "bip": "172.17.0.1/24",
  "data-root": "/docker"
}
EOF
# alternative bridge: "bip": "192.168.35.1/24"
# if bip changes, remove the old bridge first: ip link del docker0

mkdir -p /etc/systemd/system/docker.service.d
```

Proxy docker through polipo:

```bash
vi /etc/systemd/system/docker.service.d/http-proxy.conf
```

```ini
[Service]
Environment="HTTP_PROXY=http://localhost:8123"
```

```bash
# Restart docker.
systemctl daemon-reload
systemctl restart docker
```
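To confirm that the proxy drop-in actually reached the daemon, a quick sanity check can be run before moving on (a sketch; it assumes polipo is listening on its default port 8123 and that `k8s.gcr.io/pause:3.1` is reachable through the proxy):

```bash
# The drop-in should appear in the unit's environment.
systemctl show --property=Environment docker

# docker info reports the proxy it picked up from the environment.
docker info | grep -i proxy

# Smoke test: pulling a gcr-hosted image only succeeds if the proxy works.
docker pull k8s.gcr.io/pause:3.1
```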
# Install kubeadm, kubelet and kubectl

Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your sysctl config, e.g.

```bash
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
```

# Initializing your master

```bash
export HTTP_PROXY=http://localhost:8123
export HTTPS_PROXY=http://localhost:8123
kubeadm init --pod-network-cidr=10.244.0.0/16
```

or pull the images from a mirror instead of going through the proxy:

```bash
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 mirrorgooglecontainers/coredns:1.2.6
kubeadm init --pod-network-cidr=10.244.0.0/16 \
  --image-repository=index.docker.io/mirrorgooglecontainers
# alternative mirror:
#   --image-repository=gcr.akscn.io/google_containers
```

To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Alternatively, if you are the root user, you can run:

```bash
export KUBECONFIG=/etc/kubernetes/admin.conf
systemctl enable kubelet && systemctl start kubelet
```

# Installing a pod network add-on

Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to 1 by running `sysctl net.bridge.bridge-nf-call-iptables=1` to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work.

```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
```

Once a pod network has been installed, you can confirm that it is working by checking that the CoreDNS pod is Running in the output of `kubectl get pods --all-namespaces`.

# Control plane node isolation

By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:

```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
```

With output looking something like:

```
node "test-01" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found
```

# Joining your nodes

The nodes are where your workloads (containers, pods, etc.) run. To add new nodes to your cluster, do the following for each machine: run the command that was output by kubeadm init. For example:

```bash
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
```

If you do not have the token, you can get it by running the following command on the master node:

```bash
kubeadm token list
```

By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the master node (see also the sketch after the next section):

```bash
kubeadm token create
```

# Controlling your cluster from machines other than the master (Optional)

In order to get kubectl on some other computer (e.g. a laptop) to talk to your cluster, you need to copy the administrator kubeconfig file from your master to your workstation like this:

```bash
scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
```
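Rather than assembling the join command by hand from `kubeadm token list` and the CA cert hash, a single command on the master can print a ready-to-run join line (a sketch; `--print-join-command` exists in the kubeadm versions this guide targets):

```bash
# Prints a complete "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line.
kubeadm token create --print-join-command

# After the worker joins, confirm it registered (on the master, or via the copied admin.conf).
kubectl get nodes -o wide
```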
# Proxying API Server to localhost (Optional)

If you want to connect to the API Server from outside the cluster, you can use kubectl proxy:

```bash
scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy
```

You can now access the API Server locally at http://localhost:8001/api/v1.

# Tear down (Optional)

To undo what kubeadm did, you should first drain the node and make sure that the node is empty before shutting it down. Talking to the master with the appropriate credentials, run:

```bash
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
```

Then, on the node being removed, reset all kubeadm installed state:

```bash
kubeadm reset
```

If you wish to start over, simply run kubeadm init or kubeadm join with the appropriate arguments.

```bash
mkdir -p /etc/cni/net.d
vi /etc/cni/net.d/10-flannel.conflist
```

```json
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```

# Modify coredns ConfigMap

```bash
kubectl get configmap coredns --namespace=kube-system -o yaml
# modify upstream, proxy
kubectl edit configmap coredns --namespace=kube-system -o yaml
```

```yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream 172.17.36.65
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . 172.17.36.65
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2018-12-05T14:44:00Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "125839"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 33f0641a-f89c-11e8-b73e-005056b9b279
```

A DNS verification sketch appears after the Dashboard access subsections below.

# Deploying the Dashboard UI

```bash
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
```

## Accessing the Dashboard UI

There are multiple ways you can access the Dashboard UI: either by using the kubectl command-line interface, or by accessing the Kubernetes master apiserver using your web browser.

## Command line proxy

You can access Dashboard using the kubectl command-line tool by running the following command:

```bash
kubectl proxy
```

Kubectl will handle authentication with the apiserver and make Dashboard available at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

## Master server

You may access the UI directly via the Kubernetes master apiserver. Open a browser and navigate to https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/, where <master-ip> is the IP address or domain name of the Kubernetes master. Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
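After changing the CoreDNS ConfigMap above, it is worth confirming that cluster DNS actually resolves. A minimal sketch using a throwaway pod (it assumes the `busybox:1.28` image, a tag whose nslookup is known to work; the `reload` plugin in the Corefile should pick up the edit automatically, and deleting the pods simply forces it):

```bash
# Force the coredns pods to restart with the edited Corefile.
kubectl delete pod -n kube-system -l k8s-app=kube-dns

# Run a one-off pod and resolve the in-cluster API service name.
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- \
  nslookup kubernetes.default
```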
## Admin privileges

IMPORTANT: Make sure that you know what you are doing before proceeding. Granting admin privileges to Dashboard's Service Account might be a security risk.

You can grant full admin privileges to Dashboard's Service Account by creating the ClusterRoleBinding below. Copy the YAML based on the chosen installation method and save it as, e.g., `dashboard-admin.yaml`. Use `kubectl create -f dashboard-admin.yaml` to deploy it. Afterwards you can use the Skip option on the login page to access Dashboard.

Official release:

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```
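As an alternative to the Skip button, the Dashboard service account's bearer token can be pasted into the Token field of the login page. A lookup sketch (it assumes the dashboard manifest created its token secret in kube-system under the usual `kubernetes-dashboard-token-*` name):

```bash
# Find the dashboard service account's token secret and print its contents,
# including the "token:" value used for login.
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk '{print $1}')
```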