Preparation

  • Node preparation
    • 10.0.2.7 k8s-master
    • 10.0.2.8 k8s-node1
    • 10.0.2.9 k8s-node2
  • Network planning
    • node CIDR: 10.0.2.0/24
    • service CIDR: 10.2.0.0/24
    • pod CIDR: 10.244.0.0/16
  • Linux environment preparation
  • Docker environment
  • Clock synchronization (a command sketch follows this list)
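The original does not spell out the Linux preparation and clock-sync commands; the following is a minimal sketch assuming CentOS/RHEL-family nodes (the firewall/SELinux/swap handling is an assumption, adjust for your distribution):

# run on every node
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
yum install -y chrony
systemctl enable --now chronyd
chronyc sources        # confirm the clock is actually syncing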

Download the k8s package & copy it to the master

  • Download
    This article uses 1.24.2 as the example (an alternative official download is sketched after this step).
  • Copy to the master
scp kubernetes-v1.24.2.tar k8s-master:~/
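The kubernetes-v1.24.2.tar bundle used here is a pre-packaged archive; if you prefer fetching the server binaries straight from the official release instead, something like the following should work (URL pattern assumed from the upstream release layout):

wget https://dl.k8s.io/v1.24.2/kubernetes-server-linux-amd64.tar.gz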

Install etcd (master)

  • cd kubernetes-v1.24.2/cby
  • tar -xvf etcd-v3.5.4-linux-amd64.tar.gz
  • cd etcd-v3.5.4-linux-amd64
  • cp etcd* /usr/bin/
  • vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
# etcd data directory
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
  • Create the directory /var/lib/etcd/, otherwise etcd will fail to start
mkdir /var/lib/etcd/
  • Start etcd
systemctl enable etcd.service
systemctl start etcd.service
  • Check etcd
systemctl status etcd

etcd can be deployed with CA certificates or as a multi-node cluster; to keep things simple, certificates are skipped here.
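To confirm etcd is actually serving requests rather than just showing an active unit, probe it with etcdctl (copied to /usr/bin above); this assumes the default plain-HTTP listener on 127.0.0.1:2379, since no certificates are configured here:

ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 endpoint health
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 member list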

Install the certificate tool cfssl (master)

wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64 
chmod +x cfssl_1.6.1_linux_amd64 cfssljson_1.6.1_linux_amd64 cfssl-certinfo_1.6.1_linux_amd64 
mv cfssl_1.6.1_linux_amd64 /usr/bin/cfssl
mv cfssljson_1.6.1_linux_amd64 /usr/bin/cfssljson
mv cfssl-certinfo_1.6.1_linux_amd64  /usr/bin/cfssl-certinfo
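A quick sanity check that the tool is installed and on PATH:

cfssl version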

Master deployment

Preparation

  • cd kubernetes-v1.24.2/cby
  • tar -xvf kubernetes-server-linux-amd64.tar.gz
  • cp kubernetes/server/bin/kube-apiserver /usr/bin/
  • cp kubernetes/server/bin/kube-scheduler /usr/bin/
  • cp kubernetes/server/bin/kube-controller-manager /usr/bin/
  • cp kubernetes/server/bin/kubectl /usr/bin/
  • mkdir -p /opt/kubernetes/{cfg,ssl,logs}

kube-apiserver

Create certificates

  • Create the token file
cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

Format: token,username,UID,user group
The token can also be generated yourself and substituted:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
  • Create the CA certificate for the k8s apiserver
    • Create and enter the certificate working directory
    mkdir -p ~/TLS/k8s && cd ~/TLS/k8s
    
    • Generate ca-config.json
    cat > ca-config.json << EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    EOF
    
    • Generate ca-csr.json
    cat > ca-csr.json << EOF
    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    
    • Generate the CA certificate
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    
    • This produces ca.pem and ca-key.pem.
    • Copy the certificates to /opt/kubernetes/ssl/
    cp ca.pem /opt/kubernetes/ssl/
    cp ca-key.pem /opt/kubernetes/ssl/
    
  • Use the self-signed CA to issue the kube-apiserver HTTPS certificate
    • Create server-csr.json
    cat > server-csr.json << EOF
    {
        "CN": "kubernetes",
        "hosts": [
          "10.2.0.1",
          "10.0.2.7",
          "10.0.2.8",
          "10.0.2.9",
          "172.0.0.1",
          "kubernetes",
          "kubernetes.default",
          "kubernetes.default.svc",
          "kubernetes.default.svc.cluster",
          "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    

    Note: the IPs in the hosts field above must include every Master/LB/VIP IP — none can be missing! To make future scaling easier, you can list a few extra reserved IPs.

    • Generate the certificate
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
    
    • This produces server.pem and server-key.pem (a SAN check is sketched just below).
    • Copy the certificates to /opt/kubernetes/ssl/
    cp server.pem /opt/kubernetes/ssl/
    cp server-key.pem /opt/kubernetes/ssl/
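    To double-check that every IP from the hosts list above actually landed in the certificate's SANs, the certificate can be decoded with the cfssl-certinfo tool installed earlier:
    cfssl-certinfo -cert server.pem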
    

Deploy the apiserver

  • Create the apiserver configuration file. --etcd-servers accepts a comma-separated list of endpoints; with the certificate-less etcd set up above, make sure the scheme and address match what etcd actually listens on (see Problems -> kube-apiserver below).
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://10.0.2.7:2379 \\
--bind-address=10.0.2.7 \\
--secure-port=6443 \\
--advertise-address=10.0.2.7 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.2.0.0/24 \\
--enable-admission-plugins=NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
  • Create the systemd unit file kube-apiserver.service
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/usr/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Start and enable on boot (a quick health check is sketched after these commands)
systemctl daemon-reload
systemctl start kube-apiserver 
systemctl enable kube-apiserver
systemctl status kube-apiserver
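Once the unit is active, a quick check that the API is actually answering on the secure port; anonymous access to /healthz and /version is normally allowed by the default bootstrap RBAC, and if your setup restricts it, inspect journalctl -u kube-apiserver instead:

curl -k https://10.0.2.7:6443/healthz   # should print: ok
curl -k https://10.0.2.7:6443/version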

kube-controller-manager

Create certificates

  • Switch to the working directory
cd ~/TLS/k8s
  • Create the certificate signing request file
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing", 
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
  • Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Create kube-controller-manager.kubeconfig

Run the following commands in the shell:

KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://10.0.2.7:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Create the configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect=true --kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig --bind-address=127.0.0.1 --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16 --service-cluster-ip-range=10.2.0.0/24 --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  --root-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --cluster-signing-duration=87600h0m0s"
EOF
  • cluster-cidr: the pod CIDR
  • service-cluster-ip-range: the service CIDR

Manage controller-manager with systemd

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/usr/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start kube-controller-manager

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

kube-scheduler

Generate the kube-scheduler certificate

  • Switch to the working directory
cd ~/TLS/k8s
  • Create the certificate signing request file
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
  • Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Generate the kubeconfig file

KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://10.0.2.7:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Create the configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect --kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig --bind-address=127.0.0.1"
EOF

Manage the scheduler with systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/usr/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start kube-scheduler

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

kubectl

Generate the certificate kubectl uses to connect to the cluster

  • admin-csr.json
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
  • Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

Generate the kubeconfig file

mkdir /root/.kube

KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://10.0.2.7:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Run kubectl

[root@k8s-master k8s]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}
controller-manager   Healthy   ok
scheduler            Healthy   ok

Output like the above means the Master components are running normally.

Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

Node

Install containerd

containerd is installed automatically alongside Docker; if it is missing, install it with yum install -y containerd.io.

  • Create the containerd configuration file

The default containerd configuration (as installed via Docker) has the CRI plugin disabled:

cat /etc/containerd/config.toml | grep disabled_plugins
disabled_plugins = ["cri"]

You can either edit this configuration file directly, or generate a full default configuration with the command below.

containerd config default > /etc/containerd/config.toml
  • Edit the configuration file
vi /etc/containerd/config.toml
-----
Change SystemdCgroup = false to SystemdCgroup = true

Change sandbox_image = "k8s.gcr.io/pause:3.6"
to     sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
-----
  • Restart containerd
# systemctl enable containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
# systemctl restart containerd
  • Check images and containers (a crictl config sketch follows these commands)
crictl images
crictl ps
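crictl may warn that no runtime endpoint is configured; pointing it at the containerd socket removes the warning. This config file is an addition for convenience, not part of the original steps:

cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF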

Preparation

  • Create the working directories on all worker nodes:
mkdir -p /opt/kubernetes/{cfg,ssl,logs} 
  • Copy kubelet and kube-proxy from the master node
scp /root/k8s/kubernetes-v1.24.2/cby/kubernetes/server/bin/kubelet  k8s-node1:/usr/bin/
scp /root/k8s/kubernetes-v1.24.2/cby/kubernetes/server/bin/kube-proxy  k8s-node1:/usr/bin/
scp /root/k8s/kubernetes-v1.24.2/cby/kubernetes/server/bin/kubelet  k8s-node2:/usr/bin/
scp /root/k8s/kubernetes-v1.24.2/cby/kubernetes/server/bin/kube-proxy  k8s-node2:/usr/bin/

kubelet

  • Generate the kubelet bootstrap.kubeconfig

    Because generating bootstrap.kubeconfig requires the CA certificate, run these steps on the master node and then scp the results to the nodes.

    • scp the CA certificate
    scp /opt/kubernetes/ssl/ca.pem k8s-node1:/opt/kubernetes/ssl/ca.pem
    scp /opt/kubernetes/ssl/ca.pem k8s-node2:/opt/kubernetes/ssl/ca.pem
    
    • Generate the bootstrap kubeconfig that kubelet uses when it first joins the cluster
    KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
    
    • apiserver IP:PORT
    KUBE_APISERVER="https://10.0.2.7:6443" 
    
    • Must match the token in token.csv
    TOKEN="c47ffb939f5ca36231d9e3121a252940"
    
    • Generate the kubelet bootstrap kubeconfig file
    kubectl config set-cluster kubernetes \
      --certificate-authority=/opt/kubernetes/ssl/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config set-credentials "kubelet-bootstrap" \
      --token=${TOKEN} \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config set-context default \
      --cluster=kubernetes \
      --user="kubelet-bootstrap" \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
    
    • scp to the nodes
    scp /opt/kubernetes/cfg/bootstrap.kubeconfig k8s-node1:/opt/kubernetes/cfg/bootstrap.kubeconfig
    scp /opt/kubernetes/cfg/bootstrap.kubeconfig k8s-node2:/opt/kubernetes/cfg/bootstrap.kubeconfig
    
  • Configuration parameter file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.2.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

clusterDNS must be an IP inside the service CIDR; by convention its last octet is .2 (10.2.0.2 here).

  • Create the configuration file /opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostname-override=k8s-node1 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet-config.yml --cert-dir=/opt/kubernetes/ssl --container-runtime=remote  --runtime-request-timeout=15m  --container-runtime-endpoint=unix:///run/containerd/containerd.sock  --cgroup-driver=systemd --node-labels=node.kubernetes.io/node='' --feature-gates=IPv6DualStack=true"

Adjust --hostname-override=k8s-node1 to the corresponding node name.

  • Manage kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/usr/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • Start kubelet
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
  • Approve the kubelet certificate requests and join the nodes to the cluster

    • View the kubelet certificate requests
    [root@k8s-master cfg]# kubectl get csr
    NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
    node-csr-eVCzZEg9GokVZI1vpa7vZpQ6PXhCt3sC2Q0058PuTdg   9m43s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
    node-csr-kQo_MV-tD7PByWoK4wNM4p_0GDwiQQ0V7dzwlEV9UUU   95s     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
    
    • Approve the requests
    kubectl certificate approve node-csr-eVCzZEg9GokVZI1vpa7vZpQ6PXhCt3sC2Q0058PuTdg
    kubectl certificate approve node-csr-kQo_MV-tD7PByWoK4wNM4p_0GDwiQQ0V7dzwlEV9UUU
    
    • View the nodes
    [root@rocksrvs01 bin]# kubectl get node
    NAME          STATUS     ROLES    AGE   VERSION
    rockysrvs01   NotReady   <none>   9s    v1.24.2
    

    Note: the nodes show NotReady because the network plugin has not been deployed yet.

kube-proxy

  • Create the certificate (run on the master node)
    • Switch to the working directory
    cd ~/TLS/k8s
    
    • Create the certificate signing request file
    cat > kube-proxy-csr.json << EOF
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF
    
    • Generate the certificate
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
    
    • scp to the nodes
    scp kube-proxy* k8s-node1:~/TLS/k8s/
    scp kube-proxy* k8s-node2:~/TLS/k8s/
    
  • Generate the kube-proxy.kubeconfig file
cd ~/TLS/k8s/

KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://10.0.2.7:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Configuration parameter file
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node1
clusterCIDR: 10.244.0.0/16
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
EOF

Adjust hostnameOverride to the corresponding node name.

  • Create the configuration file
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
  • Manage kube-proxy with systemd
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/usr/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • Start and enable on boot (an ipvs check is sketched after these commands)
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy
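A few ways to confirm kube-proxy really came up in ipvs mode; ipvsadm and the ip_vs kernel modules are assumed to be available (see the kernel note in the Problems section):

lsmod | grep ip_vs                  # the ip_vs modules must be loaded
curl -s 127.0.0.1:10249/proxyMode   # should print: ipvs
ipvsadm -Ln                         # inspect the ipvs virtual-server table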

Deploy the Calico network

About Calico

There are many network add-ons; you only need to deploy one of them, and Calico is recommended.

Calico is a pure layer-3 data center networking solution that supports a wide range of platforms, including Kubernetes and OpenStack.

On each compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) that handles data forwarding, and each vRouter uses the BGP protocol to propagate the routes of the workloads running on it across the whole Calico network.

In addition, the Calico project implements Kubernetes network policy, providing ACL functionality.

Deploy Calico (master)

  • Download Calico
wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate
  • Edit calico.yaml
vim +4434 calico.yaml
...
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
...
  • Deploy Calico
kubectl apply -f calico.yaml
  • Check Calico
kubectl get pods -n kube-system
  • If the Deployment creates no pods, the cause is a controller-manager error; see Problems -> kube-controller-manager below.
  • Once that is fixed, the pods are created successfully (a readiness check is sketched below).
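Once the manifest is applied, the calico-node DaemonSet and the calico-kube-controllers Deployment should become ready and the nodes should flip to Ready. The workload names below are Calico's defaults and may differ between manifest versions:

kubectl -n kube-system get ds calico-node
kubectl -n kube-system get deploy calico-kube-controllers
kubectl get nodes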

Authorize the apiserver to access kubelet

If you run kubectl exec at this point, it fails with a permission error:

[root@k8s-master calico]# kubectl -n kube-system exec -it calico-node-tp4p5 -- sh
Defaulted container "calico-node" out of: calico-node, upgrade-ipam (init), install-cni (init), mount-bpffs (init)
error: unable to upgrade connection: Forbidden (user=kubernetes, verb=create, resource=nodes, subresource=proxy)

Authorization

  • Create apiserver-to-kubelet-rbac.yaml
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
  • apply
kubectl apply -f apiserver-to-kubelet-rbac.yaml
  • kubectl exec now works again.

Deploy the Dashboard

  • Download the yaml
wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard.yaml
  • vim dashboard.yaml
----
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
----

Add the nodePort and type: NodePort fields as shown above.

  • apply
kubectl apply -f dashboard.yaml
  • Check the pod & service
kubectl get pods -n kubernetes-dashboard
kubectl get service -n kubernetes-dashboard
  • Create the user
wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard-user.yaml

kubectl apply -f dashboard-user.yaml

  • Create a token
[root@k8s-master dashboard]# kubectl -n kubernetes-dashboard create token admin-user

Output:

eyJhbGciOiJSUzI1NiIsImtpZCI6IjZsOHYwNVVRMWZGcXR5cGQxaUpIakZyM3RkVmM4ZmFMOXd0eXY2M3dSZ3MifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjYwMTE1OTAxLCJpYXQiOjE2NjAxMTIzMDEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMThhZWFmMjEtYWVlOS00NjY0LWFkOGItMzNhZDQ2ZDZlMzk1In19LCJuYmYiOjE2NjAxMTIzMDEsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.Oz9pyMGUfEvpZfRzL4iX8H0KvRMAQPKE6bBkmu4Av2n9UaOZvIJ6kPBLe9YwSsiX5zBJgs2BUnOMvYkaj9d2nt96kuLnLxSD2o2Z07_8vfWzS0a8IBQIAq6JdOmiMnO9cAdvVMUoouU6DtQcPuShyiVrNLENZ5YBrRdKPe82Ua3PgKLA8unglXiC8mXubvSxvbrVIAOhhlhoMiOox4eESwdM0o3IPeUmRl1klNXcsyv_ilTenvVT-5xOXmT16EF9kdy-MPkhIey8ajgzKi7PRTfp1zFYreLuwrh2rD4lpivs31sdjc1ecmq4uxz00pp8L-cFnFCx7ccsPR_Fzeyjgw
  • Access in a browser
https://10.0.2.7:30001

The browser reports a TLS handshake failure.

  • Regenerate kubernetes-dashboard-certs
    • Delete the secret that was created by default
    kubectl delete secret kubernetes-dashboard-certs  -n kubernetes-dashboard
    
    • Recreate the secret, pointing it at our certificate files
    kubectl create secret generic kubernetes-dashboard-certs -n kubernetes-dashboard --from-file=dashboard.crt=/opt/kubernetes/ssl/server.pem --from-file=dashboard.key=/opt/kubernetes/ssl/server-key.pem
    
    • Delete the dashboard pods so they restart and pick up the new certificates
    kubectl delete pod -n kubernetes-dashboard --all
    
  • Visit again
https://10.0.2.7:30001

It is now accessible.

  • Enter the token from above; if it is no longer valid, recreate the admin-user and create a new token.

Deploy CoreDNS

  • Modify the clusterIP
vi kubernetes-v1.24.2/coredns/coredns.yaml
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.2.0.2
  • apply
kubectl apply -f kubernetes-v1.24.2/coredns/coredns.yaml
  • Check (a DNS resolution test is sketched after this command)
kubectl get pods -n kube-system
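A quick resolution test against the clusterIP configured above; the busybox:1.28 image is an assumption, any image with nslookup will do:

kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default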

Verification

  • Deploy busybox
    Note that for 1.24 the Deployment manifest uses
apiVersion: apps/v1
kind: Deployment
  • Deploy nginx
  • Expose nginx
kubectl expose deploy nginx
  • Exec into busybox and run the following checks (a command sketch follows this list):
    • wget the nginx pod IP (on a different node)
    • wget the nginx service IP
    • wget the nginx service name
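A minimal sketch of the checks listed above; the image names, the sleep command, and the placeholder IPs are assumptions rather than the original's exact manifests:

kubectl create deployment nginx --image=nginx
kubectl expose deploy nginx --port=80
kubectl create deployment busybox --image=busybox:1.28 -- sleep 3600
kubectl get pods -o wide                                        # note the nginx pod IP and node
kubectl exec -it deploy/busybox -- wget -qO- <nginx-pod-ip>     # pod IP on a different node
kubectl exec -it deploy/busybox -- wget -qO- <nginx-service-ip>
kubectl exec -it deploy/busybox -- wget -qO- nginx              # service name, exercises CoreDNS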

Problems

kube-apiserver startup problems

  • context deadline exceeded
    The cause is that kube-apiserver cannot reach etcd. Check etcd and change --etcd-servers to an etcd IP and port that is actually reachable.
  • issuer URL must use https scheme, got: api
    Adjust the flag to --service-account-issuer=https://kubernetes.default.svc.cluster.local.
  • systemctl fails to start the service
    Removing all the line breaks in /opt/kubernetes/cfg/kube-apiserver.conf (collapsing the options onto a single line) allowed it to start.

kube-controller-manager

  • kube-controller-manager fails to start with:
    F0809 17:45:17.818785   18032 controllermanager.go:223] error starting controllers: failed to mark cidr[10.0.2.0/24] at idx [0] as occupied for node: k8s-node2: cidr 10.0.2.0/24 is out the range of cluster cidr 10.244.0.0/24
    
    • The cause is that the controller-manager's cluster CIDR was changed after the node had already been assigned a podCIDR.
    • Check the node's podCIDR
    [root@k8s-master calico]# kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
    10.0.2.0/24
    
    • delete node
    kubectl delete node k8s-node1
    systemctl stop kubelet
    
    • delete crt
    rm -f /opt/kubernetes/ssl/kubelet*
    
    • start kubelet
    systemctl start kubelet
    
    • approve crt
    kubectl get csr
    kubectl certificate approve node-csr-0xiikMdhEcTVtbr7v0Tt5ISBckfq_VRHVTXlE2_50YI
    

kubelet

  • kubelet fails to start with: unknown service runtime.v1alpha2.RuntimeService

    • Modify /etc/containerd/config.toml
    SystemdCgroup=true
    
    • Then restart containerd & kubelet
    systemctl restart containerd
    systemctl restart kubelet
    
  • kubelet registers the node with the wrong --hostname-override, and restarting kubelet does not help

    • The cause is that kubelet keeps using the client certificate issued earlier when talking to kube-apiserver, which produces:
    Unable to register node with API server" err="nodes \"k8s-node2\" is forbidden: node \"rockysrvs01\" is not allowed to modify node \"k8s-node2\"" node="k8s-node2
    
    • Solution

      • Check the certificates on the node
      [root@k8s-node2 ssl]# ls
      ca.pem                                  kubelet.crt
      kubelet-client-2022-08-09-15-15-02.pem  kubelet.key
      kubelet-client-current.pem
      
      • Delete the previously approved certificates
      rm -rf kubelet*
      
      • Restart kubelet
      • Approve the new certificate request
      kubectl certificate approve node-csr-VR-cGkilb6UfkkJFbz7FM30jlycp2IqKk46E192aN1E
      
  • NetworkPluginNotReady

E0809 20:32:23.233077   29697 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

Calico is not installed correctly.

kubectl exec

[root@k8s-master calico]# kubectl -n kube-system exec -it calico-node-tp4p5 -- sh
Defaulted container "calico-node" out of: calico-node, upgrade-ipam (init), install-cni (init), mount-bpffs (init)
Error from server: error dialing backend: dial tcp 10.0.2.5:10250: connect: no route to host

Here kubectl exec is dialing the wrong kubelet IP; check whether the nodes' hosts entries are misconfigured.

kube-proxy

  • Startup failure
"Can't set sysctl, kernel version doesn't satisfy minimum version requirements" sysctl="net/ipv4/vs/conn_reuse_mode" minimumKernelVersion="4.1"

Upgrade the kernel to 5.x or newer (one approach is sketched below).
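One common way to get a 5.x kernel on CentOS 7 is ELRepo's kernel-ml package; the distribution and package choice are assumptions, adapt for your environment:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-ml
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot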

References