Deploying Kubernetes 1.28.2

Author: 张小明 · 2025/12/25 11:24:05

1. Base Environment

1.1 Host configuration

IP               Hostname      OS          Kernel
192.168.133.128  harbor        CentOS 7.9  3.10.0-1160
192.168.133.134  k8s-master01  Anolis 7.9  5.10.222-1
192.168.133.135  k8s-node01    Anolis 7.9  5.10.222-1
192.168.133.136  k8s-node02    Anolis 7.9  5.10.222-1

1.2 Hostnames and hosts resolution

Set each machine's hostname:
# hostnamectl set-hostname harbor
# hostnamectl set-hostname k8s-master01
# hostnamectl set-hostname k8s-node01
# hostnamectl set-hostname k8s-node02

Add the entries on all nodes:
# cat >> /etc/hosts <<EOF
192.168.133.128 harbor
192.168.133.134 k8s-master01
192.168.133.135 k8s-node01
192.168.133.136 k8s-node02
EOF

1.3 Disable the firewall and SELinux

On all nodes:
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0
# sed -i '/^SELINUX=/ c SELINUX=disabled' /etc/selinux/config

1.4 Install base packages

On all nodes:
# yum install -y wget tree bash-completion lrzsz psmisc net-tools vim chrony

1.5 Upgrade the kernel

On all k8s nodes:
# wget https://dl.lamp.sh/kernel/el7/kernel-ml-5.10.222-1.el7.x86_64.rpm
# wget https://dl.lamp.sh/kernel/el7/kernel-ml-devel-5.10.222-1.el7.x86_64.rpm
# rpm -ivh kernel-ml-*
# grub2-set-default 0
List the boot menu entries to confirm that index 0 is the new kernel:
# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
# grub2-mkconfig -o /boot/grub2/grub.cfg

1.6 Configure time synchronization

On all nodes, edit /etc/chrony.conf: comment out the default server lines (in vim: :3,6 s/^/#) and add:
server ntp1.aliyun.com iburst
Restart and verify:
# systemctl restart chronyd
# chronyc sources
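The vim edit above can also be done non-interactively. A sketch with sed, demonstrated on a scratch copy of the config; the two default `server` lines below match CentOS 7's stock chrony.conf and are an assumption. On a real node, point CONF at /etc/chrony.conf and restart chronyd afterwards.

```shell
# Scratch copy standing in for /etc/chrony.conf.
CONF=$(mktemp)
printf 'server 0.centos.pool.ntp.org iburst\nserver 1.centos.pool.ntp.org iburst\n' > "$CONF"

# Comment out every default server/pool line, then add the Aliyun NTP server.
sed -i 's/^\(server\|pool\) /# &/' "$CONF"
echo 'server ntp1.aliyun.com iburst' >> "$CONF"
```

On a real node the mktemp/printf lines are replaced by CONF=/etc/chrony.conf, followed by `systemctl restart chronyd`.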

1.7 Disable the swap partition

On all nodes:
# swapoff -a && sed -i 's/.*swap.*/#&/' /etc/fstab && free -h

1.8 Set kernel parameters and reload

On all k8s nodes:
# cat >> /etc/sysctl.d/k8s.conf <<EOF
vm.swappiness=0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# modprobe br_netfilter && modprobe overlay && sysctl -p /etc/sysctl.d/k8s.conf
Note: modprobe does not persist across the reboot in step 1.10; list br_netfilter and overlay in /etc/modules-load.d/k8s.conf so they are loaded again at boot.

1.9 Configure ipvs

On all k8s nodes:
# yum install -y ipset ipvsadm
# cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
# chmod +x /etc/sysconfig/modules/ipvs.modules && /bin/bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
(On the 5.10 kernel, nf_conntrack_ipv4 has been merged into nf_conntrack, so grep for nf_conntrack rather than nf_conntrack_ipv4.)

1.10 Reboot

# shutdown -r now

2. Harbor Preparation

2.1 Offline Docker installation
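The original leaves this section empty. One common offline approach is the static binary bundle; a minimal sketch, assuming the tgz was fetched on a connected machine first (the version, URL, and unit contents below are illustrative, not from the original):

```shell
# Fetch on a connected machine, copy over, then on the harbor host:
#   tar xf docker-24.0.7.tgz && cp docker/* /usr/local/bin/
# (static bundles live under https://download.docker.com/linux/static/stable/x86_64/)

# A minimal systemd unit for the static dockerd. Written to a scratch
# directory here; on a real host use /etc/systemd/system.
UNIT_DIR=$(mktemp -d)
cat > "$UNIT_DIR/docker.service" <<'EOF'
[Unit]
Description=Docker Application Container Engine
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/dockerd
Restart=always
LimitNOFILE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
# Then: systemctl daemon-reload && systemctl enable --now docker
```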

2.2 Offline Harbor installation
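This section is also empty in the original. The usual offline-installer flow is sketched below with an assumed Harbor version; before running install.sh, harbor.yml must point at this host. The hostname edit is demonstrated on a scratch stand-in for the template:

```shell
# Typical offline flow (version is an assumption):
#   wget https://github.com/goharbor/harbor/releases/download/v2.9.0/harbor-offline-installer-v2.9.0.tgz
#   tar xf harbor-offline-installer-v2.9.0.tgz && cd harbor
#   cp harbor.yml.tmpl harbor.yml   # then set hostname, comment the https: block
#   ./install.sh

# The hostname edit, shown on a scratch stand-in for harbor.yml:
YML=$(mktemp)
printf 'hostname: reg.mydomain.com\nhttp:\n  port: 80\n' > "$YML"
sed -i 's/^hostname: .*/hostname: 192.168.133.128/' "$YML"
```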

3. k8s Node Preparation

3.1 Edit daemon.json

# cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "data-root": "/data/docker",
  "insecure-registries": ["192.168.133.128"],
  "log-opts": {"max-size": "300m", "max-file": "5"},
  "max-concurrent-downloads": 3,
  "max-concurrent-uploads": 5,
  "live-restore": true
}
EOF
# systemctl daemon-reload
# systemctl restart docker
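A malformed daemon.json keeps Docker from starting, so it is worth validating the file before the restart. A quick check with Python's json module, shown against a scratch copy of the JSON above:

```shell
# Write the JSON from above to a scratch file and validate it.
J=$(mktemp)
cat > "$J" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "data-root": "/data/docker",
  "insecure-registries": ["192.168.133.128"],
  "log-opts": {"max-size": "300m", "max-file": "5"},
  "max-concurrent-downloads": 3,
  "max-concurrent-uploads": 5,
  "live-restore": true
}
EOF
python3 -c "import json,sys; json.load(open(sys.argv[1]))" "$J" && echo "daemon.json OK"
```

On a real node, run the python3 line against /etc/docker/daemon.json before `systemctl restart docker`.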

4. CRI Environment (cri-dockerd)

4.1 Download and check the version

On all k8s nodes:
# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.21/cri-dockerd-0.3.21.amd64.tgz
# tar xf cri-dockerd-0.3.21.amd64.tgz
# mv cri-dockerd/cri-dockerd /usr/local/bin
# cri-dockerd --version

4.2 Configure the services

Create the cri-dockerd service:
# cat > /etc/systemd/system/cri-dockerd.service <<'EOF'
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd \
  --pod-infra-container-image=192.168.133.128/google_containers/pause:3.9 \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin \
  --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock \
  --docker-endpoint=unix:///var/run/docker.sock \
  --cri-dockerd-root-directory=/data/docker
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
Adjust --pod-infra-container-image to your own registry and --cri-dockerd-root-directory to your own Docker data root. Quoting the heredoc delimiter ('EOF') keeps the shell from expanding $MAINPID while writing the file.

Create the cri-dockerd socket:
# cat > /etc/systemd/system/cri-dockerd.socket <<'EOF'
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-dockerd.service

[Socket]
ListenStream=/var/run/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF
# systemctl daemon-reload
# systemctl enable cri-dockerd.service --now

5. Kubernetes Deployment

On all k8s nodes:
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# yum list --showduplicates kubeadm | sort -Vr | head -n 10
# yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2 --disableexcludes=kubernetes
Set the cgroup driver in /etc/sysconfig/kubelet:
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
# systemctl enable kubelet

6. Cluster Initialization

6.1 Initialize the master node

List the images used for initialization:
# kubeadm config images list --kubernetes-version=1.28.2
# kubeadm init --kubernetes-version=1.28.2 \
  --apiserver-advertise-address=192.168.133.134 \
  --image-repository 192.168.133.128/google_containers \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.224.0.0/16 \
  --ignore-preflight-errors=Swap \
  --cri-socket=unix:///var/run/cri-dockerd.sock
Output:
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.133.134]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.133.134 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.133.134 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.001787 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 199uo0.2lrc26dxj31sdvrw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.133.134:6443 --token 199uo0.2lrc26dxj31sdvrw \
        --discovery-token-ca-cert-hash sha256:cd627e4a9cdf397a74c71619df219570f3f0cb462ed32ba50f63b327d11330ae

Then run:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

6.2 Join the worker nodes

# kubeadm join 192.168.133.134:6443 --token 199uo0.2lrc26dxj31sdvrw \
  --discovery-token-ca-cert-hash sha256:cd627e4a9cdf397a74c71619df219570f3f0cb462ed32ba50f63b327d11330ae \
  --cri-socket=unix:///var/run/cri-dockerd.sock
Note: the join command printed at the end of kubeadm init fails on the nodes because it cannot find the container runtime socket, so append --cri-socket=unix:///var/run/cri-dockerd.sock as shown above. If the bootstrap token has expired (the default TTL is 24 hours), print a fresh join command on the master:
# kubeadm token create --print-join-command

6.3 Configure the pod network (Calico)

# wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/calico.yaml --no-check-certificate
Point the images at the local registry, then confirm:
# grep image: calico.yaml
          image: 192.168.133.128/calico/cni:v3.27.3
          image: 192.168.133.128/calico/cni:v3.27.3
          image: 192.168.133.128/calico/node:v3.27.3
          image: 192.168.133.128/calico/node:v3.27.3
          image: 192.168.133.128/calico/kube-controllers:v3.27.3
# vim calico.yaml
Uncomment CALICO_IPV4POOL_CIDR and set it to the pod network passed to kubeadm init:
            - name: CALICO_IPV4POOL_CIDR
              value: "10.224.0.0/16"
Apply the manifest:
# kubectl apply -f calico.yaml
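The two edits above (image prefix and pool CIDR) can be scripted instead of done in vim. A sketch that assumes the upstream manifest uses docker.io/calico/ image references and the stock commented-out CIDR block, demonstrated on a scratch excerpt of calico.yaml:

```shell
# Scratch excerpt standing in for the relevant lines of calico.yaml.
Y=$(mktemp)
cat > "$Y" <<'EOF'
          image: docker.io/calico/cni:v3.27.3
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF

# Rewrite the image prefix, uncomment the pool variable, set the pod CIDR.
sed -i -e 's|docker.io/calico/|192.168.133.128/calico/|' \
       -e 's|# \(- name: CALICO_IPV4POOL_CIDR\)|\1|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "10.224.0.0/16"|' "$Y"
```

On a real node, run the same sed against the downloaded calico.yaml and review it with `grep image:` before applying.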

6.4 Verify

Check the nodes:
# kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
k8s-master01   Ready    control-plane   9m35s   v1.28.2
k8s-node01     Ready    <none>          6m42s   v1.28.2
k8s-node02     Ready    <none>          6m39s   v1.28.2
Check the system pods:
# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-65cc788d67-hzr5w   1/1     Running   0          107s
calico-node-2fzt2                          1/1     Running   0          107s
calico-node-46zsh                          1/1     Running   0          107s
calico-node-qwchf                          1/1     Running   0          107s
coredns-78dbbd7d4d-bcnmz                   1/1     Running   0          10m
coredns-78dbbd7d4d-hc6rr                   1/1     Running   0          10m
etcd-k8s-master01                          1/1     Running   0          10m
kube-apiserver-k8s-master01                1/1     Running   0          10m
kube-controller-manager-k8s-master01       1/1     Running   0          10m
kube-proxy-cspjf                           1/1     Running   0          7m50s
kube-proxy-fxkc5                           1/1     Running   0          10m
kube-proxy-gwc7d                           1/1     Running   0          7m53s
kube-scheduler-k8s-master01                1/1     Running   0          10m

7. Configure Command Completion

# echo "source <(kubectl completion bash)" >> ~/.bashrc && echo "source <(kubeadm completion bash)" >> ~/.bashrc && source ~/.bashrc

8. Deploy nginx to Verify the Cluster

# vim nginx.yaml
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-web
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: 192.168.133.128/nginx/nginx:1.28.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
    protocol: TCP
  type: NodePort
  selector:
    name: nginx
# kubectl apply -f nginx.yaml
# kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
nginx-web-8zbrb   1/1     Running   0          11s   10.224.85.193   k8s-node01   <none>           <none>
nginx-web-cq5tp   1/1     Running   0          11s   10.224.58.196   k8s-node02   <none>           <none>
# kubectl get svc -o wide
NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE   SELECTOR
kubernetes               ClusterIP   10.96.0.1     <none>        443/TCP        17m   <none>
nginx-service-nodeport   NodePort    10.111.12.8   <none>        80:30001/TCP   33s   name=nginx
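ReplicationController still works on 1.28 but is a legacy API. A sketch of the equivalent with the current apps/v1 Deployment, mirroring the spec above (same assumed local-registry image):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-web
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: 192.168.133.128/nginx/nginx:1.28.0
        ports:
        - containerPort: 80
```

The Service manifest above works unchanged, since it selects on the same `name: nginx` label.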