k8s offline install package: three steps to install, unbelievably simple
To be honest, the kubeadm code is really quite mediocre; the quality is not very high.
Let's start with a few key points about the core things kubeadm does:
<!--more-->
Code entry point: `cmd/kubeadm/app/cmd/init.go`
I recommend taking a look at cobra.
Find the Run function and walk through the main flow:
```go
// Create certificates (unless an external CA is in use)
if res, _ := certsphase.UsingExternalCA(i.cfg); !res {
	if err := certsphase.CreatePKIAssets(i.cfg); err != nil {
		return err
	}
}

// Create the kubeconfig files
if err := kubeconfigphase.CreateInitKubeConfigFiles(kubeConfigDir, i.cfg); err != nil {
	return err
}

// Create the static pod manifests for the control plane, and a local etcd if no external endpoints are given
controlplanephase.CreateInitStaticPodManifestFiles(manifestDir, i.cfg)
if len(i.cfg.Etcd.Endpoints) == 0 {
	if err := etcdphase.CreateLocalEtcdStaticPodManifestFile(manifestDir, i.cfg); err != nil {
		return fmt.Errorf("error creating local etcd static pod manifest file: %v", err)
	}
}

// Wait for the kubelet and the apiserver to come up
if err := waitForAPIAndKubelet(waiter); err != nil {
	ctx := map[string]string{
		"Error":                  fmt.Sprintf("%v", err),
		"APIServerImage":         images.GetCoreImage(kubeadmconstants.KubeAPIServer, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
		"ControllerManagerImage": images.GetCoreImage(kubeadmconstants.KubeControllerManager, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
		"SchedulerImage":         images.GetCoreImage(kubeadmconstants.KubeScheduler, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
	}
	kubeletFailTempl.Execute(out, ctx)
	return fmt.Errorf("couldn't initialize a Kubernetes cluster")
}

// Taint and label the master node
if err := markmasterphase.MarkMaster(client, i.cfg.NodeName); err != nil {
	return fmt.Errorf("error marking master: %v", err)
}

// Create or update the bootstrap token
if err := nodebootstraptokenphase.UpdateOrCreateToken(client, i.cfg.Token, false, i.cfg.TokenTTL.Duration, kubeadmconstants.DefaultTokenUsages, []string{kubeadmconstants.NodeBootstrapTokenAuthGroup}, tokenDescription); err != nil {
	return fmt.Errorf("error updating or creating token: %v", err)
}

// Install the DNS and kube-proxy addons
if err := dnsaddonphase.EnsureDNSAddon(i.cfg, client); err != nil {
	return fmt.Errorf("error ensuring dns addon: %v", err)
}
if err := proxyaddonphase.EnsureProxyAddon(i.cfg, client); err != nil {
	return fmt.Errorf("error ensuring proxy addon: %v", err)
}
```
My criticism: the code mindlessly runs one procedure straight to the end. If I were writing it, I'd abstract it into interfaces like RenderConf, Save, Run, Clean, etc., and have DNS, kube-proxy, and the other components implement them. Another problem is that the DNS and kube-proxy configs are never rendered out, probably because they aren't static pods. And then there's the bug at join time, covered below.
Certificate creation loops over this pile of functions; we only need to look at one or two of them, the rest are much the same:
```go
certActions := []func(cfg *kubeadmapi.MasterConfiguration) error{
	CreateCACertAndKeyfiles,
	CreateAPIServerCertAndKeyFiles,
	CreateAPIServerKubeletClientCertAndKeyFiles,
	CreateServiceAccountKeyAndPublicKeyFiles,
	CreateFrontProxyCACertAndKeyFiles,
	CreateFrontProxyClientCertAndKeyFiles,
}
```
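The loop driving these is as plain as it sounds; a minimal sketch of the idea (the actual kubeadm helper differs in details):

```go
for _, action := range certActions {
	// each action renders one cert/key pair into cfg.CertificatesDir
	if err := action(cfg); err != nil {
		return fmt.Errorf("error creating PKI assets: %v", err)
	}
}
```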
Root certificate generation:
```go
// returns the root certificate and its private key
func NewCACertAndKey() (*x509.Certificate, *rsa.PrivateKey, error) {
	caCert, caKey, err := pkiutil.NewCertificateAuthority()
	if err != nil {
		return nil, nil, fmt.Errorf("failure while generating CA certificate and key: %v", err)
	}
	return caCert, caKey, nil
}
```
The k8s.io/client-go/util/cert library has two functions, one for generating the key and one for the cert:
```go
key, err := certutil.NewPrivateKey()

config := certutil.Config{
	CommonName: "kubernetes",
}
cert, err := certutil.NewSelfSignedCACert(config, key)
```
We can also fill in some other certificate information in the config:
```go
type Config struct {
	CommonName   string
	Organization []string
	AltNames     AltNames
	Usages       []x509.ExtKeyUsage
}
```
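As an aside, kubeadm leans on Organization for RBAC group membership; a hedged example of the kind of Config used for the admin client certificate (the values reflect kubeadm's defaults as I understand them):

```go
config := certutil.Config{
	CommonName:   "kubernetes-admin",         // becomes the authenticated user name
	Organization: []string{"system:masters"}, // becomes the user's group
	Usages:       []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
}
```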
The private key just wraps functions from the rsa library:

```go
import (
	"crypto/rsa"
	"crypto/x509"
)

func NewPrivateKey() (*rsa.PrivateKey, error) {
	return rsa.GenerateKey(cryptorand.Reader, rsaKeySize)
}
```
It's a self-signed certificate, so the root certificate only carries the CommonName; Organization is effectively unset:
```go
func NewSelfSignedCACert(cfg Config, key *rsa.PrivateKey) (*x509.Certificate, error) {
	now := time.Now()
	tmpl := x509.Certificate{
		SerialNumber: new(big.Int).SetInt64(0),
		Subject: pkix.Name{
			CommonName:   cfg.CommonName,
			Organization: cfg.Organization,
		},
		NotBefore: now.UTC(),
		NotAfter:  now.Add(duration365d * 10).UTC(),
		KeyUsage:  x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
		IsCA: true,
	}

	certDERBytes, err := x509.CreateCertificate(cryptorand.Reader, &tmpl, &tmpl, key.Public(), key)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(certDERBytes)
}
```
After generating them, write them to files:
```go
pkiutil.WriteCertAndKey(pkiDir, baseName, cert, key)

certutil.WriteCert(certificatePath, certutil.EncodeCertPEM(cert))
```
Here the pem library is called to do the encoding:
```go
import "encoding/pem"

func EncodeCertPEM(cert *x509.Certificate) []byte {
	block := pem.Block{
		Type:  CertificateBlockType,
		Bytes: cert.Raw,
	}
	return pem.EncodeToMemory(&block)
}
```
Then let's look at apiserver certificate generation:
```go
caCert, caKey, err := loadCertificateAuthorithy(cfg.CertificatesDir, kubeadmconstants.CACertAndKeyBaseName)

// generate the apiserver certificate from the root certificate
apiCert, apiKey, err := NewAPIServerCertAndKey(cfg, caCert, caKey)
```
At this point AltNames becomes important: every address and domain name that will be used to reach the master must be added, corresponding to the apiServerCertSANs field in the config file. Everything else is no different from the root certificate. A sketch of typical AltNames contents follows the snippet below.
```go
config := certutil.Config{
	CommonName: kubeadmconstants.APIServerCertCommonName,
	AltNames:   *altNames,
	Usages:     []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
}
```
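For illustration, here is roughly what lands in altNames for a typical cluster (all addresses below are made up; whatever you put in apiServerCertSANs is appended to these lists):

```go
import (
	"net"

	certutil "k8s.io/client-go/util/cert"
)

altNames := &certutil.AltNames{
	DNSNames: []string{
		"kubernetes",
		"kubernetes.default",
		"kubernetes.default.svc",
		"kubernetes.default.svc.cluster.local",
		"apiserver.example.com", // an extra SAN, e.g. from apiServerCertSANs
	},
	IPs: []net.IP{
		net.ParseIP("10.96.0.1"),   // the apiserver's service VIP
		net.ParseIP("192.168.0.2"), // the master (or HA virtual) IP
	},
}
```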
You can see that these kubeconfig files get created:
```go
return createKubeConfigFiles(
	outDir,
	cfg,
	kubeadmconstants.AdminKubeConfigFileName,
	kubeadmconstants.KubeletKubeConfigFileName,
	kubeadmconstants.ControllerManagerKubeConfigFileName,
	kubeadmconstants.SchedulerKubeConfigFileName,
)
```
k8s wraps two functions for rendering the config. The difference is whether your kubeconfig file carries a token: if you need a token to enter the dashboard, say, or to call the API, then generate the config with a token.

The generated conf files are basically identical except for things like ClientName, so the encoded client certificates differ too: the ClientName is embedded in the certificate, and k8s extracts it to use as the user.

So here's the key point: when we do multi-tenancy we should generate credentials the same way, and then bind a role to that tenant; see the sketch after the next snippet.
```go
// with a token:
return kubeconfigutil.CreateWithToken(
	spec.APIServer,
	"kubernetes",
	spec.ClientName,
	certutil.EncodeCertPEM(spec.CACert),
	spec.TokenAuth.Token,
), nil

// with client certificates:
return kubeconfigutil.CreateWithCerts(
	spec.APIServer,
	"kubernetes",
	spec.ClientName,
	certutil.EncodeCertPEM(spec.CACert),
	certutil.EncodePrivateKeyPEM(clientKey),
	certutil.EncodeCertPEM(clientCert),
), nil
```
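Following the multi-tenant idea above, here's a minimal sketch built only on client-go (the tenant name, group, paths, and the helper itself are all illustrative, not kubeadm code): sign a client cert whose CommonName is the tenant, then write a kubeconfig around it.

```go
import (
	"crypto/rsa"
	"crypto/x509"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	certutil "k8s.io/client-go/util/cert"
)

// writeTenantKubeConfig is a hypothetical helper: it issues a client cert for
// `tenant` off the cluster CA and writes a ready-to-use kubeconfig to `path`.
func writeTenantKubeConfig(caCert *x509.Certificate, caKey *rsa.PrivateKey, tenant, server, path string) error {
	key, err := certutil.NewPrivateKey()
	if err != nil {
		return err
	}
	cert, err := certutil.NewSignedCert(certutil.Config{
		CommonName:   tenant,              // k8s treats this as the user name
		Organization: []string{"tenants"}, // and this as the group, handy for RBAC
		Usages:       []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}, key, caCert, caKey)
	if err != nil {
		return err
	}
	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["kubernetes"] = &clientcmdapi.Cluster{
		Server:                   server,
		CertificateAuthorityData: certutil.EncodeCertPEM(caCert),
	}
	cfg.AuthInfos[tenant] = &clientcmdapi.AuthInfo{
		ClientCertificateData: certutil.EncodeCertPEM(cert),
		ClientKeyData:         certutil.EncodePrivateKeyPEM(key),
	}
	cfg.Contexts["default"] = &clientcmdapi.Context{Cluster: "kubernetes", AuthInfo: tenant}
	cfg.CurrentContext = "default"
	return clientcmd.WriteToFile(*cfg, path)
}
```

After that, bind a Role or ClusterRole to the `tenants` group (or to the individual CommonName) and the tenant is set up.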
Then it's just a matter of filling in the Config struct and finally writing it to a file (omitted):

```go
import clientcmdapi "k8s.io/client-go/tools/clientcmd/api"

return &clientcmdapi.Config{
	Clusters: map[string]*clientcmdapi.Cluster{
		clusterName: {
			Server: serverURL,
			CertificateAuthorityData: caCert,
		},
	},
	Contexts: map[string]*clientcmdapi.Context{
		contextName: {
			Cluster:  clusterName,
			AuthInfo: userName,
		},
	},
	AuthInfos:      map[string]*clientcmdapi.AuthInfo{},
	CurrentContext: contextName,
}
```
This returns the pod structs for the apiserver, controller-manager, and scheduler:
```go
specs := GetStaticPodSpecs(cfg, k8sVersion)

staticPodSpecs := map[string]v1.Pod{
	kubeadmconstants.KubeAPIServer: staticpodutil.ComponentPod(v1.Container{
		Name:          kubeadmconstants.KubeAPIServer,
		Image:         images.GetCoreImage(kubeadmconstants.KubeAPIServer, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
		Command:       getAPIServerCommand(cfg, k8sVersion),
		VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeAPIServer)),
		LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeAPIServer, int(cfg.API.BindPort), "/healthz", v1.URISchemeHTTPS),
		Resources:     staticpodutil.ComponentResources("250m"),
		Env:           getProxyEnvVars(),
	}, mounts.GetVolumes(kubeadmconstants.KubeAPIServer)),
	kubeadmconstants.KubeControllerManager: staticpodutil.ComponentPod(v1.Container{
		Name:          kubeadmconstants.KubeControllerManager,
		Image:         images.GetCoreImage(kubeadmconstants.KubeControllerManager, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
		Command:       getControllerManagerCommand(cfg, k8sVersion),
		VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeControllerManager)),
		LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeControllerManager, 10252, "/healthz", v1.URISchemeHTTP),
		Resources:     staticpodutil.ComponentResources("200m"),
		Env:           getProxyEnvVars(),
	}, mounts.GetVolumes(kubeadmconstants.KubeControllerManager)),
	kubeadmconstants.KubeScheduler: staticpodutil.ComponentPod(v1.Container{
		Name:          kubeadmconstants.KubeScheduler,
		Image:         images.GetCoreImage(kubeadmconstants.KubeScheduler, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
		Command:       getSchedulerCommand(cfg),
		VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeScheduler)),
		LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeScheduler, 10251, "/healthz", v1.URISchemeHTTP),
		Resources:     staticpodutil.ComponentResources("100m"),
		Env:           getProxyEnvVars(),
	}, mounts.GetVolumes(kubeadmconstants.KubeScheduler)),
}

// get the image for a specific version
func GetCoreImage(image, repoPrefix, k8sVersion, overrideImage string) string {
	if overrideImage != "" {
		return overrideImage
	}
	kubernetesImageTag := kubeadmutil.KubernetesVersionToImageTag(k8sVersion)
	etcdImageTag := constants.DefaultEtcdVersion
	etcdImageVersion, err := constants.EtcdSupportedVersion(k8sVersion)
	if err == nil {
		etcdImageTag = etcdImageVersion.String()
	}
	return map[string]string{
		constants.Etcd:                  fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "etcd", runtime.GOARCH, etcdImageTag),
		constants.KubeAPIServer:         fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-apiserver", runtime.GOARCH, kubernetesImageTag),
		constants.KubeControllerManager: fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-controller-manager", runtime.GOARCH, kubernetesImageTag),
		constants.KubeScheduler:         fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-scheduler", runtime.GOARCH, kubernetesImageTag),
	}[image]
}

// then the pod is simply written out to a manifest file
staticpodutil.WriteStaticPodToDisk(componentName, manifestDir, spec)
```
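This naming scheme is exactly what an offline package has to mirror when preloading images. Assuming amd64 and an illustrative repo prefix and version, the lookup would yield something like:

```go
// illustrative inputs; the repo prefix and version here are assumptions
images.GetCoreImage(constants.KubeAPIServer, "gcr.io/google_containers", "v1.9.1", "")
// => "gcr.io/google_containers/kube-apiserver-amd64:v1.9.1"
```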
Creating the etcd manifest works the same way; no need to belabor it.
The "couldn't initialize a Kubernetes cluster" error above is very easy to run into; seeing it basically means the kubelet didn't come up. We need to check: selinux, swap, and whether the Cgroup driver is consistent:
```sh
setenforce 0 && swapoff -a && systemctl restart kubelet
```

If that doesn't help, make sure the kubelet's Cgroup driver matches docker's: `docker info | grep Cg`
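If they differ, then (assuming a systemd-managed kubelet installed by the kubeadm packages) the usual fix is to change the kubelet's `--cgroup-driver` flag in its systemd drop-in, commonly `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`, to match docker, then `systemctl daemon-reload && systemctl restart kubelet`.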
```go
go func(errC chan error, waiter apiclient.Waiter) {
	// This goroutine can only make kubeadm init fail. If this check succeeds, it won't do anything special
	if err := waiter.WaitForHealthyKubelet(40*time.Second, "http://localhost:10255/healthz"); err != nil {
		errC <- err
	}
}(errorChan, waiter)

go func(errC chan error, waiter apiclient.Waiter) {
	// This goroutine can only make kubeadm init fail. If this check succeeds, it won't do anything special
	if err := waiter.WaitForHealthyKubelet(60*time.Second, "http://localhost:10255/healthz/syncloop"); err != nil {
		errC <- err
	}
}(errorChan, waiter)
```
This is where I first noticed CoreDNS:
```go
if features.Enabled(cfg.FeatureGates, features.CoreDNS) {
	return coreDNSAddon(cfg, client, k8sVersion)
}
return kubeDNSAddon(cfg, client, k8sVersion)
```
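So in this era CoreDNS is opt-in via a feature gate; assuming 1.9-era flags, that means running `kubeadm init --feature-gates=CoreDNS=true`.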
Then the CoreDNS yaml config template is written directly in the code, at `/app/phases/addons/dns/manifests.go`:
```go
CoreDNSDeployment = `
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      serviceAccountName: coredns
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - key: {{ .MasterTaintKey }}
...
```
Then it renders the template and finally calls the k8s API to create the objects. This creation approach is worth learning from, even if it's a bit clumsy; this part is written far worse than kubectl.
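The rendering step is plain Go templating; a minimal sketch of the idea (the helper below is mine, not kubeadm's, and the data fields are illustrative):

```go
import (
	"bytes"
	"text/template"
)

// renderManifest is a hypothetical stand-in for kubeadm's own template util:
// it fills the {{ .Field }} placeholders of a manifest template from data.
func renderManifest(tmpl string, data interface{}) ([]byte, error) {
	t, err := template.New("manifest").Parse(tmpl)
	if err != nil {
		return nil, err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// e.g. configBytes, err := renderManifest(CoreDNSDeployment,
//     struct{ MasterTaintKey string }{"node-role.kubernetes.io/master"})
```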
```go
coreDNSConfigMap := &v1.ConfigMap{}
if err := kuberuntime.DecodeInto(legacyscheme.Codecs.UniversalDecoder(), configBytes, coreDNSConfigMap); err != nil {
	return fmt.Errorf("unable to decode CoreDNS configmap %v", err)
}

// Create the ConfigMap for CoreDNS or update it in case it already exists
if err := apiclient.CreateOrUpdateConfigMap(client, coreDNSConfigMap); err != nil {
	return err
}

coreDNSClusterRoles := &rbac.ClusterRole{}
if err := kuberuntime.DecodeInto(legacyscheme.Codecs.UniversalDecoder(), []byte(CoreDNSClusterRole), coreDNSClusterRoles); err != nil {
	return fmt.Errorf("unable to decode CoreDNS clusterroles %v", err)
}
...
```
Worth mentioning here: the kube-proxy configmap really should take the apiserver address as a parameter and allow customization, because when doing high availability you need to specify a virtual IP, and having to modify it by hand is a pain.
kube-proxy is much the same, so I won't go into it. If you want to change it, edit: `app/phases/addons/proxy/manifests.go`
kubeadm join is fairly simple and can be summed up in one sentence: fetch the cluster info and create a kubeconfig (how that's created was already covered for kubeadm init), carrying the token so kubeadm has permission to pull it:
```go
return https.RetrieveValidatedClusterInfo(cfg.DiscoveryFile)
```

The cluster info contents:

```go
type Cluster struct {
	// LocationOfOrigin indicates where this object came from. It is used for round tripping config post-merge, but never serialized.
	LocationOfOrigin string
	// Server is the address of the kubernetes cluster (https://hostname:port).
	Server string `json:"server"`
	// InsecureSkipTLSVerify skips the validity check for the server's certificate. This will make your HTTPS connections insecure.
	// +optional
	InsecureSkipTLSVerify bool `json:"insecure-skip-tls-verify,omitempty"`
	// CertificateAuthority is the path to a cert file for the certificate authority.
	// +optional
	CertificateAuthority string `json:"certificate-authority,omitempty"`
	// CertificateAuthorityData contains PEM-encoded certificate authority certificates. Overrides CertificateAuthority
	// +optional
	CertificateAuthorityData []byte `json:"certificate-authority-data,omitempty"`
	// Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields
	// +optional
	Extensions map[string]runtime.Object `json:"extensions,omitempty"`
}

return kubeconfigutil.CreateWithToken(
	clusterinfo.Server,
	"kubernetes",
	TokenUser,
	clusterinfo.CertificateAuthorityData,
	cfg.TLSBootstrapToken,
), nil
```
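When there's no discovery file, the token-based path pulls the same payload from the `cluster-info` ConfigMap in kube-public (the real code additionally validates a JWS signature over it with the token). A hedged client-go sketch of just the fetch:

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func fetchClusterInfo(client kubernetes.Interface) (*clientcmdapi.Config, error) {
	cm, err := client.CoreV1().ConfigMaps("kube-public").Get("cluster-info", metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	// the "kubeconfig" key holds a kubeconfig with only the cluster half filled in
	return clientcmd.Load([]byte(cm.Data["kubeconfig"]))
}
```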
CreateWithToken was covered above, so I won't repeat it. With that we can generate the kubelet config file, and then it's just a matter of starting the kubelet.
The problem with kubeadm join is that when rendering the config it ignores the apiserver address passed on the command line and uses the address from clusterinfo instead. That's bad for high availability: we may pass in a virtual IP, but the config still ends up with the apiserver's own address.
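Until that's fixed, a crude workaround sketch: rewrite the server field of the generated kubeconfig yourself (the helper, path, and VIP below are all illustrative):

```go
import "k8s.io/client-go/tools/clientcmd"

// pointAtVIP is a hypothetical helper: it loads a kubeconfig, points every
// cluster entry at the HA virtual IP, and writes the file back.
func pointAtVIP(path, vip string) error {
	cfg, err := clientcmd.LoadFromFile(path) // e.g. /etc/kubernetes/kubelet.conf
	if err != nil {
		return err
	}
	for _, cluster := range cfg.Clusters {
		cluster.Server = vip // e.g. "https://10.1.1.100:6443"
	}
	return clientcmd.WriteToFile(*cfg, path)
}
```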
Follow sealyun; for discussion you can join QQ group: 98488045