Exploring OpenShift 4.1 (continuously updated)

Since OpenShift 4.1 discourages logging into the cluster hosts directly, many operations have to be done from an external Client VM. Of course, colleagues running RHEL worker nodes can keep doing things the old way.

Here I'll note down some frequently encountered issues:

  • How to find the password

After the 4.1 cluster finishes installing, the installer prints a message like this:

INFO Creating infrastructure resources...         *********************************************************************************************
INFO Waiting up to 30m0s for the Kubernetes API at https://api.cluster-8447.sandbox.opentlc.com:6443... 
INFO API v1.13.4+3a25c9b up                       
INFO Waiting up to 30m0s for bootstrapping to complete... 
INFO Destroying the bootstrap resources...        
INFO Waiting up to 30m0s for the cluster at https://api.cluster.sandbox.opentlc.com:6443 to initialize... 
INFO Waiting up to 10m0s for the openshift-console route to be created... 
INFO Install complete!                            
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/cluster/auth/kubeconfig' 
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.cluster.sandbox.opentlc.com 
INFO Login to the console with user: kubeadmin, password: TyCzM-ShJPQ-cgepT-dkDwq 

Be sure to copy this out... but what if you didn't? Where else can you find it?

In the install directory (named after the cluster) there is a file called .openshift_install.log, where it can be found.
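For example, a quick way to fish it back out (the install directory path here is just an assumption; use your own):

grep 'password' ~/cluster/.openshift_install.log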

 

  • Setting up cluster access

export KUBECONFIG=$HOME/cluster-${GUID}/auth/kubeconfig
echo "export KUBECONFIG=$HOME/cluster-${GUID}/auth/kubeconfig" >>$HOME/.bashrc

 

  • Pushing images to the internal image registry

Expose the image-registry route. By default no route is exposed; only the image-registry.openshift-image-registry.svc service is available:

[root@clientvm 0 ~]# oc get svc -n openshift-image-registry
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
image-registry   ClusterIP   172.30.134.180   <none>        5000/TCP   5h2m

 

oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge

Log in with Podman:

oc login -u kubeadmin

HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
podman login -u kubeadmin -p $(oc whoami -t) --tls-verify=false $HOST

I threw together a quick Dockerfile.
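The Dockerfile itself isn't shown in this post; purely as a hypothetical sketch, assuming a locally unpacked Tomcat on top of the openjdk:8-jdk base that shows up in podman images below:

cat > Dockerfile <<'EOF'
# Hypothetical example -- the actual Dockerfile is not shown here
FROM docker.io/library/openjdk:8-jdk
# Copy a locally unpacked Tomcat distribution into the image (assumed layout)
COPY apache-tomcat/ /usr/local/tomcat/
EXPOSE 8080
# Runs as root, consistent with the oc new-app warning further down
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
EOF

Then the build, tagged directly with the internal registry route: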

podman build -t default-route-openshift-image-registry.apps.cluster-8447.sandbox452.opentlc.com/myproject/mytomcat:slim .

 

[root@clientvm 127 ~/cluster-8447]# podman images
REPOSITORY                                                                                           TAG      IMAGE ID       CREATED              SIZE
default-route-openshift-image-registry.apps.cluster-8447.sandbox452.opentlc.com/myproject/mytomcat   slim     ec32b2cdbea2   About a minute ago   518 MB
<none>                                                                                               <none>   0426c1689356   5 minutes ago        500 MB
docker.io/library/openjdk                                                                            8-jdk    08ded5f856cc   6 days ago           500 MB

 

Then push the image; remember to use --tls-verify=false:

[root@clientvm 125 ~]# podman push default-route-openshift-image-registry.apps.cluster-d60b.sandbox509.opentlc.com/myproject/mytomcat:slim --tls-verify=false 
Getting image source signatures
Copying blob ea23cfa0bea9 done
Copying blob 2bf534399aca done
Copying blob eb25e0278d41 done
Copying blob 46ff59048438 done
Copying blob f613cd1e50cc done
Copying blob 1c95c77433e8 done
Copying blob 6d520b2e1077 done
Copying config 7670309228 done
Writing manifest to image destination
Storing signatures

After the push, you can see the imagestream.
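It can also be checked from the CLI, e.g. (myproject is the namespace pushed to above):

oc get is -n myproject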

 

Create the application:

[root@clientvm 0 ~/cluster-8447]# oc new-app mytomcat:slim
--> Found image ec32b2c (6 minutes old) in image stream "myproject/mytomcat" under tag "slim" for "mytomcat:slim"

    * This image will be deployed in deployment config "mytomcat"
    * Port 8080/tcp will be load balanced by service "mytomcat"
      * Other containers can access this service through the hostname "mytomcat"
    * WARNING: Image "myproject/mytomcat:slim" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    deploymentconfig.apps.openshift.io "mytomcat" created
    service "mytomcat" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/mytomcat' 
    Run 'oc status' to view your app.
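To reach the app from outside, expose the service as the output suggests, then look up the route:

oc expose svc/mytomcat
oc get route mytomcat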

 

  • Adding users

User authentication in OpenShift 4.1 is also implemented with an Operator. A big difference from 3.11 is that 3.11 configured it under master-config.yaml, with HTPasswd as the default, while 4.x ships with no identity provider at all; you have to configure one through the authentication CR.

Under Cluster Settings, in Global Configuration, you can find the OAuth item.

Click into it and you can see that Identity Providers is empty.

By default only kubeadmin can log in. To add users, you first need to create a CR (Custom Resource).

If we stick with the old HTPasswd approach, the steps are as follows:

1. On the client, create a users.htpasswd file and write a user into it:

htpasswd -c -B -b users.htpasswd admin welcome1

To add more users, use:

htpasswd -b users.htpasswd eric welcome1
htpasswd -b users.htpasswd alice welcome1

2. Create a secret under openshift-config:

oc create secret generic htpass-secret --from-file=htpasswd=/root/users.htpasswd -n openshift-config

If you add more users to the file later, update the secret with:

oc create secret generic htpass-secret --from-file=htpasswd=/root/users.htpasswd -n openshift-config --dry-run -o yaml | oc apply -f -

When done you can see this secret under openshift-config; choose Edit Secret to see the usernames it contains.
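The same check works from the CLI by decoding the htpasswd data out of the secret:

oc get secret htpass-secret -n openshift-config -o jsonpath='{.data.htpasswd}' | base64 -d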

3. Update the CR with a YAML file. This step can also be done directly in the console.

[root@clientvm 0 ~]# cat htpass.yaml 
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret

The cluster OAuth CR already exists, so update it via apply:

[root@clientvm 0 ~]# oc apply -f htpass.yaml 
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
oauth.config.openshift.io/cluster configured

Afterwards you can see that the OAuth config now includes my_htpasswd_provider.

Check the pod status (in the openshift-authentication project); if the pods haven't been rolled out with the new config, delete them manually so it reloads.
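Roughly like this (deleting the pods is safe; the authentication operator recreates them):

oc get pods -n openshift-authentication
oc delete pods --all -n openshift-authentication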

 

Take a look with oc get users... why is there nothing at all? Here's a gotcha: only users who have actually logged in show up, so just go log in:

[root@clientvm 0 ~]# oc login -u eric
Authentication required for https://api.cluster-8447.sandbox452.opentlc.com:6443 (openshift)
Username: eric
Password: 
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

Switch back to the kubeadmin user and now you can see them:

[root@clientvm 0 ~]# oc get users
NAME    UID                                    FULL NAME   IDENTITIES
admin   463b2706-c3d9-11e9-b6ad-0a580a81001f               my_htpasswd_provider:admin
alice   d73b3e6f-c3db-11e9-ba6d-0a580a80001a               my_htpasswd_provider:alice
eric    4c8b7952-c3de-11e9-ab5a-0a580a82001b               my_htpasswd_provider:eric

Set a user as cluster administrator:

oc adm policy add-cluster-role-to-user cluster-admin admin
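A quick way to verify, assuming you then log in as admin:

oc login -u admin
oc auth can-i '*' '*'
# should now answer: yes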

Log out in the Console.

Click and choose my_htpasswd_provider. Be sure to pick this one; if you pick the one above it, it won't let you log in. Then log in with your username and you're set.
