JBoss Data Grid 7.2 Quick Start on OpenShift

To run it in containers, we first need to understand how it runs in a traditional environment, so that is where we start.

 

First go to http://access.redhat.com and download the media, mainly jboss-datagrid-7.2.0-server.zip and jboss-datagrid-7.2.0-tomcat8-session-client.zip.

The former is used to start JBoss Data Grid; the latter lets a tomcat client connect to and operate on the grid in client-server mode.

 

1. Installation

Unzipping is the installation. Note that if several servers are to form a cluster, create a separate directory and unzip into each one. I tried running two instances from a single install with only different configuration files and it did not work, because other files need to be written by both processes once they are running. So the best approach is one directory per instance.

Edit the clustered configuration file (clustered1.xml / clustered2.xml below, one per instance). If you need a custom cache definition, you can add a section like the following:

<subsystem xmlns="urn:infinispan:server:endpoint:6.0">
        <hotrod-connector socket-binding="hotrod"  cache-container="clusteredcache">
         <topology-state-transfer lazy-retrieval="false" lock-timeout="1000" replication-timeout="5000"/>
        </hotrod-connector>
        .........
      <subsystem xmlns="urn:infinispan:server:core:6.0" default-cache-container="clusteredcache">
                   <cache-container name="clusteredcache" default-cache="default" statistics="true">
                       <transport executor="infinispan-transport" lock-timeout="60000"/>
                    ......
               <distributed-cache name="directory-dist-cache" mode="SYNC" owners="2" remote-                   timeout="30000" start="EAGER">
              <locking isolation="READ_COMMITTED" acquire-timeout="30000" striping="false"/>
              <eviction strategy="LRU" max-entries="20" />
              <transaction mode="NONE"/>
              </distributed-cache>
             ..............
  </cache-container>

If you do not need a custom definition, you can use the default cache and simply configure it as distributed:

<distributed-cache name="default"/>

Modify server2's ports, mainly the port-offset attribute in the snippet below. With an offset of 100, server2's Hot Rod endpoint ends up on port 11322 (11222 + 100), which is the port the client uses later.

<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:100}">
        <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
        <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
        <socket-binding name="hotrod" port="11222"/>
        <socket-binding name="hotrod-internal" port="11223"/>
        <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:234.99.54.14}" multicast-port="45700"/>
        <socket-binding name="jgroups-tcp" port="7600"/>
        <socket-binding name="jgroups-tcp-fd" port="57600"/>
        <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:234.99.54.14}" multicast-port="45688"/>
        <socket-binding name="jgroups-udp-fd" port="54200"/>
        <socket-binding name="memcached" port="11211"/>
        <socket-binding name="rest" port="8080"/>
        <socket-binding name="rest-multi-tenancy" port="8081"/>
        <socket-binding name="rest-ssl" port="8443"/>
        <socket-binding name="txn-recovery-environment" port="4712"/>
        <socket-binding name="txn-status-manager" port="4713"/>
        <outbound-socket-binding name="remote-store-hotrod-server">
            <remote-destination host="remote-host" port="11222"/>
        </outbound-socket-binding>
        <outbound-socket-binding name="remote-store-rest-server">
            <remote-destination host="remote-host" port="8080"/>
        </outbound-socket-binding>
    </socket-binding-group>

2. Startup

standalone.bat -c=clustered1.xml -Djboss.node.name=server1

standalone.bat -c=clustered2.xml -Djboss.node.name=server2

The logs show server2 joining the cluster and the data being rebalanced.

 

3.監控和操做

  • CLI operations

The cache can be read and written directly through bin/cli.sh (or cli.bat). On Windows the basic commands are as follows:

[disconnected /] connect 127.0.0.1:9990
[standalone@127.0.0.1:9990 /] container clustered
[standalone@127.0.0.1:9990 cache-container=clustered] cache
ISPN019029: No cache selected yet
[standalone@127.0.0.1:9990 cache-container=clustered] cache default
[standalone@127.0.0.1:9990 distributed-cache=default] cache
default
[standalone@127.0.0.1:9990 distributed-cache=default] put 1 ericnie
[standalone@127.0.0.1:9990 distributed-cache=default] get 1
ericnie

The container name can be found in the clustered configuration (an excerpt is shown below); the same goes for the cache name, which defaults to default.

<subsystem xmlns="urn:infinispan:server:core:8.5" default-cache-container="clustered">
            <cache-container name="clustered" default-cache="default" statistics="true">
                <transport lock-timeout="60000"/>
                <global-state/>
                <distributed-cache-configuration name="transactional">
                    <transaction mode="NON_XA" locking="PESSIMISTIC"/>
                </distributed-cache-configuration>
                <distributed-cache-configuration name="async" mode="ASYNC"/>
                <replicated-cache-configuration name="replicated"/>
                <distributed-cache-configuration name="persistent-file-store">
                    <file-store shared="false" fetch-state="true" passivation="false"/>
                </distributed-cache-configuration>
                <distributed-cache-configuration name="indexed">
                    <indexing index="LOCAL" auto-config="true"/>
                </distributed-cache-configuration>

 

  • Monitoring

JBoss ON is reportedly reaching end of life, so monitoring will increasingly have to go through Prometheus or OpenShift's containerized monitoring stack. For now, let's set up the most basic JMX monitoring.

Start jconsole and connect over JMX to the local process or the remote management port (9990), then locate jboss.datagrid-infinispan in the MBeans tab (a programmatic sketch follows after the list below).

  • View the cluster attributes: CacheManager -> clustered

 

  • View cache entries
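
The same data can also be read programmatically. The following is only a sketch under several assumptions (it was not exercised in this article): it connects with WildFly's remote+http JMX protocol on the management port, which requires the jboss-cli-client.jar shipped with the server on the classpath; depending on your setup you may need to supply a management user; and the ObjectName keys and attribute names under jboss.datagrid-infinispan should be double-checked against what jconsole actually shows.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JdgJmxQuery {
    public static void main(String[] args) throws Exception {
        // Management port of server1 (9990); requires jboss-cli-client.jar on the classpath.
        JMXServiceURL url = new JMXServiceURL("service:jmx:remote+http://127.0.0.1:9990");

        Map<String, Object> env = new HashMap<>();
        // If the management interface requires authentication, pass a management user
        // created with add-user.sh / add-user.bat:
        // env.put(JMXConnector.CREDENTIALS, new String[] {"admin", "password"});

        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();

            // Browse everything the Data Grid subsystem exposes and print the
            // cluster attributes of the CacheManager MBeans (names as seen in jconsole).
            Set<ObjectName> names = mbsc.queryNames(new ObjectName("jboss.datagrid-infinispan:*"), null);
            for (ObjectName name : names) {
                if ("CacheManager".equals(name.getKeyProperty("component"))) {
                    System.out.println(name);
                    System.out.println("  clusterSize    = " + mbsc.getAttribute(name, "clusterSize"));
                    System.out.println("  clusterMembers = " + mbsc.getAttribute(name, "clusterMembers"));
                }
            }
        }
    }
}

Run against server1, it should print the same cluster size and member list that jconsole shows under CacheManager -> clustered.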

 

4. Client Access

Create a project directory jdg under tomcat's webapps, create WEB-INF inside it, and copy the client jars from the zip mentioned earlier into WEB-INF/lib.

Then write a bit of client access code.

<%@ page language="java" import="java.util.*" pageEncoding="gbk"%>
<%@ page import="org.infinispan.client.hotrod.RemoteCache,org.infinispan.client.hotrod.RemoteCacheManager,org.infinispan.client.hotrod.configuration.ConfigurationBuilder,com.redhat.lab.jdg.*,java.util.*" %>
<html>
  <head>
    <title>My JSP starting page</title>
  </head>
  
  <body>
    <h1>
        
     <%
       try {
           ConfigurationBuilder builder = new ConfigurationBuilder();
           builder.addServer().host("127.0.0.1")
                  .port(Integer.parseInt("11322"));
           RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
           RemoteCache<String, User> cache = cacheManager.getCache();

           User user = new User();
           user.setFirstName("John");
           user.setLastName("Doe");
           cache.put("jdoe", user);
           System.out.println("John Doe has been put into the cache");
           out.println("John Doe has been put into the cache");

           if (cache.containsKey("jdoe")) {
               System.out.println("jdoe key is indeed in the cache");
               out.println("jdoe key is indeed in the cache");
           }

           if (cache.containsKey("jane")) {
               System.out.println("jane key is indeed in the cache");
               out.println("jane key is indeed in the cache");
           }

           user = cache.get("jdoe");
           System.out.println("jdoe's firstname is " + user.getFirstName());
           out.println("jdoe's firstname is " + user.getFirstName());

       } catch (Exception e) {
           e.printStackTrace();
       }
     %>
    </h1>
  </body>
</html>
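
The JSP imports a User class from com.redhat.lab.jdg that is not shown in this article. A minimal sketch of what it could look like follows; the assumption is that the default Java serialization marshaller is used, so the class only needs to implement Serializable and be packaged under the application's WEB-INF.

package com.redhat.lab.jdg;

import java.io.Serializable;

// Minimal value object stored in the cache; Serializable so the default
// Java serialization marshaller can send it over Hot Rod.
public class User implements Serializable {

    private static final long serialVersionUID = 1L;

    private String firstName;
    private String lastName;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    @Override
    public String toString() {
        return "User[" + firstName + " " + lastName + "]";
    }
}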

 而後是各類驗證

 

 

5. OpenShift Deployment

First, locate the official image definitions:

https://github.com/openshift/library/tree/master/official/datagrid

Open imagestreams/jboss-datagrid72-openshift-rhel7.json, then pull the image locally:

docker pull registry.redhat.io/jboss-datagrid-7/datagrid72-openshift:1.2

Before pulling, log in to registry.redhat.io with docker login (a new requirement in 3.11 :().

Then check the Service Catalog.

 

We will go with the 7.2 "Ephemeral, no https" template.

oc get templates -n openshift

.....
datagrid72-basic                                    An example Red Hat JBoss Data Grid application. For more information about us...   17 (11 blank)     6

........

 

Then edit the template to change the image reference:

oc edit template datagrid72-basic -n openshift

Switch to the openshift namespace and import the ImageStream:

oc project openshift

oc import-image datagrid72-openshift:1.2   --from=registry.example.com/jboss-datagrid-7/datagrid72-openshift:1.2  --insecure --confirm

 

Everything is in place; time to create the application.

Enter a cache name and create it.

Creation complete.

Scale up the pods and check the pod logs; the new pod can be seen joining the cluster.

 

6. Verification in the OpenShift Environment

In OpenShift, JDG provides three access modes:

  • memcached, based on the memcached protocol
  • hotrod, TCP-based, suited to client-server access
  • REST (the datagrid-app service), HTTP-based and therefore exposed through an external route (a sketch follows after this list)
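
Only hotrod is exercised in the rest of this section. For completeness, here is a sketch of what REST access through the exposed route could look like; the route hostname is a placeholder, the /rest/{cache}/{key} path follows the standard JDG REST endpoint, and REST authentication is assumed to be disabled.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class JdgRestDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder: substitute the route created for the datagrid-app service.
        String base = "http://datagrid-app-jdg.apps.example.com/rest/default";

        // PUT a plain-text value under key "1".
        HttpURLConnection put = (HttpURLConnection) new URL(base + "/1").openConnection();
        put.setRequestMethod("PUT");
        put.setDoOutput(true);
        put.setRequestProperty("Content-Type", "text/plain");
        try (OutputStream os = put.getOutputStream()) {
            os.write("ericnie".getBytes("UTF-8"));
        }
        System.out.println("PUT status: " + put.getResponseCode());

        // GET the value back.
        HttpURLConnection get = (HttpURLConnection) new URL(base + "/1").openConnection();
        try (Scanner sc = new Scanner(get.getInputStream(), "UTF-8")) {
            System.out.println("GET value: " + (sc.hasNextLine() ? sc.nextLine() : ""));
        }
    }
}

The key and value mirror the put 1 ericnie example from the CLI section.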

The original plan was to modify the hotrod service, add a nodePort, and access it from a tomcat or Java client outside OpenShift, but that turned out not to work: the Hot Rod client connects directly to the actual addresses of the JDG pods, which are unreachable from outside the cluster. So tomcat has to be deployed inside OpenShift for the test.

 

  • Access within the same project: change the JSP code to
                ConfigurationBuilder builder = new ConfigurationBuilder();
                                builder.addServer().host(System.getenv("DATAGRID_APP_HOTROD_SERVICE_HOST"))
                                .port(Integer.parseInt(System.getenv("DATAGRID_APP_HOTROD_SERVICE_PORT")));
                                RemoteCacheManager  cacheManager = new RemoteCacheManager(builder.build());

Here the hotrod address is taken from the DATAGRID_APP_HOTROD_SERVICE_HOST and DATAGRID_APP_HOTROD_SERVICE_PORT environment variables that OpenShift injects into the tomcat pod.

Access succeeds.

  • Access from a different project: the service address has to be resolved from the service name in the Java code.

When accessing across projects, the pod's environment variables do not include the services of other projects, so the service address must be resolved from the service name. The core code is:

InetAddress address = InetAddress.getByName("datagrid-app-hotrod.jdg");
System.out.println(address.getHostAddress());

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer().host(address.getHostAddress())
       .port(Integer.parseInt("11333"));

Here the hotrod address is obtained with InetAddress.getByName("datagrid-app-hotrod.jdg"), i.e. the service name qualified with the project name. Verified to work.

Here is the complete jdg-write.jsp:

<%@ page language="java" import="java.util.*" pageEncoding="gbk"%>
<%@ page import="org.infinispan.client.hotrod.RemoteCache,org.infinispan.client.hotrod.RemoteCacheManager,org.infinispan.client.hotrod.configuration.ConfigurationBuilder,com.redhat.lab.jdg.*,java.net.*,java.util.*" %>
<html>
  <head>
    <title>My JSP starting page</title>
  </head>

  <body>
    <h1>

     <%
       try {
           InetAddress address = InetAddress.getByName("datagrid-app-hotrod.jdg");
           System.out.println(address.getHostAddress());

           ConfigurationBuilder builder = new ConfigurationBuilder();
           builder.addServer().host(address.getHostAddress())
                  .port(Integer.parseInt("11333"));
           RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());

           RemoteCache<String, User> cache = cacheManager.getCache("samples");

           User user = new User();
           user.setFirstName("John");
           user.setLastName("Doe");
           cache.put("jdoe", user);
           System.out.println("John Doe has been put into the cache");
           out.println("John Doe has been put into the cache");

           if (cache.containsKey("jdoe")) {
               System.out.println("jdoe key is indeed in the cache");
               out.println("jdoe key is indeed in the cache");
           }

           if (cache.containsKey("jane")) {
               System.out.println("jane key is indeed in the cache");
               out.println("jane key is indeed in the cache");
           }

           user = cache.get("jdoe");
           System.out.println("jdoe's firstname is " + user.getFirstName());
           out.println("jdoe's firstname is " + user.getFirstName());

       } catch (Exception e) {
           e.printStackTrace();
       }
     %>
    </h1>
  </body>
</html>

It took until the eleventh build to clear out all the silly mistakes and get it running. After repeated debugging and rebuilding, the three commands executed each time are recorded below:

docker build -t registry.example.com/jdg/tomcatsample:v1 .

docker push registry.example.com/jdg/tomcatsample:v1

oc import-image tomcatsample:v1   --from=registry.example.com/jdg/tomcatsample:v1  --insecure --confirm

After that, the DeploymentConfig's image-change trigger takes care of rolling out updated pods whenever the image changes.

The Dockerfile that builds the Tomcat client image:

[root@master client]# cat Dockerfile 
FROM registry.example.com/tomcat:8-slim

RUN mkdir -p /usr/local/tomcat/webapps/jdg
COPY samples/jdg/* /usr/local/tomcat/webapps/jdg/


USER root

RUN unzip -d /usr/local/tomcat/webapps/jdg/ /usr/local/tomcat/webapps/jdg/WEB-INF.zip

CMD [ "/usr/local/tomcat/bin/catalina.sh","run" ]