Running the Oozie demo

Last night I got Oozie installed and running, with MySQL configured as its backing database. Today it was time to run the demos that ship with Oozie, and sure enough, the very first run blew up. There were far too many errors to list one by one, so I'll just describe the fix that finally worked.

oozie job -oozie http://localhost:11000/oozie -config examples/apps/map-reduce/job.properties -run

This command needs to be run from inside the Oozie installation directory. After digging through a lot of material online, I finally got it working; three configuration files need to be changed.
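In other words, a minimal sketch (the install path here is an assumption; adjust to yours):

```shell
# Run the example submission from inside the Oozie installation directory
cd /usr/local/oozie-3.3.2   # assumed install path
oozie job -oozie http://localhost:11000/oozie \
    -config examples/apps/map-reduce/job.properties -run
```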

Before getting to the configuration changes, there are a few prerequisites I skipped, so let me fill those in first. Extract oozie-examples.tar.gz, oozie-client-3.3.2.tar.gz, and oozie-sharelib-3.3.2.tar.gz from the installation directory, then upload the examples and share directories to HDFS.

hadoop fs -put examples examples

hadoop fs -put share share
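Put together, the extraction and upload steps look roughly like this (run from the Oozie installation directory; the archive names match the 3.3.2 release used here):

```shell
# Extract the bundled examples, client, and sharelib
tar -xzf oozie-examples.tar.gz
tar -xzf oozie-client-3.3.2.tar.gz
tar -xzf oozie-sharelib-3.3.2.tar.gz

# Upload examples and share into the current user's HDFS home directory
hadoop fs -put examples examples
hadoop fs -put share share
```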

Then configure the oozie-client environment variables in /etc/profile.
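For example, something like this in /etc/profile (the client path is an assumption; OOZIE_URL is optional, but with it set the -oozie flag can be omitted from oozie commands):

```shell
export OOZIE_CLIENT_HOME=/usr/local/oozie-client-3.3.2   # assumed extract location
export PATH=$PATH:$OOZIE_CLIENT_HOME/bin
export OOZIE_URL=http://localhost:11000/oozie            # default server URL for the oozie CLI
```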

Now for how I actually fixed Oozie.

1. Edit oozie-site.xml in Oozie's conf directory

Add the following:

 

<property>  
       <name>oozie.services</name>  
        <value>  
            org.apache.oozie.service.SchedulerService,   
            org.apache.oozie.service.InstrumentationService,   
            org.apache.oozie.service.CallableQueueService,   
            org.apache.oozie.service.UUIDService,   
            org.apache.oozie.service.ELService,   
            org.apache.oozie.service.AuthorizationService,      
            org.apache.oozie.service.MemoryLocksService,   
            org.apache.oozie.service.DagXLogInfoService,   
            org.apache.oozie.service.SchemaService,   
            org.apache.oozie.service.LiteWorkflowAppService,   
            org.apache.oozie.service.JPAService,   
            org.apache.oozie.service.StoreService,   
            org.apache.oozie.service.CoordinatorStoreService,   
            org.apache.oozie.service.SLAStoreService,   
            org.apache.oozie.service.DBLiteWorkflowStoreService,   
            org.apache.oozie.service.CallbackService,   
            org.apache.oozie.service.ActionService,   
            org.apache.oozie.service.ActionCheckerService,   
            org.apache.oozie.service.RecoveryService,   
            org.apache.oozie.service.PurgeService,   
            org.apache.oozie.service.CoordinatorEngineService,   
            org.apache.oozie.service.BundleEngineService,   
            org.apache.oozie.service.DagEngineService,   
            org.apache.oozie.service.CoordMaterializeTriggerService,   
            org.apache.oozie.service.StatusTransitService,   
            org.apache.oozie.service.PauseTransitService, 
        org.apache.oozie.service.HadoopAccessorService  
        </value>  
        <description>  
            All services to be created and managed by Oozie Services singleton.   
            Class names must be separated by commas.   
        </description>  
    </property>

<property> 
       <name>oozie.service.ProxyUserService.proxyuser.cenyuhai.hosts</name> 
       <value>*</value> 
       <description> 
           List of hosts the '#USER#' user is allowed to perform 'doAs' 
           operations.

           The '#USER#' must be replaced with the username of the user who is 
           allowed to perform 'doAs' operations.

           The value can be the '*' wildcard or a list of hostnames.

           For multiple users copy this property and replace the user name 
           in the property name. 
       </description> 
   </property>

   <property> 
       <name>oozie.service.ProxyUserService.proxyuser.cenyuhai.groups</name> 
       <value>*</value> 
       <description> 
           List of groups the '#USER#' user is allowed to impersonate users 
           from to perform 'doAs' operations.

           The '#USER#' must be replaced with the username of the user who is 
           allowed to perform 'doAs' operations.

           The value can be the '*' wildcard or a list of groups.

           For multiple users copy this property and replace the user name 
           in the property name. 
       </description> 
   </property> 

 

2. Edit oozie-env.sh and add the following

export OOZIE_CONF=${OOZIE_HOME}/conf 
export OOZIE_DATA=${OOZIE_HOME}/data 
export OOZIE_LOG=${OOZIE_HOME}/logs 
export CATALINA_BASE=${OOZIE_HOME}/oozie-server 
export CATALINA_TMPDIR=${OOZIE_HOME}/oozie-server/temp 
export CATALINA_OUT=${OOZIE_LOG}/catalina.out

 

3. Edit the Hadoop core-site.xml on every node:

<property> 
    <name>hadoop.proxyuser.cenyuhai.hosts</name> 
    <value>hadoop.Master</value> 
 </property> 
 <property> 
    <name>hadoop.proxyuser.cenyuhai.groups</name> 
    <value>cenyuhai</value> 
</property>

Then restart everything and the job will run. The cenyuhai above is my local account name.
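The restart might look like this on a Hadoop 1.x-era setup (script names and the HADOOP_HOME/OOZIE_HOME variables are assumptions; adjust to your cluster):

```shell
# Restart Hadoop so the proxyuser change in core-site.xml takes effect
$HADOOP_HOME/bin/stop-all.sh
$HADOOP_HOME/bin/start-all.sh

# Restart the Oozie server to pick up oozie-site.xml and oozie-env.sh
$OOZIE_HOME/bin/oozied.sh stop
$OOZIE_HOME/bin/oozied.sh start
```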

 

One more thing: after all of the configuration above, jobs could be submitted, but when I submitted an MR job and checked it in the web UI, I hit another error:

 JA006: Call to localhost/127.0.0.1:9001 failed on connection exception: java.net.ConnectException: Connection refused

I chased this problem for a long time without getting anywhere. In the end the fix was to edit job.properties and change jobTracker from localhost:9001 to the full address below. This is probably related to how my Hadoop jobTracker is configured, so if you run into the same thing, give it a try.

nameNode=hdfs://192.168.1.133:9000
jobTracker=http://192.168.1.133:9001
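For reference, the complete job.properties for the map-reduce example would then look something like this (queueName and examplesRoot are the defaults shipped with the example; the jobTracker value is the one that worked for me above):

```properties
nameNode=hdfs://192.168.1.133:9000
jobTracker=http://192.168.1.133:9001
queueName=default
examplesRoot=examples
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce
```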

Next up is the Hive demo. Before running it, remember to edit the Hive demo's job.properties the same way as above.

Then submit. The submission succeeded, but in the web UI the status showed KILLED. It got killed...

Error code: JA018, error message: org/apache/hadoop/hive/cli/CliDriver

That looked like a jar problem, so I deleted every jar in the hive directory under share and copied in all the jars from the Hive installation on my own machine.

Then upload it to the shared directory again:

hadoop fs -put share share
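The jar swap as a whole, assuming $HIVE_HOME points at the local Hive install and share was extracted into the current directory:

```shell
# Replace the sharelib's Hive jars with the ones from the local Hive install
rm share/lib/hive/*.jar
cp $HIVE_HOME/lib/*.jar share/lib/hive/

# Remove the stale copy on HDFS, then re-upload
hadoop fs -rmr share
hadoop fs -put share share
```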

Submit again, and this time the status shows success!

oozie job -oozie http://localhost:11000/oozie -config examples/apps/hive/job.properties -run

But annoyingly, it turned out the job was inserting its data into Derby... Speechless. The run showed success but was useless, because we configured an external MySQL database. So what now?

You need to edit workflow.xml and change its configuration section to the following:

<configuration>
    <property>
        <name>mapred.job.queue.name</name>
        <value>${queueName}</value>
    </property>
    <property>
        <name>hive.metastore.local</name>
        <value>true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://192.168.1.133:3306/hive?createDatabaseIfNotExist=true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>mysql</value>
    </property>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>
</configuration>

After submitting again, the tables you create are visible from Hive. Oh yeah!
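A quick way to double-check that the workflow really wrote to the MySQL-backed metastore rather than Derby (connection details match the workflow.xml above; TBLS is the metastore table that records table names):

```shell
# Should list the table created by the workflow
hive -e 'show tables;'

# The same table name should appear in the MySQL metastore
mysql -h 192.168.1.133 -u hive -pmysql hive -e 'select TBL_NAME from TBLS;'
```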
