Apache Kylin Installation

Upload the installation package

Use an FTP tool to upload the installation package to /usr/local on the server.
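If no FTP client is at hand, scp works just as well; a minimal sketch, assuming the package sits in the current directory and "kylinserver" stands in for the target host:

# copy the Kylin package to /usr/local on the target server ("kylinserver" is a placeholder hostname)
$scp apache-kylin-1.3-HBase-1.1-SNAPSHOT-bin.tar root@kylinserver:/usr/local/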

解壓

$tar -xvf apache-kylin-1.3-HBase-1.1-SNAPSHOT-bin.tar

Set environment variables

Add the following lines to /etc/profile:

export KYLIN_HOME=/usr/local/apache-kylin-1.3-HBase-1.1-SNAPSHOT-bin

export HCAT_HOME=/usr/hdp/2.3.2.0-2950/hive-hcatalog

 

Then apply the changes with source:

$source /etc/profile
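To confirm the variables took effect in the current shell, a quick check (plain shell, nothing Kylin-specific):

$echo $KYLIN_HOME
$echo $HCAT_HOME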

Configuration

Go to the /usr/local/apache-kylin-1.3-HBase-1.1-SNAPSHOT-bin/conf directory and edit kylin.properties.

Parameters that need to be modified or added (a consolidated example of the edited values appears at the end of this section):

 

Parameter | Value format | Meaning
kylin.rest.servers | Hostname:7070 | Hostname is the IP (or hostname) of the Kylin server; 7070 is Kylin's HTTP port
kylin.metadata.url | kylin_metadata@hbase | kylin_metadata is the name of the HBase table Kylin creates for its metadata store
kylin.storage.url | hbase | default value
kylin.hdfs.working.dir | /kylin | Kylin's working directory on HDFS; create this directory on HDFS during installation and grant read/write permission to the user that runs Kylin (see the commands after the table)
kylin.hbase.cluster.fs | hdfs://mycluster/apps/hbase/data_new | the HDFS directory where HBase stores its data; the value must match the hbase.rootdir parameter in the HBase configuration file
kylin.route.hive.enabled | true | default
kylin.route.hive.url | jdbc:hive2://HiveServer2ip:10000 | HiveServer2ip is the host where the HiveServer2 component is installed; 10000 is the default JDBC port
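For the kylin.hdfs.working.dir entry above, the working directory can be created and opened up roughly as follows (the "kylin" owner is an assumption; substitute whatever account the Kylin server actually runs as):

# create the Kylin working directory on HDFS and hand it to the Kylin user
$hadoop fs -mkdir /kylin
$hadoop fs -chown -R kylin /kylin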

 

The remaining parameters do not need to be changed; keep their defaults.
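Putting the table together, the edited portion of kylin.properties ends up looking like the following (the host names and the HBase path are placeholders for this cluster; use your own values):

kylin.rest.servers=kylinhost:7070
kylin.metadata.url=kylin_metadata@hbase
kylin.storage.url=hbase
kylin.hdfs.working.dir=/kylin
kylin.hbase.cluster.fs=hdfs://mycluster/apps/hbase/data_new
kylin.route.hive.enabled=true
kylin.route.hive.url=jdbc:hive2://hiveserver2host:10000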

Start Kylin

$cd /usr/local/apache-kylin-1.3-HBase-1.1-SNAPSHOT-bin/bin

$./kylin.sh start

Verify that Kylin started successfully

  1. Check whether port 7070 is listening:

netstat -an | grep 7070

  2. Log in directly at http://kylinserverip:7070/kylin
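The same check can also be scripted from the command line (kylinserverip is the same placeholder as above):

# a 200 response means the Kylin web UI is up
$curl -I http://kylinserverip:7070/kylin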

===============================================================

For reference, here is the full kylin.properties from a running system:

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

## Config for Kylin Engine ##

# List of web servers in use, this enables one web server instance to sync up with other servers.
kylin.rest.servers=hadoop00:7070,hadoop01:7070,hadoop02:7070,hadoop04:7070,hadoop05:7070

#set display timezone on UI,format like[GMT+N or GMT-N]
kylin.rest.timezone=GMT+8
kylin.query.cache.enabled=true
# The metadata store in hbase
kylin.metadata.url=kylin_metadata@hbase

# The storage for final cube file in hbase
kylin.storage.url=hbase
kylin.job.yarn.app.rest.check.status.url=http://hadoop02:8088/ws/v1/cluster/apps/${job_id}?
kylin.job.yarn.app.rest.check.interval.seconds=20
kylin.query.security.enabled=false
# Temp folder in hdfs, make sure user has the right access to the hdfs directory
kylin.hdfs.working.dir=/kylin

# HBase Cluster FileSystem, which serving hbase, format as hdfs://hbase-cluster:8020
# leave empty if hbase running on same cluster with hive and mapreduce
kylin.hbase.cluster.fs=hdfs://mycluster/apps/hbase/data
kylin.route.hive.enabled=true
kylin.route.hive.url=jdbc:hive2://hadoop00:10000

kylin.job.mapreduce.default.reduce.input.mb=500

kylin.server.mode=all

# If true, job engine will not assume that hadoop CLI reside on the same server as it self
# you will have to specify kylin.job.remote.cli.hostname, kylin.job.remote.cli.username and kylin.job.remote.cli.password
# It should not be set to "true" unless you're NOT running Kylin.sh on a hadoop client machine 
# (Thus kylin instance has to ssh to another real hadoop client machine to execute hbase,hive,hadoop commands)
kylin.job.run.as.remote.cmd=false

# Only necessary when kylin.job.run.as.remote.cmd=true
kylin.job.remote.cli.hostname=

# Only necessary when kylin.job.run.as.remote.cmd=true
kylin.job.remote.cli.username=

# Only necessary when kylin.job.run.as.remote.cmd=true
kylin.job.remote.cli.password=

# Used by test cases to prepare synthetic data for sample cube
kylin.job.remote.cli.working.dir=/tmp/kylin

# Max count of concurrent jobs running
kylin.job.concurrent.max.limit=10

# Time interval to check hadoop job status
kylin.job.yarn.app.rest.check.interval.seconds=10

# Hive database name for putting the intermediate flat tables
kylin.job.hive.database.for.intermediatetable=kylin

#default compression codec for htable,snappy,lzo,gzip,lz4
#kylin.hbase.default.compression.codec=lzo

# The cut size for hbase region, in GB.
# E.g, for cube whose capacity be marked as "SMALL", split region per 10GB by default
kylin.hbase.region.cut.small=10
kylin.hbase.region.cut.medium=20
kylin.hbase.region.cut.large=100

# HBase min and max region count
kylin.hbase.region.count.min=1
kylin.hbase.region.count.max=500

## Config for Restful APP ##
# database connection settings:
ldap.server=
ldap.username=
ldap.password=
ldap.user.searchBase=
ldap.user.searchPattern=
ldap.user.groupSearchBase=
ldap.service.searchBase=OU=
ldap.service.searchPattern=
ldap.service.groupSearchBase=
acl.adminRole=
acl.defaultRole=
ganglia.group=
ganglia.port=8664

## Config for mail service

# If true, will send email notification;
mail.enabled=false
mail.host=
mail.username=
mail.password=
mail.sender=

###########################config info for web#######################

#help info ,format{name|displayName|link} ,optional
kylin.web.help.length=4
kylin.web.help.0=start|Getting Started|
kylin.web.help.1=odbc|ODBC Driver|
kylin.web.help.2=tableau|Tableau Guide|
kylin.web.help.3=onboard|Cube Design Tutorial|
#hadoop url link ,optional
kylin.web.hadoop=
#job diagnostic url link ,optional
kylin.web.diagnostic=
#contact mail on web page ,optional
kylin.web.contact_mail=

###########################config info for front#######################

#env DEV|QA|PROD
deploy.env=PROD

###########################config info for sandbox#######################
kylin.sandbox=true


###########################config info for kylin monitor#######################
# hive jdbc url
kylin.monitor.hive.jdbc.connection.url=jdbc:hive2://hadoop00:10000

#config where to parse query log,split with comma ,will also read $KYLIN_HOME/tomcat/logs/ by default
kylin.monitor.ext.log.base.dir = /tmp/kylin_log1,/tmp/kylin_log2

#will create external hive table to query result csv file
#will set to kylin_query_log by default if not config here
kylin.monitor.query.log.parse.result.table = kylin_query_log