Installing and Uninstalling MySQL on Linux, Setting the Password, and Testing Hive

Uninstalling MySQL on a Linux system

1. First uninstall mysql and its related components via yum, command: yum remove mysql*
    2. Use the command rpm -qa|grep -i mysql to find the MySQL-related files on the system
    3. Then remove the MySQL-related software with the command: sudo rpm -e --nodeps <package-name>
    4. Uninstalling does not delete /etc/my.cnf; remove it manually with the command: rm -rf /etc/my.cnf
    You need to delete the configuration file /etc/my.cnf and the database files under /var/lib/mysql; the delete command is rm -rf <file-or-directory>
    5. Finally, run rpm -qa|grep -i mysql again to confirm whether the system still contains MySQL-related files; if nothing is listed, the uninstall is clean
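The uninstall steps above can be sketched as one script. This is my own sketch, not from the original: a dry-run wrapper echoes the destructive commands by default so you can preview them, and only executes them when you set DRY_RUN=0.

```shell
#!/bin/sh
# Dry-run sketch of the uninstall steps above; set DRY_RUN=0 to actually execute.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $*"      # preview only
  else
    "$@"
  fi
}

run yum remove -y 'mysql*'        # step 1: remove packages via yum
run rm -rf /etc/my.cnf            # step 4: leftover config file
run rm -rf /var/lib/mysql         # leftover database files
command -v rpm >/dev/null 2>&1 && rpm -qa | grep -i mysql \
  || echo "no mysql packages found"   # step 5: verify nothing is left
```

Running it once in dry-run mode and checking the printed commands before flipping DRY_RUN is a cheap safeguard against deleting the wrong data directory.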

Installing MySQL on a Linux system

1. Download the Linux version of MySQL. Note: you need to upload the downloaded archive to the Linux machine first, then extract it with tar -zxvf <archive-name>
    2. After entering the Linux system, switch to the root user first; root has higher privileges and is allowed to manage system services: su - root, press Enter, then type the password
    3. Check whether MySQL is already installed: rpm -qa | grep mysql     or rpm -qa | grep -i mysql
    4. Install command:   rpm -ivh <package-name>
    We need to install both the MySQL server (Server) and the client (client)
    rpm -ivh MySQL-server-5.6.30-1.linux_glibc2.5.x86_64.rpm
    rpm -ivh MySQL-client-5.6.30-1.linux_glibc2.5.x86_64.rpm
    Note: the client must be installed, otherwise you cannot enter MySQL from the Linux command line; typing the mysql command will just report an error.
    Start the MySQL service: service mysql start
    5. After installation completes, you can check Linux's listening ports with netstat -nat to see whether port 3306 is being listened on
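As a small illustration of step 5, here is a helper of my own (not from the original) that scans netstat-style output for a listener on a given port; in practice you would pipe `netstat -nat` into it, while the demo below feeds it a canned line:

```shell
#!/bin/sh
# Print the first line of netstat-style input that shows a LISTEN socket on $1.
listening_on() {
  port=$1
  grep -E "[:.]${port}[[:space:]].*LISTEN" | head -n 1
}

# Demo with a canned line such as `netstat -nat` would print for MySQL:
echo "tcp  0  0 0.0.0.0:3306  0.0.0.0:*  LISTEN" | listening_on 3306
```

If the function prints nothing, nothing is listening on that port and the MySQL service is probably not running.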
    
    yum install method
    
    1.1. Check whether it is already installed: yum list installed mysql*
    1.2. Check whether install packages are available: yum list mysql*
    1.3. Install the mysql client: yum install mysql
    1.4. Install the mysql server: yum install mysql-server
    1.5. Database character set: add default-character-set=utf8 to the mysql configuration file /etc/my.cnf
    1.6. Start the mysql service: service mysqld start or /etc/init.d/mysqld start
    1.7. Start at boot: sudo chkconfig mysqld on; check with chkconfig --list | grep mysql*
    mysqld             0:off    1:off    2:on    3:on    4:on    5:on    6:off
    1.8. Stop: service mysqld stop
    1.9. Enable login by setting a root administrator password: mysqladmin -u root password 123456
    Log in: mysql -u root -p, then enter the password
    1.10. If you forget the password:
    service mysqld stop
    mysqld_safe --user=root --skip-grant-tables
    mysql -u root
    use mysql
    update user set password=password("new_pass") where user="root";
    flush privileges;
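The interactive reset above can also be done non-interactively by piping the statements into the client as one batch. The sketch below is my own (the actual pipe to `mysql -u root` is left commented out, so here it only prints the SQL). One caveat: the `UPDATE ... SET password=PASSWORD(...)` form applies to 5.6-era servers like the one in this article; on MySQL 5.7 and later the column was renamed `authentication_string` and `ALTER USER` is the supported way.

```shell
#!/bin/sh
# Collect the reset statements from the steps above into one batch.
sql=$(cat <<'EOF'
USE mysql;
UPDATE user SET password=PASSWORD('new_pass') WHERE user='root';
FLUSH PRIVILEGES;
EOF
)
echo "$sql"
# With the server started via `mysqld_safe --skip-grant-tables`, you would run:
# echo "$sql" | mysql -u root
```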

Changing the initial MySQL password

Note: first stop your mysql service: service mysql stop or /etc/init.d/mysqld stop
    
    1. If you cannot log in as root, you can change the initial password with a safe-mode-like approach: first run mysqld_safe --skip-grant-tables &   (this starts safe mode); the & runs it in the background; if you prefer not to run it in the background, just open another terminal
    
    <1># mysql
    mysql> use mysql;
    mysql> UPDATE user SET password=password("123456") WHERE user='root';    (it will report Query OK on success)
    mysql> flush privileges;
    mysql> exit;
    
    <2>Outside the mysql shell, use mysqladmin
    # mysqladmin -u root -p password "test123"
    Enter password: [enter the old password]
    
    <3>. If you can log in to the mysql system, change the password after logging in
    # mysql -uroot -p
    Enter password: [enter the old password]
    mysql>use mysql;
    mysql> update user set password=password("123456") where user='root';
    mysql> flush privileges;
    mysql> exit; 
    
    2. Add MySQL to the system startup items: chkconfig mysql on; check whether it was added with chkconfig --list | grep mysql
    3. Log in to your MySQL system: mysql -uroot -p, press Enter, then type your password
    4. Add the system mysql group and mysql user, running: groupadd mysql and useradd -r -g mysql mysql
    5. Change the owner of the data directory to the mysql user, running: chown -R mysql:mysql data
    6. Put the mysql client on the default path: ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql
    Note: use a symlink rather than copying the file directly; this makes it easier to install multiple versions of mysql on one system

MySQL service status, start, stop, and restart commands

service mysql start      or    /etc/init.d/mysql start
    service mysql stop      or    /etc/init.d/mysql stop
    service mysql restart   or    /etc/init.d/mysql restart
    service mysql status    or    /etc/init.d/mysql status

Installing and configuring hive

1. Start and set up mysql
    Start the mysql service
    sudo service mysql start

    2. Set it to start at boot
    sudo chkconfig mysql on

    3. Set the root user's login password
    sudo /usr/bin/mysqladmin -u root password 'root123'

    4. Log in to mysql as the root user
    mysql -uroot -proot123
    
    5. Create the hive user, database, etc.
    insert into mysql.user(Host,User,Password) values("localhost","hive",password("hive"));
    create database hive;
    grant all on hive.* to hive@'%'  identified by 'hive';
    grant all on hive.* to hive@'localhost'  identified by 'hive';
    flush privileges; 

    6. Exit mysql 
    exit
    
    7. Verify the hive user
    mysql -uhive -phive
    show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | hive               |
    | test               |
    +--------------------+
    3 rows in set (0.00 sec)
    Exit mysql
    exit

Installing hive

1. Extract the installation package
    cd  ~
    tar -zxvf apache-hive-1.1.0-bin.tar.gz
    2. Create a symlink
    ln -s apache-hive-1.1.0-bin hive
    3. Add the environment variables
    vi  .bash_profile
    Add the following environment variables
    export HIVE_HOME=/home/hdpsrc/hive
    export PATH=$PATH:$HIVE_HOME/bin

    Make them take effect
    source .bash_profile
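Step 3 can also be done without opening vi. The sketch below is my own; it writes to a demo file rather than your real ~/.bash_profile, and the /home/hdpsrc/hive path is the tutorial's own, so adjust both to your setup:

```shell
#!/bin/sh
# Append the two exports to a profile file non-interactively.
profile=./bash_profile_demo            # demo target; in practice: $HOME/.bash_profile
cat >> "$profile" <<'EOF'
export HIVE_HOME=/home/hdpsrc/hive
export PATH=$PATH:$HIVE_HOME/bin
EOF
grep HIVE_HOME "$profile"              # confirm the lines landed
```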

Configuring hive
4. Modify hive-site.xml
cp hive/conf/hive-default.xml.template hive/conf/hive-site.xml
Edit hive-site.xml

The main parameters to modify are the following
    <property>
       <name>javax.jdo.option.ConnectionURL</name>
       <value>jdbc:mysql://Master:3306/hive</value>
    </property>
     
    <property>
       <name>javax.jdo.option.ConnectionDriverName</name>
       <value>com.mysql.jdbc.Driver</value>
    </property>

    <property>
       <name>javax.jdo.option.ConnectionPassword</name>
       <value>hive</value>
    </property>
     
    <property>
       <name>hive.hwi.listen.port</name>
       <value>9999</value>
       <description>This is the port the Hive Web Interface will listen on</description>
    </property>

    <property>
       <name>datanucleus.autoCreateSchema</name>
       <value>true</value>
    </property>
     
    <property>
       <name>datanucleus.fixedDatastore</name>
       <value>false</value>
    </property>

      <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
        <description>Username to use against metastore database</description>
      </property>

      <property>
        <name>hive.exec.local.scratchdir</name>
        <value>/home/hdpsrc/hive/iotmp</value>
        <description>Local scratch space for Hive jobs</description>
      </property>
      <property>
        <name>hive.downloaded.resources.dir</name>
        <value>/home/hdpsrc/hive/iotmp</value>
        <description>Temporary local directory for added resources in the remote file system.</description>
      </property>
      <property>
        <name>hive.querylog.location</name>
        <value>/home/hdpsrc/hive/iotmp</value>
        <description>Location of Hive run time structured log file</description>
      </property>
      
    5. Copy mysql-connector-java-5.1.6-bin.jar into hive's lib directory
    mv /home/hdpsrc/Desktop/mysql-connector-java-5.1.6-bin.jar /home/hdpsrc/hive/lib/

    6. Copy jline-2.12.jar into the corresponding hadoop directory to replace jline-0.9.94.jar, otherwise startup will fail
    cp /home/hdpsrc/hive/lib/jline-2.12.jar /home/hdpsrc/hadoop-2.6.0/share/hadoop/yarn/lib/
    mv /home/hdpsrc/hadoop-2.6.0/share/hadoop/yarn/lib/jline-0.9.94.jar /home/hdpsrc/hadoop-2.6.0/share/hadoop/yarn/lib/jline-0.9.94.jar.bak
    7. Create hive's temporary directory
    mkdir /home/hdpsrc/hive/iotmp
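Step 6 is a back-up-then-replace pattern. Here it is as a small reusable function (my own sketch, not from the original), demonstrated on throwaway files under mktemp so it is safe to run anywhere:

```shell
#!/bin/sh
# Replace an old jar with a new one, keeping the old one as a .bak backup (step 6 above).
swap_jar() {
  new_jar=$1   # e.g. /home/hdpsrc/hive/lib/jline-2.12.jar
  old_jar=$2   # e.g. the jline-0.9.94.jar under hadoop's yarn/lib
  mv "$old_jar" "$old_jar.bak"                # keep the original as .bak
  cp "$new_jar" "$(dirname "$old_jar")/"      # drop the replacement next to it
}

# Demo on temporary files instead of the real hadoop tree:
demo=$(mktemp -d)
mkdir "$demo/lib"
echo new > "$demo/jline-2.12.jar"
echo old > "$demo/lib/jline-0.9.94.jar"
swap_jar "$demo/jline-2.12.jar" "$demo/lib/jline-0.9.94.jar"
ls "$demo/lib"
```

Keeping the .bak file means the change can be reverted with a single `mv` if the new jar causes problems.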
    
    4. Start and test hive
    
    Initialize the hive metastore 
            Run this from $HIVE_HOME/bin 
            bin]#./schematool -initSchema -dbType mysql -userName hive -passWord hive
            
    After starting hadoop, run the hive command
    
    #hive

    Test by entering show databases;
    hive> show databases;
    OK
    default
    Time taken: 0.907 seconds, Fetched: 1 row(s)

Path of the logs produced by hive

<property>
          <name>hive.querylog.location</name>
          <value>${system:java.io.tmpdir}/${system:user.name}</value>
          <description>Location of Hive run time structured log file</description>
       </property>
       Modify the hive-log4j.properties configuration file

      cp hive-log4j.properties.template  hive-log4j.properties

       # list of properties
      property.hive.log.level = INFO
      property.hive.root.logger = DRFA
      property.hive.log.dir = ${sys:java.io.tmpdir}/${sys:user.name}
      property.hive.log.file = hive.log
      property.hive.perflogger.log.level = INFO
      

    1) Create the hive user in mysql and grant it sufficient privileges
    [root@node01 mysql]# mysql -u root -p
    Enter password:

    mysql> create user 'hive' identified by 'hive';
    Query OK, 0 rows affected (0.00 sec)

    mysql> grant all privileges on *.* to 'hive' with grant option;
    Query OK, 0 rows affected (0.00 sec)

    mysql> flush privileges;
    Query OK, 0 rows affected (0.01 sec)

    2) Test that the hive user can connect to mysql normally, and create the hive database
    [root@node01 mysql]# mysql -u hive -p
    Enter password:

    mysql> create database hive;
    Query OK, 1 row affected (0.00 sec)

    mysql> use hive;
    Database changed
    mysql> show tables;
    Empty set (0.00 sec)

    3) Extract the hive installation package
    tar -xzvf hive-0.9.0.tar.gz
    [hadoop@node01 ~]$ cd hive-0.9.0
    [hadoop@node01 hive-0.9.0]$ ls
    bin  conf  docs  examples  lib  LICENSE  NOTICE  README.txt  RELEASE_NOTES.txt  scripts  src

    4) Download the mysql JDBC driver and copy it into hive home's lib directory
    [hadoop@node01 ~]$ mv mysql-connector-java-5.1.24-bin.jar ./hive-0.9.0/lib

    5) Modify the environment variables to add Hive to PATH
    /etc/profile
    export HIVE_HOME=/home/hadoop/hive-0.9.0
    export PATH=$PATH:$HIVE_HOME/bin

    6) Modify hive-env.sh
    [hadoop@node01 conf]$ cp hive-env.sh.template hive-env.sh
    [hadoop@node01 conf]$ vi hive-env.sh

    7) Copy hive-default.xml and name it hive-site.xml
    Modify the four key settings below to match the mysql configuration above
    [hadoop@node01 conf]$ cp hive-default.xml.template hive-site.xml
    [hadoop@node01 conf]$ vi hive-site.xml
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
      <description>JDBC connect string for a JDBC metastore</description>
    </property>

    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
      <description>Driver class name for a JDBC metastore</description>
    </property>

    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>hive</value>
      <description>username to use against metastore database</description>
    </property>

    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>hive</value>
      <description>password to use against metastore database</description>
    </property>

    8) Start Hadoop and open the hive shell to test
    [hadoop@node01 conf]$ start-all.sh

    hive> load data inpath 'hdfs://node01:9000/user/hadoop/access_log.txt'
        > overwrite into table records;
    Loading data to table default.records
    Moved to trash: hdfs://node01:9000/user/hive/warehouse/records
    OK
    Time taken: 0.526 seconds
    hive> select ip, count(*) from records
        > group by ip;
    Total MapReduce jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks not specified. Estimated from input data size: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapred.reduce.tasks=<number>
    Starting Job = job_201304242001_0001, Tracking URL = http://node01:50030/jobdetails.jsp?jobid=job_201304242001_0001
    Kill Command = /home/hadoop/hadoop-0.20.2/bin/../bin/hadoop job  -Dmapred.job.tracker=192.168.231.131:9001 -kill job_201304242001_0001
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
    2013-04-24 20:11:03,127 Stage-1 map = 0%,  reduce = 0%
    2013-04-24 20:11:11,196 Stage-1 map = 100%,  reduce = 0%
    2013-04-24 20:11:23,331 Stage-1 map = 100%,  reduce = 100%
    Ended Job = job_201304242001_0001
    MapReduce Jobs Launched:
    Job 0: Map: 1  Reduce: 1   HDFS Read: 7118627 HDFS Write: 9 SUCCESS
    Total MapReduce CPU Time Spent: 0 msec
    OK
    NULL    28134
    Time taken: 33.273 seconds

    records is simply a file in HDFS:
    [hadoop@node01 home]$ hadoop fs -ls /user/hive/warehouse/records
    Found 1 items
    -rw-r--r--   2 hadoop supergroup    7118627 2013-04-15 20:06 /user/hive/warehouse/records/access_log.txt