Installing Elasticsearch on CentOS 6.5

Version: elasticsearch-6.4.3.tar.gz

JDK: 1.8 (Elasticsearch 6.x requires Java 8 or later)

Note: Elasticsearch must not be installed or run as the root user.

1. Create a user

groupadd esgroup

useradd esuser -g esgroup -p espawd

(Note: useradd's -p option expects an already-encrypted password; you can also set the password afterwards with passwd esuser.)

 

2. Install Elasticsearch

Install as the newly created user (installing as root will fail with an error).

tar -zxvf elasticsearch-6.4.3.tar.gz -C /home/hadoop/opt/

Edit elasticsearch.yml

vim elasticsearch-6.4.3/config/elasticsearch.yml

Add the following:

network.host: 192.168.179.142
http.port: 9200

# CentOS 6 does not support seccomp, and ES checks for it because bootstrap.system_call_filter defaults to true

Also add to elasticsearch.yml:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false

As root, edit /etc/sysctl.conf

vim /etc/sysctl.conf

Append the following at the end of the file:

vm.max_map_count=262144

After saving and exiting, run sysctl -p to apply the change:

[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
vm.max_map_count = 262144
[root@localhost ~]#

As root, edit /etc/security/limits.conf

    vim /etc/security/limits.conf

 

Add the following:

 
* hard nofile 65536
* soft nofile 65536
* soft nproc 2048
* hard nproc 4096 

Starting Elasticsearch may still fail with the following error:

max number of threads [1024] for user [lish] likely too low, increase to at least [4096]

Fix: switch to the root user and edit the configuration file under the limits.d directory.

vi /etc/security/limits.d/90-nproc.conf

Change the following line:

* soft nproc 1024

# to

* soft nproc 4096

3. Start Elasticsearch

After completing the configuration changes above, switch to the es user, change into the Elasticsearch installation directory, and run:

cd elasticsearch-6.4.3/bin/

./elasticsearch

 

Open <IP>:9200 in a browser to verify the startup. Output similar to the following means Elasticsearch started successfully:

{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "8okSnhNzRr6Xo233szO0Vg",
  "version" : {
    "number" : "6.3.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "053779d",
    "build_date" : "2018-07-20T05:20:23.451332Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

 

4. Troubleshooting startup errors

Error 1: expecting token of type [START_OBJECT] but found [VALUE_STRING]];

Cause: a formatting error inside elasticsearch.yml.
Fix: check each entry in the YAML file carefully; the format is "name: value", with a space after the colon.

---------------------------------------------------------------------------------
Error 2: java.lang.UnsupportedOperationException: seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed
Cause: CentOS 6 does not support seccomp, and ES checks for it because bootstrap.system_call_filter defaults to true; the failed check prevents ES from starting.

Fix: add the following to elasticsearch.yml:

bootstrap.memory_lock: false
bootstrap.system_call_filter: false

---------------------------------------------------------------------------------
Error 3: BindTransportException[Failed to bind to [9300-9400]
Fix: open elasticsearch.yml and change network.host from 192.168.0.1 to the machine's own IP or 0.0.0.0.


--------------------------------------------------------------------------------------------
Error 4: max number of threads [1024] for user [lish] likely too low, increase to at least [2048]

Fix: switch to the root user and edit the configuration file under the limits.d directory.

vi /etc/security/limits.d/90-nproc.conf 

Change the following line:

* soft nproc 1024

# to

* soft nproc 2048

 

5. Install Kibana

 

 tar -zxvf kibana-6.4.3-linux-x86_64.tar.gz -C /home/hadoop/opt/kibana/

 vim kibana/kibana-6.4.3-linux-x86_64/config/kibana.yml 

Add the following:

server.port: 5601
server.host: "192.168.179.142"
elasticsearch.url: "http://192.168.179.142:9200"

Start it:

./kibana

Verify:

http://192.168.179.142:5601/app/kibana

Basic Kibana console commands

Create an index

# test1 is the name of the index

PUT /test1/
{
  "settings": {
     
    "index":{
      # default number of primary shards (5)
      "number_of_shards":5,
       # number of replicas; set to 0 if there is only one machine
      "number_of_replicas":0
    }
  }
  
}

View the index

GET /test1

GET /test1/_settings

 

Add a document

PUT /test1/user/1
{
  "name":"zhangsan",
  "age":32,
  "interests":["music","eat"]
}

# The response is:
{
  "_index": "test1",
  "_type": "user",
  "_id": "1",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 0,
  "_primary_term": 1
}

 

Without specifying an id, use a POST request

Note: when no id is specified, a UUID-like id is generated automatically

POST /test1/user/
{
  "name":"lisi",
  "age":23,
  "interests":["forestry"]
  
}

# The response is:

{
  "_index": "test1",
  "_type": "user",
  "_id": "hSE-am4BgZQzNHtis52h",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 1,
  "_primary_term": 1
}

 

Get a document

GET /test1/user/1

GET /test1/user/hSE-am4BgZQzNHtis52h

GET /test1/user/1?_source=age,name

# The response is:

{
  "_index": "test1",
  "_type": "user",
  "_id": "1",
  "_version": 1,
  "found": true,
  "_source": {
    "name": "zhangsan",
    "age": 32
  }
}

 

Update a document

PUT /test1/user/1
{
  
  "name":"wangwu",
  "age":43,
  "interests":["music"]
  
}

# The response is:
{
  "_index": "test1",
  "_type": "user",
  "_id": "1",
  "_version": 2,
  "result": "updated",
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 2,
  "_primary_term": 1
}

Delete a document

DELETE /test1/user/1

Delete an index

DELETE /test1

 

 

 

Integrating Elasticsearch with Spring Boot

application.properties

server.port=8080
spring.application.name=springboot-es
# cluster name
spring.data.elasticsearch.cluster-name=moescluster
# cluster address; port 9300 is the transport port used for node-to-node communication
spring.data.elasticsearch.cluster-nodes=192.168.179.142:9300

DAO interface

package com.zf.mo.springbootes.dao;

import com.zf.mo.springbootes.entity.User;
import org.springframework.data.repository.CrudRepository;

public interface UserRepository extends CrudRepository<User, String> {

}

Entity class

package com.zf.mo.springbootes.entity;

import lombok.Data;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;

import java.io.Serializable;
@Data
@Document(indexName = "mo",type = "user")
public class User implements Serializable {

    @Id
    private String id;
    private String name;
    private String password;
    private Integer age;
    private String gender;

}

 

Controller class

package com.zf.mo.springbootes.controller;

import com.zf.mo.springbootes.dao.UserRepository;
import com.zf.mo.springbootes.entity.User;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ElasticsearchController {

    @Autowired
    private UserRepository userRepository;

    @PostMapping("/add")
    public User add(@RequestBody User user){
        return userRepository.save(user);
    }

    @RequestMapping("/findById")
    public User findById(String id){
       return  userRepository.findById(id).get();
    }
}

Application (startup) class

package com.zf.mo.springbootes;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.elasticsearch.repository.config.EnableElasticsearchRepositories;

@SpringBootApplication
@EnableElasticsearchRepositories()
public class SpringbootEsApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringbootEsApplication.class, args);
    }

}

 

Test

Add a document through the /add endpoint, then query it back by id through /findById.
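A minimal sketch using curl (port 8080 and the /add and /findById endpoints come from the configuration and controller above; the field values are illustrative):

# add a user document (fields match the User entity)
curl -X POST http://localhost:8080/add \
     -H "Content-Type: application/json" \
     -d '{"id":"1","name":"zhangsan","password":"123456","age":32,"gender":"male"}'

# fetch it back by id
curl "http://localhost:8080/findById?id=1"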

 

 

 

 

_search: query all documents
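For example, in the Kibana console (test1/user is the index and type created above):

GET /test1/user/_search
{
  "query": {
    "match_all": {}
  }
}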

 

Query documents by multiple ids
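One way to do this is an ids query; the ids below are the ones created earlier:

GET /test1/user/_search
{
  "query": {
    "ids": {
      "values": ["1", "hSE-am4BgZQzNHtis52h"]
    }
  }
}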

 

 

Range query
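For example, a range query on the age field (the bounds are illustrative):

GET /test1/user/_search
{
  "query": {
    "range": {
      "age": {
        "gte": 20,
        "lte": 40
      }
    }
  }
}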

 

 

Query by a field value
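For example, a match query on the name field (field and value come from the sample documents above):

GET /test1/user/_search
{
  "query": {
    "match": {
      "name": "zhangsan"
    }
  }
}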

 

 

Sorted query
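For example, returning all documents sorted by age in descending order (field and order are illustrative):

GET /test1/user/_search
{
  "query": {
    "match_all": {}
  },
  "sort": [
    { "age": { "order": "desc" } }
  ]
}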

 

 

 

 

Elasticsearch DSL queries

  • term

  • a term-level query
  • exact matching: the input is not run through an analyzer, and the document must contain the entire value
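A minimal term example against the sample data above (the value must match the indexed term exactly):

GET /test1/user/_search
{
  "query": {
    "term": {
      "age": 32
    }
  }
}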

 

 

  • match
  • a full-text (fuzzy) style query: a document matches as long as it contains some of the keywords
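A minimal match example; the query string is analyzed, and a document matches if it contains any of the resulting terms:

GET /test1/user/_search
{
  "query": {
    "match": {
      "interests": "music eat"
    }
  }
}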

 

 

  • filter: filter context; it only decides whether a document matches, computes no relevance score, and its results can be cached
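A minimal filter sketch using a bool query (the range condition on age is illustrative):

GET /test1/user/_search
{
  "query": {
    "bool": {
      "filter": [
        { "range": { "age": { "gte": 20, "lte": 40 } } }
      ]
    }
  }
}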

 

 

https://blog.csdn.net/supermao1013/article/details/84261526 
 

Analyzers

  • What is an analyzer?
  • ES's default analyzer handles Chinese poorly: by default each Chinese character becomes its own token. For example, "你好世界" is tokenized into 「你」, 「好」, 「世」, 「界」.
  • Because of this problem, we need to use a different analyzer: the IK analyzer.

Default analyzer
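The default behaviour can be seen with the _analyze API; for example, the standard analyzer splits the Chinese text below into single-character tokens:

GET _analyze
{
  "analyzer": "standard",
  "text": "你好世界"
}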

 

 

 

 

Install the IK analyzer

 
 
 
To build the analyzer package yourself:
git clone https://github.com/medcl/elasticsearch-analysis-ik
cd elasticsearch-analysis-ik
mvn package

Or download the prebuilt zip from:

https://github.com/medcl/elasticsearch-analysis-ik

 

  • Note: the elasticsearch-analysis-ik version must match the installed ES version.
  • Upload the downloaded elasticsearch-analysis-ik-6.4.3.zip to the Linux host, unzip it into an ik folder, then copy the ik folder into elasticsearch-6.4.3/plugins:
 
unzip elasticsearch-analysis-ik-6.4.3.zip -d ./ik
cp -r ./ik /usr/local/elasticsearch/elasticsearch-6.4.3/plugins/
  • Restart ES
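After the restart, the plugin can be verified with the _analyze API using the analyzers it provides, ik_max_word (fine-grained) and ik_smart (coarse-grained), for example:

GET _analyze
{
  "analyzer": "ik_max_word",
  "text": "你好世界"
}

Unlike the standard analyzer, IK should now return whole words such as 你好 and 世界 rather than single characters.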
 
 

 

 

 

 

Custom extension dictionary

  Concept

  • Configure new words ahead of time; ES will then consult the extension dictionary, recognize them as whole words, and tokenize accordingly.

  How to define an extension dictionary

  • Create a custom folder under /usr/local/elasticsearch/elasticsearch-6.4.3/plugins/ik/config
  • Create a new_word.dic file inside the custom folder and add the new words to it
 

 

 

 
  • Edit the IKAnalyzer.cfg.xml file so that its ext_dict entry points to custom/new_word.dic, then restart ES
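A quick check that the extension dictionary is loaded (藍瘦香菇 is a hypothetical entry in new_word.dic; with the dictionary loaded it should come back as a single token instead of being split up):

GET _analyze
{
  "analyzer": "ik_max_word",
  "text": "藍瘦香菇"
}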
 

 

 

Document mapping in ES

1. Concept

  • A mapping in ES defines the structure of a document: which fields it has, the type of each field, which analyzer a field uses, what attributes a field has, and so on.
  • Dynamic mapping
  • ES does not require the mapping to be defined in advance; field types are recognized automatically from the document's fields when it is first written to the index.
  • Static mapping
  • The mapping, including every field of the document and its type, must be defined in advance.
  • Types
  • String types: text and keyword. text is analyzed and indexed, but does not support aggregation or sorting; keyword is not analyzed and is used for aggregation, sorting, and filtering.
  • Basic types
  • Integers: long, integer, short, byte
  • Floating point: double, float
  • Boolean: boolean
  • Date: date
  • Binary: binary
  • Array type: []
  • Complex types
  • Object type: {}
  • Geo types
  • Specialized types
  • Example: creating a document mapping (see the sketch below)
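A minimal sketch of an explicit mapping (the index name test2, type user, and fields are illustrative; the ik_max_word analyzer assumes the IK plugin installed above):

PUT /test2
{
  "mappings": {
    "user": {
      "properties": {
        "name":     { "type": "text", "analyzer": "ik_max_word" },
        "gender":   { "type": "keyword" },
        "age":      { "type": "integer" },
        "birthday": { "type": "date" }
      }
    }
  }
}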

 

Setting up ELK

Install Logstash

tar -zxvf logstash-6.4.3.tar.gz

cd /home/hadoop/opt/logstash/logstash-6.4.3/config

# create the file
touch logstash.conf

Contents of logstash.conf (reads log files and stores them in Elasticsearch):

input {
        file {
          path => "/home/hadoop/opt/elasticsearch/elasticsearch-6.4.3/logs/*.log"
          start_position => beginning

        }

}

filter {

}

output {

         elasticsearch {
            hosts => "192.168.179.11:9200"
         }

}

 

Logstash configuration to import MySQL data into ES

input {
    jdbc {
        jdbc_driver_library => "/home/hadoop/opt/logstash-6.4.3/lib/mysql-connector-java-5.1.48.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://192.168.246.1:3306/clouddb03?autoReconnect=true&useSSL=false"
        jdbc_user => "root"
        jdbc_password => "root"
        jdbc_default_timezone => "Asia/Shanghai"
        statement => "SELECT * FROM  dept"
    }

}

output {
    elasticsearch {
        index => "mysqldb"
        document_type => "dept"
        document_id => "%{elkid}"
        hosts => ["192.168.179.142:9200"]
    }

}

 

 

Start Logstash

sh logstash -f /home/hadoop/opt/logstash/logstash-6.4.3/config/logstash.conf  &

If it complains about --path.data, specify a path.data directory; any writable path will do. For example, I start it like this:

sh logstash -f /home/hadoop/opt/logstash/logstash-6.4.3/config/logstash.conf --path.data=/home/elk/logstash-6.4.2/logs &

Afterwards you can see the logs pushed by Logstash in Kibana.

Logstash database import
