Full-Text Search with Elasticsearch


(Covers the integration of elasticsearch-5.6.4 + logstash-5.6.4 + kibana-5.6.4)

Elasticsearch is, at its core, a distributed document store and search engine: multiple servers work together, and each server can run several Elasticsearch instances.
A single instance is called a node; a group of nodes forms a cluster. It stores, searches, and analyzes huge volumes of data quickly, which is why Wikipedia, Stack Overflow, and GitHub all use it.

Extracting and installing

[hadoop@master Downloads]$ tar -zxvf elasticsearch-5.6.4.tar.gz -C ../ELK
[hadoop@master bin]$ pwd
/home/hadoop/ELK/elasticsearch-5.6.4/bin
[hadoop@master bin]$
[hadoop@master config]$ pwd
/home/hadoop/ELK/elasticsearch-5.6.4/config
[hadoop@master elasticsearch-5.6.4]$ pwd
/home/hadoop/ELK/elasticsearch-5.6.4
[hadoop@master elasticsearch-5.6.4]$ mkdir data logs

[hadoop@master elasticsearch-5.6.4]$ cd config
[hadoop@master config]$ pwd
/home/hadoop/ELK/elasticsearch-5.6.4/config
[hadoop@master config]$ vi elasticsearch.yml
cluster.name: es_cluster
node.name: node0
path.data: /home/hadoop/ELK/elasticsearch-5.6.4/data
path.logs: /home/hadoop/ELK/elasticsearch-5.6.4/logs
network.host: master
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.16.31.220", "172.16.31.221", "172.16.31.224"]
discovery.zen.minimum_master_nodes: 2   # quorum: (3 master-eligible nodes / 2) + 1 = 2
http.cors.enabled: true
http.cors.allow-origin: "*"
Note: network.host: master binds Elasticsearch to the master host's address only, so it is unreachable from any other address; setting it to 0.0.0.0 makes it listen on every interface.
[hadoop@master bin]$ ls -al
total 352
drwxr-xr-x. 2 hadoop hadoop   4096 Jan  4 20:26 .
drwxr-xr-x. 9 hadoop hadoop    155 Jan  4 20:32 ..
-rwxr-xr-x. 1 hadoop hadoop   8075 Nov  1 02:54 elasticsearch
-rw-r--r--. 1 hadoop hadoop   3343 Nov  1 02:54 elasticsearch.bat
-rw-r--r--. 1 hadoop hadoop   1023 Nov  1 02:54 elasticsearch.in.bat
-rwxr-xr-x. 1 hadoop hadoop    367 Nov  1 02:54 elasticsearch.in.sh
-rwxr-xr-x. 1 hadoop hadoop   2550 Nov  1 02:54 elasticsearch-keystore
-rw-r--r--. 1 hadoop hadoop    743 Nov  1 02:54 elasticsearch-keystore.bat
-rwxr-xr-x. 1 hadoop hadoop   2540 Nov  1 02:54 elasticsearch-plugin
-rw-r--r--. 1 hadoop hadoop    731 Nov  1 02:54 elasticsearch-plugin.bat
-rw-r--r--. 1 hadoop hadoop  11239 Nov  1 02:54 elasticsearch-service.bat
-rw-r--r--. 1 hadoop hadoop 104448 Nov  1 02:54 elasticsearch-service-mgr.exe
-rw-r--r--. 1 hadoop hadoop 103936 Nov  1 02:54 elasticsearch-service-x64.exe
-rw-r--r--. 1 hadoop hadoop  80896 Nov  1 02:54 elasticsearch-service-x86.exe
-rwxr-xr-x. 1 hadoop hadoop    223 Nov  1 02:54 elasticsearch-systemd-pre-exec
-rwxr-xr-x. 1 hadoop hadoop   2514 Nov  1 02:54 elasticsearch-translog
-rw-r--r--. 1 hadoop hadoop   1435 Nov  1 02:54 elasticsearch-translog.bat
[hadoop@master bin]$ pwd
/home/hadoop/ELK/elasticsearch-5.6.4/bin

Basic startup commands

[hadoop@master bin]$ pwd
/home/hadoop/ELK/elasticsearch/bin
[hadoop@master bin]$ ./elasticsearch
[hadoop@master bin]$ nohup ./elasticsearch &
[2018-01-07T13:24:39,507][INFO ][o.e.n.Node               ] [node0] initializing ...
[2018-01-07T13:24:39,764][INFO ][o.e.e.NodeEnvironment    ] [node0] using [1] data paths, mounts [[/home (/dev/mapper/cl-home)]], net usable_space [373.4gb], net total_space [396.7gb], spins? [possibly], types [xfs]
[2018-01-07T13:24:39,765][INFO ][o.e.e.NodeEnvironment    ] [node0] heap size [247.5mb], compressed ordinary object pointers [true]
[2018-01-07T13:24:39,766][INFO ][o.e.n.Node               ] [node0] node name [node0], node ID [PXbvgL2TQvauCmRQHRFVFA]
[2018-01-07T13:24:39,767][INFO ][o.e.n.Node               ] [node0] version[5.6.4], pid[3836], build[8bbedf5/2017-10-31T18:55:38.105Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_144/25.144-b01]
[2018-01-07T13:24:39,767][INFO ][o.e.n.Node               ] [node0] JVM arguments [-Xms256m, -Xmx256m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/home/hadoop/ELK/elasticsearch]
[2018-01-07T13:24:44,189][INFO ][o.e.p.PluginsService     ] [node0] loaded module [aggs-matrix-stats]
[2018-01-07T13:24:44,189][INFO ][o.e.p.PluginsService     ] [node0] loaded module [ingest-common]
[2018-01-07T13:24:44,189][INFO ][o.e.p.PluginsService     ] [node0] loaded module [lang-expression]
[2018-01-07T13:24:44,189][INFO ][o.e.p.PluginsService     ] [node0] loaded module [lang-groovy]
[2018-01-07T13:24:44,189][INFO ][o.e.p.PluginsService     ] [node0] loaded module [lang-mustache]
[2018-01-07T13:24:44,189][INFO ][o.e.p.PluginsService     ] [node0] loaded module [lang-painless]
[2018-01-07T13:24:44,190][INFO ][o.e.p.PluginsService     ] [node0] loaded module [parent-join]
[2018-01-07T13:24:44,190][INFO ][o.e.p.PluginsService     ] [node0] loaded module [percolator]
[2018-01-07T13:24:44,190][INFO ][o.e.p.PluginsService     ] [node0] loaded module [reindex]
[2018-01-07T13:24:44,190][INFO ][o.e.p.PluginsService     ] [node0] loaded module [transport-netty3]
[2018-01-07T13:24:44,190][INFO ][o.e.p.PluginsService     ] [node0] loaded module [transport-netty4]
[2018-01-07T13:24:44,191][INFO ][o.e.p.PluginsService     ] [node0] loaded plugin [x-pack]
[2018-01-07T13:24:46,960][DEBUG][o.e.a.ActionModule       ] Using REST wrapper from plugin org.elasticsearch.xpack.XPackPlugin
[2018-01-07T13:24:48,275][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/3883] [Main.cc@128] controller (64 bit): Version 5.6.4 (Build 324db309768a2c) Copyright (c) 2017 Elasticsearch BV
[2018-01-07T13:24:48,363][INFO ][o.e.d.DiscoveryModule    ] [node0] using discovery type [zen]
[2018-01-07T13:24:49,976][INFO ][o.e.n.Node               ] [node0] initialized
[2018-01-07T13:24:49,976][INFO ][o.e.n.Node               ] [node0] starting ...
[2018-01-07T13:24:50,404][INFO ][o.e.t.TransportService   ] [node0] publish_address {192.168.88.80:9300}, bound_addresses {192.168.88.80:9300}
[2018-01-07T13:24:50,416][INFO ][o.e.b.BootstrapChecks    ] [node0] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2018-01-07T13:24:53,532][INFO ][o.e.c.s.ClusterService   ] [node0] new_master {node0}{PXbvgL2TQvauCmRQHRFVFA}{_k0EZLiERa-FbQ2HmbiXOA}{master}{192.168.88.80:9300}{ml.max_open_jobs=10, ml.enabled=true}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2018-01-07T13:24:53,565][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [node0] publish_address {192.168.88.80:9200}, bound_addresses {192.168.88.80:9200}
[2018-01-07T13:24:53,565][INFO ][o.e.n.Node               ] [node0] started
[2018-01-07T13:24:53,729][INFO ][o.e.g.GatewayService     ] [node0] recovered [0] indices into cluster_state
[2018-01-07T13:24:53,978][INFO ][o.e.x.m.MachineLearningTemplateRegistry] [node0] successfully created .ml-state index template
[2018-01-07T13:24:54,070][INFO ][o.e.x.m.MachineLearningTemplateRegistry] [node0] successfully created .ml-meta index template
[2018-01-07T13:24:54,173][INFO ][o.e.x.m.MachineLearningTemplateRegistry] [node0] successfully created .ml-notifications index template
[2018-01-07T13:24:54,415][INFO ][o.e.x.m.MachineLearningTemplateRegistry] [node0] successfully created .ml-anomalies- index template
[2018-01-07T13:24:54,893][INFO ][o.e.l.LicenseService     ] [node0] license [31d01cfa-7b68-4e8a-bc31-f652a84c33bf] mode [trial] - valid
[2018-01-07T13:25:00,428][INFO ][o.e.c.m.MetaDataCreateIndexService] [node0] [.monitoring-es-6-2018.01.07] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[1], mappings [doc]
[2018-01-07T13:25:00,660][INFO ][o.e.c.m.MetaDataCreateIndexService] [node0] [.watches] creating index, cause [auto(bulk api)], templates [watches], shards [1]/[1], mappings [watch]
[2018-01-07T13:25:01,009][INFO ][o.e.c.m.MetaDataMappingService] [node0] [.watches/SsWoX0kGQWWrS8Z8DqqxnA] update_mapping [watch]

[hadoop@master config]$ su - root
Password: 
Last login: Thu Nov 23 01:10:58 CST 2017 on tty1
[root@master ~]# vi /etc/sysctl.conf     # add: vm.max_map_count=262144
[root@master ~]# sysctl -p
vm.max_map_count = 262144
[root@master ~]#

vi elasticsearch
-Des.max-open-files=true

vi /etc/security/limits.conf
* soft nofile 65536

* hard nofile 65536
* soft nproc 2048
* hard nproc 4096
* - memlock unlimited

vi /etc/security/limits.d/90-nproc.conf
* soft nproc 2048

[hadoop@master config]$ vi jvm.options 
[hadoop@master config]$ pwd
/home/hadoop/ELK/elasticsearch-5.6.4/config
Change -Xms2g and -Xmx2g to -Xms256m and -Xmx256m (set both the initial and the maximum heap size to 256 MB).

Once startup succeeds, opening this HTTP URL returns a JSON status string:

http://192.168.88.80:9200/
[hadoop@master bin]$ ./elasticsearch-plugin install -h    # lists the official plugins that can be installed by name
Install a plugin

The following official plugins may be installed by name:
  analysis-icu
  analysis-kuromoji
  analysis-phonetic
  analysis-smartcn
  analysis-stempel
  analysis-ukrainian
  discovery-azure-classic
  discovery-ec2
  discovery-file
  discovery-gce
  ingest-attachment
  ingest-geoip
  ingest-user-agent
  lang-javascript
  lang-python
  mapper-attachments
  mapper-murmur3
  mapper-size
  repository-azure
  repository-gcs
  repository-hdfs
  repository-s3
  store-smb
  x-pack       # officially recommended; the head plugin is no longer recommended

[hadoop@master bin]$ ./elasticsearch-plugin install x-pack
-> Downloading x-pack from elastic
[=================================================] 100%   
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.io.FilePermission \\.\pipe\* read,write
* java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries
* java.lang.RuntimePermission getClassLoader
* java.lang.RuntimePermission setContextClassLoader
* java.lang.RuntimePermission setFactory
* java.security.SecurityPermission createPolicy.JavaPolicy
* java.security.SecurityPermission getPolicy
* java.security.SecurityPermission putProviderProperty.BC
* java.security.SecurityPermission setPolicy
* java.util.PropertyPermission * read,write
* java.util.PropertyPermission sun.nio.ch.bugLevel write
* javax.net.ssl.SSLPermission setHostnameVerifier
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.


Continue with installation? [y/N]y
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@        WARNING: plugin forks a native controller        @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
This plugin launches a native controller that is not subject to the Java
security manager nor to system call filters.

Continue with installation? [y/N]y
-> Installed x-pack

After the x-pack plugin is installed, the earlier curl calls no longer work as-is, because x-pack adds the Shield security layer and requires authentication.
[hadoop@master ELK]$ curl -u elastic:changeme http://master:9200
{
  "name" : "node0",
  "cluster_name" : "es_cluster",
  "cluster_uuid" : "b22cZBo3T0arseMzMpqUHw",
  "version" : {
    "number" : "5.6.4",
    "build_hash" : "8bbedf5",
    "build_date" : "2017-10-31T18:55:38.105Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
[hadoop@master ELK]$

So the command above has to supply a username and password (the defaults are elastic / changeme) for the call to succeed.
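Under the hood, `curl -u` just sends an HTTP Basic `Authorization` header. A minimal Python sketch of how that header is built (the credential pair is the x-pack default used throughout this article):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the HTTP Basic auth header that `curl -u user:password` sends."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

basic_auth_header("elastic", "changeme")  # 'Basic ZWxhc3RpYzpjaGFuZ2VtZQ=='
```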

 

Creating and deleting an index

[hadoop@master ELK]$ curl -u elastic:changeme -X PUT 'master:9200/myfirst_index'
{"acknowledged":true,"shards_acknowledged":true,"index":"myfirst_index"}
[hadoop@master ELK]$ curl -u elastic:changeme -X DELETE 'master:9200/myfirst_index'
{"acknowledged":true}
[hadoop@master ELK]$
 

Chinese word-segmentation plugin (IK): https://github.com/medcl/elasticsearch-analysis-ik

[hadoop@master bin]$ ./elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v5.6.4/elasticsearch-analysis-ik-5.6.4.zip
The plugin version must match the Elasticsearch version (5.6.4 here). If installing over the network fails, you can instead download the prebuilt zip from the releases page and unpack it manually:
[hadoop@master plugins]$ unzip ../../../Downloads/elasticsearch-analysis-ik-5.6.4.zip -d .
Archive:  ../../../Downloads/elasticsearch-analysis-ik-5.6.4.zip
   creating: ./elasticsearch/
   creating: ./elasticsearch/config/
  inflating: ./elasticsearch/config/extra_main.dic  
  inflating: ./elasticsearch/config/extra_single_word.dic  
  inflating: ./elasticsearch/config/extra_single_word_full.dic  
  inflating: ./elasticsearch/config/extra_single_word_low_freq.dic  
  inflating: ./elasticsearch/config/extra_stopword.dic  
  inflating: ./elasticsearch/config/IKAnalyzer.cfg.xml  
  inflating: ./elasticsearch/config/main.dic  
  inflating: ./elasticsearch/config/preposition.dic  
  inflating: ./elasticsearch/config/quantifier.dic  
  inflating: ./elasticsearch/config/stopword.dic  
  inflating: ./elasticsearch/config/suffix.dic  
  inflating: ./elasticsearch/config/surname.dic  
  inflating: ./elasticsearch/plugin-descriptor.properties  
  inflating: ./elasticsearch/elasticsearch-analysis-ik-5.6.4.jar  
  inflating: ./elasticsearch/httpclient-4.5.2.jar  
  inflating: ./elasticsearch/httpcore-4.4.4.jar  
  inflating: ./elasticsearch/commons-logging-1.2.jar  
  inflating: ./elasticsearch/commons-codec-1.9.jar  


[hadoop@master bin]$ pwd
/home/hadoop/ELK/elasticsearch/bin
[hadoop@master bin]$ ./elasticsearch-plugin install lang-python
-> Downloading lang-python from elastic
[=================================================] 100%   
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.lang.RuntimePermission createClassLoader
* java.lang.RuntimePermission getClassLoader
* org.elasticsearch.script.ClassPermission <<STANDARD>>
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.

Continue with installation? [y/N]y
-> Installed lang-python

With the word-segmentation plugin installed, create an index named spss:

curl -u elastic:changeme -X PUT 'master:9200/spss' -d '
{
  "mappings": {
    "person": {
      "properties": {
        "user": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_max_word"
        },
        "title": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_max_word"
        },
        "desc": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_max_word"
        }
      }
    }
  }
}'
Returns: {"acknowledged":true,"shards_acknowledged":true,"index":"spss"}

Inserting documents

curl -u elastic:changeme -X PUT 'master:9200/spss/person/1' -d '
{
  "user": "数据分析账户",
  "title": "SPSS分析师",
  "desc": "对汇总的交易数据进行分析"
}'
Returns: {"_index":"spss","_type":"person","_id":"1","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"created":true}
curl -u elastic:changeme -X POST 'master:9200/spss/person' -d '
{
  "user": "mongodb工程师",
  "title": "工程师",
  "desc": "mongodb数据库管理员"
}'
Returns: {"_index":"spss","_type":"person","_id":"AWDPbAjYXioyANhiVH2_","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"created":true}
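The two calls above differ only in how the document id is chosen: PUT to /index/type/id sets it explicitly, while POST to /index/type lets Elasticsearch auto-generate one (the AWDPbAjY… id above). A small hypothetical helper (the function name is mine, not part of any client API) makes the rule explicit:

```python
from typing import Optional, Tuple

def index_request(index: str, doc_type: str, doc_id: Optional[str] = None) -> Tuple[str, str]:
    """Pick the HTTP method and path for indexing a document:
    an explicit id means PUT; no id means POST with an auto-generated id."""
    if doc_id is not None:
        return "PUT", f"/{index}/{doc_type}/{doc_id}"
    return "POST", f"/{index}/{doc_type}"

index_request("spss", "person", "1")   # ('PUT', '/spss/person/1')
index_request("spss", "person")        # ('POST', '/spss/person')
```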

Querying a document

[hadoop@master bin]$ curl -u elastic:changeme 'master:9200/spss/person/1?pretty=true'
{
  "_index" : "spss",
  "_type" : "person",
  "_id" : "1",
  "_version" : 1,
  "found" : true,
  "_source" : {
    "user" : "数据分析账户",
    "title" : "SPSS分析师",
    "desc" : "对汇总的交易数据进行分析"
  }
}

When no matching document exists

[hadoop@master bin]$ curl -u elastic:changeme 'master:9200/spss/person/abc?pretty=true'
{
  "_index" : "spss",
  "_type" : "person",
  "_id" : "abc",
  "found" : false
}

Deleting a document

curl -u elastic:changeme -X DELETE 'master:9200/spss/person/1'
Returns: {"found":true,"_index":"spss","_type":"person","_id":"1","_version":2,"result":"deleted","_shards":{"total":2,"successful":1,"failed":0}}

Updating a document (a PUT to an existing id replaces the whole document and bumps _version)

curl -u elastic:changeme -X PUT 'master:9200/spss/person/1' -d '
{
    "user" : "数据分析账户",
    "title" : "SPSS分析师->鹅厂",
    "desc" : "数据库管理,软件开发"
}'

Returning all records

[hadoop@master bin]$ curl -u elastic:changeme 'master:9200/spss/person/_search'
{"took":23,"timed_out":false,"_shards":{"total":5,"successful":5,"skipped":0,"failed":0},"hits":{"total":2,"max_score":1.0,"hits":[{"_index":"spss","_type":"person","_id":"AWDPbAjYXioyANhiVH2_","_score":1.0,"_source":
{
  "user": "mongodb工程师",
  "title": "工程师",
  "desc": "mongodb数据库管理员"
}},{"_index":"spss","_type":"person","_id":"1","_score":1.0,"_source":
{
    "user" : "数据分析账户",
    "title" : "SPSS分析师->鹅厂",
    "desc" : "数据库管理,软件开发"
}}]}}[hadoop@master bin]$

Full-text search

curl -u elastic:changeme 'master:9200/spss/person/_search'  -d '
{
  "query" : { "match" : { "desc" : "开发" }}
}'
Returns: {"took":25,"timed_out":false,"_shards":{"total":5,"successful":5,"skipped":0,"failed":0},"hits":{"total":1,"max_score":0.28582606,"hits":[{"_index":"spss","_type":"person","_id":"1","_score":0.28582606,"_source":
{
    "user" : "数据分析账户",
    "title" : "SPSS分析师->鹅厂",
    "desc" : "数据库管理,软件开发"
}}]}}[hadoop@master bin]$

Paginated full-text search

curl -u elastic:changeme 'master:9200/spss/person/_search'  -d '
{
  "query" : { "match" : { "desc" : "开发" }},
  "from": 0,
  "size": 10
}'
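Here from is a zero-based offset and size the page length, so a 1-based page n starts at (n - 1) * size. A tiny sketch (the helper name is illustrative):

```python
def page_params(page: int, size: int = 10) -> dict:
    """Map a 1-based page number onto Elasticsearch's zero-based from/size."""
    return {"from": (page - 1) * size, "size": size}

page_params(1)     # {'from': 0, 'size': 10}  — same as the query above
page_params(3)     # {'from': 20, 'size': 10}
```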

Logical OR (space-separated terms in a single match query are OR-ed together)

curl -u elastic:changeme 'master:9200/spss/person/_search'  -d '
{
  "query" : { "match" : { "desc" : "开发 数据" }},
  "from": 0,
  "size": 10
}'

Logical AND (wrap the terms in a bool query's must clause)

curl -u elastic:changeme 'master:9200/spss/person/_search'  -d '
{
  "query": {
    "bool": {
      "must": [
        { "match": { "desc": "开发" } },
        { "match": { "desc": "数据" } }
      ]
    }
  }
}'
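The OR/AND contrast above can be captured in two small query builders (hypothetical helpers, shown only to contrast the two request shapes):

```python
def match_or(field: str, *terms: str) -> dict:
    """One match query: space-separated terms are OR-ed by default."""
    return {"query": {"match": {field: " ".join(terms)}}}

def match_and(field: str, *terms: str) -> dict:
    """bool/must: every clause has to match, i.e. AND semantics."""
    return {"query": {"bool": {"must": [{"match": {field: t}} for t in terms]}}}

match_or("desc", "开发", "数据")
# {'query': {'match': {'desc': '开发 数据'}}}
match_and("desc", "开发", "数据")
# {'query': {'bool': {'must': [{'match': {'desc': '开发'}}, {'match': {'desc': '数据'}}]}}}
```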

Logstash

[hadoop@master ELK]$ pwd
/home/hadoop/ELK
[hadoop@master ELK]$ tar zxvf ../Downloads/logstash-5.6.4.tar.gz -C .

[hadoop@master config]$ ls -al
total 20
drwxrwxr-x.  2 hadoop hadoop   93 Jan  4 22:19 .
drwxrwxr-x. 11 hadoop hadoop  247 Jan  4 22:19 ..
-rw-r--r--.  1 hadoop hadoop 1875 Nov  1 04:10 jvm.options
-rw-r--r--.  1 hadoop hadoop 3958 Nov  1 04:10 log4j2.properties
-rw-r--r--.  1 hadoop hadoop 5755 Nov  1 04:10 logstash.yml
-rw-r--r--.  1 hadoop hadoop 1702 Nov  1 04:10 startup.options
[hadoop@master config]$ pwd
/home/hadoop/ELK/logstash-5.6.4/config

Testing

[hadoop@master bin]$ pwd
/home/hadoop/ELK/logstash-5.6.4/bin
[hadoop@master bin]$ ./logstash -e 'input { stdin { } } output { stdout {}}'
This first test only verifies that the installation works: whatever you type on stdin is echoed straight back to stdout.

Verifying that the setup is correct

/home/hadoop/ELK/logstash-5.6.4/bin
[hadoop@master bin]$ logstash -e 'input { stdin { } } output { stdout {}}'
bash: logstash: command not found...
[hadoop@master bin]$ ./logstash -e 'input { stdin { } } output { stdout {}}'
Sending Logstash's logs to /home/hadoop/ELK/logstash-5.6.4/logs which is now configured via log4j2.properties
[2018-01-07T12:14:03,742][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/home/hadoop/ELK/logstash-5.6.4/modules/fb_apache/configuration"}
[2018-01-07T12:14:03,813][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/home/hadoop/ELK/logstash-5.6.4/modules/netflow/configuration"}
[2018-01-07T12:14:04,478][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2018-01-07T12:14:04,613][INFO ][logstash.pipeline        ] Pipeline main started
The stdin plugin is now waiting for input:
[2018-01-07T12:14:05,070][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
test my logstatash
2018-01-07T04:14:30.684Z master test my logstatash

nohup bin/logstash -f etc/ &
kill -9 pid

Installing and using plugins

./bin/logstash-plugin install kv
./bin/logstash-plugin install urldecode
./bin/logstash-plugin install grok
./bin/logstash-plugin install useragent
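As an illustration of what the kv filter does, here is a rough Python equivalent (a simplification; the real plugin additionally handles quoting, trimming, and many options):

```python
def kv_parse(message: str, field_split: str = " ", value_split: str = "=") -> dict:
    """Roughly what Logstash's kv filter does: lift key=value pairs out of a line."""
    fields = {}
    for pair in message.split(field_split):
        key, sep, value = pair.partition(value_split)
        if sep:  # skip tokens with no key/value separator
            fields[key] = value
    return fields

kv_parse("user=hadoop action=login status=ok")
# {'user': 'hadoop', 'action': 'login', 'status': 'ok'}
```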

cd config
vi log4j_to_es.conf
input {
  log4j {
    mode => "server"
    host => "master"
    port => 4567
  }
}
filter {
  #Only matched data are send to output.
}
output {
  elasticsearch {
    action => "index"
    hosts  => "master:9200"
    index  => "applog"
  }
}

./bin/logstash -f config/log4j_to_es.conf   # the old `agent` subcommand was removed in Logstash 5.x

The log4j configuration on the Java side

log4j.rootLogger=INFO,console
# for package com.demo.elk, log would be sent to socket appender.
log4j.logger.com.demo.elk=DEBUG, socket

# appender socket
log4j.appender.socket=org.apache.log4j.net.SocketAppender
log4j.appender.socket.Port=4567
log4j.appender.socket.RemoteHost=master
log4j.appender.socket.layout=org.apache.log4j.PatternLayout
log4j.appender.socket.layout.ConversionPattern=%d [%-5p] [%l] %m%n
log4j.appender.socket.ReconnectionDelay=10000

# appender console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.out
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d [%-5p] [%l] %m%n

package com.demo.elk;

import org.apache.log4j.Logger;

public class Application {
    private static final Logger LOGGER = Logger.getLogger(Application.class);
    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 10; i++) {
            LOGGER.error("Info log [" + i + "].");
            Thread.sleep(500);
        }
    }
}

Writing log output into Elasticsearch

[hadoop@master bin]$ ./logstash -e 'input { stdin{} }  output {  elasticsearch { hosts => "192.168.88.80:9200" user => "elastic" password => "changeme" index => "matishan" } }'
/home/hadoop/ELK/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:108 warning: already initialized constant DEFAULT_MAX_POOL_SIZE
/home/hadoop/ELK/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:110 warning: already initialized constant DEFAULT_REQUEST_TIMEOUT
/home/hadoop/ELK/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:111 warning: already initialized constant DEFAULT_SOCKET_TIMEOUT
/home/hadoop/ELK/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:112 warning: already initialized constant DEFAULT_CONNECT_TIMEOUT
/home/hadoop/ELK/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:113 warning: already initialized constant DEFAULT_MAX_REDIRECTS
/home/hadoop/ELK/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:114 warning: already initialized constant DEFAULT_EXPECT_CONTINUE
/home/hadoop/ELK/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:115 warning: already initialized constant DEFAULT_STALE_CHECK
/home/hadoop/ELK/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:590 warning: already initialized constant ISO_8859_1
/home/hadoop/ELK/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:641 warning: already initialized constant KEY_EXTRACTION_REGEXP
Sending Logstash's logs to /home/hadoop/ELK/logstash/logs which is now configured via log4j2.properties
[2018-01-07T15:45:41,415][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/home/hadoop/ELK/logstash/modules/fb_apache/configuration"}
[2018-01-07T15:45:41,424][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/home/hadoop/ELK/logstash/modules/netflow/configuration"}
[2018-01-07T15:45:42,482][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@192.168.88.80:9200/]}}
[2018-01-07T15:45:42,487][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@192.168.88.80:9200/, :path=>"/"}
[2018-01-07T15:45:42,836][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@192.168.88.80:9200/"}
[2018-01-07T15:45:42,945][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-01-07T15:45:42,977][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-01-07T15:45:43,003][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2018-01-07T15:45:43,266][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//192.168.88.80:9200"]}
[2018-01-07T15:45:43,272][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2018-01-07T15:45:43,295][INFO ][logstash.pipeline        ] Pipeline main started
The stdin plugin is now waiting for input:
[2018-01-07T15:45:43,416][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

In a browser on the Linux host, open http://master:9200/matishan/_search (username elastic, password changeme) and you will see:
{"took":4,"timed_out":false,"_shards":{"total":5,"successful":5,"skipped":0,"failed":0},"hits":{"total":1,"max_score":1.0,"hits":[{"_index":"matishan","_type":"logs","_id":"AWDPmefuXioyANhiVImf","_score":1.0,"_source":{"@version":"1","host":"master","@timestamp":"2018-01-07T07:50:58.009Z","message":"maxspssinformation"}}]}}
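Pulling the indexed lines back out of a response like this just takes a little JSON unwrapping. A sketch, using the response body shown above verbatim:

```python
import json

# The _search response body shown above, verbatim:
body = '{"took":4,"timed_out":false,"_shards":{"total":5,"successful":5,"skipped":0,"failed":0},"hits":{"total":1,"max_score":1.0,"hits":[{"_index":"matishan","_type":"logs","_id":"AWDPmefuXioyANhiVImf","_score":1.0,"_source":{"@version":"1","host":"master","@timestamp":"2018-01-07T07:50:58.009Z","message":"maxspssinformation"}}]}}'

def extract_messages(search_response: str):
    """Return (total hits, list of indexed log lines) from a _search response."""
    hits = json.loads(search_response)["hits"]
    return hits["total"], [h["_source"]["message"] for h in hits["hits"]]

extract_messages(body)  # (1, ['maxspssinformation'])
```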

Kibana

[hadoop@master ELK]$ pwd
/home/hadoop/ELK
[hadoop@master ELK]$ tar zxvf ./kibana-5.6.4-linux-x86_64.tar.gz -C .
[hadoop@master config]$ pwd
/home/hadoop/ELK/kibana/config

Edit the configuration (config/kibana.yml):

server.port: 8601
server.host: "192.168.88.80"
elasticsearch.url: "http://192.168.88.80:9200"
elasticsearch.username: "elastic"
elasticsearch.password: "changeme"
Because x-pack enforces username/password authentication on Elasticsearch, Kibana also needs these credentials configured.

[hadoop@master bin]$ ./kibana
  log   [08:10:04.315] [info][status][plugin:kibana@5.6.4] Status changed from uninitialized to green - Ready
  log   [08:10:04.446] [info][status][plugin:elasticsearch@5.6.4] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [08:10:04.501] [info][status][plugin:console@5.6.4] Status changed from uninitialized to green - Ready
  log   [08:10:04.558] [info][status][plugin:metrics@5.6.4] Status changed from uninitialized to green - Ready
  log   [08:10:04.862] [info][status][plugin:timelion@5.6.4] Status changed from uninitialized to green - Ready
  log   [08:10:04.877] [info][listening] Server running at http://192.168.88.80:8601
  log   [08:10:04.881] [info][status][ui settings] Status changed from uninitialized to yellow - Elasticsearch plugin is yellow
  log   [08:10:09.922] [info][status][plugin:elasticsearch@5.6.4] Status changed from yellow to yellow - No existing Kibana index found
  log   [08:10:11.053] [info][status][plugin:elasticsearch@5.6.4] Status changed from yellow to green - Kibana index ready
  log   [08:10:11.054] [info][status][ui settings] Status changed from yellow to green - Ready

In a browser on the Linux host

Open http://master:8601, then log in with username elastic and password changeme to reach the Kibana UI.
Create an index pattern for the matishan index created earlier; after that, anything written into Elasticsearch can be searched from Kibana.