Once we have a Docker cluster up and running, the next problem is how to collect its logs. ELK provides a complete solution for this, and this article shows how to build an ELK stack with Docker and use it to collect the logs of a Docker cluster.
Introduction to ELK
ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana.
Elasticsearch is an open-source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful API, multiple data sources, and automatic search load balancing.
Logstash is a fully open-source tool that collects and filters your logs and stores them for later use.
Kibana is also an open-source, free tool. It provides a friendly web interface for analyzing the logs from Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
Building the ELK platform with Docker
First, let's edit the Logstash configuration file, logstash.conf:
input {
  udp {
    port => 5000
    type => json
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200" # send Logstash's output to Elasticsearch; change this to your own host
  }
}
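Once Logstash is running you can sanity-check this UDP input by firing a single JSON line at it with netcat. This is only a sketch: it assumes Logstash's UDP port is reachable on localhost:5000, so adjust the host and port to whatever mapping you end up using.

# Send one JSON event to the Logstash UDP input; it should show up in Elasticsearch shortly after
echo '{"app":"test","msg":"hello from nc"}' | nc -u -w1 localhost 5000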
Next we need to adjust how Kibana is started.
Write a startup script that waits until Elasticsearch is up before launching Kibana:
#!/usr/bin/env bash
# Wait for the Elasticsearch container to be ready before starting Kibana.
echo "Stalling for Elasticsearch"
while true ; do
  nc -q 1 elasticsearch 9200 2>/dev/null && break
done
echo "Starting Kibana"
exec kibana
Modify the Dockerfile to build a custom Kibana image:
FROM kibana:latest

RUN apt-get update && apt-get install -y netcat

COPY entrypoint.sh /tmp/entrypoint.sh
RUN chmod +x /tmp/entrypoint.sh

RUN kibana plugin --install elastic/sense

CMD [ "/tmp/entrypoint.sh" ]
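docker-compose will build this image for us later, but you can also build and try it on its own first. A minimal sketch, assuming the Dockerfile and entrypoint.sh live in a kibana/ directory and using my-kibana as a purely illustrative tag:

docker build -t my-kibana ./kibana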
You can also tweak the Kibana configuration file and choose the plugins you need:
# Kibana is served by a back end server. This controls which port to use.
port: 5601

# The host to bind the server to.
host: "0.0.0.0"

# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://elasticsearch:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch_preserve_host: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"

# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# kibana_elasticsearch_username: user
# kibana_elasticsearch_password: pass

# If your Elasticsearch requires client certificate and key
# kibana_elasticsearch_client_crt: /path/to/your/client.crt
# kibana_elasticsearch_client_key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# ca: /path/to/your/CA.pem

# The default application to load.
default_app_id: "discover"

# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# ping_timeout: 1500

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
request_timeout: 300000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# startup_timeout: 5000

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
verify_ssl: true

# SSL for outgoing requests from the Kibana Server (PEM formatted)
# ssl_key_file: /path/to/your/server.key
# ssl_cert_file: /path/to/your/server.crt

# Set the path to where you would like the process id file to be created.
# pid_file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# This will also turn off the STDOUT log output.
log_file: ./kibana.log

# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
 - plugins/dashboard/index
 - plugins/discover/index
 - plugins/doc/index
 - plugins/kibana/index
 - plugins/markdown_vis/index
 - plugins/metric_vis/index
 - plugins/settings/index
 - plugins/table_vis/index
 - plugins/vis_types/index
 - plugins/visualize/index
Now let's write a docker-compose.yml so the whole stack is easy to bring up.
Ports and similar settings can be changed to suit your needs, and the configuration file paths should be adjusted to match your own directory layout. The stack as a whole needs a fair amount of resources, so pick a reasonably well-equipped machine.
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
    - "9300:9300"
logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5001:5000/udp"
  links:
    - elasticsearch
kibana:
  build: kibana/
  volumes:
    - ./kibana/config/:/opt/kibana/config/
  ports:
    - "5601:5601"
  links:
    - elasticsearch
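The relative paths in this compose file assume a project layout roughly like the following; this is an assumption pieced together from the volume mounts and the kibana/ build context, so adapt it to your own directories:

docker-compose.yml
logstash/config/logstash.conf
kibana/Dockerfile
kibana/entrypoint.sh
kibana/config/kibana.yml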
# That's it; one command now starts the whole ELK stack
docker-compose up -d
Visit port 5601 of the Kibana instance we configured earlier to see whether everything started successfully.
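If you prefer the command line, a quick check with curl works too; this assumes the port mappings from the compose file above and that you are on the Docker host:

# Elasticsearch should answer with its cluster/version information as JSON
curl http://localhost:9200
# Kibana should respond on its web port once the entrypoint script has started it
curl -I http://localhost:5601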
Collecting Docker logs with logspout
Next we use logspout to collect the Docker logs, customizing the logspout image to suit our needs.
Write the module file modules.go:
package main

import (
	_ "github.com/looplab/logspout-logstash"
	_ "github.com/gliderlabs/logspout/transports/udp"
)
Write the Dockerfile:
FROM gliderlabs/logspout:latest
COPY ./modules.go /src/modules.go
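With modules.go and the Dockerfile together in one directory, the custom image can be built with a plain docker build; the jayqqaa12/logspout tag is only chosen to match the run command below, so use whatever tag you prefer:

docker build -t jayqqaa12/logspout .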
After rebuilding the image, simply run it on every node:
docker run -d --name="logspout" \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  jayqqaa12/logspout \
  logstash://<your-logstash-address>
Now open Kibana and you should see the collected Docker logs.
Note that your containers need to write their logs to the console (stdout/stderr); only then can logspout pick them up.
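As a quick test, a throwaway container that writes to stdout should appear in Kibana within a few seconds; the busybox image and container name here are purely illustrative:

# A hypothetical test container that logs one line to stdout every 5 seconds
docker run -d --name stdout-test busybox \
  sh -c 'while true; do echo "hello from stdout-test"; sleep 5; done'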
With that, the ELK log collection system for our Docker cluster is fully deployed.
For a large cluster you would also need to cluster Logstash and Elasticsearch themselves, but that is a topic for next time.
That's all for this article. I hope it helps with your learning, and please keep supporting 服務器之家.