S_lion's Studio

Deploying EFK on k8s

2021/12/27

The EFK stack here consists of Elasticsearch, Filebeat, and Kibana (Filebeat takes the place of the more common Fluentd).

The setup validated below uses filebeat to collect the console logs of pods, es to store and index the collected logs, and kibana to display them graphically.

Environment

Hostname     IP address       Hardware                        OS version        Kernel version
slions-pc1   192.168.100.10   4 CPUs, 8 GB RAM, 40 GB disk    CentOS 7.6.1810   3.10.0-957.el7.x86_64

kubernetes version: 1.19.0

filebeat: 6.8.8

elasticsearch: 6.8.8

kibana: 6.8.8

Installing EFK

EFK is deployed in its own namespace, created up front:

[root@slions-pc1 ~]# kubectl create namespace agree-logging

Deploying elasticsearch

  1. Before deploying, the es node's limit on the number of VMAs (virtual memory areas) a process may own must be raised; the init container below takes care of this setting.
  2. This deployment has a single node, so that es node is given the master, data, and ingest roles.
  3. The actual es storage path is /es-data on the host, set via hostPath.
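Point 1 can be checked on the node beforehand; a minimal sketch (262144 is the minimum Elasticsearch requires):

```shell
# Read the node's current VMA limit; Elasticsearch needs at least 262144.
cat /proc/sys/vm/max_map_count
# The privileged init container below runs the equivalent of:
#   sysctl -w vm.max_map_count=262144
```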

configmap

[root@slions-pc1 es-statefulset]# cat es-statefulset-cm.yaml
apiVersion: v1
data:
  cluster.name: "elasticsearch-cluster"
  node.name: "${HOSTNAME}"
  bootstrap.memory_lock: "false"
  discovery.zen.ping.unicast.hosts: "elasticsearch-discovery"
  discovery.zen.minimum_master_nodes: "1"
  discovery.zen.ping_timeout: "5s"
  node.master: "true"
  node.data: "true"
  node.ingest: "true"
  ES_JAVA_OPTS: "-Xms1000m -Xmx1000m"
  TZ: "Asia/Shanghai"
kind: ConfigMap
metadata:
  name: es-statefulsets-configmaps
  namespace: agree-logging
  labels:
    app: acaas-logcenter
    k8s-app: agree-logging

rbac

[root@slions-pc1 acaas]# cat es-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: agree-logging
  labels:
    app: acaas-logcenter
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    app: acaas-logcenter
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: agree-logging
  name: elasticsearch-logging
  labels:
    app: acaas-logcenter
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: agree-logging
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""

statefulset

[root@slions-pc1 es-statefulset]# cat es-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: agree-logging
  labels:
    app: acaas-logcenter
    k8s-app: elasticsearch-logging
    version: 6.8.8
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 1
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        kubernetes.io/cluster-service: "true"
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: elasticsearch-logging
      containers:
      - image: acaas-registry.agree:9980/library/elasticsearch:6.8.8
        name: elasticsearch-logging
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        resources:
          requests:
            memory: 2000Mi
          limits:
            memory: 2000Mi
        envFrom:
        - configMapRef:
            name: es-statefulsets-configmaps
        env:
        - name: agree-logging
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /_cluster/health
            port: 9200
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: elasticsearch-logging
      initContainers:
      - image: acaas-registry.agree:9980/library/alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true
      volumes:
      - hostPath:
          path: /es-data # data storage path on the host
          type: ""
        name: elasticsearch-logging

service

[root@slions-pc1 acaas]# cat es-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: agree-logging
  labels:
    app: acaas-logcenter
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  type: NodePort
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
    nodePort: 30007
  selector:
    k8s-app: elasticsearch-logging
---
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-discovery
  namespace: agree-logging
  labels:
    app: acaas-logcenter
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9300
    protocol: TCP
    targetPort: transport
  selector:
    k8s-app: elasticsearch-logging

Deploying filebeat

Create a sample mynginx service. The files under /var/log/containers/ are its console logs (standard output); as shown below, each is a symlink into /var/log/pods/<pod_name>/xxx.log.

[root@slions-pc1 ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
mynginx-5bb654c97-ng8jn   1/1     Running   0          13s
[root@slions-pc1 ~]# ll /var/log/containers/mynginx-5bb654c97-ng8jn_default_nginx-949db3d8631e6de802f0fd70c2ee8af52ebdae827c30bbc0e0f9ecef89e57ae6.log
lrwxrwxrwx 1 root root 94 12月 28 11:39 /var/log/containers/mynginx-5bb654c97-ng8jn_default_nginx-949db3d8631e6de802f0fd70c2ee8af52ebdae827c30bbc0e0f9ecef89e57ae6.log -> /var/log/pods/default_mynginx-5bb654c97-ng8jn_14e795c4-66e3-4195-b550-72b50f84d364/nginx/0.log

As you can see, /var/log/pods/<pod_name>/xxx.log is itself another symlink.

[root@slions-pc1 ~]# ll /var/log/pods/default_mynginx-5bb654c97-ng8jn_14e795c4-66e3-4195-b550-72b50f84d364/nginx/0.log
lrwxrwxrwx 1 root root 157 12月 28 11:39 /var/log/pods/default_mynginx-5bb654c97-ng8jn_14e795c4-66e3-4195-b550-72b50f84d364/nginx/0.log -> /export/containers/949db3d8631e6de802f0fd70c2ee8af52ebdae827c30bbc0e0f9ecef89e57ae6/949db3d8631e6de802f0fd70c2ee8af52ebdae827c30bbc0e0f9ecef89e57ae6-json.log
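This chain of symlinks can be resolved in one call with `readlink -f`; a throwaway reproduction in a temp directory (the file names here are made up, not the real kubelet layout):

```shell
# Rebuild a two-level symlink chain like containers/ -> pods/ -> data root,
# then resolve it straight to the final target.
demo=$(mktemp -d)
echo 'console output' > "$demo/real-json.log"
ln -s "$demo/real-json.log" "$demo/pod-0.log"   # like /var/log/pods/...
ln -s "$demo/pod-0.log" "$demo/container.log"   # like /var/log/containers/...
readlink -f "$demo/container.log"               # resolves to .../real-json.log
```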

Docker's default log driver (LogDriver) is json-file, which stores logs as JSON files: everything a container writes to its console is saved as *-json.log under /${Docker Root Dir}/containers/ (here the data root is /export/containers, as the symlink target above shows).
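What a single line of such a *-json.log file looks like can be sketched locally; the log content below is invented for illustration:

```shell
# Each console line is wrapped as one JSON object with log/stream/time fields.
line='{"log":"GET / HTTP/1.1 200\n","stream":"stdout","time":"2021-12-28T03:39:00.1Z"}'
# Pull out the raw message (jq '.log' would do the same if installed).
msg=$(printf '%s' "$line" | python3 -c 'import json,sys; print(json.load(sys.stdin)["log"], end="")')
printf '%s\n' "$msg"   # GET / HTTP/1.1 200
```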

Knowing the path of the pod console logs, the filebeat configuration can be completed.

configmap

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: agree-logging
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
      - /var/lib/docker/containers/*/*.log*
      enabled: true
      scan_frequency: 3s        # how often to check for new files
      close_inactive: 1m        # close the file handle if the file has not been read for this long
      close_timeout: 1h         # predefined lifetime after which a harvester closes its file
      clean_inactive: 48h       # remove the state of previously harvested files from the registry
      ignore_older: 46h         # ignore files not updated within this window or never harvested
      max_bytes: 1000000        # maximum number of bytes a single log message may have
      tags: ["acaas","testlog"] # tags added to the list, used for filtering
      fields:                   # extra fields
        log_type: sjydemo111
      fields_under_root: true   # store fields at the top level of the output document
    processors:                 # processors filter and enrich data before it is sent to the output
    - add_kubernetes_metadata:  # add kubernetes metadata
        in_cluster: true        # must be true when running as pods
    setup.template.overwrite: true # whether to overwrite the default template
    setup.template.enabled: true   # whether to enable the custom template
    output.elasticsearch.index: "fb-slions-%{+yyyy.MM.dd}" # name of the es index to write to
    setup.template.pattern: "fb-slions*" # index pattern the template applies to
    setup.template.name: "fb-slions"     # template name
    setup.template.settings:             # index-level settings
      index:
        number_of_shards: 1
        number_of_replicas: 0
    queue.mem.events: 2000           # size of the in-memory write queue
    queue.mem.flush.min_events: 1000 # minimum queue size before flushing
    logging.level: info              # log level
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch-logging}:${ELASTICSEARCH_PORT:9200}']
      max_message_bytes: 1000000
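The %{+yyyy.MM.dd} suffix in output.elasticsearch.index is a date pattern, so Filebeat writes one index per day; today's index name can be previewed in the shell (yyyy.MM.dd corresponds to %Y.%m.%d in date(1)):

```shell
# Preview the daily index name that fb-slions-%{+yyyy.MM.dd} expands to.
echo "fb-slions-$(date +%Y.%m.%d)"   # e.g. fb-slions-2021.12.28
```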

rbac

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: agree-logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: agree-logging
  labels:
    k8s-app: filebeat

daemonset

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: agree-logging
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.8.8
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-logging
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml # filebeat configuration file
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data # filebeat registry
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers # pod console log path
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /export/containers
      - name: data
        hostPath:
          path: /filebeat-data
          type: DirectoryOrCreate

Deploying kibana

configmap

[root@slions-pc1 kibana]# cat kibana-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: agree-logging
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |-
    server.host: 0.0.0.0
    elasticsearch:
      hosts: "http://elasticsearch-logging.agree-logging.svc.cluster.local:9200"

deployment

[root@slions-pc1 kibana]# cat kibana-dep.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: agree-logging
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:6.8.8
        ports:
        - containerPort: 5601
          name: webinterface
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch-logging.agree-logging.svc.cluster.local:9200"
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
      volumes:
      - name: config
        configMap:
          name: kibana-config

svc

[root@slions-pc1 kibana]# cat kibana-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana-api
  namespace: agree-logging
spec:
  type: NodePort
  ports:
  - name: http
    port: 5601
    targetPort: 5601
    nodePort: 30006
  selector:
    app: kibana

Verification

Visit the es endpoint: http://192.168.100.10:30007/

{
  "name" : "masternode",
  "cluster_name" : "elasticsearch-cluster",
  "cluster_uuid" : "TgyaNTKTTai93ukKjRwqVA",
  "version" : {
    "number" : "6.8.8",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "2f4c224",
    "build_date" : "2020-03-18T23:22:18.622755Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.2",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
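A scripted check that the deployed version is the expected 6.8.8; the response is pasted inline here rather than fetched with curl, so the snippet is self-contained:

```shell
# Trimmed-down copy of the root response shown above.
resp='{"name":"masternode","cluster_name":"elasticsearch-cluster","version":{"number":"6.8.8"}}'
ver=$(printf '%s' "$resp" | python3 -c 'import json,sys; print(json.load(sys.stdin)["version"]["number"])')
echo "$ver"   # 6.8.8
```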

Visit the es health endpoint: http://192.168.100.10:30007/_cat/health?v

epoch      timestamp cluster               status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1640671797 06:09:57  elasticsearch-cluster green  1          1         14     14  0    0    0        0             -                  100.0%
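The status column is the fourth field of the _cat/health line; it can be extracted with awk for scripting (the sample line is copied from the response above):

```shell
line='1640671797 06:09:57 elasticsearch-cluster green 1 1 14 14 0 0 0 0 - 100.0%'
status=$(echo "$line" | awk '{print $4}')   # fourth whitespace-separated field
echo "$status"   # green
```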

Visit the kibana endpoint: http://192.168.100.10:30006/

The Monitoring page shows the current status of es and kibana:

Under Management, click Index Management to see the cluster's index information:

The Logs page shows collected logs in real time and supports filtered queries; since filebeat was configured with a custom index template for es, the Log indices setting must be changed to fb-* here.

Dev Tools allows operating es through its APIs.
