Deploying ECK (Elastic Cloud on Kubernetes)

ECK is Elastic's official way of running the Elastic Stack on a Kubernetes cluster; it is installed and managed through a Kubernetes operator.

# Deploy the operator

kubectl apply -f https://download.elastic.co/downloads/eck/1.1.2/all-in-one.yaml
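
Before moving on, it can help to wait until the operator is actually ready. A minimal check, using the resource names shown in the output below:

```bash
# Wait for the operator StatefulSet to finish rolling out
kubectl -n elastic-system rollout status statefulset/elastic-operator
```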

Then follow the operator's log output with the following command:

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

You can check how the deployment is going with the following commands:

# kubectl get pod -n elastic-system
NAME                      READY   STATUS    RESTARTS   AGE
elastic-operator-0        1/1     Running   0          19m
# kubectl get svc -n elastic-system
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
elastic-webhook-server    ClusterIP   10.102.3.82      <none>        443/TCP    19m

# Deploy Elasticsearch

Write the Elasticsearch manifest as follows:

apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: es
  namespace: elastic-system
spec:
  version: 7.4.1
  http:
    service:
      metadata:
        creationTimestamp: null
      spec: {}
    tls:
      certificate: {}
      selfSignedCertificate:
        disabled: true
  nodeSets:
    - config:
        node.data: true
        node.ingest: true
        node.master: true
        node.store.allow_mmap: true
        xpack.security.authc.realms:
          native:
            native1:
              order: 1
      count: 1
      name: master
      podTemplate:
        spec:
          containers:
            - env:
                - name: ES_JAVA_OPTS
                  value: -Xms1g -Xmx1g
              name: elasticsearch
              resources:
                limits:
                  cpu: 2
                  memory: 2Gi
                requests:
                  cpu: 2
                  memory: 2Gi
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteMany
            resources:
              requests:
                storage: 10Gi
            storageClassName: managed-nfs-storage
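
Save and apply the manifest; the file name below is just an assumption:

```bash
# Apply the Elasticsearch manifest and watch the Pods come up
kubectl apply -f elasticsearch.yaml
kubectl -n elastic-system get pod -w
```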

Persistence is provided by NFS through the `managed-nfs-storage` StorageClass.
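
Before (and after) applying the manifest, it is worth confirming that the StorageClass exists and that the data PVC ends up Bound. A quick sketch:

```bash
# The StorageClass referenced by the volumeClaimTemplate should exist
kubectl get storageclass managed-nfs-storage

# After the manifest is applied, the Elasticsearch data PVC should become Bound
kubectl -n elastic-system get pvc
```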

You can check the cluster status with the following commands:

# kubectl get elasticsearches.elasticsearch.k8s.elastic.co -n elastic-system
NAME   HEALTH   NODES   VERSION   PHASE   AGE
es     green    1       7.4.1     Ready   48m
# kubectl get pod -n elastic-system
NAME                            READY   STATUS              RESTARTS   AGE
elastic-operator-0              1/1     Running             1          56m
es-es-master-0                  1/1     Running             0          47m

Check the Service information:

# kubectl get svc -n elastic-system
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
elastic-webhook-server   ClusterIP   10.111.204.184   <none>        443/TCP          57m
es-es-http               ClusterIP   10.100.200.193   <none>        9200/TCP         48m
es-es-master             ClusterIP   None             <none>        <none>           47m
es-es-transport          ClusterIP   None             <none>        9300/TCP         48m

A number of Secrets are also generated, as shown below:

# kubectl get secrets -n elastic-system
NAME                                   TYPE                                  DATA   AGE
default-token-xjqxq                    kubernetes.io/service-account-token   3      57m
elastic-operator-token-ttckk           kubernetes.io/service-account-token   3      57m
elastic-system-es-kibana-kibana-user   Opaque                                3      23m
elastic-webhook-server-cert            Opaque                                2      57m
es-es-elastic-user                     Opaque                                1      48m
es-es-http-ca-internal                 Opaque                                2      48m
es-es-http-certs-internal              Opaque                                3      48m
es-es-http-certs-public                Opaque                                2      48m
es-es-internal-users                   Opaque                                2      48m
es-es-master-es-config                 Opaque                                1      47m
es-es-remote-ca                        Opaque                                1      48m
es-es-transport-ca-internal            Opaque                                2      48m
es-es-transport-certificates           Opaque                                3      48m
es-es-transport-certs-public           Opaque                                1      48m
es-es-xpack-file-realm                 Opaque                                3      48m

The `es-es-elastic-user` Secret stores the password of the built-in `elastic` user; you can retrieve it in plain text with the following command:

kubectl get secret -n elastic-system es-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode

Then check from inside the cluster that Elasticsearch responds:

curl -k https://elastic:nf963Smr0To2u6AA1dS0u93f@es-es-http:9200
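
If you prefer not to paste the password by hand, something like the following works. This is a minimal sketch that assumes the Secret and Service names above, and uses plain HTTP because the manifest disables the self-signed certificate:

```bash
# Read the elastic user's password into a shell variable
PW=$(kubectl -n elastic-system get secret es-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}')

# Query Elasticsearch from a throwaway Pod inside the cluster
kubectl -n elastic-system run es-check --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s -u "elastic:${PW}" "http://es-es-http:9200/_cluster/health?pretty"
```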

# Deploy Kibana

The Kibana manifest is as follows:

apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: es-kibana
  namespace: elastic-system
spec:
  count: 1
  elasticsearchRef:
    name: es
  http:
    service:
      metadata:
        creationTimestamp: null
      spec: {}
    tls:
      certificate: {}
      selfSignedCertificate:
        disabled: true
  podTemplate:
    metadata:
      creationTimestamp: null
    spec:
      containers:
        - name: kibana
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 0.5
              memory: 1Gi
  version: 7.4.1
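
Apply it the same way (again, the file name is an assumption):

```bash
kubectl apply -f kibana.yaml
```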

After deploying, check the Pod status:

# kubectl get pod -n elastic-system
NAME                            READY   STATUS              RESTARTS   AGE
elastic-operator-0              1/1     Running             1          58m
es-es-master-0                  1/1     Running             0          48m
es-kibana-kb-5c4468f8bf-msql2   1/1     Running             0          23m

Check the Services:

# kubectl get svc -n elastic-system
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
elastic-webhook-server   ClusterIP   10.111.204.184   <none>        443/TCP          64m
es-es-http               ClusterIP   10.100.200.193   <none>        9200/TCP         55m
es-es-master             ClusterIP   None             <none>        <none>           55m
es-es-transport          ClusterIP   None             <none>        9300/TCP         55m
es-kibana-kb-http        NodePort    10.105.148.116   <none>        5601:30696/TCP   30m

Then use `kubectl edit` to change the `type` of the `es-kibana-kb-http` Service to NodePort, which is why it already shows up as NodePort in the listing above.
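
The same change can be made non-interactively with `kubectl patch`, or you can keep the Service as ClusterIP and use a port-forward instead. A sketch of both options:

```bash
# Switch the Kibana Service to NodePort without an interactive edit
kubectl -n elastic-system patch svc es-kibana-kb-http \
  -p '{"spec":{"type":"NodePort"}}'

# Alternative: keep ClusterIP and forward the port locally
kubectl -n elastic-system port-forward svc/es-kibana-kb-http 5601:5601
```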

Log in to Kibana and create an index template:

PUT /_template/app-template
{
    "index_patterns" : [
      "app-*"
    ],
    "settings" : {
      "index" : {
        "lifecycle" : {
          "name" : "slm-history-ilm-policy"
        },
        "number_of_shards" : "2",
        "number_of_replicas" : "0",
        "highlight.max_analyzed_offset": 10000000
      }
    },
    "mappings" : {
      "dynamic_templates" : [
        {
          "message_field" : {
            "path_match" : "message",
            "mapping" : {
              "norms" : false,
              "type" : "text"
            },
            "match_mapping_type" : "string"
          }
        },
        {
          "string_fields" : {
            "mapping" : {
              "norms" : false,
              "type" : "keyword"
            },
            "match_mapping_type" : "string",
            "match" : "*"
          }
        }
      ]
    },
    "aliases" : { }
}
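
If you would rather not use the Kibana Dev Tools console, the same template can be created with curl from inside the cluster. A sketch that assumes the request body above is saved as app-template.json and reuses the PW variable from the earlier example:

```bash
# PUT the index template through the Elasticsearch HTTP Service (cluster-internal)
curl -s -u "elastic:${PW}" -H 'Content-Type: application/json' \
  -X PUT "http://es-es-http:9200/_template/app-template" \
  -d @app-template.json
```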

# Deploy Filebeat

Create the Filebeat ConfigMap:

apiVersion: v1
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      fields: {host: '${NODE_NAME}'}
      enabled: true
      paths:
        - /var/logs/webapps/*/*/*-info-*.log
      tail_files: true
      multiline.match: after
      multiline.negate: true
      multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}
      processors:
        - dissect:
              tokenizer: "/%{key1}/%{key2}/%{key3}/%{appName}/%{podName}/"
              field: "log.file.path"
              target_prefix: ""
    - type: log
      fields: {host: '${NODE_NAME}'}
      enabled: true
      paths:
        - /var/logs/webapps/*/*.log
      tail_files: true
      multiline.match: after
      multiline.negate: true
      multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}
      processors:
        - dissect:
              tokenizer: "/%{key1}/%{key2}/%{key3}/%{appName}/"
              field: "log.file.path"
              target_prefix: ""
    setup.ilm.enabled: false
    setup.template.pattern: "app-*"
    setup.template.name: "app-template"
    output.elasticsearch:
      pipeline: timestamp-pipeline
      hosts: ["es-es-http:9200"]
      username: elastic
      password: nf963Smr0To2u6AA1dS0u93f 
      index: "app-%{[appName]}-%{+yyyy.MM.dd}"
      bulk_max_size: 100
    processors:
      - rename:
            fields:
              - from: "log.file.path"
                to: "source"
              - from: "fields.host"
                to: "node"
            ignore_missing: true
      - drop_fields:
            when:
                has_fields: ['key1','key2','key3','agent','@metadata','ecs','input','host','log','fields']
            fields: ['key1','key2','key3','agent','@metadata','ecs','input','host','log','fields'] 
    logging.level: error
kind: ConfigMap
metadata:
  labels:
    k8s-app: filebeat
  name: filebeat-config
  namespace: elastic-system

The Filebeat DaemonSet manifest is as follows:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: filebeat
  name: filebeat
  namespace: elastic-system
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      containers:
        - args:
            - "-c"
            - /etc/filebeat.yml
            - "-e"
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          image: docker.elastic.co/beats/filebeat:7.4.0
          imagePullPolicy: IfNotPresent
          name: filebeat
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 300m
              memory: 300Mi
          securityContext:
            runAsUser: 0
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /etc/filebeat.yml
              name: config
              readOnly: true
              subPath: filebeat.yml
            - mountPath: /usr/share/filebeat/data
              name: data
            - mountPath: /var/logs/webapps
              name: logpath
              readOnly: true
            - mountPath: /var/log
              name: varlog
              readOnly: true
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            defaultMode: 384
            name: filebeat-config
          name: config
        - hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
          name: data
        - hostPath:
            path: /home/logs
            type: ""
          name: logpath
        - hostPath:
            path: /var/log
            type: ""
          name: varlog
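
A rough deployment-and-verification sequence, assuming the ConfigMap and DaemonSet above are saved as filebeat-configmap.yaml and filebeat-daemonset.yaml:

```bash
kubectl apply -f filebeat-configmap.yaml -f filebeat-daemonset.yaml
kubectl -n elastic-system rollout status daemonset/filebeat

# Check Elasticsearch connectivity from one of the Filebeat Pods
POD=$(kubectl -n elastic-system get pod -l k8s-app=filebeat \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n elastic-system exec "$POD" -- filebeat test output -c /etc/filebeat.yml

# Once logs are shipped, app-* indices should appear (run from inside the cluster)
curl -s -u "elastic:${PW}" "http://es-es-http:9200/_cat/indices/app-*?v"
```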

The `app-template` index template referenced by `setup.template.name` was already created in the Kibana section above, so there is no need to create it again here.


Create the ingest pipeline referenced by `output.elasticsearch.pipeline` in the Filebeat configuration:

PUT /_ingest/pipeline/timestamp-pipeline
{
    "description" : "fix timestamp",
    "processors" : [
      {
        "grok" : {
          "if" : "! ctx.appName.contains('weipeiapp')",
          "field" : "message",
          "patterns" : [
            "%{TIMESTAMP_ISO8601:timestamp} "
          ]
        },
        "remove" : {
          "if" : "! ctx.appName.contains('weipeiapp')",
          "field" : "@timestamp"
        }
      },
      {
        "date" : {
          "if" : "! ctx.appName.contains('weipeiapp')",
          "field" : "timestamp",
          "timezone" : "Asia/Shanghai",
          "formats" : [
            "yyyy-MM-dd HH:mm:ss.SSS"
          ]
        },
        "remove" : {
          "if" : "! ctx.appName.contains('weipeiapp')",
          "field" : "timestamp"
        }
      }
    ],
    "on_failure" : [
      {
        "set" : {
          "field" : "_index",
          "value" : "{{ _index }}"
        }
      }
    ]
  }
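
Before relying on the pipeline, you can sanity-check it with the _simulate API. The sample document below is purely hypothetical:

```bash
# Simulate the pipeline against a fake log event (cluster-internal, PW from earlier)
curl -s -u "elastic:${PW}" -H 'Content-Type: application/json' \
  -X POST "http://es-es-http:9200/_ingest/pipeline/timestamp-pipeline/_simulate" -d '{
  "docs": [
    {
      "_source": {
        "appName": "demo-app",
        "@timestamp": "2025-07-20T03:26:22.000Z",
        "message": "2025-07-20 11:26:22.123 INFO application started"
      }
    }
  ]
}'
```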