Flink Upgrade (Cross Version)

- Delete the ConfigMap

  Clean up the job list, then locate the ConfigMap configuration, as in the following example:

  ```
  kubernetes.cluster-id: md-flink
  kubernetes.namespace: default
  ```

  Execute the script below to batch-delete the ConfigMaps. If no output such as `configmap "md-flink" deleted` appears, the namespace or the ConfigMap prefix is incorrect; check both values again.

  ```shell
  # Template (substitute the values of kubernetes.namespace and kubernetes.cluster-id):
  # for i in $(kubectl -n <kubernetes.namespace> get cm | awk '$1~"<kubernetes.cluster-id>"{print $1}'); do kubectl -n <kubernetes.namespace> delete cm $i; done
  for i in $(kubectl -n default get cm | awk '$1~"md-flink"{print $1}'); do kubectl -n default delete cm $i; done
  ```
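Before running the delete loop, it can help to preview which ConfigMaps the `awk` pattern will actually match. The sketch below is an illustrative addition, not part of the original procedure: it feeds sample `kubectl get cm` output (the ConfigMap names are made up) through the same filter so the pattern can be checked offline. Against a real cluster, pipe `kubectl -n default get cm` instead of the `echo`.

```shell
# Sample stand-in for real `kubectl -n default get cm` output (names are hypothetical).
sample_output='NAME                              DATA   AGE
md-flink-cluster-config-map       2      10d
md-flink-resourcemanager-leader   1      10d
other-app-config                  1      30d'

# Same pattern as the delete loop: print only names containing the cluster-id.
echo "$sample_output" | awk '$1~"md-flink"{print $1}'
```

Only the two `md-flink-*` names are printed; unrelated ConfigMaps such as `other-app-config` are left untouched by the delete loop.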
- Download the New Version Image

  The following operations are required on every node server in the Kubernetes cluster.

  - Internet access available:

    ```shell
    crictl pull nocoly/flink:<version>
    ```

  - Internet access unavailable:

    Download the Flink offline image file from the link below, then upload it to the deployment server:

    https://pdpublic.nocoly.com/offline/flink-linux-amd64-<version>.tar.gz

    Load the offline image on the server:

    ```shell
    gunzip -d flink-linux-amd64-<version>.tar.gz
    ctr -n k8s.io image import flink-linux-amd64-<version>.tar
    ```
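Before importing, it is worth verifying that the downloaded archive survived the upload intact: `gunzip -t` tests gzip integrity without extracting. The sketch below demonstrates the check on a throwaway file created for illustration; in practice, run it against `flink-linux-amd64-<version>.tar.gz` before the `gunzip -d` step.

```shell
# Create a small gzip file purely to demonstrate the integrity check.
echo "demo payload" > demo.txt
gzip -f demo.txt                 # produces demo.txt.gz

# gunzip -t returns exit status 0 when the archive is intact.
if gunzip -t demo.txt.gz; then
  echo "archive OK"
fi
```

A corrupted or truncated download fails this test with a non-zero exit status, which is cheaper to discover here than as a failed `ctr image import`.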
- Modify the Configuration File

  Update the flink.yaml file to specify the image versions used by the flink-jobmanager and flink-taskmanager services:

  ```yaml
  - name: jobmanager
    image: nocoly/flink:VERSION
  - name: taskmanager
    image: nocoly/flink:VERSION
  ```

  When upgrading to v1.19.710, also make the following adjustments in flink.yaml:
  - Remove all existing configuration entries whose keys start with `metrics`, and add the following Kafka metric reporting configuration (replace the Kafka addresses to match your environment):

    ```yaml
    metrics.job.status.enable: STATE
    metrics.reporters: kafka_reporter,kafka_reporter_running,kafka_reporter2,kafka_reporter_running2
    metrics.reporter.kafka_reporter.factory.class: org.apache.flink.metrics.kafka.KafkaReporterFactory
    metrics.reporter.kafka_reporter.bootstrap.servers: 192.168.10.7:9092,192.168.10.8:9092,192.168.10.9:9092 # Kafka addresses
    metrics.reporter.kafka_reporter.chunk.size: 20000
    metrics.reporter.kafka_reporter.interval: 60s
    metrics.reporter.kafka_reporter.filter.metrics: numRecordsIn,numRecordsOut,runningTime
    metrics.reporter.kafka_reporter.topic: flink_metrics_counter
    metrics.reporter.kafka_reporter.taskNamePrefix: HAP0x5c2_
    metrics.reporter.kafka_reporter_running.factory.class: org.apache.flink.metrics.kafka.KafkaReporterFactory
    metrics.reporter.kafka_reporter_running.bootstrap.servers: 192.168.10.7:9092,192.168.10.8:9092,192.168.10.9:9092 # Kafka addresses
    metrics.reporter.kafka_reporter_running.chunk.size: 20000
    metrics.reporter.kafka_reporter_running.interval: 60s
    metrics.reporter.kafka_reporter_running.filter.metrics: RUNNINGState
    metrics.reporter.kafka_reporter_running.topic: flink_metrics_gauge
    metrics.reporter.kafka_reporter_running.taskNamePrefix: HAP0x5c2_
    metrics.reporter.kafka_reporter2.factory.class: org.apache.flink.metrics.kafka.KafkaReporterFactory
    metrics.reporter.kafka_reporter2.bootstrap.servers: 192.168.10.7:9092,192.168.10.8:9092,192.168.10.9:9092 # Kafka addresses
    metrics.reporter.kafka_reporter2.chunk.size: 20000
    metrics.reporter.kafka_reporter2.interval: 60s
    metrics.reporter.kafka_reporter2.filter.metrics: numRecordsIn,numRecordsOut,runningTime
    metrics.reporter.kafka_reporter2.topic: flink_metrics_counter-hdp
    metrics.reporter.kafka_reporter2.taskNamePrefix: HDP0x5c2_
    metrics.reporter.kafka_reporter_running2.factory.class: org.apache.flink.metrics.kafka.KafkaReporterFactory
    metrics.reporter.kafka_reporter_running2.bootstrap.servers: 192.168.10.7:9092,192.168.10.8:9092,192.168.10.9:9092 # Kafka addresses
    metrics.reporter.kafka_reporter_running2.chunk.size: 20000
    metrics.reporter.kafka_reporter_running2.interval: 60s
    metrics.reporter.kafka_reporter_running2.filter.metrics: RUNNINGState
    metrics.reporter.kafka_reporter_running2.topic: flink_metrics_gauge-hdp
    metrics.reporter.kafka_reporter_running2.taskNamePrefix: HDP0x5c2_
    ```
  - Locate the `kind: Role` configuration section and add the patch permission for the configmaps resource under the `rules.verbs` field:

    ```yaml
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: configmap-access
      namespace: default
    rules:
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["update", "get", "watch", "list", "create", "edit", "delete", "patch"] # Newly added patch permission
    ```
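A quick offline sanity check that the edited Role really carries the new patch verb can be done with `grep`. The snippet below writes a minimal stand-in for the rules section (illustrative only; run the grep against your actual flink.yaml instead) and checks it. On a live cluster, `kubectl auth can-i patch configmaps -n default` offers a complementary server-side check of the effective permission.

```shell
# Minimal stand-in for the Role rules section of flink.yaml (illustrative only).
cat > role-sample.yaml <<'EOF'
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["update", "get", "watch", "list", "create", "edit", "delete", "patch"]
EOF

# The verbs list must include "patch", or the upgraded Flink cannot
# patch its ConfigMaps and the jobs will fail with RBAC errors.
if grep -q '"patch"' role-sample.yaml; then
  echo "patch verb present"
fi
```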
- Restart the Service

  ```shell
  kubectl apply -f flink.yaml
  ```
- Restart or publish the tasks on the synchronization task list.