How to Deploy Flink Service Independently
Data integration is an extension module of the HAP system that users can choose to enable. For quick deployment, refer to Enable Data Integration Feature.
Quick deployment runs the Flink service required by the data integration feature on the same server as the HAP microservices, which places high demands on that server's hardware resources. If a single server cannot meet these requirements, follow this document to deploy the Flink service independently on a new server. For server sizing, refer to Single Node Data Integration Server Configuration.
Install Docker
To install Docker, refer to the official installation instructions for different Linux versions or the Docker Installation Section in the deployment examples.
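After installation, you can confirm that Docker is available and the daemon is running with standard Docker CLI commands (shown only as a quick sanity check):
docker --version
docker info --format '{{.ServerVersion}}'  # prints the daemon version if the Docker service is running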
Check if Current HAP Has Independent MinIO Service
Execute the following command on the HAP server and check whether any output line contains the keyword start minio.
docker logs $(docker ps | grep hap-sc | awk '{print $1}') | grep minio
- If there is output, it means there is an independent MinIO service.
- If there is no output, it means there is no independent MinIO service.
HAP Microservice Adjustment
- Independent MinIO Service
- No Independent MinIO Service
The Flink service needs to access the MinIO, Kafka, and MongoDB services. In HAP single-node mode, the access ports of these three services must therefore be mapped out of the storage component (sc) container. If HAP is in cluster mode, no adjustment is needed.
Refer to the following configuration example to modify the docker-compose.yaml file:
- Add port mappings for the MinIO, Kafka, and MongoDB services.
The aggregation table feature requires Flink to access the MongoDB service. If you do not enable the aggregation table feature, the MongoDB port mapping is not needed.
If you need to map the MongoDB port, note that the MongoDB instance built into single-node mode does not enable authentication by default. Read the Data Security document carefully before proceeding (a port-binding sketch is also included at the end of this section).
- Configure the environment variable for HAP to connect to the Flink service.
app:
  environment:
    ENV_FLINK_URL: http://192.168.10.30:58081 # Add Flink service address, modify to actual IP address
sc:
  ports:
    - 9010:9010 # Add MinIO port mapping
    - 9092:9092 # Add Kafka port mapping
    - 27017:27017 # Add MongoDB port mapping
docker-compose.yaml Configuration File Example
version: '3'
services:
  app:
    image: nocoly/hap-community:6.3.3
    environment: &app-environment
      ENV_ADDRESS_MAIN: "https://hap.domain.com"
      ENV_APP_VERSION: "6.3.3"
      ENV_API_TOKEN: "******"
      ENV_FLINK_URL: http://192.168.10.30:58081 # Add Flink service address, modify to actual IP address
    ports:
      - 8880:8880
    volumes:
      - ./volume/data/:/data/
      - ../data:/data/hap/data
  sc:
    image: nocoly/hap-sc:3.1.0
    environment:
      <<: *app-environment
    ports:
      - 9010:9010 # Add MinIO port mapping
      - 9092:9092 # Add Kafka port mapping
      - 27017:27017 # Add MongoDB port mapping
    volumes:
      - ./volume/data/:/data/
After modifications, execute bash service.sh restartall in the installation manager directory to restart the microservices.
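Optionally, you can then verify from the new Flink server that the mapped ports are reachable. This is only a sanity check; 192.168.10.28 is a placeholder for the actual internal IP of the HAP server, and it assumes nc (netcat) is installed on the Flink server:
# Run on the Flink server, replacing 192.168.10.28 with the HAP server's internal IP
nc -zv 192.168.10.28 9010   # MinIO (only if the MinIO port is mapped)
nc -zv 192.168.10.28 9092   # Kafka
nc -zv 192.168.10.28 27017  # MongoDB (only if the aggregation table feature is enabled)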
The Flink service needs to access the MinIO, Kafka, and MongoDB services. However, if the initially deployed version is an older one without a built-in MinIO service, only the Kafka and MongoDB access ports need to be mapped out of the storage component (sc) container in HAP single-node mode; a separate MinIO instance is deployed alongside Flink in the next section. If HAP is in cluster mode, no adjustment is needed.
Refer to the following configuration example to modify the docker-compose.yaml file:
- Add port mappings for the Kafka and MongoDB services.
The aggregation table feature requires Flink to access the MongoDB service. If you do not enable the aggregation table feature, the MongoDB port mapping is not needed.
If you need to map the MongoDB port, note that the MongoDB instance built into single-node mode does not enable authentication by default. Read the Data Security document carefully before proceeding (a port-binding sketch is also included at the end of this section).
- Configure the environment variable for HAP to connect to the Flink service.
app:
  environment:
    ENV_FLINK_URL: http://192.168.10.30:58081 # Add Flink service address, modify to actual IP address
sc:
  ports:
    - 9092:9092 # Add Kafka port mapping
    - 27017:27017 # Add MongoDB port mapping
docker-compose.yaml Configuration File Example
version: '3'
services:
  app:
    image: nocoly/hap-community:6.3.3
    environment: &app-environment
      ENV_ADDRESS_MAIN: "https://hap.domain.com"
      ENV_APP_VERSION: "6.3.3"
      ENV_API_TOKEN: "******"
      ENV_FLINK_URL: http://192.168.10.30:58081 # Add Flink service address, modify to actual IP address
    ports:
      - 8880:8880
    volumes:
      - ./volume/data/:/data/
      - ../data:/data/hap/data
  sc:
    image: nocoly/hap-sc:3.1.0
    environment:
      <<: *app-environment
    ports:
      - 9092:9092 # Add Kafka port mapping
      - 27017:27017 # Add MongoDB port mapping
    volumes:
      - ./volume/data/:/data/
After modifications, execute bash service.sh restartall in the installation manager directory to restart the microservices.
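Because the built-in MongoDB has no authentication enabled, the mapped ports should not be exposed beyond the hosts that actually need them. One possible hardening measure, shown here only as a sketch and not a substitute for the Data Security document, is to bind the published ports to the HAP server's internal address (192.168.10.28 below is a placeholder; the same applies to the MinIO port 9010 if it is mapped), so they are not published on every interface:
sc:
  ports:
    - "192.168.10.28:9092:9092"   # Kafka reachable only via the internal address
    - "192.168.10.28:27017:27017" # MongoDB reachable only via the internal address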
Flink Service Deployment
- Independent MinIO Service
- No Independent MinIO Service
- Initialize swarm environment
docker swarm init
- Create directory
mkdir -p /data/hap/script/volume/data
- Download Flink image (Offline Package Download)
docker pull nocoly/hap-flink:1.17.1.530
- Create configuration file
cat > /data/hap/script/flink.yaml <<EOF
version: '3'
services:
  flink:
    image: nocoly/hap-flink:1.17.1.530
    entrypoint: ["/bin/bash"]
    command: ["/run.sh"]
    environment:
      ENV_FLINK_S3_ACCESSKEY: "mdstorage"
      ENV_FLINK_S3_SECRETKEY: "eBxExGQJNhGosgv5FQJiVNqH"
      ENV_FLINK_S3_SSL: "false"
      ENV_FLINK_S3_PATH_STYLE_ACCESS: "true"
      ENV_FLINK_S3_ENDPOINT: "sc:9010"
      ENV_FLINK_S3_BUCKET: "mdoc"
      ENV_FLINK_LOG_LEVEL: "INFO"
      ENV_FLINK_JOBMANAGER_MEMORY: "4096m"
      ENV_FLINK_TASKMANAGER_MEMORY: "16384m"
      ENV_FLINK_TASKMANAGER_SLOTS: "50"
      ENV_KAFKA_ENDPOINTS: "sc:9092"
    ports:
      - 58081:8081
    volumes:
      - ./volume/data/:/data/
    extra_hosts:
      - "sc:192.168.10.28" # Modify to actual internal IP address of HAP server
EOF
- Configure startup script
cat > /data/hap/script/startflink.sh <<-EOF
docker stack deploy -c /data/hap/script/flink.yaml flink
EOF
chmod +x /data/hap/script/startflink.sh
- Start Data Integration Service
bash /data/hap/script/startflink.sh
- After the Flink container starts, its internal processes need approximately 5 minutes to finish starting.
- Stop Flink command:
docker stack rm flink
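To confirm that the stack is up, you can follow the service logs and, once startup completes, query the Flink REST API through the mapped port 58081. The service name flink_flink below follows Docker's stack_service naming for the stack deployed above:
docker service ls                       # the flink_flink service should report 1/1 replicas
docker service logs -f flink_flink      # follow the Flink startup logs
curl http://127.0.0.1:58081/overview    # Flink REST API; returns taskmanager and slot counts when ready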
- Initialize swarm environment
docker swarm init
- Create directory
mkdir -p /data/hap/script/volume/data
- Download Flink image (Offline Package Download)
docker pull nocoly/hap-flink:1.17.1.530
- Create configuration file
cat > /data/hap/script/flink.yaml <<EOF
version: '3'
services:
  flink:
    image: nocoly/hap-flink:1.17.1.530
    entrypoint: ["/bin/bash"]
    command: ["/run.sh"]
    environment:
      ENV_FLINK_S3_ACCESSKEY: "mdstorage"
      ENV_FLINK_S3_SECRETKEY: "eBxExGQJNhGosgv5FQJiVNqH"
      ENV_FLINK_S3_SSL: "false"
      ENV_FLINK_S3_PATH_STYLE_ACCESS: "true"
      ENV_FLINK_S3_ENDPOINT: "flink-minio:9000"
      ENV_FLINK_S3_BUCKET: "mdoc"
      ENV_FLINK_LOG_LEVEL: "INFO"
      ENV_FLINK_JOBMANAGER_MEMORY: "4096m"
      ENV_FLINK_TASKMANAGER_MEMORY: "16384m"
      ENV_FLINK_TASKMANAGER_SLOTS: "50"
      ENV_KAFKA_ENDPOINTS: "sc:9092"
    ports:
      - 58081:8081
    volumes:
      - ./volume/data/:/data/
    extra_hosts:
      - "sc:192.168.10.28" # Modify to actual internal IP address of HAP server
  flink-minio:
    container_name: flink-minio
    image: nocoly/hap-minio:RELEASE.2025-04-22T22-12-26Z
    environment:
      MINIO_ROOT_USER: "mdstorage"
      MINIO_ROOT_PASSWORD: "eBxExGQJNhGosgv5FQJiVNqH"
    volumes:
      - ./volume/data/:/data/
    command: minio server /data/flink-minio --console-address ":9001"
EOF
- Configure startup script
cat > /data/hap/script/startflink.sh <<-EOF
docker stack deploy -c /data/hap/script/flink.yaml flink
EOF
chmod +x /data/hap/script/startflink.sh
- Start Data Integration Service
bash /data/hap/script/startflink.sh
- After the Flink container starts, its internal processes need approximately 5 minutes to finish starting.
- Stop Flink command:
docker stack rm flink
- Enter the MinIO container (flink-minio) to create the bucket needed by Flink.
docker exec -it flink-minio bash
mc alias set myminio http://127.0.0.1:9000 mdstorage eBxExGQJNhGosgv5FQJiVNqH
mc mb myminio/mdoc
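To double-check the result, mc ls run inside the flink-minio container should now list the mdoc bucket, and once the Flink processes have finished starting, the REST API on the mapped port should respond:
mc ls myminio                           # run inside the flink-minio container; mdoc should be listed
exit                                    # leave the container
curl http://127.0.0.1:58081/overview    # run on the Flink server after startup completes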