How to Deploy Flink Services Independently

Flink is an extension module in the HAP system. Users can choose whether to enable it according to their needs. For quick deployment guidelines, refer to Deploy Flink Services.

Quick deployment places the Flink services on the same server as the HAP microservices, which puts high demands on that server's hardware resources. If a single server cannot meet the requirements, you can follow this document to deploy the Flink services independently on a new server. Refer to Standalone Flink Server Configuration for server configuration details.

Install Docker

To install Docker, refer to the official installation instructions for different Linux distributions or check the Docker installation section in the deployment examples.

Check if the Current HAP Service Includes an Independent MinIO Service

On the HAP server, execute the following command to check if the output contains a line with the keyword start minio:

docker logs $(docker ps | grep sc | awk '{print $1}') 2>&1 | grep "start minio"
  • If output exists, it indicates the presence of an independent MinIO service.
  • If no output exists, it means no independent MinIO service is available.
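The check above can be wrapped in a small helper script. This is a sketch, not part of the product: the `start minio` keyword and the `sc` container-name filter come from the command shown here; adjust them if your deployment names differ.

```shell
# detect_minio reads log text on stdin and prints a verdict, mirroring the
# manual check above.
detect_minio() {
  if grep -q "start minio"; then
    echo "independent MinIO service present"
  else
    echo "no independent MinIO service"
  fi
}

# Typical invocation against the running storage component container:
# docker logs $(docker ps | grep sc | awk '{print $1}') 2>&1 | detect_minio
```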

Adjustments to HAP Microservices

The Flink service requires access to the MinIO, Kafka, and MongoDB services. In single-node mode, you need to map the ports of these three services out of the storage component (sc) container. In cluster mode, no adjustments are necessary.

Please refer to the configuration example below to modify the docker-compose.yaml file:

  1. Add port mappings for MinIO, Kafka, and MongoDB services.

    Flink needs access to MongoDB services for the aggregation table functionality. If you do not use the aggregation table functionality, you can skip the MongoDB port mapping.

    If you plan to expose the MongoDB port, note that the MongoDB service integrated in single-node mode does not have authentication enabled by default. Before taking any actions, review the Data Security documentation carefully.

  2. Configure the environment variables for HAP to connect to the Flink service.

    app:
      environment:
        ENV_FLINK_URL: http://192.168.10.30:58081 # Newly added. Specify the Flink service URL. Update the IP address accordingly.

    sc:
      ports:
        - 9010:9010 # Newly added MinIO port mapping
        - 9092:9092 # Newly added Kafka port mapping
        - 27017:27017 # Newly added MongoDB port mapping

    Example docker-compose.yaml Configuration File
    version: '3'

    services:
      app:
        image: nocoly/hap:7.2.4
        environment: &app-environment
          ENV_ADDRESS_MAIN: "https://hap.domain.com"
          ENV_APP_VERSION: "7.2.4"
          ENV_API_TOKEN: "******"
          ENV_FLINK_URL: http://192.168.10.30:58081 # Newly added. Specify the Flink service URL. Update the IP address accordingly.
        ports:
          - 8880:8880
        volumes:
          - ./volume/data/:/data/
          - ../data:/data/hap/data

      sc:
        image: nocoly/sc:3.2.0
        environment:
          <<: *app-environment
        volumes:
          - ./volume/data/:/data/
        ports:
          - 9010:9010 # Newly added MinIO port mapping
          - 9092:9092 # Newly added Kafka port mapping
          - 27017:27017 # Newly added MongoDB port mapping

After modification, execute bash service.sh restartall in the Installation Manager directory to restart the microservices for changes to take effect.
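After the restart, a quick way to confirm the new port mappings took effect is to probe them from the new Flink server. This is a hedged sketch using bash's built-in `/dev/tcp` pseudo-device; the host and port values mirror the mappings added above, and 192.168.10.28 stands in for your HAP server's actual internal IP.

```shell
# check_port succeeds if a TCP connection to host $1, port $2 can be opened.
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Typical use from the new Flink server (substitute your HAP server's IP):
# for port in 9010 9092 27017; do
#   check_port 192.168.10.28 "$port" && echo "port $port reachable" || echo "port $port NOT reachable"
# done
```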

Deploy the Flink Service on the New Server

  1. Initialize the swarm environment:

    docker swarm init
  2. Create directories:

    mkdir -p /data/hap/script/volume/data
  3. Download the Flink image (Offline package download):

    docker pull nocoly/flink:1.19.720
  4. Create the configuration file:

    cat > /data/hap/script/flink.yaml <<EOF
    version: '3'
    services:
      flink:
        image: nocoly/flink:1.19.720
        entrypoint: ["/bin/bash"]
        command: ["/run.sh"]
        environment:
          ENV_FLINK_S3_ACCESSKEY: "mdstorage"
          ENV_FLINK_S3_SECRETKEY: "eBxExGQJNhGosgv5FQJiVNqH"
          ENV_FLINK_S3_SSL: "false"
          ENV_FLINK_S3_PATH_STYLE_ACCESS: "true"
          ENV_FLINK_S3_ENDPOINT: "sc:9010"
          ENV_FLINK_S3_BUCKET: "mdoc"
          ENV_FLINK_LOG_LEVEL: "INFO"
          ENV_FLINK_JOBMANAGER_MEMORY: "4096m"
          ENV_FLINK_TASKMANAGER_MEMORY: "16384m"
          ENV_FLINK_TASKMANAGER_SLOTS: "50"
          ENV_KAFKA_ENDPOINTS: "sc:9092"
        ports:
          - 58081:8081
        volumes:
          - ./volume/data/:/data/
        extra_hosts:
          - "sc:192.168.10.28" # Update this to the actual internal IP address of the HAP server
    EOF
  5. Configure the startup script:

    cat > /data/hap/script/startflink.sh <<-EOF
    docker stack deploy -c /data/hap/script/flink.yaml flink
    EOF
    chmod +x /data/hap/script/startflink.sh
  6. Launch the Flink service:

    bash /data/hap/script/startflink.sh
    • Once the Flink container starts, it takes about 5 minutes for the process to complete initialization.
    • To stop the Flink service, use the command: docker stack rm flink.
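Since initialization takes about 5 minutes, a retry loop is handy for confirming the service is actually up rather than probing once. The sketch below is a hypothetical helper, not part of the product; the probe shown in the usage comment targets `/overview`, Flink's standard REST endpoint, on the 58081 port mapped in flink.yaml.

```shell
# wait_for retries a probe command until it succeeds or attempts run out.
# $1 = max attempts, $2 = delay in seconds between attempts,
# remaining arguments = the probe command to run.
wait_for() {
  attempts=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Typical use (60 attempts, 5 s apart, i.e. up to 5 minutes):
# wait_for 60 5 curl -fsS http://192.168.10.30:58081/overview && echo "Flink is up"
```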