How to Enable Custom File Object Storage

HAP supports replacing the default MinIO-based storage service with a custom S3-compatible object storage service, such as Alibaba Cloud OSS, Tencent Cloud COS, Huawei Cloud OBS, or Qiniu Cloud. Follow the steps below to enable this feature.

  1. Create a configuration file s3-config.json on the server, using the following template (Alibaba Cloud OSS as an example):

      {
        "mode": 1,
        "accessKeyID": "${Key}",
        "secretAccessKey": "${Secret}",
        "bucketEndPoint": "oss-cn-beijing.aliyuncs.com",
        "bucketName": {
          "mdmedia": "oss-mdtest",
          "mdpic": "oss-mdtest",
          "mdpub": "oss-mdtest",
          "mdoc": "oss-mdtest"
        },
        "region": "oss-cn-beijing"
      }

    Note: The HAP system currently uses 4 buckets: mdmedia, mdpic, mdpub, and mdoc. Use the bucketName field in the configuration file to map each of them to the actual bucket you use.
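
    The four logical buckets do not have to share a single physical bucket. As a sketch, the mapping could instead point each one to its own bucket; the bucket names below are purely illustrative:

      "bucketName": {
        "mdmedia": "hap-media",
        "mdpic": "hap-pic",
        "mdpub": "hap-pub",
        "mdoc": "hap-doc"
      }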

  2. Mount the Configuration

    In standalone mode, modify the docker-compose.yaml for the microservice application to mount the s3-config.json configuration file to the path /usr/local/file/s3-config.json inside the hap-sc container. In cluster mode, modify the yaml file corresponding to the file storage service.

      sc:
        volumes:
          - ${Path where the s3-config.json file is located on the host}:/usr/local/file/s3-config.json
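
    For example, assuming the configuration file was saved to /data/hap/s3-config.json on the host (this path is only illustrative), the mount entry would look like this:

      sc:
        volumes:
          - /data/hap/s3-config.json:/usr/local/file/s3-config.json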
  3. Initialize Prefabricated Files

    You can use the tools provided by your object storage vendor to upload the prefabricated file package to the corresponding cloud buckets, following the bucket mapping in the s3-config.json file. For example: HAP Private Deployment Alibaba Cloud OSS Initialization Guide

    If you are migrating file storage for an HAP system that is already in use, you need to use the mc command inside the hap-sc container to migrate the bucket data from the built-in MinIO service to your object storage buckets according to the mapping in the s3-config.json file, as sketched below.
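
    A minimal sketch of such a migration with mc. The alias name local for the built-in MinIO service, the target endpoint, and the credentials are assumptions and placeholders; replace them with your own values and repeat the mirror command for every bucket mapping:

      # Inside the hap-sc container: register the target object storage under a new alias
      mc alias set target https://oss-cn-beijing.aliyuncs.com ${Key} ${Secret}

      # Mirror a built-in MinIO bucket to the bucket it is mapped to in s3-config.json
      # (shown here for mdmedia; repeat for mdpic, mdpub, and mdoc)
      mc mirror local/mdmedia target/oss-mdtest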

  4. Restart the HAP Microservices
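
    The exact restart procedure depends on your deployment. As a sketch for standalone mode, recreating the containers defined in the microservice docker-compose.yaml is usually sufficient (adjust the working directory to your installation; in cluster mode, restart the corresponding services with your own tooling):

      # Run from the directory containing the microservice docker-compose.yaml
      docker compose down
      docker compose up -d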