How to Enable Custom File Object Storage
HAP supports using custom file object storage (based on the S3 standard, such as Alibaba Cloud OSS, Tencent Cloud COS, Huawei Cloud OBS, Qiniu Cloud, etc.) to replace the default storage service (based on MinIO). You can enable this feature by following the steps below.
- Create a Configuration File

  Create a configuration file `s3-config.json` on the server. The template is as follows (using Alibaba Cloud OSS as an example):

  ```json
  {
    "mode": 1,
    "accessKeyID": "${Key}",
    "secretAccessKey": "${Secret}",
    "bucketEndPoint": "oss-cn-beijing.aliyuncs.com",
    "bucketName": {
      "mdmedia": "oss-mdtest",
      "mdpic": "oss-mdtest",
      "mdpub": "oss-mdtest",
      "mdoc": "oss-mdtest"
    },
    "region": "oss-cn-beijing"
  }
  ```

  Note: The HAP system currently uses 4 buckets: `mdmedia`, `mdpic`, `mdpub`, and `mdoc`. Map them to the buckets you actually use via the `bucketName` field of the configuration file. (An optional connectivity check for this configuration is sketched after this list.)
- Mount the Configuration

  In standalone mode, modify the `docker-compose.yaml` of the microservice application to mount the `s3-config.json` configuration file to the path `/usr/local/file/s3-config.json` inside the `hap-sc` container. In cluster mode, modify the YAML file corresponding to the file storage service instead.

  ```yaml
  sc:
    volumes:
      - ${Path where the s3-config.json file is located on the host}:/usr/local/file/s3-config.json
  ```
- Initialize Prefabricated Files

  Use the tools provided by your object storage vendor to upload the prefabricated file package to the corresponding cloud buckets, following the bucket mapping in the `s3-config.json` file. For example: HAP Private Deployment Alibaba Cloud OSS Initialization Guide.

  If you are migrating the file storage of an HAP system that is already in use, use the `mc` command inside the `hap-sc` container to migrate the bucket data from the built-in MinIO service to your object storage buckets, again following the mapping in the `s3-config.json` file. (An illustrative scripted alternative is sketched after this list.)
- Restart the HAP Microservices
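
The following is a minimal sketch of the connectivity check mentioned in the first step: it reads `s3-config.json` and verifies that each of the four logical buckets maps to a bucket reachable with the configured credentials. It assumes Python 3 with the `boto3` package installed on a machine that can reach the endpoint, and that the endpoint is served over HTTPS; it is illustrative only and not part of the HAP tooling.

```python
# Minimal sanity check (assumption: boto3 is installed and the endpoint uses HTTPS).
# Reads s3-config.json and verifies that every logical HAP bucket maps to a
# bucket reachable with the configured credentials.
import json

import boto3
from botocore.exceptions import ClientError

with open("s3-config.json") as f:
    cfg = json.load(f)

s3 = boto3.client(
    "s3",
    endpoint_url="https://" + cfg["bucketEndPoint"],
    aws_access_key_id=cfg["accessKeyID"],
    aws_secret_access_key=cfg["secretAccessKey"],
    region_name=cfg["region"],
)

for logical in ("mdmedia", "mdpic", "mdpub", "mdoc"):
    actual = cfg["bucketName"].get(logical)
    if not actual:
        print(f"{logical}: no mapping in bucketName")
        continue
    try:
        # head_bucket raises ClientError if the bucket is missing or access is denied
        s3.head_bucket(Bucket=actual)
        print(f"{logical} -> {actual}: reachable")
    except ClientError as exc:
        print(f"{logical} -> {actual}: {exc}")
```

If any bucket reports an error, correct the mapping, credentials, or bucket permissions before restarting the services.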
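For the migration mentioned in the third step, the `mc` command inside the `hap-sc` container is the documented route. As an alternative illustration only, the sketch below copies objects bucket by bucket with `boto3`; the built-in MinIO endpoint and credentials shown are placeholders (assumptions, not values from this guide) and must be replaced with the values used by your deployment.

```python
# Illustrative migration sketch (NOT the documented `mc` route). Copies every
# object from each built-in MinIO bucket to the mapped target bucket.
# The MinIO endpoint and credentials below are placeholders; the target values
# come from the s3-config.json example above and should match your own file.
import boto3

source = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:9000",       # placeholder: built-in MinIO address
    aws_access_key_id="minio-access-key",       # placeholder credential
    aws_secret_access_key="minio-secret-key",   # placeholder credential
)
target = boto3.client(
    "s3",
    endpoint_url="https://oss-cn-beijing.aliyuncs.com",
    aws_access_key_id="${Key}",                 # placeholders from s3-config.json
    aws_secret_access_key="${Secret}",
    region_name="oss-cn-beijing",
)

# Mapping taken from s3-config.json: built-in bucket -> target bucket.
mapping = {
    "mdmedia": "oss-mdtest",
    "mdpic": "oss-mdtest",
    "mdpub": "oss-mdtest",
    "mdoc": "oss-mdtest",
}

for src_bucket, dst_bucket in mapping.items():
    paginator = source.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=src_bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            body = source.get_object(Bucket=src_bucket, Key=key)["Body"]
            target.upload_fileobj(body, dst_bucket, key)
            print(f"copied {src_bucket}/{key} -> {dst_bucket}/{key}")
```

Because all four source buckets in the example map to the same target bucket, their objects end up merged in it. For large data volumes, the `mc`-based approach inside the container (e.g. `mc mirror`) is the more robust choice.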