How to Enable Custom File Object Storage
HAP supports replacing the default MinIO-based storage service with custom file object storage based on the S3 standard, such as Alibaba Cloud OSS, Tencent Cloud COS, Huawei Cloud OBS, or Qiniu Cloud. You can enable this feature by following the steps below.
- **Create a configuration file**

  Create a file named `s3-config.json` on the server, using the template below (taking Alibaba Cloud OSS as an example):

  ```json
  {
    "mode": 1,
    "accessKeyID": "${Key}",
    "secretAccessKey": "${Secret}",
    "bucketEndPoint": "oss-cn-beijing.aliyuncs.com",
    "bucketName": {
      "mdmedia": "oss-mdtest",
      "mdpic": "oss-mdtest",
      "mdpub": "oss-mdtest",
      "mdoc": "oss-mdtest"
    },
    "region": "oss-cn-beijing"
  }
  ```

  Note: The HAP system currently uses 4 buckets: `mdmedia`, `mdpic`, `mdpub`, and `mdoc`. Map them to the actual buckets you use via the `bucketName` field of the configuration file.
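Before mounting the file, it can be worth sanity-checking its contents. The sketch below is not part of HAP — the function name and checks are illustrative assumptions; it only verifies that the template's top-level fields and the four logical bucket mappings are present:

```python
import json

# Field names taken from the s3-config.json template above.
REQUIRED_FIELDS = {"mode", "accessKeyID", "secretAccessKey",
                   "bucketEndPoint", "bucketName", "region"}
# The 4 logical buckets the HAP system expects to find mapped.
REQUIRED_BUCKETS = {"mdmedia", "mdpic", "mdpub", "mdoc"}

def validate_s3_config(text: str) -> list[str]:
    """Return a list of problems found in the config text; empty means OK."""
    try:
        cfg = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - cfg.keys())]
    buckets = cfg.get("bucketName", {})
    problems += [f"missing bucket mapping: {b}"
                 for b in sorted(REQUIRED_BUCKETS - buckets.keys())]
    return problems

sample = """{
  "mode": 1,
  "accessKeyID": "${Key}",
  "secretAccessKey": "${Secret}",
  "bucketEndPoint": "oss-cn-beijing.aliyuncs.com",
  "bucketName": {"mdmedia": "oss-mdtest", "mdpic": "oss-mdtest",
                 "mdpub": "oss-mdtest", "mdoc": "oss-mdtest"},
  "region": "oss-cn-beijing"
}"""
print(validate_s3_config(sample))  # []
```

This catches the most common mistake — forgetting one of the four bucket mappings — before the container ever reads the file.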
- **Mount the Configuration**

  In standalone mode, modify the `docker-compose.yaml` of the microservice application to mount the `s3-config.json` file to the path `/usr/local/file/s3-config.json` inside the `hap-sc` container. In cluster mode, modify the YAML file of the file storage service instead.

  ```yaml
  sc:
    volumes:
      - ${Path where the s3-config.json file is located on the host}:/usr/local/file/s3-config.json
  ```
- **Initialize Prefabricated Files**

  Use the tools provided by your object storage vendor to upload the prefabricated file package to the corresponding cloud buckets, following the bucket mapping in the `s3-config.json` file. For example: HAP Private Deployment Alibaba Cloud OSS Initialization Guide.

  If you are migrating file storage for an HAP system that is already in use, you also need to use the `mc` command inside the `hap-sc` container to migrate the bucket data from the built-in MinIO service to your object storage buckets, following the mapping in `s3-config.json`.
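One way to keep the migration consistent with the configuration is to derive the `mc mirror` invocations directly from the `bucketName` mapping. The sketch below is an assumption-laden helper, not part of HAP: the alias names `miniolocal` and `target` are placeholders that would first have to be registered inside the `hap-sc` container with `mc alias set`, and the exact target layout depends on your object storage setup.

```python
# bucketName mapping copied from the s3-config.json template above.
bucket_map = {
    "mdmedia": "oss-mdtest",
    "mdpic": "oss-mdtest",
    "mdpub": "oss-mdtest",
    "mdoc": "oss-mdtest",
}

def mirror_commands(mapping: dict[str, str],
                    src_alias: str = "miniolocal",
                    dst_alias: str = "target") -> list[str]:
    """Build one `mc mirror` command per logical bucket.

    src_alias/dst_alias are assumed mc aliases for the built-in MinIO
    service and the external object storage, respectively.
    """
    return [f"mc mirror {src_alias}/{src} {dst_alias}/{dst}"
            for src, dst in mapping.items()]

for cmd in mirror_commands(bucket_map):
    print(cmd)
```

Generating the commands from the same mapping the service reads avoids copying bucket names by hand into the shell and mirroring a bucket to the wrong destination.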
- **Restart the HAP Microservices**