Backup

  • The directories below use the timestamp 20221111184140 as an example; the format is year, month, day, hour, minute, second.
  • The examples assume the default data directory /data/hap/ on the host. You can confirm the actual directory with cat /etc/pdcaptain.json or cat service.sh | grep installDir= in the root of the manager directory (see the sketch after this list).
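A minimal sketch of that directory check, assuming /etc/pdcaptain.json and service.sh exist at the locations described above; the grep pattern used against the JSON file is only an assumption about the key name:

# Confirm the host data directory before backing up
cat /etc/pdcaptain.json | grep -i installDir

# Or, from the root of the manager directory:
cat service.sh | grep installDir=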

Dump (available in v3.7.1+)

There is no need to stop the HAP service.

docker exec -it $(docker ps | grep community | awk '{print $1}') bash -c 'source /entrypoint.sh && backup mysql mongodb file'
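The same command, split into two steps for readability; this assumes exactly one running container whose image name contains "community":

# Locate the HAP community container and run the built-in backup routine
CONTAINER_ID=$(docker ps | grep community | awk '{print $1}')
docker exec -it "$CONTAINER_ID" bash -c 'source /entrypoint.sh && backup mysql mongodb file'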

By default, the backup files are written to /data/hap/script/volume/data/backup/20221111184140/. If you need to change the path, add a new mount in docker-compose.yaml as follows:

volumes:
  - /backup/:/data/backup/

In this case, the backup files will be generated under /backup/ on the host.
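An optional check, assuming the mount above is in place and a backup has just been run, that the timestamped directory landed on the host:

# List backups on the host; each run creates a timestamped directory
ls -lh /backup/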

backup/
└── 20221201193829
    ├── backupFile.log
    ├── backupMongodb.log
    ├── backupMysql.log
    ├── file
    │   └── ...
    ├── mongodb
    │   └── ...
    └── mysql
        └── ...

In the backup directory, execute the command tar -zcvf 20221201193829.tar.gz ./20221201193829 to compress the dumped files.
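An optional sanity check, not part of the original steps, to confirm the archive is readable before moving it elsewhere:

# List the first entries of the archive to verify it was written correctly
tar -tzf 20221201193829.tar.gz | head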

Copy File

In the directory /data/hap/script/volume/data/, the mysql, mongodb, storage, kafka, zookeeper, redis, and elasticsearch-8 folders all contain data that needs to be backed up.

Stop the HAP service and execute the following command to pack the data:

mkdir -p /backup && cd /data/hap/script/volume/data/ && tar -zcvf /backup/20221111184140.tar.gz ./mysql ./mongodb ./storage ./kafka ./zookeeper ./redis ./elasticsearch-8
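A minimal sketch of the same cold-backup step, assuming the HAP service has already been stopped; the timestamp variable is only a convenience and is not part of the original command:

# Generate a timestamp for the archive name (the docs use 20221111184140 as an example)
BACKUP_TS=$(date +%Y%m%d%H%M%S)
mkdir -p /backup
cd /data/hap/script/volume/data/
tar -zcvf /backup/${BACKUP_TS}.tar.gz ./mysql ./mongodb ./storage ./kafka ./zookeeper ./redis ./elasticsearch-8
# Confirm the archive was created
ls -lh /backup/${BACKUP_TS}.tar.gz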