Deployment Issues
How to reinstall?
- Stop any HAP service that may be running: execute `bash ./service.sh stopall` in the root directory of the manager (the output stops if successful);
- Back up the HAP service files: `mv /data/hap/ /home/hapbak/` (the backup target location is customizable; a backup is usually unnecessary for a first deployment, in which case you can simply run `rm -rf /data/hap/`);
- Confirm again that everything is cleaned up: execute `docker ps | grep hap`, `netstat -ntpl | grep 38881`, and `ps -ef | grep 'hap\|service.sh' | grep -v grep` respectively, and make sure each produces no output;
- Restart the manager with `bash ./service.sh start` and visit `http://{server IP}:38881` to install again.
Failed to initialize?
You can check the reason for the failure by executing `cat /data/hap/script/hap.log`. In most cases it is caused by insufficient disk space, a missing image, an occupied port, iptables failures, etc. If iptables failed, the usual cause is that disabling the firewall cleared the iptables rules; restart Docker to regenerate the default iptables rules, then [reinstall](#How to reinstall) after the problem is fixed.
Unable to access the Internet in the container?
When firewalld is used as the firewall, the container may be unable to access the external network. (Under normal circumstances, HAP services do not need to interoperate with the Internet, but some system features inevitably do, such as sending mail and SMS.) In that case, add the following firewall rules; execute both commands (172.16.0.1/12 is the network segment used in the container):
Allow the container network segment to access the Internet (permanent; persists across reboots)
firewall-cmd --zone=trusted --add-source=172.16.0.1/12 --permanent
Allow the container network segment to access the Internet (takes effect immediately; not persistent)
firewall-cmd --zone=trusted --add-source=172.16.0.1/12
How to configure startup on boot (CentOS as an example)?
Modify `/etc/rc.d/rc.local` and make sure the file has execute permissions (`chmod +x /etc/rc.d/rc.local`), then append the following script:
sleep 30
docker system prune -f
/bin/rm -f {manager absolute path}/service.pid
/bin/bash {manager absolute path}/service.sh startall
HAP service does not start properly after restarting the server?
Execute `bash ./service.sh stopall` in the root directory of the manager. If `service.pid` still exists, delete it with `rm -f service.pid`, then execute `bash ./service.sh startall` and wait for the command to finish.
What if the key is lost and the server ID is not displayed?
- Stop the service: execute `bash ./service.sh stopall` in the root directory of the manager;
- Execute `ps -ef | grep 'hap\|service.sh' | grep -v grep` (if there is any output, kill all corresponding PIDs);
- Execute `bash ./service.sh startall` and wait for the command to finish.
After restarting the HAP service following a power failure or other unexpected event, workflow, statistics, and other features still do not work properly?
- Stop the service: execute `bash ./service.sh stopall` in the root directory of the manager;
- Back up `/data/hap/`;
- Execute `rm -rf /data/hap/script/volume/data/{kafka,zookeeper}/*` to remove the abnormal data from the message queue; under normal conditions this causes no data loss (unless there is an unfinished workflow);
- Execute `bash ./service.sh startall` in the root directory of the manager and wait for the command to finish.
Documents cannot be previewed online?
The document preview service needs to read files from the microservice application, so if the intranet environment cannot reach the system's external access address, document previews fail. In this case, add the environment variable `ENV_FILE_INNER_URI` to the doc service in `docker-compose.yaml` to specify the system's intranet access address. Refer to the following:
services:
  doc:
    environment:
      ENV_FILE_INNER_URI: "10.140.100.6:8880"
Failed to export a worksheet as Excel?
Some worksheets contain large amounts of data, so exports may fail with a 504 Gateway Time-out or similar error. This is generally caused by the proxy layer's default timeout and file size limits; the following rules can be added (nginx as an example):
location ~ /excelapi {
    proxy_set_header Host $http_host;
    proxy_read_timeout 1800s;
    client_max_body_size 256m;
    proxy_pass http://hap; # adjust to your actual upstream name
}
Interface timed out for uploading attachments?
Files larger than 4 MB are uploaded in slices (chunks) by default. In some cases the network may cause a timeout while uploading a particular slice, so the whole file fails to upload. To solve this, add the following rules (nginx as an example):
location ~ /file {
    proxy_set_header Host $http_host;
    proxy_read_timeout 3600s;
    client_max_body_size 20480m;
    proxy_pass http://hap; # adjust to your actual upstream name
}
Failed to load workflow list, or unable to download attachments?
When a proxy is placed in front of the HAP service, make sure the relevant environment variables in `/data/hap/script/docker-compose.yaml` match the address used to access the system through the proxy, and make sure the proxy configuration contains all of the recommended settings. See "set proxy" for more details.
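For illustration, assuming the proxy exposes the system at https://hap.domain.com (a placeholder domain), the variables in docker-compose.yaml would be kept in line with that address, e.g.:

```yaml
services:
  app:
    environment:
      # These must match the address users actually visit through the proxy.
      # hap.domain.com is a placeholder - substitute your real proxy address.
      ENV_MINGDAO_PROTO: "https"
      ENV_MINGDAO_HOST: "hap.domain.com"
      ENV_MINGDAO_PORT: "443"
```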
How to enable subpath deployment?
Adjust the environment variable `ENV_MINGDAO_SUBPATH` in the `docker-compose.yaml` corresponding to the microservice application, as follows:
services:
  app:
    environment:
      ENV_MINGDAO_SUBPATH: "/hap" # Add, e.g. /hap
How to enable two access addresses?
Set the environment variables `ENV_EXT_MINGDAO_PROTO`, `ENV_EXT_MINGDAO_HOST`, and `ENV_EXT_MINGDAO_PORT` in the `docker-compose.yaml` corresponding to the microservice application (a second set of configuration, parallel to `ENV_MINGDAO_PROTO`, `ENV_MINGDAO_HOST`, and `ENV_MINGDAO_PORT`). Expose port 18880 (the host port is customizable; 18880 is used here) and resolve `http://hap1.domain.com` to port 18880 of the host (if you access the host's internal or external IP directly, you can skip the DNS configuration), as follows:
services:
  app:
    environment:
      ENV_EXT_MINGDAO_PROTO: "http"
      ENV_EXT_MINGDAO_HOST: "hap1.domain.com"
      ENV_EXT_MINGDAO_PORT: "80"
    ports:
      - 8880:8880
      - 18880:18880
How to modify the default storage path?
New installation
Version 3.6.0 and later
Before starting the HAP manager (i.e. before executing `./service.sh start`), modify the installDir parameter value in `service.sh`.
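As a sketch (the exact parameter syntax may differ between manager versions, so treat this as an assumption to verify against your own service.sh), the change might look like:

```shell
# service.sh (excerpt) - hypothetical illustration only;
# check your actual service.sh for the exact parameter syntax
installDir=/app/hap   # default is /data/hap; set this before the first start
```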
Version before 3.6.0
Before starting the HAP manager (i.e. before executing `./service.sh start`), create `/etc/pdcaptain.json` and specify the dataDir property. After startup, the related files are stored in the configured directory, e.g. `/app/hap` with the following configuration:
{
  "dataDir": "/app/hap"
}
Migration
Version 3.6.0 and later
For an already installed HAP, modify the installDir parameter value in `service.sh`. With the HAP service stopped, move all files in `/data/hap` to the installDir location and restart the service.
Version before 3.6.0
For an already installed HAP, create `/etc/pdcaptain.json` and specify the data storage directory in dataDir. With the HAP service stopped, move all files in `/data/hap` to the dataDir location and restart the service.
How to access each storage component externally?
In standalone deployment mode, the dependent storage components (`mysql`, `mongodb`, `redis`, `kafka`, `file`, `elasticsearch`) are started inside the container by default, and their ports are not exposed externally. If you need external connections, expose the corresponding ports by modifying the ports field in the `docker-compose.yaml` corresponding to the microservice application, as follows:
Please read the documentation about data security carefully before this operation ⚠️⚠️⚠️
services:
  app:
    ports:
      - 8880:8880
      - 3306:3306 # mysql
      - 27017:27017 # mongodb
      - 6379:6379 # redis
      - 9092:9092 # kafka
      - 9000:9000 # file
      - 9200:9200 # elasticsearch
How to partially enable external storage components in standalone deployment mode?
Set the environment variable `ENV_STANDALONE_DISABLE_SERVICES` in the `docker-compose.yaml` corresponding to the microservice application. The supported values are `mysql`, `mongodb`, `redis`, `kafka`, `file`, and `elasticsearch`, as follows:
services:
  app:
    environment:
      ENV_STANDALONE_DISABLE_SERVICES: "redis,file" # separate multiple values with commas
To use a custom storage component in its place, configure the connection address of the corresponding service via environment variables; refer to the environment variable description.
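As a sketch only — the connection-variable name below is hypothetical, so look up the real key in the environment variable description for your version — disabling the built-in redis and pointing HAP at an external instance might look like:

```yaml
services:
  app:
    environment:
      ENV_STANDALONE_DISABLE_SERVICES: "redis"
      # The variable below is a hypothetical placeholder; consult the
      # environment variable description for the actual variable name.
      # ENV_REDIS_ENDPOINTS: "192.168.1.20:6379"
```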
How to customize the maximum memory for mongodb?
Set the environment variable `ENV_MONGODB_CACHEGB` in the `docker-compose.yaml` corresponding to the microservice application. It defaults to (physical memory − 10) / 2 (in GB), as follows:
services:
  app:
    environment:
      ENV_MONGODB_CACHEGB: "3"
How to be referenced by other systems via IFrame?
By default, embedding via IFrame is only supported from the same domain. To allow other systems to embed HAP, set the environment variable `ENV_FRAME_OPTIONS` by modifying the `docker-compose.yaml` corresponding to the microservice application. The supported values are `ALLOWALL`, `SAMEORIGIN`, `DENY`, and `ALLOW-FROM uri`.
services:
  app:
    environment:
      ENV_FRAME_OPTIONS: "ALLOWALL"
How to set an IP whitelist access policy?
To restrict access based on the source IP, configure the restriction at the first proxy layer that the client reaches. Configuration examples:
# Example 1
http {
    server {
        ......
        # Allow access from IP 192.168.0.1
        allow 192.168.0.1;
        # Allow access from IPs within the 192.168.0.1/32 subnet
        allow 192.168.0.1/32;
        # Deny access from all other IP addresses
        deny all;
        location / {
            ......
        }
    }
}
# Example 2
http {
    server {
        ......
        location / {
            # Allow access from IP 192.168.0.1
            allow 192.168.0.1;
            # Allow access from IPs within the 192.168.0.1/32 subnet
            allow 192.168.0.1/32;
            # Deny access from all other IP addresses
            deny all;
        }
    }
}
How to customize the timeout for webhook execution in workflows?
Set the environment variable ENV_WORKFLOW_WEBHOOK_TIMEOUT
in docker-compose.yaml
corresponding to the microservice application, with a default of 10 (in seconds), as follows:
services:
  app:
    environment:
      ENV_WORKFLOW_WEBHOOK_TIMEOUT: "30"
How to customize the timeout for code block execution in workflows?
Set the environment variable ENV_WORKFLOW_COMMAND_TIMEOUT
in docker-compose.yaml
corresponding to the microservice application, with a default of 10 (in seconds), as follows:
services:
  app:
    environment:
      ENV_WORKFLOW_COMMAND_TIMEOUT: "30"
How to customize the maximum memory for code block execution in workflows?
Set the environment variable ENV_WORKFLOW_COMMAND_MAXMEMORY
in docker-compose.yaml
corresponding to the microservice application, with a default of 64 (in M), as follows:
services:
  app:
    environment:
      ENV_WORKFLOW_COMMAND_MAXMEMORY: "128"
How to customize consumer threads of message queues in workflows?
Before modifying the number of consumer threads for the message queues, you need to adjust the number of partitions of the related Kafka topics (`WorkFlow`, `WorkFlow-Batch`, `WorkFlow-Button`, `WorkFlow-Process`, `WorkSheet`, `WorkSheet-Batch`), which defaults to 3. The number of consumer threads must not be greater than the number of partitions.
- Enter the container:
docker exec -it $(docker ps | grep '\-sc' | awk '{print $1}') bash
- Execute the following command to see the current number of partitions and the message backlog:
/usr/local/kafka/bin/kafka-consumer-groups.sh --bootstrap-server ${ENV_KAFKA_ENDPOINTS:=127.0.0.1:9092} --describe --group md-workflow-consumer
- If necessary, execute the following commands separately to adjust the number of partitions (e.g. to 10 partitions):
/usr/local/kafka/bin/kafka-topics.sh --alter --zookeeper localhost:2181 --partitions 10 --topic WorkFlow
/usr/local/kafka/bin/kafka-topics.sh --alter --zookeeper localhost:2181 --partitions 10 --topic WorkFlow-Batch
/usr/local/kafka/bin/kafka-topics.sh --alter --zookeeper localhost:2181 --partitions 10 --topic WorkFlow-Button
/usr/local/kafka/bin/kafka-topics.sh --alter --zookeeper localhost:2181 --partitions 10 --topic WorkFlow-Process
/usr/local/kafka/bin/kafka-topics.sh --alter --zookeeper localhost:2181 --partitions 10 --topic WorkSheet
/usr/local/kafka/bin/kafka-topics.sh --alter --zookeeper localhost:2181 --partitions 10 --topic WorkSheet-Batch
- Set the environment variable `ENV_WORKFLOW_CONSUMER_THREADS` in the `docker-compose.yaml` corresponding to the microservice application, with a default of 3 (do not set the value too large; make sure the server can handle it), as follows:
services:
  app:
    environment:
      ENV_WORKFLOW_CONSUMER_THREADS: "10"
How to customize the delay for workflows triggered by worksheet events?
Set the environment variable ENV_WORKFLOW_TRIGER_DELAY_SECONDS
in docker-compose.yaml
corresponding to the microservice application, with a default of 5 (in seconds), as follows:
services:
  app:
    environment:
      ENV_WORKFLOW_TRIGER_DELAY_SECONDS: "1"
How to customize consumer threads of message queues in worksheets?
Before modifying the number of consumer threads for the message queues, you need to adjust the number of partitions of the related Kafka topics (`ws-editcontrols`, `ws-passiverelation`), which defaults to 3. Do not set the number of consumer threads larger than the number of partitions.
- Enter the container:
docker exec -it $(docker ps | grep '\-sc' | awk '{print $1}') bash
- Execute the following commands to see the number of partitions and the message backlog:
/usr/local/kafka/bin/kafka-consumer-groups.sh --bootstrap-server ${ENV_KAFKA_ENDPOINTS:=127.0.0.1:9092} --describe --group worksheet-passiverelation
/usr/local/kafka/bin/kafka-consumer-groups.sh --bootstrap-server ${ENV_KAFKA_ENDPOINTS:=127.0.0.1:9092} --describe --group worksheet-editcontrols
- If necessary, execute the following commands separately to adjust the number of partitions (e.g. to 10 partitions):
/usr/local/kafka/bin/kafka-topics.sh --alter --zookeeper localhost:2181 --partitions 10 --topic ws-editcontrols
/usr/local/kafka/bin/kafka-topics.sh --alter --zookeeper localhost:2181 --partitions 10 --topic ws-passiverelation
- Set the environment variable `ENV_WORKSHEET_CONSUMER_THREADS` in the `docker-compose.yaml` corresponding to the microservice application, with a default of 2 (do not set the value too large; make sure the server can handle it), as follows:
services:
  app:
    environment:
      ENV_WORKSHEET_CONSUMER_THREADS: "10"
How to customize the style of a function module?
- Create a custom style file (e.g. freestyle.css). Use the browser's element inspector to locate element identifiers (html tags, ids, classes), then set custom styles (e.g. hiding function modules, adjusting fonts or background colors):
/* hide the message module */
#chat {display:none!important}
- In the `docker-compose.yaml` corresponding to the microservice application, mount the custom style file into the container at /usr/local/MDPrivateDeployment/www/staticfiles/mdcss/freestyle.css, then restart the service to take effect:
services:
  app:
    volumes:
      - freestyle.css host path:/usr/local/MDPrivateDeployment/www/staticfiles/mdcss/freestyle.css
Go to https://github.com/mingdaocom/pd-openweb for the source code to carry out further custom development.
How to add a global JavaScript script?
By adding a global JavaScript script, you can introduce data analysis engines such as Baidu Analytics, Google Analytics, GrowingIO, Sensors Data, etc. to monitor the usage of the system.
- Create a custom extension script file (e.g. freestyle.js) and copy the third-party script into it. Third-party scripts generally need basic user information such as a unique user identifier, which developers can obtain through the global JavaScript object `md.global.Account` provided by the HAP system.
- In the `docker-compose.yaml` corresponding to the microservice application, mount the extension script file into the container at /usr/local/MDPrivateDeployment/www/staticfiles/mdjs/freestyle.js, then restart the service to take effect:
services:
  app:
    volumes:
      - freestyle.js host path:/usr/local/MDPrivateDeployment/www/staticfiles/mdjs/freestyle.js
Go to https://github.com/mingdaocom/pd-openweb for the source code to carry out further custom development.
How to customize the expiration time of login status?
Set the environment variable ENV_SESSION_TIMEOUT_MINUTES
in docker-compose.yaml
corresponding to the microservice application, with a default of 10080 (in minutes), as follows:
services:
  app:
    environment:
      ENV_SESSION_TIMEOUT_MINUTES: "30"
How to control whether a verification code is required on every login?
Set the environment variable `ENV_LOGIN_CAPTCHA_LIMIT_COUNT` in the `docker-compose.yaml` of the microservice application. If set to 0, a verification code is required on every login, as follows:
services:
  app:
    environment:
      ENV_LOGIN_CAPTCHA_LIMIT_COUNT: "0"
How to customize the number of consecutive login failures before locking, and the lock duration?
Add the environment variables `ENV_LOGIN_LOCK_LIMIT_COUNT` (default: 5) and `ENV_LOGIN_LOCK_MINUTES` (default: 20, in minutes) to the configuration file, as follows:
services:
  app:
    environment:
      ENV_LOGIN_LOCK_LIMIT_COUNT: "4"
      ENV_LOGIN_LOCK_MINUTES: "30"
How to customize the number of consecutive login failures from the same IP before locking, and the lock duration?
Add the environment variables `ENV_LOGIN_IP_LOCK_LIMIT_COUNT` and `ENV_LOGIN_IP_LOCK_MINUTES` (in minutes) to the configuration file, as follows:
To enable this, make sure the proxy layer sets the `X-Real-IP` header (see the proxy reference), which is used to determine whether requests come from the same IP. Also note that in an internal network environment the client IP may be identical for all users, so once enabled, everyone would be affected.
services:
  app:
    environment:
      ENV_LOGIN_IP_LOCK_LIMIT_COUNT: "10"
      ENV_LOGIN_IP_LOCK_MINUTES: "30"
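The X-Real-IP header mentioned above is set at the proxy layer; a minimal nginx sketch (the upstream name `hap` is an assumption carried over from the earlier examples):

```nginx
location / {
    # Pass the real client address so HAP can count failures per IP
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass http://hap; # adjust to your actual upstream name
}
```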
How to customize the blacklist and whitelist for uploaded file formats?
Set the environment variables `ENV_FILEEXT_BLOCKLIST` (blacklist, default value: .exe,.vbs,.bat,.cmd,.com,.sh) and `ENV_FILEEXT_ALLOWLIST` (whitelist) in the `docker-compose.yaml` corresponding to the microservice application (standalone mode) or to the file storage service (cluster mode) to control which file formats can be uploaded. When `ENV_FILEEXT_ALLOWLIST` is set, `ENV_FILEEXT_BLOCKLIST` is automatically ignored, as follows:
services:
  app:
    environment:
      ENV_FILEEXT_BLOCKLIST: ".exe,.sh,.html"
      ENV_FILEEXT_ALLOWLIST: ".docx,.txt,.png"
How to customize the expiration time of token for attachment uploads or downloads?
Set the environment variable `ENV_FILE_UPLOAD_TOKEN_EXPIRE_MINUTES` in the `docker-compose.yaml` corresponding to the microservice application; the expiration time of the file upload token defaults to 120 (in minutes, maximum: 5256000). Likewise, set `ENV_FILE_DOWNLOAD_TOKEN_EXPIRE_MINUTES` for the file download token, which defaults to 60 (in minutes, maximum: 5256000), as follows:
services:
  app:
    environment:
      ENV_FILE_UPLOAD_TOKEN_EXPIRE_MINUTES: "10"
      ENV_FILE_DOWNLOAD_TOKEN_EXPIRE_MINUTES: "10"
How to enable the OCR control when configuring a form?
- Enable the Tencent Cloud General OCR service at https://cloud.tencent.com/product/generalocr
- In the menu under the user name in the upper right corner, go to [Access Management] > [Access Key] > [API Key Management] to create a new key or retrieve an existing one
- Set the environment variables `ENV_OCR_SECRETID` (the SecretId of the key) and `ENV_OCR_SECRETKEY` (the SecretKey of the key) in the `docker-compose.yaml` of the microservice application:
services:
  app:
    environment:
      ENV_OCR_SECRETID: "SecretId"
      ENV_OCR_SECRETKEY: "SecretKey"
How to configure an AMap API Key?
The default AMap API Key in the system is shared and has a call quota; exceeding it may cause the related functions to stop working. It is therefore recommended to configure your own API Key (additional quota can be purchased if your usage exceeds the limit).
- Register a developer account (personal or corporate) on the AMap Open Platform: https://lbs.amap.com
- Create an application and add a key, selecting Web (JS API) as the service platform; upon successful creation, a key and a security key are generated
- Set the environment variables `ENV_AMAP_APP_KEY` and `ENV_AMAP_SECRET_KEY` in the YAML configuration file corresponding to the microservice application:
services:
  app:
    environment:
      ENV_AMAP_APP_KEY: "Key"
      ENV_AMAP_SECRET_KEY: "Security Key"
How to enable AI features?
In HAP, worksheet field suggestions, automatic generation of workflow code blocks, and intelligent multi-language translation can all be assisted by AI. Enabling them requires adding the environment variables `ENV_OPENAI_URL` and `ENV_OPENAI_KEY` (the model parameter in `ENV_OPENAI_URL` is the name of the model to use and can be customized).
services:
  app:
    environment:
      ENV_OPENAI_URL: "https://api.openai.com/v1/chat/completions?model=gpt-3.5-turbo"
      ENV_OPENAI_KEY: "API key"
How to reference online translation resource files?
By default, translation resources come from the files shipped with the microservice image version. Because version iteration cycles are unpredictable, the product supports configuring an external translation resource address (which is updated much more frequently) so that the latest translation files can be referenced quickly.
Add the environment variable `ENV_TRANSLATION_SOURCE_HOST` to the YAML configuration file corresponding to the microservice application, as follows:
services:
  app:
    environment:
      ENV_TRANSLATION_SOURCE_HOST: "https://file.domain.com/pd"
Attention:
- The deployment environment must be able to access https://file.domain.com
- This is not recommended when the front-end has been customized through secondary development (texts may diverge and translations may mismatch)
How to change the Web front-end to reference CDN resources?
- Purchase a CDN service from a cloud marketplace (e.g. Tencent Cloud, Alibaba Cloud); skip this step if you already have one
- Set the acceleration domain name and configure the origin (back-to-source) site as the access address of the HAP system
- Set the environment variable `ENV_CDN_URI` in the `docker-compose.yaml` corresponding to the microservice application, e.g. http://hapcdn.domain.com:
services:
  app:
    environment:
      ENV_CDN_URI: "acceleration domain name"