This article describes how to upgrade the Smart ID Digital Access component from version 6.0.0 and above to 6.0.5 and above, for a single node appliance as well as a High Availability (HA) or distributed setup.
You only need to perform these steps once to set up the system to use docker and swarm. Once this is done, future upgrades become much easier.
There are two options, described below, for upgrading from 6.0.0-6.0.4 to 6.0.5 and above:
- Migrate - This section describes how to upgrade, as well as migrate, the Digital Access instance from appliance to a new Virtual Machine (VM) by exporting all data/configuration files with the help of the script provided. (Recommended)
- Upgrade - This section describes how to upgrade Digital Access in the existing appliance to the newer version.
Download latest updated scripts
Make sure you download the upgrade.tgz file again if you downloaded it before 29 October 2021, so that you get the latest updated scripts.
This article is valid for upgrades from Digital Access 6.0.0 and above to 6.0.5 or above.
For upgrades from 6.0.5 to 6.0.6 and above, follow:
Upgrade Digital Access component from 6.0.5 or above
Prerequisites
- Make sure that you have the correct resources available (memory, CPU and hard disk) as per the requirements on the new machines.
- Install docker and xmlstarlet. The upgrade/migrate script installs docker and xmlstarlet if they are not already installed, but for an offline upgrade (no internet connection on the machine), install the latest versions of docker and xmlstarlet before running the migration steps.
- The following ports must be open to traffic to and from each Docker host participating on an overlay network (a firewall example follows this list):
- TCP port 2377 for cluster management communications.
- TCP and UDP port 7946 for communication among nodes.
- UDP port 4789 for overlay network traffic.
- For High Availability setup only:
- Make sure you have a similar set of machines, since the placement of services will be the same as on the existing setup. For example, if you have two appliances in the High Availability setup, you must have two new machines to migrate the setup.
- Identify the nodes, as the new setup must have an equal number of machines. You must create a mapping of machines from the old setup to the new setup.
In a docker swarm deployment, one machine is the manager node (the node on which the administration service runs) and the other nodes are worker nodes.
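As a sketch, assuming the hosts use firewalld, the ports could be opened as shown below; adapt the commands to your firewall (for example ufw or iptables):
Open swarm ports
sudo firewall-cmd --permanent --add-port=2377/tcp
sudo firewall-cmd --permanent --add-port=7946/tcp
sudo firewall-cmd --permanent --add-port=7946/udp
sudo firewall-cmd --permanent --add-port=4789/udp
sudo firewall-cmd --reload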
- Steps on existing appliance/setup
Copy upgrade.tgz to all nodes, and extract the .tgz file.
Extract
tar -xzf upgrade.tgz
- Run upgrade.sh to migrate files/configuration from the existing setup. It will create a .tgz file at upgrade/scripts/da_migrate_6.0.x.xxxxx.tgz. Copy this file to the corresponding new machine (see the copy example after these steps).
Run the below commands with --manager on the manager node and --worker on the worker nodes:
Run upgrade script
sudo bash upgrade/scripts/upgrade.sh --manager --export_da   (on node running administration-service)
sudo bash upgrade/scripts/upgrade.sh --worker --export_da   (on all other worker nodes)
After running the commands above you will be asked: "Do you wish to stop the existing services? [y/n]". It is recommended to select y for yes. The same configuration and database settings will be copied over to the new setup, and the new instance can also connect to the same database if the database settings and other configurations are not modified before starting the services.
If you select n for no, the services on the older existing machines will not be stopped.
The system will now create a dump of the locally running PostgreSQL database. Note that the database dump is only created on the node running the administration service.
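As a sketch, you can verify that the migration archive was created and copy it to the mapped new machine, for example with scp; the user name, IP address and target path below are placeholders:
Verify and copy migration archive
ls -lh upgrade/scripts/da_migrate_*.tgz
scp upgrade/scripts/da_migrate_6.0.x.xxxxx.tgz <user>@<new machine IP>:/tmp/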
- Steps on the new setup
Copy upgrade.tgz to all nodes, and extract the .tgz file.
Extract
tar -xzf upgrade.tgz
Edit the configuration files. (Only applicable for High Availability or distributed setup)
Edit configuration files
Navigate to the docker-compose folder (<path to upgrade folder>/docker-compose) and edit these files:
- docker-compose.yml
- network.yml
- versiontag.yml
docker-compose.yml
For each service, add one section in the docker-compose.yml file.
Change the values for the following keys:
- Service name
- Hostname
- Constraints
For example, if you want to deploy two policy services on two nodes you will have two configuration blocks as shown in the example below.
policy:
  # configure image tag from versiontag.yaml
  hostname: policy
  deploy:
    mode: replicated
    replicas: 1
    placement:
      constraints:
        # If you need to set constraints using node name
        # - node.hostname == <node name>
        # use node label
        [node.labels.da-policy-service == true]
    resources:
      limits:
        cpus: "0.50"
        memory: 512M
      reservations:
        cpus: "0.10"
        memory: 128M
  volumes:
    - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
    - /etc/localtime:/etc/localtime
    - /etc/timezone:/etc/timezone
  logging:
    options:
      max-size: 10m
policy1:
  # configure image tag from versiontag.yaml
  hostname: policy1
  deploy:
    mode: replicated
    replicas: 1
    placement:
      constraints:
        # If you need to set constraints using node name
        # - node.hostname == <node name>
        # use node label
        [node.labels.da-policy-service1 == true]
    resources:
      limits:
        cpus: "0.50"
        memory: 512M
      reservations:
        cpus: "0.10"
        memory: 128M
  volumes:
    - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
    - /etc/localtime:/etc/localtime
    - /etc/timezone:/etc/timezone
  logging:
    options:
      max-size: 10m
network.yml
For each service, add network configuration in the network.yml file. If you want to deploy two policy services on two nodes you will have two blocks of configuration as shown below.
- Service name: The service name must be identical to the one used in docker-compose.yml
Example:
policy:
  ports:
    - target: 4443
      published: 4443
      mode: host
  networks:
    - da-overlay
policy1:
  ports:
    - target: 4443
      published: 4443
      mode: host
  networks:
    - da-overlay
Also, make sure all the listeners that are used for access point load balancing are exposed in network.yml (see the sketch below).
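As a sketch, an additional access point listener could be exposed like this; the service name access-point and the listener port 443 are placeholders, use the service names and ports from your own setup:
access-point:
  ports:
    - target: 4443
      published: 4443
      mode: host
    # one target/published pair per additional listener used for load balancing
    - target: 443
      published: 443
      mode: host
  networks:
    - da-overlay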
versiontag.yml
Add an entry for each service in this file.
For example, if you have two policy services with the names policy and policy1, you will have one entry for each of them.
Example:
policy:
  image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx
policy1:
  image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx
- Place the da_migrate_6.0.x.xxxxx.tgz file inside the scripts folder, upgrade/scripts/.
- Run the upgrade script to import files/configuration from the older setup and upgrade to the latest version.
- Verify the Digital Access tag in the versiontag.yml file (<path to upgrade folder>/docker-compose/versiontag.yml). The same tag will be installed as part of the upgrade. If the tag is not correct, update it.
Although the upgrade script installs docker and pulls the images from the repository, it is recommended to install docker and pull the images before running the upgrade. This reduces the script run time and the downtime of the system.
- Note: In case of an offline upgrade, load the Digital Access docker images onto each machine (see the sketch after these steps). Also, if you are using the internal postgres, load the postgres:9.6-alpine image on the manager node.
Pull images
sudo bash upgrade/scripts/pull_image.sh
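For the offline case, a minimal sketch is to export the images with docker save on a machine that has internet access and load them with docker load on the target nodes; the archive file names below are placeholders:
Offline image load (example)
# On a machine with internet access:
docker save -o policy-service.tar nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx
docker save -o postgres.tar postgres:9.6-alpine
# On the target node (the postgres image only on the manager node, when using internal postgres):
sudo docker load -i policy-service.tar
sudo docker load -i postgres.tar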
Run the import command:
On the manager node
Run import command on manager node
sudo bash upgrade/scripts/upgrade.sh --manager --import_da   (on node running administration-service)
- To set up Docker Swarm, provide your manager node host IP address.
If you are using an external database, select No to skip the postgres installation.
(Only applicable for High Availability or distributed setup)
The script prints a token in the output. This token will be used while setting up worker nodes.
Example:
docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377
Here the token is:
SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq
and the ip:port is 192.168.253.139:2377.
If you cannot find the token in the upgrade script output on the manager node, get the cluster join token by running this command:
Get cluster join token (Only applicable for High Availability or distributed setup)
sudo docker swarm join-token worker
On worker nodes
Run import command on worker nodes (Only applicable for High Availability or distributed setup)
sudo bash upgrade/scripts/upgrade.sh --worker --import_da --token <token value> --ip-port <ip:port>   (on all other worker nodes)
- Follow the on-screen messages and complete the upgrade. Check for any errors in the logs. During the upgrade, the script extracts the da_migrate_6.0.x.xxxxx.tgz file and creates the same directory structure as on the older setup.
- On the manager node, it will install the PostgreSQL database as a docker container and import the database dump from the older machine.
- After the scripts are executed, the .tgz file will still be there. Delete it once it is confirmed that the upgrade process has completed successfully.
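For example, assuming the archive is still in the scripts folder, it can be removed like this:
sudo rm upgrade/scripts/da_migrate_6.0.x.xxxxx.tgz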
Verify and identify nodes
Verify if all nodes are part of the cluster by running this command:
Verify if all nodes are part of cluster
sudo docker node ls
Example:
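The output of sudo docker node ls looks roughly like the illustrative sketch below; the IDs and hostnames will differ in your environment:
ID                            HOSTNAME    STATUS   AVAILABILITY   MANAGER STATUS
h9u7iiifi6sr85zyszu8xo54l *   <manager>   Ready    Active         Leader
<worker node ID>              <worker>    Ready    Active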
Identify the node IDs for the manager and worker nodes where the services will be distributed.
Identify nodes
sudo docker node inspect --format '{{ .Status }}' h9u7iiifi6sr85zyszu8xo54l
Output from this command:
{ready 192.168.86.129}
The IP address will help to identify the Digital Access node.
Add new labels for each service
Add a label for each service that you want to run. Choose any name based on your requirements, but make sure the labels are in accordance with what you have defined in the constraints section in the docker-compose.yml file.
Use these commands to add a label for each service:
Commands to add labels
sudo docker node update --label-add da-policy-service=true <manager node ID>
sudo docker node update --label-add da-authentication-service=true <manager node ID>
sudo docker node update --label-add da-administration-service=true <manager node ID>
sudo docker node update --label-add da-access-point=true <manager node ID>
sudo docker node update --label-add da-distribution-service=true <manager node ID>
sudo docker node update --label-add da-policy-service1=true <worker node ID>
sudo docker node update --label-add da-access-point1=true <worker node ID>
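To double-check the labels on a node, you can print them with the docker node inspect format option; this is a general Docker command shown here as an optional verification step:
sudo docker node inspect --format '{{ .Spec.Labels }}' <manager node ID>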
Deploy your Digital Access stack using this command.
Verify that the required images are available on the nodes. Then run the start-all.sh script on the manager node.
Deploy Digital Access stack
sudo bash /opt/nexus/scripts/start-all.sh
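To verify that the services have started, you can list the swarm services and check the replica counts; this is a general Docker command, not part of the Digital Access scripts:
sudo docker service ls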
Do updates in Digital Access Admin
- Log in to Digital Access Admin. If you use an internal database for configurations, provide the host machine IP address to connect to the databases (HAG, Oath, Oauth).
- Publish the configurations.
- Change the internal host and port for each added service according to the docker-compose.yml and network.yml files.
- Go to Manage System > Distribution Services and select "Listen on All Interfaces" for the ports that are to be exposed.
- Go to Manage System > Access Points and provide the IP address instead of the service name. Also, enable the "Listen on all Interfaces" option.
- If you want to enable the XPI and SOAP services, the exposed port IP should be 0.0.0.0 in Digital Access Admin.
- If there is a host entry for DNS on the appliance, you need to provide an additional host entry for the same in the docker-compose file (see the sketch after this list).
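A minimal sketch of such an entry in docker-compose.yml, assuming the compose extra_hosts option is used; the service name, hostname and IP address are placeholders:
policy:
  # ...existing service configuration...
  extra_hosts:
    - "<hostname>:<IP address>"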
Redeploy the services using this command on the manager node.
Restart
sudo bash /opt/nexus/scripts/start-all.sh
- The following ports shall be open to traffic to and from each Docker host participating on an overlay network:
- TCP port 2377 for cluster management communications.
- TCP and UDP port 7946 for communication among nodes.
- UDP port 4789 for overlay network traffic.
- Make sure there is a backup/snapshot of the machine before starting the upgrade.
- Copy upgrade.tgz to the manager node (node where administration service is running) and all worker (other) nodes.
Extract the tar file on all nodes.
Extract
tar -xzf upgrade.tgz
- Download DA docker images on all machines (OPTIONAL)
Run the script pull_image.sh on all machines (in case of HA or distributed mode). This script downloads the docker images for all DA services with the version mentioned in versiontag.yml, which helps reduce the downtime during the upgrade. If you instead let the upgrade script download the images, it will only download the images required on each node, based on the configuration in the docker-compose file. For example, if you have a setup of two VMs/appliances where one runs the admin, policy, authentication, distribution and access point services and the other runs the policy and authentication services, then the upgrade script downloads only the required images on the respective machines.
Pull images
sudo bash upgrade/scripts/pull_image.sh
Before starting the upgrade, it is important to edit the configuration files based on your setup. This is required in case of a high availability or a distributed mode setup.
Only the configuration files on the manager node (the machine running the admin service) need to be edited in the way described below.
Navigate to the docker-compose folder (/upgrade/docker-compose) and edit these files:
- docker-compose.yml
- network.yml
- versiontag.yml
docker-compose.yml
For each service, add one section in the docker-compose.yml file.
Below is an example of how to set up the policy service on two different nodes; policy and policy1 are the service names of these services. Similar changes will also have to be replicated for the other services.
It is important to get the constraint node labels correct. Node management in swarm is done by assigning labels (metadata) to the nodes.
policy:
  hostname: policy
  deploy:
    mode: replicated
    replicas: 1
    placement:
      constraints: [node.labels.da-policy-service == true]
    resources:
      limits:
        cpus: "0.50"
        memory: 512M
      reservations:
        cpus: "0.10"
        memory: 128M
  volumes:
    - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
    - /etc/localtime:/etc/localtime
    - /etc/timezone:/etc/timezone
  logging:
    options:
      max-size: 10m
policy1:
  hostname: policy1
  deploy:
    mode: replicated
    replicas: 1
    placement:
      constraints: [node.labels.da-policy-service1 == true]
    resources:
      limits:
        cpus: "0.50"
        memory: 512M
      reservations:
        cpus: "0.10"
        memory: 128M
  volumes:
    - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
    - /etc/localtime:/etc/localtime
    - /etc/timezone:/etc/timezone
  logging:
    options:
      max-size: 10m
network.yml
For each service, add a network configuration in the network.yml file. This file specifies the network used and the ports exposed by each service.
Below is an example of the network configuration for the two policy services defined in the docker-compose.yml file above; policy and policy1 are the service names of these services.
Similar changes will also have to be replicated for the other services.
policy:
  ports:
    - target: 4443
      published: 4443
      mode: host
  networks:
    - da-overlay
policy1:
  ports:
    - target: 4443
      published: 4443
      mode: host
  networks:
    - da-overlay
Also, make sure all the listeners that are used for access point load balancing are exposed in network.yml.
versiontag.yml
Also update versiontag.yml with the docker image tags of all services. This file determines which version of the DA images will be downloaded.
Below is an example for the policy services policy and policy1. Similar entries will also be present for the other services.
policy:
  image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.1.x.xxxxx
policy1:
  image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.1.x.xxxxx
To upgrade the manager node:
Run the upgrade script with this command line option:
Run upgrade script
sudo bash upgrade/scripts/upgrade.sh --manager
- Provide this machine's IP address while setting up the swarm. This makes the machine the manager node in the swarm.
Say No to the postgres installation in case you want to use an external database.
Get the cluster join token by running this command on manager node:
Get cluster join token
sudo docker swarm join-token worker
The output of the above command will be:
docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377
Keep a note of the token, IP and port from the above output. These are used by the worker nodes to join this swarm (as workers).
<token> = SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq
<ip:port> = 192.168.253.139:2377
Run the upgrade script on each worker node with the command line options below, replacing the token and ip:port values with those from the step above.
Run upgrade script on worker node
sudo bash upgrade/scripts/upgrade.sh --worker --token <token> --ip-port <ip:port>
Once the upgrade is done on all worker nodes, go back to the manager node and verify that all nodes are part of the cluster by running the command below. The nodes should show a "Ready" status and "Active" availability. The node whose Manager Status shows the value "Leader" is the manager node; the rest are worker nodes.
Verify if all nodes are part of cluster
sudo docker node ls
Note down the node IDs for all nodes, as they will be required in the following steps.
Add label metadata for each service that you are running. Make sure these labels match the constraints section [node.label] in the docker-compose.yml file.
Labels provide a way of managing services on multiple nodes.
Below is an example of updating the node labels for one manager node and one worker node. Repeat this for additional worker nodes.
Commands to add labels
sudo docker node update --label-add da-policy-service=true <manager_node_ID>
sudo docker node update --label-add da-authentication-service=true <manager_node_ID>
sudo docker node update --label-add da-administration-service=true <manager_node_ID>
sudo docker node update --label-add da-access-point=true <manager_node_ID>
sudo docker node update --label-add da-distribution-service=true <manager_node_ID>
sudo docker node update --label-add da-policy-service1=true <worker_node_ID>
sudo docker node update --label-add da-access-point1=true <worker_node_ID>
Run the command below to inspect each node and check which labels have been added. This is to make sure the correct labels for the services are added on the different nodes.
Inspect node labels
sudo docker node inspect <manager_node_ID>   (do the same for the worker nodes as well)
The Labels section in the output shows the services that were labelled in the step above. When you run start-all.sh, swarm reads the labels and accordingly brings up the docker containers for those services.
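An illustrative excerpt of the inspect output is sketched below; the exact labels and values depend on your setup:
"Spec": {
    "Labels": {
        "da-access-point": "true",
        "da-administration-service": "true",
        "da-authentication-service": "true",
        "da-distribution-service": "true",
        "da-policy-service": "true"
    },
    "Role": "manager",
    "Availability": "active"
},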
These labels indicate which services will run on which nodes.
- Make sure the password for the reporting database is the same as previously set in the administration service's customize.conf file.
Now run the start-all.sh script on the manager node to start the configured services in all nodes.
Deploy Digital Access stack
sudo bash /opt/nexus/scripts/start-all.sh
- Log in to Digital Access Admin. If you use an internal database for configurations, provide the host machine IP address to connect to the databases (HAG, Oath, Oauth).
- Also, change the Internal host for all services to the respective service name instead of an IP address/127.0.0.1. These service names should match the hostname values in the docker-compose.yml file.
- Change the access point ports (Portal port: 10443).
- Publish the configurations.
- Change the "Internal Host" and port for each added service according to the docker-compose.yml and network.yml files.
- If there is any host entry for DNS on the appliance, provide an additional host entry for the same in the docker-compose file.
- If you want to enable the XPI and SOAP services, the exposed port IP should be 0.0.0.0 in Digital Access Admin.
Restart the services using this command on the manager node:
Deploy Digital Access stack
sudo bash /opt/nexus/scripts/start-all.sh